CN116486347B - Method and device for monitoring and capturing cluster fog based on scale-invariant feature transform image recognition

Method and device for monitoring and capturing cluster fog based on scale-invariant feature transform image recognition

Info

Publication number
CN116486347B
CN116486347B (application CN202310144922.9A; also published as CN116486347A)
Authority
CN
China
Prior art keywords
image information
visibility
fog
easy
neural network
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202310144922.9A
Other languages
Chinese (zh)
Other versions
CN116486347A (en)
Inventor
元保军
胡雪瑞
杜晓宾
马晓岩
叶冠宁
王慧中
马建红
许芃
卫权岗
田昕
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Henan Meteorological Observation Data Center Henan Meteorological Archives
Original Assignee
Henan Meteorological Observation Data Center Henan Meteorological Archives
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Henan Meteorological Observation Data Center (Henan Meteorological Archives)
Priority: CN202310144922.9A
Publication of CN116486347A
Application granted
Publication of CN116486347B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/50 Context or environment of the image
    • G06V 20/52 Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G06V 20/54 Surveillance or monitoring of activities, e.g. for recognising suspicious objects, of traffic, e.g. cars on the road, trains or boats
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/40 Extraction of image or video features
    • G06V 10/46 Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V 10/462 Salient features, e.g. scale invariant feature transforms [SIFT]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/74 Image or video pattern matching; Proximity measures in feature spaces
    • G06V 10/75 Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V 10/757 Matching configurations of points or features
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/77 Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V 10/7715 Feature extraction, e.g. by transforming the feature space, e.g. multi-dimensional scaling [MDS]; Mappings, e.g. subspace methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/77 Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V 10/80 Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
    • G06V 10/806 Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level of extracted features
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/82 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02A TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A 90/00 Technologies having an indirect contribution to adaptation to climate change
    • Y02A 90/10 Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Multimedia (AREA)
  • General Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • Databases & Information Systems (AREA)
  • Medical Informatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Image Processing (AREA)

Abstract

The application provides a method and a device for monitoring and capturing cluster fog based on scale-invariant feature transform image recognition, belonging to the technical field of cluster fog monitoring. The method comprises the following steps: acquiring an image information set and fusing it into panoramic image information for each cluster fog-prone point using a scale-invariant feature transform algorithm; performing feature processing on the panoramic image information with a VisNet convolutional neural network to obtain the visibility features in the panoramic image information; taking the visibility features and meteorological data as input to a BP neural network prediction model and outputting, through the computation of the hidden layer, a visibility value for the cluster fog-prone point; and matching the visibility value to a visibility level for the cluster fog-prone point and, based on that level, outputting corresponding alert information at the cluster fog-prone point, thereby completing the monitoring and capturing of cluster fog.

Description

Method and device for monitoring and capturing cluster fog based on scale-invariant feature transform image recognition
Technical Field
The application relates to the technical field of cluster fog monitoring, and in particular to a cluster fog monitoring and capturing method and device based on scale-invariant feature transform image recognition.
Background
Traffic accidents caused by low-visibility fog are a major factor affecting highway capacity and traffic safety, and forecasting dense fog with extremely low visibility is a difficult problem in meteorological work. How to make disaster-weather monitoring and early warning play their fullest role in expressway operation, security and emergency response is a problem that urgently needs to be solved.
Improving the ability to observe low-visibility weather, increasing the spatio-temporal density of meteorological observations, and effectively safeguarding traffic safety under severe weather conditions, so as to obtain the greatest economic and social benefit, has become an important and urgent task in strengthening current disaster prevention and mitigation work.
With the development of imaging technology in recent years, video-image methods have gradually formed a new type of visibility observation. In such a method, a camera photographs a black target object, and the luminance ratio between the target and its background is derived from the digital image, from which the atmospheric visibility is calculated. Related studies on recognizing visibility from video images have been carried out by Filipe C., Dai Chongda and others. It is not difficult to see that, owing to the non-blackbody characteristics of the target and the non-uniformity of sky brightness, the observations of the video-image method may deviate to some extent, and there currently exists the problem of visibility analysis errors caused by one-sided reliance on either camera images or meteorological data.
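For reference, the physical relation behind this brightness-ratio approach is the classical Koschmieder law (stated here for context; the patent does not spell it out). The apparent contrast C of a black target observed at distance d decays with the atmospheric extinction coefficient σ as

    C = C0 · e^(−σd)

and, adopting the conventional 5% contrast threshold, the meteorological visibility follows as

    V = −ln(0.05) / σ ≈ 3.912 / σ.

Measuring the target-to-background luminance ratio in the digital image therefore yields an estimate of σ, from which the visibility is computed.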
How to overcome the above technical problems and drawbacks has therefore become an important problem to be solved.
Disclosure of Invention
To solve the problem of visibility analysis errors caused by one-sided deviation of camera images or meteorological data, the application provides a cluster fog monitoring and capturing method and device based on scale-invariant feature transform image recognition, adopting the following technical scheme:
In a first aspect, the application provides a cluster fog monitoring and capturing method based on scale-invariant feature transform image recognition, comprising the following steps:
S1, acquiring an image information set, wherein the image information set contains images of each cluster fog-prone point taken from different angles; the image information set is captured by a monitoring system;
S2, fusing the acquired image information set into panoramic image information for each cluster fog-prone point using a scale-invariant feature transform algorithm;
S3, performing feature processing on the panoramic image information with a VisNet convolutional neural network to obtain the visibility features in the panoramic image information;
S4, taking the visibility features and meteorological data as input to a BP neural network prediction model and outputting, through the computation of the hidden layer, a visibility value for the cluster fog-prone point;
S5, matching the visibility value to a visibility level for the cluster fog-prone point and, based on the visibility level, outputting corresponding alert information at the cluster fog-prone point, thereby completing the monitoring and capturing of cluster fog.
Further, the monitoring system in step S1 comprises: a plurality of cameras arranged at each cluster fog-prone point, each camera forming a wireless network with the other cameras through the Internet of Things, with one camera serving as the sink node; all cameras take photographs at regular intervals and transmit them to the sink node over the Internet of Things, and the sink node compresses the images and uploads them to a server.
Further, arranging a plurality of cameras at each cluster fog-prone point comprises: installing 6 cameras at each cluster fog-prone point, the 6 cameras forming one group, with the sink node being one of the 6 cameras.
Further, in step S2 the acquired image information is fused into panoramic image information for the cluster fog-prone point using a scale-invariant feature transform algorithm, specifically as follows:
(1) Loading the image information set of each cluster fog-prone point and extracting the feature points in the image information set;
(2) Matching the feature points in the image information set, which includes displaying the matched feature points of the image information set and extracting them;
(3) Fitting a geometric transformation to the image information set according to the matched feature points: one image in the set is taken as the template image and the others as transformed images; the spatial transformation range of each transformed image is computed relative to the template image, and the transformation is applied to it;
(4) Fusing the transformed images to form the panoramic image information.
Further, performing feature processing on the panoramic image information with the VisNet convolutional neural network in step S3 comprises:
(1) Applying fast-Fourier-transform filtering to the panoramic image to obtain the basic contour features of its main objects;
(2) Applying spectral filtering to the panoramic image to obtain its visibility-contrast identification regions;
(3) Taking the fast-Fourier-transform filtering result, the spectral filtering result, and the panoramic image as inputs for training the convolutional neural network, so as to obtain the feature information of the panoramic image.
Further, the meteorological data in step S4 comprise: barometric pressure, temperature, relative humidity, precipitation, wind speed, wind direction, and visibility.
Further, the training of the BP neural network prediction model in step S4 proceeds as follows:
(1) Acquiring cluster fog video-stream segments from the historical video streams of the monitoring system;
(2) Splitting the cluster fog video-stream segments into frames, removing unusable images, and screening registered images by analyzing and sharpening image pixels and background fields to obtain first image information; to account for differences in ambient light between day and night, separate sample libraries are generated for the different conditions;
(3) Obtaining, with the scale-invariant feature transform algorithm, panoramic image information from the first image information of the cluster fog-prone points taken at different angles, as second image information;
(4) Acquiring the meteorological data of the cluster fog-prone point for the same times, as first meteorological data information;
(5) Matching the second image information with the first meteorological data information by time scale and spatial region, taking the matched result as the input of the BP neural network prediction model, training the model, and tuning its parameters based on the difference between the predicted and the actual visibility values.
In a second aspect, the application also provides a cluster fog monitoring and capturing device based on scale-invariant feature transform image recognition, comprising: an image information acquisition module, a panoramic image information acquisition module, a panoramic image information processing module, a visibility value acquisition module, and a visibility level acquisition module;
wherein the image information acquisition module is configured to acquire an image information set, the image information set containing images of each cluster fog-prone point taken from different angles, the image information set being captured by a monitoring system;
the panoramic image information acquisition module is configured to fuse the acquired image information set into panoramic image information for each cluster fog-prone point using a scale-invariant feature transform algorithm;
the panoramic image information processing module is configured to perform feature processing on the panoramic image information with a VisNet convolutional neural network to obtain the visibility features in the panoramic image information;
the visibility value acquisition module is configured to take the visibility features and meteorological data as input to a BP neural network prediction model and to output, through the computation of the hidden layer, a visibility value for the cluster fog-prone point;
the visibility level acquisition module is configured to match the visibility value to a visibility level for the cluster fog-prone point and, based on the visibility level, to output corresponding alert information at the cluster fog-prone point, completing the monitoring and capturing of cluster fog.
In a third aspect, the present application provides an electronic device, comprising:
one or more processors; a memory; and one or more computer programs, wherein the one or more computer programs are stored in the memory and comprise instructions which, when executed by the device, cause the device to perform the method of the first aspect.
In a fourth aspect, the application provides a computer-readable storage medium storing a computer program which, when run on a computer, causes the computer to perform the method according to the first aspect.
In a fifth aspect, the application provides a computer program which, when executed by a computer, performs the method of the first aspect.
In one possible design, the program of the fifth aspect may be stored wholly or partly on a storage medium packaged together with the processor, or wholly or partly in a memory not packaged with the processor.
Compared with the prior art, the embodiments of the application have the following main beneficial effects:
1. The application takes the panoramic image information and the meteorological data of the cluster fog-prone points together as input to the BP neural network prediction model. On the one hand this improves recognition accuracy; on the other hand it adds redundancy to the system, so that under special circumstances, for example when a single input source fails (such as camera damage or maintenance), the model can still keep the visibility recognition function running continuously.
2. The application uses video from the historical monitoring system as training data for the BP neural network prediction model, providing it with abundant data support and improving the accuracy with which the model predicts visibility at the cluster fog-prone points.
3. The application monitors and captures the cluster fog-prone points in real time, so that when sudden cluster fog causes visibility to drop abruptly, travellers receive timely early-warning information and dangerous situations are reduced.
4. The image-processing-based visibility detection technique has good measurement accuracy and high flexibility, is suitable for measuring visibility under different weather conditions, can make full use of the existing highway monitoring system, improves the utilization of traffic-facility resources, and reduces the cost of visibility detection.
5. Through the monitoring system, the application can increase the density of visibility detection in the expressway environment and improve the capability for monitoring traffic weather; at the same time, by obtaining road visibility information from traffic images, it can provide drivers and managers with timely and reliable traffic visibility information, assist management decisions, and provide information support for expressway control and linkage control.
Drawings
FIG. 1 is a diagram of an exemplary system architecture in which embodiments of the present application may be applied;
FIG. 2 is a flow chart of a method of an embodiment of the present application;
FIG. 3 is a flow chart of a scale invariant feature transform algorithm of an embodiment of the present application;
FIG. 4 is a flow chart of processing panoramic image features by a VisNet convolutional neural network in accordance with an embodiment of the present application;
FIG. 5 is a schematic diagram of the basic structure of a VisNet convolutional neural network in accordance with an embodiment of the present application;
FIG. 6 is a schematic diagram of the neural network model built in the BP neural network prediction model by combining the extracted image features with meteorological data, according to an embodiment of the application;
FIG. 7 is a flow chart of the training of the BP neural network prediction model according to an embodiment of the application;
FIG. 8 is a schematic diagram of night imaging capability effect of an embodiment of the present application;
FIG. 9 is a schematic illustration of an apparatus according to an embodiment of the application;
FIG. 10 is a schematic diagram of a computer device according to an embodiment of the application.
Detailed Description
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs; the terminology used in the description of the application herein is for the purpose of describing particular embodiments only and is not intended to limit the application; the terms "comprising" and "having" and any variations thereof in the description of the application, the claims and the above description of the drawings are intended to cover a non-exclusive inclusion. The terms "first", "second" and the like in the description, the claims or the above figures are used to distinguish between different objects and not necessarily to describe a sequential or chronological order.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment may be included in at least one embodiment of the application. The appearances of such phrases in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. Those of skill in the art will explicitly and implicitly appreciate that the embodiments described herein may be combined with other embodiments.
In order to make the person skilled in the art better understand the solution of the present application, the technical solution of the embodiment of the present application will be clearly and completely described below with reference to the accompanying drawings.
As shown in fig. 1, a system architecture 100 may include terminal devices 101, 102, 103, a network 104, and a server 105. The network 104 is used as a medium to provide communication links between the terminal devices 101, 102, 103 and the server 105. The network 104 may include various connection types, such as wired, wireless communication links, or fiber optic cables, among others.
The user may interact with the server 105 via the network 104 using the terminal devices 101, 102, 103 to receive or send messages or the like. Various communication client applications, such as a web browser application, a shopping class application, a search class application, an instant messaging tool, a mailbox client, social platform software, etc., may be installed on the terminal devices 101, 102, 103.
The terminal devices 101, 102, 103 may be various electronic devices having a display screen and supporting web browsing, including but not limited to smartphones, tablet computers, e-book readers, MP3 and MP4 players, laptop computers, desktop computers, and the like.
The server 105 may be a server providing various services, such as a background server providing support for pages displayed on the terminal devices 101, 102, 103.
It should be noted that the cluster fog monitoring and capturing method based on scale-invariant feature transform image recognition provided by the embodiments of the application is generally executed by a server/terminal device, and correspondingly, the cluster fog monitoring and capturing device based on scale-invariant feature transform image recognition is generally arranged in the server/terminal device.
It should be understood that the number of terminal devices, networks and servers in fig. 1 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation.
Explanation: visNet is a convolutional neural network specifically designed and developed for image recognition visibility.
With continued reference to fig. 2, a flow chart of the cluster fog monitoring and capturing method based on scale-invariant feature transform image recognition of the present application is shown; the method comprises the following steps:
S1, acquiring an image information set, wherein the image information set contains images of each cluster fog-prone point taken from different angles; the image information set is captured by a monitoring system.
The monitoring system in step S1 comprises: a plurality of cameras arranged at each cluster fog-prone point, each camera forming a wireless network with the other cameras through the Internet of Things, with one camera serving as the sink node; all cameras take photographs at regular intervals and transmit them to the sink node over the Internet of Things, and the sink node compresses the images and uploads them to a server.
Arranging a plurality of cameras at each cluster fog-prone point comprises: installing 6 cameras at each cluster fog-prone point, the 6 cameras forming one group, with the sink node being one of the 6 cameras; a sketch of the sink-node behaviour follows below.
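As a rough illustration of the deployment just described, the following Python sketch shows how the sink-node camera might periodically collect a frame from each camera in its group, compress it, and upload it to the server. The capture interval, server endpoint, and helper names are assumptions made for illustration; the patent does not specify an implementation.

```python
import time
import cv2        # OpenCV, used here for camera capture and JPEG compression
import requests   # assumed HTTP transport to the server

CAPTURE_INTERVAL_S = 600                     # capture period (assumed value)
SERVER_URL = "http://server.example/upload"  # placeholder endpoint

def capture_frame(source):
    """Grab one frame from a camera source (device index or RTSP URL)."""
    cap = cv2.VideoCapture(source)
    ok, frame = cap.read()
    cap.release()
    return frame if ok else None

def sink_node_loop(camera_sources):
    """Collect a frame from each camera in the group, compress it as JPEG,
    and upload it to the server, repeating at a fixed interval (step S1)."""
    while True:
        for cam_id, src in camera_sources.items():
            frame = capture_frame(src)
            if frame is None:
                continue  # skip cameras that are offline or under maintenance
            ok, jpeg = cv2.imencode(".jpg", frame, [cv2.IMWRITE_JPEG_QUALITY, 80])
            if ok:
                requests.post(SERVER_URL,
                              files={"image": jpeg.tobytes()},
                              data={"camera": cam_id, "ts": time.time()})
        time.sleep(CAPTURE_INTERVAL_S)
```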
In one possible implementation, image data from existing traffic monitoring cameras are combined with data from automatic highway weather stations, intelligent grid forecasts, satellite data and the like, to assist in acquiring image information of the cluster fog-prone points for channels such as traffic services, weather apps, and weather WeChat.
S2, fusing the acquired image information set into panoramic image information for each cluster fog-prone point using a scale-invariant feature transform algorithm.
For the fusion of the acquired image information into panoramic image information of the cluster fog-prone point with the scale-invariant feature transform algorithm in step S2, please refer to fig. 3:
Step 201, loading the image information set of each cluster fog-prone point and extracting the feature points in the image information set;
Step 202, matching the feature points in the image information set, which includes displaying the matched feature points of the image information set and extracting them;
Step 203, fitting a geometric transformation to the image information set according to the matched feature points: one image in the set is taken as the template image and the others as transformed images; the spatial transformation range of each transformed image is computed relative to the template image, and the transformation is applied to it;
Step 204, fusing the transformed images to form the panoramic image information.
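Steps 201-204 correspond to a standard SIFT stitching pipeline. The following OpenCV sketch shows one plausible realization for a pair of images; the patent does not publish its implementation, and the ratio-test threshold, RANSAC tolerance, and canvas size here are illustrative choices.

```python
import cv2
import numpy as np

def stitch_pair(template, moving):
    """Warp `moving` onto the plane of `template` via SIFT matches
    (steps 201-203) and fuse the two on one canvas (step 204)."""
    sift = cv2.SIFT_create()
    g1 = cv2.cvtColor(template, cv2.COLOR_BGR2GRAY)
    g2 = cv2.cvtColor(moving, cv2.COLOR_BGR2GRAY)
    kp1, des1 = sift.detectAndCompute(g1, None)  # step 201: extract feature points
    kp2, des2 = sift.detectAndCompute(g2, None)

    # Step 202: match feature points, keeping good matches via Lowe's ratio test.
    matches = cv2.BFMatcher().knnMatch(des2, des1, k=2)
    good = [m for m, n in matches if m.distance < 0.75 * n.distance]

    # Step 203: fit the geometric transform from `moving` to `template` with RANSAC.
    src = np.float32([kp2[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp1[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

    # Step 204: fuse on a canvas wide enough for both images (naive overlay).
    h, w = template.shape[:2]
    canvas = cv2.warpPerspective(moving, H, (w * 2, h))
    canvas[:h, :w] = template
    return canvas
```

For the 6-camera groups described above, the same pairwise step can be applied repeatedly, taking the growing panorama as the template for the next view.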
S3, performing feature processing on the panoramic image information with a VisNet convolutional neural network to obtain the visibility features in the panoramic image information.
For the feature processing of the panoramic image information with the VisNet convolutional neural network in step S3, please refer to fig. 4:
Step 301, applying fast-Fourier-transform filtering to the acquired panoramic image to obtain the basic contour features of its main objects;
Step 302, applying spectral filtering to the panoramic image to obtain its visibility-contrast identification regions;
Step 303, taking the fast-Fourier-transform filtering result, the spectral filtering result, and the panoramic image as inputs for training the convolutional neural network, so as to obtain the feature information of the panoramic image.
In one possible implementation, the basic structure of the VisNet convolutional neural network is shown in fig. 5.
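The two filtering branches of steps 301 and 302 can be sketched as follows. The high-pass radius and the use of a log-magnitude spectrum for the "spectral filtering" branch are assumptions made for illustration; the patent does not specify the exact filters.

```python
import numpy as np
import cv2

def fft_highpass(gray, radius=30):
    """Step 301 (sketch): FFT filtering that suppresses low frequencies,
    leaving the contours of the main objects."""
    f = np.fft.fftshift(np.fft.fft2(gray.astype(np.float32)))
    rows, cols = gray.shape
    cy, cx = rows // 2, cols // 2
    y, x = np.ogrid[:rows, :cols]
    mask = (y - cy) ** 2 + (x - cx) ** 2 > radius ** 2  # keep high frequencies only
    filtered = np.fft.ifft2(np.fft.ifftshift(f * mask))
    return cv2.normalize(np.abs(filtered), None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)

def log_spectrum(gray):
    """Step 302 (assumed form): log-magnitude spectrum used as a contrast map."""
    f = np.fft.fftshift(np.fft.fft2(gray.astype(np.float32)))
    spec = np.log1p(np.abs(f))
    return cv2.normalize(spec, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)

# Step 303: the three CNN inputs would then be
# [panorama, fft_highpass(panorama), log_spectrum(panorama)].
```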
S4, taking the visibility features and meteorological data as input to a BP neural network prediction model and outputting, through the computation of the hidden layer, a visibility value for the cluster fog-prone point.
In one possible implementation, a schematic diagram of the neural network model that combines the extracted image features with meteorological data in the BP neural network prediction model is shown in fig. 6.
The meteorological data in step S4 include: barometric pressure, temperature, relative humidity, precipitation, wind speed, wind direction, and visibility.
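A minimal sketch of a one-hidden-layer back-propagation network in the shape of fig. 6 is given below: the VisNet visibility features are concatenated with the seven meteorological variables listed above, and a single visibility value is produced through the hidden layer. The layer sizes and learning rate are illustrative assumptions.

```python
import numpy as np

class BPVisibilityModel:
    """Input = VisNet visibility features + 7 meteorological variables
    (pressure, temperature, relative humidity, precipitation, wind speed,
    wind direction, visibility); output = one visibility value."""

    def __init__(self, n_features=128, n_weather=7, n_hidden=64, lr=1e-3):
        n_in = n_features + n_weather
        rng = np.random.default_rng(0)
        self.W1 = rng.normal(0, np.sqrt(2 / n_in), (n_in, n_hidden))
        self.b1 = np.zeros(n_hidden)
        self.W2 = rng.normal(0, np.sqrt(2 / n_hidden), (n_hidden, 1))
        self.b2 = np.zeros(1)
        self.lr = lr

    def forward(self, x):
        self.h = np.tanh(x @ self.W1 + self.b1)  # hidden-layer computation
        return self.h @ self.W2 + self.b2        # predicted visibility value

    def backward(self, x, y_pred, y_true):
        # Squared-error gradient, propagated back through the hidden layer.
        d_out = 2 * (y_pred - y_true) / len(x)
        d_h = (d_out @ self.W2.T) * (1 - self.h ** 2)
        self.W2 -= self.lr * self.h.T @ d_out
        self.b2 -= self.lr * d_out.sum(0)
        self.W1 -= self.lr * x.T @ d_h
        self.b1 -= self.lr * d_h.sum(0)
```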
The training of the BP neural network prediction model in step S4 is shown in fig. 7:
Step 401, acquiring cluster fog video-stream segments from the historical video streams of the monitoring system;
Step 402, splitting the cluster fog video-stream segments into frames, removing unusable images, and screening registered images by analyzing and sharpening image pixels and background fields to obtain first image information; to account for differences in ambient light between day and night, separate sample libraries are generated for the different conditions.
In one possible implementation, existing traffic monitoring cameras are generally equipped with infrared detection and night-imaging capability for monitoring night-time conditions; the effect is shown in fig. 8.
Step 403, obtaining, with the scale-invariant feature transform algorithm, panoramic image information from the first image information of the cluster fog-prone points taken at different angles, and taking it as second image information;
Step 404, acquiring the meteorological data of the cluster fog-prone point for the same times, as first meteorological data information;
Step 405, matching the second image information with the first meteorological data information by time scale and spatial region, taking the matched result as the input of the BP neural network prediction model, training the model, and tuning its parameters based on the difference between the predicted and the actual visibility values.
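Continuing the sketch above, step 405's tuning of parameters from the gap between predicted and actual visibility might look like this (a plain gradient-descent loop; epoch count and logging interval are arbitrary):

```python
import numpy as np

def train_model(model, X, y, epochs=200):
    """X rows are matched [image-feature | weather] vectors (step 405),
    y the observed visibility values at the same times and places."""
    for epoch in range(epochs):
        y_pred = model.forward(X)
        loss = float(np.mean((y_pred - y) ** 2))
        model.backward(X, y_pred, y)  # parameter tuning from the prediction error
        if epoch % 50 == 0:
            print(f"epoch {epoch}: MSE {loss:.4f}")
    return model
```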
S5, matching the visibility value to a visibility level for the cluster fog-prone point and, based on the visibility level, outputting corresponding alert information at the cluster fog-prone point, thereby completing the monitoring and capturing of cluster fog.
In one possible implementation, the visibility level data may also be used as input to a traffic high-risk-area analysis subsystem, which effectively identifies the traffic risks caused by reduced visibility when visibility is low or a transition zone between high and low visibility appears.
In one possible implementation, visibility is monitored according to the inverted visibility level and a visibility early-warning index is established; according to different threshold conditions, the influence of the meteorological conditions is classified into 6 levels: safe, relatively safe, basically safe, less safe, unsafe, and extremely unsafe, and early-warning prompts in different colors are shown on the system interface as a reference for users choosing how to travel. When a monitored value exceeds its threshold or warning level, the system raises an alarm in time to alert the relevant personnel.
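A small sketch of the level matching in step S5 follows. The six level names come from the paragraph above, but the visibility thresholds (in metres) and the colors are assumed values, since the patent does not publish its threshold table.

```python
# Illustrative mapping from a predicted visibility value (metres) to the six
# warning levels named above; thresholds and colors are assumptions.
VISIBILITY_LEVELS = [
    (1000, "safe", "green"),
    (500, "relatively safe", "blue"),
    (200, "basically safe", "yellow"),
    (100, "less safe", "orange"),
    (50, "unsafe", "red"),
    (0, "extremely unsafe", "dark red"),
]

def match_visibility_level(visibility_m: float):
    """Step S5: match a visibility value to its level and alert color."""
    for threshold, level, color in VISIBILITY_LEVELS:
        if visibility_m >= threshold:
            return level, color
    return VISIBILITY_LEVELS[-1][1:]
```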
In one possible implementation, when sudden cluster fog on the expressway is monitored and captured, a cluster fog warning is sent to vehicles en route that need to pass the cluster fog point. The warning states that there is sudden cluster fog XX kilometres ahead of the traveling vehicle and that the visibility level is level X; vehicles already on the road section are asked to slow down or stop in time, and vehicles that have not yet entered the section can re-plan their route in advance and leave in time.
Those skilled in the art will appreciate that all or part of the methods of the above embodiments may be implemented by a computer program stored in a computer-readable storage medium which, when executed, may include the procedures of the method embodiments described above. The storage medium may be a non-volatile storage medium such as a magnetic disk, an optical disk or a read-only memory (ROM), or a random access memory (RAM).
It should be understood that although the steps in the flowcharts of the figures are displayed in the order indicated by the arrows, they are not necessarily performed in that order. Unless explicitly stated herein, the order of these steps is not strictly limited and they may be performed in other orders. Moreover, at least some of the steps in the flowcharts may comprise several sub-steps or stages which are not necessarily performed at the same moment but may be performed at different times, and whose order of execution is not necessarily sequential; they may be performed in turn or alternately with other steps or with at least part of the sub-steps or stages of other steps.
With continued reference to fig. 9, the cluster fog monitoring and capturing device based on scale-invariant feature transform image recognition of this embodiment comprises: an image information acquisition module 901, a panoramic image information acquisition module 902, a panoramic image information processing module 903, a visibility value acquisition module 904, and a visibility level acquisition module 905;
the image information acquisition module 901 is configured to acquire an image information set, the image information set containing images of each cluster fog-prone point taken from different angles, the image information set being captured by a monitoring system;
the panoramic image information acquisition module 902 is configured to fuse the acquired image information set into panoramic image information for each cluster fog-prone point using a scale-invariant feature transform algorithm;
the panoramic image information processing module 903 is configured to perform feature processing on the panoramic image information with a VisNet convolutional neural network to obtain the visibility features in the panoramic image information;
the visibility value acquisition module 904 is configured to take the visibility features and meteorological data as input to a BP neural network prediction model and to output, through the computation of the hidden layer, a visibility value for the cluster fog-prone point;
the visibility level acquisition module 905 is configured to match the visibility value to a visibility level for the cluster fog-prone point and, based on the visibility level, to output corresponding alert information at the cluster fog-prone point, completing the monitoring and capturing of cluster fog.
In order to solve the technical problems, the embodiment of the application also provides computer equipment. Referring specifically to fig. 10, fig. 10 is a basic structural block diagram of a computer device according to the present embodiment.
The computer device 10 comprises a memory 10a, a processor 10b and a network interface 10c communicatively coupled to one another via a system bus. It should be noted that the figure only shows the computer device 10 with components 10a-10c, but it should be understood that not all of the illustrated components need be implemented, and more or fewer components may be implemented instead. Those skilled in the art will appreciate that the computer device here is a device capable of automatically performing numerical calculation and/or information processing according to predetermined or stored instructions, and its hardware includes, but is not limited to, microprocessors, application-specific integrated circuits (ASIC), field-programmable gate arrays (FPGA), digital signal processors (DSP), embedded devices, and the like.
The computer equipment can be a desktop computer, a notebook computer, a palm computer, a cloud server and other computing equipment. The computer equipment can perform man-machine interaction with a user through a keyboard, a mouse, a remote controller, a touch pad or voice control equipment and the like.
The memory 10a comprises at least one type of readable storage medium, including flash memory, hard disks, multimedia cards, card-type memory (e.g. SD or DX memory), random access memory (RAM), static random access memory (SRAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), programmable read-only memory (PROM), magnetic memory, magnetic disks, optical disks, etc. In some embodiments, the memory 10a may be an internal storage unit of the computer device 10, such as a hard disk or memory of the computer device 10. In other embodiments, the memory 10a may also be an external storage device of the computer device 10, such as a plug-in hard disk, a smart media card (SMC), a secure digital (SD) card, or a flash card provided on the computer device 10. Of course, the memory 10a may also include both an internal storage unit of the computer device 10 and an external storage device. In this embodiment, the memory 10a is generally used to store the operating system and the various application software installed in the computer device 10, such as the program code of the cluster fog monitoring and capturing method and device based on scale-invariant feature transform image recognition. In addition, the memory 10a may be used to temporarily store various types of data that have been output or are to be output.
The processor 10b may, in some embodiments, be a central processing unit (CPU), controller, microcontroller, microprocessor, or other data-processing chip. The processor 10b is generally used to control the overall operation of the computer device 10. In this embodiment, the processor 10b is configured to run the program code stored in the memory 10a or to process data, for example the program code of the cluster fog monitoring and capturing method and device based on scale-invariant feature transform image recognition.
The network interface 10c may comprise a wireless network interface or a wired network interface, the network interface 10c typically being used to establish a communication connection between the computer device 10 and other electronic devices.
The application also provides another embodiment, namely a non-volatile computer-readable storage medium storing a program of the cluster fog monitoring and capturing method based on scale-invariant feature transform image recognition, the program being executable by at least one processor so that the at least one processor performs the steps of the cluster fog monitoring and capturing method based on scale-invariant feature transform image recognition described above.
From the description of the above embodiments, those skilled in the art will clearly understand that the methods of the above embodiments can be implemented by software plus the necessary general-purpose hardware platform, and of course also by hardware, although in many cases the former is the preferred implementation. Based on this understanding, the technical solution of the application, in essence or in the part contributing to the prior art, may be embodied in the form of a software product stored in a storage medium (e.g. ROM/RAM, magnetic disk, optical disk) and including instructions for causing a terminal device (which may be a mobile phone, a computer, a server, an air conditioner, a network device, or the like) to perform the methods of the embodiments of the application.
It is apparent that the embodiments described above are only some, not all, of the embodiments of the application; the preferred embodiments are shown in the drawings, but they do not limit the scope of the patent claims. This application may be embodied in many different forms; these embodiments are provided so that the disclosure of the application will be thorough and complete. Although the application has been described in detail with reference to the foregoing embodiments, those skilled in the art may still modify the technical solutions recorded in the foregoing embodiments or substitute equivalents for some of their features. Any equivalent structure made using the content of the specification and drawings of the application, whether applied directly or indirectly in other related technical fields, likewise falls within the scope of patent protection of the application.

Claims (9)

1. A cluster fog monitoring and capturing method based on scale-invariant feature transform image recognition, characterized by comprising the following steps:
S1, acquiring an image information set, wherein the image information set contains images of each cluster fog-prone point taken from different angles, and the image information set is captured by a monitoring system;
S2, fusing the acquired image information set into panoramic image information for each cluster fog-prone point using a scale-invariant feature transform algorithm;
S3, performing feature processing on the panoramic image information with a VisNet convolutional neural network to obtain the visibility features in the panoramic image information;
S4, taking the visibility features and meteorological data as input to a BP neural network prediction model and outputting, through the computation of the hidden layer, a visibility value for the cluster fog-prone point, wherein the training of the BP neural network prediction model in step S4 proceeds as follows:
(1) Acquiring cluster fog video-stream segments from the historical video streams of the monitoring system;
(2) Splitting the cluster fog video-stream segments into frames, removing unusable images, and screening registered images by analyzing and sharpening image pixels and background fields to obtain first image information;
(3) Obtaining, with the scale-invariant feature transform algorithm, panoramic image information from the first image information of the cluster fog-prone points taken at different angles, as second image information;
(4) Acquiring the meteorological data of the cluster fog-prone point for the same times, as first meteorological data information;
(5) Matching the second image information with the first meteorological data information by time scale and spatial region, taking the matched result as the input of the BP neural network prediction model, training the model, and tuning its parameters based on the difference between the predicted and the actual visibility values;
S5, matching the visibility value to a visibility level for the cluster fog-prone point and, based on the visibility level, outputting corresponding alert information at the cluster fog-prone point, thereby completing the monitoring and capturing of cluster fog.
2. The cluster fog monitoring and capturing method based on scale-invariant feature transform image recognition according to claim 1, wherein the monitoring system in step S1 comprises: a plurality of cameras arranged at each cluster fog-prone point, each camera forming a wireless network with the other cameras through the Internet of Things, with one camera serving as the sink node; all cameras take photographs at regular intervals and transmit them to the sink node over the Internet of Things, and the sink node compresses the images and uploads them to a server.
3. The cluster fog monitoring and capturing method based on scale-invariant feature transform image recognition according to claim 2, wherein arranging a plurality of cameras at each cluster fog-prone point comprises: installing 6 cameras at each cluster fog-prone point, the 6 cameras forming one group, with the sink node being one of the 6 cameras.
4. The cluster fog monitoring and capturing method based on scale-invariant feature transform image recognition according to claim 1, wherein in step S2 the acquired image information is fused into panoramic image information for the cluster fog-prone point using a scale-invariant feature transform algorithm, specifically as follows:
(1) Loading the image information set of each cluster fog-prone point and extracting the feature points in the image information set;
(2) Matching the feature points in the image information set, which includes displaying the matched feature points of the image information set and extracting them;
(3) Fitting a geometric transformation to the image information set according to the matched feature points: one image in the set is taken as the template image and the others as transformed images; the spatial transformation range of each transformed image is computed relative to the template image, and the transformation is applied to it;
(4) Fusing the transformed images to form the panoramic image information.
5. The cluster fog monitoring and capturing method based on scale-invariant feature transform image recognition according to claim 1, wherein performing feature processing on the panoramic image information with the VisNet convolutional neural network in step S3 comprises:
(1) Applying fast-Fourier-transform filtering to the panoramic image to obtain the basic contour features of its main objects;
(2) Applying spectral filtering to the panoramic image to obtain its visibility-contrast identification regions;
(3) Taking the fast-Fourier-transform filtering result, the spectral filtering result, and the panoramic image as inputs for training the convolutional neural network, so as to obtain the feature information of the panoramic image.
6. The cluster fog monitoring and capturing method based on scale-invariant feature transform image recognition according to claim 1, wherein the meteorological data in step S4 comprise: barometric pressure, temperature, relative humidity, precipitation, wind speed, wind direction, and visibility.
7. A cluster fog monitoring and capturing device based on scale-invariant feature transform image recognition, characterized by comprising: an image information acquisition module, a panoramic image information acquisition module, a panoramic image information processing module, a visibility value acquisition module, and a visibility level acquisition module;
wherein the image information acquisition module is configured to acquire an image information set, the image information set containing images of each cluster fog-prone point taken from different angles, the image information set being captured by a monitoring system;
the panoramic image information acquisition module is configured to fuse the acquired image information set into panoramic image information for each cluster fog-prone point using a scale-invariant feature transform algorithm;
the panoramic image information processing module is configured to perform feature processing on the panoramic image information with a VisNet convolutional neural network to obtain the visibility features in the panoramic image information;
the visibility value acquisition module is configured to take the visibility features and meteorological data as input to a BP neural network prediction model and to output, through the computation of the hidden layer, a visibility value for the cluster fog-prone point, wherein the training of the BP neural network prediction model proceeds as follows: (1) Acquiring cluster fog video-stream segments from the historical video streams of the monitoring system; (2) Splitting the cluster fog video-stream segments into frames, removing unusable images, and screening registered images by analyzing and sharpening image pixels and background fields to obtain first image information; (3) Obtaining, with the scale-invariant feature transform algorithm, panoramic image information from the first image information of the cluster fog-prone points taken at different angles, as second image information; (4) Acquiring the meteorological data of the cluster fog-prone point for the same times, as first meteorological data information; (5) Matching the second image information with the first meteorological data information by time scale and spatial region, taking the matched result as the input of the BP neural network prediction model, training the model, and tuning its parameters based on the difference between the predicted and the actual visibility values;
the visibility level acquisition module is configured to match the visibility value to a visibility level for the cluster fog-prone point and, based on the visibility level, to output corresponding alert information at the cluster fog-prone point, completing the monitoring and capturing of cluster fog.
8. An electronic device, characterized by comprising:
one or more processors, a memory, and one or more computer programs, wherein the one or more computer programs are stored in the memory and comprise instructions which, when executed by the device, cause the device to perform the method of any one of claims 1 to 6.
9. A computer-readable storage medium, characterized in that the computer-readable storage medium stores a computer program which, when run on a computer, causes the computer to perform the method according to any one of claims 1 to 6.
CN202310144922.9A (priority and filing date 2023-02-21): Method and device for monitoring and capturing cluster fog based on scale-invariant feature transform image recognition. Active. Granted as CN116486347B.

Priority Applications (1)

CN202310144922.9A, granted as CN116486347B: Method and device for monitoring and capturing cluster fog based on scale-invariant feature transform image recognition

Publications (2)

CN116486347A, published 2023-07-25
CN116486347B, granted 2023-10-10

Family ID: 87223906

Family Applications (1)

CN202310144922.9A, filed 2023-02-21, status Active, granted as CN116486347B: Method and device for monitoring and capturing cluster fog based on scale-invariant feature transform image recognition

Country Status (1)

CN: CN116486347B

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2003270343A (en) * 2002-03-13 2003-09-25 Mitsubishi Electric Corp Visibility estimation device
CN110188586A (en) * 2018-04-13 2019-08-30 山东百世通大数据科技有限公司 System and application method based on meteorological observation, road camera shooting visibility identification
CN109145962A (en) * 2018-07-31 2019-01-04 南京信息工程大学 A kind of atmospheric parameter inverting observation method based on digital picture
CN112989994A (en) * 2021-03-10 2021-06-18 安徽大学 Fog visibility estimation method based on depth relative learning under discrete label

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Akmaljon Palvanov et al. VisNet: Deep Convolutional Neural Networks for Forecasting Atmospheric Visibility. Sensors, 2019, pp. 1-34. *

Also Published As

CN116486347A, published 2023-07-25

Similar Documents

Publication Publication Date Title
WO2021135879A1 (en) Vehicle data monitoring method and apparatus, computer device, and storage medium
CN111310562A (en) Vehicle driving risk management and control method based on artificial intelligence and related equipment thereof
CN111680551A (en) Method and device for monitoring livestock quantity, computer equipment and storage medium
US11783588B2 (en) Method for acquiring traffic state, relevant apparatus, roadside device and cloud control platform
CN113139482A (en) Method and device for detecting traffic abnormity
KR102333143B1 (en) System for providing people counting service
CN112185131A (en) Vehicle driving state judgment method and device, computer equipment and storage medium
CN112233428B (en) Traffic flow prediction method, device, storage medium and equipment
CN113538963A (en) Method, apparatus, device and storage medium for outputting information
CN113052048A (en) Traffic incident detection method and device, road side equipment and cloud control platform
CN114743157B (en) Pedestrian monitoring method, device, equipment and medium based on video
JP2022093481A (en) Method, apparatus, electronic device, storage medium, and computer program for recognizing vehicle parking violation
CN117292321A (en) Motion detection method and device based on video monitoring and computer equipment
CN116486347B (en) Method and device for monitoring and capturing fog based on scale-invariant feature transformation image recognition
CN116385185A (en) Vehicle risk assessment auxiliary method, device, computer equipment and storage medium
CN116597371A (en) Dangerous object early warning method, system and computer equipment based on image monitoring
CN110996053B (en) Environment safety detection method and device, terminal and storage medium
CN113920720A (en) Highway tunnel equipment fault processing method and device and electronic equipment
CN112016503B (en) Pavement detection method, device, computer equipment and storage medium
CN114640841A (en) Abnormity determining method and device, electronic equipment and storage medium
CN115631509B (en) Pedestrian re-identification method and device, computer equipment and storage medium
CN114639037B (en) Method for determining vehicle saturation of high-speed service area and electronic equipment
CN114155462A (en) Method and device for acquiring passenger flow state and fisheye camera
CN117727174A (en) Intelligent station operation management system, method, equipment and medium
CN117195078A (en) Monitoring intelligent processing method, device, equipment and medium based on transfer learning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant