CN115311601A - Fire detection analysis method based on video analysis technology - Google Patents
- Publication number
- CN115311601A CN115311601A CN202210946164.8A CN202210946164A CN115311601A CN 115311601 A CN115311601 A CN 115311601A CN 202210946164 A CN202210946164 A CN 202210946164A CN 115311601 A CN115311601 A CN 115311601A
- Authority
- CN
- China
- Prior art keywords
- fire
- information
- target
- neural network
- convolutional neural
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/40—Scenes; Scene-specific elements in video content
- G06V20/41—Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/82—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- Evolutionary Computation (AREA)
- Software Systems (AREA)
- General Physics & Mathematics (AREA)
- Computational Linguistics (AREA)
- Multimedia (AREA)
- Health & Medical Sciences (AREA)
- Computing Systems (AREA)
- Artificial Intelligence (AREA)
- General Health & Medical Sciences (AREA)
- Biomedical Technology (AREA)
- Data Mining & Analysis (AREA)
- Biophysics (AREA)
- Molecular Biology (AREA)
- Life Sciences & Earth Sciences (AREA)
- General Engineering & Computer Science (AREA)
- Mathematical Physics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Databases & Information Systems (AREA)
- Medical Informatics (AREA)
- Fire-Detection Mechanisms (AREA)
Abstract
The invention provides a fire detection and analysis method based on video analysis technology. The method comprises: generating a fire data set for the actual acquisition scene from the fire scenes and fire scales captured by integrated real-time image acquisition equipment; constructing, on the basis of a convolutional neural network, a model for analyzing and judging the target fire information in the fire data set at different fire stages; substituting the target fire information into the convolutional neural network model as training characteristic information for training, so as to realize multi-scale target detection of the target fire information; debugging the convolutional neural network model on an embedded computer, separately in a CPU-only test scenario and in a CPU-plus-GPU test scenario; and taking the multi-scale target detection data of the target fire information produced by the optimized algorithm model as the determined output result. The invention realizes multi-scale fire target detection through a YOLOv4-Tiny model, and reduces computational complexity and parameter count on the basis of the YOLOv4-Tiny algorithm.
Description
Technical Field
The invention relates to the technical field of fire information analysis, in particular to a fire detection analysis method based on a video analysis technology.
Background
With the continuous progress of society, the expansion of urban scale and the increase of population density, fire has become one of the disasters that most frequently and widely threaten public safety and social development. Statistical analysis shows that fires are concentrated mainly in densely populated venues, storage and logistics facilities, and tall buildings, where they cause great loss of life and property and ever larger social impact.
By adopting video analysis technology, establishing video analysis models and algorithms, and performing deep analysis and study of video data, a fire can be discovered and an early warning issued at its initial stage. This can greatly reduce the loss of life and property, and can also overcome the technical shortcomings of the temperature-sensing, smoke-sensing and light-sensing detectors that are currently in wide use.
For example, temperature-, smoke- and light-sensing detectors are constrained by the installation position and effective detection distance of the sensor, so their detection range is limited; fires in tall, large-space buildings and long-passage buildings are generally difficult for them to detect in time. Moreover, the judgment of such sensors is based on a single characteristic and is easily disturbed by ambient light, air flow and the like, producing false alarms or missed alarms, so their stability is hard to guarantee. In addition, the propagation of the temperature, smoke, radiation and other parameters generated by a fire takes time, which further delays the response.
Disclosure of Invention
Aiming at the defects in the prior art, the invention aims to provide a fire detection and analysis method based on video analysis technology to solve the problems noted in the background art.
In order to realize the purpose, the invention is realized by the following technical scheme: a fire detection and analysis method based on a video analysis technology comprises the following steps:
first step, data preparation
Generating a fire data set for the actual acquisition scene from the fire scenes and fire scales captured by the integrated real-time image acquisition equipment;
second, presetting an algorithm model
Constructing a convolutional neural network model for analyzing and judging target fire information in fire data sets in different stages based on a convolutional neural network;
thirdly, detecting the target fire information in real time
Substituting the target fire information into the convolutional neural network model as training characteristic information for training, so as to obtain at least one characteristic layer with high semantic information and thereby realize multi-scale target detection of the target fire information;
fourthly, optimizing the algorithm model
On the basis of an embedded computer, debugging the convolutional neural network model separately in a CPU-only test scenario and in a CPU-plus-GPU test scenario, so as to improve the accuracy of the convolutional neural network model on target fire information; measuring the quality of the convolutional neural network model over all categories by calculating target detection evaluation indexes to obtain an optimized algorithm model, and then executing the third step again;
and fifthly, taking the multi-scale target detection data of the target fire information produced by the optimized algorithm model as the determined output result.
As an improvement of the fire detection analysis method based on the video analysis technology in the present invention, in the second step, the target fire information includes smoke image information and flame image information in an actual acquisition scene, wherein a YOLOv4-Tiny algorithm is adopted to establish a convolutional neural network model to train the target fire information, and the specific construction mode of the convolutional neural network model is as follows:
s2-1, obtaining a training sample, wherein at least first training characteristic information and second training characteristic information are marked in the training sample, the first training characteristic information is used for representing smoke image information, and the second training characteristic information is used for representing flame image information;
s2-2, inputting the first training characteristic information and the second training characteristic information into a YOLOv4-Tiny algorithm network for training at the same time, and obtaining a convolutional neural network model.
As an improvement of the fire detection and analysis method based on the video analysis technology in the present invention, in step S2-1, when a training sample is obtained, it is necessary to randomly arrange fire information in the obtained fire data set based on a shuffle algorithm to judge the training authenticity of the convolutional neural network model.
As an improvement of the fire detection and analysis method based on video analysis technology in the present invention, in the second step, the convolutional neural network model established by the YOLOv4-Tiny algorithm adopts CSPDarknet53 as the backbone network. Convolution, normalization and activation-function processing are first performed on the backbone, which is then stacked on the basis of CSPBlock so as to improve the real-time performance of target fire information measurement and facilitate embedded deployment on a computer. The activation function is the Leaky ReLU function, whose mathematical expression is:

f(x_i) = x_i,       if x_i ≥ 0
f(x_i) = x_i / a_i, if x_i < 0

where a_i is a fixed parameter in the interval (1, +∞), expressed as a constant.
As an improvement of the fire detection and analysis method based on video analysis technology in the present invention, in the third step, the at least one characteristic layer with high semantic information is generated by the CSPDarknet53 backbone network adopted by the convolutional neural network model; after the characteristic layer is obtained, classification and regression prediction preprocessing is performed on it, so as to improve the real-time performance of target fire information measurement.
As an improvement of the fire detection and analysis method based on video analysis technology in the present invention, in the fourth step, the target detection evaluation indexes include the mAP of YOLOv4-Tiny, the P-R curve, and the AP value, wherein,
the AP value is used to measure the quality of the convolutional neural network model on each individual category;
the evaluation index mAP is used to measure the quality of the convolutional neural network model over all categories; once the AP values are obtained, the mAP is computed simply as the average of all APs.
As an improvement of the fire detection and analysis method based on the video analysis technology in the present invention, in the fifth step, after multi-scale target detection data of target fire information is obtained as a target result for determining output, the multi-scale target detection data needs to be marked, where a specific implementation manner of marking the multi-scale target detection data is as follows:
creating Annotations, JPEGImages and ImageSets folders based on the format of the VOC2007 dataset, wherein the Annotations, the JPEGImages and the ImageSets folders are used for storing label files, image files corresponding to the label files and indexes of fire data sets;
meanwhile, labeling the pictures containing flame image information and smoke image information in the target result with the labelImg image annotation tool, indicating position and category;
the label files stored in the Annotations folder are then in XML format; code is used to convert the VOC format into the YOLO format, yielding documents containing object category and position information, which serve as new training characteristic data for subsequent training.
In a possible implementation provided by the present invention, after the target fire information is obtained, and before it is substituted into the convolutional neural network model as training characteristic information, the images in the target fire information must undergo distortion-free resizing, in the following manner:
the method comprises the steps of adjusting the size of an input image after gray strips are added at the edge of the image by adopting a YOLOv4-Tiny algorithm, dividing the image into grids with different sizes for detecting objects with different sizes, wherein each grid point is responsible for detecting a single area according to the divided grids, and if the central point of the image to be detected falls in the area, identifying the detected image by the grid point.
Compared with the prior art, the invention has the beneficial effects that:
firstly, the invention realizes multi-scale target detection of fire through a YOLOv4-Tiny model, and reduces the calculation complexity and parameters based on a YOLOv4-Tiny algorithm;
secondly, the algorithm adopted by the invention was compared with other object detection methods of the same type; the results show that the YOLOv4-Tiny algorithm can effectively detect fire and smoke, its detection quality is comparable to SSD, YOLOv3 and YOLOv4, and its detection speed is far faster than SSD's; therefore, the YOLOv4-Tiny algorithm offers better real-time performance and applicability than SSD, YOLOv3 and YOLOv4;
finally, the YOLOv4-Tiny algorithm can better meet the requirements of embedded fire deployment and real-time target detection; considered comprehensively against indexes such as model size and time efficiency, the method can be deployed in robots and unmanned aerial vehicles to quickly identify and locate fire and smoke, achieving fire detection and facility protection.
Drawings
The disclosure of the present invention is illustrated with reference to the accompanying drawings. It is to be understood that the drawings are designed solely for the purposes of illustration and not as a definition of the limits of the invention, in which like reference numerals are used to refer to like parts. Wherein:
FIG. 1 is a schematic diagram of a convolutional neural network structure used in an embodiment of the present invention;
FIG. 2 is a schematic structural diagram of CSPBlock when stacking convolutional neural network models based on CSPBlock according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of the overall structure of the YOLOv4-Tiny algorithm network proposed in one embodiment of the present invention;
FIG. 4 is a diagram illustrating a fire data set including smoke and flame images according to an embodiment of the present invention;
FIG. 5 is a diagram illustrating an actual application scenario of the YOLOv4-Tiny algorithm in one embodiment of the present invention;
FIG. 6 is a diagram illustrating specific index data of the YOLOv4-Tiny algorithm model in detecting smoke in combination with the accuracy, recall, F1 and P-R curve indexes in an embodiment of the present invention;
FIG. 7 is a diagram illustrating the comparison of the results of the YOLOv4-Tiny algorithm proposed in one embodiment of the present invention.
Detailed Description
It is easily understood that according to the technical solution of the present invention, a person skilled in the art can propose various alternative structures and implementation ways without changing the spirit of the present invention. Therefore, the following detailed description and the accompanying drawings are merely illustrative of the technical aspects of the present invention, and should not be construed as limiting or restricting the technical aspects of the present invention.
As an embodiment of the present invention, the present invention provides a technical solution: a fire detection and analysis method based on a video analysis technology comprises the following steps:
first step, data preparation
As shown in fig. 4, in order to detect fires and smoke of various sizes, a multi-scene, multi-scale fire data set is first constructed through the acquisition, shooting and sorting of fire images by image acquisition devices such as cameras; 38,000 pictures were used in the specific implementation. Meanwhile, in order to run comparison tests and to mitigate the sparsity of samples during early training, the 38,000 pictures are passed through a shuffle algorithm so that the fire information in the acquired data set (fire pictures containing smoke image information and flame image information) is randomly arranged. Preferably, 10% of the pictures are selected as the test set, i.e. 3,800 pictures, and the remainder as the training set (34,200 pictures, of which 3,420 are further set aside as the validation set). The validation set is used to compare model performance: after different neural networks are trained on the training set, their performance is judged against the validation set, and the test set is then used to evaluate the trained networks objectively.
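As an illustrative sketch only (the function name, fixed seed and fraction parameters below are assumptions for illustration, not part of the claimed method), the shuffle-based split described above could look like:

```python
import random

def split_dataset(paths, test_frac=0.10, val_frac=0.10, seed=0):
    """Shuffle, then hold out test_frac as the test set and
    val_frac of the remainder as the validation set."""
    rng = random.Random(seed)   # fixed seed keeps the split reproducible
    paths = list(paths)
    rng.shuffle(paths)          # the 'shuffle algorithm' randomization step
    n_test = int(len(paths) * test_frac)
    test, rest = paths[:n_test], paths[n_test:]
    n_val = int(len(rest) * val_frac)
    val, train = rest[:n_val], rest[n_val:]
    return train, val, test

# With 38,000 pictures this yields 3,800 test, 3,420 validation
# and 30,780 training pictures, matching the counts above.
```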
Second, presetting an algorithm model
It should be noted that after the scene video information (target fire information) is collected in real time by the camera in the first step, the images must be analyzed and detected by computer, combining image processing, computer vision, machine learning and related technologies to analyze and judge smoke, flame and so on. Therefore,
a convolutional neural network model must be constructed, on the basis of a convolutional neural network, for analyzing and judging the target fire information in the fire data set at different fire stages. A fire is divided into an initial stage, a development stage, a violent (fully developed) stage and a decay stage; in practical application these four stages must be analyzed and judged separately, the aim being to control and extinguish the fire in its initial stage.
Based on the technical concept, it can be understood that the target fire information comprises smoke image information and flame image information in an actual acquisition scene, wherein, the invention adopts a YOLOv4-Tiny algorithm to establish a convolution neural network model to train the target fire information,
as shown in fig. 1, as a further understanding of the technical concept of the present invention, a CNN (convolutional neural network) is one of the representative algorithms for deep learning, and is composed of an input layer, a convolutional layer, a pooling layer, a full link layer, and an output layer, and in target detection, the output layer may be designed to output the center coordinates, the size, and the classification of an object, and meanwhile, the CNN may avoid complex preprocessing of an image, may directly input an original fire image, and perform multi-layer analysis: in view of the above, as an embodiment of the present invention,
the specific construction mode of the convolutional neural network model is as follows:
s2-1, obtaining a training sample, wherein at least first training characteristic information and second training characteristic information are marked in the training sample, the first training characteristic information is used for representing smoke image information, and the second training characteristic information is used for representing flame image information;
s2-2, inputting the first training characteristic information and the second training characteristic information into the YOLOv4-Tiny algorithm network simultaneously for training, to obtain the convolutional neural network model. Based on detection experiments with the YOLOv4 algorithm on fire and smoke in various environments, fire and smoke are identified and analyzed using the CSPDarknet53 (Cross Stage Partial Darknet-53) backbone from YOLOv4, achieving the initial study and judgment of fire information.
As shown in fig. 2, in the concretely constructed convolutional neural network model, the model established by the YOLOv4-Tiny algorithm of the present invention uses CSPDarknet53 as the backbone network, performs convolution, normalization and activation-function processing on it, and then stacks the network on the basis of CSPBlock to improve the real-time performance of target fire information measurement and facilitate embedded deployment on a computer. The activation function is the Leaky ReLU function, whose mathematical expression is:

f(x_i) = x_i,       if x_i ≥ 0
f(x_i) = x_i / a_i, if x_i < 0

where a_i is a fixed parameter in the interval (1, +∞), expressed as a constant.
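As a minimal numeric sketch of this activation (assuming the divisor form of Leaky ReLU implied by the a_i ∈ (1, +∞) constraint; the value a = 10 is an illustrative choice, not taken from the patent):

```python
import numpy as np

def leaky_relu(x, a=10.0):
    """Leaky ReLU in divisor form: x for x >= 0, x / a otherwise.
    Here a plays the role of the fixed parameter a_i in (1, +inf)."""
    x = np.asarray(x, dtype=float)
    return np.where(x >= 0.0, x, x / a)
```

Unlike the plain ReLU, negative inputs keep a small nonzero gradient (1/a), which avoids dead units during training.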
Thirdly, detecting the target fire information in real time
The target fire information is substituted into the convolutional neural network model as training characteristic information for training, to obtain at least one characteristic layer with high semantic information and so realize multi-scale target detection of the target fire information. It should be understood that, in order to improve the real-time performance of target fire detection, the characteristic layer with high semantic information is generated by the CSPDarknet53 backbone network adopted by the convolutional neural network model; after it is obtained, classification and regression prediction preprocessing must be performed on it, so that the function fitted by the neural network model can adapt to the fluctuations of complex functions and achieve a better fit;
based on the technical concept, it can be understood that a Feature Pyramid Network (FPN) in YOLOv4 is simplified by using YOLOv4-Tiny algorithm, so that two effective feature layers can be generated from a backbone of YOLOv4-Tiny, and multi-scale target detection of fire can be realized through the two feature layers with high semantic information.
Meanwhile, in this implementation, since only fire and smoke are identified, the final dimension of the YOLO head is 21 = 3 × (5 + 2), where 3 is the number of anchors in each feature map, 5 is the number of bounding-box parameters plus the confidence score, and 2 is the number of categories. The overall structure of the YOLOv4-Tiny network is shown in fig. 3. Therefore, to further improve the real-time performance of fire object detection and promote embedded application, the algorithm model needs to be debugged:
fourthly, optimizing the algorithm model
It can be understood that embedded computers generally fall into two cases: those with only a central processing unit (CPU), and those with both a CPU and a GPU. Therefore, in the specific implementation, the convolutional neural network model is debugged on an embedded computer separately in a CPU-only test scenario and in a CPU-plus-GPU test scenario, so that the accuracy and real-time performance of the YOLOv4-Tiny network on fire object detection can be balanced, further improving the accuracy of the convolutional neural network model on target fire information. At the same time,
the configuration of the computer often has a significant impact on training time, frames per second (FPS) and batch size, so the present invention adopts the computer configuration shown in Table 1, addressing this practical concern. In addition,
after the convolutional neural network model is trained, the results over the whole test set are analyzed to further evaluate the constructed model. The invention measures the quality of the convolutional neural network model over all categories by calculating the target detection evaluation indexes, and, once the optimized algorithm model is obtained, executes the third step again.
As shown in figs. 5-6, result analysis over the whole test set is performed on the trained convolutional neural network model (Threshold = 0.5). The mAP (evaluation index) of YOLOv4-Tiny is found to be 87.10%. Specifically, precision does not account for false negatives and recall does not account for false positives, while the P-R curve (the model-evaluation precision-recall curve) considers both; from it, the AP of fire is 88.98% and the AP of smoke is 85.22% (the AP measures the quality of the learned model on each category, the mAP measures its quality over all categories, and once the APs are obtained the mAP is simply their average). For this reason,
for further analysis of the model, the precision, recall, F1 and P-R curve indexes are examined together: the fire precision of YOLOv4-Tiny is 96.05%, the smoke precision is 91.40%, the recall for detecting flame is 81.50%, and the recall for detecting smoke is 71.96%. It follows that in the YOLOv4-Tiny model the AP, precision, recall and F1 for detecting smoke are generally lower than those for detecting fire, because the characteristics of smoke are not as obvious as those of fire and are more susceptible to environmental interference.
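The evaluation arithmetic above can be sketched as follows (a hedged illustration; the function names are ours). Note that the reported APs of 88.98% (fire) and 85.22% (smoke) do average to the stated 87.10% mAP:

```python
def mean_average_precision(ap_per_class):
    """mAP: the plain mean of the per-class AP values."""
    return sum(ap_per_class.values()) / len(ap_per_class)

def precision_recall_f1(tp, fp, fn):
    """Precision ignores false negatives, recall ignores false positives,
    and F1 is their harmonic mean, combining both error types."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1
```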
It can be understood from the above experimental results on the test set that the detection performance of the algorithm using only the CPU is similar to that using both the CPU and the GPU, while adding the GPU greatly speeds up the identification process. Using only the CPU, the FPS of the YOLOv4-Tiny algorithm is 6, which under the same conditions is still far superior to the other methods. FIG. 7 illustrates the comparison of the results of the YOLOv4-Tiny algorithm proposed in one embodiment of the present invention.
In addition, the model size of YOLOv4-Tiny is 23,004 KB, much smaller than the other models (93,292 KB for SSD, 240,642 KB for YOLOv3, and 250,180 KB for YOLOv4). The YOLOv4-Tiny algorithm used here can therefore better meet the requirements of embedded fire deployment and real-time target detection; considered comprehensively against model size, time efficiency and other indexes, it can be deployed in robots and unmanned aerial vehicles to quickly identify and locate fire and smoke, achieving fire detection and facility protection.
Fifthly, the multi-scale target detection data of the target fire information produced by the optimized algorithm model is taken as the determined output result. It should be noted that after the multi-scale target detection data of the target fire information is obtained as the determined output result, the data must also be annotated, in the following specific manner:
creating Annotations, JPEGImages and ImageSets folders based on the format of the VOC2007 dataset, wherein the Annotations, the JPEGImages and the ImageSets folders are used for storing label files, image files corresponding to the label files and indexes of fire datasets;
meanwhile, labeling the pictures containing flame image information and smoke image information in the target result with the labelImg image annotation tool, indicating position and category;
the label files stored in the Annotations folder are then in XML format; code is used to convert the VOC format into the YOLO format, yielding documents containing object category and position information, which serve as new training characteristic data for subsequent training.
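A hedged sketch of that VOC-to-YOLO conversion (the class names "fire" and "smoke", their order, and the function name are our assumptions for illustration; real VOC annotation files live in the Annotations folder as XML):

```python
import xml.etree.ElementTree as ET

CLASSES = ["fire", "smoke"]  # assumed class order

def voc_to_yolo(xml_text):
    """Turn one VOC XML annotation into YOLO lines:
    '<class> <cx> <cy> <w> <h>' with coordinates normalized to [0, 1]."""
    root = ET.fromstring(xml_text)
    img_w = float(root.findtext("size/width"))
    img_h = float(root.findtext("size/height"))
    lines = []
    for obj in root.iter("object"):
        cls = CLASSES.index(obj.findtext("name"))
        box = obj.find("bndbox")
        xmin = float(box.findtext("xmin")); ymin = float(box.findtext("ymin"))
        xmax = float(box.findtext("xmax")); ymax = float(box.findtext("ymax"))
        cx = (xmin + xmax) / 2.0 / img_w      # box center, normalized
        cy = (ymin + ymax) / 2.0 / img_h
        w = (xmax - xmin) / img_w             # box size, normalized
        h = (ymax - ymin) / img_h
        lines.append(f"{cls} {cx:.6f} {cy:.6f} {w:.6f} {h:.6f}")
    return lines
```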
In an embodiment of the present invention, after the target fire information is obtained, and before it is substituted into the convolutional neural network model as training characteristic information, the images in the target fire information must undergo distortion-free resizing, in the following manner:
the method comprises the steps of adjusting the size of an input image after gray strips are added at the edge of the image by adopting a YOLOv4-Tiny algorithm, dividing the image into grids with different sizes for detecting objects with different sizes, wherein each grid point is responsible for detecting a single area according to the divided grids, and if the central point of the image to be detected falls in the area, identifying the detected image by the grid point.
The technical scope of the present invention is not limited to the above description, and those skilled in the art can make various changes and modifications to the above-described embodiments without departing from the technical spirit of the present invention, and such changes and modifications should fall within the protective scope of the present invention.
Claims (8)
1. A fire detection and analysis method based on a video analysis technology is characterized in that: the method comprises the following steps:
first step, data preparation
Generating a fire data set for the actual acquisition scene from the fire scenes and fire scales captured by the real-time image acquisition equipment;
second, presetting an algorithm model
Constructing a convolutional neural network model for analyzing and judging target fire information in fire data sets in different stages based on a convolutional neural network;
thirdly, detecting the target fire information in real time
Substituting target fire information as training characteristic information into a convolutional neural network model for training to obtain at least one characteristic layer with high semantic information so as to realize multi-scale target detection on the target fire information;
fourthly, optimizing the algorithm model
On the basis of an embedded computer, debugging the convolutional neural network model in a scene of testing only by using a CPU and a scene of testing by using the CPU and a GPU respectively so as to improve the accuracy of the convolutional neural network model on target fire information; and respectively carrying out quality measurement on the convolutional neural network model on all categories by calculating a target detection evaluation index to obtain an optimization algorithm model, and then sequentially executing a third step;
and fifthly, taking multi-scale target detection data of the target fire information displayed according to the optimized algorithm model as a target result for determining output.
2. A fire detection and analysis method based on video analysis technology as claimed in claim 1, characterized in that: in the second step, the target fire information comprises smoke image information and flame image information in an actual acquisition scene, wherein a YOLOv4-Tiny algorithm is adopted to establish a convolutional neural network model to train the target fire information, and the specific construction mode of the convolutional neural network model is as follows:
s2-1, obtaining a training sample, wherein at least first training characteristic information and second training characteristic information are marked in the training sample, the first training characteristic information representing smoke image information and the second training characteristic information representing flame image information;
s2-2, simultaneously inputting the first training characteristic information and the second training characteristic information into the YOLOv4-Tiny network for training, thereby obtaining the convolutional neural network model.
3. The fire detection and analysis method based on video analysis technology as claimed in claim 2, characterized in that: in step S2-1, when the training sample is acquired, the fire information in the acquired fire data set needs to be randomly shuffled using a shuffle algorithm to ensure the validity of the convolutional neural network model's training.
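The shuffle of claim 3 amounts to a random permutation of the annotated fire samples before training. A minimal Python sketch (the function name and the seed handling are illustrative, not part of the claim):

```python
import random

def shuffle_dataset(samples, seed=0):
    # Copy so the original annotation list is left intact, then apply a
    # seeded Fisher-Yates shuffle; a fixed seed keeps runs reproducible.
    shuffled = list(samples)
    random.Random(seed).shuffle(shuffled)
    return shuffled
```

Shuffling breaks up any ordering in the collected footage (e.g. all smoke frames first), so each training batch mixes both classes.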
4. The fire detection and analysis method based on video analysis technology as claimed in claim 2, characterized in that: in the second step, the convolutional neural network model established by the YOLOv4-Tiny algorithm adopts CSPDarknet53 as the backbone network; first, convolution, normalization and activation-function processing are performed in the backbone network, and second, the network is stacked on the basis of CSPBlock, which improves the real-time performance of target fire information measurement and facilitates deployment on an embedded computer, wherein the activation function is the Leaky ReLU activation function, whose mathematical expression is f(x) = x for x > 0 and f(x) = αx for x ≤ 0, where α is a small positive slope.
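The Leaky ReLU of claim 4 can be sketched as follows; the slope α = 0.1 is the value commonly used in YOLO implementations and is an assumption, not stated in the claim:

```python
import numpy as np

def leaky_relu(x, alpha=0.1):
    # f(x) = x for x > 0, alpha * x otherwise: unlike plain ReLU,
    # negative inputs keep a small gradient instead of being zeroed.
    return np.where(x > 0, x, alpha * x)
```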
5. The fire detection and analysis method based on video analysis technology as claimed in claim 1 or 4, characterized in that: in the third step, the at least one feature layer with high-level semantic information is generated by the CSPDarknet53 backbone network adopted by the convolutional neural network model, and after the feature layer with high-level semantic information is obtained, classification and regression prediction processing needs to be performed on the feature layer to improve the real-time performance of target fire information measurement.
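In the standard YOLO head layout (assumed here; the claim does not spell it out), the classification-and-regression processing of claim 5 predicts, per grid cell and per anchor, 4 box offsets, 1 objectness score and one score per class. A sketch of the resulting output channel count:

```python
def head_channels(num_anchors=3, num_classes=2):
    # Per grid cell: each anchor predicts 4 box offsets,
    # 1 objectness score, and num_classes class scores.
    return num_anchors * (5 + num_classes)
```

With the two classes of claim 2 (smoke, flame) and the usual 3 anchors per scale, each feature-layer head outputs 21 channels.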
6. The fire detection and analysis method based on video analysis technology as claimed in claim 1, characterized in that: in the fourth step, the target detection evaluation indexes include the YOLOv4-Tiny evaluation index mAP, the P-R curve and the AP value, wherein
the AP value is used for measuring the quality of the convolutional neural network model in each category;
the evaluation index mAP is used for measuring the quality of the convolutional neural network model over all categories; after the AP values are obtained, the evaluation index mAP is calculated as the average of all the AP values.
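The mAP computation of claim 6 reduces to a plain average of per-class AP values. A minimal sketch (the class names below come from claim 2; the AP values in the usage are placeholders):

```python
def mean_average_precision(ap_per_class):
    # mAP is the arithmetic mean of the per-class AP values,
    # e.g. {"smoke": ..., "flame": ...} for this two-class detector.
    if not ap_per_class:
        raise ValueError("need at least one class AP")
    return sum(ap_per_class.values()) / len(ap_per_class)
```

For example, `mean_average_precision({"smoke": 0.8, "flame": 0.6})` yields 0.7.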
7. The fire detection and analysis method based on video analysis technology as claimed in claim 1, characterized in that: in the fifth step, after the multi-scale target detection data of the target fire information is obtained as the determined output target result, the multi-scale target detection data also needs to be labeled, which is implemented as follows:
creating Annotations, JPEGImages and ImageSets folders based on the format of the VOC2007 data set, wherein the Annotations, JPEGImages and ImageSets folders are used for storing, respectively, the label files, the image files corresponding to the label files, and the index of the fire data set;
meanwhile, labeling the pictures containing flame image information and smoke image information in the target result with the labelImg image annotation tool to indicate object position and class;
the label files stored in the Annotations folder are then in XML format, and code is used to convert the VOC format into the YOLO format, so that documents containing object class and position information are obtained and used as new training characteristic data for subsequent training.
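The VOC-to-YOLO conversion of claim 7 rewrites each corner-format box (xmin, ymin, xmax, ymax, in pixels) as a normalized center-format box. A minimal sketch of the coordinate arithmetic only (the XML parsing of the Annotations files is omitted):

```python
def voc_to_yolo(xmin, ymin, xmax, ymax, img_w, img_h):
    # VOC stores pixel corners; YOLO stores the box center and size,
    # each normalized by the image dimensions to the range [0, 1].
    xc = (xmin + xmax) / 2.0 / img_w
    yc = (ymin + ymax) / 2.0 / img_h
    w = (xmax - xmin) / img_w
    h = (ymax - ymin) / img_h
    return xc, yc, w, h
```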
8. The fire detection and analysis method based on video analysis technology as claimed in claim 1, characterized in that: after the target fire information is obtained, and before it is substituted into the convolutional neural network model as training characteristic information for training, the images in the target fire information need to be subjected to anti-distortion processing, as follows:
the method comprises the steps of adjusting the size of an input image after adding gray strips at the edge of the image by adopting a YOLOv4-Tiny algorithm, dividing the image into grids with different sizes for detecting objects with different sizes, wherein each grid point is used for detecting a single area according to the divided grids, and if the center point of the image to be detected falls in the area, identifying the detected image by the grid point.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210946164.8A CN115311601A (en) | 2022-08-08 | 2022-08-08 | Fire detection analysis method based on video analysis technology |
Publications (1)
Publication Number | Publication Date |
---|---|
CN115311601A true CN115311601A (en) | 2022-11-08 |
Family
ID=83860802
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115311601A (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116597595A (en) * | 2023-06-30 | 2023-08-15 | 广州里工实业有限公司 | Factory fire monitoring and scheduling system |
CN116597595B (en) * | 2023-06-30 | 2024-04-16 | 广州里工实业有限公司 | Factory fire monitoring and scheduling system |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||