CN115841642A - Dynamic characteristic assisted visible light fire detection and identification method, device and medium - Google Patents

Dynamic characteristic assisted visible light fire detection and identification method, device and medium

Info

Publication number
CN115841642A
CN115841642A
Authority
CN
China
Prior art keywords
fire
dynamic
detection
identification
frame
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202211517483.3A
Other languages
Chinese (zh)
Other versions
CN115841642B (en)
Inventor
朱佩佩
吴元
赖作镁
李顺枝
万加龙
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
CETC 10 Research Institute
Original Assignee
CETC 10 Research Institute
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by CETC 10 Research Institute filed Critical CETC 10 Research Institute
Priority to CN202211517483.3A
Publication of CN115841642A
Application granted
Publication of CN115841642B
Active legal-status Current
Anticipated expiration legal-status

Landscapes

  • Image Analysis (AREA)
  • Fire-Detection Mechanisms (AREA)

Abstract

The invention discloses a dynamic characteristic assisted visible light fire detection and identification method, device and medium, belonging to the field of fire detection and identification. From fire video shot by a fixed visible light camera, forward motion salient frames are extracted by exploiting the directional motion characteristic of the fire combustion process, and the dynamic features derived from the salient frames and the current frame are used to assist the fire detection and identification task on the current frame, thereby improving the detection and identification effect. A motion feature auxiliary module based on conditional convolution is also provided; this module and an improved S-shaped PANet (Path Aggregation Network) structure are embedded into an end-to-end real-time intelligent target detection and identification algorithm, so that the temporal and spatial characteristics of fire video data are organically combined, the early fire identification accuracy is improved, and the fire false alarm rate is reduced.

Description

Dynamic characteristic assisted visible light fire detection and identification method, device and medium
Technical Field
The invention relates to the field of fire detection and identification, and in particular to a dynamic characteristic assisted visible light fire detection and identification method, device and medium.
Background
Fire is a highly destructive disaster that occurs frequently; it directly endangers human life and property, causes environmental pollution and destroys ecological balance. With the development of modern society, cities are expanding, densely populated places are increasing, and equipment and facilities in remote mountainous areas are left unattended for long periods, all of which bring considerable fire risk. With limited manpower and material resources, monitoring for fire with visible light cameras, discovering a fire as early as possible and handling it in time is an effective way to reduce fire hazards. Automatic fire recognition on the tens of thousands of visible light images captured by such cameras reduces the workload, effectively reduces the investment of personnel, and allows the positions where fire may occur to be monitored at all times.
Video fire detection techniques need to detect both flame and smoke. Traditional detection and identification methods mainly perform smoke detection using the colour features, wavelet coefficient features and dynamic features of smoke combined with strategies such as classifiers and threshold screening, while flame detection is mainly based on model methods such as flame colour, flame flicker frequency and dynamic edge features. Such hand-crafted feature extraction operators adapt poorly to the ever-changing fire conditions of real scenes. With the wide application of artificial intelligence, target detection technology has gradually shifted from traditional image feature operators combined with machine learning towards deep learning, and the detection effect has gradually improved. At the present stage, in fire recognition tasks based on optoelectronic video images, researchers have combined a deep convolutional neural network with a YCbCr space model to detect water surface flames, effectively overcoming the missed detection of water surface flames in traditional algorithms; others have adopted the classic intelligent target detection and identification algorithm YOLOv3 to detect and identify flame and smoke in images, greatly improving the fire identification rate; in addition, a method that extracts the fire region at the pixel level with a UNet deep segmentation network can improve fire positioning accuracy.
However, fire detection and identification algorithms still face quite a few difficulties. In particular, in the early stage of a fire, the fire discrimination algorithm must balance accuracy against the false alarm rate: the flame and smoke produced at this stage have weak features that are difficult for the feature extraction module of a target detection and identification algorithm to mine fully, which easily leads to missed detections. In addition, flame and smoke, the typical targets when a fire occurs, are non-rigid targets with large appearance variation, and smoke has weak colour and texture features, so they are easily confused with other targets and cause false alarms.
Disclosure of Invention
The invention aims to overcome the defects of the prior art by providing a dynamic characteristic assisted visible light fire detection and identification method, device and medium that improve the fire detection and identification effect, raise the early fire identification accuracy and reduce the fire false alarm rate.
The purpose of the invention is realized by the following scheme:
a dynamic characteristic assisted visible light fire detection and identification method comprises the following steps:
extracting, from the fire video, image frames with directional motion characteristics within a selected time range, and combining them with the image frame at the selected time to obtain dynamic features that take the temporal context information of the video during fire combustion into account; and using the dynamic features to assist fire detection and identification on the image frames.
Further, the method for extracting the image frames with the directional motion characteristics in the selected time range from the video frames through the fire videos and combining the image frames at the selected time comprises the following substeps:
s1, acquiring a fire video image;
s2, extracting, from the continuous video frames obtained within a time length T before the current moment, the image frames exhibiting non-downward motion by using an optical flow method and optical flow direction statistics, these image frames being called forward motion salient frames;
and S3, extracting dynamic characteristics by using the forward motion salient frame and the current time frame.
Further, in step S2, the process of extracting the forward motion salient frame includes the following sub-steps:
s21, extracting a sampling optical flow of the video image frame obtained within the time length T before the current moment by an optical flow method;
s22, clustering the optical flow directions of each obtained frame image by using a clustering method, wherein the number of clusters is N, the number of cluster categories belonging to forward motion is M, M < N, and the number of non-forward-motion categories is N−M;
s23, counting the forward-motion optical flow proportion γ:
γ = ( Σ_{i=1}^{M} k_i ) / ( Σ_{j=1}^{N} k_j )
wherein k_i and k_j respectively represent the number of optical flow vectors belonging to the i-th forward-motion category and the j-th category; the image frames with a larger forward-motion optical flow proportion γ are taken as forward motion salient frames.
Further, in step S3, performing dynamic feature extraction by using the forward motion salient frame and the current time frame includes the following sub-steps:
s31, extracting a motion foreground region of the current frame;
s32, screening the extracted foreground region based on the fire image characteristics;
and S33, normalizing the screened foreground area to obtain dynamic characteristics.
Further, using the dynamic features to assist fire detection and identification of image frames comprises the following sub-steps:
and step S4: building the overall framework of the intelligent fire target detection and identification model and adding a dynamic characteristic auxiliary flow to the model, wherein the dynamic characteristic auxiliary flow comprises the following specific steps:
1) Performing spatial attention feature analysis on the extracted dynamic features and on the current frame respectively;
2) Mapping the attention features to the weighting weights of the convolution operator group and updating the parameters using fully connected and activation functions.
Further, in step S4, the overall framework of the intelligent fire target detection and identification model comprises a backbone network, a connection Neck and a detection head; the backbone network adopts a Darknet53 backbone structure embedded with conditional convolution; the connection Neck adopts an S-shaped PANet structure, obtained by expanding the U-shaped feature transfer loop of the original PANet structure into an S-shaped feature transfer loop and used to deepen the connection between the backbone network of the intelligent fire target detection model and the detection head; the detection head adopts a detection output layer based on multi-scale anchor design.
Further, in step 2), the method comprises the following sub-steps, where the spatial attention feature of the dynamic feature is denoted Z, the spatial attention feature of the current frame is denoted X, and the specific mapping process is as follows:
step a), inputting the spatial attention feature Z of the dynamic feature into a mapping function composed of FC and Sigmoid to obtain a mapping weight vector, denoted c_Z;
step b), inputting the spatial attention feature X of the current frame into another group of mapping functions composed of FC and Sigmoid to obtain a mapping weight vector, denoted c_X;
step c), combining the mapping weight vectors c_Z and c_X by element-wise (dot) multiplication to obtain the weighting weight vector c = c_Z ⊙ c_X;
and step d), combining the weighting weights with the convolution parameters of the convolution operator group through a dot product operation and updating the convolution parameters as: W_i = c_i · W_i, where c_i is a single element of the weighting weight vector c.
Further, after the whole framework of the intelligent fire target detection and identification model is built, the method comprises the following steps:
step S5: training the intelligent fire target detection and recognition model based on the dynamic characteristic auxiliary flow by using training data, and testing the trained model by using test data; in the training and testing process, inputting the dynamic features into the dynamic characteristic auxiliary flow and inputting the current frame at the input end of the intelligent fire target detection and identification model; and filtering the output of the intelligent fire recognition model by non-maximum suppression post-processing to obtain a final fire detection result.
A dynamic-feature-assisted visible-light fire detection and identification device, comprising:
the dynamic characteristic acquisition module is used for extracting, from the fire video, image frames with directional motion characteristics within a selected time range and combining them with the image frame at the selected moment to obtain dynamic features carrying the temporal context information of the video during fire combustion;
and the dynamic characteristic auxiliary module is used for assisting the fire detection and identification of the image frames by utilizing the dynamic characteristics.
A readable storage medium storing a computer program which, when loaded and executed by a processor, performs any of the methods described above.
The beneficial effects of the invention include:
the technical scheme of the embodiment of the invention solves the technical problem that the accuracy of fire detection and identification is difficult to improve due to the fact that the traditional fire target detection and identification method based on the photoelectric video does not fully excavate the video time context dynamic information and relies on the static characteristics such as the appearance and the color of a single-frame image, directional motion characteristics in the fire combustion process are excavated through the fire video shot by a fixed camera, and a dynamic characteristic auxiliary module is utilized to assist the fire detection and identification of the current image frame, so that the fire detection and identification effect is improved.
The technical solution of the embodiments of the invention provides a motion feature auxiliary module based on conditional convolution; this module and an improved S-shaped PANet structure are further embedded into an end-to-end real-time intelligent target detection and identification algorithm, so that the temporal and spatial characteristics of fire video data are organically combined, the early fire identification accuracy is improved, and the fire false alarm rate is reduced.
The intelligent fire recognition model based on the dynamic feature auxiliary module makes full use of the temporal context information in the fire video and complements and corroborates the static target features of the single-frame image. It suppresses both fire targets that would be missed because their static features are weak and erroneous targets whose colour and texture resemble fire, and can thus improve the correct detection and identification rate of fires.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to these drawings without creative efforts.
FIG. 1 is a flowchart of an intelligent fire recognition method based on dynamic feature assistance according to an embodiment of the present invention;
FIG. 2 is a flowchart of forward motion salient frame extraction according to an embodiment of the present invention;
FIG. 3 is a schematic structural diagram of a dynamic feature support module according to an embodiment of the present invention;
fig. 4 is a schematic structural diagram of an intelligent fire recognition model based on a dynamic characteristic auxiliary module according to an embodiment of the present invention.
Detailed Description
All features disclosed in all embodiments in this specification, or all methods or process steps implicitly disclosed, may be combined and/or expanded, or substituted, in any way, except for mutually exclusive features and/or steps.
In view of the problems in the background, the inventors found through creative thinking that existing artificial-intelligence-based fire recognition methods mainly use the static features of a single-frame image and do not fully mine the dynamic information between consecutive frames, let alone use that dynamic information to assist the classification of static features; as a result, deep-learning fire recognition algorithms find it difficult to effectively solve problems such as an insufficient early-stage fire recognition rate and a high false alarm rate.
Having recognised these technical problems, and aiming at the difficulties of fire identification, the technical solution of the embodiments of the invention provides an intelligent fire detection and identification scheme based on dynamic feature assistance. A forward motion salient frame extraction method is designed so that motion features between the forward motion salient frame and the current frame can be extracted and effectively used by the model; a dynamic feature auxiliary module is built so that the dynamic features assist the fire detection and identification task based on optoelectronic image frames, achieving an organic combination of the temporal context information of the video frames and the static information of the single-frame image and thereby improving the fire identification rate. In addition, in another inventive concept, an improved S-shaped PANet structure is provided: the U-shaped feature transfer loop of the original PANet structure is expanded into an S-shaped transfer loop, which improves the feature extraction capability of the PANet and the detection accuracy of the intelligent detection and recognition model.
In a further inventive concept, the technical solution of the embodiments of the invention provides an intelligent fire detection and identification method assisted by temporal context dynamic features, addressing the technical problem that traditional optoelectronic-video-based fire target detection and identification methods do not fully mine the temporal context information of the video and depend on static features of the single-frame image such as appearance and colour, which makes the accuracy of fire detection and identification difficult to improve. Directional motion characteristics of the fire combustion process are mined from fire video shot by a fixed camera, and the dynamic feature auxiliary module is used to assist fire detection and identification of the current image frame so as to improve the detection and identification effect. The specific steps are as follows:
Step S1: acquiring a fire video image through a fixed visible light camera;
Step S2: extracting, from the continuous video frames obtained within a time length T before the current moment, the image frames exhibiting significant non-downward motion (upward, leftward, rightward, etc.) by using an optical flow method and optical flow direction statistics; these image frames are called forward motion salient frames. The extraction process of the forward motion salient frames comprises the following steps:
1) Carrying out sampling optical flow extraction on video image frames obtained before the current moment and in the time length T by an optical flow method;
2) Clustering the optical flow directions of each obtained frame image by a clustering method, where the number of clusters is N, the number of cluster categories belonging to forward motion is M (M < N), and the number of non-forward-motion categories is N−M.
3) Counting the forward-motion optical flow proportion γ:
γ = ( Σ_{i=1}^{M} k_i ) / ( Σ_{j=1}^{N} k_j )
where k_i and k_j denote the number of optical flow vectors belonging to the i-th forward-motion category and the j-th category, respectively. The image frames with a larger forward-motion optical flow proportion γ are taken as forward motion salient frames.
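By way of illustration only, the per-frame statistics of this step could be sketched as follows in Python; the use of k-means from scikit-learn, the rule that the M least-downward clusters count as forward motion, and the helper names are assumptions of the sketch, not requirements of the embodiment.

```python
# Illustrative sketch: cluster sampled optical-flow directions with k-means and
# keep the frames whose forward-motion flow proportion gamma is largest.
import numpy as np
from sklearn.cluster import KMeans

def forward_motion_ratio(flow, n_clusters=8, forward_classes=5):
    """flow: (K, 2) array of sampled optical-flow vectors (dx, dy) for one frame."""
    angles = np.arctan2(flow[:, 1], flow[:, 0]).reshape(-1, 1)      # flow directions
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(angles)
    counts = np.bincount(labels, minlength=n_clusters)              # k_j for each cluster
    # Assumption: the M clusters whose mean vertical component is smallest (least
    # downward, since image y grows downwards) are treated as forward motion.
    mean_dy = np.array([flow[labels == c, 1].mean() if counts[c] else np.inf
                        for c in range(n_clusters)])
    forward = np.argsort(mean_dy)[:forward_classes]
    return counts[forward].sum() / max(counts.sum(), 1)             # gamma = sum_i k_i / sum_j k_j

def select_salient_frames(flows_per_frame, top_k=3):
    """Return the indices of the frames with the largest forward-motion ratio gamma."""
    gammas = [forward_motion_ratio(f) for f in flows_per_frame]
    return np.argsort(gammas)[::-1][:top_k]
```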
And step S3: and extracting dynamic characteristics by using the forward motion salient frame and the current time frame. The extraction process comprises the following steps:
1) Extracting a motion foreground region of the current frame;
2) Screening the extracted foreground area based on the fire image characteristics;
3) Normalizing the screened foreground region to obtain the dynamic features.
Step S4: constructing the intelligent fire recognition model based on the dynamic feature auxiliary module. The innovations of the model are that a dynamic feature auxiliary module is added to a conventional intelligent target detection and identification model, and that an S-shaped PANet structure is further designed.
The dynamic feature auxiliary module works as follows:
1) Performing spatial attention feature analysis on the extracted dynamic features and on the current frame respectively, where the spatial attention consists of operators such as Global Average Pooling (GAP), Fully Connected (FC) layers and a ReLU activation function;
2) Mapping the attention features to the weighting weights of the convolution operator group and updating the parameters of the dynamic feature auxiliary module using fully connected layers and Sigmoid activation functions. The spatial attention feature of the dynamic feature is denoted Z and that of the current frame is denoted X; the mapping process is as follows:
a) Input the spatial attention feature Z of the dynamic feature into a mapping function composed of FC and Sigmoid to obtain a mapping weight vector, denoted c_Z;
b) Input the spatial attention feature X of the current frame into another mapping function composed of FC and Sigmoid to obtain a mapping weight vector, denoted c_X;
c) Combine the mapping weight vectors c_Z and c_X by element-wise (dot) multiplication to obtain the weighting weight vector c = c_Z ⊙ c_X;
d) Combine the weighting weights with the convolution parameters of the convolution operator group through a dot product operation and update the convolution parameters as W_i = c_i · W_i, where c_i is a single element of the weighting weight vector c.
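As a rough, non-authoritative sketch of this conditional-convolution idea, the module below re-weights a small bank of convolution kernels with the product of the two Sigmoid-mapped attention vectors; the layer sizes, the number of kernels in the operator group and the choice to sum the re-weighted kernels are assumptions made for illustration (FIG. 3 defines the actual module).

```python
# Hedged PyTorch sketch of the dynamic feature auxiliary module: spatial attention
# (GAP -> FC -> ReLU) on the dynamic feature and on the current frame, FC + Sigmoid
# mappings to the weight vectors c_Z and c_X, and kernels re-weighted by c = c_Z * c_X.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SpatialAttention(nn.Module):
    def __init__(self, channels, hidden):
        super().__init__()
        self.fc = nn.Linear(channels, hidden)

    def forward(self, x):                                  # x: (B, C, H, W)
        z = F.adaptive_avg_pool2d(x, 1).flatten(1)         # global average pooling
        return F.relu(self.fc(z))                          # (B, hidden)

class DynamicFeatureAssistedConv(nn.Module):
    def __init__(self, in_ch, out_ch, n_kernels=4, hidden=64, k=3):
        super().__init__()
        self.att_dyn = SpatialAttention(in_ch, hidden)
        self.att_cur = SpatialAttention(in_ch, hidden)
        self.map_dyn = nn.Linear(hidden, n_kernels)        # FC (+ Sigmoid below) -> c_Z
        self.map_cur = nn.Linear(hidden, n_kernels)        # FC (+ Sigmoid below) -> c_X
        self.kernels = nn.Parameter(torch.randn(n_kernels, out_ch, in_ch, k, k) * 0.02)

    def forward(self, cur, dyn):                           # cur, dyn: (B, in_ch, H, W)
        c_z = torch.sigmoid(self.map_dyn(self.att_dyn(dyn)))
        c_x = torch.sigmoid(self.map_cur(self.att_cur(cur)))
        c = c_z * c_x                                       # element-wise combination
        out = []
        for b in range(cur.size(0)):                        # W_i' = c_i * W_i for each sample
            w = (c[b].view(-1, 1, 1, 1, 1) * self.kernels).sum(dim=0)
            out.append(F.conv2d(cur[b:b + 1], w, padding=self.kernels.shape[-1] // 2))
        return torch.cat(out, dim=0)
```

In this reading the dynamic feature only modulates the kernel weights, while the current frame carries the image content being convolved.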
The difference between the proposed S-shaped PANet structure and the original one is that the U-shaped feature transfer loop of the original PANet structure is expanded into an S-shaped feature transfer loop, which deepens the connection between the backbone of the intelligent detection model and the detection head and facilitates multi-scale feature learning and extraction. The specific module composition and connections of the S-shaped PANet structure are shown in FIG. 4.
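Purely as a sketch of the S-shaped idea (the actual composition is given by FIG. 4), the neck below runs a top-down pass, a bottom-up pass and one further top-down pass over three feature scales; the channel widths and the simple 3x3 fusion convolutions are assumptions of the sketch.

```python
# Hedged sketch: a PANet-style neck whose U-shaped (top-down + bottom-up) feature
# transfer loop is extended with one more pass, so features traverse an S-shaped path.
import torch.nn as nn
import torch.nn.functional as F

class SPANetNeck(nn.Module):
    def __init__(self, in_channels=(256, 512, 1024), width=256):
        super().__init__()
        self.reduce = nn.ModuleList([nn.Conv2d(c, width, 1) for c in in_channels])
        self.fuse = nn.ModuleList([nn.Conv2d(width, width, 3, padding=1) for _ in range(6)])

    def _top_down(self, feats, offset):
        out = [feats[-1]]
        for i in range(len(feats) - 2, -1, -1):
            up = F.interpolate(out[0], size=feats[i].shape[-2:], mode="nearest")
            out.insert(0, self.fuse[offset + i](feats[i] + up))
        return out

    def _bottom_up(self, feats, offset):
        out = [feats[0]]
        for i in range(1, len(feats)):
            down = F.adaptive_max_pool2d(out[-1], feats[i].shape[-2:])
            out.append(self.fuse[offset + i - 1](feats[i] + down))
        return out

    def forward(self, p3, p4, p5):
        feats = [r(f) for r, f in zip(self.reduce, (p3, p4, p5))]
        feats = self._top_down(feats, 0)     # first arm of the loop
        feats = self._bottom_up(feats, 2)    # second arm (closes the original U)
        feats = self._top_down(feats, 4)     # extra arm that turns the U into an S
        return feats                         # multi-scale features for the detection head
```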
Step S5: Training the intelligent fire recognition model based on the dynamic feature auxiliary module with training data and testing the trained model with test data. During training and testing, the dynamic features are input into the dynamic feature auxiliary module and the current frame is input into the input end of the intelligent model. The output of the intelligent fire recognition model is filtered by non-maximum suppression post-processing to obtain the final fire detection result.
In other implementation manners of the technical scheme of the embodiment of the invention, the method comprises the following steps:
firstly, fire video images are acquired by a fixed visible light camera mounted on a support. After the visible-light fire video is obtained, frames are extracted from the video and annotated, and the positions where flame exists in the image are marked with rectangular boxes for training the intelligent detection and recognition model.
Forward motion salient frames at historical moments are extracted by the optical flow method and direction statistics. In this implementation, the Lucas-Kanade optical flow method is adopted: for the original image, the optical flow equations of the pixels in a surrounding neighbourhood are solved by least squares to obtain the optical flow. Specifically, a pixel in the previous frame image I is denoted u = [u_x, u_y]^T, and the pixel v = [u_x + d_x, u_y + d_y]^T in the next frame that matches it (i.e., whose grey-level difference is minimal) is found; then d = [d_x, d_y]^T is the optical flow of image I at point u.
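A minimal sketch of such sampled Lucas-Kanade flow extraction with OpenCV might look as follows; the regular sampling grid and the tracker window parameters are illustrative choices, not values taken from the embodiment.

```python
# Sketch: sparse Lucas-Kanade optical flow evaluated at grid-sampled points.
import cv2
import numpy as np

def sampled_lk_flow(prev_gray, next_gray, stride=16):
    """Return a (K, 2) array of flow vectors d = [dx, dy]^T at grid-sampled points."""
    h, w = prev_gray.shape
    ys, xs = np.mgrid[stride // 2:h:stride, stride // 2:w:stride]
    pts = np.stack([xs.ravel(), ys.ravel()], axis=1).astype(np.float32).reshape(-1, 1, 2)
    nxt, status, _err = cv2.calcOpticalFlowPyrLK(prev_gray, next_gray, pts, None,
                                                 winSize=(21, 21), maxLevel=3)
    ok = status.ravel() == 1
    return nxt[ok, 0, :] - pts[ok, 0, :]        # d = v - u for successfully tracked points
```

These per-frame flow vectors are what the direction clustering and γ statistics described next operate on.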
The optical flow directions of the obtained images are clustered by the k-means clustering method, where the number of clusters is N = 8, the number of cluster categories belonging to forward motion is M = 5, and the number of non-forward-motion categories is N − M = 3. The forward-motion optical flow proportion γ is counted as
γ = ( Σ_{i=1}^{M} k_i ) / ( Σ_{j=1}^{N} k_j )
and the image frames with a larger forward-motion optical flow proportion γ are taken as forward motion salient frames.
Dynamic feature extraction is performed using the forward motion salient frame and the current time frame; in this embodiment an improved ViBe algorithm based on RGB channel screening is adopted. That is: first, the motion foreground region of the current frame is extracted by computing the Euclidean distance from the pixel value V(x) of pixel x to each pixel in its background sample set and counting the sample points whose distance is less than or equal to a threshold R as similar sample points; if the number of similar sample points is greater than the minimum number of sample points, point x is judged to be a background point, otherwise a foreground point. Second, foreground points whose colours are similar to flame and smoke are screened using the values of the image on the three RGB colour channels, and the other foreground points are filtered out. Finally, the screened foreground region is normalised to obtain the dynamic feature map.
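A rough sketch of the foreground classification and RGB screening of this step is given below; the per-pixel background sample set is assumed to have been built already (the full ViBe model initialisation and update are omitted), the frame is assumed to be in RGB channel order, and the flame/smoke colour thresholds are placeholder values rather than values from the embodiment.

```python
# Sketch: ViBe-style foreground test, RGB colour screening, and normalisation of the
# screened foreground region into a dynamic feature map.
import numpy as np

def vibe_foreground(frame, samples, R=20, min_matches=2):
    """frame: (H, W, 3) uint8; samples: (S, H, W, 3) per-pixel background sample set."""
    dist = np.linalg.norm(samples.astype(np.int16) - frame.astype(np.int16), axis=-1)
    matches = (dist <= R).sum(axis=0)                 # similar sample points per pixel
    return matches < min_matches                      # True where pixel x is foreground

def screen_fire_colours(frame, fg_mask):
    """Keep only foreground pixels whose RGB values resemble flame or smoke (placeholder rules)."""
    r, g, b = (frame[..., i].astype(int) for i in range(3))
    flame_like = (r > 150) & (r >= g) & (g > b)                       # warm, red-dominant
    smoke_like = (abs(r - g) < 20) & (abs(g - b) < 20) & (r > 80)     # greyish
    return fg_mask & (flame_like | smoke_like)

def dynamic_feature_map(frame, fg_mask):
    """Normalise the screened foreground region to [0, 1] to form the dynamic feature."""
    feat = np.zeros(frame.shape[:2], dtype=np.float32)
    region = screen_fire_colours(frame, fg_mask)
    feat[region] = frame[region].mean(axis=-1) / 255.0
    return feat
```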
The intelligent fire detection and identification model based on the dynamic feature auxiliary module is then constructed. This embodiment comprises the following steps: first, the overall framework of the intelligent fire target detection and identification model is built; the model adopts an end-to-end target detection and identification network structure mainly composed of a backbone network, a connection Neck and a detection head. The backbone network adopts a Darknet53 backbone embedded with conditional convolution and specifically comprises conditional convolution operators, residual block structures, an SPP structure and the like; the connections among these structures and the embedding positions of the conditional convolution are shown in FIG. 4. The connection Neck adopts the improved S-shaped PANet structure, which differs from the original PANet structure in that the original U-shaped feature transfer loop is expanded into an S-shaped loop.
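As a high-level sketch only, the pieces named above could be wired together as below; the Darknet53 backbone with embedded conditional convolutions and the S-shaped neck are assumed to be available (for example as variants of the sketches given earlier), and the per-scale channel width and anchor count are illustrative.

```python
# Skeleton: backbone (with conditional convolution fed by the dynamic feature),
# S-shaped PANet neck, and multi-scale anchor-based detection heads.
import torch.nn as nn

class FireDetector(nn.Module):
    def __init__(self, backbone, neck, num_classes=2, anchors_per_scale=3, width=256):
        super().__init__()
        self.backbone = backbone                          # e.g. Darknet53 with conditional convs + SPP
        self.neck = neck                                  # e.g. the SPANetNeck sketched earlier
        out_ch = anchors_per_scale * (5 + num_classes)    # 4 box coords + objectness + class scores
        self.heads = nn.ModuleList([nn.Conv2d(width, out_ch, 1) for _ in range(3)])

    def forward(self, current_frame, dynamic_feature):
        # The dynamic feature enters through the auxiliary stream that modulates the
        # backbone's conditional convolutions; the current frame is the detection input.
        p3, p4, p5 = self.backbone(current_frame, dynamic_feature)
        feats = self.neck(p3, p4, p5)
        return [head(f) for head, f in zip(self.heads, feats)]
```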
The annotated video image frames are input into the intelligent fire recognition model based on the dynamic feature auxiliary module for training. During training, the commonly used training parameter settings of YOLOv5 (You Only Look Once v5) are adopted, combined with non-maximum suppression as post-processing. The trained model is saved, and a test video is input to test it, obtaining the final fire detection result.
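The non-maximum suppression post-processing could be sketched, for example, with torchvision's standard operator; the score and IoU thresholds below are illustrative values, not parameters stated in the embodiment.

```python
# Sketch: filter raw detections with a score threshold followed by non-maximum suppression.
from torchvision.ops import nms

def filter_detections(boxes, scores, score_thr=0.25, iou_thr=0.45):
    """boxes: (N, 4) tensor in xyxy format; scores: (N,) tensor of confidences."""
    keep = scores > score_thr
    boxes, scores = boxes[keep], scores[keep]
    idx = nms(boxes, scores, iou_thr)        # drop lower-score boxes overlapping a kept box
    return boxes[idx], scores[idx]
```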
The intelligent fire recognition model based on the dynamic feature auxiliary module provided in this embodiment makes full use of the temporal context information in the fire video, complements and corroborates the static target features in the single-frame image, suppresses fire targets that would otherwise be missed because of weak static features as well as erroneous targets whose colour and texture resemble fire, and can improve the detection and identification rate of fires.
It should be noted that the embodiments below can be combined, expanded and/or substituted in any logically consistent manner on the basis of the above detailed description, for example of the disclosed technical principles and the explicitly or implicitly disclosed technical features, within the scope of protection defined by the claims of the present invention.
Example 1
A dynamic characteristic assisted visible light fire detection and identification method comprises the following steps:
extracting, from the fire video, image frames with directional motion characteristics within a selected time range, and combining them with the image frame at the selected time to obtain dynamic features that take the temporal context information of the video during fire combustion into account; and using the dynamic features to assist fire detection and identification on the image frames.
Example 2
On the basis of the embodiment 1, the method for extracting the image frame with the directional motion characteristics in the selected time range from the video frame through the fire video and combining the image frame at the selected time comprises the following substeps:
s1, acquiring a fire video image;
s2, extracting, from the continuous video frames obtained within a time length T before the current moment, the image frames exhibiting non-downward motion by using an optical flow method and optical flow direction statistics, these image frames being called forward motion salient frames;
and S3, extracting dynamic characteristics by using the forward motion salient frame and the current time frame.
Example 3
On the basis of embodiment 2, in step S2, the process of extracting the forward motion salient frame includes the following sub-steps:
s21, extracting sampling optical flow of the video image frame obtained in the time length T before the current moment by an optical flow method;
s22, clustering the optical flow directions of each obtained frame image by using a clustering method, wherein the number of clusters is N, the number of cluster categories belonging to forward motion is M, M < N, and the number of non-forward-motion categories is N−M;
s23, counting the forward-motion optical flow proportion γ:
γ = ( Σ_{i=1}^{M} k_i ) / ( Σ_{j=1}^{N} k_j )
wherein k_i and k_j respectively represent the number of optical flow vectors belonging to the i-th forward-motion category and the j-th category; the image frames with a larger forward-motion optical flow proportion γ are taken as forward motion salient frames.
Example 4
On the basis of embodiment 2, in step S3, performing dynamic feature extraction by using the forward motion salient frame and the current time frame includes the following sub-steps:
s31, extracting a motion foreground region of the current frame;
s32, screening the extracted foreground region based on the fire image characteristics;
and S33, normalizing the screened foreground area to obtain dynamic characteristics.
Example 5
On the basis of the embodiment 1, the method for assisting fire detection and identification of the image frame by using the dynamic feature comprises the following sub-steps:
and step S4: building the overall framework of the intelligent fire target detection and identification model and adding a dynamic characteristic auxiliary flow to the model, wherein the dynamic characteristic auxiliary flow comprises the following specific steps:
1) Performing spatial attention feature analysis on the extracted dynamic features and on the current frame respectively;
2) Mapping the attention features to the weighting weights of the convolution operator group and updating the parameters using fully connected and activation functions.
Example 6
On the basis of embodiment 5, in step S4, the overall framework of the intelligent fire target detection and identification model comprises a backbone network, a connection Neck and a detection head; the backbone network adopts a Darknet53 backbone structure embedded with conditional convolution; the connection Neck adopts an improved S-shaped PANet structure, obtained by expanding the U-shaped feature transfer loop of the original PANet structure into an S-shaped feature transfer loop and used to deepen the connection between the backbone network of the intelligent fire target detection model and the detection head; the detection head adopts a detection output layer based on multi-scale anchor design.
Example 7
On the basis of embodiment 5, step 2) comprises the following sub-steps, where the spatial attention feature of the dynamic feature is denoted Z, the spatial attention feature of the current frame is denoted X, and the specific mapping process is as follows:
step a), inputting the spatial attention feature Z of the dynamic feature into a mapping function composed of FC and Sigmoid to obtain a mapping weight vector, denoted c_Z;
step b), inputting the spatial attention feature X of the current frame into another group of mapping functions composed of FC and Sigmoid to obtain a mapping weight vector, denoted c_X;
step c), combining the mapping weight vectors c_Z and c_X by element-wise (dot) multiplication to obtain the weighting weight vector c = c_Z ⊙ c_X;
and step d), combining the weighting weights with the convolution parameters of the convolution operator group through a dot product operation and updating the convolution parameters as: W_i = c_i · W_i, where c_i is a single element of the weighting weight vector c.
Example 8
On the basis of the embodiment 6, after the whole framework of the intelligent fire target detection and identification model is built, the method comprises the following steps:
step S5: training the intelligent fire target detection and recognition model based on the dynamic characteristic auxiliary flow by using training data, and testing the trained model by using test data; in the training and testing process, inputting the dynamic features into the dynamic characteristic auxiliary flow and inputting the current frame at the input end of the intelligent fire target detection and identification model; and filtering the output of the intelligent fire recognition model by non-maximum suppression post-processing to obtain a final fire detection result.
Example 9
A dynamic characteristic assisted visible light fire detection and identification device comprises a dynamic characteristic acquisition module and a dynamic characteristic auxiliary module, wherein the dynamic characteristic acquisition module is used for extracting, from the fire video, image frames with directional motion characteristics within a selected time range and combining them with the image frame at the selected time to obtain dynamic features carrying the temporal context information of the video during fire combustion;
and the dynamic characteristic auxiliary module is used for assisting the fire detection and identification of the image frame by utilizing the dynamic characteristic.
Example 10
A readable storage medium, in which a computer program is stored, which computer program is loaded by a processor and executes a method according to any of embodiments 1-8.
The units described in the embodiments of the present invention may be implemented by software, or may be implemented by hardware, and the described units may also be disposed in a processor. Wherein the names of the elements do not in some way constitute a limitation on the elements themselves.
According to an aspect of an embodiment of the present invention, there is provided a computer program product or a computer program comprising computer instructions stored in a computer readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions to cause the computer device to perform the method provided in the various alternative implementations described above.
As another aspect, an embodiment of the present invention further provides a computer-readable medium, which may be included in the electronic device described in the above embodiment; or may exist separately without being assembled into the electronic device. The computer readable medium carries one or more programs which, when executed by an electronic device, cause the electronic device to implement the method described in the above embodiments.
The parts not involved in the present invention are the same as or can be implemented using the prior art.
The above-described embodiment is only one embodiment of the present invention, and it will be apparent to those skilled in the art that various modifications and variations can be easily made based on the application and principle of the present invention disclosed in the present application, and the present invention is not limited to the method described in the above-described embodiment of the present invention, so that the above-described embodiment is only preferred, and not restrictive.
In addition to the foregoing examples, those skilled in the art, having the benefit of this disclosure, may derive other embodiments from the teachings of the foregoing disclosure or from modifications and variations utilizing knowledge or skill of the related art, which may be interchanged or substituted for features of various embodiments, and such modifications and variations may be made by those skilled in the art without departing from the spirit and scope of the present invention as set forth in the following claims.

Claims (10)

1. A dynamic characteristic assisted visible light fire detection and identification method is characterized by comprising the following steps:
extracting, from the fire video, image frames with directional motion characteristics within a selected time range, and combining them with the image frame at the selected time to obtain dynamic features that take the temporal context information of the video during fire combustion into account; and using the dynamic features to assist fire detection and identification on the image frames.
2. The dynamic feature assisted visible light fire detection and identification method according to claim 1, wherein the extracting of the image frames with directional motion features in a selected time range from the video frames by the fire video and combining the image frames at the selected time comprises the sub-steps of:
s1, acquiring a fire video image;
s2, extracting, from the continuous video frames obtained within a time length T before the current moment, the image frames exhibiting non-downward motion by using an optical flow method and optical flow direction statistics, these image frames being called forward motion salient frames;
and S3, extracting dynamic characteristics by using the forward motion salient frame and the current time frame.
3. The visible light fire detection and identification method assisted by dynamic features of claim 2, wherein in step S2, the process of extracting the forward motion salient frames includes the following sub-steps:
s21, extracting sampling optical flow of the video image frame obtained in the time length T before the current moment by an optical flow method;
s22, clustering the optical flow directions of each obtained frame image by using a clustering method, wherein the number of clusters is N, the number of cluster categories belonging to forward motion is M, M < N, and the number of non-forward-motion categories is N−M;
s23, counting the forward-motion optical flow proportion γ:
γ = ( Σ_{i=1}^{M} k_i ) / ( Σ_{j=1}^{N} k_j )
wherein k_i and k_j respectively represent the number of optical flow vectors belonging to the i-th forward-motion category and the j-th category; and taking the image frames with a larger forward-motion optical flow proportion γ as forward motion salient frames.
4. The visible light fire detection and identification method assisted by dynamic features of claim 2, wherein in step S3, the dynamic feature extraction using the forward motion salient frame and the current time frame comprises the following sub-steps:
s31, extracting a motion foreground region of the current frame;
s32, screening the extracted foreground region based on the fire image characteristics;
s33, normalizing the screened foreground area to obtain dynamic characteristics.
5. The dynamic feature assisted visible light fire detection and identification method according to claim 1, wherein the dynamic feature is used for assisting fire detection and identification of image frames, and the method comprises the following sub-steps:
and step S4: the method comprises the following steps of building an intelligent fire target detection and identification model overall framework, adding a dynamic characteristic auxiliary flow in the intelligent fire target detection and identification model, wherein the dynamic characteristic auxiliary flow comprises the following specific steps:
1) Respectively carrying out space attention feature analysis on the extracted dynamic features and the current frame;
2) The attention features are mapped to the weighting weights of the convolution operator group and the parameters are updated using fully connected and activation functions.
6. The visible light fire detection and identification method assisted by dynamic features of claim 5, wherein in step S4 the overall framework of the intelligent fire target detection and identification model comprises a backbone network, a connection Neck and a detection head; the backbone network adopts a Darknet53 backbone structure embedded with conditional convolution; the connection Neck adopts an improved S-shaped PANet structure, obtained by expanding the U-shaped feature transfer loop of the original PANet structure into an S-shaped feature transfer loop and used to deepen the connection between the backbone network of the intelligent fire target detection model and the detection head; the detection head adopts a detection output layer based on multi-scale anchor design.
7. The visible light fire detection and identification method assisted by dynamic features of claim 5, wherein in step 2), the method comprises the sub-steps of: the spatial attention feature of the dynamic feature is represented as Z, the spatial attention feature of the current frame is represented as X, and the specific mapping process is as follows:
step a), inputting the spatial attention feature Z of the dynamic feature into a mapping function composed of FC and Sigmoid to obtain a mapping weight vector, denoted c_Z;
step b), inputting the spatial attention feature X of the current frame into another group of mapping functions composed of FC and Sigmoid to obtain a mapping weight vector, denoted c_X;
step c), combining the mapping weight vectors c_Z and c_X by element-wise (dot) multiplication to obtain the weighting weight vector c = c_Z ⊙ c_X;
step d), combining the weighting weights with the convolution parameters of the convolution operator group through a dot product operation and updating the convolution parameters as: W_i = c_i · W_i, wherein c_i is a single element of the weighting weight vector c.
8. The dynamic characteristic-assisted visible light fire detection and identification method according to claim 6, wherein after an intelligent fire target detection and identification model integral framework is built, the method comprises the following steps:
step S5: training the intelligent fire target detection and recognition model based on the dynamic characteristic auxiliary flow by using training data, and testing the trained model by using test data; in the training and testing process, inputting the dynamic features into the dynamic characteristic auxiliary flow and inputting the current frame at the input end of the intelligent fire target detection and identification model; and filtering the output of the intelligent fire recognition model by non-maximum suppression post-processing to obtain a final fire detection result.
9. A dynamic feature assisted visible light fire detection and identification device, comprising:
the dynamic characteristic acquisition module is used for extracting, from the fire video, image frames with directional motion characteristics within a selected time range and combining them with the image frame at the selected moment to obtain dynamic features carrying the temporal context information of the video during fire combustion;
and the dynamic characteristic auxiliary module is used for assisting the fire detection and identification of the image frames by utilizing the dynamic characteristics.
10. A readable storage medium, in which a computer program is stored which, when being loaded by a processor, carries out the method according to any one of claims 1 to 8.
CN202211517483.3A 2022-11-30 2022-11-30 Dynamic feature-assisted visible light fire detection and identification method, device and medium Active CN115841642B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211517483.3A CN115841642B (en) 2022-11-30 2022-11-30 Dynamic feature-assisted visible light fire detection and identification method, device and medium

Publications (2)

Publication Number Publication Date
CN115841642A (en) 2023-03-24
CN115841642B (en) 2023-11-07

Family

ID=85576278

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211517483.3A Active CN115841642B (en) 2022-11-30 2022-11-30 Dynamic feature-assisted visible light fire detection and identification method, device and medium

Country Status (1)

Country Link
CN (1) CN115841642B (en)


Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101869442B1 (en) * 2017-11-22 2018-06-20 공주대학교 산학협력단 Fire detecting apparatus and the method thereof
CN108399359A (en) * 2018-01-18 2018-08-14 中山大学 Fire detection method for early warning in real time under a kind of video sequence
CN109961019A (en) * 2019-02-28 2019-07-02 华中科技大学 A kind of time-space behavior detection method
CN109978756A (en) * 2019-03-18 2019-07-05 腾讯科技(深圳)有限公司 Object detection method, system, device, storage medium and computer equipment
CN112766179A (en) * 2021-01-22 2021-05-07 郑州轻工业大学 Fire smoke detection method based on motion characteristic hybrid depth network
CN113963301A (en) * 2021-11-04 2022-01-21 西安邮电大学 Space-time feature fused video fire and smoke detection method and system

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
侯新国 et al.: "基于光流动态特征与SVM的阴燃火检测方法" [Smoldering fire detection method based on optical flow dynamic features and SVM], 《海军工程大学学报》 [Journal of Naval University of Engineering], vol. 31, no. 1, pp. 67-73 *

Also Published As

Publication number Publication date
CN115841642B (en) 2023-11-07

Similar Documents

Publication Publication Date Title
CN110059694B (en) Intelligent identification method for character data in complex scene of power industry
CN110135269B (en) Fire image detection method based on mixed color model and neural network
CN103069434B (en) For the method and system of multi-mode video case index
CN113065558A (en) Lightweight small target detection method combined with attention mechanism
CN111339883A (en) Method for identifying and detecting abnormal behaviors in transformer substation based on artificial intelligence in complex scene
CN106815576B (en) Target tracking method based on continuous space-time confidence map and semi-supervised extreme learning machine
CN107194396A (en) Method for early warning is recognized based on the specific architecture against regulations in land resources video monitoring system
CN112270331A (en) Improved billboard detection method based on YOLOV5
CN113903081A (en) Visual identification artificial intelligence alarm method and device for images of hydraulic power plant
CN111126293A (en) Flame and smoke abnormal condition detection method and system
CN108345900B (en) Pedestrian re-identification method and system based on color texture distribution characteristics
CN110348342B (en) Pipeline disease image segmentation method based on full convolution network
CN110096945B (en) Indoor monitoring video key frame real-time extraction method based on machine learning
CN102663362A (en) Moving target detection method t based on gray features
CN110059076A (en) A kind of Mishap Database semi-automation method for building up of power transmission and transformation line equipment
CN107871315B (en) Video image motion detection method and device
CN111476160A (en) Loss function optimization method, model training method, target detection method, and medium
CN107729811B (en) Night flame detection method based on scene modeling
CN108764287A (en) Object detection method and system based on deep learning and grouping convolution
CN117611988A (en) Automatic identification and monitoring method and system for newly-increased farmland management and protection attribute
CN112487926A (en) Scenic spot feeding behavior identification method based on space-time diagram convolutional network
CN115841642A (en) Dynamic characteristic assisted visible light fire detection and identification method, device and medium
CN110796008A (en) Early fire detection method based on video image
CN106127813A (en) The monitor video motion segments dividing method of view-based access control model energy sensing
Ouyang et al. An Anchor-free Detector with Channel-based Prior and Bottom-Enhancement for Underwater Object Detection

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant