CN117854215A - Fire alarm method and device based on time sequence image - Google Patents

Fire alarm method and device based on time sequence image

Info

Publication number
CN117854215A
CN117854215A
Authority
CN
China
Prior art keywords
data
fire
image
smoke
time sequence
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311712749.4A
Other languages
Chinese (zh)
Inventor
魏少华
潘晓东
李伟泽
赵学慧
罗熙
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tianyi Cloud Technology Co Ltd
Original Assignee
Tianyi Cloud Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tianyi Cloud Technology Co Ltd filed Critical Tianyi Cloud Technology Co Ltd
Priority to CN202311712749.4A priority Critical patent/CN117854215A/en
Publication of CN117854215A publication Critical patent/CN117854215A/en
Pending legal-status Critical Current

Landscapes

  • Image Analysis (AREA)

Abstract

The invention discloses a fire alarm method and device based on time sequence images, relating to the technical field of fire early warning and comprising the following steps: collecting in advance a historical data set from existing video data; training a YOLOv3-based image recognition model on the historical data set, so that the image recognition model outputs open fire and smoke image data; extracting a feature sequence, in time order, from the open fire and smoke image data output by the image recognition model; training a time sequence prediction model on the feature sequence, so that the time sequence prediction model outputs a predicted open fire and smoke feature threshold; and collecting video data from an installed monitoring device and identifying open fire and smoke image data in the video data with the image recognition model. On the basis of image detection, the method uses the temporal characteristics of fire evolution to learn the fire evolution law and predict how the fire will change, finally achieving the purpose of fire early warning.

Description

Fire alarm method and device based on time sequence image
Technical Field
The invention relates to the technical field of fire early warning, in particular to a fire alarm method and device based on time sequence images.
Background
Conventional smoke alarms rely on physical principles to trigger an alarm, for example using smoke to interfere with the movement of charged particles and thereby change a current, or using smoke particles to scatter infrared light. Such traditional sensors have several drawbacks in large factory workshops. First, the workshop area is large and the sensor installation positions are difficult to choose well, so a small-scale fire may not be detected in time. Second, normal workshop production may generate fine dust that enters the sensor and easily causes false alarms. Third, a sensor ages after being deployed for a long time, which easily leads to alarm failures and other problems, so dangerous signals cannot be responded to in time.
With the development of deep learning, detecting smoke with computer image recognition and related techniques has become the mainstream approach to fire early warning, offering higher accuracy and stability than physical devices. Image smoke detection based on deep learning can reach very high precision. However, the time a fire takes to evolve from a small area of open fire and smoke to large-scale spread is very short; relying only on static video image detection gives insufficient warning strength, and by the time a detection result triggers an alarm the fire may already have spread.
Patent CN114936718A realizes a high-precision detection method for parking-lot fires under weak supervision using small-sample training data, and patent CN109522819A improves the accuracy of smoke detection in a single image using a dark-channel image and a deep learning method. These methods detect smoke or fires that have already occurred through image recognition. The present invention goes further: on the basis of image detection and recognition it learns the fire evolution process, so that the future development of a fire can be predicted at the early stage of its evolution, achieving the goal of early warning.
In summary, traditional physical smoke alarm devices have certain defects, and deep-learning-based image smoke detection has a timeliness problem. The invention combines image recognition with a feature time series prediction algorithm to realize a smoke and open fire detection method that can promptly discover a crisis source and respond with an alarm, so as to protect personal and property safety.
Disclosure of Invention
In order to overcome the above technical problems, the invention aims to provide a fire alarm method and device based on time sequence images, so as to solve the problem in the prior art that deep-learning-based image smoke detection relies only on static video image detection and therefore provides insufficient warning strength: by the time a detection result triggers an alarm, the fire may already have spread, so early warning cannot be given in time.
The aim of the invention can be achieved by the following technical scheme:
specifically, a fire alarm method based on time sequence images is provided, which comprises the following steps:
a historical data set in the existing video data is collected in advance;
training a YOLOv3-based image recognition model on the historical data set, so that the image recognition model outputs open flame and smoke image data;
based on open fire and smoke image data output by the image recognition model, extracting a characteristic sequence of the open fire and smoke image data according to a time sequence;
training a time sequence prediction model based on the feature sequence, so that the time sequence prediction model outputs a predicted open fire and smoke feature threshold;
collecting video data from an installed monitoring device, and identifying open flame and smoke image data in the video data based on an image identification model;
outputting predicted open fire and smoke feature thresholds based on the open fire and smoke image data using a time series prediction model;
and selecting whether to output an alarm or not based on the open fire and smoke characteristic threshold and a preset threshold.
As a further scheme of the invention: the historical data set comprises N groups of training data, N is a positive integer, and each group of training data comprises characteristic data and label data;
the characteristic data are video data in the existing fire video;
the tag data is open flame and smoke image data in the video data when each set of training data is collected.
As a further scheme of the invention: the training mode of the image recognition model is as follows:
and carrying out grid division on the characteristic data in each group of training data, finding out grids where the frame center points corresponding to the targets are located, and collecting position data, confidence coefficient data and classification data by each grid.
As a further scheme of the invention: the confidence data comprises identification scores and covered IOU products, and frame errors and IOU errors are calculated through a loss function;
Loss=a×lossobj+b×lossrect+c×lossclc;
where lossobj represents the loss of the position data, lossrect represents the loss of the confidence data, lossclc represents the loss of the classification data, and a, b and c are weight coefficients.
As a further scheme of the invention: the calculation mode of the lossobj is as follows:
f_L1(x_1, x_2) = |x_1 - x_2|;
Lossobj_L1 = f_L1(x_p, x_l) + f_L1(y_p, y_l) + f_L1(w_p, w_l) + f_L1(h_p, h_l);
wherein (x_p, y_p) represents the true coordinates of the grid, w_p represents the true width of the grid, h_p represents the true height of the grid, (x_l, y_l) represents the coordinates predicted by the image recognition model, w_l represents the width predicted by the image recognition model, and h_l represents the height predicted by the image recognition model.
As a further scheme of the invention: the losuret is calculated by adopting a BCE loss function:
Lossrect = -{m·log[p(score)] + (1 - m)·log[1 - p(score)]};
wherein m is a binary label taking the value 0 or 1, score is the confidence of the grid predicted by the image recognition model, and p(score) is the probability the image recognition model outputs for that grid confidence.
As a further scheme of the invention: the convolution neural network model adopted by the image recognition model is subjected to 3-layer convolution, convolution kernels are 7×7, 5×5 and 3×3 in sequence, 4-medium pooling combination is adopted after the 3-layer convolution, and finally target region characteristics are output through two full-connection layers.
As a further scheme of the invention: the calculation mode of the characteristic value label of the target area characteristic is as follows:
wherein d and e represent weight coefficients, s_t represents the target region area, s represents the original image area, and score represents the target region confidence.
As a further scheme of the invention: the time sequence prediction model evaluates the predicted value according to the actual value of the subsequence, and corrects the parameter according to the evaluation result:
wherein ŷ_t represents the exponentially weighted prediction value at time t, y_t represents the actual value at time t, and α is the smoothing constant.
The fire alarm device based on time sequence images is realized based on the above fire alarm method and comprises:
the monitoring device is arranged at the front end and used for acquiring video data;
the control module is loaded on the monitoring device and processes video data acquired by the monitoring device by using the image recognition model and the time sequence prediction model;
the alarm is arranged at the rear end, and the control module controls the alarm according to the characteristic threshold value of the open fire and the smoke and the preset threshold value.
The invention has the beneficial effects that:
1. Compared with a traditional physical sensor alarm device, the method is more stable and more accurate by using a computer and video monitoring; and compared with current mainstream image smoke or open fire detection methods, the method uses the temporal characteristics of fire evolution on the basis of image detection, learns the fire evolution law, predicts how the fire will change, and finally achieves the purpose of fire early warning.
2. The invention realizes a smoke and open flame detection and alarm method and device by combining models from different fields, giving full play to the advantages of each model in its own field: the image recognition model accurately identifies the smoke and open flame positions in the monitoring image, the CNN model extracts the features of the target region, and the time series model predicts future development from the historical features. After fusion they can solve this practical, complex problem, and compared with a traditional alarm device the apparatus can raise an alarm before a serious fire occurs, reducing losses.
3. With an image detection method based on deep learning, the feature extraction capability of the convolutional neural network gives a clear effect in detecting smoke and open fire targets in images sampled from video monitoring; combined with a time series prediction method, it accurately predicts the fire evolution features over a future period, improving the predictive capability of the smoke alarm.
Drawings
The invention is further described below with reference to the accompanying drawings.
FIG. 1 is a general flow chart of the present invention;
FIG. 2 is a flow chart of a model process of the present invention;
fig. 3 is a diagram of the image recognition YOLOv3 model structure of the present invention.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
Example 1:
as shown in fig. 1-3, the invention discloses a fire alarm method based on time sequence images, which comprises the following three steps:
s1: yolov 3-based image recognition model training
1.1, sampling data samples: a number of images containing open fire and smoke targets are extracted from past fire videos, and the smoke and open fire regions are annotated according to the target detection labeling standard to build a basic data set. The data set is then divided: most of the data is used as training data to train the model parameters, and a small portion of the images is used to verify the model training results. After the data set is divided, the pre-training data is labeled uniformly, and the labeling results are used as the correction index for parameter training.
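As a minimal illustration of the data set division described above, the sketch below splits the annotated fire/smoke images into a large training portion and a small verification portion; the 80/20 ratio and the function name are assumptions for illustration, not values taken from the patent.

```python
import random

def split_dataset(samples, train_ratio=0.8, seed=42):
    # samples: list of (image_path, annotation) pairs built from past fire videos
    samples = list(samples)
    random.Random(seed).shuffle(samples)    # fixed seed for reproducibility
    cut = int(len(samples) * train_ratio)
    return samples[:cut], samples[cut:]     # (training set, verification set)
```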
1.2, data preprocessing: data preprocessing consists of two parts, data normalization and format conversion, and data enhancement. First, the collected data is converted in format, for example files extracted from video are converted to PNG format and the picture resolution is converted to a uniform size. Data enhancement is then applied: to address the multi-scale detection problem and increase the robustness of the model, a random color dithering strategy or added noise perturbation is used during training to cover the varying angles of smoke and open flame in the pictures and the on-site environmental factors.
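A minimal preprocessing sketch in the spirit of step 1.2 is given below, assuming PNG frames and a uniform 416×416 resolution (both assumptions); the color dithering and noise parameters are illustrative only.

```python
import numpy as np
from PIL import Image

def preprocess_frame(frame_path, out_size=(416, 416), augment=True):
    # format conversion and uniform resolution
    img = Image.open(frame_path).convert("RGB").resize(out_size)
    arr = np.asarray(img, dtype=np.float32) / 255.0
    if augment:
        # random color dithering: scale each channel by a small random factor
        arr = arr * np.random.uniform(0.8, 1.2, size=(1, 1, 3))
        # noise disturbance: additive Gaussian noise
        arr = arr + np.random.normal(0.0, 0.02, size=arr.shape)
        arr = np.clip(arr, 0.0, 1.0)
    return arr
```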
1.3, selecting an image recognition pre-training model: the invention selects the common convolutional neural network model YOLOv3 as the basic image recognition model. The model is widely applied in image target detection and is characterized by a single-stage detection mode: it abandons the intermediate region-proposal stage of two-stage models and obtains the detection result directly from the image, so its speed advantage is obvious and its accuracy is also improved to a certain extent. YOLOv3 adopts Darknet-53 as the backbone network, which mainly uses 3×3 and 1×1 convolution kernels and the idea of residual connections; a batch normalization layer and a ReLU layer are added after each convolution layer, which can prevent overfitting and avoid the vanishing-gradient and exploding-gradient problems during training. The overall network model of YOLOv3 is shown in figure 3.
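For concreteness, a sketch of one Darknet-53-style residual block as described above (1×1 and 3×3 convolutions, each followed by batch normalization and an activation, plus a skip connection) is shown below; the channel sizes are illustrative and this is not the patent's exact network.

```python
import torch.nn as nn

class DarknetResidual(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(channels, channels // 2, kernel_size=1, bias=False),
            nn.BatchNorm2d(channels // 2),
            nn.ReLU(),
            nn.Conv2d(channels // 2, channels, kernel_size=3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
            nn.ReLU(),
        )

    def forward(self, x):
        # the skip connection mitigates vanishing/exploding gradients in deep stacks
        return x + self.block(x)
```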
In the invention, the model parameters are trained with the constructed data set, and the model accuracy is evaluated on the verification data. The model can then rapidly detect sampled images from the video and produce detection results for the target images, including the region positions of open fire and smoke in the image.
1.4, training the image recognition model: the model is trained using the previously sampled and divided data set; because the images sampled from the monitoring device have a uniform size, no additional preprocessing is required. First, the image is divided into grids and the grid containing the center point of the bounding box corresponding to each target is found; each grid collects position data, confidence data and classification data. The confidence of each grid is scored, the target detection results are sorted, and redundant boxes are removed by non-maximum suppression; the confidence data is the product of the recognition score and the IOU of the covering box. Finally, the bounding-box position error and the IOU error are calculated with a loss function. The model loss function is composed of three losses that represent, in order, the position data loss, the confidence data loss and the classification data loss; the coefficients a, b and c are the weight combination, and the confidence term is usually given the largest weight.
Loss=a×lossobj+b×lossrect+c×lossclc;
The loss of position data adopts an L1 loss function, and the calculation mode is as follows
f_L1(x_1, x_2) = |x_1 - x_2|;
Lossobj_L1 = f_L1(x_p, x_l) + f_L1(y_p, y_l) + f_L1(w_p, w_l) + f_L1(h_p, h_l);
wherein (x_p, y_p) represents the true coordinates of the grid, w_p represents the true width of the grid, h_p represents the true height of the grid, (x_l, y_l) represents the coordinates predicted by the image recognition model, w_l represents the width predicted by the image recognition model, and h_l represents the height predicted by the image recognition model;
the confidence data loss function adopts BCE loss, and the calculation mode is as follows
Lossrect = -{m·log[p(score)] + (1 - m)·log[1 - p(score)]};
Wherein m is a binary label taking the value 0 or 1, score is the confidence of the grid predicted by the image recognition model, and p(score) is the probability the image recognition model outputs for that grid confidence;
The difference between the model output and the annotation result is calculated through the loss function, the model parameters are gradually corrected with the classical back-propagation algorithm, and a relatively good training effect is finally achieved.
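The combined loss described in this step can be sketched as follows; this is a hedged illustration assuming plain PyTorch tensors, with the weights a, b, c and the classification term chosen only for illustration (the confidence weight b is set largest, per the text).

```python
import torch.nn.functional as F

def l1_box_loss(pred_box, true_box):
    # L1 error over (x, y, w, h), matching f_L1(x_1, x_2) = |x_1 - x_2|
    return (pred_box - true_box).abs().sum(dim=-1).mean()

def detection_loss(pred_box, true_box, conf_prob, conf_label,
                   cls_logits, cls_label, a=1.0, b=2.0, c=1.0):
    loss_obj = l1_box_loss(pred_box, true_box)                   # position data loss
    loss_rect = F.binary_cross_entropy(conf_prob, conf_label)    # confidence loss, BCE over p(score)
    loss_clc = F.cross_entropy(cls_logits, cls_label)            # classification data loss
    return a * loss_obj + b * loss_rect + c * loss_clc           # b largest: confidence weighted most
```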
S2: feature extraction model training
2.1, model construction: step 1.3 completed the training of the image target detection model, and the positions of the open fire and smoke features in the image can be obtained through the image recognition model, so how to extract the target features becomes the key problem. The invention builds a convolutional neural network model fused with a pooling model to extract features of the image target region. A traditional convolutional neural network requires image inputs of a fixed size for training, but the position and size of the detection targets obtained from the YOLOv3 model are not fixed in the image. The invention therefore blends a pyramid pooling operation into the CNN: a special pooling layer replaces the traditional average pooling in the CNN, so the feature size is not restricted and a fixed-size output feature map is generated; the final feature output is then computed through two fully connected layers.
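A sketch of such a CNN with a pyramid-pooling layer is shown below; the layer widths, the four pooling scales (1, 2, 3, 6) and the output dimension are assumptions used only to show how a variable-size target region is mapped to a fixed-length feature.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SPPFeatureExtractor(nn.Module):
    def __init__(self, out_dim=128, pool_sizes=(1, 2, 3, 6)):
        super().__init__()
        self.conv = nn.Sequential(                       # 7x7 -> 5x5 -> 3x3 convolutions
            nn.Conv2d(3, 32, kernel_size=7, padding=3), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv2d(64, 128, kernel_size=3, padding=1), nn.ReLU(),
        )
        self.pool_sizes = pool_sizes
        pooled = sum(s * s for s in pool_sizes) * 128    # fixed length regardless of input size
        self.fc = nn.Sequential(                          # two fully connected layers
            nn.Linear(pooled, 256), nn.ReLU(),
            nn.Linear(256, out_dim),
        )

    def forward(self, x):
        # x: (N, 3, H, W) target regions; H and W may vary between samples
        x = self.conv(x)
        feats = [F.adaptive_max_pool2d(x, s).flatten(1) for s in self.pool_sizes]
        return self.fc(torch.cat(feats, dim=1))
```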
2.2, feature extraction training: S1 completed the training of the image target recognition model; the training process of the feature extraction fusion model is introduced here. The result image output by the target detection model may contain several smoke and open fire targets; to focus on the most dangerous target, the method selects the target with confidence greater than 0.8 and the largest area for region sampling. The sampled original image region is input into the feature extraction model for feature computation, and the model parameters are updated by computing the error between the model output and the feature reference value. Because the target region features have no corresponding ground-truth labels, the invention combines the target region and the confidence with corresponding weights to compute the feature value label of the target region, in the following way, where a and b represent weights, S_t represents the area of the target region, S represents the area of the original image, and score represents the confidence of the target region output by the target detection model.
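The feature-value label formula itself is not reproduced in this text, so the snippet below is only an assumed reconstruction from the surrounding description: a weighted combination of the area ratio and the detection confidence.

```python
def feature_value_label(target_area, image_area, score, a=0.5, b=0.5):
    # assumed form: weight a on the area ratio S_t / S, weight b on the confidence score
    return a * (target_area / image_area) + b * score
```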
S3: training a time sequence model based on a sliding window;
3.1, the characteristic of a time series model is that it can predict the feature changes over a future period according to the historical sequence features of a past period. The invention makes full use of the feature sequences obtained during the occurrence and development of a fire to train the model, so that the model can well predict the development of open fire and smoke over a certain future period.
An image data set arranged in time order is built through video monitoring sampling and fed sequentially into the models trained in S1 and S2, finally yielding a target feature sequence with a time attribute. The feature sequence is first segmented into subsequences, and on the basis of the subsequences a sliding-window time series model predicts the future development feature values of open fire and smoke. The invention uses an exponentially weighted sliding window for model training: the model input is the first n feature values and the output is the predicted value of the (n+1)-th feature. The predicted value is evaluated against the actual value of the subsequence and corrected according to the evaluation result, computed as follows, where ŷ_t represents the exponentially weighted prediction value at time t, y_t represents the actual value at time t, and α is the smoothing constant. The exponential weighting gives larger weight to features closer to time t, so it can adapt to rapidly changing fire conditions.
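A hedged sketch of the exponentially weighted sliding-window prediction is given below; the weight normalization and the concrete value of α are assumptions, since the exact recursion is not reproduced in this text.

```python
import numpy as np

def exp_weighted_predict(window, alpha=0.6):
    # window: the first n feature values; features closer to time t get larger weight
    window = np.asarray(window, dtype=float)
    n = len(window)
    weights = alpha * (1.0 - alpha) ** np.arange(n - 1, -1, -1)   # newest weighted most
    return float(np.dot(weights, window) / weights.sum())

def sliding_window_errors(sequence, window_size, alpha=0.6):
    # evaluate each prediction against the actual (n+1)-th value of the subsequence;
    # the resulting errors are the signal used to correct the model parameters
    errors = []
    for i in range(window_size, len(sequence)):
        pred = exp_weighted_predict(sequence[i - window_size:i], alpha)
        errors.append(abs(sequence[i] - pred))
    return errors
```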
The overall flow of the model process of the present invention is shown in fig. 2.
Example 2:
the invention discloses a fire alarm device based on time sequence images, realized based on the fire alarm method of embodiment 1, and comprising:
the monitoring device is arranged at the front end and used for acquiring video data;
the control module is loaded on the monitoring device and processes video data acquired by the monitoring device by using the image recognition model and the time sequence prediction model;
the alarm is arranged at the rear end, and the control module controls the alarm according to the characteristic threshold value of the open fire and the smoke and a preset threshold value;
it should be noted that the front end where the monitoring device is arranged may be a workshop, corridor or room. The monitoring device is a high-definition camera that collects on-site high-definition images. The control module recognizes the high-definition images through the image recognition model, selects the open fire and smoke image data from them, and extracts the feature sequence of the open fire and smoke image data according to the time sequence. The control module then uses the time sequence prediction model to predict the open fire and smoke feature threshold from the feature sequence. Because an open fire and smoke feature threshold is preset inside the control module, the predicted open fire and smoke feature threshold is compared with the preset threshold: if the predicted threshold is greater than or equal to the preset threshold, the control module starts the alarm to indicate a fire alarm at the front end; if the predicted threshold is smaller than the preset threshold, the on-site open fire and smoke image data will not develop into a fire, and the control module does not start the alarm.
Example 3:
as shown in fig. 1, in a non-emergency situation an image is first automatically captured from the high-definition monitoring video every 5 seconds and input into the trained image recognition model; if the confidence and the target area do not exceed the thresholds, the conventional sampling detection process continues;
it should be noted that the confidence and target area not exceeding the thresholds means that the predicted open fire and smoke feature threshold is smaller than the preset open fire and smoke feature threshold, indicating that the on-site open fire and smoke image data will not develop into a fire. The control module does not start the alarm, the high-definition monitoring video continues to capture image data every 5 seconds and transmit it to the control module, and the control module recognizes the transmitted image data through the image recognition model.
Further, when the area of the captured open fire and smoke target region exceeds a certain threshold, the smoke feature extraction model is triggered, the sampling interval is automatically reduced, and 5 frames are captured within one second and input into the model; the target detection result of each image is acquired, and the original image region corresponding to the open fire target is extracted from the detection result and stored. At this point the pre-trained feature extraction model is activated;
when the area of the captured open fire and smoke target region exceeds a certain threshold, that is, the predicted open fire and smoke feature threshold is greater than or equal to the preset open fire and smoke feature threshold, the control module controls the monitoring device to automatically reduce the sampling interval and collect image data once per second.
Further, after the target area feature extraction model is activated, a continuous target area within a period of time is input into the model to obtain a feature sequence of smoke or open fire, each image with an ordered time dimension corresponds to a feature, and the feature sequence is ordered in the time dimension.
Further, the extracted characteristic sequence is input into a time sequence prediction model to obtain smoke and open flame characteristic predictions in a certain time in the future, and an alarm is triggered when the characteristic prediction value exceeds a threshold value.
Further, when the image recognition model detects an open fire and smoke area, the subsequent series of algorithm operations is triggered; when the time series model predicts that the future feature change will exceed a certain threshold, a serious alarm is generated and the relevant responsible personnel are notified in time, so the method can respond promptly and keep losses to a minimum.
Finally, once the models following target detection are activated, the subsequent steps are repeated continuously: the image sampling and the feature sequence are continuously updated and the future smoke and open fire feature changes are continuously predicted, achieving the purpose of timely control before a fire occurs.
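Pulling the runtime behaviour of this embodiment together, the loop below is a hedged sketch: the camera, detector, feature extractor and predictor interfaces, the 0.8 confidence cut-off carried over from step 2.2, and the alarm stub are all assumptions rather than the patent's exact implementation.

```python
import time

def raise_alarm():
    # stub: in practice this would drive the rear-end alarm and notify personnel
    print("FIRE ALARM: predicted open fire/smoke features exceed the preset threshold")

def alarm_loop(camera, detector, extractor, predictor,
               area_threshold, feature_threshold, window_size=10):
    features = []
    interval = 5.0                                    # normal state: one frame every 5 s
    while True:
        frame = camera.grab()                         # assumed camera interface
        detections = detector(frame)                  # [(region, confidence), ...]
        candidates = [d for d in detections
                      if d[1] > 0.8 and d[0].area > area_threshold]
        if candidates:
            region, _ = max(candidates, key=lambda d: d[0].area)
            features.append(extractor(region.crop(frame)))   # assumed region interface
            interval = 0.2                            # alert state: 5 frames per second
            if len(features) >= window_size:
                predicted = predictor(features[-window_size:])
                if predicted >= feature_threshold:
                    raise_alarm()
        else:
            interval = 5.0                            # fall back to routine sampling
            features.clear()
        time.sleep(interval)
```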
The foregoing describes one embodiment of the present invention in detail, but the description is only a preferred embodiment of the present invention and should not be construed as limiting the scope of the invention. All equivalent changes and modifications within the scope of the present invention are intended to be covered by the present invention.

Claims (10)

1. The fire alarm method based on the time sequence image is characterized by comprising the following steps of:
a historical data set in the existing video data is collected in advance;
training a YOLOv3-based image recognition model on the historical data set, so that the image recognition model outputs open flame and smoke image data;
based on open fire and smoke image data output by the image recognition model, extracting a characteristic sequence of the open fire and smoke image data according to a time sequence;
training a time sequence prediction model based on the feature sequence, so that the time sequence prediction model outputs a predicted open fire and smoke feature threshold;
collecting video data from an installed monitoring device, and identifying open flame and smoke image data in the video data based on an image identification model;
outputting predicted open fire and smoke feature thresholds based on the open fire and smoke image data using a time series prediction model;
and selecting whether to output an alarm or not based on the open fire and smoke characteristic threshold and a preset threshold.
2. The fire alarm method based on time sequence images according to claim 1, wherein the historical data set comprises N groups of training data, N is a positive integer, and each group of training data comprises characteristic data and tag data;
the characteristic data are video data in the existing fire video;
the tag data is open flame and smoke image data in the video data when each set of training data is collected.
3. The fire alarm method based on time sequence images according to claim 2, wherein the training mode of the image recognition model is as follows:
and carrying out grid division on the characteristic data in each group of training data, finding out grids where the frame center points corresponding to the targets are located, and collecting position data, confidence coefficient data and classification data by each grid.
4. A fire alarm method based on time series images according to claim 3, wherein the confidence data comprises the product of the recognition score and the IOU of the covering box, and the bounding-box error and IOU error are calculated by a loss function;
Loss=a×lossobj+b×lossrect+c×lossclc;
where lossobj represents the loss of the position data, lossrect represents the loss of the confidence data, lossclc represents the loss of the classification data, and a, b and c are weight coefficients.
5. The fire alarm method based on time sequence image as claimed in claim 4, wherein the calculation mode of the lossobj is:
f_L1(x_1, x_2) = |x_1 - x_2|;
Lossobj_L1 = f_L1(x_p, x_l) + f_L1(y_p, y_l) + f_L1(w_p, w_l) + f_L1(h_p, h_l);
wherein (x_p, y_p) represents the true coordinates of the grid, w_p represents the true width of the grid, h_p represents the true height of the grid, (x_l, y_l) represents the coordinates predicted by the image recognition model, w_l represents the width predicted by the image recognition model, and h_l represents the height predicted by the image recognition model.
6. The fire alarm method based on time series images as claimed in claim 4, wherein the lossrect is calculated using a BCE loss function:
Lossrect = -{m·log[p(score)] + (1 - m)·log[1 - p(score)]};
wherein m is a binary label taking the value 0 or 1, score is the confidence of the grid predicted by the image recognition model, and p(score) is the probability the image recognition model outputs for that grid confidence.
7. The fire alarm method based on time sequence images according to claim 6, wherein the convolutional neural network model adopted by the image recognition model first performs 3 layers of convolution, with kernels of 7×7, 5×5 and 3×3 in sequence, then adopts a combination of four pooling operations, and finally outputs the target region features through two fully connected layers.
8. The fire alarm method based on time sequence image as claimed in claim 7, wherein the characteristic value label of the target area characteristic is calculated by the following method:
wherein d and e represent weight coefficients, s_t represents the target region area, s represents the original image area, and score represents the target region confidence.
9. The fire alarm method based on time sequence image according to claim 1, wherein the time sequence prediction model evaluates the predicted value according to the actual value of the subsequence, and corrects the parameter according to the evaluation result:
wherein ŷ_t represents the exponentially weighted prediction value at time t, y_t represents the actual value at time t, and α is the smoothing constant.
10. A fire alarm device based on time series images, the fire alarm device being implemented based on the fire alarm method according to any one of claims 1 to 9, comprising:
the monitoring device is arranged at the front end and used for acquiring video data;
the control module is loaded on the monitoring device and processes video data acquired by the monitoring device by using the image recognition model and the time sequence prediction model;
the alarm is arranged at the rear end, and the control module controls the alarm according to the characteristic threshold value of the open fire and the smoke and the preset threshold value.
CN202311712749.4A 2023-12-13 2023-12-13 Fire alarm method and device based on time sequence image Pending CN117854215A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311712749.4A CN117854215A (en) 2023-12-13 2023-12-13 Fire alarm method and device based on time sequence image

Publications (1)

Publication Number Publication Date
CN117854215A true CN117854215A (en) 2024-04-09

Family

ID=90537275

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311712749.4A Pending CN117854215A (en) 2023-12-13 2023-12-13 Fire alarm method and device based on time sequence image

Country Status (1)

Country Link
CN (1) CN117854215A (en)

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination