CN113052226A - Time-sequence fire identification method and system based on single-step detector - Google Patents

Time-sequence fire identification method and system based on single-step detector

Info

Publication number
CN113052226A
CN113052226A (application CN202110300878.7A)
Authority
CN
China
Prior art keywords
flame
time
fire
lstm
data set
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110300878.7A
Other languages
Chinese (zh)
Inventor
高尚兵
陈浩霖
吕昊泽
相林
于坤
于永涛
张海艳
蔡创新
王国平
鲜金龙
龚宇晨
曾钰涛
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huaiyin Institute of Technology
Original Assignee
Huaiyin Institute of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huaiyin Institute of Technology filed Critical Huaiyin Institute of Technology
Priority to CN202110300878.7A priority Critical patent/CN113052226A/en
Publication of CN113052226A publication Critical patent/CN113052226A/en
Pending legal-status Critical Current

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/049Temporal neural networks, e.g. delay elements, oscillating neurons or pulsed inputs
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • Molecular Biology (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Fire-Detection Mechanisms (AREA)
  • Fire Alarms (AREA)

Abstract

The invention discloses a time-sequence fire identification method and system based on a single-step detector: a real flame video data set containing complex environments is acquired based on a single-step detection method, and the data set is preprocessed; an LSTM-C deep neural network model consisting of a plurality of memory units is constructed and trained with the data set; flame space information in the monitored area is extracted with the single-step detector; flame space information over a period of time is predicted with the trained LSTM-C model; and the prediction is judged so that corresponding fire-prevention measures can be taken. The invention can be used for real-time flame detection and early warning, and has good robustness and wide application value.

Description

Time-sequence fire identification method and system based on single-step detector
Technical Field
The invention belongs to the technical field of image processing and fire prevention, and particularly relates to a time-series fire identification method and system based on a single-step detector.
Background
In recent years, with advances in deep learning research, its application fields have grown ever wider. In existing flame detection systems, researchers mainly detect flames in images, for example: Kim et al. propose a deep-learning-based forest fire monitoring technique that uses images acquired by optical sensors on unmanned aerial vehicles, trained on past forest fire image sets; Huttner et al. propose a deep-learning approach based on Google's Inception v3 and evaluate the impact of different optimizers, learning rates and reduction functions on convergence time and results; Muhammad et al. propose an early fire detection framework for CCTV surveillance cameras based on a fine-tuned convolutional neural network, which can detect fire in different indoor and outdoor environments; Shen et al. use the YOLO model for flame detection and compare it with a shallow learning method to determine the most effective flame detection approach; Muhammad et al. propose a video fire detection system based on convolutional neural networks, inspired by the GoogLeNet architecture, and fine-tune the model for the fire detection problem to improve accuracy and efficiency.
The authors justify the choice of GoogLeNet by its good classification accuracy and its suitability for implementing a small model on an FPGA. Yi et al. propose a new saliency detection algorithm for rapid localization and segmentation of core fire regions in aerial images; Muhammad et al. use smaller convolution kernels based on the SqueezeNet architecture, reducing the amount of computation while achieving excellent flame detection results; Priya et al. provide a transfer-learning-based Inception-v3 convolutional neural network algorithm: satellite images are used for training, the data set is divided into fire and non-fire images, a confusion matrix is generated to quantify the efficiency of the framework, and fire regions are extracted from the satellite images using local binary patterns, reducing the false detection rate.
the above-described method based on deep learning has a good effect in generalization, but may fail to detect false detection. Aiming at the problem, the invention provides a time-sequence fire identification method and system based on a single-step detector.
Disclosure of Invention
The purpose of the invention is as follows: aiming at the problems in the prior art, the invention provides a time-sequence fire identification method and system based on a single-step detector, which can detect flames in real time while maintaining good accuracy and reducing false detections and missed detections.
The technical scheme is as follows: the invention provides a time-sequence fire identification method based on a single-step detector, which specifically comprises the following steps:
(1) acquiring a real flame video data set containing a complex environment based on a single-step detection method, and preprocessing the data set;
(2) constructing an LSTM-C deep neural network model and training it with the data set; the LSTM-C deep neural network consists of a plurality of memory units, each comprising a forgetting gate, an input gate and an output gate; a memory unit can selectively retain the correction parameters fed back by gradient descent of the loss function and identify a plurality of time sequences;
(3) extracting flame space information under a monitored area by using a single-step detector;
(4) predicting flame space information in a period of time by using the trained LSTM-C model;
(5) and judging the prediction information and then making corresponding fire prevention measures.
Further, the step (1) includes the steps of:
(11) acquiring flame space information in each frame by using a single-step detection method for the disclosed flame video, wherein the flame space information comprises whether flame exists, the flame area and the flame center position, and taking the sequence as a flame space information tag data set of the video file;
(12) according to the label data set, marking each video frame image of the disclosed flame video in which fire occurs as 1 and the others as 0, forming a one-dimensional flame label data set;
(13) and finally, taking the flame space information label data set and the flame label data set of the video frame as a training data set of the LSTM-C model.
Further, the step (2) is realized as follows:
(21) the forgetting gate is computed as f_t = σ(W_f · [h_{t-1}, x_t] + b_f), where σ is the sigmoid function, f_t is the output value of the forgetting gate, W_f is the weight of the forgetting-gate neural network, h_{t-1} is the output of the node at time t-1, x_t is the input of the node at time t, and b_f is the bias of the forgetting-gate neural network;
(22) the input gate is computed as i_t = σ(W_i · [h_{t-1}, x_t] + b_i), where i_t is the output value of the input gate, W_i is the weight of the input-gate neural network, and b_i is the bias of the input-gate neural network;
(23) the output gate is computed as o_t = σ(W_o · [h_{t-1}, x_t] + b_o), where o_t is the output value of the output gate, W_o is the weight of the output-gate neural network, and b_o is the bias of the output-gate neural network;
(24) putting a plurality of time sequences into the memory units constructed in the steps (21) to (23) for weighted summation to construct an LSTM-C model;
(25) the constructed data was used to train the LSTM-C model.
Further, the step (4) comprises the steps of:
(41) acquiring flame space information in each frame of the monitoring image with the single-step detection method, including whether flame exists, the flame area and the flame center position; while a Queue of a preset threshold size is not full, the flame space information is enqueued; otherwise the head element of the Queue is deleted first and the flame space information is then enqueued;
(42) when the Queue is full, its contents are passed into the trained LSTM-C model to identify fire.
The invention also provides a time-sequence fire identification system based on the single-step detector, comprising: an image preprocessing module, for reading video frame images and preprocessing and normalizing them; a flame detection module, which uses the single-step detection model to detect the normalized video frame images and acquire the spatial information of the flame; a fire prediction module, which uses the LSTM-C model to perform fire prediction on the flame space information stored by the flame detection module; and a flame alarm module, which monitors the prediction result of the fire prediction module: when a fire is judged to occur, it issues a flame alarm to prompt the user, provides an evacuation guidance strategy for non-security personnel in the fire area, guides security personnel to take fire-extinguishing measures, and provides a real-time detection interface for the user.
Based on the same inventive concept, the invention also provides a time-sequence fire identification system based on the single-step detector, comprising a memory, a GPU, and a computer program stored in the memory and executable on the processor, wherein the computer program, when loaded onto the GPU, implements the above time-sequence fire identification method based on the single-step detector.
Advantageous effects: compared with the prior art, the invention has the following benefits: the single-step detection method lacks temporal features, while flame in a time sequence is nonlinear, time-varying, easily disturbed and observable only for a limited time; the invention therefore builds a time-aware fire identification model, LSTM-C, on top of the single-step detector, which improves the accuracy of fire identification, can be used for real-time fire identification and early warning, and has good robustness and wide application value.
Drawings
FIG. 1 is a flow chart of the present invention;
FIG. 2 is an exemplary graph of LSTM-C training data of the present invention;
FIG. 3 is a schematic diagram of the LSTM-C model structure of the present invention;
FIG. 4 is a structure diagram of the LSTM-C model constructed using TensorFlow according to the present invention.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings.
A large number of parameters are involved in the present embodiment, and each variable will now be described as shown in table 1.
TABLE 1 Parameter description table (reproduced as images in the original publication; not shown here)
The invention provides a time-series fire identification method based on a single-step detector, which specifically comprises the following steps as shown in figure 1:
step 1: a real flame video dataset comprising a complex environment is acquired based on a single-step detection method and the dataset is preprocessed.
Flame space information in each frame is obtained by using a single-step detection method for the disclosed flame video, the flame space information comprises whether flame exists or not, the flame area and the flame center position, and the sequence is used as a flame space information tag data set of the video file. And marking the image corresponding to each video frame image in the disclosed flame video as 1 according to the label data set, and setting other parts as 0 to form a one-dimensional flame label data set. And finally, taking the flame space information label data set and the flame label data set of the video frame as a training data set of the LSTM-C model.
The obtained Data set Data is shown in Fig. 2. The construction proceeds as follows:
(1) Each frame of the real flame video data is captured to construct the data set P1 = {Frame_1, Frame_2, …, Frame_N}, where Frame_N denotes the N-th captured video frame.
(2) The single-step detector detects each frame image to obtain the spatial information of the flame and construct the label data set L1 = {Label_1, Label_2, …, Label_N}, corresponding frame by frame to the flame positions; each Label is represented by (x_1, y_1, s), where (x_1, y_1) is the center coordinate of the flame object with the largest area and s is the sum of the areas of all flame objects.
(3) Each video frame image in data set P1 is labeled 1 if a fire occurs and 0 otherwise, forming the LSTM-C label data set L2. Finally, L1 and L2 together constitute the training data set Data of the LSTM-C model.
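The data-set construction in steps (1)-(3) can be sketched as follows; note that `detect_flame` is a hypothetical stand-in for the single-step detector, and the box format (x, y, w, h) is an assumption for illustration, not the patent's actual interface:

```python
# Sketch of the data-set construction in step 1 (hypothetical helper names).

def detect_flame(frame):
    """Hypothetical stand-in for the single-step detector: returns a list of
    flame boxes as (x, y, w, h) tuples; an empty list means no flame."""
    return frame.get("flames", [])

def build_label(frame):
    """Build one Label = (x1, y1, s): center of the largest flame box and the
    summed area of all flame boxes; (0, 0, 0) when no flame is present."""
    boxes = detect_flame(frame)
    if not boxes:
        return (0.0, 0.0, 0.0)
    largest = max(boxes, key=lambda b: b[2] * b[3])
    x1 = largest[0] + largest[2] / 2.0
    y1 = largest[1] + largest[3] / 2.0
    s = float(sum(w * h for (_, _, w, h) in boxes))
    return (x1, y1, s)

def build_datasets(video_frames):
    """Return (L1, L2): per-frame spatial labels and per-frame fire/no-fire bits."""
    L1 = [build_label(f) for f in video_frames]
    L2 = [1 if lbl[2] > 0 else 0 for lbl in L1]
    return L1, L2
```

L1 then serves as the input sequence and L2 as the supervisory signal for the LSTM-C model.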
Step 2: an LSTM-C deep neural network model is constructed and trained using the dataset.
The LSTM-C deep neural network consists of memory units, each containing a forgetting gate, an input gate and an output gate; a memory unit can selectively retain the correction parameters fed back by gradient descent of the loss function and identify a plurality of time sequences.
The forgetting gate is computed as f_t = σ(W_f · [h_{t-1}, x_t] + b_f), where σ is the sigmoid function, f_t is the output value of the forgetting gate, W_f is the weight of the forgetting-gate neural network, h_{t-1} is the output of the node at time t-1, x_t is the input of the node at time t, and b_f is the bias of the forgetting-gate neural network.
The input gate is computed as i_t = σ(W_i · [h_{t-1}, x_t] + b_i), where i_t is the output value of the input gate, W_i is the weight of the input-gate neural network, and b_i is the bias of the input-gate neural network.
The output gate is computed as o_t = σ(W_o · [h_{t-1}, x_t] + b_o), where o_t is the output value of the output gate, W_o is the weight of the output-gate neural network, and b_o is the bias of the output-gate neural network.
A plurality of time sequences are then fed into the memory units constructed above and weighted-summed to build the LSTM-C model, which is trained with the constructed data set.
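A scalar toy version of the three gate formulas can be sketched as below. This is for illustration only — real LSTM layers operate on vectors and also include a cell-state update, which the patent text does not spell out:

```python
import math

def sigmoid(z):
    """The σ function used by all three gates."""
    return 1.0 / (1.0 + math.exp(-z))

def lstm_gates(h_prev, x_t, W_f, W_i, W_o, b_f, b_i, b_o):
    """One memory-unit step over scalar inputs: each gate applies the sigmoid
    to a weighted combination of the previous output h_{t-1} and the current
    input x_t, matching the three formulas above. Each W_* is a (w_h, w_x)
    pair acting on the concatenation [h_{t-1}, x_t]."""
    def gate(W, b):
        return sigmoid(W[0] * h_prev + W[1] * x_t + b)
    f_t = gate(W_f, b_f)   # forgetting gate
    i_t = gate(W_i, b_i)   # input gate
    o_t = gate(W_o, b_o)   # output gate
    return f_t, i_t, o_t
```

With zero weights and biases each gate outputs σ(0) = 0.5, the sigmoid's midpoint, which is a quick sanity check on the formulas.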
The LSTM-C model is shown in fig. 3 and 4.
(1) The time-sequence length of the LSTM-C model is set to 5-10 frames, and the flame space information L1 is arranged accordingly into the sequence data S1, S2, S3, S4, S5 and S6.
(2) The six time sequences S1, S2, S3, S4, S5 and S6 are each passed into an LSTM layer with a hidden size of 50, and the results are passed into a Dense layer to obtain H1, H2, H3, H4, H5 and H6.
(3) H1, H2, H3, H4, H5 and H6 are spliced to obtain H.
(4) And finally, transmitting the H into a Dense layer, and establishing an LSTM-C model.
(5) The initial weights are set to random values, and the input dimension of the LSTM-C model is set to (3, 10).
(6) Setting the LSTM-C model parameters: the Adam gradient descent method is used, the learning rate is set to 1 × 10^-4, and the loss function is the binary cross-entropy function.
(7) After learning the time series in the LSTM-C model using L1 in Data as an input value and L2 as a supervisory value, a model M for predicting the occurrence of a fire was obtained.
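Step (6) selects binary cross-entropy as the loss; a minimal pure-Python sketch of that function follows (in practice the framework's built-in implementation would be used, this is only to make the formula concrete):

```python
import math

def binary_cross_entropy(y_true, y_pred, eps=1e-7):
    """Mean binary cross-entropy: -mean(y*log(p) + (1-y)*log(1-p)).
    eps clips predictions away from 0 and 1 so log() stays finite."""
    total = 0.0
    for y, p in zip(y_true, y_pred):
        p = min(max(p, eps), 1.0 - eps)
        total += -(y * math.log(p) + (1 - y) * math.log(1 - p))
    return total / len(y_true)
```

For an uninformative prediction of 0.5 the loss equals ln 2 ≈ 0.693 regardless of the true label, while a confident correct prediction drives it toward 0.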
Step 3: flame space information in the monitored area is extracted using the single-step detector.
Step 4: flame space information over a period of time is predicted using the trained LSTM-C model.
Flame space information is obtained from each frame of the monitoring image with the single-step detection method, including whether flame exists, the flame area and the flame center position. While a Queue of a preset threshold size is not full, the flame space information is enqueued; otherwise the head element of the Queue is deleted first and the flame space information is then enqueued.
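The enqueue/dequeue behaviour above maps naturally onto a fixed-length deque; in this sketch the window size of 10 frames is an assumption taken from the upper end of the 5-10-frame time sequence:

```python
from collections import deque

# Sliding window of per-frame flame information; maxlen makes the deque drop
# the oldest (head) element automatically once the window is full, mirroring
# the delete-head-then-enqueue behaviour described above.
WINDOW = 10
queue = deque(maxlen=WINDOW)

def push_frame_info(info):
    """Enqueue one frame's (x1, y1, s) tuple; return True when the window is
    full and ready to be passed to the trained LSTM-C model."""
    queue.append(info)
    return len(queue) == WINDOW
```

Once `push_frame_info` returns True, the queue's contents form one complete time sequence for the LSTM-C model.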
Step 5: the flame space information is judged and corresponding fire-prevention countermeasures are taken.
The trained single-step detection model predicts the bounding box and class of the flame object, and the spatial information of the flame is acquired and stored; model M then predicts whether a fire occurs; finally, a corresponding disaster-prevention strategy is made according to the prediction result.
A video frame image IMG is acquired and preprocessed to obtain IMG0, whose size is then normalized to obtain image IMG1; in this embodiment frames are normalized to 416 × 416 pixels as input to the single-step detection model in the next step.
The size-normalized images are passed into the single-step detector; the coordinate positions and area sizes of the flames in the video frames are stored in a queue Loc, and the flame target positions are marked in the preprocessed, normalized video frame IMG1 according to Loc, realizing visualization of flame tracking.
The Loc queue is passed into model M for identification and judgment; when model M predicts that the probability of fire in the video stream is higher than 80%, a flame alarm is issued and security personnel are prompted to take the necessary disaster-prevention measures. Note that the algorithm only issues reminders and does not take over alarm control: after an alarm, security personnel must intervene manually to handle the dangerous situation.
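The 80% alarm rule can be sketched as a simple threshold check; `should_alarm` is an illustrative name, not from the patent:

```python
# Minimal sketch of the 80% alarm rule described above.
ALARM_THRESHOLD = 0.80

def should_alarm(fire_probability, threshold=ALARM_THRESHOLD):
    """Raise a flame alarm only when the predicted fire probability is strictly
    higher than the threshold; the final decision is still left to security
    personnel, as the method does not take over alarm control."""
    return fire_probability > threshold
```

Keeping the comparison strict (`>` rather than `>=`) matches the "higher than 80%" wording of the embodiment.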
The invention also discloses a time-sequence fire identification system based on the single-step detector, mainly comprising the following modules: an image preprocessing module, for reading video frame images and preprocessing and normalizing them; a flame detection module, which uses the single-step detection model to detect the normalized video frame images and acquire the spatial information of the flame; a fire prediction module, which uses the LSTM-C model to perform fire prediction on the flame space information stored by the flame detection module; and a flame alarm module, which monitors the prediction result of the fire prediction module: when a fire is judged to occur, it issues a flame alarm to prompt the user, provides an evacuation guidance strategy for non-security personnel in the fire area, guides security personnel to take fire-extinguishing measures, and provides a real-time detection interface for the user. The system implements the above time-sequence fire identification method based on the single-step detector and belongs to the same inventive concept; for details, refer to the method embodiment, which is not repeated here.
Based on the same inventive concept, the invention also discloses a time-sequence fire identification system based on the single-step detector, comprising a memory, a GPU, and a computer program stored in the memory and executable on the processor, wherein the computer program, when loaded onto the GPU, implements the above time-sequence fire identification method based on the single-step detector.

Claims (6)

1. A time-series fire identification method based on a single-step detector is characterized by comprising the following steps:
(1) acquiring a real flame video data set containing a complex environment based on a single-step detection method, and preprocessing the data set;
(2) constructing an LSTM-C deep neural network model and training it with the data set; the LSTM-C deep neural network consists of a plurality of memory units, each comprising a forgetting gate, an input gate and an output gate; a memory unit can selectively retain the correction parameters fed back by gradient descent of the loss function and identify a plurality of time sequences;
(3) extracting flame space information under a monitored area by using a single-step detector;
(4) predicting flame space information in a period of time by using the trained LSTM-C model;
(5) and judging the prediction information and then making corresponding fire prevention measures.
2. The single-step detector-based time-sequenced fire identification method according to claim 1, characterized in that said step (1) comprises the steps of:
(11) acquiring flame space information in each frame by using a single-step detection method for the disclosed flame video, wherein the flame space information comprises whether flame exists, the flame area and the flame center position, and taking the sequence as a flame space information tag data set of the video file;
(12) marking an image corresponding to each video frame image in the disclosed flame video as 1 according to the label data set, and setting other parts as 0 to form a one-dimensional flame label data set;
(13) and finally, taking the flame space information label data set and the flame label data set of the video frame as a training data set of the LSTM-C model.
3. The method for time-series fire identification based on single-step detector as claimed in claim 1, wherein the step (2) is implemented as follows:
(21) the forgetting gate is computed as f_t = σ(W_f · [h_{t-1}, x_t] + b_f), where σ is the sigmoid function, f_t is the output value of the forgetting gate, W_f is the weight of the forgetting-gate neural network, h_{t-1} is the output of the node at time t-1, x_t is the input of the node at time t, and b_f is the bias of the forgetting-gate neural network;
(22) the input gate is computed as i_t = σ(W_i · [h_{t-1}, x_t] + b_i), where i_t is the output value of the input gate, W_i is the weight of the input-gate neural network, and b_i is the bias of the input-gate neural network;
(23) the output gate is computed as o_t = σ(W_o · [h_{t-1}, x_t] + b_o), where o_t is the output value of the output gate, W_o is the weight of the output-gate neural network, and b_o is the bias of the output-gate neural network;
(24) putting a plurality of time sequences into the memory units constructed in the steps (21) to (23) for weighted summation to construct an LSTM-C model;
(25) the constructed data was used to train the LSTM-C model.
4. The single-step detector-based time-sequenced fire identification method according to claim 1, characterized in that said step (4) comprises the steps of:
(41) acquiring flame space information in each frame of the monitoring image with the single-step detection method, including whether flame exists, the flame area and the flame center position; while a Queue of a preset threshold size is not full, the flame space information is enqueued; otherwise the head element of the Queue is deleted first and the flame space information is then enqueued;
(42) when the Queue is full, its contents are passed into the trained LSTM-C model to identify fire.
5. A single-step detector based time-sequenced fire identification system using the method according to any of claims 1 to 4, characterized in that it comprises:
an image preprocessing module: the video frame image preprocessing and normalization unit is used for reading the video frame image and preprocessing and normalizing the video frame image;
a flame detection module: the single-step detection model is used for detecting the normalized video frame image to acquire the spatial information of the flame;
a fire prediction module: the device is used for carrying out fire prediction on the flame space information stored by the flame detection module by using the LSTM-C model;
a flame alarm module: the fire disaster prediction module is used for monitoring the prediction result in the fire disaster prediction module; when a fire disaster is judged to occur, a flame alarm is sent out to prompt a user, a withdrawing guidance strategy for the user to the non-security personnel in the fire disaster area is provided, and the security personnel are guided to take fire extinguishing measures; and providing a real-time detection interface for a user.
6. A single-step detector based time-sequenced fire identification system comprising a memory, a GPU and a computer program stored on the memory and executable on a processor, wherein said computer program, when loaded into the GPU, implements the single-step detector based time-sequenced fire identification method according to any of claims 1-4.
CN202110300878.7A 2021-03-22 2021-03-22 Time-sequence fire identification method and system based on single-step detector Pending CN113052226A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110300878.7A CN113052226A (en) 2021-03-22 2021-03-22 Time-sequence fire identification method and system based on single-step detector


Publications (1)

Publication Number Publication Date
CN113052226A true CN113052226A (en) 2021-06-29

Family

ID=76513990

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110300878.7A Pending CN113052226A (en) 2021-03-22 2021-03-22 Time-sequence fire identification method and system based on single-step detector

Country Status (1)

Country Link
CN (1) CN113052226A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113743328A (en) * 2021-09-08 2021-12-03 无锡格林通安全装备有限公司 Flame detection method and device based on long-term and short-term memory model
CN113985913A (en) * 2021-09-24 2022-01-28 大连海事大学 Collection-division type multi-unmanned-plane rescue system based on urban fire spread prediction

Citations (4)

Publication number Priority date Publication date Assignee Title
CN108537215A (en) * 2018-03-23 2018-09-14 清华大学 A kind of flame detecting method based on image object detection
CN110443969A (en) * 2018-05-03 2019-11-12 中移(苏州)软件技术有限公司 A kind of fire point detecting method, device, electronic equipment and storage medium
CN110796664A (en) * 2019-10-14 2020-02-14 北京字节跳动网络技术有限公司 Image processing method, image processing device, electronic equipment and computer readable storage medium
CN111310662A (en) * 2020-02-17 2020-06-19 淮阴工学院 Flame detection and identification method and system based on integrated deep network


Non-Patent Citations (1)

Title
Ioannis E. Livieris et al.: "An Advanced CNN-LSTM Model for Cryptocurrency Forecasting", Electronics *

Cited By (3)

Publication number Priority date Publication date Assignee Title
CN113743328A (en) * 2021-09-08 2021-12-03 无锡格林通安全装备有限公司 Flame detection method and device based on long-term and short-term memory model
CN113985913A (en) * 2021-09-24 2022-01-28 大连海事大学 Collection-division type multi-unmanned-plane rescue system based on urban fire spread prediction
CN113985913B (en) * 2021-09-24 2024-04-12 大连海事大学 Integrated and separated type multi-unmanned aerial vehicle rescue system based on urban fire spread prediction

Similar Documents

Publication Publication Date Title
CN111666857B (en) Human behavior recognition method, device and storage medium based on environment semantic understanding
CN108416250B (en) People counting method and device
CN112257557B (en) High-altitude parabolic detection and identification method and system based on machine vision
US11615623B2 (en) Object detection in edge devices for barrier operation and parcel delivery
US11080434B2 (en) Protecting content on a display device from a field-of-view of a person or device
CN112287816B (en) Dangerous work area accident automatic detection and alarm method based on deep learning
JP5224401B2 (en) Monitoring system and method
US20190258866A1 (en) Human presence detection in edge devices
CN107977646B (en) Partition delivery detection method
CN109815787B (en) Target identification method and device, storage medium and electronic equipment
CN113052226A (en) Time-sequence fire identification method and system based on single-step detector
CN107122743B (en) Security monitoring method and device and electronic equipment
CN110633643A (en) Abnormal behavior detection method and system for smart community
CN116343330A (en) Abnormal behavior identification method for infrared-visible light image fusion
CN110414400A (en) A kind of construction site safety cap wearing automatic testing method and system
CN113963301A (en) Space-time feature fused video fire and smoke detection method and system
KR101454644B1 (en) Loitering Detection Using a Pedestrian Tracker
CN116363748A (en) Power grid field operation integrated management and control method based on infrared-visible light image fusion
Nakkach et al. Smart border surveillance system based on deep learning methods
CN116402811B (en) Fighting behavior identification method and electronic equipment
CN113538513A (en) Method, device and equipment for controlling access of monitored object and storage medium
CN105451235A (en) Wireless sensor network intrusion detection method based on background updating
CN112435240B (en) Deep vision mobile phone detection system for workers to illegally use mobile phones
Chen et al. Efficient Motion Symbol Detection and Multikernel Learning for AER Object Recognition
ELBAŞI et al. Control charts approach for scenario recognition in video sequences

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20210629