CN108257347B - Flame image sequence classification method and device by using convolutional neural network - Google Patents

Info

Publication number: CN108257347B
Application number: CN201810020679.9A
Authority: CN (China)
Prior art keywords: neural network, network model, training, optical flow, convolutional neural
Legal status: Active (granted)
Other languages: Chinese (zh)
Other versions: CN108257347A (en)
Inventors: 李腾, 刘亚, 王妍
Current assignee: Anhui University
Original assignee: Anhui University

Legal events

Application filed by Anhui University
Priority to CN201810020679.9A
Publication of CN108257347A
Application granted
Publication of CN108257347B

Classifications

    • G: PHYSICS
    • G08: SIGNALLING
    • G08B: SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B 17/00: Fire alarms; Alarms responsive to explosion
    • G08B 17/12: Actuation by presence of radiation or particles, e.g. of infrared radiation or of ions
    • G08B 17/125: Actuation by presence of radiation or particles by using a video camera to detect fire or smoke
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00: Pattern recognition
    • G06F 18/20: Analysing
    • G06F 18/24: Classification techniques
    • G06F 18/241: Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/04: Architecture, e.g. interconnection topology
    • G06N 3/045: Combinations of networks

Abstract

The invention discloses a method and a device for classifying flame image sequences using a convolutional neural network. The method comprises the following steps: acquiring foreground image sequences, and acquiring the integrated optical flow map corresponding to each foreground image sequence; acquiring a first training set and a first test set; acquiring, for each integrated optical flow map, a category label indicating whether the map corresponds to an image sequence of real flame; training a preset first convolutional neural network model; testing the trained first convolutional neural network model to obtain a first test result; judging whether the first test result is greater than a first preset threshold; if so, taking the trained model as the target first convolutional neural network model; if not, adjusting the training parameters of the trained model and returning to the step of training the preset first convolutional neural network model; and classifying the integrated optical flow map to be classified using the target first convolutional neural network model. By applying the embodiments of the invention, the false detection rate of flame region detection can be reduced.

Description

Flame image sequence classification method and device by using convolutional neural network
Technical Field
The invention relates to the technical field of flame detection, and in particular to a method and a device for classifying flame image sequences using a convolutional neural network.
Background
Fire is used everywhere in daily life, but accidental or improper use of fire can start a blaze and cause enormous damage to the natural environment and to human lives and property. Timely and effective flame detection can greatly reduce or even prevent such damage.
At present, common fire detection methods are based on temperature sensors or smoke sensors, but such methods respond slowly and are limited in the scenes where they can be used. To improve detection timeliness, flame detection methods based on image processing are widely applied. Their principle is as follows: features are first designed manually to represent the dynamic characteristics of flame and are loaded into image monitoring equipment; during operation, the equipment captures a large number of images and judges whether a fire has occurred according to the similarity between the features extracted from those images and the stored dynamic features. In general, the dynamic feature is the motion feature of pixels.
However, in practical applications, if many moving objects exist in the environment monitored by the image monitoring equipment, the captured images may contain a large number of moving pixels. Too many moving pixels interfere with the feature comparison process, so the flame region cannot be identified with high accuracy, resulting in false detections.
Disclosure of Invention
The invention aims to provide a method and a device for classifying flame image sequences using a convolutional neural network, so as to reduce the probability of false detection.
The invention solves the technical problems through the following technical scheme:
An embodiment of the invention provides a flame image sequence classification method using a convolutional neural network, comprising the following steps:
acquiring foreground image sequences and, for each foreground image sequence, acquiring the integrated optical flow map corresponding to that sequence;
acquiring a first training set and a first test set, wherein the first training set and the first test set are each a set of at least one of the integrated optical flow maps;
for each integrated optical flow map in the first training set, acquiring a category label indicating whether the map corresponds to an image sequence of real flame;
training a preset first convolutional neural network model using each integrated optical flow map in the first training set;
testing the trained first convolutional neural network model using the integrated optical flow maps in the first test set to obtain a first test result;
judging whether the first test result is greater than a first preset threshold; if so, taking the trained first convolutional neural network model as the target first convolutional neural network model; if not, adjusting the training parameters of the trained model and returning to the step of training the preset first convolutional neural network model using each integrated optical flow map in the first training set;
and classifying, using the target first convolutional neural network model, the integrated optical flow map corresponding to an image sequence to be classified.
Optionally, in a specific implementation manner of the embodiment of the present invention, the acquiring a foreground image sequence includes:
acquiring a second training set and a second test set, and acquiring, for each image in the second training set, the corresponding scene category label, flame region category label, and non-flame region category label; wherein the second training set and the second test set are both sets of images;
training a preset second convolutional neural network model using each image in the second training set;
testing the trained second convolutional neural network model using the images in the second test set to obtain a second test result;
judging whether the second test result is greater than a second preset threshold; if so, taking the trained second convolutional neural network model as the target second convolutional neural network model; if not, adjusting the training parameters of the trained model and returning to the step of training the preset second convolutional neural network model using each image in the second training set;
and detecting an image sequence to be classified using the target second convolutional neural network model to obtain the corresponding foreground image sequence.
Optionally, in a specific implementation manner of the embodiment of the present invention, before the scene category label, the flame region category label, and the non-flame region category label corresponding to each image are acquired, the method further includes:
performing image enhancement processing on the images in the second training set using an image enhancement algorithm.
Optionally, in a specific implementation manner of the embodiment of the present invention, before the scene category label, the flame region category label, and the non-flame region category label corresponding to each image are acquired, the method further includes:
preprocessing the image sequences in the second training set, wherein the preprocessing includes: color histogram equalization, brightness and contrast transformation, horizontal mirror flipping, Gaussian blurring, and the addition of random noise.
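As a rough illustration (not the patented implementation), most of the listed preprocessing operations can be sketched in NumPy. The parameter values (alpha, beta, sigma) are arbitrary examples; Gaussian blurring needs a convolution kernel and would in practice come from an image library, so it is omitted here.

```python
import numpy as np

def equalize_hist(gray):
    """Color/grayscale histogram equalization for a uint8 image plane."""
    hist = np.bincount(gray.ravel(), minlength=256)
    cdf = hist.cumsum()
    # Stretch the cumulative distribution to the full [0, 255] range.
    cdf = (cdf - cdf.min()) * 255 / max(cdf.max() - cdf.min(), 1)
    return cdf.astype(np.uint8)[gray]

def adjust_brightness_contrast(img, alpha=1.2, beta=10):
    """Linear brightness/contrast transform: out = alpha * img + beta."""
    return np.clip(alpha * img.astype(np.float32) + beta, 0, 255).astype(np.uint8)

def horizontal_flip(img):
    """Horizontal mirror inversion."""
    return img[:, ::-1].copy()

def add_gaussian_noise(img, sigma=5.0, rng=None):
    """Additive zero-mean Gaussian noise."""
    if rng is None:
        rng = np.random.default_rng(0)
    noisy = img.astype(np.float32) + rng.normal(0.0, sigma, img.shape)
    return np.clip(noisy, 0, 255).astype(np.uint8)
```

Applying several of these transforms to each training image multiplies the effective size of the second training set.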
Optionally, in a specific implementation manner of the embodiment of the present invention, the acquiring the integrated optical flow map corresponding to the foreground image sequence includes:
acquiring, for each pair of adjacent foreground images in the foreground image sequence, the optical flow map of the pair using a dense optical flow algorithm according to the optical flow features contained in the two images; and superimposing the optical flow maps to obtain the integrated optical flow map corresponding to the foreground image sequence.
Optionally, in a specific implementation manner of the embodiment of the present invention, the first test result includes:
the accuracy with which the first convolutional neural network model classifies the integrated optical flow maps of the first test set.
The embodiment of the invention further provides a flame image sequence classification device using a convolutional neural network, comprising a first acquisition module, a second acquisition module, a third acquisition module, a training module, a testing module, a judging module, and a detection module, wherein:
the first acquisition module is configured to acquire foreground image sequences and, for each foreground image sequence, acquire the corresponding integrated optical flow map;
the second acquisition module is configured to acquire a first training set and a first test set, where the first training set and the first test set are each a set of at least one of the integrated optical flow maps;
the third acquisition module is configured to acquire, for each integrated optical flow map in the first training set, a category label indicating whether the map corresponds to an image sequence of real flame;
the training module is configured to train a preset first convolutional neural network model using each integrated optical flow map in the first training set;
the testing module is configured to test the trained first convolutional neural network model using the integrated optical flow maps in the first test set to obtain a first test result;
the judging module is configured to judge whether the first test result is greater than a first preset threshold; if so, take the trained first convolutional neural network model as the target first convolutional neural network model; if not, adjust the training parameters of the trained model and return to the step of training the preset first convolutional neural network model using each integrated optical flow map in the first training set;
the detection module is configured to classify, using the target first convolutional neural network model, the integrated optical flow map corresponding to an image sequence to be classified.
Optionally, in a specific implementation manner of the embodiment of the present invention, the first acquisition module is specifically configured to:
acquire a second training set and a second test set, and acquire, for each image in the second training set, the corresponding scene category label, flame region category label, and non-flame region category label, the second training set and the second test set both being sets of images;
train a preset second convolutional neural network model using each image in the second training set;
test the trained second convolutional neural network model using the images in the second test set to obtain a second test result;
judge whether the second test result is greater than a second preset threshold; if so, take the trained second convolutional neural network model as the target second convolutional neural network model; if not, adjust the training parameters of the trained model and return to the step of training the preset second convolutional neural network model using each image in the second training set;
and detect an image sequence to be classified using the target second convolutional neural network model to obtain the corresponding foreground image sequence.
Optionally, in a specific implementation manner of the embodiment of the present invention, the first acquisition module is specifically configured to:
acquire, for each pair of adjacent foreground images in the foreground image sequence, the optical flow map of the pair using a dense optical flow algorithm according to the optical flow features contained in the two images; and superimpose the optical flow maps to obtain the integrated optical flow map corresponding to the foreground image sequence.
Optionally, in a specific implementation manner of the embodiment of the present invention, the first test result includes:
the accuracy with which the first convolutional neural network model classifies the integrated optical flow maps of the first test set.
Compared with the prior art, the invention has the following advantages:
by applying the embodiments of the invention, the convolutional neural network is trained with integrated optical flow maps generated from foreground image sequences. Because only the motion information of the moving regions in the foreground image sequences is computed, interference from background changes with the integrated optical flow map is avoided, the dynamic characteristics of flame can be expressed accurately, and the false detection rate of flame region detection is reduced.
Drawings
Fig. 1 is a schematic flowchart of a method for classifying a flame image sequence by using a convolutional neural network according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of one image frame in an image sequence according to an embodiment of the present invention;
fig. 3 is a schematic structural diagram of a flame image sequence classification apparatus using a convolutional neural network according to an embodiment of the present invention.
Detailed Description
The following examples are given for the detailed implementation and specific operation of the present invention, but the scope of the present invention is not limited to the following examples.
Fig. 1 is a schematic flowchart of a method for classifying a flame image sequence by using a convolutional neural network according to an embodiment of the present invention, as shown in fig. 1, the method includes:
s101: and acquiring a foreground image sequence, and acquiring a comprehensive light flow graph corresponding to the foreground image sequence aiming at each foreground image sequence.
For example, the number of foreground image sequences acquired in this step may be 6000; the step is described by taking the acquisition of one foreground image sequence X as an example:
a video segment A corresponding to foreground image sequence X may be taken as input; for example, video segment A may comprise 5 consecutive video frames. A flame region identification algorithm is used to identify the suspected flame regions contained in all the video frames of video segment A, and those suspected flame regions are set as the foreground regions.
Suppose the foreground image sequence X corresponding to video segment A likewise has 5 foreground images:
foreground image 1, foreground image 2, foreground image 3, foreground image 4, and foreground image 5.
Then a dense optical flow algorithm can be used to compute the optical flow between foreground image 1 and foreground image 2, between foreground image 2 and foreground image 3, between foreground image 3 and foreground image 4, and between foreground image 4 and foreground image 5, yielding optical flow map 1, optical flow map 2, optical flow map 3, and optical flow map 4; superimposing these four optical flow maps gives integrated optical flow map 1, corresponding to foreground image sequence X.
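A minimal sketch of this superposition step, assuming the pairwise flows come from a dense optical flow routine (in practice something like OpenCV's Farneback algorithm). The `dense_flow` stub below is a toy stand-in so the sketch runs without an optical flow library:

```python
import numpy as np

def dense_flow(prev_img, next_img):
    """Placeholder for a dense optical flow routine such as
    cv2.calcOpticalFlowFarneback; it should return an (H, W, 2) array of
    per-pixel (dx, dy) displacements.  Here: a toy finite-difference
    stand-in, purely for illustration."""
    h, w = prev_img.shape
    flow = np.zeros((h, w, 2), dtype=np.float32)
    flow[..., 0] = next_img.astype(np.float32) - prev_img.astype(np.float32)
    return flow

def integrated_flow_map(foreground_seq):
    """Compute the flow between each pair of consecutive foreground
    images and superimpose (sum) the maps into one integrated
    optical flow map."""
    flows = [dense_flow(a, b)
             for a, b in zip(foreground_seq, foreground_seq[1:])]
    return np.sum(flows, axis=0)
```

With a 5-image sequence, the list comprehension produces the 4 pairwise flow maps described above, and the sum is the integrated map.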
For the other video segments, further integrated optical flow maps are obtained in the same way.
It can be understood that, if a video segment is found to contain no suspected flame region after detection by the second convolutional neural network model or by another method, the segment can be judged to be a non-flame video segment and has no corresponding foreground image sequence; therefore, not every video segment yields a foreground image sequence. Moreover, the number of video frames contained in video segment A may be greater than the number of foreground images in its corresponding foreground image sequence.
In practical applications, the background of every image in the foreground image sequence can be set to black to enhance the contrast between the suspected flame region and the background in the foreground image.
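One way this masking might be done, assuming a boolean flame mask is available from the flame region identification step (a sketch, not the patented implementation):

```python
import numpy as np

def blacken_background(frame, flame_mask):
    """Keep the suspected flame region and set everything else to black.
    `frame` is an (H, W, 3) image; `flame_mask` is an (H, W) boolean
    array from whatever flame region identification step is used."""
    return frame * flame_mask[..., None].astype(frame.dtype)
```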
It should be emphasized that any method capable of identifying flame regions can be used to identify the suspected flame regions in the video frames of video segment A; the embodiments of the present invention are not limited in this respect. In addition, a flame region may consist of a single pixel or of a region formed by two or more adjacent pixels. A suspected flame region may be a real flame region, or it may be a non-flame region that merely resembles one, for example an image of a blazing sunrise or an artificially drawn flame rather than real flame.
S102: acquiring a first training set and a first test set, wherein the first training set and the first test set are each a set of at least one of the integrated optical flow maps.
Illustratively, the number of integrated optical flow maps obtained in S102 may be 4000. A set of 3000 of these 4000 integrated optical flow maps may be used as the first training set, and the set of the remaining 1000 may be used as the first test set.
In practical applications, the 3000 integrated optical flow maps of the first training set may be selected at random, and similarly the 1000 integrated optical flow maps of the first test set may be selected at random.
In general, to ensure a better training effect, the integrated optical flow maps in the first training set and those in the first test set may be completely disjoint.
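The random, disjoint 3000/1000 split described above can be sketched as follows (a minimal illustration; the function name is ours, not the patent's):

```python
import random

def split_train_test(samples, n_train, n_test, seed=0):
    """Randomly split `samples` into disjoint training and test sets,
    mirroring the 3000/1000 split of the 4000 integrated optical
    flow maps in the example above."""
    assert n_train + n_test <= len(samples)
    rng = random.Random(seed)
    shuffled = samples[:]           # leave the caller's list intact
    rng.shuffle(shuffled)
    return shuffled[:n_train], shuffled[n_train:n_train + n_test]
```

Because both sets are drawn from one shuffled copy, no map can appear in both, which is the disjointness condition noted above.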
S103: for each integrated optical flow map in the first training set, acquiring the category label indicating whether the map corresponds to an image sequence of real flame.
For example, these category labels may be manually labeled in advance.
The category label mentioned in this step may indicate either that the integrated optical flow map corresponds to an image sequence of real flame or that it corresponds to an image sequence of non-real flame. In general, a category label corresponding to an image sequence of real flame can be understood to mean that the image sequence carrying the label contains images of real flame.
S104: training a preset first convolutional neural network model using each integrated optical flow map in the first training set.
For example, the first convolutional neural network model may include one 11 × 11 convolutional layer, one 5 × 5 convolutional layer, three 3 × 3 convolutional layers, five ReLU activation layers, three pooling layers, two Dropout layers, three fully connected layers, and one softmax loss layer. A ReLU activation layer follows every convolutional layer, all pooling layers use max pooling, and the dropout_ratio of every Dropout layer is 0.5. The last fully connected layer has 2 neurons, which compute the category loss over the two integrated optical flow map classes.
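The text fixes only the kernel sizes of this stack. Assuming AlexNet-style strides and paddings and a 227 × 227 input (both are assumptions, since neither is stated), the spatial size of the feature maps can be walked through with the standard formula out = (n + 2p - k) // s + 1:

```python
def conv_out(n, k, s=1, p=0):
    """Spatial size after a k x k convolution or pooling layer:
    out = (n + 2p - k) // s + 1."""
    return (n + 2 * p - k) // s + 1

# Hypothetical AlexNet-style strides/paddings; the patent text only
# fixes the kernel sizes (one 11x11 conv, one 5x5 conv, three 3x3
# convs, three max-pool layers).  Input size 227 is assumed.
layers = [
    ("conv 11x11", 11, 4, 0),
    ("maxpool 3x3", 3, 2, 0),
    ("conv 5x5",    5, 1, 2),
    ("maxpool 3x3", 3, 2, 0),
    ("conv 3x3",    3, 1, 1),
    ("conv 3x3",    3, 1, 1),
    ("conv 3x3",    3, 1, 1),
    ("maxpool 3x3", 3, 2, 0),
]

n = 227
for name, k, s, p in layers:
    n = conv_out(n, k, s, p)
    print(f"{name}: {n}x{n}")
```

Under these assumed hyperparameters the map shrinks 227 → 55 → 27 → 27 → 13 → 13 → 13 → 13 → 6, after which the three fully connected layers and the 2-neuron softmax would follow.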
All the labeled integrated optical flow maps of the first training set obtained in step S103 are input into the first convolutional neural network model, which is trained until it converges, yielding first convolutional neural network model 1.
In practical applications, the first convolutional neural network model is trained until it converges to within a preset range, at which point the training step ends.
S105: testing the trained first convolutional neural network model using the integrated optical flow maps in the first test set to obtain a first test result.
Illustratively, the first convolutional neural network model 1 obtained in step S104 is used to classify the integrated optical flow maps of the first test set obtained in step S102. For example, the foreground image sequence corresponding to integrated optical flow map 1 is identified as a sequence of real flame, the sequence corresponding to integrated optical flow map 2 is identified as a sequence of real flame, and the sequence corresponding to integrated optical flow map 3 is identified as a sequence of non-real flame. These classification results are then compared with whether the foreground image sequences corresponding to all the integrated optical flow maps actually are sequences of real flame, and from these statistics the accuracy of first convolutional neural network model 1 in classifying the integrated optical flow maps of the first test set is calculated.
S106: judging whether the first test result is greater than a first preset threshold; if so, taking the trained first convolutional neural network model as the target first convolutional neural network model; if not, adjusting the training parameters of the trained model and returning to step S104.
In general, when the accuracy is higher than 0.95, the first convolutional neural network model 1 obtained in step S104 is judged to be the target first convolutional neural network model meeting the training requirement.
If the accuracy with which first convolutional neural network model 1 classifies the integrated optical flow maps of the first test set is lower than 0.95, the model parameters of the first convolutional neural network model must be adjusted up or down and the model trained again, yielding first convolutional neural network model 2; steps S105 and S106 are then executed again, and so on, until the trained model classifies the integrated optical flow maps of the first test set with an accuracy higher than 0.95.
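The train/test/adjust cycle of steps S104 to S106 can be expressed as a small skeleton; `train_step`, `evaluate`, and `adjust` are placeholders for the real model training, test-set evaluation, and parameter adjustment:

```python
def train_until_accurate(train_step, evaluate, adjust, params,
                         threshold=0.95, max_rounds=10):
    """Skeleton of the S104-S106 cycle: train, test, and if the test
    accuracy does not exceed the threshold, adjust the training
    parameters and train again."""
    for _ in range(max_rounds):
        model = train_step(params)   # S104: train with current parameters
        acc = evaluate(model)        # S105: accuracy on the test set
        if acc > threshold:          # S106: threshold check
            return model, acc        # target model found
        params = adjust(params)      # adjust parameters up or down
    raise RuntimeError("accuracy requirement not met")
```

As a purely illustrative toy run, one can plug in stand-ins where "training" records a capacity, accuracy grows with capacity, and adjustment doubles it; the loop then terminates once the accuracy passes 0.95.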
S107: classifying, using the target first convolutional neural network model, the integrated optical flow map corresponding to an image sequence to be classified.
For each image sequence to be classified, its integrated optical flow map is acquired, and the map is classified using the first convolutional neural network model trained in step S106.
It can be understood that the classification results are: the image sequence corresponding to the integrated optical flow map is a real flame image sequence, or the image sequence corresponding to the integrated optical flow map is a non-real flame image sequence.
By applying the embodiment of the invention shown in fig. 1, the convolutional neural network model is trained with integrated optical flow maps generated from foreground image sequences. Because only the motion information of the moving regions in the foreground image sequences is computed, interference from background changes with the integrated optical flow map is avoided, the dynamic characteristics of flame can be expressed accurately, and the false detection rate of flame region detection is reduced.
In a specific implementation manner of the embodiment of the present invention, the acquiring a foreground image sequence includes:
S101A (not shown): acquiring a second training set and a second test set, and acquiring, for each image in the second training set, the corresponding scene category label, flame region category label, and non-flame region category label; wherein the second training set and the second test set are both sets of images.
For example, 10000 images containing flame regions may be collected for each of three scenes: indoor, outdoor, and forest. Fig. 2 is a schematic diagram of one image frame in an image sequence according to an embodiment of the present invention; it shows an image with a flame region in a forest scene. As shown in Fig. 2, 001 is the category label corresponding to a non-flame region in the image, 002 is the category label corresponding to a flame region, and 003 is the category label of the forest scene to which the image belongs. It is to be understood that the category labels in the image data include, but are not limited to, those described above.
8000 of the flame region images from the indoor scene, 8000 from the outdoor scene, and 8000 from the forest scene, 24000 images in all, form the training set; the set of the remaining 6000 images serves as the test set.
The scene category label of every image in the training set, and the category labels of the flame and non-flame regions within each image, are calibrated manually.
S101B (not shown): training a preset second convolutional neural network model using each image in the second training set.
For example, the preset second convolutional neural network model may include one 11 × 11 convolutional layer, one 6 × 6 convolutional layer, one 5 × 5 convolutional layer, three 3 × 3 convolutional layers, two 1 × 1 convolutional layers, three pooling layers, six ReLU activation layers, four Dropout layers, three fully connected layers, one 63 × 63 deconvolution layer, one Crop layer, and one joint loss function layer. A ReLU activation layer follows every convolutional layer except the two 1 × 1 convolutional layers, all pooling layers use max pooling, and the dropout_ratio of every Dropout layer is 0.5.
Specifically, during training of the second convolutional neural network model, a joint loss function may be used to determine whether the model has converged.
The joint loss function may be defined as follows:

L = -log y_i - Σ_j log z_j

where L is the joint loss of the second convolutional neural network model; y_i is the predicted probability of the image's scene category i, with i indexing the scene category labels indoor, outdoor, and forest; z_j is the predicted probability of category j for a single pixel of the image, with j indexing the category labels flame region and non-flame region; and Σ_j log z_j is accordingly the summed loss over the pixel-wise classifications of the image.
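A direct transcription of this joint loss into NumPy, assuming the network outputs are already softmax probabilities (the names and shapes are ours, for illustration):

```python
import numpy as np

def joint_loss(scene_probs, scene_label, pixel_probs, pixel_labels):
    """Joint loss L = -log(y_i) - sum_j log(z_j): the scene
    classification loss plus the summed per-pixel classification loss.
    scene_probs:  softmax probabilities over the scene categories
                  (indoor, outdoor, forest)
    scene_label:  index i of the true scene category
    pixel_probs:  (N, 2) softmax probabilities over
                  (non-flame region, flame region) for each pixel
    pixel_labels: (N,) true pixel categories"""
    y_i = scene_probs[scene_label]
    # z_j: probability the model assigns to each pixel's true category.
    z = pixel_probs[np.arange(len(pixel_labels)), pixel_labels]
    return -np.log(y_i) - np.sum(np.log(z))
```

Both terms shrink toward zero as the predicted probabilities of the true scene and the true per-pixel categories approach 1, which is why a joint loss below a set value can serve as the convergence criterion described above.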
A second convolutional neural network model whose joint loss value falls below a set value may be taken as the target second convolutional neural network model, ending the training.
All the labeled images of the second training set obtained in step S101A are input into the second convolutional neural network model, which is trained until it converges, yielding second convolutional neural network model 1.
In practical applications, the second convolutional neural network model is trained until it converges to within a preset range, at which point it is considered converged and the training step ends.
In practical applications, the second convolutional neural network model may be a multi-task convolutional neural network model.
S101C (not shown): testing the trained second convolutional neural network model using the images in the second test set to obtain a second test result.
Illustratively, the second convolutional neural network model 1 obtained in step S101B performs scene classification and/or flame region detection on the images of the second test set obtained in step S101A, and the accuracy of that scene classification and/or flame region detection is calculated.
S101D (not shown): judging whether the second test result is larger than a second preset threshold value or not; if so, taking the trained second convolutional neural network model as a target second convolutional neural network model; if not, adjusting the training parameters of the trained second convolutional neural network model, and returning to execute the step of training the preset second convolutional neural network model by using each image in the second training set.
In general, when the accuracy is higher than 0.95, it is determined that the second convolutional neural network model 1 obtained in step S101B is the target second convolutional neural network model meeting the training requirement.
If the accuracy of scene classification and flame region detection performed by second convolutional neural network model 1 (obtained in step S101B) on the images in the second test set (obtained in step S101A) is lower than 0.95, the model parameters of the second convolutional neural network model are adjusted up or down and the model is retrained to obtain second convolutional neural network model 2; steps S101C and S101D are then executed again until the accuracy of the trained second convolutional neural network model on the second test set is higher than 0.95.
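Steps S101B to S101D form a train/test/adjust loop gated on test accuracy; a minimal sketch of that control flow follows (illustrative Python; the 0.95 gate comes from the description, while the function names and round budget are assumptions):

```python
def train_with_accuracy_gate(model, train_fn, test_fn, adjust_fn,
                             threshold=0.95, max_rounds=10):
    # Train (S101B), test (S101C), and, while accuracy stays at or
    # below the gate, adjust parameters and retrain (S101D).
    for _ in range(max_rounds):
        model = train_fn(model)            # S101B: train the model
        if test_fn(model) > threshold:     # S101C/S101D: accuracy check
            return model                   # target model meeting the requirement
        model = adjust_fn(model)           # tune learning rate, epochs, ...
    raise RuntimeError("accuracy gate not met within the round budget")
```

In practice `adjust_fn` would implement the "adjust the training parameters up or down" step, e.g. halving the learning rate between rounds.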
S101E (not shown): and detecting the image sequence to be classified by using the target second convolutional neural network model to obtain a corresponding foreground image sequence.
By applying the embodiment of the invention, the second convolutional neural network is trained jointly on the tasks of image scene classification and flame region detection within the image; the scene information of the image is thereby fully exploited, further improving the detection rate and accuracy of flame regions in images.
In a specific implementation manner of the embodiment of the present invention, before the step of S101A, the method further includes:
S101F (not shown): and carrying out image enhancement processing on the images in the second training set by utilizing an image enhancement algorithm.
By applying the embodiment of the invention, the model can be fully trained.
In a specific implementation manner of the embodiment of the present invention, before the step of S101A, the method further includes:
S101G (not shown): preprocessing the image sequences in the second training set, wherein the preprocessing comprises: color histogram equalization processing, brightness and contrast transformation processing, horizontal mirror inversion processing, Gaussian blur processing and random noise adding processing.
By applying the embodiment of the invention, the model can be fully trained.
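The preprocessing operations listed above can be sketched with NumPy as follows (illustrative only; Gaussian blur is omitted, as it would typically come from a library such as OpenCV, and all parameter values are assumptions):

```python
import numpy as np

def equalize_histogram(gray):
    # Histogram equalization of one 8-bit channel via its CDF.
    hist = np.bincount(gray.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]
    if cdf[-1] == cdf_min:       # constant image: nothing to equalize
        return gray.copy()
    lut = np.clip(np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255),
                  0, 255).astype(np.uint8)
    return lut[gray]

def adjust_brightness_contrast(img, alpha=1.2, beta=10.0):
    # Linear brightness/contrast transform: alpha * img + beta, clipped.
    return np.clip(alpha * img.astype(np.float32) + beta, 0, 255).astype(np.uint8)

def horizontal_flip(img):
    # Horizontal mirror inversion.
    return img[:, ::-1]

def add_random_noise(img, sigma=5.0, seed=0):
    # Additive Gaussian random noise.
    noise = np.random.default_rng(seed).normal(0.0, sigma, img.shape)
    return np.clip(img.astype(np.float32) + noise, 0, 255).astype(np.uint8)
```

Applying several of these transforms to each training image enlarges the effective second training set, which is what lets the model be "fully trained".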
In a specific implementation manner of the embodiment of the present invention, the acquiring of the integrated optical flow graph corresponding to the foreground image sequence includes:
acquiring an optical flow graph for each pair of adjacent foreground images in the foreground image sequence by applying a dense optical flow algorithm to the optical flow features contained in the two adjacent foreground images; and superposing the optical flow graphs to obtain the integrated optical flow graph corresponding to the foreground image sequence.
By applying the embodiment of the invention, the integrated optical flow graph can be obtained.
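The pairwise-flow-then-superpose computation can be sketched as follows (illustrative Python; in practice the per-pair flow would come from a dense algorithm such as OpenCV's Farneback method, `cv2.calcOpticalFlowFarneback`, which is substituted here by a caller-supplied `flow_fn`):

```python
import numpy as np

def integrated_flow(frames, flow_fn):
    # flow_fn(prev, nxt) returns an (H, W, 2) displacement field for
    # one pair of adjacent foreground images; the integrated optical
    # flow graph is the superposition (vector sum) of all pair flows.
    total = None
    for prev, nxt in zip(frames[:-1], frames[1:]):
        flow = np.asarray(flow_fn(prev, nxt), dtype=np.float32)
        total = flow.copy() if total is None else total + flow
    return total
```

Because the frames are foreground images, only the motion of the moving regions contributes to the sum, which is why background changes do not disturb the integrated graph.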
In a specific implementation manner of the embodiment of the present invention, the first test result includes:
the accuracy of classifying the integrated optical flow graphs of the first test set by using the first convolutional neural network model.
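The first test result, i.e. the classification accuracy on the first test set, reduces to the usual correct-over-total ratio (a trivial illustrative sketch; the function and argument names are assumed):

```python
def classification_accuracy(predictions, labels):
    # Fraction of integrated optical flow graphs whose predicted
    # real-flame / non-flame label matches the ground truth.
    correct = sum(int(p == y) for p, y in zip(predictions, labels))
    return correct / len(labels)
```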
Corresponding to the embodiment of the invention shown in fig. 1, the embodiment of the invention also provides a flame image sequence classification device using the convolutional neural network.
Fig. 3 is a schematic structural diagram of a flame image sequence classification apparatus using a convolutional neural network according to an embodiment of the present invention, as shown in fig. 3, the apparatus includes: a first acquisition module 301, a second acquisition module 302, a third acquisition module 303, a training module 304, a testing module 305, a determination module 306, and a detection module 307, wherein,
the first obtaining module 301 is configured to obtain foreground image sequences and to obtain, for each foreground image sequence, an integrated optical flow graph corresponding to the foreground image sequence;
the second obtaining module 302 is configured to obtain a first training set and a first test set, where the first training set and the first test set are each a set of at least one of the integrated optical flow graphs;
the third obtaining module 303 is configured to obtain, for each integrated optical flow graph in the first training set, a category label indicating whether the integrated optical flow graph corresponds to an image sequence of real flames;
the training module 304 is configured to train a preset first convolutional neural network model by using each integrated optical flow graph in the first training set;
the testing module 305 is configured to test the trained first convolutional neural network model by using the integrated optical flow graph in the first test set, and obtain a first test result;
the judging module 306 is configured to judge whether the first test result is greater than a first preset threshold; if so, to take the trained first convolutional neural network model as the target first convolutional neural network model; if not, to adjust the training parameters of the trained first convolutional neural network model and return to the step of training a preset first convolutional neural network model by using each integrated optical flow graph in the first training set;
the detecting module 307 is configured to classify the integrated optical flow graph corresponding to the image sequence to be classified by using the target first convolutional neural network model.
By applying the embodiment of the invention shown in fig. 3, the integrated optical flow graphs generated from the foreground image sequences are used to train the convolutional neural network model. Because motion information is computed only for the moving regions in the foreground image sequence, interference of background changes with the integrated optical flow graph is avoided, the dynamic characteristics of flame can be expressed accurately, and the false detection rate of flame region detection is reduced.
In a specific implementation manner of the embodiment of the present invention, the first obtaining module 301 is specifically configured to:
acquiring a second training set and a second testing set, and acquiring a scene category label, a flame region category label and a non-flame region category label corresponding to each image in the second training set; wherein the second training set and the second testing set are both sets of images;
training a preset second convolutional neural network model by using each image in the second training set;
testing the trained second convolutional neural network model by using the images in the second test set to obtain a second test result;
judging whether the second test result is larger than a second preset threshold value or not; if so, taking the trained second convolutional neural network model as a target second convolutional neural network model; if not, adjusting the training parameters of the trained second convolutional neural network model, and returning to the step of training the preset second convolutional neural network model by using each image in the second training set;
and detecting the image sequence to be classified by using the target second convolutional neural network model to obtain a corresponding foreground image sequence.
By applying the embodiment of the invention, the second convolutional neural network is trained jointly on the tasks of image scene classification and flame region detection within the image; the scene information of the image is thereby fully exploited, further improving the detection rate and accuracy of flame regions in images.
In a specific implementation manner of the embodiment of the present invention, the first obtaining module 301 is specifically configured to:
and carrying out image enhancement processing on the images in the second training set by utilizing an image enhancement algorithm.
By applying the embodiment of the invention, the model can be fully trained.
In a specific implementation manner of the embodiment of the present invention, the first obtaining module 301 is specifically configured to:
preprocessing the image sequences in the second training set, wherein the preprocessing comprises: color histogram equalization processing, brightness and contrast transformation processing, horizontal mirror inversion processing, Gaussian blur processing and random noise adding processing.
By applying the embodiment of the invention, the model can be fully trained.
In a specific implementation manner of the embodiment of the present invention, the first obtaining module 301 is specifically configured to:
acquiring an optical flow graph for each pair of adjacent foreground images in the foreground image sequence by applying a dense optical flow algorithm to the optical flow features contained in the two adjacent foreground images; and superposing the optical flow graphs to obtain the integrated optical flow graph corresponding to the foreground image sequence.
By applying the embodiment of the invention, the integrated optical flow graph can be obtained.
In a specific implementation manner of the embodiment of the present invention, the first test result includes:
the accuracy of classifying the integrated optical flow graphs of the first test set by using the first convolutional neural network model.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents and improvements made within the spirit and principle of the present invention are intended to be included within the scope of the present invention.

Claims (8)

1. A method for flame image sequence classification using a convolutional neural network, the method comprising:
acquiring a second training set and a second testing set, and acquiring a scene category label, a flame region category label and a non-flame region category label corresponding to each image in the second training set; wherein the second training set and the second testing set are both sets of images;
training a preset second convolutional neural network model by using each image in the second training set;
testing the trained second convolutional neural network model by using the images in the second test set to obtain a second test result;
judging whether the second test result is larger than a second preset threshold value or not; if so, taking the trained second convolutional neural network model as a target second convolutional neural network model; if not, adjusting the training parameters of the trained second convolutional neural network model, and returning to the step of training the preset second convolutional neural network model by using each image in the second training set;
detecting image sequences to be classified by using the target second convolutional neural network model to obtain corresponding foreground image sequences, and acquiring, for each foreground image sequence, an integrated optical flow graph corresponding to the foreground image sequence;
acquiring a first training set and a first test set, wherein the first training set and the first test set are each a set of at least one of the integrated optical flow graphs;
for each integrated optical flow graph in the first training set, acquiring a category label indicating whether the integrated optical flow graph corresponds to an image sequence of real flames;
training a preset first convolutional neural network model by using each integrated optical flow graph in the first training set;
testing the trained first convolutional neural network model by using the integrated optical flow graphs in the first test set to obtain a first test result;
judging whether the first test result is greater than a first preset threshold; if so, taking the trained first convolutional neural network model as a target first convolutional neural network model; if not, adjusting the training parameters of the trained first convolutional neural network model, and returning to the step of training a preset first convolutional neural network model by using each integrated optical flow graph in the first training set;
and classifying the integrated optical flow graph corresponding to the image sequence to be classified by using the target first convolutional neural network model.
2. The method for classifying the flame image sequence by using the convolutional neural network as claimed in claim 1, wherein before acquiring the scene class label, the class label of the flame region and the class label of the non-flame region corresponding to the image, the method further comprises:
and carrying out image enhancement processing on the images in the second training set by utilizing an image enhancement algorithm.
3. The method for classifying the flame image sequence by using the convolutional neural network as claimed in claim 1, wherein before acquiring the scene class label, the class label of the flame region and the class label of the non-flame region corresponding to the image, the method further comprises:
preprocessing the image sequences in the second training set, wherein the preprocessing comprises: color histogram equalization processing, brightness and contrast transformation processing, horizontal mirror inversion processing, Gaussian blur processing and random noise adding processing.
4. The method for classifying the flame image sequence by using the convolutional neural network as claimed in claim 1, wherein the obtaining of the integrated light flow graph corresponding to the foreground image sequence comprises:
acquiring an optical flow graph for each pair of adjacent foreground images in the foreground image sequence by applying a dense optical flow algorithm to the optical flow features contained in the two adjacent foreground images; and superposing the optical flow graphs to obtain the integrated optical flow graph corresponding to the foreground image sequence.
5. The method of claim 1, wherein the first test result comprises:
and the accuracy of classifying the comprehensive optical flow graph of the first test set by using the first convolution neural network model.
6. An apparatus for classifying a sequence of flame images using a convolutional neural network, the apparatus comprising: a first acquisition module, a second acquisition module, a third acquisition module, a training module, a testing module, a judging module and a detecting module, wherein,
the first obtaining module is used for
Acquiring a second training set and a second testing set, and acquiring a scene category label, a flame region category label and a non-flame region category label corresponding to each image in the second training set; wherein the second training set and the second testing set are both sets of images;
training a preset second convolutional neural network model by using each image in the second training set;
testing the trained second convolutional neural network model by using the images in the second test set to obtain a second test result;
judging whether the second test result is larger than a second preset threshold value or not; if so, taking the trained second convolutional neural network model as a target second convolutional neural network model; if not, adjusting the training parameters of the trained second convolutional neural network model, and returning to the step of training the preset second convolutional neural network model by using each image in the second training set;
detecting image sequences to be classified by using the target second convolutional neural network model to obtain corresponding foreground image sequences, and acquiring, for each foreground image sequence, an integrated optical flow graph corresponding to the foreground image sequence;
the second obtaining module is configured to obtain a first training set and a first test set, where the first training set and the first test set are each a set of at least one of the integrated optical flow graphs;
the third obtaining module is configured to obtain, for each integrated optical flow graph in the first training set, a category label indicating whether the integrated optical flow graph corresponds to an image sequence of real flames;
the training module is used to train a preset first convolutional neural network model by using each integrated optical flow graph in the first training set;
the testing module is used to test the trained first convolutional neural network model by using the integrated optical flow graphs in the first test set to obtain a first test result;
the judging module is used to judge whether the first test result is greater than a first preset threshold; if so, to take the trained first convolutional neural network model as the target first convolutional neural network model; if not, to adjust the training parameters of the trained first convolutional neural network model and return to the step of training a preset first convolutional neural network model by using each integrated optical flow graph in the first training set;
the detection module is used to classify the integrated optical flow graph corresponding to the image sequence to be classified by using the target first convolutional neural network model.
7. The apparatus for classifying a flame image sequence using a convolutional neural network as claimed in claim 6, wherein the first obtaining module is specifically configured to:
acquiring an optical flow graph for each pair of adjacent foreground images in the foreground image sequence by applying a dense optical flow algorithm to the optical flow features contained in the two adjacent foreground images; and superposing the optical flow graphs to obtain the integrated optical flow graph corresponding to the foreground image sequence.
8. The apparatus of claim 6, wherein the first test result comprises:
and the accuracy of classifying the comprehensive optical flow graph of the first test set by using the first convolution neural network model.
CN201810020679.9A 2018-01-10 2018-01-10 Flame image sequence classification method and device by using convolutional neural network Active CN108257347B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810020679.9A CN108257347B (en) 2018-01-10 2018-01-10 Flame image sequence classification method and device by using convolutional neural network


Publications (2)

Publication Number Publication Date
CN108257347A CN108257347A (en) 2018-07-06
CN108257347B true CN108257347B (en) 2020-09-29

Family

ID=62725984

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810020679.9A Active CN108257347B (en) 2018-01-10 2018-01-10 Flame image sequence classification method and device by using convolutional neural network

Country Status (1)

Country Link
CN (1) CN108257347B (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109117742B (en) * 2018-07-20 2022-12-27 百度在线网络技术(北京)有限公司 Gesture detection model processing method, device, equipment and storage medium
CN109165575B (en) * 2018-08-06 2024-02-20 天津艾思科尔科技有限公司 Pyrotechnic recognition algorithm based on SSD frame
CN110110780B (en) * 2019-04-30 2023-04-07 南开大学 Image classification method based on antagonistic neural network and massive noise data
CN110765937A (en) * 2019-10-22 2020-02-07 新疆天业(集团)有限公司 Coal yard spontaneous combustion detection method based on transfer learning
CN111145275A (en) * 2019-12-30 2020-05-12 重庆市海普软件产业有限公司 Intelligent automatic control forest fire prevention monitoring system and method
CN113712525A (en) * 2020-05-21 2021-11-30 深圳市理邦精密仪器股份有限公司 Physiological parameter processing method and device and medical equipment
CN112001375B (en) * 2020-10-29 2021-01-05 成都睿沿科技有限公司 Flame detection method and device, electronic equipment and storage medium

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101339602A (en) * 2008-07-15 2009-01-07 中国科学技术大学 Video frequency fire hazard aerosol fog image recognition method based on light stream method
CN102034240A (en) * 2010-12-23 2011-04-27 北京邮电大学 Method for detecting and tracking static foreground
CN106250845A (en) * 2016-07-28 2016-12-21 北京智芯原动科技有限公司 Flame detecting method based on convolutional neural networks and device
CN106709521A (en) * 2016-12-26 2017-05-24 深圳极视角科技有限公司 Fire pre-warning method and fire pre-warning system based on convolution neural network and dynamic tracking
CN106934404A (en) * 2017-03-10 2017-07-07 深圳市瀚晖威视科技有限公司 A kind of image flame identifying system based on CNN convolutional neural networks
CN107480729A (en) * 2017-09-05 2017-12-15 江苏电力信息技术有限公司 A kind of transmission line forest fire detection method based on depth space-time characteristic of field


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
FlowNet: Learning Optical Flow with Convolutional Networks; Alexey Dosovitskiy et al.; 2015 IEEE International Conference on Computer Vision; 2016-02-18; pp. 2758-2766 *


Similar Documents

Publication Publication Date Title
CN108257347B (en) Flame image sequence classification method and device by using convolutional neural network
CN110688925B (en) Cascade target identification method and system based on deep learning
Rijal et al. Ensemble of deep neural networks for estimating particulate matter from images
CN107609470B (en) Method for detecting early smoke of field fire by video
CN111611905B (en) Visible light and infrared fused target identification method
CN111626188B (en) Indoor uncontrollable open fire monitoring method and system
US20160334546A1 (en) Weather recognition method and device based on image information detection
CN108805900B (en) Method and device for determining tracking target
CN109472193A (en) Method for detecting human face and device
CN108399359B (en) Real-time fire detection early warning method under video sequence
CN110334660A (en) A kind of forest fire monitoring method based on machine vision under the conditions of greasy weather
CN114648714A (en) YOLO-based workshop normative behavior monitoring method
Wang et al. Fire detection based on flame color and area
CN116629465B (en) Smart power grids video monitoring and risk prediction response system
Khan et al. Machine vision based indoor fire detection using static and dynamic features
CN107729811B (en) Night flame detection method based on scene modeling
CN115880765A (en) Method and device for detecting abnormal behavior of regional intrusion and computer equipment
CN114596244A (en) Infrared image identification method and system based on visual processing and multi-feature fusion
CN107301653A (en) Video image fire disaster flame detection method based on BP neural network
Sarkar et al. Universal skin detection without color information
Huda et al. Effects of pre-processing on the performance of transfer learning based person detection in thermal images
CN111325185B (en) Face fraud prevention method and system
CN111666916B (en) Kitchen violation identification method based on self-learning technology
CN112115824A (en) Fruit and vegetable detection method and device, electronic equipment and computer readable medium
Yang et al. Fire alarm for video surveillance based on convolutional neural network and SRU

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant