CN113128422A - Image smoke and fire detection method and system of deep neural network - Google Patents

Image smoke and fire detection method and system of deep neural network

Info

Publication number
CN113128422A
CN113128422A (application CN202110441498.5A)
Authority
CN
China
Prior art keywords
image
sample
smoke
network
fire
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110441498.5A
Other languages
Chinese (zh)
Other versions
CN113128422B (en
Inventor
陈秀祥
张大福
李秋华
胡俊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chongqing Helpsoft Industry Co ltd
Original Assignee
Chongqing Helpsoft Industry Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chongqing Helpsoft Industry Co ltd filed Critical Chongqing Helpsoft Industry Co ltd
Priority to CN202110441498.5A priority Critical patent/CN113128422B/en
Publication of CN113128422A publication Critical patent/CN113128422A/en
Application granted granted Critical
Publication of CN113128422B publication Critical patent/CN113128422B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/10Terrestrial scenes
    • G06V20/188Vegetation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/084Backpropagation, e.g. using gradient descent
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/50Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/80Geometric correction
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/52Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20212Image combination
    • G06T2207/20221Image fusion; Image merging
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30181Earth observation
    • G06T2207/30188Vegetation; Agriculture
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02ATECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A40/00Adaptation technologies in agriculture, forestry, livestock or agroalimentary production
    • Y02A40/10Adaptation technologies in agriculture, forestry, livestock or agroalimentary production in agriculture
    • Y02A40/28Adaptation technologies in agriculture, forestry, livestock or agroalimentary production in agriculture specially adapted for farming

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Multimedia (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Fire-Detection Mechanisms (AREA)
  • Image Analysis (AREA)
  • Closed-Circuit Television Systems (AREA)

Abstract

The invention relates to the technical field of image processing, in particular to an image smoke and fire detection method and system based on a deep neural network. The method comprises the following steps: drawing boundary points of firework areas on the sample images in the image sample set, generating corresponding mask images, and multiplying the mask images with the sample images to obtain area images containing only the firework areas; constructing a firework target simulation framework based on a conditional deep convolutional generative adversarial network, and feeding the area images into the network to train and fit it to generate new firework samples; and acquiring from the forest fire monitoring system an initial image that contains no smoke and fire target, and fusing it with the firework samples of S2 to construct a training data set. The system comprises a camera module, an image set module, an image operation module, a processing module, a neural network construction module and a data operation module. According to the invention, no manual ignition is needed to obtain smoke and fire samples at the initial stage of forest fire monitoring, which is more convenient, prevents the risk of forest fires caused by manual ignition, and saves cost.

Description

Image smoke and fire detection method and system of deep neural network
Technical Field
The invention relates to the technical field of image processing, in particular to an image smoke and fire detection method and system of a deep neural network.
Background
Forest resources are very important natural resources, and losses caused by forest fires are very large every year, so prevention of forest fires is very important. Traditional forest fire prevention relies mainly on manual inspection, which suffers from poor timeliness, low efficiency and high cost. To overcome these shortcomings, detection systems for forest fire prevention based on computer technology and photoelectric sensors, which can scan forests over large areas, over long periods and quickly to monitor fires, have been deployed on a large scale. Accordingly, automatic detection and positioning technology for forest smoke and fire based on digital image and digital video processing is widely applied, greatly improving the efficiency and precision of early warning in forest smoke and fire inspection.
However, conventional digital image processing technology based on hand-crafted features faces problems such as poor adaptability and a high false alarm rate when processing forest patrol images and videos: it adapts poorly to different seasons and different illumination, and has difficulty distinguishing clouds, fog, and smoke or fire.
To address this poor adaptability, deep neural network technology, with its strong fitting capability, has been widely applied in image processing. Applying deep neural networks to smoke and fire detection in forest inspection video images greatly improves detection precision and reduces the false alarm rate. During training, a deep neural network needs a large number of firework image samples from different scenes to improve its generalization capability. However, acquiring a large number of smoke and fire images of an actual scene is difficult; especially for newly deployed monitoring device scenes, the risk and cost are extremely high because smoke and fire samples are generally acquired by manual ignition. At deployment time, a conventional deep neural network needs large computing resources and high computing capacity, generally a GPU (graphics processing unit) computing server, which is costly and difficult to integrate into the front end of a camera to achieve end-to-end low-cost, real-time and efficient detection.
Disclosure of Invention
The invention aims to provide an image smoke and fire detection method of a deep neural network, so as to solve the problems of high risk and high cost of obtaining smoke and fire samples by manual ignition.
The image smoke and fire detection method of the deep neural network in the scheme comprises the following steps:
step S1, obtaining a plurality of sample images with firework targets, forming an image sample set by the sample images containing the firework targets, drawing boundary points of firework areas on the sample images in the image sample set, generating corresponding mask images, and multiplying the mask images and the sample images to obtain area images only containing the firework areas;
step S2, constructing a firework target simulation framework of the conditional depth convolution generation countermeasure network, sending the area image into the conditional depth convolution generation countermeasure network for training, and fitting to generate a new firework sample;
and step S3, acquiring an initial image of the forest fire monitoring system, wherein the image does not contain a smoke and fire target, and fusing the initial image and the smoke and fire sample in the step S2 to construct a training data set.
The beneficial effect of this scheme is:
the image sample set is formed by adopting the obtained sample image containing smoke and fire, namely the image of fire in the shot actual environment, the boundary point of a smoke and fire area is drawn out from the sample image to generate a mask image, the mask image and the sample image are multiplied to obtain an area image of the smoke and fire area, then training is carried out to obtain a smoke and fire sample, the smoke and fire sample is fused into an initial image without fire to obtain a training data set, the initial image is an image when the forest environment to be detected does not send the fire, the image in the training data set is used as the smoke and fire sample, manual ignition is not needed to be carried out at the forest environment where the fire needs to be monitored to obtain the smoke and fire sample, the method is more convenient, the risk of forest fire caused by manual ignition is prevented, and.
Further, the step S2 includes the following sub-steps:
s2.1: adopting a deconvolution neural network to construct a generation network in a generation countermeasure network, inputting random noise and outputting a simulation sample;
s2.2: using the sample image and the generated simulation sample for training to generate a discrimination network in the countermeasure network, and outputting the probability that the simulation sample belongs to the firework category;
s2.3: generating a group of new simulation samples for judging the training of the network after updating and generating network parameters through a back propagation algorithm;
s2.4: and repeating the steps S2.1, S2.2 and S2.3 to train the generation network and the discrimination network until the data distribution of the generated simulation samples is 90% identical to that of the sample images, so that the discrimination network cannot distinguish simulation samples from sample images and simulation images of the firework target can be generated at random.
The beneficial effects are that: after network parameters are updated and generated through a back propagation algorithm and simulation samples are generated, training of a discrimination network is carried out, the discrimination network is used for judging the simulation samples, simulation images of smoke and fire targets are generated, and accuracy of the simulation images of the smoke and fire targets is improved.
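The alternating updates of S2.1-S2.3 can be illustrated with a deliberately tiny stand-in: a one-dimensional "generator" and "discriminator" with two parameters each, trained by the back-propagated gradients of the standard GAN objective. This is not the conditional deep convolutional GAN of the patent, only a sketch of the adversarial loop; all names and the toy data distribution are assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

# toy 1-D stand-in: generator G(z) = a*z + b, discriminator D(x) = sigmoid(w*x + c)
a, b = 1.0, 0.0          # generator parameters (S2.1: noise -> simulation sample)
w, c = 0.1, 0.0          # discriminator parameters (S2.2)
lr = 0.05
sig = lambda t: 1.0 / (1.0 + np.exp(-t))

for step in range(200):
    z = rng.normal()                  # random noise input
    x_real = rng.normal(3.0, 0.5)     # stands in for a real firework region image
    x_fake = a * z + b                # S2.1: generated simulation sample
    # S2.2: discriminator ascent on log D(real) + log(1 - D(fake))
    d_real, d_fake = sig(w * x_real + c), sig(w * x_fake + c)
    w += lr * ((1 - d_real) * x_real - d_fake * x_fake)
    c += lr * ((1 - d_real) - d_fake)
    # S2.3: generator ascent on log D(fake), gradient back-propagated through D
    g = (1 - sig(w * x_fake + c)) * w
    a += lr * g * z
    b += lr * g
```

The loop order mirrors S2.1-S2.4: generate a simulation sample, update the discriminator on real and fake data, then update the generator through the discriminator's gradient, and repeat.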
Further, the step S3 includes the following sub-steps:
s3.1: sequentially zooming sample images in the image sample set according to the actual zooming parameters of the camera;
s3.2: superposing the zoomed sample image and an initial image which is recorded by a monitoring system and does not contain a firework target at a randomly selected position, and recording the regional information of the firework target to finish the image fusion operation;
s3.3: and completing the construction of the training data set after all the images are fused in sequence.
The beneficial effects are that: the accuracy of the firework area after image fusion is improved by scaling the sample image according to the actual scaling parameters of the camera and then performing superposition fusion.
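A minimal sketch of the scaling-and-superposition fusion of S3.1-S3.2, assuming the smoke/fire region image has zero pixels outside the firework area (as produced by the mask multiplication of S1); nearest-neighbour resizing stands in for the camera's actual scaling, and the returned tuple represents the recorded region information:

```python
import random
import numpy as np

def fuse(background: np.ndarray, region: np.ndarray, scale: float):
    h = int(region.shape[0] * scale)
    w = int(region.shape[1] * scale)
    # nearest-neighbour resize stands in for the camera's actual scaling (S3.1)
    ys = np.linspace(0, region.shape[0] - 1, h).astype(int)
    xs = np.linspace(0, region.shape[1] - 1, w).astype(int)
    small = region[ys][:, xs]
    # superpose at a randomly selected position (S3.2)
    y0 = random.randint(0, background.shape[0] - h)
    x0 = random.randint(0, background.shape[1] - w)
    out = background.copy()
    keep = small.sum(axis=-1) > 0          # only the firework pixels
    out[y0:y0 + h, x0:x0 + w][keep] = small[keep]
    return out, (x0, y0, w, h)             # recorded region information

random.seed(0)
bg = np.zeros((20, 20, 3), dtype=np.uint8)
fire = np.full((4, 4, 3), 100, dtype=np.uint8)
fused, box = fuse(bg, fire, 0.5)
```

The `(x0, y0, w, h)` tuple is what a detection label would be built from, since the paste position is known exactly.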
Further, the method also comprises the following steps:
step S4, scaling the fused image in the training data set according to the actual scaling parameters of the camera;
step S5, performing a photometric distortion operation on the fused image after the scale scaling to generate brightness samples under different illumination;
step S6, performing a geometric distortion operation on the brightness sample after the photometric distortion to obtain a processed sample, wherein the geometric distortion operation comprises stretching, rotating and translating operations;
step S7, randomly extracting two processing samples to fuse according to a preset proportion coefficient, and repeating the fusion operation for multiple times;
and S8, sending the training data set processed in the steps S4-S7 into a preset depth network, and carrying out smoke and fire detection training to obtain a complete weight model.
The beneficial effects are that: images in the training data set are fusion images, the fusion images in the training data set are processed, then repeated fusion is carried out, the coverage degree of smoke and fire samples under different brightness to actual smoke and fire forms is higher, and the accuracy of smoke and fire detection of a complete weight model is improved.
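The augmentation chain of S4-S7 can be sketched as three small functions: brightness/contrast jitter as one simple form of photometric distortion, a translation as a minimal geometric distortion, and a fixed-coefficient blend for the S7 fusion. The specific jitter ranges are assumptions, not values from the patent:

```python
import numpy as np

rng = np.random.default_rng(0)

def photometric(img: np.ndarray, brightness=0.2, contrast=0.2) -> np.ndarray:
    # random brightness/contrast jitter: one simple photometric distortion (S5)
    b = rng.uniform(-brightness, brightness) * 255.0
    c = 1.0 + rng.uniform(-contrast, contrast)
    return np.clip(img.astype(np.float32) * c + b, 0, 255).astype(np.uint8)

def geometric(img: np.ndarray, max_shift=4) -> np.ndarray:
    # translation via np.roll: a minimal geometric distortion (S6)
    dy, dx = rng.integers(-max_shift, max_shift + 1, size=2)
    return np.roll(img, (int(dy), int(dx)), axis=(0, 1))

def fuse_pair(a: np.ndarray, b: np.ndarray, coeff=0.5) -> np.ndarray:
    # S7: blend two processed samples with a preset proportionality coefficient
    mixed = coeff * a.astype(np.float32) + (1.0 - coeff) * b.astype(np.float32)
    return mixed.astype(np.uint8)
```

Applied repeatedly with random pairs, `fuse_pair` produces the repeated fusion of step S7, broadening the coverage of smoke and fire appearances in the training set.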
Further, the method also comprises the following steps:
step S9, compressing the 16-bit floating point precision of the complete weight model in the step S8 to 8-bit integer data precision;
and step S10, reconstructing and optimizing the preset depth network structure.
The beneficial effects are that: and the complete weight model is compressed, so that the operation speed is increased, and the operation cost is reduced.
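One common way to realize the S9 compression is symmetric linear quantization, mapping each float weight onto the signed 8-bit range via a per-tensor scale; the sketch below is an assumption about the scheme, since the patent does not specify the quantization method:

```python
import numpy as np

def quantize_int8(weights: np.ndarray):
    # symmetric linear quantization: map float weights onto [-127, 127]
    scale = np.abs(weights).max() / 127.0
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    # recover approximate float weights for comparison
    return q.astype(np.float32) * scale

w = np.array([-1.0, -0.5, 0.0, 0.25, 1.0], dtype=np.float32)
q, s = quantize_int8(w)
w_hat = dequantize(q, s)
```

The round trip `w -> q -> w_hat` is lossy but close; the payoff is that inference arithmetic and weight storage shrink from 16-bit floats to 8-bit integers.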
Further, the step S10 includes the following sub-steps:
s10.1: eliminating useless output layers in the preset deep network by analyzing the preset deep network model;
s10.2: three layers of a convolution layer, a batch normalization layer and a rectification linear unit in a preset depth network are fused into one layer, and a network structure is vertically integrated;
s10.3: and fusing the layers which are input into the same tensor and execute the same operation in the preset depth network together to horizontally combine the network structure.
The beneficial effects are that: the accuracy of fire detection is improved by reconstructing and optimizing the preset depth network.
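The vertical fusion of S10.2 rests on the fact that batch normalization after a convolution is an affine map per output channel, so its statistics can be folded into the convolution's weights and bias. A sketch with the convolution written as a matrix multiply (shapes and names are illustrative):

```python
import numpy as np

def fuse_conv_bn(w, b, gamma, beta, mean, var, eps=1e-5):
    # BN computes gamma * (y - mean) / sqrt(var + eps) + beta on y = conv(x);
    # folding that affine map into the conv gives one equivalent layer (S10.2)
    std = np.sqrt(var + eps)
    w_fused = w * (gamma / std)[:, None]      # scale each output channel's weights
    b_fused = (b - mean) * gamma / std + beta
    return w_fused, b_fused

w0 = np.array([[1.0, 2.0], [3.0, 4.0]])       # 2 output channels, conv as matmul
b0 = np.array([0.5, -0.5])
gamma = np.array([2.0, 0.5]); beta = np.array([1.0, 0.0])
mean = np.array([0.1, 0.2]); var = np.array([1.0, 4.0])
w_f, b_f = fuse_conv_bn(w0, b0, gamma, beta, mean, var)
```

The fused layer computes exactly what conv-then-BN computed, with one memory pass instead of two, which is the point of the vertical integration.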
The image smoke and fire detection system of the deep neural network comprises a camera module, an image set module, an image operation module, a processing module, a neural network building module and a data operation module;
the image acquisition module is used for acquiring a plurality of sample images with smoke and fire, the image operation module is used for tracing boundary points of smoke and fire areas on the sample images and generating mask images, the processing module multiplies the mask images with the sample images to obtain area images only containing the smoke and fire areas, the neural network building module is used for building a smoke and fire target simulation framework of a conditional depth convolution generation countermeasure network, the area images are added into the conditional depth convolution data generation countermeasure network to be trained and fitted to generate new smoke and fire samples, the processing module acquires the initial images and fuses the initial images with the smoke and fire samples to build a training data set, the data operation module is used for carrying out scale scaling on the fusion images in the training data set according to actual scaling parameters of a camera, and the data operation module carries out distortion operation on the fusion images subjected to scale scaling to generate brightness sample luminosity samples under different illumination conditions The data operation module carries out geometric distortion on the brightness sample with the luminosity distortion to obtain a processing sample, the processing module randomly obtains two processing samples for multiple times and fuses according to a preset proportionality coefficient, and the processing module sends a training data set subjected to the fusion of the preset proportionality coefficient into a preset depth network to carry out smoke and fire detection training to obtain a complete weight model.
The beneficial effect of this scheme is:
the method comprises the steps of obtaining a sample image containing a smoke and fire target through an image set module, obtaining an original image of a forest fire monitoring fire which does not generate fire at an initial moment through a camera module, obtaining a region image of an independent smoke and fire region from the sample image, adding the region image to a conditional depth volume data generation countermeasure network to form a smoke and fire sample, fusing the initial image and the smoke and fire sample to form a training data set, sequentially carrying out scale scaling, luminosity distortion and geometric distortion on the fused image in the training data set, fusing and training two random processing samples to obtain a complete weight model, forming the smoke and fire sample without igniting at the initial monitoring moment of the forest fire, and improving the accuracy of a training result through the existing smoke and fire sample through processing.
Further, the processing module compresses the 16-bit floating point precision of the complete weight model to 8-bit integer data precision, and reconstructs and optimizes the structure of the preset depth network.
The beneficial effects are that: and the processing module performs precision compression on the complete weight model, and performs structural reconstruction and optimization of a preset depth network, so that the accuracy of subsequent fire detection is improved.
Drawings
FIG. 1 is a block flow diagram of an image smoke and fire detection method of a deep neural network according to an embodiment of the present invention;
FIG. 2 is a schematic block diagram of an image fire and smoke detection system of a deep neural network according to an embodiment of the present invention;
FIG. 3 is a schematic block diagram of a training data set training process in an image fire detection system of a deep neural network according to an embodiment of the present invention;
fig. 4 is a flowchart of training the YOLO V4 deep neural network in the image smoke and fire detection method of the deep neural network according to the embodiment of the present invention.
Detailed Description
The following is a more detailed description of the present invention by way of specific embodiments.
Example one
An image smoke and fire detection system of a deep neural network, as shown in figure 2, comprises a camera module, an image set module, an image operation module, a processing module, a neural network building module and a data operation module. The camera module can use existing camera equipment for monitoring forest fires; the image set module can use existing hardware storage equipment; the image operation module can use existing software; and the processing module can use an existing quad-core 64-bit ARM CPU with a 128-core integrated NVIDIA GPU and 4 GB of LPDDR4 memory.
The image acquisition module is used for acquiring a plurality of sample images with smoke and fire; the image operation module is used for drawing boundary points of a firework area on the sample image and generating a mask image, and the image operation module can process the sample image through the existing image processing software; the processing module multiplies the mask image with the sample image to obtain an area image containing only the smoke and fire areas.
The neural network building module is used for building a firework target simulation framework based on a conditional deep convolutional generative adversarial network; it feeds the area images into the network for training and fitting to generate new firework samples. The processing module obtains the initial image and fuses it with the firework samples to build a training data set. The data operation module scales the fused images in the training data set according to the actual scaling parameters of the camera, performs photometric distortion on the scaled fused images to generate brightness samples under different illumination, and performs geometric distortion on the photometrically distorted brightness samples to obtain processed samples. The processing module repeatedly takes two processed samples at random and fuses them according to a preset scale factor, sends the training data set fused by the preset scale factor into the preset depth network for firework detection training to obtain a complete weight model, compresses the 16-bit floating point precision of the complete weight model to 8-bit integer data precision, and reconstructs and optimizes the structure of the preset depth network.
As shown in fig. 1, the image smoke and fire detection method of the deep neural network based on the image smoke and fire detection system of the deep neural network includes the following steps:
step S1, obtaining a plurality of sample images with firework targets, forming an image sample set by the sample images containing the firework targets, drawing boundary points of firework areas on the sample images in the image sample set, generating corresponding mask images, and multiplying the mask images and the sample images to obtain area images only containing the firework areas;
step S2, constructing a firework target simulation framework of the conditional depth convolution generation countermeasure network, sending the region image into the conditional depth convolution generation countermeasure network for training, and fitting to generate a new firework sample, wherein the method specifically comprises the following substeps, S2.1: adopting a deconvolution neural network to construct a generation network in a generation countermeasure network, inputting random noise and outputting a simulation sample; s2.2: using the sample image and the generated simulation sample for training to generate a discrimination network in the countermeasure network, and outputting the probability that the simulation sample belongs to the firework category; s2.3: generating a group of new simulation samples for judging the training of the network after updating and generating network parameters through a back propagation algorithm; s2.4: repeating S2.1, S2.2 and S2.3, training a generation network and a discrimination network, and enabling the data distribution of the generated simulation sample and the sample image to be 90% identical, so that the discrimination network cannot distinguish the simulation sample from the sample image to randomly generate a simulation image of a firework target;
step S3, acquiring an initial image of the forest fire monitoring system, which does not include a smoke and fire target, and fusing the initial image and the smoke and fire sample in step S2 to construct a training data set, specifically including the following substeps, as shown in fig. 3, S3.1: sequentially zooming sample images in the image sample set according to the actual zooming parameters of the camera; s3.2: superposing the zoomed sample image and an initial image which is recorded by a monitoring system and does not contain a firework target at a randomly selected position, and recording the regional information of the firework target to finish the image fusion operation; s3.3: completing the construction of a training data set after all the images are fused in sequence;
step S4, scaling the fused image in the training data set according to the actual scaling parameters of the camera;
step S5, performing a photometric distortion operation on the fused image after the scale scaling to generate brightness samples under different illumination;
step S6, performing a geometric distortion operation on the brightness sample after the photometric distortion to obtain a processed sample, wherein the geometric distortion operation comprises stretching, rotating and translating operations;
step S7, randomly extracting two processing samples to fuse according to a preset proportion coefficient, and repeating the fusion operation for multiple times;
and step S8, sending the training data set processed in the steps S4-S7 into a preset depth network and performing smoke and fire detection training to obtain a complete weight model, as shown in FIG. 4. The preset depth network is the existing YOLO V4 deep neural network, whose structure consists of the backbone network CSPDarknet53, a Neck structure and a Head structure: the backbone network continuously extracts feature maps, the Neck extracts and splices the feature maps for detection, and the Head predicts the object type and position;
step S9, compressing the 16-bit floating point precision of the complete weight model in the step S8 to 8-bit integer data precision;
step S10, reconstructing and optimizing the preset deep network structure, and during reconstruction and optimization, performing the following substeps:
s10.1: analyzing the preset depth network model using the built-in functions of existing software, and eliminating useless output layers in the preset depth network;
s10.2: three layers of a convolution layer, a batch normalization layer and a rectification linear unit in a preset depth network are fused into one layer, and a network structure is vertically integrated;
s10.3: and fusing the layers which are input into the same tensor and execute the same operation in the preset depth network together to horizontally combine the network structure.
The forest smoke and fire detection of this embodiment needs no manual ignition to create firework samples when detection begins in a new area, saving labor; it is more convenient to use and achieves low-cost, real-time, efficient detection.
Example two
The difference from the first embodiment is that the image smoke and fire detection system of the deep neural network further includes a color identification module. The color identification module identifies color information at a plurality of preset position points in the sample image and sends the color information to the processing module; the identification can be performed with existing image-editing (PS) software. The preset position points are set at the intersections of a grid, with their density set so that each cell of the grid has an area of one square millimeter. The processing module obtains the color information, counts the color types, and obtains the color type corresponding to the maximum count value. It then compares that color type with preset types and determines the shooting time information of the sample image, which is either daytime or nighttime: for example, if the color type corresponding to the maximum count value is green, daytime is determined; if it is black, nighttime is determined; and if it is orange or a similar color, nighttime is determined. The processing module judges whether the shooting time information of the sample images includes both daytime and nighttime. If so, it sends the sample images to the image operation module; if not, it suspends sending the sample images to the image operation module until both daytime and nighttime are present in the shooting time information, at which point it sends the sample images to the image operation module.
In step S1, the acquisition of the sample image includes the following substeps:
s1.1, identifying color information at a plurality of preset position points in a sample image, the preset position points being arranged in a grid; counting the color types of the color information to obtain count values, comparing the count values of the different color types to determine the color type corresponding to the maximum count value, and comparing that color type with preset types to determine the shooting time information of the sample image;
s1.2, when the shooting time information of the sample images has the daytime time and the night time at the same time, forming the sample images into an image sample set, when the shooting time information of the sample images has the daytime time or the night time, suspending the forming of the sample images into the image sample set until the shooting time information of the sample images has the daytime time and the night time at the same time, and forming the sample images into the image sample set.
Between daytime and nighttime, the brightness, color, and other characteristics of a smoke or fire target in a captured sample image differ, which would make the subsequently simulated smoke and fire diverge from actual smoke and fire. The second embodiment therefore identifies the color information at a plurality of position points in each sample image, counts the occurrences of each color type, determines the shooting time from the color type with the maximum count value, and finally decides from that time information whether the sample images may form the image sample set. This improves the completeness of the smoke and fire target appearances covered by the sample images obtained before detection, so that the simulated smoke and fire are closer to actual smoke and fire.
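The day/night decision described above can be sketched as follows. This is a minimal illustration, not the patented implementation: the color thresholds, the `coarse_color` mapping, and the function names are all assumptions, and sampling colors at grid intersections of a real image is replaced here by a plain list of RGB triples.

```python
from collections import Counter

def coarse_color(rgb):
    """Map an (R, G, B) triple to a coarse color name.
    Thresholds are illustrative assumptions, not values from the patent."""
    r, g, b = rgb
    if r < 60 and g < 60 and b < 60:
        return "black"
    if g > r and g > b:
        return "green"
    if r > 180 and 80 < g < 180 and b < 100:
        return "orange"
    return "other"

def shooting_time(grid_colors):
    """Classify a sample image as 'day' or 'night' from the colors sampled
    at its preset grid points: green dominating suggests a daytime forest
    scene; black, or orange flame glow, suggests nighttime."""
    counts = Counter(coarse_color(c) for c in grid_colors)
    dominant, _ = counts.most_common(1)[0]
    return "day" if dominant == "green" else "night"

# Mostly green grid points -> daytime; mostly black points -> nighttime.
print(shooting_time([(30, 120, 40)] * 5 + [(10, 10, 10)] * 2))   # day
print(shooting_time([(10, 10, 10)] * 6 + [(200, 120, 40)] * 2))  # night
```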
The foregoing is merely an embodiment of the present invention; common general knowledge, such as known specific structures and features of the embodiments, is not described here in further detail. It should be noted that those skilled in the art can make several changes and modifications without departing from the structure of the present invention; these should also be regarded as falling within the protection scope of the present invention and do not affect the effect of the implementation of the invention or the practicability of the patent. The scope of protection of the present application shall be determined by the contents of the claims, and the detailed description in the specification may be used to interpret the contents of the claims.

Claims (8)

1. An image smoke and fire detection method of a deep neural network is characterized by comprising the following steps:
step S1, obtaining a plurality of sample images containing smoke and fire targets, forming the sample images into an image sample set, tracing the boundary points of the smoke and fire areas on the sample images in the image sample set to generate corresponding mask images, and multiplying the mask images with the sample images to obtain area images containing only the smoke and fire areas;
step S2, constructing a smoke and fire target simulation framework based on a conditional deep convolutional generative adversarial network, sending the area images into the network for training, and fitting to generate new smoke and fire samples;
and step S3, obtaining initial images of the forest fire monitoring system that contain no smoke and fire target, and fusing the initial images with the smoke and fire samples of step S2 to construct a training data set.
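The mask-multiplication step of claim 1 can be sketched in NumPy. This is an illustrative aside, not part of the claims; the array shapes and names are assumptions.

```python
import numpy as np

def region_image(sample, mask):
    """Multiply an HxWx3 sample image by an HxW binary mask, zeroing every
    pixel outside the traced smoke/fire boundary."""
    return sample * mask[..., None]  # broadcast the mask over the channels

sample = np.full((4, 4, 3), 200, dtype=np.uint8)  # toy sample image
mask = np.zeros((4, 4), dtype=np.uint8)
mask[1:3, 1:3] = 1                                # traced smoke/fire region
region = region_image(sample, mask)
# Only the 2x2 traced region keeps its pixel values; the rest is zero.
```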
2. The image smoke and fire detection method of the deep neural network of claim 1, wherein: the step S2 includes the following sub-steps:
s2.1: adopting a deconvolutional neural network to construct the generation network of the generative adversarial network, which takes random noise as input and outputs simulation samples;
s2.2: training the discrimination network of the generative adversarial network with the sample images and the generated simulation samples, the discrimination network outputting the probability that a simulation sample belongs to the smoke and fire category;
s2.3: after updating the generation network parameters through a back-propagation algorithm, generating a new group of simulation samples for training the discrimination network;
s2.4: repeating steps S2.1, S2.2 and S2.3 to train the generation network and the discrimination network until the data distribution of the generated simulation samples is 90% identical to that of the sample images, so that the discrimination network cannot distinguish the simulation samples from the sample images and simulation images of smoke and fire targets can be generated at random.
3. The image smoke and fire detection method of the deep neural network of claim 2, wherein: the step S3 includes the following sub-steps:
s3.1: sequentially scaling the sample images in the image sample set according to the actual scaling parameters of the camera;
s3.2: superposing each scaled sample image, at a randomly selected position, onto an initial image recorded by the monitoring system that contains no smoke and fire target, and recording the area information of the smoke and fire target to complete the image fusion operation;
s3.3: completing construction of the training data set after all images have been fused in sequence.
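The fusion of claim 3 can be sketched as a random paste of the masked region image onto a smoke-free background, recording the target box. This is an illustrative aside, not part of the claims; the function name, box format, and the rule that zero pixels leave the background untouched are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def fuse(background, smoke_patch):
    """Superpose a (scaled) region image onto a smoke-free background at a
    random position and record the target's area box (x, y, w, h). Pixels
    that are zero in the patch (outside the masked region) leave the
    background untouched."""
    H, W, _ = background.shape
    h, w, _ = smoke_patch.shape
    y = int(rng.integers(0, H - h + 1))
    x = int(rng.integers(0, W - w + 1))
    fused = background.copy()
    window = fused[y:y + h, x:x + w]
    np.copyto(window, smoke_patch, where=smoke_patch > 0)
    return fused, (x, y, w, h)

background = np.zeros((10, 10, 3), dtype=np.uint8)
patch = np.full((2, 2, 3), 255, dtype=np.uint8)
fused, box = fuse(background, patch)
```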
4. The image smoke and fire detection method of the deep neural network of claim 3, further comprising the steps of:
step S4, scaling the fused images in the training data set according to the actual scaling parameters of the camera;
step S5, performing a photometric distortion operation on the scaled fused images to generate brightness samples under different illumination;
step S6, performing a geometric distortion operation on the photometrically distorted brightness samples to obtain processed samples, the geometric distortion operation including stretching, rotation and translation;
step S7, randomly extracting two processed samples, fusing them according to a preset proportion coefficient, and repeating the fusion operation multiple times;
and step S8, sending the training data set processed by steps S4-S7 into a preset deep network for smoke and fire detection training to obtain a complete weight model.
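The augmentations of steps S5 and S7 can be sketched as follows. This is an illustrative aside, not part of the claims: the patent does not fix the distortion or blend formulas, so a simple brightness gain and a mixup-style convex combination with the preset proportion coefficient are assumed.

```python
import numpy as np

def photometric_distort(img, gain):
    """S5 (assumed form): vary global brightness by a gain factor, one
    simple kind of photometric distortion."""
    return np.clip(img.astype(np.float32) * gain, 0, 255).astype(np.uint8)

def mix_samples(img_a, img_b, coeff):
    """S7 (assumed form): fuse two processed samples with a preset
    proportion coefficient via a convex combination."""
    a = img_a.astype(np.float32)
    b = img_b.astype(np.float32)
    return (coeff * a + (1.0 - coeff) * b).astype(np.uint8)

a = np.full((4, 4, 3), 100, dtype=np.uint8)
b = np.full((4, 4, 3), 200, dtype=np.uint8)
mixed = mix_samples(a, b, 0.5)        # every pixel becomes 150
darker = photometric_distort(a, 0.5)  # every pixel becomes 50
```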
5. The image smoke and fire detection method of the deep neural network of claim 4, further comprising the steps of:
step S9, compressing the complete weight model of step S8 from 16-bit floating point precision to 8-bit integer precision;
and step S10, reconstructing and optimizing the structure of the preset deep network.
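The precision compression of step S9 can be sketched as symmetric per-tensor quantization. This is an illustrative aside, not part of the claims; the calibration scheme and names are assumptions, since the patent only states the precision change.

```python
import numpy as np

def quantize_int8(w_fp16):
    """Compress fp16 weights to int8 plus the scale needed to dequantize.
    Symmetric per-tensor quantization is assumed here."""
    w = w_fp16.astype(np.float32)
    max_abs = float(np.abs(w).max())
    scale = max_abs / 127.0 if max_abs > 0 else 1.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

weights = np.array([0.0, 0.5, -1.0], dtype=np.float16)
q, scale = quantize_int8(weights)
# Dequantized values (q * scale) differ from the originals by at most one
# quantization step.
```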
6. The image smoke and fire detection method of the deep neural network of claim 5, wherein: the step S10 includes the following sub-steps:
s10.1: eliminating useless output layers in the preset deep network by analyzing the preset deep network model;
s10.2: fusing the convolution layer, the batch normalization layer and the rectified linear unit layer of the preset deep network into a single layer, vertically integrating the network structure;
s10.3: fusing together layers of the preset deep network that take the same input tensor and perform the same operation, horizontally merging the network structure.
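The vertical integration of step S10.2 rests on the fact that a batch-normalization layer can be folded into the preceding convolution's weights, so the two layers execute as one. A hedged sketch, with a linear layer standing in for the convolution (all names and shapes are illustrative):

```python
import numpy as np

def fold_bn(w, b, gamma, beta, mean, var, eps=1e-5):
    """Fold batch-norm parameters into the preceding linear/conv weights:
    y = gamma * (w@x + b - mean) / sqrt(var + eps) + beta
      = (w * s) @ x + ((b - mean) * s + beta),  with s = gamma / sqrt(var + eps)."""
    s = gamma / np.sqrt(var + eps)
    return w * s[:, None], (b - mean) * s + beta

rng = np.random.default_rng(1)
w = rng.normal(size=(3, 4))
b = rng.normal(size=3)
gamma, beta = rng.normal(size=3), rng.normal(size=3)
mean, var = rng.normal(size=3), rng.uniform(0.5, 2.0, size=3)
x = rng.normal(size=4)

y_separate = gamma * ((w @ x + b) - mean) / np.sqrt(var + 1e-5) + beta
wf, bf = fold_bn(w, b, gamma, beta, mean, var)
y_fused = wf @ x + bf
# The fused single layer reproduces the conv+BN output exactly.
```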
7. An image smoke and fire detection system of a deep neural network, comprising a camera module, an image acquisition module, an image operation module, a processing module, a neural network building module and a data operation module;
the image acquisition module is used for acquiring a plurality of sample images containing smoke and fire; the image operation module traces the boundary points of the smoke and fire areas on the sample images and generates mask images; the processing module multiplies the mask images with the sample images to obtain area images containing only the smoke and fire areas; the neural network building module builds a smoke and fire target simulation framework based on a conditional deep convolutional generative adversarial network, into which the area images are fed for training, fitting to generate new smoke and fire samples; the processing module acquires initial images and fuses them with the smoke and fire samples to build a training data set; the data operation module scales the fused images in the training data set according to the actual scaling parameters of the camera, performs photometric distortion on the scaled fused images to generate brightness samples under different illumination conditions, and performs geometric distortion on the photometrically distorted brightness samples to obtain processed samples; the processing module repeatedly and randomly obtains two processed samples and fuses them according to a preset proportion coefficient, and sends the training data set fused with the preset proportion coefficient into a preset deep network for smoke and fire detection training to obtain a complete weight model.
8. The image smoke and fire detection system of the deep neural network of claim 7, wherein: the processing module compresses the complete weight model from 16-bit floating point precision to 8-bit integer precision, and reconstructs and optimizes the structure of the preset deep network.
CN202110441498.5A 2021-04-23 2021-04-23 Image smoke and fire detection method and system for deep neural network Active CN113128422B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110441498.5A CN113128422B (en) 2021-04-23 2021-04-23 Image smoke and fire detection method and system for deep neural network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110441498.5A CN113128422B (en) 2021-04-23 2021-04-23 Image smoke and fire detection method and system for deep neural network

Publications (2)

Publication Number Publication Date
CN113128422A true CN113128422A (en) 2021-07-16
CN113128422B CN113128422B (en) 2024-03-29

Family

ID=76779275

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110441498.5A Active CN113128422B (en) 2021-04-23 2021-04-23 Image smoke and fire detection method and system for deep neural network

Country Status (1)

Country Link
CN (1) CN113128422B (en)


Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109118467A (en) * 2018-08-31 2019-01-01 武汉大学 Based on the infrared and visible light image fusion method for generating confrontation network
CN109409256A (en) * 2018-10-10 2019-03-01 东南大学 A kind of forest rocket detection method based on 3D convolutional neural networks
CN109460708A (en) * 2018-10-09 2019-03-12 东南大学 A kind of Forest fire image sample generating method based on generation confrontation network
CN109886227A (en) * 2019-02-27 2019-06-14 哈尔滨工业大学 Inside fire video frequency identifying method based on multichannel convolutive neural network
CN110991242A (en) * 2019-11-01 2020-04-10 武汉纺织大学 Deep learning smoke identification method for negative sample excavation
IT201800009442A1 (en) * 2018-10-15 2020-04-15 Laser Navigation Srl Control and management system of a process within an environment through artificial intelligence techniques and related method
CN111145275A (en) * 2019-12-30 2020-05-12 重庆市海普软件产业有限公司 Intelligent automatic control forest fire prevention monitoring system and method
EP3671261A1 (en) * 2018-12-21 2020-06-24 Leica Geosystems AG 3d surveillance system comprising lidar and multispectral imaging for object classification
CN111754446A (en) * 2020-06-22 2020-10-09 怀光智能科技(武汉)有限公司 Image fusion method, system and storage medium based on generation countermeasure network
CN111882514A (en) * 2020-07-27 2020-11-03 中北大学 Multi-modal medical image fusion method based on double-residual ultra-dense network
CN112270207A (en) * 2020-09-27 2021-01-26 青岛邃智信息科技有限公司 Smoke and fire detection method in community monitoring scene
CN112507865A (en) * 2020-12-04 2021-03-16 国网山东省电力公司电力科学研究院 Smoke identification method and device
CN112633103A (en) * 2020-12-15 2021-04-09 中国人民解放军海军工程大学 Image processing method and device and electronic equipment


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
YUAN Fei et al., "Smoke recognition algorithm based on a lightweight convolutional neural network", Journal of Southwest Jiaotong University, vol. 55, no. 05, pages 1111-1116 *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114664047A (en) * 2022-05-26 2022-06-24 长沙海信智能系统研究院有限公司 Expressway fire identification method and device and electronic equipment
CN116468974A (en) * 2023-06-14 2023-07-21 华南理工大学 Smoke detection method, device and storage medium based on image generation
CN116468974B (en) * 2023-06-14 2023-10-13 华南理工大学 Smoke detection method, device and storage medium based on image generation

Also Published As

Publication number Publication date
CN113128422B (en) 2024-03-29

Similar Documents

Publication Publication Date Title
CN110135269B (en) Fire image detection method based on mixed color model and neural network
CN111274930B (en) Helmet wearing and smoking behavior identification method based on deep learning
CN109902633A (en) Accident detection method and device based on the camera supervised video of fixed bit
CN103069434A (en) Multi-mode video event indexing
CN113128422A (en) Image smoke and fire detection method and system of deep neural network
CN111832398B (en) Unmanned aerial vehicle image distribution line pole tower ground wire broken strand image detection method
CN109886219A (en) Shed object detecting method, device and computer readable storage medium
CN111985365A (en) Straw burning monitoring method and system based on target detection technology
CN111383429A (en) Method, system, device and storage medium for detecting dress of workers in construction site
CN111898581A (en) Animal detection method, device, electronic equipment and readable storage medium
CN113255797B (en) Dangerous goods detection method and system based on deep learning model
CN113887412A (en) Detection method, detection terminal, monitoring system and storage medium for pollution emission
CN112052878B (en) Method, device and storage medium for shielding identification of radar
Zhang et al. Application research of YOLO v2 combined with color identification
CN114648714A (en) YOLO-based workshop normative behavior monitoring method
CN111145222A (en) Fire detection method combining smoke movement trend and textural features
CN111539325A (en) Forest fire detection method based on deep learning
CN112464893A (en) Congestion degree classification method in complex environment
CN113762314A (en) Smoke and fire detection method and device
CN114998737A (en) Remote smoke detection method, system, electronic equipment and medium
CN114399734A (en) Forest fire early warning method based on visual information
CN114926791A (en) Method and device for detecting abnormal lane change of vehicles at intersection, storage medium and electronic equipment
Zheng et al. A lightweight algorithm capable of accurately identifying forest fires from UAV remote sensing imagery
Cao et al. YOLO-SF: YOLO for fire segmentation detection
CN113191274A (en) Oil field video intelligent safety event detection method and system based on neural network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant