CN115019164A - Image type fire detector smoke and fire identification method based on deep learning method - Google Patents

Image type fire detector smoke and fire identification method based on deep learning method

Info

Publication number
CN115019164A
CN115019164A (application CN202210422620.9A)
Authority
CN
China
Prior art keywords
image
smoke
flame
infrared
area
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210422620.9A
Other languages
Chinese (zh)
Inventor
马忠国
谭继双
龚俊峰
孙晓蕾
胡宝先
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Qingdao Dingxin Communication Fire Safety Co ltd
Original Assignee
Qingdao Dingxin Communication Fire Safety Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Qingdao Dingxin Communication Fire Safety Co ltd filed Critical Qingdao Dingxin Communication Fire Safety Co ltd
Priority to CN202210422620.9A priority Critical patent/CN115019164A/en
Publication of CN115019164A publication Critical patent/CN115019164A/en
Pending legal-status Critical Current

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/08: Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Fire-Detection Mechanisms (AREA)

Abstract

The application discloses an image type fire detector smoke and fire identification method based on a deep learning method and a computer readable storage medium, including: judging whether a suspected flame area exists in a visible light image by using a pre-established smoke and fire detection model; if so, comparing the visible light image with a near infrared image according to the suspected flame area and judging whether the suspected flame area includes a flame target; and if the suspected flame area includes the flame target, obtaining flame positioning information according to the suspected flame area. The smoke and fire detection model is a YOLOv5s model in which the activation function in the residual network is the SMU function, the C3 network in the YOLOv5s backbone network is a Shuffle Block network, and the depth of the YOLOv5s head detection network feature maps is reduced. By using the improved smoke and fire detection model, the application is better suited to analyzing visible light images on embedded front-end equipment: accuracy is preserved while the reduced feature map depth improves detection speed.

Description

Image type fire detector smoke and fire identification method based on deep learning method
Technical Field
The application relates to the field of image identification, in particular to an image type fire detector smoke and fire identification method based on a deep learning method and a computer readable storage medium.
Background
In existing image type fire detectors, traditional machine learning methods comprehensively judge the gray scale, color, shape, texture, change trend and the like of a suspected fire to determine whether a fire has occurred. Flame identification algorithms of this kind rely on manually extracted flame features, which can lose flame information, lead to misjudgment and degrade flame positioning accuracy. Moreover, an image type fire detector completes the detection of smoke and flame at the edge, where resources are limited, while existing deep learning target detection methods have large parameter counts and heavy computation and therefore cannot achieve real-time detection.
At present, most domestic image type fire detectors use a composite visible light and near infrared multiband mode, which exploits the clear imaging and high resolution of visible light while also using near infrared light to effectively prevent false alarms and avoid missed alarms. In recent years, with updated software, hardware and technology, deep learning methods have been applied to more and more scenes to solve such problems, and smoke and fire detection is itself a target detection problem. However, most existing flame identification methods identify flames from manually extracted dynamic and static characteristics such as flame shape and flicker frequency; such methods generalize poorly, require a large number of heuristic thresholds, and have weak robustness and environmental adaptability.
the smoke and fire detection requires high accuracy, high reliability and strong robustness, timely prediction and alarm are needed at the initial stage of fire, the existing deep learning target detection algorithm cannot balance the detection speed and the detection precision, and the detection effect on smaller targets cannot be achieved.
Therefore, an image type fire detector smoke and fire identification method based on a deep learning method that is both highly accurate and fast is needed.
Disclosure of Invention
In view of the above, an object of the present application is to provide an image type fire detector smoke and fire identification method based on a deep learning method and a computer readable storage medium, which can improve detection speed while ensuring accuracy. The specific scheme is as follows:
an image type fire detector smoke and fire identification method based on a deep learning method comprises the following steps:
simultaneously collecting a visible light image and a near infrared image of the same detection area;
judging whether a suspected flame area exists in the visible light image by using a pre-established smoke and fire detection model;
if the suspected flame area exists, comparing the visible light image with the near-infrared image according to the suspected flame area, and judging whether the suspected flame area comprises a flame target;
if the suspected flame area comprises the flame target, flame positioning information is obtained according to the suspected flame area;
the firework detection model is obtained by training a Yolov5s model by utilizing a historical visible light image and a historical near infrared image, an activation function in a residual network in the Yolov5s model is an SMU function, a C3 network in a Yolov5s backbone network of the Yolov5s model is a Shuffle Block network, and the depth of a Yolov5s head detection network feature map of the Yolov5s model is set according to a preset rule so as to reduce the parameter number and the calculation amount of the Yolov5s model.
Optionally, the step of comparing the visible light image with the near-infrared image according to the suspected flame area to determine whether the suspected flame area includes a flame target includes:
mapping the suspected flame area in the visible light image to the near-infrared image to obtain a near-infrared suspected flame area which is superposed with the mapping result of the suspected flame area in the near-infrared image;
and judging whether the near-infrared suspected flame area comprises a flame target or not by using a preset near-infrared flame identification method.
Optionally, the process of obtaining flame positioning information according to the suspected flame area if the suspected flame area includes the flame target includes:
if the suspected flame area comprises the flame target, obtaining a near-infrared suspected flame area in the near-infrared image;
and correcting the suspected flame area in the visible light image according to the near-infrared suspected flame area to obtain flame positioning information.
Optionally, when the size of the feature map input into the smoke and fire detection model is 21x21x24, the receptive field of the smoke and fire detection model is 32, and the prior box sizes are (251, 142), (243, 256) and (492, 327);
when the size of the feature map input into the smoke and fire detection model is 42x42x24, the receptive field of the smoke and fire detection model is 16, and the prior box sizes are (50, 64), (104, 83) and (117, 176);
when the size of the feature map input into the smoke and fire detection model is 84x84x24, the receptive field of the smoke and fire detection model is 8, and the prior box sizes are (9, 9), (17, 16) and (34, 29).
Optionally, the process of mapping the suspected flame area in the visible light image to the near-infrared image to obtain a near-infrared suspected flame area in the near-infrared image, which is overlapped with the mapping result of the suspected flame area, includes:
and mapping the suspected flame area in the visible light image to the near-infrared image by using a preset calibration matching parameter between the visible light image and the near-infrared image to obtain the near-infrared suspected flame area which is superposed with the mapping result of the suspected flame area in the near-infrared image.
Optionally, the step of determining whether the near-infrared suspected flame area includes a flame target by using a preset near-infrared flame recognition method includes:
and judging whether the near-infrared suspected flame area comprises a flame target by utilizing an Otsu method.
Optionally, the process of simultaneously acquiring the visible light image and the near-infrared image of the same detection region includes:
two cameras in the binocular cameras are used for simultaneously acquiring image sequences of the visible light images and the near infrared images in the same detection area including multiple frames respectively.
Optionally, after the visible light image and the near-infrared image of the same detection area are simultaneously acquired, the method further includes:
judging whether a smoke area exists in the visible light image or not by using the pre-established smoke and fire detection model;
and if so, obtaining corresponding smoke positioning information according to the smoke area.
Optionally, the process of correcting the suspected flame area in the visible light image according to the near-infrared suspected flame area to obtain flame positioning information includes:
correcting the suspected flame area in the visible light image according to the near-infrared suspected flame area;
continuously tracking and judging the area corresponding to the subsequent visible light image in the image sequence according to the corrected suspected flame area to obtain the flame positioning information;
the process of obtaining corresponding smoke positioning information according to the smoke region includes:
and continuously tracking and judging the corresponding area of the subsequent visible light image in the image sequence according to the smoke area to obtain the corresponding smoke positioning information.
Optionally, the method further includes:
and sending the flame positioning information and/or the smoke positioning information to a user terminal.
The application also discloses a computer readable storage medium, which stores a computer program, and the computer program is executed by a processor to realize the image type fire detector smoke and fire identification method based on the deep learning method.
In this application, the image type fire detector smoke and fire identification method based on the deep learning method includes: simultaneously collecting a visible light image and a near infrared image of the same detection area; judging whether a suspected flame area exists in the visible light image by using a pre-established smoke and fire detection model; if the suspected flame area exists, comparing the visible light image with the near infrared image according to the suspected flame area and judging whether the suspected flame area includes a flame target; and if the suspected flame area includes the flame target, obtaining flame positioning information according to the suspected flame area. The smoke and fire detection model is obtained by training a YOLOv5s model with historical visible light images and historical near infrared images; the activation function in the residual network of the YOLOv5s model is the SMU function, the C3 network in the YOLOv5s backbone network is a Shuffle Block network, and the depth of the YOLOv5s head detection network feature maps is set according to a preset rule so as to reduce the parameter count and computation of the YOLOv5s model.
By using a smoke and fire detection model whose network structure and activation function have been improved, the method is adapted to analyzing visible light images on embedded front-end equipment while preserving accuracy; at the same time, the reduced feature map depth, matched by the improved network structure and activation function, keeps the analysis speed of the smoke and fire detection model from becoming too slow, so that analysis speed is improved without sacrificing accuracy.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. It is obvious that the drawings in the following description show only embodiments of the present application, and that other drawings can be obtained from them by those skilled in the art without creative effort.
Fig. 1 is a schematic flow chart of a smoke and fire identification method of an image-based fire detector based on a deep learning method according to an embodiment of the present application;
fig. 2 is a schematic diagram of a shuffle block network structure in the smoke and fire detection model disclosed in the embodiment of the present application;
FIG. 3 is a schematic flow chart of another image-based fire detector smoke and fire identification method based on a deep learning method disclosed in the embodiment of the present application;
FIG. 4 is a schematic flow chart of another image-based fire detector smoke and fire identification method based on a deep learning method disclosed in the embodiment of the present application;
FIG. 5 is a schematic comparison between the modified smoke and fire detection model of the present application and the existing original model.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The embodiment of the application discloses a deep learning method-based image type fire detector smoke and fire identification method, and as shown in figure 1, the method comprises the following steps:
s11: and simultaneously collecting the visible light image and the near infrared image of the same detection area.
Specifically, the visible light image and the near infrared image of the detection area are collected simultaneously, and the collection areas of the two images are the same, so that they can be compared against each other later and the detection area can be analyzed for fire from both the visible light and the near infrared perspectives.
Specifically, the size and data format of the images to be acquired can be preset to match the smoke and fire detection model, so that they can subsequently be input into the model directly and no further size or format conversion is needed.
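For illustration only, a minimal preprocessing sketch in Python is given below; it is not part of the original disclosure. The 672x672 input size is an assumption inferred from the 84x84, 42x42 and 21x21 feature map sizes described later (receptive fields 8, 16 and 32), and the gray letterbox padding follows common YOLOv5 practice rather than anything stated in this application.

```python
import cv2
import numpy as np

def preprocess(frame_bgr, size=672):
    """Letterbox-resize a captured frame to a fixed square model input.

    The 672x672 size is an assumption inferred from the feature map sizes
    given later in this application; it is not stated explicitly here.
    """
    h, w = frame_bgr.shape[:2]
    scale = size / max(h, w)
    resized = cv2.resize(frame_bgr, (int(round(w * scale)), int(round(h * scale))))
    canvas = np.full((size, size, 3), 114, dtype=np.uint8)   # gray letterbox padding
    canvas[:resized.shape[0], :resized.shape[1]] = resized
    rgb = cv2.cvtColor(canvas, cv2.COLOR_BGR2RGB)
    return rgb.astype(np.float32) / 255.0                    # HWC, values in [0, 1]
```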
S12: and judging whether a suspected flame area exists in the visible light image by using a pre-established smoke and fire detection model.
Specifically, in order to adapt the model to analyzing whether flames exist in the visible light image, the smoke and fire detection model of this embodiment is obtained by training a YOLOv5s model with historical visible light images and historical near infrared images, and the YOLOv5s model is improved: the activation function in the residual network is replaced with the SMU function, the C3 network in the YOLOv5s backbone network is replaced with a Shuffle Block network, and the depth of the YOLOv5s head detection network feature maps is set according to a preset rule, so as to reduce the parameter count and computation of the YOLOv5s model and thereby increase the detection speed of the smoke and fire detection model.
Specifically, because the visible light image contains more interference factors than the near infrared image, in order to speed up the analysis of the smoke and fire detection model without compromising its accuracy, the activation function in the residual network of the YOLOv5s model is replaced with the SMU function and the C3 network in the YOLOv5s backbone network is replaced with a Shuffle Block network; as shown in fig. 2 and in the configuration parameter table of the YOLOv5s backbone network of the smoke and fire detection model in Table 1, the structure of the smoke and fire detection model is optimized so that it is suited to analyzing visible light images. In addition, to guarantee the analysis speed of the smoke and fire detection model, the YOLOv5s head detection network can be pruned following the principle that memory access is minimized when the input and output channel counts are equal, and the depth of the YOLOv5s head detection network feature maps is set accordingly; for example, the number of channels of the head detection network feature maps is halved, as shown in the configuration parameter table of the YOLOv5s head detection network of the smoke and fire detection model in Table 2, where the original 512 and 256 channels are halved to the 256 and 128 channels of Table 2. This reduces the depth of the feature maps within the YOLOv5s model and lessens the impact of the many features in the visible light image on the analysis speed of the smoke and fire detection model.
TABLE 1
[Table 1 is reproduced as an image in the original publication: configuration parameter table of the YOLOv5s backbone network in the smoke and fire detection model.]
Specifically, the first column of Table 1, "module input", gives the input required by that row's module, where -1 denotes taking the input from the previous layer; the second column, "number of modules", gives the number of corresponding modules; the third column, "before modification", gives the module used in the prior art; and the fourth column, "modified", gives the module used by the smoke and fire detection model of this embodiment after modification relative to the prior art. Focus performs the image slicing operation; Conv performs a depthwise separable convolution; C3 is a cross-scale connection layer; Shuffle_Block performs a grouped convolution operation; and SPP performs spatial pyramid pooling.
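For illustration only, a minimal PyTorch sketch of a stride-1 ShuffleNetV2-style unit of the kind the Shuffle_Block column refers to is given below; it is not part of the original disclosure, and the channel counts and exact layer arrangement of the disclosed backbone (contained in the image-only Table 1) are therefore assumptions.

```python
import torch
import torch.nn as nn

def channel_shuffle(x, groups=2):
    """Interleave channels across groups (the 'shuffle' in ShuffleNetV2)."""
    b, c, h, w = x.shape
    x = x.view(b, groups, c // groups, h, w).transpose(1, 2).contiguous()
    return x.view(b, c, h, w)

class ShuffleBlock(nn.Module):
    """Stride-1 ShuffleNetV2 unit: split channels, transform one half with a
    grouped (depthwise) convolution, concatenate, then shuffle channels."""
    def __init__(self, channels):            # channels must be even
        super().__init__()
        half = channels // 2
        self.branch = nn.Sequential(
            nn.Conv2d(half, half, 1, bias=False), nn.BatchNorm2d(half), nn.ReLU(inplace=True),
            nn.Conv2d(half, half, 3, padding=1, groups=half, bias=False),  # depthwise 3x3
            nn.BatchNorm2d(half),
            nn.Conv2d(half, half, 1, bias=False), nn.BatchNorm2d(half), nn.ReLU(inplace=True),
        )

    def forward(self, x):
        x1, x2 = x.chunk(2, dim=1)            # channel split
        return channel_shuffle(torch.cat((x1, self.branch(x2)), dim=1))
```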
TABLE 2
[Table 2 is reproduced as an image in the original publication: configuration parameter table of the YOLOv5s head detection network in the smoke and fire detection model.]
Specifically, the first column of Table 2, "module input", gives the input required by that row's module; taking [-1, 6] as an example, the input is obtained from the previous layer and from layer 6, where -1 denotes the previous layer and 6 denotes the sixth layer. The second column, "number of modules", gives the number of corresponding modules; the third column, "module", gives the module used by the smoke and fire detection model; and the fourth column gives, under "before modification", the parameters used by the prior-art module and, under "modified", the parameters used by the smoke and fire detection model of this embodiment after modification. Taking the Conv parameters in row 10 of Table 2 as an example, 128 is the number of convolution kernels, 3 is the convolution kernel size, and 2 is the stride. Conv performs a depthwise separable convolution; Upsample is the upsampling layer; Concat performs channel fusion between the output of the previous layer and the output of layer x, where x is 6, 4, 14 or 10 in the table; C3 is a cross-scale connection layer; and Detect is the detection layer.
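For illustration only, a minimal PyTorch sketch of the depthwise separable convolution referred to above is given below, together with an example of the halved head channels; the activation used inside the block and the input channel count in the example are assumptions (the SMU replacement described in this application applies to the residual network).

```python
import torch.nn as nn

class DWSeparableConv(nn.Module):
    """Depthwise 3x3 convolution followed by a pointwise 1x1 convolution."""
    def __init__(self, c_in, c_out, k=3, stride=1):
        super().__init__()
        self.depthwise = nn.Conv2d(c_in, c_in, k, stride, k // 2, groups=c_in, bias=False)
        self.pointwise = nn.Conv2d(c_in, c_out, 1, bias=False)
        self.bn = nn.BatchNorm2d(c_out)
        self.act = nn.SiLU(inplace=True)      # activation choice is an assumption

    def forward(self, x):
        return self.act(self.bn(self.pointwise(self.depthwise(x))))

# Halved head channels as described above (512 -> 256, 256 -> 128), e.g. the
# "128, 3, 2" Conv in row 10 of Table 2 (input channel count assumed):
# conv = DWSeparableConv(256, 128, k=3, stride=2)
```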
S13: if the suspected flame area exists, comparing the visible light image with the near-infrared image according to the suspected flame area, and judging whether the suspected flame area comprises a flame target.
Specifically, if the suspected flame area is determined to exist in the visible light image by the smoke and fire detection model, the suspected flame area in the visible light image is compared with the same area in the corresponding near-infrared image to judge whether the same flame target also appears in the near-infrared image.
It is understood that if no suspected flame region exists, the next image is continuously judged, and the subsequent comparison process is not executed.
S14: if the suspected flame area comprises a flame target, flame positioning information is obtained according to the suspected flame area;
specifically, if the comparison is successful, that is, if the region corresponding to the suspected flame region in the near-infrared image also includes a flame target, it is determined that a fire really exists in the suspected flame region, and then specific flame positioning information can be obtained according to the suspected flame region in the visible light image, so as to perform a fire alarm in the following and inform the user of the fire occurrence place.
It can be understood that if the comparison between the visible light image and the near infrared image disagrees, which most likely indicates that the analysis result of the visible light image is wrong, the subsequent flame analysis and positioning are not performed and analysis continues with the next image.
Therefore, this embodiment uses a smoke and fire detection model whose network structure and activation function have been improved, adapting it to the analysis of visible light images on embedded front-end equipment while preserving analysis accuracy; at the same time, the reduced feature map depth, matched by the improved network structure and activation function, keeps the analysis speed of the smoke and fire detection model from becoming too slow, so that analysis speed is improved while accuracy is maintained.
It should be noted that image acquisition can be completed by cameras, so that multiple consecutive frames of visible light images and corresponding near infrared images can be obtained. During flame identification, after a suspected flame area is identified in one frame of the visible light image, the same determination can be repeated on the same area over the following consecutive frames, realizing tracking determination of the key area and improving the accuracy of the final determination.
Specifically, the foregoing SMU function may be written as:
f(x) = [(1 + α)x + (1 - α)x·erf(μ(1 - α)x)] / 2
where erf is the Gaussian error function:
erf(x) = (2/√π) ∫_0^x e^(-t^2) dt
where α = 0.25 is a hyperparameter, μ is a smoothing coefficient, and the trainable parameter μ may be initialized to 1000000.
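For illustration only, a minimal PyTorch sketch of an SMU activation module following the formula above is given below; the module itself is not part of the original disclosure. Treating μ as the trainable parameter initialized to 1000000 with α fixed at 0.25 follows the description above.

```python
import torch
import torch.nn as nn

class SMU(nn.Module):
    """Smooth Maximum Unit:
    f(x) = ((1 + a) * x + (1 - a) * x * erf(mu * (1 - a) * x)) / 2."""
    def __init__(self, alpha=0.25, mu_init=1_000_000.0):
        super().__init__()
        self.alpha = alpha                               # fixed hyperparameter
        self.mu = nn.Parameter(torch.tensor(mu_init))    # trainable smoothing coefficient

    def forward(self, x):
        a = self.alpha
        return ((1 + a) * x + (1 - a) * x * torch.erf(self.mu * (1 - a) * x)) / 2
```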
The embodiment of the application discloses a specific image type fire detector smoke and fire identification method based on a deep learning method, and compared with the previous embodiment, the embodiment further explains and optimizes the technical scheme. Referring to fig. 3, specifically:
S21: simultaneously collecting a visible light image and a near infrared image of the same detection area;
s22: and judging whether a suspected flame area exists in the visible light image by using a pre-established smoke and fire detection model.
The smoke and fire detection model is a YOLOv5s model obtained by training with historical visible light images and historical near infrared images; the activation function in the residual network of the YOLOv5s model is the SMU function, the C3 network in the YOLOv5s backbone network is a Shuffle Block network, and the depth of the YOLOv5s head detection network feature maps is set according to a preset rule so as to reduce the parameter count and computation of the YOLOv5s model.
S23: and if the suspected flame area exists, mapping the suspected flame area in the visible light image into the near-infrared image to obtain the near-infrared suspected flame area which is superposed with the mapping result of the suspected flame area in the near-infrared image.
Specifically, the suspected flame area in the visible light image can be mapped to the corresponding area of the near infrared image, yielding a near-infrared suspected flame area in the near infrared image that coincides with the mapping result of the suspected flame area. Because the sizes of the visible light image and the near infrared image are preset, calibration matching parameters between the two images can be preset for this mapping: the coordinates of the visible light image and the near infrared image are unified, and the suspected flame area in the visible light image is mapped into the near infrared image through coordinate conversion, giving the near-infrared suspected flame area that coincides with the mapping result of the suspected flame area.
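For illustration only, a minimal sketch of this coordinate mapping is given below, assuming the preset calibration matching parameters take the form of a 3x3 homography between the two images; for rigidly mounted binocular channels a simpler affine or fixed-offset model would fit the description equally well.

```python
import cv2
import numpy as np

def map_box_to_nir(box_xyxy, H):
    """Map an axis-aligned suspected-flame box from visible-light coordinates into
    near-infrared coordinates using a pre-calibrated 3x3 homography H (assumed form)."""
    x1, y1, x2, y2 = box_xyxy
    corners = np.float32([[x1, y1], [x2, y1], [x2, y2], [x1, y2]]).reshape(-1, 1, 2)
    warped = cv2.perspectiveTransform(corners, H).reshape(-1, 2)
    return (float(warped[:, 0].min()), float(warped[:, 1].min()),
            float(warped[:, 0].max()), float(warped[:, 1].max()))
```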
S24: and judging whether the near-infrared suspected flame area comprises a flame target or not by using a preset near-infrared flame identification method.
Specifically, after the near-infrared suspected flame area in the near infrared image is obtained, it is judged with a preset near-infrared flame identification method, designed for flame target identification in near infrared images, whether a flame, i.e. a flame target, exists in that area. This secondary judgment using the near infrared image improves detection accuracy.
S25: if the suspected flame area comprises a flame target, obtaining a near-infrared suspected flame area in the near-infrared image;
s26: and correcting the suspected flame area in the visible light image according to the near-infrared suspected flame area to obtain flame positioning information.
Specifically, if the near-infrared suspected flame area includes a flame target, that area can be confirmed as the near-infrared flame area and mapped back into the visible light image according to the calibration matching parameters between the visible light image and the near infrared image, so as to correct the original suspected flame area in the visible light image and make it more accurate, finally obtaining the corrected flame positioning information.
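For illustration only, one possible correction rule is sketched below, taking the corrected region to be the intersection of the original visible-light box and the near-infrared flame box after it has been mapped back into visible-light coordinates; this specific fusion rule is an assumption, not something stated in the application.

```python
def correct_visible_box(visible_box, nir_box_in_visible):
    """Refine the visible-light suspected-flame box (x1, y1, x2, y2) with the
    near-infrared flame box already mapped back into visible-image coordinates."""
    x1 = max(visible_box[0], nir_box_in_visible[0])
    y1 = max(visible_box[1], nir_box_in_visible[1])
    x2 = min(visible_box[2], nir_box_in_visible[2])
    y2 = min(visible_box[3], nir_box_in_visible[3])
    if x2 <= x1 or y2 <= y1:      # no overlap: keep the original detection
        return visible_box
    return (x1, y1, x2, y2)
```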
In addition, the embodiment of the application also discloses a specific image type fire detector smoke and fire identification method based on a deep learning method, and as shown in fig. 4, the specific method comprises the following steps:
s31: two paths of cameras in the binocular cameras are used for simultaneously acquiring visible light images and near infrared images of the same detection area respectively.
Specifically, a binocular camera can be adopted: the channel that collects visible light acquires the visible light images and the channel that collects near infrared light acquires the near infrared images, so that the visible light image and the near infrared image of the same detection area can be acquired synchronously at the same moment.
It can be understood that when a camera is used to collect video, a continuous video stream is obtained, forming an image sequence comprising a plurality of frames; for example, the two channels of the binocular camera each simultaneously collect an image sequence of multiple frames of visible light images and near infrared images of the same detection area. Subsequently, when judging flames, after a flame target appears in one image, the next frame can be selected from the image sequence to continue judging whether the flame target appears, forming continuous judgment, i.e. tracking detection; this ensures detection accuracy and avoids detection errors caused by a single image.
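For illustration only, a minimal OpenCV sketch of roughly synchronized acquisition from the two channels of a binocular camera is given below; the device indices and the grab/retrieve pattern are assumptions about the capture interface, not part of the original disclosure.

```python
import cv2

def grab_pair(cap_visible, cap_nir):
    """Grab a roughly time-aligned visible/near-infrared frame pair."""
    # grab() on both channels first, then retrieve(), to keep the frames close in time
    if not (cap_visible.grab() and cap_nir.grab()):
        return None, None
    _, frame_vis = cap_visible.retrieve()
    _, frame_nir = cap_nir.retrieve()
    return frame_vis, frame_nir

cap_vis = cv2.VideoCapture(0)   # visible-light channel (device index assumed)
cap_nir = cv2.VideoCapture(1)   # near-infrared channel (device index assumed)
frame_vis, frame_nir = grab_pair(cap_vis, cap_nir)
```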
S32: and judging whether a smoke area exists in the visible light image by using a pre-established smoke and fire detection model.
Specifically, because this embodiment analyzes the visible light image, it can determine not only whether flame exists but also whether smoke exists, thereby also achieving judgment of smoke.
Specifically, the smoke and fire detection model is trained on both flame targets and smoke in the historical visible light images and historical near infrared images, so that it is capable of detecting both smoke and flame.
S33: and if so, obtaining corresponding smoke positioning information according to the smoke area.
Specifically, since smoke is not displayed clearly or completely in the near infrared image, the smoke area is not compared with the near infrared image; whether smoke exists is determined directly from the judgment result on the visible light image, and if a smoke area exists, the corresponding smoke positioning information can be obtained.
Furthermore, when judging the image sequence of the video stream, tracking judgment can be carried out continuously on the corresponding area of subsequent visible light images in the image sequence according to the smoke area, so as to obtain the corresponding smoke positioning information.
It will be appreciated that detection of an image within an image sequence by the smoke and fire detection model is identical to detection of a single image; only the final alarm condition may differ when analyzing an image sequence, for example requiring that flame and/or smoke be determined in a plurality of consecutive images before an alarm is raised.
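For illustration only, a minimal sketch of such an alarm condition is given below, raising an alarm only after the target has been confirmed in N consecutive frames; the value of N is an assumption.

```python
class ConsecutiveAlarm:
    """Raise an alarm only after a target (flame or smoke) has been confirmed in
    n_required consecutive frames of the image sequence."""
    def __init__(self, n_required=5):        # threshold N is an assumed value
        self.n_required = n_required
        self.streak = 0

    def update(self, detected_this_frame: bool) -> bool:
        self.streak = self.streak + 1 if detected_this_frame else 0
        return self.streak >= self.n_required
```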
It should be noted that the determination of the smoke area in step S32 and of the suspected flame area in step S34 by the smoke and fire detection model may be performed simultaneously; that is, after a visible light image is received, detection of the suspected flame area and of the smoke area is performed on the image at the same time, and the two steps need not follow a fixed order. Consequently, S33 and the subsequent steps S34 to S39 also have no absolute execution order, which is not limited herein.
S34: and judging whether a suspected flame area exists in the visible light image by using a pre-established smoke and fire detection model.
The smoke and fire detection model is obtained by training a YOLOv5s model with historical visible light images and historical near infrared images; the activation function in the residual network of the YOLOv5s model is the SMU function, the C3 network in the YOLOv5s backbone network is a Shuffle Block network, and the depth of the YOLOv5s head detection network feature maps is set according to a preset rule so as to reduce the parameter count and computation of the YOLOv5s model.
S35: and if the suspected flame area exists, mapping the suspected flame area in the visible light image into the near-infrared image to obtain the near-infrared suspected flame area which is superposed with the mapping result of the suspected flame area in the near-infrared image.
S36: and judging whether the near-infrared suspected flame area comprises a flame target by utilizing the Otsu method.
Specifically, the OTSU method can be used to perform secondary verification of the near-infrared suspected flame area. Specifically, the near-infrared suspected flame target area can be appropriately enlarged, OTSU thresholding is then applied to segment it and extract the flame target in the near infrared image, and graphic features of the flame target such as aspect ratio, circularity and the number of sharp flame corners are calculated and compared with preset flame thresholds. If the features fall outside the preset flame threshold range, the area is judged to be an interference area; otherwise it is judged to be the flame target.
In the process of appropriately enlarging the near-infrared suspected flame target area, the enlargement factor may be set to 2.
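For illustration only, a minimal OpenCV sketch of this secondary verification is given below. The 2x enlargement, the OTSU thresholding and the use of aspect ratio, circularity and sharp-corner count follow the description above, while the numerical thresholds themselves are assumptions.

```python
import cv2
import numpy as np

def verify_flame_nir(nir_gray, box_xyxy, scale=2.0,
                     aspect_range=(0.2, 5.0), min_circularity=0.05, min_corners=3):
    """OTSU-based second-stage check of a suspected flame box on the NIR image."""
    h, w = nir_gray.shape[:2]
    x1, y1, x2, y2 = box_xyxy
    cx, cy = (x1 + x2) / 2, (y1 + y2) / 2
    bw, bh = (x2 - x1) * scale, (y2 - y1) * scale                 # enlarge the box, e.g. 2x
    x1, x2 = int(max(0, cx - bw / 2)), int(min(w, cx + bw / 2))
    y1, y2 = int(max(0, cy - bh / 2)), int(min(h, cy + bh / 2))
    roi = nir_gray[y1:y2, x1:x2]
    if roi.size == 0:
        return False
    _, mask = cv2.threshold(roi, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)[-2]
    if not contours:
        return False
    c = max(contours, key=cv2.contourArea)                        # candidate flame target
    _, _, cw, ch = cv2.boundingRect(c)
    area, perim = cv2.contourArea(c), cv2.arcLength(c, True)
    aspect = cw / max(ch, 1)
    circularity = 4 * np.pi * area / max(perim ** 2, 1e-6)        # 1.0 for a perfect circle
    corners = len(cv2.approxPolyDP(c, 0.02 * perim, True))        # rough "sharp angle" count
    # Features inside the preset ranges -> flame target; otherwise an interference area.
    return (aspect_range[0] <= aspect <= aspect_range[1]
            and circularity >= min_circularity and corners >= min_corners)
```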
S37: if the suspected flame area comprises a flame target, obtaining a near-infrared suspected flame area in the near-infrared image;
s38: and correcting the suspected flame area in the visible light image according to the near-infrared suspected flame area to obtain flame positioning information.
Specifically, when the image sequence is processed, the suspected flame area in the visible light image can be corrected according to the near-infrared suspected flame area, and the corresponding area in subsequent visible light images of the image sequence is then continuously tracked and judged according to the corrected suspected flame area to obtain the flame positioning information.
S39: and sending the flame positioning information and/or the smoke positioning information to the user terminal.
Specifically, after the flame positioning information and/or the smoke positioning information are obtained, an alarm can be given, and meanwhile, the flame positioning information and/or the smoke positioning information are sent to the user terminal so that the user can check the flame positioning information and/or the smoke positioning information and respond in time.
Specifically, in order to identify flames and smoke whose target sizes are small at the start of a fire, while also accounting for the obvious difference between the target sizes of flames and of smoke, the prior boxes of the YOLOv5s model can be set, for example, as shown in the prior box setting table of the smoke and fire detection model in Table 3: when the feature map size of the smoke and fire detection model is 21x21x24, the receptive field of the smoke and fire detection model is set to 32 and the prior box sizes to (251, 142), (243, 256) and (492, 327); when the feature map size is 42x42x24, the receptive field is set to 16 and the prior box sizes to (50, 64), (104, 83) and (117, 176); when the feature map size is 84x84x24, the receptive field is set to 8 and the prior box sizes to (9, 9), (17, 16) and (34, 29).
TABLE 3
[Table 3 is reproduced as an image in the original publication: prior box setting table of the smoke and fire detection model; the values are enumerated in the preceding paragraph.]
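For illustration only, the prior box settings enumerated above can be written as a per-scale configuration, as in the following sketch; associating each receptive field value (8, 16, 32) with its feature map follows the description above.

```python
# Prior (anchor) boxes per detection scale, as enumerated above and in Table 3.
ANCHORS = {
    8:  [(9, 9), (17, 16), (34, 29)],         # 84x84x24 feature map, small targets
    16: [(50, 64), (104, 83), (117, 176)],    # 42x42x24 feature map, medium targets
    32: [(251, 142), (243, 256), (492, 327)], # 21x21x24 feature map, large targets
}
```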
Specifically, a before-and-after comparison of the smoke and fire detection model of this embodiment can be seen in fig. 5. Compared with the original model before modification, the modified smoke and fire detection model reduces the model size by 52.7%, the parameter count by 53.6% and the computation by 79.9% while sacrificing only 3.5% of accuracy, addressing the poor generalization, robustness and environmental adaptability of existing deep-learning-based image type fire detector smoke and fire identification methods. In addition, the method reuses the features of the near infrared image to perform multi-stage judgment of the smoke and fire target, giving a low false alarm rate, strong adaptability and good stability. By modifying the network structure of the existing deep learning target detection method and reducing the model parameters and computation, the low-resource and real-time requirements of edge equipment are met.
In addition, the embodiment of the application also discloses a computer readable storage medium, a computer program is stored on the computer readable storage medium, and the computer program is executed by a processor to realize the image type fire detector smoke and fire identification method based on the deep learning method.
Finally, it should also be noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
Those of skill would further appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both, and that the various illustrative components and steps have been described above generally in terms of their functionality in order to clearly illustrate this interchangeability of hardware and software. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
The technical content provided by the present application is described in detail above, and the principle and the implementation of the present application are explained in the present application by applying specific examples, and the description of the above examples is only used to help understanding the method and the core idea of the present application; meanwhile, for a person skilled in the art, according to the idea of the present application, there may be variations in the specific embodiments and the application scope, and in summary, the content of the present specification should not be construed as a limitation to the present application.

Claims (11)

1. An image type fire detector smoke and fire identification method based on a deep learning method is characterized by comprising the following steps:
simultaneously collecting a visible light image and a near infrared image of the same detection area;
judging whether a suspected flame area exists in the visible light image by using a pre-established smoke and fire detection model;
if the suspected flame area exists, comparing the visible light image with the near-infrared image according to the suspected flame area, and judging whether the suspected flame area comprises a flame target;
if the suspected flame area comprises the flame target, flame positioning information is obtained according to the suspected flame area;
the smoke and fire detection model is obtained by training a YOLOv5s model with historical visible light images and historical near infrared images, an activation function in a residual network in the YOLOv5s model is an SMU function, a C3 network in a YOLOv5s backbone network of the YOLOv5s model is a Shuffle Block network, and the depth of a YOLOv5s head detection network feature map of the YOLOv5s model is set according to a preset rule so as to reduce the parameter count and the calculation amount of the YOLOv5s model.
2. The image-based fire detector fire and smoke identification method based on the deep learning method according to claim 1, wherein the step of comparing the visible light image with the near-infrared image according to the suspected flame area to determine whether the suspected flame area includes a flame target comprises:
mapping the suspected flame area in the visible light image to the near-infrared image to obtain a near-infrared suspected flame area which is superposed with the mapping result of the suspected flame area in the near-infrared image;
and judging whether the near-infrared suspected flame area comprises a flame target or not by using a preset near-infrared flame identification method.
3. The image-based fire detector fire and smoke identification method based on the deep learning method according to claim 2, wherein if the suspected flame area includes the flame target, the process of obtaining flame localization information from the suspected flame area comprises:
if the suspected flame area comprises the flame target, obtaining a near-infrared suspected flame area in the near-infrared image;
and correcting the suspected flame area in the visible light image according to the near-infrared suspected flame area to obtain flame positioning information.
4. The image-based fire detector fire and smoke identification method based on the deep learning method of claim 1, wherein when the size of the feature map input into the smoke and fire detection model is 21x21x24, the receptive field of the smoke and fire detection model is 32, and the prior box sizes are (251, 142), (243, 256) and (492, 327);
when the size of the feature map input into the smoke and fire detection model is 42x42x24, the receptive field of the smoke and fire detection model is 16, and the prior box sizes are (50, 64), (104, 83) and (117, 176);
when the size of the feature map input into the smoke and fire detection model is 84x84x24, the receptive field of the smoke and fire detection model is 8, and the prior box sizes are (9, 9), (17, 16) and (34, 29).
5. The image-based fire detector fire and smoke identification method based on the deep learning method according to claim 1, wherein the step of mapping the suspected flame area in the visible light image into the near-infrared image to obtain a near-infrared suspected flame area in the near-infrared image, which coincides with the mapping result of the suspected flame area, comprises:
and mapping the suspected flame area in the visible light image to the near-infrared image by using a preset calibration matching parameter between the visible light image and the near-infrared image to obtain the near-infrared suspected flame area which is superposed with the mapping result of the suspected flame area in the near-infrared image.
6. The image-based fire detector fire and smoke identification method based on the deep learning method according to claim 1, wherein the process of determining whether the near-infrared suspected flame area includes a flame target by using a preset near-infrared flame identification method comprises:
and judging whether the near-infrared suspected flame area comprises a flame target by utilizing an Otsu method.
7. The image type fire detector fire and smoke identification method based on the deep learning method according to any one of claims 1 to 6, wherein the process of simultaneously acquiring the visible light image and the near infrared image of the same detection area comprises the following steps:
two cameras in the binocular cameras are used for simultaneously acquiring image sequences of the visible light images and the near infrared images in the same detection area including multiple frames respectively.
8. The image-based fire detector fire and smoke identification method based on the deep learning method of claim 7, wherein after the visible light image and the near infrared image of the same detection area are collected simultaneously, the method further comprises the following steps:
judging whether a smoke area exists in the visible light image or not by using the pre-established smoke and fire detection model;
and if so, obtaining corresponding smoke positioning information according to the smoke area.
9. The image-based fire detector fire and smoke identification method based on the deep learning method according to claim 8, wherein the process of correcting the suspected flame area in the visible light image according to the near-infrared suspected flame area to obtain flame location information comprises:
correcting the suspected flame area in the visible light image according to the near-infrared suspected flame area;
continuously tracking and judging the area corresponding to the subsequent visible light image in the image sequence according to the corrected suspected flame area to obtain the flame positioning information;
the process of obtaining the corresponding smoke positioning information according to the smoke area comprises the following steps:
and continuously tracking and judging the corresponding area of the subsequent visible light image in the image sequence according to the smoke area to obtain the corresponding smoke positioning information.
10. The image type fire detector fire and smoke identification method based on the deep learning method according to claim 9, further comprising:
and sending the flame positioning information and/or the smoke positioning information to a user terminal.
11. A computer-readable storage medium, characterized in that the computer-readable storage medium has stored thereon a computer program which, when being executed by a processor, implements the image-based fire detector fire and smoke identification method according to any one of claims 1 to 10.
CN202210422620.9A 2022-04-21 2022-04-21 Image type fire detector smoke and fire identification method based on deep learning method Pending CN115019164A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210422620.9A CN115019164A (en) 2022-04-21 2022-04-21 Image type fire detector smoke and fire identification method based on deep learning method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210422620.9A CN115019164A (en) 2022-04-21 2022-04-21 Image type fire detector smoke and fire identification method based on deep learning method

Publications (1)

Publication Number Publication Date
CN115019164A true CN115019164A (en) 2022-09-06

Family

ID=83067045

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210422620.9A Pending CN115019164A (en) 2022-04-21 2022-04-21 Image type fire detector smoke and fire identification method based on deep learning method

Country Status (1)

Country Link
CN (1) CN115019164A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117765680A (en) * 2024-02-22 2024-03-26 中国矿业大学深圳研究院 Forest fire hazard monitoring and early warning method, device, equipment and storage medium
CN117765680B (en) * 2024-02-22 2024-05-03 中国矿业大学深圳研究院 Forest fire hazard monitoring and early warning method, device, equipment and storage medium
CN117953432A (en) * 2024-03-26 2024-04-30 湖北信通通信有限公司 Intelligent smoke and fire identification method and system based on AI algorithm
CN117953432B (en) * 2024-03-26 2024-06-11 湖北信通通信有限公司 Intelligent smoke and fire identification method and system based on AI algorithm


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination