CN115937508A - Method and device for detecting smoke and fire


Info

Publication number
CN115937508A
CN115937508A (application CN202210705354.0A)
Authority
CN
China
Prior art keywords
area, smoke, segmentation, image, suspected
Legal status: Pending (an assumption, not a legal conclusion)
Application number
CN202210705354.0A
Other languages
Chinese (zh)
Inventor
陈斌
冯谨强
刘继超
金岩
胡国锋
唐至威
Current Assignee
Hainayun IoT Technology Co Ltd
Qingdao Hainayun Digital Technology Co Ltd
Qingdao Hainayun Intelligent System Co Ltd
Original Assignee
Hainayun IoT Technology Co Ltd
Qingdao Hainayun Digital Technology Co Ltd
Qingdao Hainayun Intelligent System Co Ltd
Application filed by Hainayun IoT Technology Co Ltd, Qingdao Hainayun Digital Technology Co Ltd, Qingdao Hainayun Intelligent System Co Ltd filed Critical Hainayun IoT Technology Co Ltd
Priority to CN202210705354.0A
Publication of CN115937508A

Classifications

    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00: Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/30: Computing systems specially adapted for manufacturing

Landscapes

  • Image Analysis (AREA)

Abstract

The invention provides a method for detecting smoke and fire, which comprises the following steps: acquiring multiple frames of continuous images from video image data; performing motion detection on each frame of image to obtain a motion area of each frame of image; inputting the motion area into an image semantic segmentation neural network model for identification to obtain a suspected smoke segmentation area and a suspected flame segmentation area of each frame of image; respectively acquiring a target smoke segmentation area within the suspected smoke segmentation area and a target flame segmentation area within the suspected flame segmentation area; respectively calculating a first area ratio and a second area ratio of the target smoke segmentation area and the target flame segmentation area within the motion area; if the first area ratio and the second area ratio are both greater than or equal to a first preset threshold, determining the target smoke segmentation area and the target flame segmentation area to be a real smoke area and a real flame area respectively; and if a real smoke area and a real flame area exist in one frame of image and the two areas have an overlapping part, generating a smoke and fire alarm.

Description

Method and device for detecting smoke and fire
Technical Field
The invention belongs to the technical field of image processing and recognition, and particularly relates to a method and a device for detecting smoke and fire.
Background
Fire is a natural disaster with a high probability of occurrence, and timely and effective control at the initial stage of a fire is extremely important for reducing property loss and even casualties. In the prior art, smoke and fire detection is usually performed on images based on deep learning; if the detection result indicates a suspected smoke and fire area, an early warning is issued directly, and no secondary verification of the detection result is performed. As a result, serious false alarms and missed alarms readily occur, the accuracy is low, and whether a fire has actually occurred must be confirmed and verified manually in an irregular manner.
The present invention has been made in view of this situation.
Disclosure of Invention
The invention aims to overcome the defects of the prior art and provides a method and a device for detecting smoke and fire, so as to solve the problem of low accuracy of smoke and fire detection in the prior art.
In order to solve the technical problems, the invention adopts the technical scheme that:
in a first aspect, the present invention provides a method of detecting smoke and fire, comprising:
acquiring multiple frames of continuous images from video image data;
carrying out motion detection on each frame of image to obtain a motion area of each frame of image;
inputting the motion area into an image semantic segmentation neural network model for identification to obtain a suspected smoke segmentation area and a suspected flame segmentation area of each frame of image;
respectively acquiring a target smoke segmentation area in the suspected smoke segmentation area and a target flame segmentation area in the suspected flame segmentation area; the target smoke segmentation area is the region enclosed by all first pixel points falling simultaneously within the suspected smoke segmentation area and the motion area; the target flame segmentation area is the region enclosed by all second pixel points falling simultaneously within the suspected flame segmentation area and the motion area;
respectively calculating a first area ratio and a second area ratio of the target smoke segmentation area and the target flame segmentation area in the motion area;
if the first area ratio and the second area ratio are both greater than or equal to a first preset threshold, the target smoke segmentation area and the target flame segmentation area are determined to be a real smoke area and a real flame area respectively;
and if the real smoke area and the real flame area exist in one frame of image and have an overlapping part, generating a smoke and fire alarm.
Optionally, the method further includes:
generating a smoke and fire alarm if the real smoke region or the real flame region is present in each of the consecutive multi-frame images.
Optionally, in the case where the suspected smoke segmentation area and the suspected flame segmentation area of one frame of image are those identified in the frame itself, the inputting the motion area into the image semantic segmentation neural network model for identification to obtain a suspected smoke segmentation area and a suspected flame segmentation area of each frame of image includes:
for the motion region of each frame of image, identifying each pixel point of the motion region by using the image semantic segmentation neural network model to obtain the probability values of each pixel point being light smoke, dense smoke, small fire and big fire respectively;
determining the type of each pixel point according to the maximum probability value of each pixel point;
integrating pixel points of the light smoke type and the dense smoke type to obtain the suspected smoke segmentation area;
and integrating the pixel points of the small fire type and the large fire type to obtain the suspected flame segmentation area.
Optionally, the suspected smoke segmentation area and the suspected flame segmentation area are obtained from three adjacent frames of images, which includes:
aiming at each frame of image, merging the suspected smoke segmentation areas of the frame of image and two frames of images adjacent to the frame of image to obtain the suspected smoke segmentation areas of the frame of image;
and for each frame of image, combining the suspected flame segmentation areas of the frame of image and the two frames of images adjacent to the frame of image to obtain the suspected flame segmentation areas of the frame of image.
Optionally, the respectively obtaining a target smoke segmentation area in the suspected smoke segmentation area and a target flame segmentation area in the suspected flame segmentation area includes:
respectively acquiring coordinate values of each pixel point contained in each of the suspected smoke segmentation area, the suspected flame segmentation area and the motion area;
taking pixel points having identical coordinate values in the suspected smoke segmentation area and the motion area as first pixel points, and generating the target smoke segmentation area according to all the first pixel points;
and taking pixel points having identical coordinate values in the suspected flame segmentation area and the motion area as second pixel points, and generating the target flame segmentation area according to all the second pixel points.
Optionally, the calculating a first area ratio and a second area ratio of the target smoke segmentation area and the target flame segmentation area in the motion area respectively includes:
taking the ratio of the number of all the first pixel points contained in the target smoke segmentation area to the total number of all the pixel points in the motion area as the first area ratio;
and taking the ratio of the number of all the second pixel points contained in the target flame segmentation area to the total number of all the pixel points in the motion area as the second area ratio.
In a second aspect, the present invention provides a device for detecting smoke and fire, comprising:
the first acquisition module is used for acquiring multiple frames of continuous images from video image data;
the motion detection module is used for carrying out motion detection on each frame of image to obtain a motion area of each frame of image;
the identification module is used for inputting the motion area into an image semantic segmentation neural network model for identification to obtain a suspected smoke segmentation area and a suspected flame segmentation area of each frame of image;
a second obtaining module, configured to obtain a target smoke segmentation area in the suspected smoke segmentation area and a target flame segmentation area in the suspected flame segmentation area, respectively; the target smoke segmentation area is the region enclosed by all first pixel points falling simultaneously within the suspected smoke segmentation area and the motion area; the target flame segmentation area is the region enclosed by all second pixel points falling simultaneously within the suspected flame segmentation area and the motion area;
the calculation module is used for calculating a first area ratio and a second area ratio of the target smoke segmentation area and the target flame segmentation area in the motion area respectively;
a first determining module, configured to determine that the target smoke segmentation area and the target flame segmentation area are a real smoke area and a real flame area, respectively, when the first area ratio and the second area ratio are both greater than or equal to a first preset threshold;
and the second determining module is used for generating a smoke and fire alarm if the real smoke area and the real flame area exist in one frame of image and the real smoke area and the real flame area have overlapped parts.
Optionally, the second determining module further includes:
a first determining unit, configured to generate a smoke and fire alarm if the real smoke region or the real flame region is present in each of consecutive multi-frame images.
Optionally, the identification module includes:
the first identification unit is used for identifying, for the motion area of each frame of image, each pixel point of the motion area by using the image semantic segmentation neural network model to obtain the probability values of each pixel point being light smoke, dense smoke, small fire and big fire;
the second determining unit is used for determining the type of each pixel point according to the maximum probability value of each pixel point;
the first integration unit is used for integrating pixel points of the light smoke type and the dense smoke type to obtain the suspected smoke segmentation area;
and the second integration unit is used for integrating the pixel points of the small fire type and the large fire type to obtain the suspected flame segmentation area.
Optionally, the identification module further includes:
a third integration unit, configured to, for each frame of image, merge the suspected smoke segmentation areas of the frame of image and two frames of images adjacent to the frame of image to obtain the suspected smoke segmentation area of the frame of image;
and the fourth integration unit is used for combining the suspected flame segmentation areas of each frame of image and the two adjacent frames of images of the frame of image to obtain the suspected flame segmentation areas of the frame of image.
Optionally, the second obtaining module further includes:
the first acquisition unit is used for respectively acquiring coordinate values of each pixel point contained in each of the suspected smoke segmentation area, the suspected flame segmentation area and the motion area;
a first generation unit, configured to take pixel points having identical coordinate values in the suspected smoke segmentation area and the motion area as the first pixel points, and generate the target smoke segmentation area according to all the first pixel points;
and a second generation unit, configured to take pixel points having identical coordinate values in the suspected flame segmentation area and the motion area as the second pixel points, and generate the target flame segmentation area according to all the second pixel points.
Optionally, the calculating module further includes:
a first calculating unit, configured to take the ratio of the number of all first pixel points contained in the target smoke segmentation area to the total number of all pixel points in the motion area as the first area ratio;
and a second calculating unit, configured to take the ratio of the number of all second pixel points contained in the target flame segmentation area to the total number of all pixel points in the motion area as the second area ratio.
The invention provides a method of detecting smoke and fire, comprising: firstly, acquiring multiple frames of continuous images from video image data; then performing motion detection on each frame of image to obtain a motion area of each frame of image; next, inputting the motion area into an image semantic segmentation neural network model for identification to obtain a suspected smoke segmentation area and a suspected flame segmentation area of each frame of image; respectively acquiring a target smoke segmentation area in the suspected smoke segmentation area and a target flame segmentation area in the suspected flame segmentation area, where the target smoke segmentation area is the region enclosed by all first pixel points falling simultaneously within the suspected smoke segmentation area and the motion area, and the target flame segmentation area is the region enclosed by all second pixel points falling simultaneously within the suspected flame segmentation area and the motion area; respectively calculating a first area ratio and a second area ratio of the target smoke segmentation area and the target flame segmentation area in the motion area; if the first area ratio and the second area ratio are both greater than or equal to a first preset threshold, determining the target smoke segmentation area and the target flame segmentation area to be a real smoke area and a real flame area respectively; and finally, generating a smoke and fire alarm if the real smoke area and the real flame area exist in one frame of image and have an overlapping part.
According to the invention, each image is processed to obtain its motion area; the motion area is processed to obtain the suspected smoke segmentation area and the suspected flame segmentation area, from which the target smoke segmentation area and the target flame segmentation area are further obtained; whether a real smoke area and a real flame area exist is determined by verifying the target smoke segmentation area and the target flame segmentation area; and finally a smoke and fire alarm is generated. This secondary verification of the detection result reduces false alarms and missed alarms and improves the accuracy of smoke and fire detection.
The following describes embodiments of the present invention in further detail with reference to the accompanying drawings.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention, are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description serve to explain the invention, without limiting it. It is obvious that the drawings in the following description are only some embodiments, and that for a person skilled in the art, other drawings can be derived from them without inventive effort. In the drawings:
FIG. 1 is a schematic flow chart of a method for detecting smoke and fire according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of a device for detecting smoke and fire according to an embodiment of the invention.
It should be noted that the drawings and the description are not intended to limit the scope of the inventive concept in any way, but to illustrate it by a person skilled in the art with reference to specific embodiments.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and the following embodiments are used for illustrating the present invention and are not intended to limit the scope of the present invention.
In the description of the present invention, it should be noted that the terms "upper", "lower", "front", "rear", "left", "right", "vertical", "inner", "outer", etc. indicate orientations or positional relationships based on the orientations or positional relationships shown in the drawings, and are only for convenience of description and simplification of description, but do not indicate or imply that the device or element referred to must have a specific orientation, be constructed and operated in a specific orientation, and thus, should not be construed as limiting the present invention.
In the description of the present invention, it should be noted that, unless otherwise explicitly specified or limited, the terms "mounted," "connected," and "connected" are to be construed broadly, e.g., as meaning either a fixed connection, a removable connection, or an integral connection; can be mechanically or electrically connected; may be directly connected or indirectly connected through an intermediate. The specific meanings of the above terms in the present invention can be understood in specific cases to those skilled in the art.
As shown in fig. 1 to 2, an embodiment of the present invention provides a method for detecting smoke and fire, including:
s101, acquiring multiple frames of continuous images from video image data;
s102, carrying out motion detection on each frame of image to obtain a motion area of each frame of image;
s103, inputting the motion area into an image semantic segmentation neural network model for identification to obtain a suspected smoke segmentation area and a suspected flame segmentation area of each frame of image;
s104, respectively acquiring a target smoke segmentation area in the suspected smoke segmentation area and a target flame segmentation area in the suspected flame segmentation area; the target smoke segmentation area is an area formed by surrounding all first pixel points which fall in the suspected smoke segmentation area and the motion area at the same time; the target flame segmentation area is an area formed by all second pixel points which fall in the suspected flame segmentation area and the motion area at the same time;
s105, respectively calculating a first area ratio and a second area ratio of the target smoke segmentation area and the target flame segmentation area in the motion area;
s106, if the first area ratio and the second area ratio are both greater than or equal to a first preset threshold, determining that the target smoke segmentation area and the target flame segmentation area are a real smoke area and a real flame area respectively;
s107, if the real smoke area and the real flame area exist in one frame of image and have an overlapping part, generating a smoke and fire alarm.
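The decision logic of steps S104 to S107 can be sketched in Python as follows. This is a minimal illustration assuming boolean pixel masks as inputs; the function name and the example threshold value are illustrative, not taken from the patent.

```python
import numpy as np

def smoke_fire_alarm(motion, smoke_seg, flame_seg, threshold=0.05):
    """Decide whether to raise a smoke and fire alarm for one frame.

    motion, smoke_seg, flame_seg: boolean masks of the same shape
    (motion area, suspected smoke area, suspected flame area).
    threshold: the "first preset threshold" on the area ratios.
    """
    # S104: target areas = pixel points that fall simultaneously in the
    # suspected segmentation area and the motion area.
    target_smoke = smoke_seg & motion
    target_flame = flame_seg & motion

    # S105: area ratios of the target areas within the motion area.
    motion_px = motion.sum()
    if motion_px == 0:
        return False
    smoke_ratio = target_smoke.sum() / motion_px
    flame_ratio = target_flame.sum() / motion_px

    # S106: both ratios must reach the threshold for the target areas
    # to count as the real smoke area and the real flame area.
    if smoke_ratio < threshold or flame_ratio < threshold:
        return False

    # S107: alarm only if the real smoke and flame areas overlap.
    return bool((target_smoke & target_flame).any())
```

Note that the overlap requirement in S107 is what suppresses false alarms from smoke-like or flame-like regions that appear in isolation.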
In the above step S101, the video image data is a surveillance video captured by a camera.
Specifically, the server acquires a monitoring video in a certain scene acquired by the camera in real time, and the monitoring video is processed to obtain continuous multi-frame images.
In step S102, the motion area is an area where dynamic changes exist in the pixel points, and the motion area may include fireworks, running vehicles, moving creatures, and the like.
Specifically, the server performs motion detection on each frame of image according to the frame of image and two adjacent frames of images of the frame of image, so as to obtain a motion area of the frame of image.
For example, taking motion detection of the nth frame image to obtain its motion region: first, the (n-1)th frame image is obtained, the pixel value of each pixel point in the nth frame image is compared with the pixel value of the corresponding pixel point in the (n-1)th frame image, the pixel points whose pixel values have changed are retained, and the region covered by these changed pixel points is taken as the frame-difference image d_(n-1) of the nth and (n-1)th frame images. Similarly, the (n+1)th frame image is obtained, the pixel value of each pixel point in the nth frame image is compared with the pixel value of the corresponding pixel point in the (n+1)th frame image, the changed pixel points are retained, and the region they cover is taken as the frame-difference image d_n of the nth and (n+1)th frame images.
Further, in the frame-difference images d_(n-1) and d_n, the pixel points whose pixel value is smaller than a second preset threshold are set to 0, and the pixel points whose pixel value is greater than or equal to the second preset threshold are set to 255; the regions where the pixel points equal 255 in the two frame-difference images are combined to generate the motion area of the nth frame image.
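The three-frame differencing described above can be sketched as follows. This assumes grayscale frames as numpy arrays; interpreting "combining" the two binarized difference images as a pixel-wise union is an assumption, and the threshold value and names are illustrative.

```python
import numpy as np

def motion_area(prev, cur, nxt, change_thresh=10):
    """Three-frame differencing sketch: motion mask of the frame `cur`.

    prev, cur, nxt: consecutive grayscale frames as uint8 arrays.
    change_thresh: the "second preset threshold" on pixel change.
    Returns a binary mask (0/255) of the motion area of `cur`.
    """
    # Frame-difference images d_(n-1) and d_n: absolute pixel change
    # between the current frame and each neighbouring frame.
    d_prev = np.abs(cur.astype(np.int16) - prev.astype(np.int16))
    d_next = np.abs(cur.astype(np.int16) - nxt.astype(np.int16))

    # Binarize: changed pixels -> 255, unchanged pixels -> 0.
    b_prev = np.where(d_prev >= change_thresh, 255, 0).astype(np.uint8)
    b_next = np.where(d_next >= change_thresh, 255, 0).astype(np.uint8)

    # Combine the regions that are 255 in either difference image
    # (union, assumed interpretation of "combining" the two regions).
    return np.maximum(b_prev, b_next)
```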
Since the reference input size differs between image semantic segmentation neural network models, the input image size of the model to be used must be referred to when cropping the region of each frame image. In order to input the motion region of each frame of image obtained in step S102 into the image semantic segmentation neural network model at an appropriate size, before the motion region is input into the model to identify the suspected smoke segmentation region and the suspected flame segmentation region of each frame of image, step S103 further includes:
obtaining the input image sizes W and H of the image semantic segmentation neural network model to which the motion region is to be input, and obtaining the diagonal coordinates (minX, minY) and (maxX, maxY) of the bounding rectangle of the motion region. If W < maxX - minX, the X axis is cropped according to minX and maxX of the bounding rectangle; if W >= maxX - minX, the X axis is cropped according to (minX + maxX)/2 - W/2 and (minX + maxX)/2 + W/2. If H < maxY - minY, the Y axis is cropped according to minY and maxY of the bounding rectangle; if H >= maxY - minY, the Y axis is cropped according to (minY + maxY)/2 - H/2 and (minY + maxY)/2 + H/2.
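The crop window along one axis can be computed as follows. The centred-window formulas and the clamping to the image bounds are assumptions made for illustration; the function and parameter names are invented.

```python
def crop_axis(min_c, max_c, size, img_len):
    """Crop interval along one axis for the model input (a sketch).

    min_c, max_c: bounding-rectangle coordinates of the motion region
    along this axis; size: model input size (W or H) along this axis;
    img_len: image length along this axis.
    """
    if size < max_c - min_c:
        # Region larger than the model input: crop to the region itself.
        return min_c, max_c
    # Otherwise centre a window of the model's input size on the region.
    centre = (min_c + max_c) // 2
    lo = centre - size // 2
    hi = lo + size
    # Clamp to the image bounds (a safeguard not spelled out above).
    if lo < 0:
        lo, hi = 0, size
    elif hi > img_len:
        lo, hi = img_len - size, img_len
    return lo, hi
```

A full crop would call this once for the X axis with W and once for the Y axis with H.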
In step S103, the suspected smoke segmentation region includes the real smoke region together with portions misidentified as smoke owing to interference factors in the image. The suspected flame segmentation region includes the real flame region together with portions misidentified as flame owing to interference factors in the image.
Specifically, the motion area of each frame of image is input to an image semantic segmentation neural network model for identification to obtain a suspected smoke segmentation area and a suspected flame segmentation area of the frame of image.
However, the actual suspected smoke/flame segmentation region of one frame of image may be the region identified in that frame itself, or may be obtained from the suspected smoke/flame segmentation regions of the three adjacent frames of images.
For a more detailed understanding, in the case where the suspected smoke segmentation area and the suspected flame segmentation area are those identified in the frame of image itself, the motion area of each frame of image is input into the image semantic segmentation neural network model for identification to obtain the suspected smoke segmentation area and the suspected flame segmentation area of each frame of image, and S103 includes:
s1031, for the motion region of each frame of image, identifying each pixel point of the motion region by using the image semantic segmentation neural network model to obtain the probability values of each pixel point being light smoke, dense smoke, small fire and big fire;
s1032, determining the type of each pixel point according to the maximum probability value of each pixel point;
s1033, integrating pixel points of the light smoke type and the dense smoke type to obtain the suspected smoke segmentation area;
s1034, integrating the pixel points of the small fire type and the large fire type to obtain the suspected flame segmentation area.
In step S1031, specifically, the motion region of each frame of image is input into the image semantic segmentation neural network model, and light smoke identification, dense smoke identification, small fire identification, and big fire identification are respectively performed on each pixel point, so as to obtain a probability value that each pixel point belongs to light smoke, a probability value of dense smoke, a probability value of small fire, and a probability value of big fire.
For example, when the motion area of each frame of image is input into the image semantic segmentation neural network model, a segmentation map with different colors such as red, yellow and blue can be obtained, where red may represent the color class of smoke, yellow may represent living creatures, blue may represent the background, and so on; the different color areas are composed of different pixel points. In the decoder stage, the classified features of the image are recovered through deconvolution, the image is restored to its original size through upsampling, and finally a softmax classifier outputs the class with the maximum value for each pixel to obtain the final segmentation image.
Taking pixel point A in the motion area of a certain frame image as an example, the image semantic segmentation neural network model identifies for pixel point A the probability values of light smoke, dense smoke, small fire and big fire respectively, obtaining four probability values a1, a2, a3 and a4. In addition, the image semantic segmentation neural network model can also identify the probability value that pixel point A belongs to a living creature or the background.
In step S1032, specifically, the four probability values of each pixel point are compared, and the type of each pixel point is determined according to the type corresponding to the maximum probability value.
For example, if the maximum of the four probability values is a2, and a2 represents the probability that the pixel point belongs to dense smoke, the pixel point is determined to belong to the dense smoke type.
In step S1033, specifically, the region enclosed by the combination of light-smoke-type pixel points and dense-smoke-type pixel points is taken as the suspected smoke segmentation region within the motion region, and this region is segmented out of the motion region to obtain the suspected smoke segmentation region.
In step S1034, specifically, the region enclosed by the combination of small-fire-type pixel points and big-fire-type pixel points is taken as the suspected flame segmentation region within the motion region, and this region is segmented out of the motion region to obtain the suspected flame segmentation region.
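Steps S1031 to S1034 amount to a per-pixel argmax over the class probabilities followed by grouping. A minimal sketch, assuming the model outputs an (H, W, classes) probability array; the class order and the extra "other" class are assumptions for illustration.

```python
import numpy as np

# Assumed class order of the model output (index 4 = creatures/background).
CLASSES = ["light_smoke", "dense_smoke", "small_fire", "big_fire", "other"]

def segment(probs):
    """Group per-pixel class probabilities into smoke / flame masks.

    probs: array of shape (H, W, len(CLASSES)) with per-pixel class
    probabilities from the segmentation model.
    """
    # S1032: each pixel point takes the type with the maximum probability.
    labels = probs.argmax(axis=-1)

    # S1033/S1034: integrate light+dense smoke and small+big fire pixels.
    smoke_mask = (labels == 0) | (labels == 1)
    flame_mask = (labels == 2) | (labels == 3)
    return smoke_mask, flame_mask
```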
Alternatively, the suspected smoke segmentation area and the suspected flame segmentation area are obtained from three adjacent frames of images, specifically including:
s1035, for each frame of image, merging the suspected smoke segmentation areas of the frame of image and two frames of images adjacent to the frame of image to obtain the suspected smoke segmentation areas of the frame of image;
and S1036, aiming at each frame of image, combining the frame of image and the suspected flame segmentation areas of the two adjacent frames of images of the frame of image to obtain the suspected flame segmentation areas of the frame of image.
In step 1035, specifically, two adjacent frame images of each frame image are determined according to each frame image, and the suspected smoke segmented areas of the three adjacent frame images are combined to obtain the actual suspected smoke segmented area of the frame image.
For example, taking the nth frame image: its two adjacent frames are determined to be the (n-1)th and (n+1)th frames. If the suspected smoke segmentation areas in the (n-1)th, nth and (n+1)th frame images are area 1, area 2 and area 3 respectively, the three areas are merged, overlapping portions are removed, and the merged area is taken as the actual suspected smoke segmentation area of the nth frame image. The invention further provides a condition under which the suspected segmentation area of one frame of image is obtained from the three adjacent frames. Taking the suspected smoke segmentation area of the nth frame image as an example, suppose the area in the nth frame image contains n pixel points in total and the area in the (n-1)th frame image contains k pixel points in total. First, the number of pixel points with exactly the same coordinates in the two areas is counted as x, the ratio of x to n + k - x is calculated, and it is judged whether this ratio is greater than or equal to a second preset threshold. Similarly, suppose the area in the (n+1)th frame image contains m pixel points in total; the number of pixel points with exactly the same coordinates in the nth and (n+1)th frame areas is counted as y, the ratio of y to n + m - y is calculated, and it is judged whether this ratio is greater than or equal to the second preset threshold. If both ratios are greater than or equal to the second preset threshold, the suspected smoke segmentation areas of the (n-1)th, nth and (n+1)th frame images are merged, duplicate regions are removed, and the result is taken as the suspected smoke segmentation area of the nth frame image.
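The coincidence check described above can be sketched with coordinate sets. Note that x / (n + k - x) is the number of shared pixel points divided by the size of the union of the two areas, i.e. the Jaccard similarity of the two pixel sets; the names and the example threshold are illustrative.

```python
def merge_suspected_areas(prev_px, cur_px, next_px, sim_thresh=0.5):
    """Merge suspected areas of three adjacent frames (a sketch).

    prev_px, cur_px, next_px: sets of (x, y) coordinates of the pixel
    points in the suspected segmentation areas of frames n-1, n, n+1.
    sim_thresh: the "second preset threshold" on the coincidence ratio.
    """
    def ratio(a, b):
        # x / (n + k - x): shared pixel points over the union of the
        # two areas (Jaccard similarity of the pixel sets).
        shared = len(a & b)
        return shared / (len(a) + len(b) - shared) if a or b else 0.0

    if ratio(cur_px, prev_px) >= sim_thresh and ratio(cur_px, next_px) >= sim_thresh:
        # Merge the three areas; the set union removes duplicate regions.
        return prev_px | cur_px | next_px
    return cur_px  # otherwise keep frame n's own suspected area
```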
In step 1036, specifically, two adjacent frame images of each frame image are determined according to each frame image, and the respective suspected flame segmentation areas of the three adjacent frame images are merged to obtain an actual suspected flame segmentation area of the frame image.
In step 104, the target smoke segmentation area is a segmentation area only containing smoke, and the target flame segmentation area is a segmentation area only containing flames.
Specifically, a target smoke segmentation area in the suspected smoke segmentation area of each frame of image and a target flame segmentation area in the suspected flame segmentation area of the frame of image are respectively obtained.
For more detailed understanding of the above step S104, the method further includes:
s1041, respectively obtaining coordinate values of each pixel point contained in each of the suspected smoke partition area, the suspected flame partition area and the motion area;
s1042, taking pixel points with the same coordinate value in the suspected smoke partition area and the motion area as first pixel points, and generating the target smoke partition area according to all the first pixel points;
and S1043, taking pixel points with the same coordinate values in the suspected flame partition area and the motion area as second pixel points, and generating the target flame partition area according to all the second pixel points.
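Steps S1041 to S1043 amount to intersecting each suspected segmentation area with the motion area, coordinate by coordinate. A hedged sketch under the assumption that regions are represented either as boolean masks or as lists of (x, y) coordinate tuples (function names are illustrative):

```python
import numpy as np

def target_region(suspect_mask: np.ndarray, motion_mask: np.ndarray) -> np.ndarray:
    """Mask of pixel points that fall in BOTH the suspected (smoke or flame)
    segmentation area and the motion area -- the 'first/second pixel points'."""
    return np.logical_and(suspect_mask, motion_mask)

def target_region_coords(suspect_coords, motion_coords):
    """Coordinate-set variant: keep only (x, y) pairs present in both areas."""
    return sorted(set(suspect_coords) & set(motion_coords))

# Usage: only pixels shared by both areas survive.
shared = target_region_coords([(1, 1), (2, 2), (3, 3)], [(2, 2), (5, 5)])
```

The coordinate-set variant mirrors the patent's description of comparing abscissa and ordinate values pair by pair; the mask variant is the idiomatic NumPy equivalent.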
In step S1041, specifically, for a frame of image, the coordinate values of each pixel point in the suspected smoke segmentation area of the frame, the coordinate values of each pixel point in the suspected flame segmentation area of the frame, and the coordinate values of each pixel point in the motion area of the frame are obtained.
For example, if the suspected smoke segmentation area contains 2 pixel points, the coordinate values of both points are obtained; the coordinate values of the pixel points in the suspected flame segmentation area and in the motion area are obtained in the same way.
In step S1042, a coordinate value consists of a pixel point's value on the abscissa and its value on the ordinate.
The abscissa and ordinate values of each pixel point in the suspected smoke segmentation area are compared with those of each pixel point in the motion area to obtain the pixel points whose abscissa and ordinate values are completely identical. These pixel points are taken as first pixel points, and the area enclosed by all the first pixel points is taken as the target smoke segmentation area.
For example, the suspected smoke segmentation area of the nth frame image has 3 pixel points with coordinate values (x1, y1), (x2, y2) and (x3, y3), and the motion area of the nth frame image has 4 pixel points with coordinate values (u1, v1), (u2, v2), (u3, v3) and (u4, v4). Each of (x1, y1), (x2, y2) and (x3, y3) is compared with (u1, v1), (u2, v2), (u3, v3) and (u4, v4); if x1 equals u2 and y1 equals v2, the pixel point corresponding to (x1, y1) and the pixel point corresponding to (u2, v2) are regarded as the same pixel point, i.e., a first pixel point. If 2 first pixel points exist, the area enclosed by these 2 first pixel points is taken as the target smoke segmentation area.
In step S1043, the abscissa and ordinate values of each pixel point in the suspected flame segmentation area are compared with those of each pixel point in the motion area to obtain the pixel points whose abscissa and ordinate values are completely identical; these pixel points are taken as second pixel points, and the area enclosed by all the second pixel points is taken as the target flame segmentation area.
For example, the suspected flame segmentation area of the nth frame image has 4 pixel points with coordinate values (m1, n1), (m2, n2), (m3, n3) and (m4, n4), and the motion area of the nth frame image has 3 pixel points with coordinate values (j1, k1), (j2, k2) and (j3, k3). Each of (m1, n1), (m2, n2), (m3, n3) and (m4, n4) is compared with (j1, k1), (j2, k2) and (j3, k3); if m3 equals j1 and n3 equals k1, the pixel point corresponding to (m3, n3) and the pixel point corresponding to (j1, k1) are regarded as the same pixel point, i.e., a second pixel point. If 3 second pixel points exist, the area enclosed by these 3 second pixel points is taken as the target flame segmentation area.
In step 105, for each frame of image, a first proportion of the target smoke segmentation area within the motion area of that frame, i.e., the first area ratio, is calculated; and a second proportion of the target flame segmentation area within the motion area, i.e., the second area ratio, is calculated.
To understand in more detail how the first area ratio and the second area ratio of the target smoke segmentation area and the target flame segmentation area in the motion area are calculated, step S105 includes:
s1051, taking the ratio of the number of all the first pixel points contained in the target smoke segmentation area to the total number of all the pixel points in the motion area as the first area ratio;
s1052, taking the ratio of the number of all the second pixel points contained in the target flame partition area to the total number of all the pixel points in the motion area as the second area ratio.
In step S1051, the number of all the first pixel points in the target smoke segmentation area is counted as p, and the total number of all the pixel points in the motion area is counted as q; the ratio p/q is taken as the first area ratio.
In step S1052, the number of all the second pixel points in the target flame segmentation area is counted as r, and the total number of all the pixel points in the motion area is counted as q; the ratio r/q is taken as the second area ratio.
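The two ratios of steps S1051 and S1052 can be sketched as follows, assuming the regions are boolean masks of equal shape (the function name is illustrative):

```python
import numpy as np

def area_ratios(target_smoke: np.ndarray,
                target_flame: np.ndarray,
                motion_mask: np.ndarray):
    """First area ratio p/q and second area ratio r/q, where p and r are the
    pixel counts of the target smoke and flame areas and q is the pixel
    count of the motion area."""
    q = int(motion_mask.sum())
    if q == 0:
        return 0.0, 0.0  # no motion pixels: both ratios degenerate to zero
    p = int(target_smoke.sum())
    r = int(target_flame.sum())
    return p / q, r / q

# Usage: a 2x2 motion area (q = 4) with 1 smoke pixel and 2 flame pixels.
motion = np.ones((2, 2), bool)
smoke = np.array([[True, False], [False, False]])
flame = np.array([[True, True], [False, False]])
r1, r2 = area_ratios(smoke, flame, motion)
```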
In step S106, it is judged whether the first area ratio is greater than or equal to a first preset threshold, and whether the second area ratio is greater than or equal to the first preset threshold. If the first area ratio is greater than or equal to the first preset threshold, the target smoke segmentation area is determined to be a real smoke segmentation area; if the second area ratio is greater than or equal to the first preset threshold, the target flame segmentation area is determined to be a real flame segmentation area.
In step S107, if a real smoke segmentation area and a real flame segmentation area both exist in a frame of image, and the two areas overlap on the abscissa or on the ordinate, a smoke and fire alarm is generated directly.
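The overlap-on-abscissa-or-ordinate test of step S107 can be sketched as a comparison of the two regions' coordinate ranges (bounding-box projections). The function name and the bounding-box interpretation are assumptions; the patent does not specify how the coincidence is computed:

```python
import numpy as np

def projections_overlap(region_a: np.ndarray, region_b: np.ndarray) -> bool:
    """True if the x ranges or the y ranges of two boolean region masks
    intersect -- i.e. the regions coincide on the abscissa or the ordinate."""
    ys_a, xs_a = np.nonzero(region_a)
    ys_b, xs_b = np.nonzero(region_b)
    if xs_a.size == 0 or xs_b.size == 0:
        return False  # an empty region cannot overlap anything
    x_overlap = xs_a.min() <= xs_b.max() and xs_b.min() <= xs_a.max()
    y_overlap = ys_a.min() <= ys_b.max() and ys_b.min() <= ys_a.max()
    return bool(x_overlap or y_overlap)

# Usage: smoke above flame but sharing a column range -> alarm condition met.
smoke = np.zeros((5, 5), bool); smoke[0:2, 0:2] = True
flame = np.zeros((5, 5), bool); flame[3, 1:3] = True
alarm = projections_overlap(smoke, flame)
```

Projection overlap rather than pixel intersection matches the intuition that smoke typically sits above the flame that produces it, so the two regions share a column range without sharing pixels.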
In addition to the smoke and fire determination of step S107, the method of the present invention further includes:
and S108, if the real smoke area or the real flame area exists in the continuous multi-frame images, generating a smoke and fire alarm.
A smoke and fire alarm can also be generated if only real smoke areas exist in consecutive multi-frame images, or if only real flame areas exist in consecutive multi-frame images. The consecutive frames may be 3 or more; the present invention is not limited in this respect.
In a second aspect, the present invention provides a device for detecting smoke and fire, comprising: a first obtaining module 201, a motion detection module 202, a recognition module 203, a second obtaining module 204, a calculation module 205, a first determination module 206, and a second determination module 207.
a first obtaining module 201, configured to obtain multiple frames of continuous images from video image data;
the motion detection module 202 is configured to perform motion detection on each frame of image to obtain a motion region of each frame of image;
the identification module 203 is used for inputting the motion area into an image semantic segmentation neural network model for identification to obtain a suspected smoke segmentation area and a suspected flame segmentation area of each frame of image;
a second obtaining module 204, configured to obtain a target smoke segmentation area in the suspected smoke segmentation areas and a target flame segmentation area in the suspected flame segmentation areas, respectively; the target smoke segmentation area is an area formed by surrounding all first pixel points which fall in the suspected smoke segmentation area and the motion area at the same time; the target flame segmentation area is an area formed by surrounding all second pixel points which fall in the suspected flame segmentation area and the motion area at the same time;
a calculating module 205, configured to calculate a first area ratio and a second area ratio of the target smoke segmentation area and the target flame segmentation area in the motion area, respectively;
a first determining module 206, configured to determine that the target smoke segmentation area and the target flame segmentation area are a real smoke area and a real flame area, respectively, when the first area ratio and the second area ratio are both greater than or equal to a first preset threshold;
a second determining module 207, configured to generate a smoke and fire alarm if the real smoke region and the real flame region are both present in one frame of image, and there is an overlapping portion between the real smoke region and the real flame region.
Optionally, the second determining module further includes:
a first determining unit, configured to generate a smoke and fire alarm if the real smoke region or the real flame region is present in each of consecutive multi-frame images.
Optionally, the identification module includes:
the first identification unit is used for identifying each pixel point of the motion area by utilizing an image semantic segmentation neural network model aiming at the motion area of each frame of image to obtain the probability values of light smoke, thick smoke, small fire and big fire of each pixel point;
the second determining unit is used for determining the type of each pixel point according to the maximum probability value of each pixel point;
the first integration unit is used for integrating pixel points of a light smoke type and a dense smoke type to obtain the suspected smoke segmentation area;
and the second integration unit is used for integrating the pixels of the small fire type and the large fire type to obtain the suspected flame segmentation area.
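The per-pixel four-class identification performed by the units above can be sketched as an argmax over the network's per-pixel probabilities. The channel order [light smoke, dense smoke, small fire, big fire] is an assumption for illustration; the patent does not fix an ordering:

```python
import numpy as np

def split_masks(probs: np.ndarray):
    """probs: (H, W, 4) array of per-pixel class probabilities in the assumed
    order [light smoke, dense smoke, small fire, big fire]. Each pixel takes
    the type with the maximum probability; the two smoke types form the
    suspected smoke segmentation area and the two fire types form the
    suspected flame segmentation area."""
    labels = probs.argmax(axis=-1)          # type of each pixel point
    smoke_mask = (labels == 0) | (labels == 1)   # light smoke or dense smoke
    flame_mask = (labels == 2) | (labels == 3)   # small fire or big fire
    return smoke_mask, flame_mask

# Usage: a 1x2 image whose first pixel is most likely light smoke and whose
# second pixel is most likely big fire.
probs = np.zeros((1, 2, 4))
probs[0, 0] = [0.6, 0.2, 0.1, 0.1]
probs[0, 1] = [0.1, 0.1, 0.1, 0.7]
smoke_mask, flame_mask = split_masks(probs)
```

Note that every pixel lands in exactly one of the two masks (or neither, if a background class were added), so the integration step reduces to boolean indexing.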
Optionally, the identification module further includes:
a third integration unit, configured to, for each frame of image, merge the suspected smoke partition areas of the frame of image and two adjacent frames of images of the frame of image to obtain the suspected smoke partition area of the frame of image;
and the fourth integration unit is used for combining the suspected flame segmentation areas of each frame of image and the two adjacent frames of images of the frame of image to obtain the suspected flame segmentation areas of the frame of image.
Optionally, the second obtaining module further includes:
the first acquisition unit is used for respectively acquiring coordinate values of each pixel point contained in each of the suspected smoke segmentation area, the suspected flame segmentation area and the motion area;
a first generation unit, configured to use a pixel point with a completely same coordinate value in the suspected smoke partition area and the motion area as the first pixel point, and generate the target smoke partition area according to all the first pixel points;
and the second generation unit is used for taking pixel points with the same coordinate values in the suspected flame partition area and the motion area as second pixel points and generating the target flame partition area according to all the second pixel points.
Optionally, the calculation module further includes:
a first calculating unit, configured to use a ratio of the number of all first pixel points included in the target smoke partition region to the total number of all pixel points in the motion region as the first region proportion ratio;
and the second calculation unit is used for taking the ratio of the number of all the second pixel points contained in the target flame partition area to the total number of all the pixel points in the motion area as the second area ratio.
Although the present invention has been described with reference to the preferred embodiments, it will be understood by those skilled in the art that various changes may be made and equivalents may be substituted for elements thereof without departing from the scope of the present invention.

Claims (10)

1. A method for detecting smoke and fire, comprising the following steps:
acquiring a plurality of frames of continuous images from video image data;
carrying out motion detection on each frame of image to obtain a motion area of each frame of image;
inputting the motion area into an image semantic segmentation neural network model for identification to obtain a suspected smoke segmentation area and a suspected flame segmentation area of each frame of image;
respectively acquiring a target smoke segmentation area in the suspected smoke segmentation area and a target flame segmentation area in the suspected flame segmentation area; the target smoke segmentation area is an area formed by surrounding all first pixel points which fall in the suspected smoke segmentation area and the motion area at the same time; the target flame segmentation area is an area formed by surrounding all second pixel points which fall in the suspected flame segmentation area and the motion area at the same time;
respectively calculating a first area proportion ratio and a second area proportion ratio of the target smoke division area and the target flame division area in the motion area;
if the first area ratio and the second area ratio are both greater than or equal to a first preset threshold, determining that the target smoke segmentation area and the target flame segmentation area are a real smoke area and a real flame area, respectively;
and if the real smoke area and the real flame area exist in one frame of image and the real smoke area and the real flame area have a superposition part, generating a smoke and fire alarm.
2. The method for detecting smoke and fire according to claim 1, further comprising:
and if the real smoke area or the real flame area exists in the continuous multi-frame images, generating a smoke and fire alarm.
3. The method for detecting smoke and fire according to claim 1, wherein the inputting the motion area into an image semantic segmentation neural network model for identification to obtain a suspected smoke segmentation area and a suspected flame segmentation area of each frame of image comprises:
aiming at the motion area of each frame of image, identifying each pixel point of the motion area by utilizing an image semantic segmentation neural network model to obtain the probability values of each pixel point respectively being light smoke, thick smoke, small fire and big fire;
determining the type of each pixel point according to the maximum probability value of each pixel point;
integrating pixel points of the light smoke type and the dense smoke type to obtain the suspected smoke segmentation area;
and integrating the pixels of the small fire type and the large fire type to obtain the suspected flame segmentation area.
4. The method for detecting smoke and fire according to claim 3, wherein the suspected smoke segmentation area and the suspected flame segmentation area are derived from three adjacent frames of images, comprising:
aiming at each frame of image, merging the suspected smoke segmentation areas of the frame of image and two frames of images adjacent to the frame of image to obtain the suspected smoke segmentation areas of the frame of image;
and for each frame of image, combining the suspected flame segmentation areas of the frame of image and the two frames of images adjacent to the frame of image to obtain the suspected flame segmentation areas of the frame of image.
5. The method for detecting smoke and fire according to claim 1, wherein the respectively obtaining a target smoke segmentation area in the suspected smoke segmentation area and a target flame segmentation area in the suspected flame segmentation area comprises:
respectively acquiring coordinate values of each pixel point contained in each of the suspected smoke segmentation area, the suspected flame segmentation area and the motion area;
taking pixel points with the same coordinate values in the suspected smoke partition area and the motion area as first pixel points, and generating the target smoke partition area according to all the first pixel points;
and taking pixel points with the same coordinate value in the suspected flame partition area and the motion area as second pixel points, and generating the target flame partition area according to all the second pixel points.
6. The method for detecting smoke and fire according to claim 5, wherein the respectively calculating a first area ratio and a second area ratio of the target smoke segmentation area and the target flame segmentation area in the motion area comprises:
taking the ratio of the number of all the first pixel points contained in the target smoke partition area to the total number of all the pixel points in the motion area as the first area ratio;
and taking the ratio of the number of all the second pixel points contained in the target flame partition area to the total number of all the pixel points in the motion area as the second area ratio.
7. A device for detecting a fire, comprising:
the first acquisition module is used for acquiring multiple frames of continuous images from video image data;
the motion detection module is used for carrying out motion detection on each frame of image to obtain a motion area of each frame of image;
the identification module is used for inputting the motion area into an image semantic segmentation neural network model for identification to obtain a suspected smoke segmentation area and a suspected flame segmentation area of each frame of image;
a second obtaining module, configured to obtain a target smoke segmentation area in the suspected smoke segmentation area and a target flame segmentation area in the suspected flame segmentation area, respectively; the target smoke segmentation area is an area formed by surrounding all first pixel points which fall in the suspected smoke segmentation area and the motion area at the same time; the target flame segmentation area is an area formed by surrounding all second pixel points which fall in the suspected flame segmentation area and the motion area at the same time;
the calculation module is used for calculating a first area ratio and a second area ratio of the target smoke segmentation area and the target flame segmentation area in the motion area respectively;
a first determining module, configured to determine that the target smoke segmentation area and the target flame segmentation area are a real smoke area and a real flame area, respectively, when the first area ratio and the second area ratio are both greater than or equal to a first preset threshold;
and the second determining module is used for generating a smoke and fire alarm if the real smoke area and the real flame area exist in one frame of image and the real smoke area and the real flame area have a superposition part.
8. A device for detecting fireworks according to claim 7, wherein the second determining module further comprises:
a first determining unit, configured to generate a smoke and fire alarm if the real smoke region or the real flame region is present in each of consecutive multi-frame images.
9. A device for detecting fireworks according to claim 7, wherein the identification module comprises:
the first identification unit is used for identifying each pixel point of the motion area by utilizing an image semantic segmentation neural network model aiming at the motion area of each frame of image to obtain the probability values of light smoke, thick smoke, small fire and big fire of each pixel point;
the second determining unit is used for determining the type of each pixel point according to the maximum probability value of each pixel point;
the first integration unit is used for integrating pixel points of a light smoke type and a dense smoke type to obtain the suspected smoke segmentation area;
and the second integration unit is used for integrating the pixel points of the small fire type and the large fire type to obtain the suspected flame segmentation area.
10. A device for detecting fireworks according to claim 9, wherein the identification module further comprises:
a third integration unit, configured to, for each frame of image, merge the suspected smoke segmentation areas of the frame of image and two frames of images adjacent to the frame of image to obtain the suspected smoke segmentation area of the frame of image;
and the fourth integration unit is used for combining the suspected flame segmentation areas of each frame of image and the two adjacent frames of images of the frame of image to obtain the suspected flame segmentation areas of the frame of image.
CN202210705354.0A 2022-06-21 2022-06-21 Method and device for detecting fireworks Pending CN115937508A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210705354.0A CN115937508A (en) 2022-06-21 2022-06-21 Method and device for detecting fireworks

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210705354.0A CN115937508A (en) 2022-06-21 2022-06-21 Method and device for detecting fireworks

Publications (1)

Publication Number Publication Date
CN115937508A true CN115937508A (en) 2023-04-07

Family

ID=86554536

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210705354.0A Pending CN115937508A (en) 2022-06-21 2022-06-21 Method and device for detecting fireworks

Country Status (1)

Country Link
CN (1) CN115937508A (en)


Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116311000A (en) * 2023-05-16 2023-06-23 合肥中科类脑智能技术有限公司 Firework detection method, device, equipment and storage medium
CN117496218A (en) * 2023-10-07 2024-02-02 广州市平可捷信息科技有限公司 Smoke detection method and system based on image recognition
CN117496218B (en) * 2023-10-07 2024-05-07 广州市平可捷信息科技有限公司 Smoke detection method and system based on image recognition

Similar Documents

Publication Publication Date Title
CN110135269B (en) Fire image detection method based on mixed color model and neural network
CN115937508A (en) Method and device for detecting fireworks
CN111739250B (en) Fire detection method and system combining image processing technology and infrared sensor
CN108805042B (en) Detection method for monitoring video sheltered from leaves in road area
CN112102409B (en) Target detection method, device, equipment and storage medium
CN111222478A (en) Construction site safety protection detection method and system
CN111325051A (en) Face recognition method and device based on face image ROI selection
CN111582074A (en) Monitoring video leaf occlusion detection method based on scene depth information perception
CN112257643A (en) Smoking behavior and calling behavior identification method based on video streaming
CN114885119A (en) Intelligent monitoring alarm system and method based on computer vision
CN114821414A (en) Smoke and fire detection method and system based on improved YOLOV5 and electronic equipment
CN114998737A (en) Remote smoke detection method, system, electronic equipment and medium
CN114120171A (en) Fire smoke detection method, device and equipment based on video frame and storage medium
CN112132043A (en) Fire fighting channel occupation self-adaptive detection method based on monitoring video
CN111127358A (en) Image processing method, device and storage medium
CN110569840A (en) Target detection method and related device
CN110659627A (en) Intelligent video monitoring method based on video segmentation
CN113936252A (en) Battery car intelligent management system and method based on video monitoring
CN113158963B (en) Method and device for detecting high-altitude parabolic objects
CN113408479A (en) Flame detection method and device, computer equipment and storage medium
CN114399734A (en) Forest fire early warning method based on visual information
CN112686214A (en) Face mask detection system and method based on Retinaface algorithm
JP5286113B2 (en) Smoke detector
CN113298027B (en) Flame detection method and device, electronic equipment and storage medium
CN114937302A (en) Smoking identification method, device and equipment and computer readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination