CN114549866A - Smoke and fire detection method, device, equipment and medium - Google Patents

Smoke and fire detection method, device, equipment and medium Download PDF

Info

Publication number
CN114549866A
Authority
CN
China
Prior art keywords
frame image
optical flow
target
current frame
flow information
Prior art date
Legal status
Pending
Application number
CN202210151126.3A
Other languages
Chinese (zh)
Inventor
周敏敏
Current Assignee
Jinan Boguan Intelligent Technology Co Ltd
Original Assignee
Jinan Boguan Intelligent Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Jinan Boguan Intelligent Technology Co Ltd filed Critical Jinan Boguan Intelligent Technology Co Ltd
Priority to CN202210151126.3A priority Critical patent/CN114549866A/en
Publication of CN114549866A publication Critical patent/CN114549866A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/213Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Theoretical Computer Science (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Fire-Detection Mechanisms (AREA)

Abstract

The application discloses a smoke and fire detection method, device, equipment and medium, including: acquiring a current frame image, and extracting from a video to be detected a target frame image separated from the current frame image by a preset number of frames; acquiring optical flow information of the pixel points corresponding to the current frame image and the target frame image in the coordinate axis directions of a two-dimensional coordinate system, and generating a corresponding optical flow information graph; constructing a high-dimensional map based on the optical flow information graph and the current frame image, and inputting the current frame image, the optical flow information graph and the high-dimensional map into a detection network to determine a target detection area similar to smoke and fire characteristic information; and detecting the target detection area with a predefined static target filtering method to judge whether a target object in the target detection area is in a static state, and if so, rejecting the target detection area corresponding to that target object. By constructing a high-dimensional map and filtering static targets, the method improves the detection rate and reduces false detection, and is particularly suitable for infrared scenes.

Description

Smoke and fire detection method, device, equipment and medium
Technical Field
The invention relates to the technical field of image processing and recognition, and in particular to a smoke and fire detection method, device, equipment, and medium.
Background
With the continuous popularization of industrial automation, the concept of safe production is raised ever more frequently, especially with regard to fire protection: at sites such as gas stations and cotton mills, any sporadic spark can cause a major accident, so the control of open flames is particularly important.
At present, smoke alarms are used for early warning on the one hand, but such equipment ages easily or goes unmaintained for long periods; on the other hand, smoke and fire detection is carried out based on video images, but most techniques on the market target color-image scenes and rely on color information, so they are not suitable for the grayscale images of infrared scenes. As for the grayscale information in an infrared image, it is generally either extracted directly or converted into a pseudo-color image, but both approaches suffer from a degree of false detection.
In summary, how to detect smoke and fire in the grayscale images of infrared scenes while reducing false detection is a problem to be solved at present.
Disclosure of Invention
In view of the above, an object of the present invention is to provide a smoke and fire detection method, device, apparatus and medium, which can detect smoke and fire in a grayscale image in an infrared scene and reduce false detection. The specific scheme is as follows:
in a first aspect, the present application discloses a smoke and fire detection method comprising:
acquiring a current frame image, and extracting a target frame image meeting a preset interval frame number with the current frame image from a video to be detected;
acquiring optical flow information of pixel points corresponding to the current frame image and the target frame image in a coordinate axis direction in a two-dimensional coordinate system, and generating a corresponding optical flow information graph;
constructing a high-dimensional map based on the optical flow information map and the current frame image, and inputting the current frame image, the optical flow information map and the high-dimensional map into a detection network to determine a target detection area similar to smoke and fire characteristic information;
and detecting the target detection area by using a predefined static target filtering method, judging whether a target object in the target detection area is in a static state, and if the target object is in the static state, rejecting the target detection area corresponding to the target object.
Optionally, the acquiring optical flow information of the pixel points corresponding to the current frame image and the target frame image in the coordinate axis direction in the two-dimensional coordinate system, and generating a corresponding optical flow information map includes:
acquiring optical flow information of pixel points corresponding to the current frame image and the target frame image in the x direction and the y direction in a two-dimensional coordinate system, and generating an x-direction optical flow information graph and a y-direction optical flow information graph;
correspondingly, the constructing a high-dimensional map based on the optical flow information map and the current frame image comprises:
and constructing a high-dimensional map based on the x-direction optical flow information map, the y-direction optical flow information map, the current frame image and corresponding weight coefficients thereof.
Optionally, before constructing the high-dimensional map based on the x-direction optical flow information map, the y-direction optical flow information map, the current frame image and the corresponding weight coefficients thereof, the method further includes:
determining a first weight coefficient of an x-direction optical flow information graph; determining a second weight coefficient of a y-direction optical flow information graph and determining a third weight coefficient of the current frame image; wherein the first weight coefficient is the same as the second weight coefficient, and a sum of the first weight coefficient, the second weight coefficient, and the third weight coefficient is equal to 1.
Optionally, the determining a third weight coefficient of the current frame image includes:
constructing a comparison image comprising the current frame image and a fourth weight coefficient;
comparing the current frame image with the comparison image by using a structural similarity method to obtain a comparison value;
and when the comparison value exceeds a preset threshold value, determining the fourth weight coefficient as a third weight coefficient of the current frame image.
Optionally, the inputting the current frame image, the optical flow information graph, and the high-dimensional map into a detection network to determine a target detection area similar to the smoke and fire feature information includes:
respectively inputting the current frame image, the optical flow information graph and the high-dimensional map into a first branch network, a second branch network and a third branch network in a YOLO detection network; the number of channels in the first branch network, the second branch network and the third branch network meets a preset proportion, and the preset proportion is determined based on a synthetic proportion relation of each branch network input image;
fusing first feature information extracted by the first branch network, second feature information extracted by the second branch network and third feature information extracted by the third branch network to obtain fused feature information;
and processing the fused characteristic information by using the convolutional layer, and classifying and regressing through the detection head to determine a target detection region similar to the smoke and fire characteristic information.
Optionally, the detecting the target detection area by using a predefined static target filtering method, and determining whether a target object in the target detection area is in a static state includes:
extracting a previous frame image of the current frame image from a video to be detected, and screening out a region position corresponding to the target detection region from the previous frame image;
extracting a first data block from the target detection area and extracting a second data block from the area position;
and acquiring optical flow information of corresponding data points in the first data block and the second data block, generating corresponding optical flow data graphs, and judging whether the target object in the target detection area is in a static state or not based on the optical flow data graphs.
Optionally, the determining whether the target object in the target detection area is in a stationary state based on the optical flow data map includes:
sampling the optical flow data graph according to a preset sampling point interval to obtain a sampled optical flow data graph;
inputting the sampled optical flow data graph into a classifier to determine whether a target object in the target detection area is in a static state.
In a second aspect, the present application discloses a smoke and fire detection device comprising:
the video frame acquisition module is used for acquiring a current frame image and extracting a target frame image which meets the preset interval frame number with the current frame image from a video to be detected;
the optical flow information acquisition module is used for acquiring optical flow information of pixel points corresponding to the current frame image and the target frame image in the coordinate axis direction in a two-dimensional coordinate system and generating a corresponding optical flow information graph;
the detection module is used for constructing a high-dimensional map based on the optical flow information map and the current frame image, and inputting the current frame image, the optical flow information map and the high-dimensional map into a detection network to determine a target detection area similar to the smoke and fire characteristic information;
and the removing module is used for detecting the target detection area by using a predefined static target filtering method, judging whether a target object in the target detection area is in a static state or not, and removing the target detection area corresponding to the target object if the target object is in the static state.
In a third aspect, the present application discloses an electronic device, comprising:
a memory for storing a computer program;
a processor for executing the computer program to implement the steps of the previously disclosed smoke and fire detection method.
In a fourth aspect, the present application discloses a computer-readable storage medium for storing a computer program; wherein the computer program, when executed by a processor, implements the steps of the previously disclosed smoke and fire detection method.
Therefore, the method includes the steps that firstly, a current frame image is obtained, and a target frame image meeting the preset interval frame number with the current frame image is extracted from a video to be detected; acquiring optical flow information of pixel points corresponding to the current frame image and the target frame image in a coordinate axis direction in a two-dimensional coordinate system, and generating a corresponding optical flow information graph; constructing a high-dimensional map based on the optical flow information map and the current frame image, and inputting the current frame image, the optical flow information map and the high-dimensional map into a detection network to determine a target detection area similar to smoke and fire characteristic information; and detecting the target detection area by using a predefined static target filtering method, judging whether a target object in the target detection area is in a static state, and if the target object is in the static state, rejecting the target detection area corresponding to the target object. Therefore, the optical flow information between corresponding pixel points of a current frame image and a target frame image is acquired to generate an optical flow information graph representing the motion state of a target, a high-dimensional graph is constructed by combining the current frame image, the optical flow information graph and the high-dimensional graph are input to a detection network to be detected to obtain a target detection area, and finally whether a target object is in a static state or not is judged by a predefined static target filtering method to distinguish the static target from the motion target and eliminate the detection area where the static target is located. Through the technical scheme, smoke and fire in the gray level image can be detected in an infrared scene, the detection rate is improved, and the false detection is reduced.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, it is obvious that the drawings in the following description are only embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to the provided drawings without creative efforts.
FIG. 1 is a flow chart of a smoke and fire detection method disclosed herein;
FIG. 2a is a schematic diagram of an original image disclosed in the present application;
FIG. 2b is a schematic view of a high dimensional map as disclosed herein;
FIG. 2c is a schematic diagram of jumping difference points labeled in a high-dimensional map according to the present disclosure;
FIG. 3a is a pictorial illustration of stationary target optical flow data as disclosed herein;
FIG. 3b is a schematic diagram of optical flow data of a moving object disclosed in the present application;
FIG. 4 is a flow chart of a particular pyrotechnic detection method disclosed herein;
FIG. 5 is a flow chart of a particular pyrotechnic detection method disclosed herein;
FIG. 6 is a diagram of an original inspection network model disclosed in the present application;
FIG. 7 is a diagram of a modified detection network model disclosed herein;
FIG. 8 is a diagram of a layer convolution model disclosed herein;
FIG. 9 is a schematic illustration of a pyrotechnic detection device in accordance with the disclosure herein;
fig. 10 is a block diagram of an electronic device disclosed in the present application.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be obtained by a person skilled in the art without making any creative effort based on the embodiments in the present invention, belong to the protection scope of the present invention.
At present, regarding gray information in an infrared image, the gray information is generally directly extracted or converted into a pseudo color image, but both of the two methods have certain false detection. Therefore, the embodiment of the application discloses a smoke and fire detection method, a smoke and fire detection device, smoke and fire detection equipment and a smoke and fire detection medium, which can detect smoke and fire in a gray level image in an infrared scene and reduce false detection.
Referring to fig. 1, an embodiment of the present application discloses a smoke and fire detection method, including:
step S11: acquiring a current frame image, and extracting a target frame image meeting a preset interval frame number with the current frame image from a video to be detected.
In this embodiment, a current frame image is first acquired, and a target frame image satisfying a preset interval frame number with the current frame image is extracted from the video to be detected. It should be noted that the preset interval frame number ranges from 1 to 5; that is, the extracted target frame image is 1 to 5 frames away from the current frame image.
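The frame-pairing step above can be sketched in a few lines; this is only an illustrative reading of the 1-to-5-frame rule, and the function and variable names are not from the patent.

```python
def pick_frame_pair(frames, interval=3):
    """Return (current, target) frames `interval` apart.

    `interval` follows the patent's stated 1-5 frame range; `frames`
    is assumed to be a buffer ordered oldest to newest (hypothetical).
    """
    if not 1 <= interval <= 5:
        raise ValueError("interval must be between 1 and 5 frames")
    if len(frames) <= interval:
        raise ValueError("not enough frames buffered yet")
    current = frames[-1]            # newest frame
    target = frames[-1 - interval]  # frame `interval` steps back
    return current, target
```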
Step S12: and acquiring optical flow information of pixel points corresponding to the current frame image and the target frame image in a coordinate axis direction in a two-dimensional coordinate system, and generating a corresponding optical flow information graph.
In this embodiment, the optical flow information of the pixel points corresponding to the current frame image and the target frame image in the coordinate axis directions of a two-dimensional coordinate system is calculated by an optical flow method, and a corresponding optical flow information graph is generated.
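The patent does not name a particular optical flow algorithm. As one possible reading, a minimal dense Lucas-Kanade sketch in numpy (all names illustrative; real systems would typically use a library implementation) could produce the x- and y-direction optical flow maps:

```python
import numpy as np

def lucas_kanade_flow(prev, curr, win=5):
    """Per-pixel Lucas-Kanade flow between two grayscale frames.

    Returns the x-direction and y-direction optical flow maps the
    patent describes; a sketch, not a production implementation.
    """
    prev = prev.astype(np.float64)
    curr = curr.astype(np.float64)
    Ix = np.gradient(prev, axis=1)      # spatial gradient, x direction
    Iy = np.gradient(prev, axis=0)      # spatial gradient, y direction
    It = curr - prev                    # temporal difference
    h, w = prev.shape
    flow_x = np.zeros((h, w))
    flow_y = np.zeros((h, w))
    r = win // 2
    for yc in range(r, h - r):
        for xc in range(r, w - r):
            ix = Ix[yc - r:yc + r + 1, xc - r:xc + r + 1].ravel()
            iy = Iy[yc - r:yc + r + 1, xc - r:xc + r + 1].ravel()
            it = It[yc - r:yc + r + 1, xc - r:xc + r + 1].ravel()
            A = np.stack([ix, iy], axis=1)       # (win*win, 2) system
            ata = A.T @ A
            if np.linalg.det(ata) > 1e-6:        # skip flat/ill-posed windows
                u, v = -np.linalg.solve(ata, A.T @ it)
                flow_x[yc, xc] = u
                flow_y[yc, xc] = v
    return flow_x, flow_y
```

A smooth blob shifted one pixel to the right yields flow close to +1 in the x map and near 0 in the y map over the textured region.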
Step S13: and constructing a high-dimensional map based on the optical flow information map and the current frame image, and inputting the current frame image, the optical flow information map and the high-dimensional map into a detection network to determine a target detection area similar to the smoke and fire characteristic information.
In this embodiment, after the optical flow information graph is obtained, a high-dimensional map is constructed by combining it with the current frame image, and the current frame image, the optical flow information graph and the high-dimensional map are input into a detection network for detection, so as to determine a target detection area similar to the smoke and fire characteristic information. In this way the current frame image carries the gray information and the optical flow information graph carries the motion state information, realizing a smoke and fire detection method that, in an infrared scene, fuses the gray information with the smoke and fire optical-flow motion information into a multi-dimensional graph, and solving the problem of converting a grayscale image into a multi-dimensional information image in an infrared scene. Fig. 2a and 2b are schematic diagrams of an original image and a high-dimensional map respectively, and fig. 2c is a schematic diagram of jumping difference points marked on the basis of the high-dimensional map in fig. 2b; it can be seen that the high-dimensional map can effectively distinguish a jumping flame region, so as to increase the detection rate and reduce the false detection rate.
Step S14: and detecting the target detection area by using a predefined static target filtering method, judging whether a target object in the target detection area is in a static state, and if the target object is in the static state, rejecting the target detection area corresponding to the target object.
In this embodiment, the detecting the target detection area by using the predefined static target filtering method and determining whether the target object in the target detection area is in a static state may include: extracting a previous frame image of the current frame image from the video to be detected, and screening out from the previous frame image the region position corresponding to the target detection region; extracting a first data block from the target detection area and a second data block from the region position; and acquiring optical flow information of corresponding data points in the first data block and the second data block, generating a corresponding optical flow data graph, and judging based on the optical flow data graph whether the target object in the target detection area is in a static state. It can be understood that, in this embodiment, the previous frame image of the current frame image is extracted from the video to be detected; the current frame image is recorded as Gray_i and the previous frame as PreImg_i, and a region of interest ROI_i (the area position corresponding to the target detection area) is selected from PreImg_i. Then a first data block, recorded as ROI_now, is extracted from the target detection area, and a second data block, recorded as ROI_pre, is extracted from the area position; optical flow analysis is performed on the first and second data blocks to generate an optical flow data graph, through which it is judged whether the target object in the target detection area is in a static state.
The above determining whether the target object in the target detection area is in a stationary state based on the optical flow data map includes: sampling the optical flow data graph according to a preset sampling point interval to obtain a sampled optical flow data graph; and inputting the sampled optical flow data graph into a classifier to determine whether the target object in the target detection area is in a static state. It can be understood that, in this embodiment, the obtained optical flow data graph needs to be subsampled to obtain a sampled optical flow data graph; the preset sampling point interval may be set to 10, that is, one sampling point is taken every 10 points. Fig. 3a and fig. 3b show a sampled optical flow data graph of a stationary object and of a moving object, respectively; the sampled optical flow data graph is finally input into the classifier to determine whether the target object in the target detection area is in a static state. Through this technical scheme, false detections caused by static overexposure sources and the like can be reduced.
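The subsampling-plus-classification step can be sketched as follows. The patent leaves the classifier unspecified; the mean-magnitude threshold below is an illustrative stand-in for it, and all names and the threshold value are assumptions.

```python
import numpy as np

def is_static(flow_x, flow_y, step=10, threshold=0.5):
    """Subsample the optical flow data maps every `step` points
    (the patent's example interval is 10) and call the region static
    if the mean flow magnitude is small.

    `threshold` is a hypothetical value standing in for the patent's
    unspecified classifier.
    """
    sx = flow_x[::step, ::step]
    sy = flow_y[::step, ::step]
    magnitude = np.hypot(sx, sy)       # per-sample flow magnitude
    return float(magnitude.mean()) < threshold
```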
Therefore, the method includes the steps that firstly, a current frame image is obtained, and a target frame image meeting the preset interval frame number with the current frame image is extracted from a video to be detected; acquiring optical flow information of pixel points corresponding to the current frame image and the target frame image in a coordinate axis direction in a two-dimensional coordinate system, and generating a corresponding optical flow information graph; constructing a high-dimensional map based on the optical flow information map and the current frame image, and inputting the current frame image, the optical flow information map and the high-dimensional map into a detection network to determine a target detection area similar to smoke and fire characteristic information; and detecting the target detection area by using a predefined static target filtering method, judging whether a target object in the target detection area is in a static state, and if the target object is in the static state, rejecting the target detection area corresponding to the target object. Therefore, the optical flow information between corresponding pixel points of a current frame image and a target frame image is acquired to generate an optical flow information graph representing the motion state of a target, a high-dimensional graph is constructed by combining the current frame image, the optical flow information graph and the high-dimensional graph are input to a detection network to be detected to obtain a target detection area, and finally whether a target object is in a static state or not is judged by a predefined static target filtering method to distinguish the static target from the motion target and eliminate the detection area where the static target is located. Through the technical scheme, smoke and fire in the gray level image can be detected in an infrared scene, the detection rate is improved, and the false detection is reduced.
Referring to fig. 4, the embodiment of the present application discloses a specific smoke and fire detection method, and compared with the previous embodiment, the present embodiment further describes and optimizes the technical solution. The method specifically comprises the following steps:
step S21: acquiring a current frame image, and extracting a target frame image meeting a preset interval frame number with the current frame image from a video to be detected.
Step S22: and acquiring optical flow information of pixel points corresponding to the current frame image and the target frame image in the x direction and the y direction in a two-dimensional coordinate system, and generating an x-direction optical flow information graph and a y-direction optical flow information graph.
In this embodiment, optical flow information of pixels corresponding to the current frame image and the target frame image in the x direction and the y direction in the two-dimensional coordinate system is calculated by using an optical flow method, and an x-direction optical flow information map and a y-direction optical flow information map are generated.
Step S23: determining a first weight coefficient of an x-direction optical flow information graph; determining a second weight coefficient of a y-direction optical flow information graph and determining a third weight coefficient of the current frame image; wherein the first weight coefficient is the same as the second weight coefficient, and a sum of the first weight coefficient, the second weight coefficient, and the third weight coefficient is equal to 1.
In this embodiment, it is necessary to determine the first weight coefficient α of the x-direction optical flow information map, determine the second weight coefficient β of the y-direction optical flow information map, and determine the third weight coefficient γ of the current frame image to construct the high-dimensional map. It should be noted that α and β belong to optical flow map weights in different directions, so the values are the same, that is, the first weight coefficient α and the second weight coefficient β are the same, and it is also required to satisfy that the sum of α, β, γ is 1, that is:
α+β+γ=1;
the determining the third weight coefficient of the current frame image includes: constructing a comparison image comprising the current frame image and a fourth weight coefficient; comparing the current frame image with the comparison image by using a structural similarity method to obtain a comparison value; and when the comparison value exceeds a preset threshold value, determining the fourth weight coefficient as a third weight coefficient of the current frame image. It should be noted that γ is the weight of the gray map, and it is necessary to ensure that the information of the gray map cannot be lost too much when acquiring the γ value, otherwise the key information is lost. Therefore, in the embodiment, a comparison image including the current frame image and the fourth weight coefficient is constructed, the current frame image and the comparison image are compared by using a Structural Similarity (SSIM) method, whether a comparison value exceeds a preset threshold is determined, and when the comparison value is smaller than the preset threshold, the comparison image at this time is considered to be unreliable. In this embodiment, the preset threshold is set to 0.6, and it should be noted that SSIM is a full-reference image quality evaluation index, and measures image similarity from three aspects of brightness, contrast, and structure, where the formula is as follows:
SSIM(i, j) = ((2·u_i·u_j + c_1)(2·σ_ij + c_2)) / ((u_i² + u_j² + c_1)(σ_i² + σ_j² + c_2))
wherein i represents the current frame image and j represents the comparison image; u_i and u_j represent the mean values of the current frame image and the comparison image respectively; σ_i² and σ_j² represent their respective variances, and σ_ij represents the covariance; c_1 and c_2 are two constants, with c_1 = (0.01×255)² and c_2 = (0.03×255)².
The SSIM value ranges from 0 to 1, and a larger value indicates greater similarity; in this embodiment, α = 0.3, β = 0.3, and γ = 0.4 are used.
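The SSIM formula above can be implemented directly. The sketch below computes a single global SSIM value over the whole image, matching the patent's formula (the usual library implementations instead average SSIM over local windows):

```python
import numpy as np

def global_ssim(i_img, j_img):
    """Global SSIM between the current frame image i and the
    comparison image j, per the patent's formula."""
    i = i_img.astype(np.float64)
    j = j_img.astype(np.float64)
    c1 = (0.01 * 255) ** 2
    c2 = (0.03 * 255) ** 2
    u_i, u_j = i.mean(), j.mean()                 # means
    var_i, var_j = i.var(), j.var()               # variances
    cov_ij = ((i - u_i) * (j - u_j)).mean()       # covariance
    return ((2 * u_i * u_j + c1) * (2 * cov_ij + c2)) / (
        (u_i ** 2 + u_j ** 2 + c1) * (var_i + var_j + c2))
```

Comparing an image with itself gives 1; comparing it with a down-weighted copy (the patent's comparison image built from a fourth weight coefficient) gives a lower score, which is what the threshold test at 0.6 relies on.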
Step S24: and constructing a high-dimensional map based on the x-direction optical flow information map, the y-direction optical flow information map, the current frame image and corresponding weight coefficients thereof, and inputting the current frame image, the x-direction optical flow information map, the y-direction optical flow information map and the high-dimensional map into a detection network to determine a target detection area similar to the smoke and fire characteristic information.
In this embodiment, the high-dimensional map is constructed by the following formula:
Himg_i = α·Flow_x_i + β·Flow_y_i + γ·Gray_i
wherein Himg_i is the high-dimensional map, Flow_x_i is the x-direction optical flow information graph, Flow_y_i is the y-direction optical flow information graph, and Gray_i is the current frame image.
Step S25: and detecting the target detection area by using a predefined static target filtering method, judging whether a target object in the target detection area is in a static state, and if the target object is in the static state, rejecting the target detection area corresponding to the target object.
For more specific processing procedures of the steps S21, S24, and S25, reference may be made to corresponding contents disclosed in the foregoing embodiments, and details are not repeated here.
Therefore, before constructing the high-dimensional map, the embodiment of the application needs to determine the specific weight coefficients corresponding to the current frame image, the x-direction optical flow information graph and the y-direction optical flow information graph; when determining the third weight coefficient of the current frame image, a structural similarity evaluation is used to obtain a third weight coefficient meeting the preset threshold condition, so as to avoid losing too much key information from the gray scale map.
Referring to fig. 5, the embodiment of the present application discloses a specific smoke and fire detection method, and compared with the previous embodiment, the present embodiment further describes and optimizes the technical solution. The method specifically comprises the following steps:
step S31: acquiring a current frame image, and extracting a target frame image meeting a preset interval frame number with the current frame image from a video to be detected.
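Pairing the current frame with the frame a preset number of frames earlier can be sketched with a sliding buffer. The function name and the default interval are illustrative assumptions, not values from the disclosure:

```python
from collections import deque

def frame_pairs(frames, interval=5):
    """Yield (target_frame, current_frame) pairs separated by `interval` frames."""
    buf = deque(maxlen=interval + 1)
    for frame in frames:
        buf.append(frame)
        if len(buf) == interval + 1:
            # buf[0] is the earlier target frame, buf[-1] the current frame.
            yield buf[0], buf[-1]
```

Each incoming frame is paired with the frame `interval` positions before it, so the downstream optical-flow step always sees a fixed temporal baseline.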
Step S32: and acquiring optical flow information of pixel points corresponding to the current frame image and the target frame image in a coordinate axis direction in a two-dimensional coordinate system, and generating a corresponding optical flow information graph.
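Dense optical flow between the two frames is typically computed with an off-the-shelf routine (e.g., OpenCV's Farneback method). As a self-contained toy illustration of the idea — a per-region x- and y-direction displacement between the target frame and the current frame — a naive block-matching estimate looks like this (all names and parameter values are our assumptions):

```python
import numpy as np

def block_matching_flow(prev, curr, block=8, search=4):
    """Naive block-matching estimate of per-block (dx, dy) displacement."""
    h, w = prev.shape
    fh, fw = h // block, w // block
    flow_x = np.zeros((fh, fw))
    flow_y = np.zeros((fh, fw))
    for by in range(fh):
        for bx in range(fw):
            y0, x0 = by * block, bx * block
            patch = prev[y0:y0 + block, x0:x0 + block].astype(np.float64)
            best, best_dx, best_dy = np.inf, 0, 0
            # Exhaustive search in a small window around the block position.
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    y1, x1 = y0 + dy, x0 + dx
                    if y1 < 0 or x1 < 0 or y1 + block > h or x1 + block > w:
                        continue
                    cand = curr[y1:y1 + block, x1:x1 + block].astype(np.float64)
                    cost = np.abs(cand - patch).sum()
                    if cost < best:
                        best, best_dx, best_dy = cost, dx, dy
            flow_x[by, bx] = best_dx
            flow_y[by, bx] = best_dy
    return flow_x, flow_y
```

The two returned arrays correspond conceptually to the x-direction and y-direction optical flow information maps used in the following steps.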
Step S33: constructing a high-dimensional map based on the optical flow information map and the current frame image, and respectively inputting the current frame image, the optical flow information map and the high-dimensional map into a first branch network, a second branch network and a third branch network in a YOLO detection network; the number of channels in the first branch network, the second branch network and the third branch network meets a preset proportion, and the preset proportion is determined based on a synthetic proportion relation of each branch network input image.
In this embodiment, the detection network used is YOLO (You Only Look Once). The original backbone network is shown in fig. 6; it is modified so that the original single-image input is split across three branch networks of similar structure. The modified detection network is shown in fig. 7: the current frame image is input into a first branch network, the optical flow information map is input into a second branch network, and the high-dimensional map is input into a third branch network. The first branch network extracts the texture information of the current frame image (i.e., the original image) as first feature information; the second branch network extracts the motion information in the optical flow information map as second feature information; and the third branch network extracts high-dimensional, otherwise unknown useful information as third feature information. It should be further noted that the numbers of channels in the first, second, and third branch networks satisfy a preset ratio: based on the complexity of the data features, the three branch networks use different channel counts, increasing in the order of the current-frame-image branch, the optical-flow-information-map branch, and the high-dimensional-map branch. Redesigning the backbone network in this way allows the required useful information to be extracted effectively, and avoids the unsuccessful fitting of network parameters that can occur when a single branch must extract information serving multiple purposes.
When determining the preset ratio, the ratio between the channel numbers may be determined based on the composition ratio of each branch network's input image. It can be understood that the current frame image is a single-channel gray-scale image, the optical flow information map contains the x-direction and y-direction components (two channels), and the high-dimensional map is a 3-channel map synthesized from the current frame image and the x- and y-direction optical flow information maps; therefore, the preset channel-number ratio in this embodiment is set to 1:2:3.
step S34: and fusing the first characteristic information extracted by the first branch network, the second characteristic information extracted by the second branch network and the third characteristic information extracted by the third branch network to obtain fused characteristic information.
In this embodiment, the first feature information extracted by the first branch network, the second feature information extracted by the second branch network, and the third feature information extracted by the third branch network are fused at the last layer to obtain the fused feature information; the fusion is mainly performed along the channel dimension.
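Since the three branches carry different channel counts (the 1:2:3 ratio scaled by some base width), fusing "along the channel dimension" can be read as concatenation. A minimal sketch with hypothetical shapes (the base width, spatial size, and batch-free layout are all our assumptions):

```python
import numpy as np

base = 16  # assumed base channel width
h = w = 20  # assumed spatial size of the last-layer feature maps

# Feature maps from the three branches at the last layer, laid out (C, H, W),
# with channel counts following the assumed 1:2:3 ratio.
feat_frame = np.zeros((1 * base, h, w))  # first branch: current frame image
feat_flow = np.zeros((2 * base, h, w))   # second branch: optical flow map
feat_high = np.zeros((3 * base, h, w))   # third branch: high-dimensional map

# Fusion at the last layer: stack the branch outputs along the channel axis.
fused = np.concatenate([feat_frame, feat_flow, feat_high], axis=0)
```

The fused tensor then carries 1 + 2 + 3 = 6 times the base channel width into the subsequent convolution and detection head.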
Step S35: and processing the fused characteristic information by using the convolutional layer, and classifying and regressing through the detection head to determine a target detection region similar to the smoke and fire characteristic information.
In this embodiment, one convolution block is used to perform dimensionality reduction and feature extraction on the fused feature information, and regression and classification are then performed by a detection head (i.e., head) to determine a target detection region similar to the smoke and fire feature information. Note that the convolution block consists of a 1 × 1 convolution layer, a batch normalization layer, an activation layer, and a 3 × 3 convolution layer, as shown in fig. 8.
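The described block (1 × 1 convolution → batch normalization → activation → 3 × 3 convolution) can be sketched in plain NumPy for illustration. The function names, weight shapes, and the choice of ReLU as the activation are our assumptions, not details from the disclosure:

```python
import numpy as np

def conv1x1(x, w):
    """1x1 convolution: x is (C_in, H, W), w is (C_out, C_in)."""
    return np.einsum('oc,chw->ohw', w, x)

def batch_norm(x, eps=1e-5):
    """Per-channel normalization over the spatial dimensions (inference sketch)."""
    mean = x.mean(axis=(1, 2), keepdims=True)
    var = x.var(axis=(1, 2), keepdims=True)
    return (x - mean) / np.sqrt(var + eps)

def conv3x3(x, w):
    """3x3 'same' convolution, stride 1: w is (C_out, C_in, 3, 3)."""
    c_in, h, wd = x.shape
    xp = np.pad(x, ((0, 0), (1, 1), (1, 1)))
    out = np.zeros((w.shape[0], h, wd))
    for i in range(3):
        for j in range(3):
            # Accumulate each kernel tap over the correspondingly shifted slice.
            out += np.einsum('oc,chw->ohw', w[:, :, i, j], xp[:, i:i + h, j:j + wd])
    return out

def conv_block(x, w1, w3):
    """1x1 conv -> batch norm -> ReLU activation -> 3x3 conv."""
    return conv3x3(np.maximum(batch_norm(conv1x1(x, w1)), 0), w3)
```

Choosing fewer output channels for the 1 × 1 weights performs the dimensionality reduction mentioned above before the 3 × 3 convolution extracts spatial features.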
Step S36: and detecting the target detection area by using a predefined static target filtering method, judging whether a target object in the target detection area is in a static state, and if the target object is in the static state, rejecting the target detection area corresponding to the target object.
For more specific processing procedures of the steps S31, S32, and S36, reference may be made to corresponding contents disclosed in the foregoing embodiments, and details are not repeated here.
It can be seen that, in this embodiment of the application, the backbone of the detection network is modified so that the original single-image input is split across three branch networks: the current frame image is input into the first branch network, the optical flow information map into the second branch network, and the high-dimensional map into the third branch network, with the channel numbers of the branch networks satisfying the preset ratio. The embodiment of the application thus realizes layered processing of the different information streams by the backbone network, together with a method for selecting the number of channels for each branch.
Referring to fig. 9, an embodiment of the present application discloses a smoke and fire detection device, including:
the video frame acquisition module 11 is configured to acquire a current frame image, and extract a target frame image satisfying a preset number of frame intervals with the current frame image from a to-be-detected video;
an optical flow information obtaining module 12, configured to obtain optical flow information of pixel points corresponding to the current frame image and the target frame image in a coordinate axis direction in a two-dimensional coordinate system, and generate a corresponding optical flow information graph;
the detection module 13 is configured to construct a high-dimensional map based on the optical flow information map and the current frame image, and input the current frame image, the optical flow information map, and the high-dimensional map into a detection network to determine a target detection area similar to the smoke and fire feature information;
a rejecting module 14, configured to detect the target detection area by using a predefined static target filtering method, and determine whether a target object in the target detection area is in a static state, and if the target object is in the static state, reject the target detection area corresponding to the target object.
It can be seen that the present application first acquires a current frame image and extracts, from a video to be detected, a target frame image separated from the current frame image by a preset number of frames; then acquires optical flow information of corresponding pixel points of the current frame image and the target frame image along the coordinate-axis directions of a two-dimensional coordinate system and generates a corresponding optical flow information map; constructs a high-dimensional map based on the optical flow information map and the current frame image, and inputs the current frame image, the optical flow information map, and the high-dimensional map into a detection network to determine a target detection area similar to smoke and fire feature information; and finally detects the target detection area with a predefined static target filtering method, judges whether a target object in the target detection area is in a static state, and, if so, rejects the target detection area corresponding to that target object. In this way, an optical flow information map representing the motion state of a target is generated from the optical flow between corresponding pixel points of the current frame image and the target frame image; a high-dimensional map is constructed by combining it with the current frame image; the three maps are input into the detection network to obtain a target detection area; and the predefined static target filtering method then distinguishes static targets from moving targets and eliminates the detection areas where static targets are located. Through this technical scheme, smoke and fire in gray-scale images can be detected in infrared scenes, improving the detection rate and reducing false detections.
Fig. 10 is a schematic structural diagram of an electronic device according to an embodiment of the present application. The electronic device specifically comprises: at least one processor 21, at least one memory 22, a power supply 23, a communication interface 24, an input/output interface 25, and a communication bus 26. The memory 22 is adapted to store a computer program, which is loaded and executed by the processor 21 to implement the relevant steps of the smoke and fire detection method performed by a computer device as disclosed in any of the previous embodiments.
In this embodiment, the power supply 23 is used to provide operating voltage for each hardware device on the computer device 20; the communication interface 24 can create a data transmission channel between the computer device 20 and an external device, and a communication protocol followed by the communication interface is any communication protocol applicable to the technical solution of the present application, and is not specifically limited herein; the input/output interface 25 is configured to obtain external input data or output data to the outside, and a specific interface type thereof may be selected according to specific application requirements, which is not specifically limited herein.
The processor 21 may include one or more processing cores, such as a 4-core processor, an 8-core processor, and the like. The processor 21 may be implemented in at least one hardware form of a DSP (Digital Signal Processor), an FPGA (Field-Programmable Gate Array), and a PLA (Programmable Logic Array). The processor 21 may also include a main processor and a coprocessor, where the main processor is a processor for processing data in an awake state, also called a Central Processing Unit (CPU), and the coprocessor is a low-power processor for processing data in a standby state. In some embodiments, the processor 21 may be integrated with a GPU (Graphics Processing Unit), which is responsible for rendering and drawing the content to be displayed on the display screen. In some embodiments, the processor 21 may further include an AI (Artificial Intelligence) processor for processing calculation operations related to machine learning.
In addition, the memory 22, as a carrier for storing resources, may be a read-only memory, a random access memory, a magnetic disk, an optical disk, or the like; the resources stored thereon include an operating system 221, a computer program 222, data 223, and the like, and the storage may be transient or permanent.
The operating system 221 is used for managing and controlling each hardware device and the computer program 222 on the computer device 20, so as to realize the operation and processing of the mass data 223 in the memory 22 by the processor 21, which may be Windows, Unix, Linux, or the like. The computer program 222 may further include a computer program that can be used to perform other specific tasks in addition to the computer program that can be used to perform the smoke detection method disclosed in any of the foregoing embodiments by the computer device 20. The data 223 may include data received by the computer device and transmitted from an external device, data collected by the input/output interface 25, and the like.
Further, the present application also discloses a storage medium, in which a computer program is stored, and when the computer program is loaded and executed by a processor, the method steps executed in the smoke and fire detection process disclosed in any of the foregoing embodiments are implemented.
The embodiments are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same or similar parts among the embodiments are referred to each other. The device disclosed by the embodiment corresponds to the method disclosed by the embodiment, so that the description is simple, and the relevant points can be referred to the method part for description.
Those of skill would further appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both, and that the various illustrative components and steps have been described above generally in terms of their functionality in order to clearly illustrate this interchangeability of hardware and software. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in Random Access Memory (RAM), memory, Read Only Memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art.
Finally, it should also be noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
The method, the device, the apparatus and the storage medium for detecting smoke and fire provided by the present invention are described in detail above, and the principle and the implementation of the present invention are explained in the present document by applying specific examples, and the description of the above examples is only used to help understanding the method and the core idea of the present invention; meanwhile, for a person skilled in the art, according to the idea of the present invention, there may be variations in the specific embodiments and the application scope, and in summary, the content of the present specification should not be construed as a limitation to the present invention.

Claims (10)

1. A method of smoke detection, comprising:
acquiring a current frame image, and extracting a target frame image meeting a preset interval frame number with the current frame image from a video to be detected;
acquiring optical flow information of pixel points corresponding to the current frame image and the target frame image in a coordinate axis direction in a two-dimensional coordinate system, and generating a corresponding optical flow information graph;
constructing a high-dimensional map based on the optical flow information map and the current frame image, and inputting the current frame image, the optical flow information map and the high-dimensional map into a detection network to determine a target detection area similar to smoke and fire characteristic information;
and detecting the target detection area by using a predefined static target filtering method, judging whether a target object in the target detection area is in a static state, and if the target object is in the static state, rejecting the target detection area corresponding to the target object.
2. The smoke and fire detection method according to claim 1, wherein the acquiring optical flow information of pixel points corresponding to the current frame image and the target frame image in a coordinate axis direction in a two-dimensional coordinate system and generating a corresponding optical flow information map comprises:
acquiring optical flow information of pixel points corresponding to the current frame image and the target frame image in the x direction and the y direction in a two-dimensional coordinate system, and generating an optical flow information graph in the x direction and an optical flow information graph in the y direction;
correspondingly, the constructing a high-dimensional map based on the optical flow information map and the current frame image comprises:
and constructing a high-dimensional map based on the x-direction optical flow information map, the y-direction optical flow information map, the current frame image and corresponding weight coefficients thereof.
3. The smoke and fire detection method according to claim 2, wherein before the constructing the high-dimensional map based on the x-direction optical flow information map, the y-direction optical flow information map, the current frame image and the corresponding weight coefficients thereof, the method further comprises:
determining a first weight coefficient of an x-direction optical flow information graph; determining a second weight coefficient of a y-direction optical flow information graph and determining a third weight coefficient of the current frame image; wherein the first weight coefficient is the same as the second weight coefficient, and a sum of the first weight coefficient, the second weight coefficient, and the third weight coefficient is equal to 1.
4. A smoke detection method according to claim 3, wherein said determining a third weighting factor for said current frame image comprises:
constructing a comparison image comprising the current frame image and a fourth weight coefficient;
comparing the current frame image with the comparison image by using a structural similarity method to obtain a comparison value;
and when the comparison value exceeds a preset threshold value, determining the fourth weight coefficient as a third weight coefficient of the current frame image.
5. The smoke and fire detection method according to claim 1, wherein the inputting the current frame image, the optical flow information map and the high-dimensional map into a detection network to determine a target detection area similar to smoke and fire characteristic information comprises:
respectively inputting the current frame image, the optical flow information graph and the high-dimensional map into a first branch network, a second branch network and a third branch network in a YOLO detection network; the number of channels in the first branch network, the second branch network and the third branch network meets a preset proportion, and the preset proportion is determined based on a synthetic proportion relation of each branch network input image;
fusing first feature information extracted by the first branch network, second feature information extracted by the second branch network and third feature information extracted by the third branch network to obtain fused feature information;
and processing the fused characteristic information by using the convolutional layer, and classifying and regressing through the detection head to determine a target detection region similar to the smoke and fire characteristic information.
6. The smoke and fire detection method according to any one of claims 1 to 5, wherein the detecting the target detection area by using a predefined static target filtering method and determining whether a target object in the target detection area is in a static state comprises:
extracting a previous frame image of the current frame image from a video to be detected, and screening out a region position corresponding to the target detection region from the previous frame image;
extracting a first data block from the target detection area and extracting a second data block from the area position;
and acquiring optical flow information of corresponding data points in the first data block and the second data block, generating corresponding optical flow data graphs, and judging whether the target object in the target detection area is in a static state or not based on the optical flow data graphs.
7. The smoke and fire detection method of claim 6, wherein said determining whether a target object in the target detection area is in a stationary state based on the optical flow data map comprises:
sampling the optical flow data graph according to a preset sampling point interval to obtain a sampled optical flow data graph;
inputting the sampled optical flow data graph into a classifier to determine whether a target object in the target detection area is in a static state.
8. A smoke and fire detection device, comprising:
the video frame acquisition module is used for acquiring a current frame image and extracting a target frame image which meets the preset interval frame number with the current frame image from a video to be detected;
the optical flow information acquisition module is used for acquiring optical flow information of pixel points corresponding to the current frame image and the target frame image in the coordinate axis direction in a two-dimensional coordinate system and generating a corresponding optical flow information graph;
the detection module is used for constructing a high-dimensional map based on the optical flow information map and the current frame image, and inputting the current frame image, the optical flow information map and the high-dimensional map into a detection network to determine a target detection area similar to the smoke and fire characteristic information;
and the removing module is used for detecting the target detection area by using a predefined static target filtering method, judging whether a target object in the target detection area is in a static state or not, and removing the target detection area corresponding to the target object if the target object is in the static state.
9. An electronic device, comprising:
a memory for storing a computer program;
a processor for executing the computer program to carry out the steps of the smoke and fire detection method according to any one of claims 1 to 7.
10. A computer-readable storage medium for storing a computer program; wherein the computer program realizes the steps of the smoke detection method according to any one of claims 1 to 7 when executed by a processor.


Publications (1)

Publication Number Publication Date
CN114549866A true CN114549866A (en) 2022-05-27



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination