CN116012785A - Fire level determining method, device, equipment and medium

Info

Publication number
CN116012785A
Authority
CN
China
Prior art keywords
flame
density
determining
image frame
map
Prior art date
Legal status
Pending
Application number
CN202310013207.1A
Other languages
Chinese (zh)
Inventor
邢哲
周照
闫立俊
牛京
陈长江
曹亮
韩磊磊
王康
张斌
Current Assignee
Softcom Power Information Technology Group Co ltd
Original Assignee
Softcom Power Information Technology Group Co ltd
Priority date
Filing date
Publication date
Application filed by Softcom Power Information Technology Group Co ltd filed Critical Softcom Power Information Technology Group Co ltd
Priority to CN202310013207.1A
Publication of CN116012785A
Legal status: Pending

Abstract

The invention discloses a fire level determining method, device, equipment and medium. The method comprises: detecting the flame density in an image frame based on a flame density detection model, and determining a flame density map corresponding to the image frame, the image frame being obtained by monitoring equipment monitoring flames in a to-be-monitored area; and determining the fire level of the flames in the image frame according to the flame density map, wherein the flame density map is used to represent density data of mark points on each flame. According to this technical scheme, the flame density in the monitored area can be monitored in real time, so that the fire level in the monitored area is divided accurately and quickly, which facilitates reasonable allocation of fire-fighting resources, saves human resources, and improves fire early warning and prevention.

Description

Fire level determining method, device, equipment and medium
Technical Field
The present invention relates to the field of fire monitoring technologies, and in particular, to a method, an apparatus, a device, and a medium for determining a fire level.
Background
With the rapid development of internet technology and information technology, big data, artificial intelligence, the internet of things and 5G networks have been popularized, and image processing and video processing technologies have advanced with them. Security monitoring systems based on image or video processing technology are widely applied in different occasions across different fields, because the image information they acquire is rich, the pixel resolution is high, the technology is mature, and viewing is intuitive and convenient.
Currently, security monitoring systems used for fire monitoring only monitor whether a fire scene exists in a monitored area, for example whether flames are present. They cannot judge the fire level in the monitored area, and therefore cannot directly and accurately feed back fire early warning information that would help monitoring personnel allocate fire-fighting resources reasonably. Monitoring personnel must instead judge autonomously from the surveillance video, which easily causes a fatigue response and may lead to highly hazardous fire situations being overlooked.
Therefore, how to provide a technical scheme that accurately divides fire levels is a technical problem to be urgently solved by those skilled in the art.
Disclosure of Invention
The invention provides a fire level determining method, device, equipment and medium, which can monitor the flame density in a monitored area in real time so as to divide the fire level in the monitored area accurately and quickly, thereby facilitating reasonable allocation of fire-fighting resources, saving human resources, and improving fire early warning and prevention.
According to an aspect of the present invention, there is provided a method of determining a fire level, the method comprising:
detecting flame density in an image frame based on a flame density detection model, and determining a flame density map corresponding to the image frame; the image frames are obtained by monitoring flame in a to-be-monitored area by monitoring equipment;
determining the fire level of the flame in the image frame according to the flame density map; wherein the flame density map is used to represent density data of mark points on each flame.
According to another aspect of the present invention, there is provided a fire level determining apparatus, the apparatus comprising:
the flame density map determining module is used for detecting the flame density in the image frame based on the flame density detecting model and determining a flame density map corresponding to the image frame; the image frames are obtained by monitoring flame in a to-be-monitored area by monitoring equipment;
the fire level determining module is used for determining the fire level of the flame in the image frame according to the flame density map; wherein the flame density map is used to represent density data of mark points on each flame.
According to another aspect of the present invention, there is provided an electronic apparatus including:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores a computer program executable by the at least one processor to enable the at least one processor to perform the method of determining a fire level according to any one of the embodiments of the present invention.
According to another aspect of the present invention, there is provided a computer readable storage medium storing computer instructions for causing a processor to perform the method of determining a fire level according to any one of the embodiments of the present invention.
According to the technical scheme of the invention, the flame density in an image frame is detected based on a flame density detection model, and a flame density map corresponding to the image frame is determined; the image frame is obtained by the monitoring equipment monitoring flames in the to-be-monitored area; the fire level of the flames in the image frame is determined according to the flame density map, wherein the flame density map is used to represent density data of mark points on each flame. With this technical scheme, the flame density in the monitored area can be monitored in real time, so that the fire level in the monitored area is divided accurately and quickly, which facilitates reasonable allocation of fire-fighting resources, saves human resources, and improves fire early warning and prevention.
It should be understood that the description in this section is not intended to identify key or critical features of the embodiments of the invention or to delineate the scope of the invention. Other features of the present invention will become apparent from the description that follows.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings required for describing the embodiments are briefly introduced below. It is apparent that the drawings in the following description show only some embodiments of the present invention, and a person skilled in the art may obtain other drawings from them without inventive effort.
Fig. 1 is a flowchart of a fire level determining method according to a first embodiment of the present invention;
Fig. 2 is a flowchart of a fire level determining method according to a second embodiment of the present invention;
Fig. 3 is a schematic diagram of a preprocessing convolutional neural network according to a second embodiment of the present invention;
Fig. 4 is a schematic diagram of a first convolutional neural network according to a second embodiment of the present invention;
Fig. 5 is a schematic diagram of a third convolutional neural network according to a second embodiment of the present invention;
Fig. 6 is a schematic diagram of a fourth convolutional neural network according to a second embodiment of the present invention;
Fig. 7 is a schematic structural diagram of a fire level determining device according to a third embodiment of the present invention;
Fig. 8 is a schematic structural diagram of an electronic device for implementing an embodiment of the present invention.
Detailed Description
In order that those skilled in the art will better understand the present invention, the technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the accompanying drawings. It is apparent that the described embodiments are only some, not all, embodiments of the present invention. All other embodiments obtained by those skilled in the art based on the embodiments of the present invention without inventive effort shall fall within the scope of the present invention.
It should be noted that the terms "first," "second," and the like in the description and the claims of the present invention and the above figures are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments of the invention described herein may be implemented in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
Example One
Fig. 1 is a flowchart of a fire level determining method according to a first embodiment of the present invention. This embodiment is applicable to monitoring and early warning of indoor or outdoor areas where fires are liable to occur. The method may be performed by a fire level determining device, which may be implemented in hardware and/or software. As shown in Fig. 1, the method includes:
s110, detecting flame density in an image frame based on a flame density detection model, and determining a flame density map corresponding to the image frame; the image frames are obtained by monitoring flame in the to-be-monitored area by monitoring equipment.
In indoor scenes, smoke alarms are typically used to monitor the concentration of indoor smoke to determine whether a fire is occurring; in outdoor scenes, because the monitored area is large and outdoor air circulation is strong, whether a fire occurs in the monitored area is usually checked by watching a monitor terminal display. Monitoring fires through surveillance video in this way consumes a large amount of human resources and produces many false alarms, so monitoring personnel easily develop a fatigue response.
In view of the above, the image frames in the surveillance video are identified using a flame density detection model to determine whether a fire occurs in the monitored area and the level of that fire. The flame density detection model may be used for density detection on image frames extracted from the surveillance video. The flame density map may be characterized by the proportion of pixels occupied by flames in the image frame, the number of flames in the image frame, or the mark features of the flames in the image frame.
Specifically, the acquired image frame may be input into a pre-trained flame density detection model, and a flame density map corresponding to the image frame may be output.
The monitoring device can be a device with an image acquisition and transmission function, such as a monitoring camera and an infrared camera.
As an alternative but non-limiting implementation, the determination of the image frame may include, but is not limited to, the following procedure of steps A1 to A2:
and A1, acquiring video data obtained by monitoring the flame in the to-be-monitored area by the monitoring equipment.
For the comprehensive detection of the fire level in the monitored area, at least one monitoring device can be arranged at the site where the fire is likely to occur.
The video data may be video data collected by the monitoring equipment whose video frames contain flames; for example, whether flames are present in the video data may be determined by a flame identification model. To acquire the video data, the video of the monitored area is collected by the monitoring equipment, and the VideoCapture method of OpenCV (Open Source Computer Vision Library) is then used to load the RTSP (Real Time Streaming Protocol) address of the monitoring equipment of each operation center, thereby accessing the video stream collected by the monitoring equipment.
And A2, intercepting the video data according to a preset period to obtain an image frame.
Typically, the frame rate of the acquired video data is around 25 frames per second. In an actual fire scenario, however, the flame density in the same monitored area does not change that quickly. Accordingly, the capture period of the image frames need not follow the video frame rate; for example, the preset period may be set so that one image frame is captured from the video data every 0.5 seconds.
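As an illustration of steps A1 and A2, the following is a minimal Python sketch of pulling image frames from a monitoring device's RTSP stream with OpenCV's VideoCapture; the RTSP address and the 0.5-second sampling period are hypothetical placeholders, not values prescribed by this embodiment.

```python
import time

import cv2  # OpenCV (opencv-python)

# Hypothetical RTSP address of one monitoring device; the real address
# depends on the camera vendor and each operation center's configuration.
RTSP_URL = "rtsp://user:password@192.168.1.64:554/stream1"
SAMPLE_PERIOD_S = 0.5  # preset period: keep one frame every 0.5 seconds


def sample_frames(rtsp_url, period_s):
    """Yield one image frame from the RTSP stream every `period_s` seconds."""
    cap = cv2.VideoCapture(rtsp_url)  # the VideoCapture method mentioned above
    if not cap.isOpened():
        raise RuntimeError("cannot open video stream: " + rtsp_url)
    last = 0.0
    try:
        while True:
            ok, frame = cap.read()  # decode the next frame of video data
            if not ok:
                break  # stream ended or connection dropped
            now = time.monotonic()
            if now - last >= period_s:
                last = now
                yield frame  # BGR image as a NumPy array
    finally:
        cap.release()


# Example: collect five image frames for subsequent density detection.
# frames = [f for _, f in zip(range(5), sample_frames(RTSP_URL, SAMPLE_PERIOD_S))]
```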
It will be appreciated that the image frames captured by the monitoring device are typically color images. After an image frame is acquired, it may be preprocessed and converted into a grayscale image frame in order to reduce the computational complexity of the flame density detection model when detecting the image frame. That is, an image frame composed of the three RGB channels is converted into a single-channel image, so that each pixel in the image takes only a value from 0 to 255. The advantage of this arrangement is that the amount of calculation can be reduced while the accuracy of subsequent calculation results is ensured. Specifically, the grayscale processing method in OpenCV may be used directly, i.e., the reading mode is set to 0 when the image frame is read.
Furthermore, the converted grayscale image frame can be treated as an image matrix, which further reduces the computational complexity of the flame density detection model when detecting the image frame. Specifically, since each pixel in the grayscale image frame is represented by only one numerical value (i.e., its gray value), the pixel data of the grayscale image frame can be regarded as an image matrix: the rows of the image matrix correspond to the height (in pixels) of the grayscale image frame, the columns correspond to its width (in pixels), and the matrix elements are the gray values of the individual pixels.
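The grayscale conversion and the image-matrix view described above can be sketched as follows; the file name is hypothetical, and in practice the frame would come directly from the video stream.

```python
import cv2
import numpy as np

# Hypothetical file name; in practice the frame comes from the video stream.
gray = cv2.imread("frame.jpg", 0)  # reading mode 0 = cv2.IMREAD_GRAYSCALE
if gray is None:
    raise FileNotFoundError("frame.jpg")

# A frame already held in memory as a BGR array can be converted instead:
# gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

# The grayscale frame already is the image matrix described above:
# rows = height in pixels, columns = width in pixels,
# each element the gray value (0-255) of one pixel.
matrix = np.asarray(gray, dtype=np.uint8)
print(matrix.shape, matrix.min(), matrix.max())
```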
S120, determining the fire level of the flame in the image frame according to the flame density map; wherein the flame density map is used to represent density data of mark points on each flame.
The flame density map includes the density data of the mark points of each flame in the image frame; it can be understood that the larger the density data, the larger the fire of that flame. The fire level may be the danger level of the flames in the image frame in the current scene, and may be determined according to the specific scene.
In an alternative embodiment, to determine the fire level of the flames in the image frame according to the flame density map, a fire level determination model can be established; the fire level determination model is obtained by training in advance on a plurality of flame density maps marked with fire levels. In an embodiment of the invention, the flame density map may be input into the fire level determination model to output the fire level of the flames.
In another alternative embodiment, determining the fire level of the flame in the image frame from the flame density map may include, but is not limited to, the process of steps B1 through B2 as follows:
and B1, determining flame density data according to the flame density map.
It should be noted that, at least one flame exists in the image frame, and the density data of different flames are different, so that the flame density data of the image frame needs to be determined according to the density data of each flame in the flame density map.
In embodiments of the present invention, the flame density data may be determined in several ways. For example, the maximum density data among the density data of the flames in the flame density map is taken as the flame density data of the image frame; alternatively, a fire level coefficient is determined for each flame from its density data and a weighted sum is computed as the flame density data of the image frame; alternatively, the average of the density data of the flames in the flame density map is taken as the flame density data of the image frame. It should be noted that the embodiment of the present invention does not limit the method of determining the flame density data of the image frame.
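The three example strategies above might be sketched as follows; the function name, the strategy labels and the sample values are illustrative assumptions, not terms from this embodiment.

```python
import numpy as np


def flame_density_data(per_flame_density, strategy="max", weights=None):
    """Reduce the per-flame density data in a flame density map to a single
    value for the image frame, using one of the strategies described above."""
    d = np.asarray(per_flame_density, dtype=float)
    if strategy == "max":  # the largest single flame dominates
        return float(d.max())
    if strategy == "mean":  # average over all flames in the frame
        return float(d.mean())
    if strategy == "weighted":  # weighted sum with per-flame fire coefficients
        w = np.asarray(weights, dtype=float)
        return float(np.dot(w, d))
    raise ValueError("unknown strategy: " + strategy)


# e.g. flame_density_data([3.2, 7.8, 5.1], "max") -> 7.8
```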
And B2, determining the fire level of the flame in the image frame according to the comparison result of the flame density data and a preset threshold value.
The preset threshold may be set according to the specific scene of the monitored area and the shooting distance of the monitoring equipment. For example, if the current scene is a wild mountain area, flame density thresholds of 5/10/15 may be set, indicating fire levels of small/medium/large respectively; if the current scene is indoors, flame density thresholds of 10/20/30 may be set, likewise indicating small/medium/large. It should be noted that the embodiment of the present invention does not limit how fire levels are divided.
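A minimal sketch of the threshold comparison in step B2, using the example thresholds above; the scene names, the "no alarm" band below the smallest threshold, and the strict inequalities are assumptions.

```python
# Hypothetical per-scene thresholds taken from the examples above:
# wild mountain -> 5/10/15, indoor -> 10/20/30 (small/medium/large).
SCENE_THRESHOLDS = {
    "wild_mountain": (5.0, 10.0, 15.0),
    "indoor": (10.0, 20.0, 30.0),
}


def fire_level(density, scene):
    """Map flame density data to a fire level by threshold comparison."""
    small, medium, large = SCENE_THRESHOLDS[scene]
    if density < small:
        return "no alarm"
    if density < medium:
        return "small"
    if density < large:
        return "medium"
    return "large"


# e.g. fire_level(12.0, "wild_mountain") -> "medium"
```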
The embodiment of the invention provides a fire level determining method: the flame density in an image frame is detected based on a flame density detection model, and a flame density map corresponding to the image frame is determined, the image frame being obtained by the monitoring equipment monitoring flames in the to-be-monitored area; the fire level of the flames in the image frame is then determined according to the flame density map, wherein the flame density map is used to represent density data of mark points on each flame. With this technical scheme, the flame density in the monitored area can be monitored in real time, so that the fire level in the monitored area is divided accurately and quickly, which facilitates reasonable allocation of fire-fighting resources, saves human resources, and improves fire early warning and prevention.
Example Two
Fig. 2 is a flowchart of a fire level determining method according to a second embodiment of the present invention; this embodiment is optimized on the basis of the foregoing embodiment. As shown in Fig. 2, the method includes:
s210, preprocessing the image frame based on the preprocessing sub-model, and determining a first feature map.
The preprocessing sub-model may be used to perform preliminary extraction of the flame features in the image frame. The preprocessing sub-model can be established by adjusting, based on the acquired parameters, the convolution kernel parameters of each convolution layer in a pre-constructed preprocessing convolutional neural network.
Specifically, Fig. 3 is a schematic diagram of a preprocessing convolutional neural network according to the second embodiment of the present invention. As shown in Fig. 3, the preprocessing convolutional neural network may include two convolution layers and two activation function layers: the first convolution layer comprises 16 convolution kernels of size 9×9, and the second comprises 32 convolution kernels of size 7×7. The two activation function layers are located after the two convolution layers respectively, and the activation function may be the PReLU (Parametric Rectified Linear Unit). The benefit of the activation function layers is that they increase the non-linearity of the preprocessing sub-model, so that the preprocessing sub-model can fit a non-linear mapping.
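Under the assumption that the model is built in PyTorch on single-channel grayscale input, the preprocessing sub-model of Fig. 3 might look like the sketch below; the padding values (and PyTorch itself) are assumptions, since the patent specifies only the kernel counts, kernel sizes and PReLU activations.

```python
import torch
import torch.nn as nn


class PreprocessNet(nn.Module):
    """Preprocessing sub-model of Fig. 3: two convolution layers
    (16 kernels of 9x9, then 32 kernels of 7x7), each followed by PReLU."""

    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=9, padding=4),   # 16 kernels, 9x9
            nn.PReLU(),
            nn.Conv2d(16, 32, kernel_size=7, padding=3),  # 32 kernels, 7x7
            nn.PReLU(),
        )

    def forward(self, x):
        return self.features(x)  # the first feature map


# A single-channel grayscale frame, e.g. a batch of one 480x640 image:
# first_feature_map = PreprocessNet()(torch.rand(1, 1, 480, 640))
```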
S220, classifying the number of flames in the first characteristic map based on the flame number classification sub-model, and determining a second characteristic map.
The flame quantity classification sub-model can be used to extract and classify flame quantity features in the image frame to obtain the number of flames in the image frame. The second feature map is used to characterize the number of flames in the image frame; for example, an image frame may contain three flames. In the embodiment of the invention, the first feature map is input into a pre-trained flame quantity classification sub-model, and a second feature map containing flame quantity features is output.
As an alternative but non-limiting implementation, the construction of the flame quantity classification submodel may include, but is not limited to, the following procedure of steps C1 to C2:
step C1, acquiring a sample flame image set, and obtaining a marked flame image set according to the flame quantity marking quantity classification information in the sample flame image.
Sample flame images can be collected into the sample flame image set according to the number of flames they contain, so that the flame quantity classification sub-model can accurately classify sample flame images with different numbers of flames. For example, 1000 sample flame images each containing between 1 and 10 flames are selected as the sample flame image set. As another example, sample flame images containing 1 to 5 flames are selected in a certain proportion, e.g. a quantity ratio of 4:4:3:2:3 for images containing 1 to 5 flames respectively. It should be noted that the embodiment of the present invention does not limit the method of determining the sample flame image set.
And C2, training the first convolutional neural network by taking the marked flame image set as a training set to obtain a flame quantity classification sub-model.
In the embodiment of the invention, the marked flame image set is input as a training set into the first convolutional neural network for training, and a loss function can be set to correct the training process so as to obtain the flame quantity classification sub-model. The loss function may employ a cross-entropy loss to represent the probability-distribution difference between the output of the flame quantity classification sub-model and the observations.
Fig. 4 is a schematic diagram of a first convolutional neural network according to the second embodiment of the present invention. As shown in Fig. 4, the first convolutional neural network may include four convolution layers, seven activation function layers, two max pooling layers, three fully connected layers, and one sigmoid function layer. The stride of the two max pooling layers is 2, and the three fully connected layers contain 512, 256 and 10 neurons in sequence. Four of the activation function layers are located after the four convolution layers respectively; the two max pooling layers are located after the first two of these activation function layers; the three fully connected layers follow the last of them; the remaining three activation function layers are located after the three fully connected layers respectively; and the sigmoid function layer is located after the last activation function layer.
It should be noted that the sigmoid function is an activation function used for hidden-layer neuron output. Its value range is (0, 1), so it can map a real number into the interval (0, 1) for classification. This has the advantage of making it easier to classify image frames whose features are relatively complex or differ only slightly.
Optionally, the first convolutional neural network may further comprise an SPP-Net (spatial pyramid pooling) function layer. The benefit of the SPP-Net function layer is that it supports input image frames of arbitrary size when training the flame quantity classification sub-model.
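A PyTorch sketch of the first convolutional neural network of Fig. 4 follows. The patent fixes the pooling stride, the fully connected layer widths and the sigmoid, but not the convolution kernel counts or sizes, so those channel numbers are assumptions; an adaptive pooling layer stands in for the optional SPP-Net layer so that inputs of arbitrary size are supported.

```python
import torch
import torch.nn as nn


class FlameCountNet(nn.Module):
    """First convolutional neural network of Fig. 4: four convolution layers
    each followed by an activation, max pooling (stride 2) after the first
    two, fully connected layers of 512, 256 and 10 neurons each followed by
    an activation, and a closing sigmoid layer."""

    def __init__(self, in_channels=32, num_classes=10):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(in_channels, 64, 3, padding=1), nn.PReLU(),
            nn.MaxPool2d(2, stride=2),
            nn.Conv2d(64, 128, 3, padding=1), nn.PReLU(),
            nn.MaxPool2d(2, stride=2),
            nn.Conv2d(128, 128, 3, padding=1), nn.PReLU(),
            nn.Conv2d(128, 64, 3, padding=1), nn.PReLU(),
        )
        # Stand-in for the optional SPP-Net layer: fixes the feature size
        # so that image frames of arbitrary size can be classified.
        self.pool = nn.AdaptiveMaxPool2d((4, 4))
        self.fc = nn.Sequential(
            nn.Linear(64 * 4 * 4, 512), nn.PReLU(),
            nn.Linear(512, 256), nn.PReLU(),
            nn.Linear(256, num_classes), nn.PReLU(),
            nn.Sigmoid(),  # maps scores into (0, 1)
        )

    def forward(self, x):
        x = self.pool(self.conv(x)).flatten(1)
        return self.fc(x)  # per-class flame-count scores
```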
And S230, performing flame density detection on the first characteristic map and the second characteristic map based on a flame density detection sub-model, and determining a flame density map corresponding to the image frame.
It should be noted that, in order for the flame quantity classification sub-model to classify the number of flames in the first feature map accurately, the max pooling layers are used to reduce the estimation mean shift caused by convolution-layer parameter errors, but they lose part of the information in the image frame. Therefore, in the embodiment of the invention, the first feature map and the second feature map are input into the flame density detection sub-model together, so that the detail features lost in the max pooling layers can be recovered and the resolution of the output flame density map improved.
In the embodiment of the invention, flame density detection is performed on the first feature map and the second feature map based on the flame density detection sub-model to determine the flame density map corresponding to the image frame: the detail features lost from the second feature map can be recovered according to the first feature map, and the flame density is marked according to the number of flames marked in the second feature map, thereby obtaining the flame density map corresponding to the image frame. It should be noted that the number of flames marked in the second feature map affects the granularity of the features extracted by the convolution layers in the flame density detection sub-model; that is, the more flames are marked in the second feature map, the more features the convolution layers in the flame density detection sub-model extract.
As an alternative but non-limiting implementation, the construction of the flame density detection sub-model may include, but is not limited to, the following process of steps D1 to D2:
and D1, marking mark points on flames of the sample flame images in the sample flame image set so as to represent flame fire through the mark point density data.
In the embodiment of the present invention, marking mark points on the flames of the sample flame images in the sample flame image set may, for example, consist of setting the position corresponding to each group of flames to 1 according to the classification result of the number of flames in the sample flame image.
And D2, training the second convolutional neural network by taking the sample flame image set marked with the mark points as a training set to obtain a flame density detection sub-model.
In the embodiment of the invention, the sample flame image set marked with mark points is input as a training set into the second convolutional neural network for training, and a loss function can be set to correct the training process so as to obtain the flame density detection sub-model. The loss function may employ a standard pixel-wise Euclidean distance loss.
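A minimal sketch of such a pixel-wise Euclidean distance loss in PyTorch; the 0.5 factor and the batch averaging are conventional choices, not details specified by the patent.

```python
import torch


def pixelwise_euclidean_loss(pred, target):
    """Pixel-wise Euclidean distance loss between the predicted density map
    and the ground-truth mark-point density map, averaged over the batch."""
    # 0.5 * sum of squared per-pixel differences, then mean over the batch
    return 0.5 * ((pred - target) ** 2).sum(dim=(1, 2, 3)).mean()
```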
Alternatively, the flame density detection submodel may include a preliminary processing submodel and an output submodel, and accordingly, the second convolutional neural network may include a third convolutional neural network and a fourth convolutional neural network.
Fig. 5 is a schematic diagram of a third convolutional neural network according to the second embodiment of the present invention. As shown in Fig. 5, the third convolutional neural network may include four convolution layers, four activation function layers, and two max pooling layers. The first convolution layer comprises 20 convolution kernels of size 7×7, the second comprises 40 kernels of size 5×5, the third comprises 20 kernels of size 5×5, and the fourth comprises 10 kernels of size 5×5. The four activation function layers are located after the four convolution layers respectively, and the two max pooling layers are located after the first two activation function layers respectively.
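A PyTorch sketch of this third convolutional neural network; the input channel count, the padding and the choice of PReLU activations are assumptions.

```python
import torch
import torch.nn as nn


class DensityFrontEnd(nn.Module):
    """Third convolutional neural network of Fig. 5: convolution layers of
    20 (7x7), 40 (5x5), 20 (5x5) and 10 (5x5) kernels, each followed by an
    activation, with max pooling after the first two."""

    def __init__(self, in_channels=32):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 20, 7, padding=3), nn.PReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(20, 40, 5, padding=2), nn.PReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(40, 20, 5, padding=2), nn.PReLU(),
            nn.Conv2d(20, 10, 5, padding=2), nn.PReLU(),
        )

    def forward(self, x):
        return self.features(x)
```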
Fig. 6 is a schematic diagram of a fourth convolutional neural network according to the second embodiment of the present invention. As shown in Fig. 6, the fourth convolutional neural network may include two convolution layers and two deconvolution layers. The first convolution layer contains 24 convolution kernels of size 3×3, the second contains 32 convolution kernels of size 3×3, and the feature numbers of the two deconvolution layers are 16 and 18 respectively; the two deconvolution layers follow the two convolution layers. The advantage of this arrangement is that the image output by the flame density detection sub-model can be restored to the original image size, further repairing the detail loss caused by the max pooling layers.
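A PyTorch sketch of this fourth convolutional neural network; the strides that make each deconvolution double the spatial size, the activations, and the final 1-channel projection are assumptions the patent does not specify.

```python
import torch
import torch.nn as nn


class DensityOutputNet(nn.Module):
    """Fourth convolutional neural network of Fig. 6: convolution layers of
    24 and 32 kernels (3x3), then two deconvolution (transposed convolution)
    layers with 16 and 18 output features."""

    def __init__(self, in_channels=10):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, 24, 3, padding=1), nn.PReLU(),
            nn.Conv2d(24, 32, 3, padding=1), nn.PReLU(),
            # Each transposed convolution doubles the spatial size,
            # undoing one of the earlier max pooling layers.
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.PReLU(),
            nn.ConvTranspose2d(16, 18, 4, stride=2, padding=1), nn.PReLU(),
            nn.Conv2d(18, 1, 1),  # assumed projection to a 1-channel density map
        )

    def forward(self, x):
        return self.net(x)  # flame density map at the original image size
```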
As an alternative but non-limiting implementation manner, taking the sample flame image set marked with the mark points as a training set, training the second convolutional neural network to obtain a flame density detection submodel, which may include, but is not limited to, the following processes of steps E1 to E3:
e1, processing the sample flame image set based on the pretreatment sub-model to obtain a sample feature map.
The sample feature map may be an image obtained by performing preliminary extraction on features in a sample flame image. The sample flame image set is processed based on the preprocessing sub-model, so that more detail features in the sample flame image can be reserved, and the loss of image detail features during subsequent further processing is avoided.
And E2, processing the sample feature map based on the flame quantity classification submodel to obtain a classification feature map.
The classification feature map is used for representing flame quantity classification results in the sample feature map.
And E3, training a second convolutional neural network by taking the sample feature map, the sample flame image set marked with the mark points and the classification feature map as training sets to obtain a flame density detection submodel.
In the embodiment of the invention, the sample feature map, the sample flame image set marked with mark points, and the classification feature map are taken as training sets and input simultaneously into the second convolutional neural network for training; a loss function is set to correct the training results, and when the training results meet a preset requirement, the flame density detection sub-model is determined.
S240, determining the fire level of the flame in the image frame according to the flame density map; wherein the flame density map is used to represent density data of mark points on each flame.
The embodiment of the invention provides a fire level determining method: an image frame is preprocessed based on the preprocessing sub-model to determine a first feature map; the number of flames in the first feature map is classified based on the flame quantity classification sub-model to determine a second feature map; flame density detection is performed on the first and second feature maps based on the flame density detection sub-model to determine the flame density map corresponding to the image frame; and the fire level of the flames in the image frame is determined according to the flame density map, wherein the flame density map is used to represent density data of mark points on each flame. With this technical scheme, shared features are first extracted from the image frame and the number of flames in the image frame is processed as an advance prior; the flame density is then estimated from the extracted shared features and the flame quantity classification result. This improves the resolution of the flame density map and the accuracy of fire level division, facilitates reasonable allocation of fire-fighting resources, saves human resources, and improves fire early warning and prevention.
Example Three
Fig. 7 is a schematic structural diagram of a fire level determining device according to a third embodiment of the present invention. As shown in Fig. 7, the apparatus includes:
a flame density map determining module 710, configured to detect flame density in an image frame based on a flame density detection model, and determine a flame density map corresponding to the image frame; the image frames are obtained by monitoring flame in a to-be-monitored area by monitoring equipment;
a fire level determining module 720, configured to determine a fire level of a flame in the image frame according to the flame density map; wherein the flame density map is used to represent density data of mark points on each flame.
The embodiment of the invention provides a fire level determining device: the flame density in an image frame is detected based on a flame density detection model to determine a flame density map corresponding to the image frame, the image frame being obtained by the monitoring equipment monitoring flames in the to-be-monitored area; the fire level of the flames in the image frame is determined according to the flame density map, wherein the flame density map is used to represent density data of mark points on each flame. With this technical scheme, the flame density in the monitored area can be monitored in real time, so that the fire level in the monitored area is divided accurately and quickly, which facilitates reasonable allocation of fire-fighting resources, saves human resources, and improves fire early warning and prevention.
Further, the flame density map determination module 710 includes:
the first feature map determining unit is used for preprocessing the image frame based on the preprocessing sub-model to determine a first feature map;
the second characteristic diagram determining unit is used for classifying the flame quantity in the first characteristic diagram based on the flame quantity classifying sub-model to determine a second characteristic diagram;
and the flame density map determining unit is used for detecting the flame density of the first characteristic map and the second characteristic map based on the flame density detection sub-model and determining a flame density map corresponding to the image frame.
Further, the flame quantity classification submodel is constructed by:
acquiring a sample flame image set, and marking quantity classification information according to the number of flames in each sample flame image to obtain a marked flame image set;
and training the first convolutional neural network by taking the marked flame image set as a training set to obtain a flame quantity classification sub-model.
Further, the flame density detection submodel is constructed by:
marking mark points on flames of the sample flame images in the sample flame image set so as to represent flame fire through the density data of the mark points;
and training the second convolution neural network by taking the sample flame image set marked with the mark points as a training set to obtain a flame density detection sub-model.
Further, training the second convolutional neural network by taking the sample flame image set marked with the mark points as a training set to obtain a flame density detection sub-model, which comprises the following steps:
processing the sample flame image set based on the pretreatment sub-model to obtain a sample feature map;
processing the sample feature map based on the flame quantity classification sub-model to obtain a classification feature map;
and training the second convolutional neural network by taking the sample feature map, the sample flame image set marked with the mark points and the classification feature map as training sets to obtain a flame density detection submodel.
Further, the fire level determining module 720 includes:
a flame density data determining unit for determining flame density data according to the flame density map;
and the fire level determining unit is used for determining the fire level of the flame in the image frame according to the comparison result of the flame density data and the preset threshold value.
Further, the determining manner of the image frame includes:
acquiring video data obtained by monitoring flame in a to-be-monitored area by monitoring equipment;
and intercepting the video data according to a preset period to obtain an image frame.
The fire grade determining device provided by the embodiment of the invention can execute the fire grade determining method provided by any embodiment of the invention, and has the corresponding functional modules and beneficial effects of the executing method.
Example Four
Fig. 8 shows a schematic diagram of the structure of an electronic device 10 that may be used to implement an embodiment of the invention. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. Electronic devices may also represent various forms of mobile devices, such as personal digital assistants, cellular telephones, smartphones, wearable devices (e.g., helmets, glasses, watches, etc.), and other similar computing devices. The components shown herein, their connections and relationships, and their functions are meant to be examples only, and are not meant to limit the implementations of the invention described and/or claimed herein.
As shown in Fig. 8, the electronic device 10 includes at least one processor 11 and a memory communicatively connected to the at least one processor 11, such as a read-only memory (ROM) 12 and a random access memory (RAM) 13, in which a computer program executable by the at least one processor is stored. The processor 11 may perform various appropriate actions and processes according to the computer program stored in the ROM 12 or loaded from the storage unit 18 into the RAM 13. Various programs and data required for the operation of the electronic device 10 may also be stored in the RAM 13. The processor 11, the ROM 12 and the RAM 13 are connected to each other via a bus 14. An input/output (I/O) interface 15 is also connected to the bus 14.
Various components in the electronic device 10 are connected to the I/O interface 15, including: an input unit 16 such as a keyboard, a mouse, etc.; an output unit 17 such as various types of displays, speakers, and the like; a storage unit 18 such as a magnetic disk, an optical disk, or the like; and a communication unit 19 such as a network card, modem, wireless communication transceiver, etc. The communication unit 19 allows the electronic device 10 to exchange information/data with other devices via a computer network, such as the internet, and/or various telecommunication networks.
The processor 11 may be any of various general and/or special purpose processing components having processing and computing capabilities. Some examples of the processor 11 include, but are not limited to, a central processing unit (CPU), a graphics processing unit (GPU), various specialized artificial intelligence (AI) computing chips, various processors running machine learning model algorithms, digital signal processors (DSPs), and any suitable processor, controller, microcontroller, etc. The processor 11 performs the various methods and processes described above, such as the fire level determining method.
In some embodiments, the method of determining the fire level may be implemented as a computer program tangibly embodied on a computer-readable storage medium, such as the storage unit 18. In some embodiments, part or all of the computer program may be loaded and/or installed onto the electronic device 10 via the ROM 12 and/or the communication unit 19. When the computer program is loaded into RAM 13 and executed by processor 11, one or more steps of the fire level determination method described above may be performed. Alternatively, in other embodiments, the processor 11 may be configured to perform the method of determining the fire level in any other suitable manner (e.g., by means of firmware).
Various implementations of the systems and techniques described above may be implemented in digital electronic circuitry, integrated circuit systems, field programmable gate arrays (FPGAs), application-specific integrated circuits (ASICs), application-specific standard products (ASSPs), systems on chip (SOCs), complex programmable logic devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may be implemented in one or more computer programs, which may be executed and/or interpreted on a programmable system including at least one programmable processor; the programmable processor may be a special-purpose or general-purpose programmable processor, and may receive data and instructions from, and transmit data and instructions to, a storage system, at least one input device, and at least one output device.
A computer program for carrying out methods of the present invention may be written in any combination of one or more programming languages. These computer programs may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the computer programs, when executed by the processor, cause the functions/acts specified in the flowchart and/or block diagram block or blocks to be implemented. The computer program may execute entirely on the machine, partly on the machine, as a stand-alone software package, partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of the present invention, a computer-readable storage medium may be a tangible medium that can contain, or store a computer program for use by or in connection with an instruction execution system, apparatus, or device. The computer readable storage medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. Alternatively, the computer readable storage medium may be a machine readable signal medium. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on an electronic device having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) through which a user can provide input to the electronic device. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user may be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic input, speech input, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local area networks (LANs), wide area networks (WANs), blockchain networks, and the internet.
The computing system may include clients and servers. A client and a server are typically remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server can be a cloud server, also called a cloud computing server or a cloud host; it is a host product in a cloud computing service system and overcomes the defects of difficult management and weak service scalability found in traditional physical hosts and VPS (Virtual Private Server) services.
It should be appreciated that various forms of the flows shown above may be used to reorder, add, or delete steps. For example, the steps described in the present invention may be performed in parallel, sequentially, or in a different order, so long as the desired results of the technical solution of the present invention are achieved, and the present invention is not limited herein.
The above embodiments do not limit the scope of the present invention. It will be apparent to those skilled in the art that various modifications, combinations, sub-combinations and alternatives are possible, depending on design requirements and other factors. Any modifications, equivalent substitutions and improvements made within the spirit and principles of the present invention should be included in the scope of the present invention.

Claims (10)

1. A method of determining a fire level, comprising:
detecting flame density in an image frame based on a flame density detection model, and determining a flame density map corresponding to the image frame; the image frames are obtained by monitoring flame in a to-be-monitored area by monitoring equipment;
determining the fire level of the flame in the image frame according to the flame density map; wherein the flame density map is used to represent density data of mark points on each flame.
2. The method of claim 1, wherein detecting flame density in an image frame based on a flame density detection model, determining a flame density map corresponding to the image frame, comprises:
preprocessing the image frame based on a preprocessing sub-model, and determining a first feature map;
classifying the flame quantity in the first feature map based on a flame quantity classification sub-model, and determining a second feature map;
and detecting the flame density of the first characteristic map and the second characteristic map based on a flame density detection sub-model, and determining a flame density map corresponding to the image frame.
3. The method of claim 2, wherein the flame quantity classification sub-model is constructed by:
acquiring a sample flame image set, and marking quantity classification information according to the number of flames in each sample flame image to obtain a marked flame image set;
and training the first convolutional neural network by taking the marked flame image set as a training set to obtain a flame quantity classification sub-model.
4. A method according to claim 3, wherein the flame density detection submodel is constructed by:
marking mark points on flames of the sample flame images in the sample flame image set so as to represent flame fire through the density data of the mark points;
and training the second convolution neural network by taking the sample flame image set marked with the mark points as a training set to obtain a flame density detection sub-model.
5. The method of claim 4, wherein training the second convolutional neural network using the set of sample flame images labeled with the marker points as a training set to obtain a flame density detection submodel, comprising:
processing the sample flame image set based on the pretreatment sub-model to obtain a sample feature map;
processing the sample feature map based on the flame quantity classification sub-model to obtain a classification feature map;
and training the second convolutional neural network by taking the sample feature map, the sample flame image set marked with the mark points and the classification feature map as training sets to obtain a flame density detection submodel.
6. The method of claim 1, wherein determining a fire level of a flame in the image frame from the flame density map comprises:
determining flame density data according to the flame density map;
and determining the fire level of the flame in the image frame according to the comparison result of the flame density data and a preset threshold value.
7. The method of claim 1, wherein the determining the image frame comprises:
acquiring video data obtained by monitoring flame in a to-be-monitored area by monitoring equipment;
and intercepting the video data according to a preset period to obtain an image frame.
8. A fire level determining apparatus, comprising:
the flame density map determining module is used for detecting the flame density in the image frame based on the flame density detecting model and determining a flame density map corresponding to the image frame; the image frames are obtained by monitoring flame in a to-be-monitored area by monitoring equipment;
the fire level determining module is used for determining the fire level of the flame in the image frame according to the flame density map; wherein the flame density map is used to represent density data of mark points on each flame.
9. An electronic device, the electronic device comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores a computer program executable by the at least one processor to enable the at least one processor to perform the method of determining a fire level of any one of claims 1-7.
10. A computer readable storage medium storing computer instructions for causing a processor to perform the method of determining a fire level as claimed in any one of claims 1 to 7.
CN202310013207.1A 2023-01-05 2023-01-05 Fire level determining method, device, equipment and medium Pending CN116012785A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310013207.1A CN116012785A (en) 2023-01-05 2023-01-05 Fire level determining method, device, equipment and medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310013207.1A CN116012785A (en) 2023-01-05 2023-01-05 Fire level determining method, device, equipment and medium

Publications (1)

Publication Number Publication Date
CN116012785A (en) 2023-04-25

Family

ID=86024472

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310013207.1A Pending CN116012785A (en) 2023-01-05 2023-01-05 Fire level determining method, device, equipment and medium

Country Status (1)

Country Link
CN (1) CN116012785A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination