CN110648490A - Multi-factor flame identification method suitable for embedded platform
- Publication number: CN110648490A
- Application number: CN201910916354.3A
- Authority: CN (China)
- Prior art keywords: fire, information, frames, field video, quasi
- Legal status: Granted (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G08B17/125: Fire alarms; actuation by using a video camera to detect fire or smoke
- G06T7/254: Image analysis; analysis of motion involving subtraction of images
- G06V20/52: Scenes; surveillance or monitoring of activities, e.g. for recognising suspicious objects
- G06T2207/10016: Image acquisition modality; video, image sequence
- G06T2207/30232: Subject of image; surveillance
Abstract
The application discloses a multi-factor flame identification method suitable for an embedded platform. The method comprises: establishing a fire sample library drawn from network fire pictures and combustion experiment pictures; acquiring a plurality of field video frames; extracting the moving targets in each field video frame to obtain one or more quasi-fire areas; confirming fire in the one or more quasi-fire areas against the fire sample library and judging whether fire information exists in them; and, if so, dividing the fire information into fire grades and generating a corresponding alarm signal according to the divided grade. By processing each acquired field video frame separately, the method analyzes in real time whether the monitored environment contains fire information that may develop into a fire, confirms the fire a second time, further judges whether fire information exists, grades the fire information, and generates an alarm, making fire information identification more accurate.
Description
Technical Field
The application relates to the technical field of electronic intelligent fire fighting, in particular to a multi-factor flame identification method suitable for an embedded platform.
Background
At present, there are three main methods for fire identification. The first uses traditional fire detection sensors to detect fire information; it generally suffers from long detection times and low accuracy. The second detects fire information by image recognition: traditional digital image processing is used to manually design a set of representative features that characterize a fire. Because the manually designed features are limited, they cannot express fire information reliably across different scenes and backgrounds, so this method generally suffers from a high misjudgment rate and low robustness. The third uses deep learning to automatically learn fire features from sample pictures, replacing the manually designed feature dimensions and thereby improving robustness and accuracy. Fig. 1 shows a conventional fire identification system: in the usual deep learning approach, the live videos collected by a plurality of video collection ends are all transmitted through a switch to a background server, which performs the calculation centrally; this results in a huge calculation load.
Disclosure of Invention
Aiming at the defects of the prior art, the application provides a multi-factor flame identification method suitable for an embedded platform.
The application discloses a multi-factor flame identification method suitable for an embedded platform, comprising:
establishing a fire sample library, wherein the fire sample library is from network fire pictures and combustion experiment pictures;
respectively acquiring a plurality of field video frames;
respectively extracting the moving target in each field video frame to obtain one or more quasi-fire areas;
carrying out fire confirmation on one or more quasi fire areas according to a fire sample library, and judging whether fire information exists in the one or more quasi fire areas or not;
if yes, fire grade division is carried out on the fire information, and a corresponding alarm signal is generated according to the divided fire grade.
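The steps above can be sketched as a minimal Python pipeline. This is an illustrative skeleton only; the function names and the dictionary-based frame representation are hypothetical stand-ins for the patent's moving-target extraction, confirmation and alarm stages.

```python
def extract_moving_targets(frame):
    # Stand-in for the Gaussian-mixture moving-target extraction stage;
    # each region flagged as moving becomes a quasi-fire area.
    return [r for r in frame["regions"] if r["moving"]]

def confirm_fire(region, threshold=0.8):
    # Stand-in for the BP/SSD/Yolo confirmation against the sample library.
    return region["confidence"] > threshold

def identify_fire(frame):
    quasi_fire_areas = extract_moving_targets(frame)
    confirmed = [r for r in quasi_fire_areas if confirm_fire(r)]
    return "alarm" if confirmed else "no fire"

frame = {"regions": [
    {"moving": True,  "confidence": 0.9},   # moving, high confidence: fire
    {"moving": False, "confidence": 0.95},  # static, never reaches confirmation
]}
print(identify_fire(frame))  # -> alarm
```

Note how the static high-confidence region is filtered out before confirmation, mirroring the patent's two-stage design of motion screening followed by fire confirmation.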
According to an embodiment of the present application, determining whether fire information exists in one or more quasi-fire areas by performing fire confirmation on the one or more quasi-fire areas according to a fire sample library includes:
detecting the one or more quasi-fire areas respectively with a BP neural network algorithm, an SSD algorithm and a Yolo algorithm, wherein, when the BP neural network algorithm is used for detection, network fire pictures and combustion experiment pictures in the fire sample library are extracted to form the BP neural network training set;
respectively outputting fire confidence degrees according to the detection;
and judging whether fire information exists according to the fire confidence coefficient.
According to an embodiment of the present application, when a BP neural network algorithm is used to detect one or more quasi-fire areas, at least half of network fire pictures and at least half of combustion experiment pictures contained in a fire sample library are extracted to form a BP neural network training set.
According to an embodiment of the present application, determining whether fire information exists according to a fire confidence level includes: when the BP neural network algorithm is used for detection, the fire confidence P of the BP network is output, where P ∈ [0, 1], and whether fire information exists is judged according to P.
According to one embodiment of the application, the SSD algorithm and the Yolo algorithm are respectively used to detect fire information, outputting the SSD fire confidence P_A and the Yolo fire confidence P_B. If P > 0.8 and P_A > 0.8, the fire information is large-fire information; if P > 0.6 and P_B > 0.7, the fire information is small-fire information.
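The threshold combination above can be written as a small decision function. This is a sketch under the assumption (not stated in the text) that the large-fire rule is checked first when both rules would fire:

```python
def classify_fire(p_bp, p_ssd, p_yolo):
    # Thresholds from the embodiment: P (BP) with P_A (SSD) for a large
    # fire, P with P_B (Yolo) for a small fire. Large fire is checked
    # first as the assumed precedence when both conditions hold.
    if p_bp > 0.8 and p_ssd > 0.8:
        return "large"
    if p_bp > 0.6 and p_yolo > 0.7:
        return "small"
    return "none"

print(classify_fire(0.9, 0.85, 0.2))  # -> large
print(classify_fire(0.7, 0.3, 0.75))  # -> small
print(classify_fire(0.5, 0.9, 0.9))   # -> none
```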
According to one embodiment of the present application, classifying fire information into fire classes and generating corresponding alarm signals according to the classified fire classes includes: if the fire information is judged to be large-fire information, more than 30 and fewer than 60 consecutive field video frames are acquired; when the large-fire information appears in those frames, the fire information is divided into a first-level fire early warning and a first-level alarm signal is generated.
According to one embodiment of the present application, classifying fire information into fire classes and generating corresponding alarm signals according to the classified fire classes includes: if the fire information is judged to be large-fire information, more than 60 and fewer than 90 consecutive field video frames are acquired; when the large-fire information appears in those frames, the fire information is divided into a second-level fire early warning and a second-level alarm signal is generated.
According to one embodiment of the present application, classifying fire information into fire classes and generating corresponding alarm signals according to the classified fire classes includes: if the fire information is judged to be large-fire information, more than 90 consecutive field video frames are acquired; when the large-fire information appears in those frames, the fire information is a third-level fire early warning and a third-level alarm signal is generated.
According to one embodiment of the present application, classifying fire information into fire classes and generating corresponding alarm signals according to the classified fire classes includes: if the fire information is judged to be small-fire information, more than 15 and fewer than 30 consecutive field video frames are acquired; if the small-fire information appears in those frames, the fire information is a level-0 fire early warning and a level-0 alarm signal is generated.
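The four grading embodiments reduce to a mapping from fire size and sustained frame count to an alarm level. A minimal sketch follows; the behavior at the exact boundary counts (30, 60, 90 frames) is not specified in the text, so strict inequalities are assumed:

```python
def alarm_level(fire_size, consecutive_frames):
    # Frame-count bands from the embodiments; boundary frames are an
    # assumption since the text only states "more than" / "less than".
    if fire_size == "small" and 15 < consecutive_frames < 30:
        return 0
    if fire_size == "large":
        if 30 < consecutive_frames < 60:
            return 1
        if 60 < consecutive_frames < 90:
            return 2
        if consecutive_frames > 90:
            return 3
    return None  # fire not sustained long enough: no alarm

print(alarm_level("large", 45))   # -> 1
print(alarm_level("large", 120))  # -> 3
print(alarm_level("small", 20))   # -> 0
```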
According to an embodiment of the present application, extracting a moving object in a video frame and obtaining a quasi-fire area includes:
performing background modeling on the acquired field video frame by adopting a Gaussian mixture model modeling method;
updating parameters in the Gaussian mixture model to obtain a background image;
and subtracting the obtained background image from the field video frame, and extracting the moving target in the field video frame to obtain the quasi-fire area.
According to the multi-factor flame identification method suitable for the embedded platform, after a plurality of field video frames are obtained, the field video frames from each location are processed separately, so that the moving targets in each are extracted and the fire confirmed independently; that is, a distributed processing mode is adopted, avoiding the huge calculation load caused by centralized processing. At the same time, by processing the acquired field video frames, the method analyzes in real time whether the monitored environment contains fire information that may develop into a fire, obtains a quasi-fire area, confirms fire within that area, and further judges whether fire information exists in it; if so, the fire information is graded and an alarm generated. When applied to an existing intelligent fire alarm system, the method can solve the detection of fire information in large-space fire scenes and enlarge the detection range. Compared with a traditional sensor-type fire detector it has a shorter detection time and higher accuracy, and the acquired field video frames can be stored, facilitating subsequent investigation and evidence collection at the fire scene.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the application and together with the description serve to explain the application and not to limit the application. In the drawings:
FIG. 1 is a diagram of a conventional fire identification system;
FIG. 2 is a diagram of a multi-factor flame identification system for an embedded platform according to an embodiment;
FIG. 3 is a flow chart of fire information identification in an embodiment;
- FIG. 4 is an image of the standard normal distribution in an embodiment;
- FIG. 5 is a schematic diagram illustrating the process of extracting a moving object from the current live video frame with a Gaussian mixture model in the embodiment;
FIG. 6 is a flow chart of fire classification in an embodiment.
Detailed Description
In the following description, numerous implementation details are set forth in order to provide a thorough understanding of the present invention. It should be understood, however, that these implementation details should not be used to limit the application; in some embodiments of the present application, they are not necessary. In addition, some conventional structures and components are shown in simplified schematic form in the drawings.
In addition, the descriptions "first", "second", etc. in this application are for descriptive purposes only. They do not refer to an order or sequence, nor do they limit the application; they merely distinguish components or operations described in the same technical terms and are not to be construed as indicating relative importance or implying the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include at least one such feature. The technical solutions of the various embodiments may be combined with each other, provided the combination can be realized by a person skilled in the art; when the solutions are contradictory or cannot be realized, the combination should be considered not to exist and falls outside the protection scope of the present application.
The application provides a multi-factor flame identification method suitable for an embedded platform, comprising four stages: the first is a fire sample library construction stage, the second a moving-target extraction stage, the third a fire area confirmation stage, and the fourth a fire grade classification and early-warning stage. The application also specifically describes how to transplant the multi-factor flame identification method onto an embedded platform so that it can run on chips similar to the HiSilicon Hi3519A series. The multi-factor flame identification method applied to the embedded platform is described in detail below.
As shown in fig. 2, the multi-factor flame identification system suitable for the embedded platform comprises a plurality of video capture ends, a plurality of algorithm boxes, a switch, a server, a display end and an alarm end. Each video capture end is communicatively connected to one algorithm box, the algorithm boxes are communicatively connected to the switch, and the switch, the display end and the alarm end are each communicatively connected to the server. The video capture ends are arranged in the monitored areas and collect the field video frames of each monitored area, transmitting them to their corresponding algorithm boxes. Each algorithm box extracts the moving-target areas in the received field video frames to obtain quasi-fire areas and transmits them to the server. A fire sample library is established in the server; the server trains and tests on the pictures in the fire sample library to obtain an algorithm model, confirms fire in the quasi-fire areas according to that model, and judges whether fire information exists. The server then divides the fire grade according to the fire information and controls the alarm end to generate a corresponding alarm signal according to the divided fire grade. The multi-factor flame identification method suitable for the embedded platform is described in detail below.
According to the multi-factor flame identification method suitable for the embedded platform, before fire identification is performed, a fire sample library is established so that its pictures can be trained and tested in the BP neural network algorithm to obtain the BP neural network algorithm model. To make the confirmation of fire information more accurate, this example builds the fire sample library by combining network virtual pictures with real-life fire pictures, so that the library contains not only network fire pictures from the internet but also combustion experiment pictures obtained through real-life combustion experiments. The network fire pictures and combustion experiment pictures are acquired through the network and through manual simulation respectively, and the total number of pictures in the fire sample library is at least 100,000. The network fire pictures are made as rich as possible, covering fires in different scenes, and total 50,000 pictures. When obtaining the combustion experiment pictures, in order to make the pictures in the fire sample library more comprehensive, rich and real, different scenes (indoor and outdoor), combustion of different materials (beech, plastic, waste paper, fabric and natural gas) and different interference sources (sunlight, incandescent lamps, mosquito-repellent incense, cigarettes and yellow/red objects) are combined, fully covering common scenes in real life; 50,000 real-scene sample pictures are collected through a large number of combustion experiments to form the combustion experiment pictures, so that they are realistic.
The network fire pictures and the combustion experiment pictures together form a fire sample library of not fewer than 100,000 pictures. The composition of the pictures in the fire sample library is shown in table 1 below.
Table 1. Fire sample library picture sources
After construction of the fire sample library and the testing and training of its pictures are completed and the BP neural network algorithm model is obtained, identification of fire information begins. Please refer to fig. 3, the flow chart of fire information identification. When the video acquisition ends collect field video frames and transmit them to the algorithm boxes to which they are communicatively connected, each algorithm box extracts the moving targets in its field video frames to obtain the quasi-fire areas. To identify fire information in live video frames, the area where the fire occurs, i.e., the fire area, must be extracted. After a fire breaks out, because of the development of the fire and the effect of ambient air currents, both the fire area and the background image are in continuous motion. Therefore, to identify fire information, the moving targets must first be extracted from the acquired field video frames; the extracted moving targets constitute the quasi-fire area. Because the quasi-fire area still contains many non-fire moving targets besides the area where the fire occurs, accurate confirmation of fire information further requires rejecting the non-fire moving targets within it. In this example, the moving targets in the live video frame are extracted first and the fire is confirmed after the quasi-fire area is obtained, mainly based on the following two considerations: 1.
During fire development, the fire area and part of the background area are necessarily in motion. If the quasi-fire area composed of the moving targets extracted from the live video frame is M, it necessarily contains the real fire area N together with moving parts of the background, i.e. N ⊆ M. Through moving-target extraction, a quasi-fire area can be screened out, secondary confirmation can conveniently be performed on it, and the moving background portions removed to obtain the real fire area. 2. After the quasi-fire area is extracted from the field video frame, it becomes the research object, and the number of pixels of the corresponding image to be considered is reduced; the area requiring computation shrinks, the performance of the algorithm improves greatly, and the amount of computation is reduced.
Two methods are commonly used, both based on subtracting two field video frames and taking the difference image as the moving target: the frame difference method differences adjacent field video frames, while the background difference method differences the current field video frame with a background image. It follows that the establishment of the background image directly affects the extraction of the moving target. Background images are generally established in two categories. The first fixes the background image and differences the current live video frame with it to obtain the moving target; this method generally takes the first frame of the live video as the background image. In practice, however, the background image usually changes. For example, if a moving object was part of the original background image and the background image never changes, the object is treated as background and the extracted moving target is not ideal. Likewise, in real life the background changes slowly under natural factors (such as illumination brightness and natural wind), and the background image naturally changes with it; if the background image never changes, its error relative to the actual background slowly grows, causing a large error in the extracted moving target.
The second method of creating a background lets the background image change slowly with the environment, so that its error relative to the actual background environment stays small. To obtain a background image with adaptive capability, a background modeling algorithm is usually used. Such algorithms fall roughly into two types. The first stores the field video frames preceding the current moment and uses newly appearing data in them as samples, adding the samples to the background image according to some rule; examples are the median background modeling method and the mean background modeling method. The median background modeling method takes the median of the pixel values at corresponding positions in the stored field video frames as the pixel value at the corresponding position of the current background image; the mean background modeling method takes the average instead. The effect is comparatively ideal, but because the field video frames stored over a period of time serve as samples, the burden on server memory grows, the amount of data calculation increases, and the hardware requirements are high. The second type overcomes these drawbacks: it does not store field video frames as samples, but changes the original background image according to the current field video frame in a regression manner, as in the Kalman filter model, the single Gaussian model and the Gaussian mixture model. After repeated experimental comparison, this example adopts the Gaussian mixture model; how the moving target region is extracted with a Gaussian mixture model to obtain the quasi-fire area is described in detail below.
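The sample-storing category can be illustrated with a minimal median background model. This sketch (class name and toy data are illustrative, not from the patent) also makes the stated memory cost visible: the model must hold the last n frames.

```python
import numpy as np
from collections import deque

class MedianBackground:
    """Median background modeling as described: store the most recent n
    field video frames and take the per-pixel median as the background."""
    def __init__(self, n=3):
        self.frames = deque(maxlen=n)  # memory cost: n full frames

    def update(self, frame):
        self.frames.append(frame)
        return np.median(np.stack(self.frames), axis=0)

bg = MedianBackground(n=3)
for value in (10.0, 10.0, 200.0):          # one transient bright frame
    background = bg.update(np.full((2, 2), value))
print(background[0, 0])  # -> 10.0: the median rejects the transient frame
```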
In the embodiment, the method for extracting the moving target in the video frame to obtain the quasi-fire area comprises the steps of carrying out background modeling on the obtained field video frame by adopting a Gaussian mixture model modeling method; updating parameters in the Gaussian mixture model to obtain a background image; and subtracting the obtained background image from the field video frame, and extracting the moving target in the field video frame to obtain the quasi-fire area.
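The three steps of the embodiment can be sketched with a deliberately simplified per-pixel model: a single Gaussian per pixel instead of the mixture actually used, so the mean/variance update and the background subtraction stay short. The function name, parameters alpha and k, and the toy frame are all illustrative assumptions.

```python
import numpy as np

def segment_and_update(mean, var, frame, alpha=0.05, k=2.5):
    """Simplified background model: one Gaussian per pixel (the patent uses
    a mixture). Pixels farther than k standard deviations from the mean are
    foreground; background pixels update the model by regression."""
    foreground = np.abs(frame - mean) > k * np.sqrt(var)
    bg = ~foreground
    # Regression-style parameter update: only background pixels adapt.
    mean[bg] = (1 - alpha) * mean[bg] + alpha * frame[bg]
    var[bg] = (1 - alpha) * var[bg] + alpha * (frame[bg] - mean[bg]) ** 2
    return foreground

mean = np.full((4, 4), 100.0)   # learned background intensity
var = np.full((4, 4), 4.0)      # learned background variance
frame = mean.copy()
frame[1, 2] = 200.0             # a bright moving target appears
mask = segment_and_update(mean, var, frame)
print(int(mask.sum()))  # -> 1: only the moving pixel is foreground
```

Subtracting the background amounts to the mask computation; the connected foreground pixels would then form the quasi-fire area handed to the confirmation stage.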
If a random variable X obeys a Gaussian distribution with mathematical expectation μ and variance σ², written N(μ, σ²), then in its probability density function the expectation μ determines the location of the distribution and the standard deviation σ determines the spread of X. The commonly cited standard normal distribution is the case μ = 0 and σ = 1. Fig. 4 shows the standard normal distribution image.
When there is no moving object in the environment, counting the pixel values at the same position at different moments shows that they follow a single Gaussian distribution. The actual environment, however, is usually influenced by external factors such as illumination and wind, and a single Gaussian distribution cannot fit the distribution of pixel values; therefore the statistics of the pixel values at one position are described by combining several Gaussian distributions with different weights, i.e. the Gaussian mixture model mentioned in this example. The more Gaussian models used, the more complicated the background that can be described and the higher the accuracy, but at the cost of a larger amount of data calculation. To achieve a satisfactory effect while keeping the computer hardware requirements reasonable, the number of Gaussian models in general engineering is preferably 3 to 5.
Suppose the pixel value of pixel i in the image at time t is x_it. Its probability density function is then:

P(x_it) = Σ_{j=1}^{k} W_{j,t} · η(x_it, u_{j,t}, σ²_{j,t})

where W_{j,t} is the weight of the jth Gaussian model of pixel i at time t (the larger W, the closer that Gaussian model is to the current pixel value), k is the number of Gaussian models, and Σ_{j=1}^{k} W_{j,t} = 1, i.e. the weights of all Gaussian models used to model one pixel sum to 1. η(x_it, u_{j,t}, σ²_{j,t}) denotes the jth single Gaussian model describing pixel i at time t, with mean u_{j,t} and variance σ²_{j,t}. In the Gaussian model's modeling algorithm, the required effect is achieved mainly by adjusting the mean and the variance, so the method for updating these two parameters is very important; the specific update method is introduced later. When the mixture model is used for background modeling, the k single Gaussian models describing the same pixel point must be sorted by their degree of similarity to the current pixel: a higher weight W means higher similarity, and a smaller σ means that group of pixel values changes little and is more stable. The ratio W/σ can therefore be used to describe this similarity: the larger W/σ, the higher the similarity and the more likely the pixel point belongs to the background image. The Gaussian models are sorted by W/σ from large to small; a moving target usually has low similarity to the existing models, whereas background pixel points change little and have high similarity.
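The mixture density and the W/σ similarity ordering described above can be sketched as follows (illustrative Python with names of our own choosing; the patent itself provides no code):

```python
import math

def mixture_pdf(x, weights, means, sigmas):
    """P(x) = sum_j w_j * eta(x; mu_j, sigma_j^2); the weights must sum to 1."""
    assert abs(sum(weights) - 1.0) < 1e-6
    p = 0.0
    for w, mu, s in zip(weights, means, sigmas):
        p += w * math.exp(-((x - mu) ** 2) / (2.0 * s * s)) / (s * math.sqrt(2.0 * math.pi))
    return p

def sort_models(weights, means, sigmas):
    """Sort the k models by the similarity measure w/sigma, largest first,
    so the most stable, background-like models come first."""
    order = sorted(range(len(weights)), key=lambda j: weights[j] / sigmas[j], reverse=True)
    return ([weights[j] for j in order],
            [means[j] for j in order],
            [sigmas[j] for j in order])
```

A heavy, low-variance component (large W/σ) sorts to the front, matching the intuition that stable pixel values describe the background.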
A threshold T can thus be defined: if the cumulative weight of the first d sorted Gaussian models is the first to reach or exceed T, the first d models are used as the background subset and the remaining k−d models as the foreground (motion) subset. The value of T directly affects the quality of the extracted moving foreground: when T is small, d is small and the subset describing the background image becomes too simple, so T is generally set to 0.75.
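Selecting the background subset with the threshold T can be sketched as (again illustrative, not from the patent):

```python
def background_subset_size(sorted_weights, T=0.75):
    """Return d: the smallest number of leading models (already sorted by
    w/sigma, largest first) whose cumulative weight reaches T; those d
    models form the background subset."""
    cumulative = 0.0
    for d, w in enumerate(sorted_weights, start=1):
        cumulative += w
        if cumulative >= T:
            return d
    return len(sorted_weights)  # degenerate case: keep everything
```

For example, with sorted weights [0.5, 0.3, 0.2] and T = 0.75, the first two models form the background subset and the third is treated as foreground motion.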
Next, the update method for each parameter of the Gaussian mixture model is described in detail, so that the background image can be identified accurately as the model is updated. Before the parameters are updated, it must be determined which Gaussian model the pixel is most similar to: pixel x_it is generally considered to match the jth model if |x_it − u_{j,t}| ≤ 2.5σ_{j,t} (the match threshold is typically taken as 2.5). If x_it matches the jth Gaussian model, the parameters of that model are updated by the following equations:

W_{j,t+1} = (1 − α)W_{j,t} + αM_{j,t}

ρ_{j,t} = α · η(x_it, u_{j,t}, σ²_{j,t})

u_{j,t+1} = (1 − ρ_{j,t})u_{j,t} + ρ_{j,t}x_it

σ²_{j,t+1} = (1 − ρ_{j,t})σ²_{j,t} + ρ_{j,t}(x_it − u_{j,t+1})²

where α is the learning rate and M_{j,t} is 1 for the matched model and 0 for the others.
Apart from the matched model, whose parameters are updated as above, the other Gaussian models remain unchanged. Although the Gaussian mixture model is complex and computationally heavy, it extracts moving objects well and is therefore widely used. As shown in fig. 5, which is a schematic diagram of the process of extracting a moving object from the current field video frame with the Gaussian mixture model: background modeling is performed on the sample field video frames according to the Gaussian mixture background modeling method, and the current field video frame is subtracted from the current background image to obtain the moving-target foreground. That is, after a field video frame is obtained, the background image is obtained by Gaussian mixture modeling, and subtracting the background image from the field video frame yields the moving target in the current frame, i.e. the quasi-fire area.
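A minimal single-pixel sketch of the matching and update step described by the equations above (the learning rate α, the in-place list interface, and all names are our assumptions, not the patent's implementation):

```python
import math

def update_pixel(x, weights, means, variances, alpha=0.01, match_sigmas=2.5):
    """One update step for a single pixel's k Gaussian models.

    A model j matches when |x - u_j| <= 2.5 * sigma_j; the matched model's
    mean and variance are pulled toward x, every weight is blended with the
    match indicator M, and the weights are renormalized to sum to 1.
    Returns True if some model matched (the pixel is explained by an
    existing model, i.e. a background candidate)."""
    matched = None
    for j in range(len(weights)):
        if abs(x - means[j]) <= match_sigmas * math.sqrt(variances[j]):
            matched = j
            break
    for j in range(len(weights)):
        m = 1.0 if j == matched else 0.0
        weights[j] = (1.0 - alpha) * weights[j] + alpha * m
    if matched is not None:
        j = matched
        # rho = alpha * eta(x; u_j, sigma_j^2)
        rho = alpha * math.exp(-((x - means[j]) ** 2) / (2.0 * variances[j])) \
              / math.sqrt(2.0 * math.pi * variances[j])
        means[j] = (1.0 - rho) * means[j] + rho * x
        variances[j] = (1.0 - rho) * variances[j] + rho * (x - means[j]) ** 2
    total = sum(weights)
    weights[:] = [w / total for w in weights]
    return matched is not None
```

Running this per pixel per frame and thresholding the unmatched pixels yields the moving-target foreground that the background subtraction step then turns into quasi-fire areas.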
Referring back to fig. 3, after the quasi-fire area is obtained, fire confirmation must be performed on it: whether fire information exists in the quasi-fire area is judged, i.e. a secondary confirmation of the quasi-fire areas screened out by the moving-target extraction algorithm. In this example, three artificial-intelligence algorithms are used jointly at this stage: a BP neural network algorithm based on artificial feature engineering, an SSD algorithm based on a deep convolutional neural network, and a Yolo algorithm based on a deep convolutional neural network. The fire-area confirmation determines whether fire information exists in the current live video frame. That is, in this example the BP neural network algorithm, the SSD algorithm, and the Yolo algorithm are respectively used to detect one or more quasi-fire areas, wherein, when the BP neural network algorithm is used, network fire pictures and combustion experiment pictures are extracted from the fire sample library to form the BP neural network training set; fire confidences are output according to the detections; and whether fire information exists is judged from the fire confidences. How the BP neural network, SSD, and Yolo algorithms are used for fire confirmation is described in detail below.
A) BP neural network algorithm based on artificial feature engineering. Three features are extracted and used as the input of the BP neural network: the regional curvature of the flame, the regional diffusivity of the flame, and the sharp-angle change rate of the flame. As shown in table 2, the training set of the BP neural network randomly extracts at least 50% of the pictures from the fire sample library composed of network fire pictures and combustion experiment pictures; when doing so, at least half of the network fire pictures and at least half of the combustion experiment pictures are extracted. A four-layer BP network is designed: an input layer of 3 units, two hidden layers of 10 units each, and an output layer of one unit that outputs the BP network fire confidence P, with P ∈ [0,1], where P = 0 means no fire information and P = 1 means fire information is present; the larger P, the higher the confidence. A large number of engineering experiments show that the probability of a missed judgment in the fire confidence P output by the BP neural network is extremely low, stable at 0.01%; false judgments do occur, however, with a probability stable at 2%. In this application the judgment of the BP neural network algorithm is only one factor, and its extremely low missed-judgment rate makes it useful as an auxiliary judgment module.
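A forward pass through a 3-10-10-1 network of the kind described can be sketched as follows (untrained, randomly initialized weights purely for illustration; the patent specifies only the layer sizes, and the sigmoid activation is our assumption):

```python
import math
import random

def _dense_sigmoid(inputs, weights, biases):
    """Fully connected layer followed by a sigmoid activation."""
    return [1.0 / (1.0 + math.exp(-(sum(w * x for w, x in zip(row, inputs)) + b)))
            for row, b in zip(weights, biases)]

class FireBPNet:
    """Four-layer BP network: 3 inputs -> 10 -> 10 -> 1 output confidence."""

    def __init__(self, seed=0):
        rnd = random.Random(seed)
        sizes = [3, 10, 10, 1]
        self.layers = []
        for n_in, n_out in zip(sizes, sizes[1:]):
            w = [[rnd.uniform(-1.0, 1.0) for _ in range(n_in)] for _ in range(n_out)]
            b = [rnd.uniform(-1.0, 1.0) for _ in range(n_out)]
            self.layers.append((w, b))

    def confidence(self, features):
        """features = (curvature, diffusivity, sharp-angle change rate)."""
        out = list(features)
        for w, b in self.layers:
            out = _dense_sigmoid(out, w, b)
        return out[0]  # P in (0, 1): larger means more likely fire
```

A real deployment would of course train these weights with backpropagation on the table 2 training set; this sketch only fixes the shape of the computation.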
Table 2: composition of the BP neural network training set.

B) Large-target detection based on the SSD algorithm and small-target detection based on the Yolo algorithm. Both the SSD algorithm and the Yolo algorithm are target detection algorithms based on deep convolutional neural networks. A large number of comparative experiments in this application show that the SSD algorithm is sensitive to large fire targets but easily ignores small ones (flames in the smoldering or just-ignited stage); the Yolo algorithm is exactly the opposite, sensitive to small fire targets but prone to missing large ones. This application therefore uses the SSD algorithm for large-target detection and the Yolo algorithm for small-target detection. The SSD fire confidence P_A and the Yolo fire confidence P_B are each combined with the BP network fire confidence P: large fire information is judged to be present when P > 0.8 and P_A > 0.8, and small fire information, i.e. the ignition stage, when P > 0.6 and P_B > 0.7. The threshold on P depends on which of P_A and P_B is involved: 0.8 for the large-fire decision and 0.6 for the small-fire decision; the thresholds on P_A and P_B were likewise obtained through engineering experiments.
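The threshold fusion just described can be written directly (a sketch using the thresholds quoted in the text; the function name and the precedence of the large-fire rule over the small-fire rule are our assumptions):

```python
def judge_fire(p_bp, p_ssd, p_yolo):
    """Multi-factor decision from the quoted thresholds:
    large fire if P > 0.8 and P_A > 0.8; small fire if P > 0.6 and P_B > 0.7."""
    if p_bp > 0.8 and p_ssd > 0.8:
        return "large"
    if p_bp > 0.6 and p_yolo > 0.7:
        return "small"
    return "none"
```

Because the BP confidence P gates both branches, a quasi-fire area that only one detector flags is rejected, which is what gives the scheme its anti-interference property.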
After the confirmation of the fire information is completed, the fire information is classified into fire grades, and a warning signal is generated according to the grade. In this embodiment the BP neural network algorithm, the SSD algorithm, and the Yolo algorithm are integrated as multiple factors to jointly determine whether fire information exists; the fire information is then distinguished into large and small fire targets, which are further graded at this stage.
As shown in fig. 5, which is a fire classification flowchart, four fire grades are designed in this example. If the fire information is judged to be small fire information and small fire information appears in more than 15 and fewer than 30 consecutive field video frames, a level-0 fire early warning is generated; this level is the ignition stage and temporarily causes no property or personnel loss. If the fire information is judged to be large fire information and large fire information appears in more than 30 and fewer than 60 consecutive field video frames, a level-1 fire early warning is generated; this level is the initial development stage of the fire. If large fire information appears in more than 60 and fewer than 90 consecutive field video frames, a level-2 fire early warning is generated; this level is the rapid development stage of the fire. If large fire information appears in more than 90 consecutive field video frames, a level-3 fire early warning is generated; by the time the situation develops to this level, a certain degree of property loss has occurred and emergency fire rescue is needed.
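The grading rules above can be sketched as a small lookup (the frame-count bounds are taken from the text; reading "more than 15 and fewer than 30 frames" as strict inequalities is our assumption):

```python
def fire_warning_level(kind, run_length):
    """Map fire type ("small" or "large") and run length (consecutive frames
    containing that fire information) to the four warning levels; returns
    None while the run is still too short to grade."""
    if kind == "small" and 15 < run_length < 30:
        return 0  # ignition stage: no property or personnel loss yet
    if kind == "large":
        if 30 < run_length < 60:
            return 1  # initial development stage
        if 60 < run_length < 90:
            return 2  # rapid development stage
        if run_length > 90:
            return 3  # emergency fire rescue needed
    return None
```

Counting consecutive frames rather than reacting to single detections suppresses one-frame false positives from the detectors.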
When the multi-factor flame identification method suitable for the embedded platform is ported to HiSilicon Hi3519A series chips, the porting steps comprise: setting up NFS, connecting the HiSilicon development board through a serial port, installing the HiSilicon cross compiler, cross-compiling opencv3.4.5, cross-compiling ncnn, compiling the project, and outputting a static library for the arm architecture. Setting up NFS comprises installing the NFS service, writing the configuration file, and restarting the NFS service. Connecting the development board through a serial port generally comprises: connecting a serial cable to the board, installing the driver manually or automatically, checking the port number under computer management > device manager > ports, and accessing the serial port from the PC with SecureCRT at a baud rate of 115200; the development board and the virtual machine are then connected and share a directory, and if the board has not been assigned an IP address it can be configured manually. When installing the HiSilicon cross compiler, the host is 64-bit while the cross compiler targets a 32-bit development board, so the dependency packages must be supplemented; the installation script under the installation package is then executed directly, and the installation is tested for success.
Compared with the traditional fire identification, the multi-factor flame identification method applicable to the embedded platform has the following advantages:
1) When the BP neural network algorithm is used for fire-information identification, the fire sample library behind the BP training set contains on the order of 100,000 pictures, including network pictures, with real-scene data accounting for 50%; the pictures are varied and rich. The fire picture sample libraries used by existing schemes are usually crawled only from the internet; such samples cover a single type of scene, mostly severe fire scenes, and lack sample data from the ignition or smoldering stage. Their picture libraries are also relatively small (under 10,000 pictures), so the resulting algorithm models and studies of fire characteristics are not comprehensive enough, the robustness and transferability of the algorithms are weak, and although performance on the test set is very good, the recognition rate in real scenes is low. In this application, pictures are collected through three channels, namely the internet, international open-source libraries, and combustion experiments, building a large training set that covers common life scenes and providing an important guarantee for the training effect of the algorithm.
2) A multi-factor decision scheme is formed by combining a BP neural network algorithm based on artificially designed feature engineering with the SSD and YOLO algorithms, which are true deep convolutional neural networks, to jointly judge fire information; the method has strong anti-interference capability and robustness. Current mainstream schemes use traditional digital image processing, in which the manually designed feature dimensions can hardly represent the characteristics of all fires; for example, the flame diffusion rate differs greatly between the different development stages of a fire, so such algorithms have weak anti-interference capability and are easily affected by strong light, weak light, and especially night light. The multi-factor flame identification method suitable for the embedded platform integrates the traditional digital image processing scheme with deep-learning-based target detection into a multi-factor fire recognition scheme, designing different algorithms for large and small fire targets; this greatly improves the anti-interference capability of fire recognition while guaranteeing accuracy, which experiments show can be stabilized at 99.5%.
3) The algorithm has low complexity, reduces the consumption of computing resources, and is suitable for running on a terminal platform. Deep-learning fire detection algorithms are much more accurate than traditional digital image processing algorithms, but the improvement is bought at the cost of performance: current mainstream deep-learning fire recognition models are trained with general-purpose target detection algorithms, contain large numbers of neurons, and usually run in the back end on high-performance CPU or GPU servers. The video streams of multiple cameras must be sent to the same algorithm server for unified processing, so one server handles many video signals simultaneously, the computing pressure is huge, and the hardware cost is high; server processing time plus signal transmission time also leads to poor real-time performance. In this application, the field video frame is first downscaled and the moving target extracted to obtain the quasi-fire area, which reduces the amount of computation in the fire detection stage. In addition, the algorithm is converted to the lightweight ncnn framework and ported to the arm architecture, so that it can run on arm-based embedded terminal platforms, improving the applicability of the algorithm and reducing hardware cost.
The above description is only an embodiment of the present application, and is not intended to limit the present application. Various modifications and changes may occur to those skilled in the art. Any modification, equivalent replacement, improvement or the like made within the spirit and principle of the present application shall be included in the scope of the claims of the present application.
Claims (10)
1. A multi-factor flame identification method suitable for an embedded platform is characterized by comprising the following steps:
establishing a fire sample library, wherein the fire sample library is from network fire pictures and combustion experiment pictures;
respectively acquiring a plurality of field video frames;
respectively extracting a moving target in each site video frame to obtain one or more quasi-fire areas;
respectively confirming fire in one or more quasi fire areas according to the fire sample library, and judging whether fire information exists in the one or more quasi fire areas;
and if so, dividing the fire hazard information into fire hazard grades, and generating corresponding alarm signals according to the divided fire hazard grades.
2. The multi-factor flame identification method applicable to the embedded platform according to claim 1, wherein the performing fire identification on one or more quasi-fire areas according to the fire sample library and determining whether fire information exists in the one or more quasi-fire areas comprises:
respectively adopting a BP neural network algorithm, an SSD algorithm and a Yolo algorithm to detect one or more quasi-fire areas, wherein when the BP neural network algorithm is adopted to detect the fire areas, network fire pictures and combustion experiment pictures in the fire sample library are extracted to form a BP neural network training set;
respectively outputting fire confidence degrees according to the detection;
and judging whether fire information exists according to the fire confidence coefficient.
3. The multi-factor flame recognition method applicable to the embedded platform according to claim 2, wherein when the BP neural network algorithm is adopted to detect one or more quasi-fire areas, at least half of network fire pictures and at least half of combustion experiment pictures contained in a fire sample library are extracted to form a BP neural network training set.
4. The multi-factor flame identification method suitable for the embedded platform according to claim 2, wherein the determining whether the fire information exists according to the fire confidence coefficient comprises: when the BP neural network algorithm is adopted for detection, the fire confidence coefficient P of the BP network is output, wherein P belongs to [0,1], and whether fire information exists or not is judged according to the fire confidence coefficient P.
5. The multi-factor flame identification method suitable for the embedded platform according to claim 4, wherein the SSD algorithm and the Yolo algorithm are respectively adopted to detect the fire area, and SSD fire confidence level P _ A and Yolo fire confidence level P _ B are respectively output, if P >0.8 and P _ A >0.8, the fire information is big fire information; if P >0.6 and P _ B >0.7, the fire information is small fire information.
6. The multi-factor flame identification method for embedded platforms as claimed in claim 5, wherein the fire rating of the fire information and the generation of the corresponding alarm signal according to the rated fire rating comprises: if the fire information is judged to be large fire information, acquiring continuous field video frames more than 30 frames and less than 60 frames, and dividing the fire information into a first-level fire early warning and generating a first-level warning signal when the large fire information is generated in the continuous field video frames more than 30 frames and less than 60 frames.
7. The multi-factor flame identification method for embedded platforms as claimed in claim 5, wherein the fire rating of the fire information and the generation of the corresponding alarm signal according to the rated fire rating comprises: if the fire information is judged to be large fire information, acquiring continuous field video frames of more than 60 frames and less than 90 frames, and when the large fire information appears in the continuous field video frames of more than 60 frames and less than 90 frames, dividing the fire information into a two-level fire early warning and generating a two-level warning signal.
8. The multi-factor flame identification method for embedded platforms as claimed in claim 5, wherein the fire rating of the fire information and the generation of the corresponding alarm signal according to the rated fire rating comprises: if the fire information is judged to be large fire information, acquiring more than 90 continuous field video frames, and if the large fire information appears in the more than 90 continuous field video frames, dividing the fire information into a three-level fire early warning and generating a three-level alarm signal.
9. The multi-factor flame identification method for embedded platforms as claimed in claim 5, wherein the fire rating of the fire information and the generation of the corresponding alarm signal according to the rated fire rating comprises: if the fire information is judged to be small fire information, acquiring continuous field video frames of more than 15 frames and less than 30 frames, and if the small fire information appears in the continuous field video frames of more than 15 frames and less than 30 frames, dividing the fire information into a level-0 fire early warning and generating a level-0 warning signal.
10. The multi-factor flame identification method suitable for the embedded platform according to any one of claims 1-9, wherein the extracting of the moving object in the video frame to obtain the quasi-fire area comprises:
performing background modeling on the acquired field video frame by adopting a Gaussian mixture model modeling method;
updating parameters in the Gaussian mixture model to obtain a background image;
and subtracting the obtained background image from the field video frame, and extracting the moving target in the field video frame to obtain the quasi-fire area.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910916354.3A CN110648490B (en) | 2019-09-26 | 2019-09-26 | Multi-factor flame identification method suitable for embedded platform |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910916354.3A CN110648490B (en) | 2019-09-26 | 2019-09-26 | Multi-factor flame identification method suitable for embedded platform |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110648490A true CN110648490A (en) | 2020-01-03 |
CN110648490B CN110648490B (en) | 2021-07-27 |
Family
ID=69011420
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910916354.3A Active CN110648490B (en) | 2019-09-26 | 2019-09-26 | Multi-factor flame identification method suitable for embedded platform |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110648490B (en) |
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111242053A (en) * | 2020-01-16 | 2020-06-05 | 国网山西省电力公司电力科学研究院 | Power transmission line flame detection method and system |
CN111414514A (en) * | 2020-03-19 | 2020-07-14 | 山东雷火网络科技有限公司 | System and method for flame detection based on Shandong Jinnan province |
CN111681385A (en) * | 2020-05-12 | 2020-09-18 | 上海荷福人工智能科技(集团)有限公司 | Fire-fighting classification early-warning algorithm based on artificial intelligence and fire detection system |
CN112150750A (en) * | 2020-08-25 | 2020-12-29 | 航天信德智图(北京)科技有限公司 | Forest fire alarm monitoring system based on edge calculation |
CN112907886A (en) * | 2021-02-07 | 2021-06-04 | 中国石油化工股份有限公司 | Refinery plant fire identification method based on convolutional neural network |
CN112947147A (en) * | 2021-01-27 | 2021-06-11 | 上海大学 | Fire-fighting robot based on multi-sensor and cloud platform algorithm |
CN115376268A (en) * | 2022-10-21 | 2022-11-22 | 山东太平天下智慧科技有限公司 | Monitoring alarm fire-fighting linkage system based on image recognition |
Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2001356047A (en) * | 2000-06-14 | 2001-12-26 | Hochiki Corp | Flame detector and method for setting its detection sensitivity |
US20120179421A1 (en) * | 2010-12-07 | 2012-07-12 | Gautam Dasgupta | Emergency Response Management Apparatuses, Methods and Systems |
CN103150856A (en) * | 2013-02-28 | 2013-06-12 | 江苏润仪仪表有限公司 | Fire flame video monitoring and early warning system and fire flame detection method |
CN105336085A (en) * | 2015-09-02 | 2016-02-17 | 华南师范大学 | Remote large-space fire monitoring alarm method based on image processing technology |
US20170363475A1 (en) * | 2014-01-23 | 2017-12-21 | General Monitors, Inc. | Multi-spectral flame detector with radiant energy estimation |
CN107862287A (en) * | 2017-11-08 | 2018-03-30 | 吉林大学 | A kind of front zonule object identification and vehicle early warning method |
CN108108695A (en) * | 2017-12-22 | 2018-06-01 | 湖南源信光电科技股份有限公司 | Fire defector recognition methods based on Infrared video image |
CN109800802A (en) * | 2019-01-10 | 2019-05-24 | 深圳绿米联创科技有限公司 | Visual sensor and object detecting method and device applied to visual sensor |
CN110163889A (en) * | 2018-10-15 | 2019-08-23 | 腾讯科技(深圳)有限公司 | Method for tracking target, target tracker, target following equipment |
CN110378265A (en) * | 2019-07-08 | 2019-10-25 | 创新奇智(成都)科技有限公司 | A kind of incipient fire detection method, computer-readable medium and system |
Non-Patent Citations (2)
Title |
---|
孙琛: "基于视频图像的火灾检测算法研究与设计", 《中国优秀硕士学位论文全文数据库工程科技Ⅰ辑》 * |
熊爱民,温佳文,何远静: "《基于图像模式识别技术的大空间火灾报警系统设计》", 《电子科学与技术》 * |
Also Published As
Publication number | Publication date |
---|---|
CN110648490B (en) | 2021-07-27 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110648490B (en) | Multi-factor flame identification method suitable for embedded platform | |
Zhang et al. | Wildland forest fire smoke detection based on faster R-CNN using synthetic smoke images | |
CN109815904B (en) | Fire identification method based on convolutional neural network | |
CN109389185B (en) | Video smoke identification method using three-dimensional convolutional neural network | |
CN111723654A (en) | High-altitude parabolic detection method and device based on background modeling, YOLOv3 and self-optimization | |
CN101334924A (en) | Fire hazard probe system and its fire hazard detection method | |
CN111401418A (en) | Employee dressing specification detection method based on improved Faster r-cnn | |
CN108389359A (en) | A kind of Urban Fires alarm method based on deep learning | |
CN110674790A (en) | Abnormal scene processing method and system in video monitoring | |
CN114155457A (en) | Control method and control device based on flame dynamic identification | |
CN115690615B (en) | Video stream-oriented deep learning target recognition method and system | |
CN114494944A (en) | Method, device, equipment and storage medium for determining fire hazard level | |
CN112101572A (en) | Model optimization method, device, equipment and medium | |
CN111862065A (en) | Power transmission line diagnosis method and system based on multitask deep convolutional neural network | |
CN113516102A (en) | Deep learning parabolic behavior detection method based on video | |
CN109685823A (en) | A kind of method for tracking target based on depth forest | |
CN113343123A (en) | Training method and detection method for generating confrontation multiple relation graph network | |
CN115083229B (en) | Intelligent recognition and warning system of flight training equipment based on AI visual recognition | |
CN116563762A (en) | Fire detection method, system, medium, equipment and terminal for oil and gas station | |
CN115909196A (en) | Video flame detection method and system | |
CN114359716A (en) | Multi-remote-sensing fire index automatic integration-based burned area mapping method | |
CN114998637A (en) | Improved YOLOv 4-based fire and smoke target detection method, equipment and storage medium | |
CN114419558A (en) | Fire video image identification method, fire video image identification system, computer equipment and storage medium | |
CN113920450A (en) | Method and device for identifying insulator RTV coating based on intrinsic image decomposition | |
CN111274894A (en) | Improved YOLOv 3-based method for detecting on-duty state of personnel |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||