CN112001375B - Flame detection method and device, electronic equipment and storage medium - Google Patents
Flame detection method and device, electronic equipment and storage medium
- Publication number
- CN112001375B (application CN202011176496.XA)
- Authority
- CN
- China
- Prior art keywords
- flame
- video data
- image
- area
- neural network
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/40—Scenes; Scene-specific elements in video content
- G06V20/41—Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
- G06V20/42—Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items of sport video content
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/25—Fusion techniques
- G06F18/253—Fusion techniques of extracted features
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/22—Image preprocessing by selection of a specific region containing or referencing a pattern; Locating or processing of specific regions to guide the detection or recognition
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/40—Scenes; Scene-specific elements in video content
- G06V20/46—Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/40—Scenes; Scene-specific elements in video content
- G06V20/49—Segmenting video sequences, i.e. computational techniques such as parsing or cutting the sequence, low-level clustering or determining units such as shots or scenes
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Data Mining & Analysis (AREA)
- Multimedia (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Evolutionary Biology (AREA)
- Evolutionary Computation (AREA)
- Computer Vision & Pattern Recognition (AREA)
- General Engineering & Computer Science (AREA)
- Bioinformatics & Computational Biology (AREA)
- Artificial Intelligence (AREA)
- Life Sciences & Earth Sciences (AREA)
- Computing Systems (AREA)
- Computational Linguistics (AREA)
- Software Systems (AREA)
- Image Analysis (AREA)
Abstract
The application provides a flame detection method, a flame detection device, an electronic device and a storage medium, wherein the method comprises the following steps: acquiring an image frame to be detected in video data; predicting a position area where a flame is located in the image frame to be detected by using a neural network model; intercepting corresponding area images from image frames of the video data according to the position area; extracting motion modal characteristics of the intercepted plurality of area images by using the neural network model; and predicting the motion modal characteristics by using the neural network model to obtain a detection result of whether a real flame exists in the video data. In the implementation process, the flame is predicted according to the motion modal characteristics extracted from the area images at the flame position in multiple frames of the video, and the appearance characteristics and the multi-frame dynamic motion characteristic information of the flame in the motion process are effectively utilized, so that the accuracy of flame detection is improved.
Description
Technical Field
The present application relates to the technical field of machine learning, target detection, and video processing, and in particular, to a flame detection method, apparatus, electronic device, and storage medium.
Background
At present, the main method for flame detection is to acquire a high-definition image with a high-definition camera and perform flame detection on the high-definition image by using conventional image processing technology, for example, by filtering the pixel points in the high-definition image in the color, time and space domains. In practice, it is found that flame detection based on conventional image processing technology is easily interfered with by other information in the environment; for example, light from car lights, street lamps and reflections is easily mistaken for flame, so the accuracy of flame detection using the conventional flame detection technology is low.
Disclosure of Invention
An object of the embodiments of the present application is to provide a flame detection method, a flame detection device, an electronic device, and a storage medium, which are used to solve the problem of low accuracy in detecting flames.
The embodiment of the application provides a flame detection method, which comprises the following steps: acquiring an image frame to be detected in video data; predicting a position area where a flame is located in the image frame to be detected by using a neural network model; intercepting corresponding area images from image frames of the video data according to the position area; extracting motion modal characteristics of the intercepted plurality of area images by using the neural network model, wherein the plurality of area images are obtained by intercepting the same area of a plurality of image frames of the video data; and predicting the motion modal characteristics by using the neural network model to obtain a detection result of whether a real flame exists in the video data. In the implementation process, the flame is predicted according to the motion modal characteristics extracted from the area images at the flame position in multiple frames of the video, and the appearance characteristics and the multi-frame dynamic motion characteristic information of the flame in the motion process are effectively utilized, so that the accuracy of flame detection is improved.
Optionally, in an embodiment of the present application, the neural network model includes: a target detection module; predicting the position area of the flame in the image frame to be detected by using a neural network model, wherein the method comprises the following steps: predicting a candidate region where flames in an image frame to be detected are located and the prediction probability of the flames in the candidate region by using a target detection module; and if the prediction probability is greater than a preset threshold value, determining the candidate region as a position region. In the implementation process, a target detection module is used for predicting a candidate region where flame is located in an image frame to be detected and the prediction probability of flame existing in the candidate region; if the prediction probability is larger than a preset threshold value, determining the candidate area as a position area; therefore, flame detection on the non-flame candidate region is avoided, and the accuracy of flame detection is effectively improved.
Optionally, in this embodiment of the present application, intercepting a corresponding area image from an image frame of the video data according to the position area includes: acquiring, from the video data, a plurality of image frames after the acquisition time of the image frame to be detected, and intercepting the image frames according to the position area; or acquiring, from the video data, a plurality of image frames whose similarity with the image frame to be detected is greater than a preset threshold, and intercepting the plurality of image frames according to the position area. In the implementation process, a plurality of image frames after the acquisition time of the image frame to be detected, or a plurality of image frames whose similarity with the image frame to be detected is greater than a preset threshold, are acquired from the video data and intercepted according to the position area; the dynamic motion characteristic information in the multi-frame images is thus effectively utilized instead of determining whether a flame exists in the video data from a single frame image, so that the accuracy of detecting the flame is improved.
Optionally, in an embodiment of the present application, the neural network model includes a first feature extraction module and a second feature extraction module, and the motion modal features include superposition modal features and splicing modal features; extracting the motion modal features of the intercepted plurality of area images using the neural network model includes: superposing the plurality of area images by using the first feature extraction module to obtain a superposition area image, and extracting the superposition modal features in the superposition area image; and splicing the plurality of area images by using the second feature extraction module to obtain a spliced area image, and extracting the splicing modal features in the spliced area image.
In the implementation process, the plurality of area images are superposed by using the first feature extraction module to obtain a superposition area image, and the superposition modal features in the superposition area image are extracted; the plurality of area images are spliced by using the second feature extraction module to obtain a spliced area image, and the splicing modal features in the spliced area image are extracted; the superposition modal features and the splicing modal features are thus both effectively utilized, so that the accuracy of flame detection is improved.
Optionally, in this embodiment of the present application, the neural network model further includes: the system comprises a feature fusion network module and a classification network module; predicting motion modality features using a neural network model, comprising: fusing the superposition modal characteristics and the splicing modal characteristics by using a characteristic fusion network module to obtain fusion characteristics; and performing classification prediction on the fusion features by using a classification network module. In the implementation process, the superposition modal characteristics and the splicing modal characteristics are fused by using a characteristic fusion network module to obtain fusion characteristics; classifying and predicting the fusion characteristics by using a classification network module; the superposition modal characteristics and the splicing modal characteristics are effectively utilized, so that the accuracy of flame detection is improved.
Optionally, in this embodiment of the present application, after obtaining a detection result of whether there is a real flame in the video data, the method further includes: if flame exists in the video data, the position coordinate of the flame is predicted according to the acquisition coordinate when the video data is acquired and the orientation angle of the camera, and the position coordinate is marked on the map. In the implementation process, if flame exists in the video data, the position coordinate of the flame is predicted according to the acquisition coordinate when the video data is acquired and the orientation angle of the camera, and the position coordinate is marked on a map; therefore, the position of the fire on the map is clearly and visually marked, and people can conveniently take measures in time to extinguish the fire.
Optionally, in this embodiment of the present application, after obtaining a detection result of whether there is a real flame in the video data, the method further includes: and if flame exists in the video data, generating and outputting early warning information.
An embodiment of the present application further provides a flame detection device, including: a to-be-detected frame acquiring module, used for acquiring an image frame to be detected in video data; a position area prediction module, used for predicting a position area where a flame is located in the image frame to be detected by using a neural network model; an area image intercepting module, used for intercepting a corresponding area image from an image frame of the video data according to the position area; a modal feature extraction module, used for extracting motion modal features of the intercepted plurality of area images by using the neural network model, wherein the plurality of area images are obtained by intercepting the same area of a plurality of image frames of the video data; and a detection result obtaining module, used for predicting the motion modal features by using the neural network model to obtain a detection result of whether a real flame exists in the video data.
Optionally, in an embodiment of the present application, the neural network model includes: a target detection module; a location area prediction module comprising: the region probability prediction module is used for predicting a candidate region where flames are located in the image frame to be detected and the prediction probability of the flames in the candidate region by using the target detection module; and the position area determining module is used for determining the candidate area as the position area if the prediction probability is greater than a preset threshold.
Optionally, in this embodiment of the present application, the region image capturing module is specifically configured to: acquiring a plurality of image frames after the acquisition time of an image frame to be detected from video data, and intercepting the image frames according to the position area; or acquiring a plurality of image frames with the similarity greater than a preset threshold value with the image frame to be detected from the video data, and intercepting the plurality of image frames according to the position area.
Optionally, in an embodiment of the present application, the neural network model includes a first feature extraction module and a second feature extraction module, and the motion modal features include superposition modal features and splicing modal features; the modal feature extraction module includes: a superposition feature extraction module, used for superposing the plurality of area images by using the first feature extraction module to obtain a superposition area image and extracting the superposition modal features in the superposition area image; and a splicing feature extraction module, used for splicing the plurality of area images by using the second feature extraction module to obtain a spliced area image and extracting the splicing modal features in the spliced area image.
Optionally, in this embodiment of the present application, the neural network model further includes: the system comprises a feature fusion network module and a classification network module; a detection result obtaining module comprising: the fusion characteristic obtaining module is used for fusing the superposition modal characteristics and the splicing modal characteristics by using the characteristic fusion network module to obtain fusion characteristics; and the characteristic classification prediction module is used for performing classification prediction on the fusion characteristics by using the classification network module.
Optionally, in an embodiment of the present application, the flame detection device further includes: and the flame coordinate marking module is used for predicting the position coordinate of the flame according to the acquisition coordinate when the video data is acquired and the orientation angle of the camera if the flame exists in the video data, and marking the position coordinate on the map.
Optionally, in an embodiment of the present application, the flame detection device further includes: and the early warning information output module is used for generating and outputting early warning information if flame exists in the video data.
An embodiment of the present application further provides an electronic device, including: a processor and a memory, the memory storing processor-executable machine-readable instructions, the machine-readable instructions when executed by the processor performing the method as described above.
Embodiments of the present application also provide a storage medium having a computer program stored thereon, where the computer program is executed by a processor to perform the method as described above.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are required to be used in the embodiments of the present application will be briefly described below, it should be understood that the following drawings only illustrate some embodiments of the present application and therefore should not be considered as limiting the scope, and that those skilled in the art can also obtain other related drawings based on the drawings without inventive efforts.
FIG. 1 is a schematic flow chart of a flame detection method provided by an embodiment of the present application;
FIG. 2 is a schematic diagram illustrating a multi-frame superposition process provided by an embodiment of the present application;
FIG. 3 is a schematic diagram of a multi-frame splicing process provided by an embodiment of the present application;
FIG. 4 is a schematic diagram illustrating a multi-frame splicing process with example data provided by an embodiment of the present application;
FIG. 5 is a schematic flowchart illustrating interaction between an electronic device and a terminal device provided by an embodiment of the present application;
FIG. 6 is a schematic structural diagram of a flame detection device provided by an embodiment of the present application;
FIG. 7 is a schematic structural diagram of an electronic device provided by an embodiment of the present application.
Detailed Description
The technical solution in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application.
Before describing the flame detection method provided by the embodiments of the present application, some concepts related to the embodiments of the present application are described:
neural Networks (NNs), also known as Artificial Neural Networks (ANNs) or Neural-like networks, are mathematical models or computational models that mimic the structure and function of biological Neural networks (e.g., the central nervous system of an animal, which may be the brain) used to estimate or approximate functions in the field of machine learning and cognitive science. The neural network model refers to a neural network model obtained by training an untrained neural network by using preset training data.
Target detection, also called target extraction, is an image understanding algorithm based on target geometry and statistical features; target detection combines the positioning and identification of a target into one step, for example: based on a computer vision algorithm, a target of interest in the image is detected, that is, the position of the target is marked with a rectangular frame and the category of the target is identified.
It should be noted that the flame detection method provided in the embodiments of the present application may be executed by an electronic device, where the electronic device refers to a device terminal or a server having the function of executing a computer program, and the device terminal includes, for example: an embedded mobile device, a mobile terminal device, a smart phone, a personal computer (PC), a tablet computer, a personal digital assistant (PDA), a mobile Internet device (MID), a network switch or a network router, and the like.
Before describing the flame detection method provided by the embodiments of the present application, application scenarios applicable to the flame detection method are described, where the application scenarios include, but are not limited to: the flame detection method is used for detecting flame or fire, or the flame detection method is used for enhancing the function of a fire alarm system, or the flame detection method is used for improving the accuracy of flame detection and the like.
Please refer to fig. 1, which is a schematic flow chart of a flame detection method provided in the embodiment of the present application; the flame detection method mainly includes the steps that a position area where flame is located in an image frame to be detected is predicted through a trained neural network model, then a plurality of area images are cut out from the image frame in video data according to the position area, finally, the neural network model is used for extracting motion mode characteristics of the area images, and whether real flame exists in the video data or not is predicted according to the motion mode characteristics; that is to say, the flame detection method predicts according to the motion modal characteristics extracted from the regional images of the positions of the flames in the multi-frame video images, and effectively utilizes the appearance characteristics and multi-frame dynamic motion characteristic information of the flames in the motion process, so that the accuracy of flame detection is improved; the flame detection method may include:
step S110: and acquiring an image frame to be detected in the video data.
The obtaining method of the image frame to be detected in the step S110 includes: a first acquisition mode, in which a video camera, a video recorder, a color camera or other acquisition equipment is used to shoot a target object to obtain video data; then the acquisition equipment sends video data to the electronic equipment, and then the electronic equipment receives the video data sent by the acquisition equipment and extracts an image frame to be detected in the video data; in a second obtaining manner, the collecting device may further forward the video data to the video server, and then the electronic device may obtain the video data from the video server, specifically, for example: the method comprises the steps of obtaining video data from a file system of a video server, or obtaining the video data from a database of the video server, or obtaining the video data from a mobile storage device of the video server, and extracting an image frame to be detected in the video data.
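As an illustrative sketch that is not part of the original disclosure, the following Python snippet shows one way an electronic device might pull image frames from received video data using OpenCV; the file name and sampling stride are assumptions made only for illustration.

```python
# Hedged sketch: extracting image frames to be detected from video data with
# OpenCV. The video path and sampling stride are illustrative assumptions.
import cv2

def read_frames(video_path, stride=5):
    """Yield (frame_index, frame) pairs, keeping every `stride`-th frame."""
    cap = cv2.VideoCapture(video_path)
    index = 0
    while True:
        ok, frame = cap.read()        # frame is an HxWx3 BGR array
        if not ok:
            break
        if index % stride == 0:
            yield index, frame
        index += 1
    cap.release()

# Example: take the first sampled frame as the image frame to be detected.
# frames = read_frames("surveillance.mp4")
# _, image_to_detect = next(frames)
```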
After step S110, step S120 is performed: and predicting the position area of the flame in the image frame to be detected by using the neural network model.
The neural network model refers to a neural network model obtained by training an untrained neural network by using preset training data, where the neural network model may include: the system comprises a target detection module, a first feature extraction module, a second feature extraction module, a feature fusion network module and a classification network module.
In a specific practical process, the neural network model can adopt a lightweight neural network model, that is, the neural network model can be obtained by selecting the neural network with fewer network structure layers for training, and before training, the neural network can be subjected to model compression; where model compression includes, but is not limited to: model quantification, model pruning, knowledge distillation, and the like. When the light weight neural network model is adopted for calculation, the calculation amount of the whole model is small, and the real-time performance and the high efficiency can be achieved in the embedded end equipment.
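As a hedged sketch of the compression steps mentioned above (pruning and quantization), the snippet below uses PyTorch utilities on a placeholder network; the `detector` model and the pruning amount are assumptions and do not represent the patented network structure.

```python
# Hedged sketch of model compression with PyTorch; `detector` is a placeholder.
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

detector = nn.Sequential(nn.Linear(256, 128), nn.ReLU(), nn.Linear(128, 2))

# Model pruning: remove 30% of the smallest-magnitude weights in each Linear layer.
for module in detector.modules():
    if isinstance(module, nn.Linear):
        prune.l1_unstructured(module, name="weight", amount=0.3)
        prune.remove(module, "weight")   # make the pruning permanent

# Model quantization: dynamic int8 quantization of the Linear layers.
quantized = torch.quantization.quantize_dynamic(
    detector, {nn.Linear}, dtype=torch.qint8
)
```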
The embodiment of step S120 described above is, for example: predicting a candidate region where a flame is located in the image frame to be detected and the prediction probability of a flame existing in the candidate region by using a target detection module in the neural network model, wherein the target detection module is a neural network model for performing target detection on the flame in the image frame to be detected, and the neural network models which can be used by the target detection module include but are not limited to: the Feature Fusion Single Shot multibox Detector (FSSD), the YOLO model, the RCNN model, the Fast RCNN model, and the Faster RCNN model. It will be appreciated that the target detection module may predict a plurality of candidate regions and a prediction probability of a flame being present in each candidate region, for example: a first candidate region at the upper left corner and a second candidate region at the lower right corner are predicted, wherein the prediction probability of a flame existing in the first candidate region is 60%, and the prediction probability of a flame existing in the second candidate region is 90%.
After predicting the plurality of candidate regions and the prediction probability of the presence of flames in each candidate region, the prediction probability may be compared with a preset threshold, for example: if the prediction probability is larger than a preset threshold value, determining the candidate area as a position area; the preset threshold may be set according to a specific situation, for example: if the preset threshold is set to 70% or 80%, the second candidate region may be determined as the location region, and the first candidate region may not be set as the location region. Since the prediction probability is compared with the preset threshold to determine the location area, in a specific practical process, a plurality of candidate areas are usually provided, and each candidate area corresponds to one prediction probability, so that a plurality of candidate areas with prediction probabilities larger than the preset threshold may be provided, and thus a plurality of determined location areas may be provided, and then it may be determined and confirmed manually which candidate area is the true location area. Certainly, in a specific practical process, the candidate region with the highest prediction probability can be directly determined as the position region, manual intervention is not needed to confirm which candidate region is the true position region, and the candidate region with the highest prediction probability is directly determined as the position region, and then the position region is subjected to subsequent calculation, so that the accuracy of flame detection is guaranteed.
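The threshold comparison described above can be sketched as follows; this is not from the original disclosure, and the candidate tuple layout (x1, y1, x2, y2, probability) and the 0.7 threshold are assumptions for illustration.

```python
# Hedged sketch of selecting the position area from predicted candidate regions.
PRESET_THRESHOLD = 0.7  # illustrative value; the text mentions e.g. 70% or 80%

def select_location_areas(candidates, highest_only=False):
    if highest_only:
        # Alternative from the text: directly keep the highest-probability candidate.
        return [max(candidates, key=lambda c: c[4])] if candidates else []
    return [c for c in candidates if c[4] > PRESET_THRESHOLD]

# Example from the text: two candidates with probabilities 0.60 and 0.90.
candidates = [(0, 0, 50, 50, 0.60), (80, 80, 140, 140, 0.90)]
location_areas = select_location_areas(candidates)  # keeps only the 0.90 region
```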
After step S120, step S130 is performed: and intercepting corresponding area images from the image frame of the video data according to the position areas to obtain a plurality of area images.
The area image is an image obtained by cutting out a position area of one image frame of the video data, and therefore, the plurality of area images are obtained by cutting out the same position area of the plurality of image frames of the video data.
It should be noted that the position area may be understood as a regression box calculated by the regression of the neural network model; the regression box is not real image data, nor a part of the image, but refers to a virtual area in the images of the video data. Specifically, for example: assume that the image is divided into four regions in total, including an upper left corner region, a lower left corner region, an upper right corner region and a lower right corner region, and that the neural network model predicts that the flame is located in the lower left corner region of the image frame to be detected; the lower left corner region is then the above-mentioned position area, and the lower left corner region is subsequently intercepted from the image frames of the video data. Each image frame screened from the video data (the specific screening modes are explained below) is intercepted to obtain a plurality of area images, that is, each image frame corresponds to one area image; the intercepted plurality of area images may or may not include the area image intercepted from the lower left corner region of the image frame to be detected itself; of course, if the plurality of area images include the area image intercepted from the lower left corner region of the image frame to be detected, the accuracy of the finally obtained detection result for the real flame is higher.
There are many embodiments of the above step S130, including but not limited to the following:
the first embodiment is to screen a video frame according to an acquisition time and intercept the screened video frame, specifically for example: acquiring a plurality of image frames after the acquisition time of an image frame to be detected from video data, and intercepting the plurality of image frames according to a position area, specifically for example: assuming that the acquisition time of the image frame to be detected is 9 points, acquiring the image frame of which the acquisition time is more than 9 points, and intercepting the position area of the image frame; or, acquiring a plurality of image frames in a time range before and after the acquisition time of the image frame to be detected from the video data, and intercepting the plurality of image frames according to the position area, specifically for example: assuming that the acquisition time of the image frame to be detected is 9 points, the image frame with the acquisition time between 8 points 50 minutes and 9 points 10 minutes can be acquired, and the position area of the image frame is intercepted.
In the second embodiment, the video frames are filtered according to the frame sequence numbers, and the filtered video frames are intercepted, for example: acquiring a plurality of image frames after the frame number of the image frame to be detected from the video data, that is, acquiring a plurality of image frames with the frame number greater than the frame number of the image frame to be detected, and intercepting the plurality of image frames according to the position area, specifically for example: assuming that the frame number of the image frame to be detected is 100, acquiring the image frame with the frame number greater than 100, and intercepting the position area of the image frame; or, acquiring a plurality of image frames in a range before and after the frame number of the image frame to be detected from the video data, and intercepting the plurality of image frames according to the position area, specifically for example: assuming that the frame number of the image frame to be detected is 100, the image frame having the frame number between 90 and 110 may be acquired, and the location area of the image frame may be intercepted.
In the third embodiment, the video frames are filtered according to the similarity, and the filtered video frames are intercepted, for example: acquiring a plurality of image frames with the similarity greater than a preset similarity threshold with the image frame to be detected from the video data, and intercepting the plurality of image frames according to the position area; the preset similarity threshold may be set according to specific situations, for example: assuming that the preset similarity threshold is set to be 85%, the frame number of the image frame to be detected is 100, the similarity between the image frame to be detected and the image frame with the frame number of 99 is 80%, and the similarity between the image frame to be detected and the image frame with the frame number of 101 is 90%; therefore, the image frame with the frame number of 99 is not intercepted because its similarity with the image frame to be detected is smaller than the preset similarity threshold, while the similarity between the image frame to be detected and the image frame with the frame number of 101 is greater than the preset similarity threshold, so the position area of the image frame with the frame number of 101 can be intercepted.
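The following sketch, which is not part of the original disclosure, combines the frame-number window of the second embodiment with the similarity screening of the third embodiment and then intercepts the position area from each selected frame; the frame layout, window size and similarity metric are assumptions for illustration.

```python
# Hedged sketch of step S130: select frames near the frame to be detected and
# crop the same position area from each of them.
import numpy as np

def crop(frame, area):
    x1, y1, x2, y2 = area
    return frame[y1:y2, x1:x2]

def similarity(frame_a, frame_b):
    # Simple illustrative metric: 1 minus the normalized mean absolute difference.
    diff = np.abs(frame_a.astype(np.float32) - frame_b.astype(np.float32))
    return 1.0 - diff.mean() / 255.0

def select_and_crop(frames, detect_index, area, sim_threshold=0.85, window=10):
    region_images = []
    lo, hi = detect_index - window, detect_index + window
    for idx, frame in enumerate(frames):
        if lo <= idx <= hi and similarity(frame, frames[detect_index]) > sim_threshold:
            region_images.append(crop(frame, area))
    return region_images
```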
After step S130, step S140 is performed: and extracting the motion mode characteristics of the intercepted plurality of region images by using a neural network model.
The motion modal characteristic refers to a modal feature of the flame in the process of motion and change; it can be understood that the shape of the flame is different at every moment in the motion change process, the change of the flame shape follows a certain motion change rule, and this motion change rule extracted from the image data is called the motion modal characteristic. The motion modal features here may include: superposition modal features and splicing modal features; the superposition modal features may be understood as features obtained from the superposition of flame images at different times, and the splicing modal features may be understood as features obtained from the splicing of flame images at different times; please refer to the following description for the specific superposition mode and splicing mode.
The above-mentioned embodiment of extracting the motion modality features of the plurality of captured region images using the neural network model in step S140 may include:
step S141: and overlapping the plurality of area images by using a first feature extraction module to obtain an overlapped area image, and extracting overlapped modal features in the overlapped area image.
Please refer to fig. 2 for a schematic diagram of a multi-frame superposition process provided in the embodiment of the present application; in a specific practical process, a plurality of area images may be superimposed, where the plurality of area images may be 4, 9, 16, 25, or 36, and so on, and for ease of understanding and explanation, only the process of superimposing 4 area images is shown in fig. 2. The embodiment of step S141 described above includes, for example: superposing the plurality of area images by using a first feature extraction module to obtain a superposed area image, and extracting superposed modal features in the superposed area image, wherein the mode of extracting the superposed modal features is described in detail below; the specific superposition mode of the images of the plurality of areas is various, including but not limited to the following:
the first stacking method, mean stacking, specifically includes: one pixel point value in the superimposition-region image is a = (a1+ a2+ A3+ a 4)/4; wherein, a represents a pixel value in the image of the superimposition area, a1 represents a pixel value of the pixel a corresponding to the first image, a2 represents a pixel value of the pixel a corresponding to the second image, A3 represents a pixel value of the pixel a corresponding to the third image, and a4 represents a pixel value of the pixel a corresponding to the fourth image.
The second superposition method, weighted superposition, specifically includes: one pixel value in the superposition area image is A = a×A1 + b×A2 + c×A3 + d×A4; wherein the weights a, b, c and d may sum to 1, A represents a pixel value in the superposition area image, A1 represents the value of the corresponding pixel point in the first image, A2 represents the value of the corresponding pixel point in the second image, A3 represents the value of the corresponding pixel point in the third image, and A4 represents the value of the corresponding pixel point in the fourth image.
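The two superposition methods above can be sketched as follows; this is not part of the original disclosure, the area images are assumed to be NumPy arrays of identical size, and the weight values are illustrative.

```python
# Hedged sketch of step S141: mean and weighted superposition of region images.
import numpy as np

def mean_superpose(images):
    stack = np.stack([img.astype(np.float32) for img in images], axis=0)
    return stack.mean(axis=0)                      # A = (A1 + A2 + A3 + A4) / 4

def weighted_superpose(images, weights=(0.1, 0.2, 0.3, 0.4)):
    stack = np.stack([img.astype(np.float32) for img in images], axis=0)
    # Reshape the weights so they broadcast over grayscale or color images alike.
    w = np.asarray(weights, dtype=np.float32).reshape((-1,) + (1,) * (stack.ndim - 1))
    return (stack * w).sum(axis=0)                 # A = a*A1 + b*A2 + c*A3 + d*A4
```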
Step S142: and splicing the plurality of area images by using a second feature extraction module to obtain spliced area images, and extracting splicing modal features in the spliced area images.
Please refer to fig. 3 for a schematic diagram of a multi-frame splicing process provided in the embodiment of the present application; in a specific practical process, a plurality of area images may be stitched, where the plurality of area images may be 4, 9, 16, 25, or 36, and so on, and for ease of understanding and explanation, only the stitching process of the 4 area images is shown in fig. 3. It can be understood that, the multi-frame images are spliced at the pixel positions, the resolution of the spliced images is correspondingly enlarged, and the flame motion modal characteristics in the spliced images with enlarged resolution are extracted.
The embodiment of step S142 is, for example: splicing the plurality of area images by using the second feature extraction module to obtain a spliced area image, and extracting the splicing modal features in the spliced area image; the specific splicing manner is as follows: the pixel values at the same position in the 4 area images are spliced in the clockwise or anticlockwise direction, or in the order from left to right and from top to bottom, or in the order from top to bottom and from left to right, and of course they can also be spliced in the order from bottom to top and from right to left. For ease of understanding and explanation, the pixel values at the same position in the 4 area images are spliced in the order from left to right and from top to bottom in the example below.
Please refer to fig. 4, which is a schematic diagram illustrating a multi-frame splicing process using data according to an embodiment of the present application; assume that the 4 region images include: a first region image, a second region image, a third region image, and a fourth region image; wherein the first region image has four pixel values from left to right and from top to bottom, respectively, 1, 2, 3 and 4, the second region image has four pixel values from left to right and from top to bottom, respectively, 5, 6, 7 and 8, the third region image has four pixel values from left to right and from top to bottom, respectively, 9, 10, 11 and 12, and the fourth region image has four pixel values from left to right and from top to bottom, respectively, 13, 14, 15 and 16; the first area image, the second area image, the third area image and the fourth area image are stitched by using the second feature extraction module, and pixel values of the stitched area images are obtained as 1, 5, 2, 6, 9, 13, 10, 14, 3, 7, 4, 8, 11, 15, 12 and 16 from left to right and from top to bottom, respectively.
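The interleaved splicing of FIG. 4 can be sketched as follows; this code is not part of the original disclosure and assumes four arrays of identical size, but it reproduces the example pixel values given above.

```python
# Hedged sketch of step S142: splice four region images of identical size by
# interleaving the pixels at the same position, as in the FIG. 4 example.
import numpy as np

def splice_four(img1, img2, img3, img4):
    h, w = img1.shape[:2]
    out = np.zeros((2 * h, 2 * w) + img1.shape[2:], dtype=img1.dtype)
    out[0::2, 0::2] = img1
    out[0::2, 1::2] = img2
    out[1::2, 0::2] = img3
    out[1::2, 1::2] = img4
    return out

# Reproducing the FIG. 4 data: four 2x2 images give the 4x4 spliced image with
# rows 1 5 2 6 / 9 13 10 14 / 3 7 4 8 / 11 15 12 16.
a = np.array([[1, 2], [3, 4]]); b = np.array([[5, 6], [7, 8]])
c = np.array([[9, 10], [11, 12]]); d = np.array([[13, 14], [15, 16]])
print(splice_four(a, b, c, d))
```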
The extraction method of the superposition modal features in step S141 and the extraction method of the splicing modal features in step S142 are similar, so the two feature extraction processes are described together here: a feature extraction network model is used to extract the superposition modal features in the superposition area image and to extract the splicing modal features in the spliced area image; the neural networks that can be used for this feature extraction network model include but are not limited to: the Feature Fusion Single Shot multibox Detector (FSSD), the LeNet network, the AlexNet network, the GoogLeNet network, the VGG network, the ResNet network, the Wide ResNet network, the Inception network, and the like.
After step S140, step S150 is performed: and predicting the motion modal characteristics by using the neural network model to obtain the detection result of whether real flames exist in the video data.
The embodiment of predicting the motion modality characteristics by using the neural network model in step S150 may include:
step S151: and fusing the superposition modal characteristics and the splicing modal characteristics by using a characteristic fusion network module to obtain fusion characteristics.
The embodiment of step S151 described above is, for example: performing feature fusion operations such as mean fusion, weighted fusion, channel fusion and/or splicing fusion on the superposition modal features and the splicing modal features by using the feature fusion network module to obtain the fusion features. In a specific implementation process, if the sizes of the superposition modal features and the splicing modal features are not consistent, interpolation algorithms such as nearest neighbor interpolation, bilinear interpolation and bicubic interpolation can be selected to convert the smaller of the superposition modal features and the splicing modal features to a uniform size, so that the superposition modal features and the splicing modal features have consistent sizes for the fusion operation, and fusion is then performed to obtain the fusion features.
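A minimal sketch of this size alignment and fusion, assuming PyTorch feature maps in (batch, channels, height, width) layout, is shown below; it is not part of the original disclosure, and mean fusion additionally assumes equal channel counts.

```python
# Hedged sketch of step S151: align feature-map sizes with bilinear
# interpolation, then fuse by mean fusion or channel concatenation.
import torch
import torch.nn.functional as F

def fuse(superposed_feat, spliced_feat, mode="mean"):
    if superposed_feat.shape[-2:] != spliced_feat.shape[-2:]:
        # Resize the superposition feature map to the splicing feature map size.
        superposed_feat = F.interpolate(
            superposed_feat, size=spliced_feat.shape[-2:],
            mode="bilinear", align_corners=False
        )
    if mode == "mean":
        return (superposed_feat + spliced_feat) / 2            # mean fusion
    return torch.cat([superposed_feat, spliced_feat], dim=1)   # channel fusion
```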
Step S152: and carrying out classification prediction on the fusion characteristics by using a classification network module, and determining a classification result as a detection result of whether real flames exist in the video data.
The embodiment of step S152 described above is, for example: performing classification prediction on the fusion features by using the classification network module, wherein the classification network modules which can be used include but are not limited to: a Convolutional Neural Network (CNN), a Deep Neural Network (DNN), and the like, and determining the classification result as the detection result of whether a real flame exists in the video data.
In the implementation process, a position area where a flame is located in an image frame to be detected is predicted through a trained neural network model, then a plurality of area images are intercepted from the image frame in video data according to the position area, finally, the neural network model is used for extracting the motion mode characteristics of the area images, and whether real flame exists in the video data is predicted according to the motion mode characteristics; that is to say, the method predicts according to the motion modal characteristics extracted from the regional images of the positions of the flames in the multi-frame video images, and effectively utilizes the appearance characteristics and multi-frame dynamic motion characteristic information of the flames in the motion process, thereby improving the accuracy of detecting the flames.
Optionally, after obtaining the detection result in step S150, the position coordinates may be marked on the map, and this embodiment may include:
step S153: if flame exists in the video data, the position coordinate of the flame is predicted according to the acquisition information when the video data is acquired, and the position coordinate is marked on the map.
In the first implementation manner of step S153, the acquisition information may include: the acquisition coordinate and the orientation angle of the camera when the video data is acquired, so that the position coordinate of the flame can be predicted according to the acquisition coordinate and the orientation angle of the camera; if a flame exists in the video data, the position coordinate of the flame is predicted according to the acquisition coordinate when the video data is acquired and the orientation angle of the camera, and the position coordinate is marked on the map. There are many ways to obtain the acquisition coordinate and the orientation angle of the camera when the video data is acquired, including but not limited to the following: in the first obtaining mode, they are obtained solely from the video file produced by the acquisition device; specifically, if the acquisition device is a camera that records the acquisition coordinate and the orientation angle in the video file at acquisition time, they can be read directly from the video file. In the second obtaining mode, they are obtained from both the acquisition device and the video file it produces; specifically, the position coordinate where the camera is located is the acquisition coordinate; if the surveillance camera is installed at a fixed angle, the fixed angle may be used as the orientation angle; if the surveillance camera rotates dynamically, the rotation angle corresponding to the acquisition moment can be looked up from the stored correspondence between time points and rotation angles, and this rotation angle is determined as the orientation angle.
The above embodiment of predicting the position coordinate of the flame according to the acquisition coordinate and the orientation angle of the camera is, for example: calculating an estimated distance between the flame and the camera according to the orientation angle of the camera and the position of the flame in a video image of the video data, predicting the position coordinate of the flame according to the acquisition coordinate when the video data is acquired and the estimated distance between the flame and the camera, marking the position coordinate on the map, and outputting the map with the position coordinate, so that people can clearly see the position of the fire on the map. Of course, the flame size drawn on the map can also be adjusted according to the actual flame size, so that people can understand the spreading of the fire more clearly.
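A simplified geometric sketch of this prediction is shown below; it is not part of the original disclosure and assumes a flat two-dimensional map with the orientation angle measured counter-clockwise from the x-axis, with the map-marking API left as a placeholder.

```python
# Hedged sketch of step S153 (first implementation): estimate the flame's map
# coordinate from the camera's acquisition coordinate, its orientation angle and
# an estimated camera-to-flame distance.
import math

def flame_position(camera_xy, orientation_deg, distance_m):
    angle = math.radians(orientation_deg)
    x = camera_xy[0] + distance_m * math.cos(angle)
    y = camera_xy[1] + distance_m * math.sin(angle)
    return (x, y)

# Example: camera at (100.0, 200.0) m, facing 45 degrees, flame roughly 50 m away.
# mark_on_map(flame_position((100.0, 200.0), 45.0, 50.0))   # map API is assumed
```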
In a second implementation manner of the step S153, the collecting information may include: the method comprises the following steps of collecting time when video data are collected, collecting coordinates of a camera, and an included angle between the direction of the sunlight irradiating the camera and the direction of flame relative to the camera; if the orientation angle of the camera cannot be obtained or the orientation angle of the camera is not clear, the absolute direction of the sunlight irradiation can be determined according to the acquisition time when the video data is acquired, then the relative direction of the flame position coordinate relative to the camera coordinate is calculated according to the absolute direction and the included angle between the sunlight irradiation camera direction and the flame relative camera direction, then the relative distance of the flame from the acquisition coordinate is determined according to the shape and the size of the flame and the size of a reference object in the image, finally the position coordinate of the flame is determined according to the acquisition coordinate, the relative distance and the relative direction of the camera, and the position coordinate is marked on a map. Of course, the flame size on the map can be adjusted according to the flame size, so that people can know the spreading condition of the fire more clearly.
Optionally, after obtaining the detection result in step S150, early warning information may be generated and output, and this embodiment may include:
step S154: and if flame exists in the video data, generating and outputting early warning information.
The embodiment of step S154 described above is, for example: dividing fires into a plurality of fire grades according to the size of the flame area; if a flame exists in the video data, determining the fire grade according to the size of the position area, and generating and outputting early warning information according to the fire grade. Specifically, the fire grades are divided into nine grades according to the size of the flame area, wherein the first grade is the lowest grade and its corresponding early warning information is only voice prompt information, while the ninth grade is the highest grade and its corresponding early warning information is to immediately sound the alarm bell and flash the warning light; taking the first grade as an example, if the fire grade is determined to be the first grade according to the size of the position area, voice prompt information is generated and played.
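The grading logic can be sketched as follows; this is not from the original disclosure, and the grade boundaries (equal bins of the flame-to-frame area ratio) are illustrative assumptions rather than values from the patent.

```python
# Hedged sketch of step S154: map the flame area to one of nine fire grades and
# generate early warning information accordingly.
def fire_grade(flame_area_px, frame_area_px, num_grades=9):
    ratio = flame_area_px / frame_area_px
    # Grade 1 for the smallest flames, grade 9 when the flame fills the frame.
    return min(num_grades, max(1, int(ratio * num_grades) + 1))

def warning(grade):
    if grade == 1:
        return "voice prompt"
    if grade == 9:
        return "sound the alarm bell and flash the warning light immediately"
    return f"fire grade {grade} warning"
```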
Please refer to a schematic flow chart of interaction between an electronic device and a terminal device provided in an embodiment of the present application shown in fig. 5; optionally, the electronic device executing the flame detection method may further interact with a terminal device to provide a flame detection service for the terminal device, and a specific process of the interaction between the electronic device and the terminal device may include:
step S210: the electronic equipment receives video data and an image frame to be detected, which are sent by the terminal equipment.
The embodiment of step S210 described above is, for example: the electronic device receives the video data and the image frame to be detected sent by the terminal device through the Transmission Control Protocol (TCP) or the User Datagram Protocol (UDP).
Step S220: the electronic equipment predicts the position area of the flame in the image frame to be detected by using the neural network model.
Step S230: the electronic equipment intercepts the corresponding area image from the image frame of the video data according to the position area.
Step S240: the electronic device extracts the motion modality features of the plurality of captured region images obtained by capturing the same region of the plurality of image frames of the video data using the neural network model.
Step S250: the electronic equipment predicts the motion modal characteristics by using the neural network model to obtain the detection result of whether real flames exist in the video data.
The implementation principle and implementation manner of steps S220 to S250 are similar to those of steps S120 to S150, and therefore, the implementation principle and implementation manner of steps are not described herein, and if not clear, reference may be made to the description of steps S120 to S150.
Step S260: the electronic equipment sends the detection result to the terminal equipment, and then the terminal equipment receives and outputs the detection result sent by the electronic equipment.
The embodiment of step S260 described above is, for example: the electronic device sends the detection result to the terminal device through the Hypertext Transfer Protocol (HTTP) or Hypertext Transfer Protocol Secure (HTTPS), and the terminal device then receives and outputs the detection result sent by the electronic device through HTTP or HTTPS.
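As an illustrative sketch outside the original disclosure, the electronic device could return the detection result over HTTP(S) as follows; the URL, payload fields and use of the `requests` library are assumptions.

```python
# Hedged sketch of step S260: send the detection result to the terminal device.
import requests

def send_detection_result(terminal_url, has_real_flame, location_area):
    payload = {"real_flame": bool(has_real_flame), "location_area": location_area}
    response = requests.post(terminal_url, json=payload, timeout=5)
    response.raise_for_status()
    return response.status_code

# send_detection_result("https://terminal.example/api/flame", True, [80, 80, 140, 140])
```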
Please refer to fig. 6, which illustrates a schematic structural diagram of a flame detection device provided in the embodiment of the present application; the embodiment of the present application provides a flame detection device 300, including:
the to-be-detected frame acquiring module 310 is configured to acquire an to-be-detected image frame in the video data.
And the position region prediction module 320 is configured to predict a position region where the flame is located in the image frame to be detected by using the neural network model.
And the area image intercepting module 330 is configured to intercept a corresponding area image from an image frame of the video data according to the location area.
The modal feature extraction module 340 is configured to extract a motion modal feature of the captured multiple region images by using the neural network model, where the multiple region images are obtained by capturing the same region of multiple image frames of the video data.
A detection result obtaining module 350, configured to predict the motion modal characteristic by using the neural network model, and obtain a detection result of whether a real flame exists in the video data.
Optionally, in an embodiment of the present application, the neural network model includes: a target detection module; a location area prediction module comprising:
and the region probability prediction module is used for predicting a candidate region where the flame is located in the image frame to be detected and the prediction probability of the flame existing in the candidate region by using the target detection module.
And the position area determining module is used for determining the candidate area as the position area if the prediction probability is greater than a preset threshold.
Optionally, in this embodiment of the present application, the region image capturing module is specifically configured to:
acquiring a plurality of image frames after the acquisition time of an image frame to be detected from video data, and intercepting the image frames according to the position area;
or acquiring a plurality of image frames with the similarity greater than a preset threshold value with the image frame to be detected from the video data, and intercepting the plurality of image frames according to the position area.
Optionally, in an embodiment of the present application, the neural network model includes a first feature extraction module and a second feature extraction module, and the motion modal features include superposition modal features and splicing modal features; the modal feature extraction module includes:
and the superposition characteristic extraction module is used for superposing the plurality of area images by using the first characteristic extraction module to obtain a superposition area image and extracting superposition modal characteristics in the superposition area image.
And the splicing characteristic extraction module is used for splicing the plurality of area images by using the second characteristic extraction module to obtain spliced area images and extracting splicing modal characteristics in the spliced area images.
Optionally, in this embodiment of the present application, the neural network model further includes: the system comprises a feature fusion network module and a classification network module; a detection result obtaining module comprising:
and the fusion characteristic obtaining module is used for fusing the superposition modal characteristics and the splicing modal characteristics by using the characteristic fusion network module to obtain fusion characteristics.
And the characteristic classification prediction module is used for performing classification prediction on the fusion characteristics by using the classification network module.
Optionally, in an embodiment of the present application, the flame detection device further includes:
and the flame coordinate marking module is used for predicting the position coordinate of the flame according to the acquisition coordinate when the video data is acquired and the orientation angle of the camera if the flame exists in the video data, and marking the position coordinate on the map.
Optionally, in this application, the flame detection device may further include:
and the early warning information output module is used for generating and outputting early warning information if flame exists in the video data.
It should be understood that the apparatus corresponds to the above embodiment of the flame detection method and can perform the steps involved in the above method embodiment; the specific functions of the apparatus can be found in the description above, and a detailed description is appropriately omitted here to avoid redundancy. The apparatus includes at least one software functional module that can be stored in a memory in the form of software or firmware or solidified in the operating system (OS) of the device.
Please refer to fig. 7 for a schematic structural diagram of an electronic device according to an embodiment of the present application. An electronic device 400 provided in an embodiment of the present application includes: a processor 410 and a memory 420, the memory 420 storing machine-readable instructions executable by the processor 410, wherein the machine-readable instructions, when executed by the processor 410, perform the method described above.
The embodiment of the present application also provides a storage medium 430, where the storage medium 430 stores a computer program which, when executed by the processor 410, performs the method described above.
The storage medium 430 may be implemented by any type of volatile or nonvolatile storage device or combination thereof, such as a Static Random Access Memory (SRAM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), an Erasable Programmable Read-Only Memory (EPROM), a Programmable Read-Only Memory (PROM), a Read-Only Memory (ROM), a magnetic memory, a flash memory, a magnetic disk, or an optical disk.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. The apparatus embodiments described above are merely illustrative, and for example, the flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of apparatus, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
In addition, functional modules of the embodiments in the present application may be integrated together to form an independent part, or each module may exist separately, or two or more modules may be integrated to form an independent part.
In this document, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions.
The above description is only an alternative embodiment of the embodiments of the present application, but the scope of the embodiments of the present application is not limited thereto, and any person skilled in the art can easily conceive of changes or substitutions within the technical scope of the embodiments of the present application, and all the changes or substitutions should be covered by the scope of the embodiments of the present application.
Claims (8)
1. A method of flame detection, comprising:
acquiring an image frame to be detected in video data;
predicting a position area where a flame is located in the image frame to be detected by using a neural network model;
intercepting a corresponding area image from an image frame of the video data according to the position area;
extracting motion modality features of a plurality of intercepted region images by using the neural network model, wherein the region images are obtained by intercepting the same region of a plurality of image frames of the video data;
predicting the motion modal characteristics by using the neural network model to obtain a detection result of whether real flames exist in the video data;
the neural network model comprises a first feature extraction module and a second feature extraction module, and the motion modal features comprise superposition modal features and splicing modal features; the extracting of the motion modality features of the intercepted plurality of region images by using the neural network model comprises: superposing the plurality of area images by using the first feature extraction module to obtain superposed area images, and extracting the superposed modal features in the superposed area images; splicing the plurality of area images by using the second feature extraction module to obtain spliced area images, and extracting the splicing modal features in the spliced area images;
the neural network model further comprises: a feature fusion network module and a classification network module; the predicting the motion modal features by using the neural network model comprises: fusing the superposition modal features and the splicing modal features by using the feature fusion network module to obtain fusion features; and performing classification prediction on the fusion features by using the classification network module.
2. The method of claim 1, wherein the neural network model comprises: a target detection module; the predicting the position area of the flame in the image frame to be detected by using the neural network model comprises the following steps:
predicting a candidate region where the flame is located in the image frame to be detected and the prediction probability of the flame in the candidate region by using the target detection module;
and if the prediction probability is greater than a preset threshold value, determining the candidate region as the position region.
3. The method of claim 1, wherein said intercepting a corresponding area image from an image frame of the video data according to the position area comprises:
acquiring, from the video data, a plurality of image frames after the acquisition time of the image frame to be detected, and intercepting the plurality of image frames according to the position area;
or acquiring, from the video data, a plurality of image frames whose similarity to the image frame to be detected is greater than a preset threshold value, and intercepting the plurality of image frames according to the position area.
4. The method according to any one of claims 1-3, further comprising, after said obtaining a detection result of whether a real flame exists in said video data:
if flame exists in the video data, predicting the position coordinate of the flame according to the acquisition coordinate when the video data is acquired and the orientation angle of the camera, and marking the position coordinate on a map.
5. The method according to any one of claims 1-3, further comprising, after said obtaining a detection result of whether a real flame exists in said video data:
and if flame exists in the video data, generating and outputting early warning information.
6. A flame detection device, comprising:
the frame acquisition module to be detected is used for acquiring an image frame to be detected in the video data;
the position area prediction module is used for predicting the position area where the flame is located in the image frame to be detected by using a neural network model;
the area image intercepting module is used for intercepting a corresponding area image from an image frame of the video data according to the position area;
a modal feature extraction module, configured to extract motion modal features of a plurality of captured region images using the neural network model, where the region images are obtained by capturing a same region of a plurality of image frames of the video data;
the detection result obtaining module is used for predicting the motion modal characteristics by using the neural network model to obtain a detection result of whether real flame exists in the video data;
the neural network model comprises a first feature extraction module and a second feature extraction module, and the motion modal features comprise superposition modal features and splicing modal features; the extracting of the motion modality features of the intercepted plurality of region images by using the neural network model comprises: superposing the plurality of area images by using the first feature extraction module to obtain superposed area images, and extracting the superposed modal features in the superposed area images; splicing the plurality of area images by using the second feature extraction module to obtain spliced area images, and extracting the splicing modal features in the spliced area images;
the neural network model further comprises: a feature fusion network module and a classification network module; the predicting the motion modal features by using the neural network model comprises: fusing the superposition modal features and the splicing modal features by using the feature fusion network module to obtain fusion features; and performing classification prediction on the fusion features by using the classification network module.
7. An electronic device, comprising: a processor and a memory, the memory storing machine-readable instructions executable by the processor, the machine-readable instructions, when executed by the processor, performing the method of any of claims 1 to 5.
8. A storage medium, having stored thereon a computer program which, when executed by a processor, performs the method of any one of claims 1 to 5.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011176496.XA CN112001375B (en) | 2020-10-29 | 2020-10-29 | Flame detection method and device, electronic equipment and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112001375A CN112001375A (en) | 2020-11-27 |
CN112001375B (en) | 2021-01-05
Family
ID=73475794
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011176496.XA Active CN112001375B (en) | 2020-10-29 | 2020-10-29 | Flame detection method and device, electronic equipment and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112001375B (en) |
Families Citing this family (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112487994A (en) * | 2020-12-01 | 2021-03-12 | 上海鸢安智能科技有限公司 | Smoke and fire detection method and system, storage medium and terminal |
CN113516146A (en) * | 2020-12-21 | 2021-10-19 | 腾讯科技(深圳)有限公司 | Data classification method, computer and readable storage medium |
CN113627223A (en) * | 2021-01-07 | 2021-11-09 | 广州中国科学院软件应用技术研究所 | Flame detection algorithm based on deep learning target detection and classification technology |
CN112906495B (en) * | 2021-01-27 | 2024-04-30 | 深圳安智杰科技有限公司 | Target detection method and device, electronic equipment and storage medium |
CN113066077B (en) * | 2021-04-13 | 2021-11-23 | 南京甄视智能科技有限公司 | Flame detection method and device |
CN113379999B (en) * | 2021-06-22 | 2024-05-24 | 徐州才聚智能科技有限公司 | Fire detection method, device, electronic equipment and storage medium |
CN113554364A (en) * | 2021-09-23 | 2021-10-26 | 深圳市信润富联数字科技有限公司 | Disaster emergency management method, device, equipment and computer storage medium |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103617635A (en) * | 2013-11-28 | 2014-03-05 | 南京理工大学 | Transient flame detection method based on image processing |
CN111027541A (en) * | 2019-11-15 | 2020-04-17 | 国网安徽省电力有限公司检修分公司 | Flame detection method and system based on visible light and thermal imaging and storage medium |
CN111814638A (en) * | 2020-06-30 | 2020-10-23 | 成都睿沿科技有限公司 | Security scene flame detection method based on deep learning |
Family Cites Families (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105741481B (en) * | 2016-04-21 | 2018-07-06 | 大连理工大学 | A kind of fire monitoring positioning device and fire monitoring localization method based on binocular camera |
WO2018079400A1 (en) * | 2016-10-24 | 2018-05-03 | ホーチキ株式会社 | Fire monitoring system |
CN108121986B (en) * | 2017-12-29 | 2019-12-17 | 深圳云天励飞技术有限公司 | Object detection method and device, computer device and computer readable storage medium |
CN108257347B (en) * | 2018-01-10 | 2020-09-29 | 安徽大学 | Flame image sequence classification method and device by using convolutional neural network |
CN109376747A (en) * | 2018-12-11 | 2019-02-22 | 北京工业大学 | A kind of video flame detecting method based on double-current convolutional neural networks |
CN111368771A (en) * | 2020-03-11 | 2020-07-03 | 四川路桥建设集团交通工程有限公司 | Tunnel fire early warning method and device based on image processing, computer equipment and computer readable storage medium |
CN111489342B (en) * | 2020-04-09 | 2023-09-26 | 西安星舟天启智能装备有限责任公司 | Video-based flame detection method, system and readable storage medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN112001375B (en) | Flame detection method and device, electronic equipment and storage medium | |
CN112560999B (en) | Target detection model training method and device, electronic equipment and storage medium | |
CN108256404B (en) | Pedestrian detection method and device | |
JP7272533B2 (en) | Systems and methods for evaluating perceptual systems | |
CN107808111B (en) | Method and apparatus for pedestrian detection and attitude estimation | |
CN112183353B (en) | Image data processing method and device and related equipment | |
CN111898581B (en) | Animal detection method, apparatus, electronic device, and readable storage medium | |
US20190171897A1 (en) | System and method for automatically improving gathering of data using a data gathering device | |
CN106845352B (en) | Pedestrian detection method and device | |
CN108009466B (en) | Pedestrian detection method and device | |
CN108875750B (en) | Object detection method, device and system and storage medium | |
EP4013518A1 (en) | Flame finding with automated image analysis | |
CN110942456B (en) | Tamper image detection method, device, equipment and storage medium | |
JP2007209008A (en) | Surveillance device | |
CN110991385A (en) | Method and device for identifying ship driving track and electronic equipment | |
US20220122360A1 (en) | Identification of suspicious individuals during night in public areas using a video brightening network system | |
CN109255360B (en) | Target classification method, device and system | |
CN114463788A (en) | Fall detection method, system, computer equipment and storage medium | |
JP2007028680A (en) | Monitoring device | |
KR102218255B1 (en) | System and method for analyzing image based on artificial intelligence through learning of updated areas and computer program for the same | |
JP2021007055A (en) | Discriminator learning device, discriminator learning method, and computer program | |
US12033347B2 (en) | Image processing system for extending a range for image analytics | |
CN114119531A (en) | Fire detection method and device applied to campus smart platform and computer equipment | |
Tiwari et al. | Development of Algorithm for Object Detection & Tracking Using RGB Model | |
CN118135377B (en) | Model deployment method, terminal side equipment and storage medium |
Legal Events
Code | Title
---|---
PB01 | Publication
SE01 | Entry into force of request for substantive examination
GR01 | Patent grant