CN110941988A - Flame identification method and system and neural network for identifying flame - Google Patents
Flame identification method and system and neural network for identifying flame
- Publication number
- CN110941988A (application CN201910963981.2A)
- Authority
- CN
- China
- Prior art keywords
- flame
- rectangular frame
- image
- exists
- neural network
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/52—Surveillance or monitoring of activities, e.g. for recognising suspicious objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/25—Determination of region of interest [ROI] or a volume of interest [VOI]
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Data Mining & Analysis (AREA)
- Life Sciences & Earth Sciences (AREA)
- Artificial Intelligence (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Bioinformatics & Computational Biology (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Evolutionary Biology (AREA)
- Evolutionary Computation (AREA)
- General Engineering & Computer Science (AREA)
- Image Analysis (AREA)
Abstract
An embodiment of the invention provides a flame identification method and system and a neural network for identifying flame, belonging to the technical field of image identification. The method comprises: extracting depth features relating to the flame from an image; generating an image mask according to a color model of flame; forming, according to the image mask, a plurality of regions in the image where flame may exist, and forming a plurality of rectangular frames on those regions; determining whether an object exists in each rectangular frame; correcting the size of a rectangular frame when an object is determined to exist in it; mapping the corrected rectangular frames to the corresponding regions of the depth features and down-sampling those regions to obtain feature vectors; judging, using a fully connected layer and according to the feature vectors, whether flame exists in each rectangular frame; further correcting a rectangular frame when flame exists in it; judging whether flame exists in the image using a neural network unit; and, if flame is judged to exist, outputting the further corrected rectangular frames as the positions of the identified flame.
Description
Technical Field
The invention relates to the technical field of image recognition, in particular to a flame recognition method and system and a neural network for recognizing flame.
Background
Fire causes enormous damage to human society. Every year, forest fires claim the lives of many firefighters and seriously affect the local environment and ecology; at the same time, numerous fires break out in places where people gather, such as factories and residential areas, directly causing heavy property losses and casualties.
With the development of computer vision, flame detection technologies based on computer vision have been studied extensively. The sensitivity of image-based detection is much higher than that of conventional smoke alarms, so image-based detection can spot a flame in the initial stage of a fire and give an early warning.
Disclosure of Invention
An object of embodiments of the present invention is to provide a flame identification method, a flame identification system, and a neural network for identifying flames, which can determine by means of image recognition whether a fire has broken out at a site and, where a fire is determined to have occurred, further determine the location of the ignition point.
In order to achieve the above object, an embodiment of the present invention provides a flame identification method comprising:
extracting depth features relating to the flame from an image using a plurality of first convolutional layers;
generating an image mask according to a preset color model of flame;
forming, according to the image mask, a plurality of regions in the image where the flame may exist, and forming a plurality of rectangular frames on the regions;
determining, using a second convolutional layer and according to the depth features, whether an object exists in the rectangular frame;
correcting the size of the rectangular frame based on the depth features when the object is determined to exist in the rectangular frame;
mapping the corrected rectangular frame to the corresponding region of the depth features using a Region of Interest (RoI) Pool layer, and down-sampling the corresponding region to obtain a feature vector;
judging, using a fully connected layer and according to the feature vectors, whether flame exists in each rectangular frame;
further correcting the rectangular frame when flame is judged to exist in the rectangular frame;
judging whether flame exists in the image using a trained neural network unit;
outputting the further corrected rectangular frame as the position of the identified flame when flame is judged to exist in the image;
and determining that no flame exists in the image when flame is judged not to exist in the image.
Optionally, the flame identification method further comprises:
acquiring an image to be identified before performing the step of extracting depth features relating to the flame from the image using a plurality of first convolution layers.
Optionally, the flame identification method further comprises:
deleting the rectangular frame in the case that it is determined that the object does not exist in the rectangular frame.
Optionally, the flame identification method further comprises:
and deleting the rectangular frame when it is judged that no flame exists in the rectangular frame.
In another aspect, the present invention provides a neural network for identifying flames, the neural network comprising:
a plurality of first convolution layers for extracting depth features for the flame from an image;
a mask generation layer for generating an image mask according to a preset color model of the flame;
a plurality of second convolutional layers for:
forming a plurality of regions in the image where the flames may exist according to the image mask, and forming a plurality of rectangular frames on the regions;
determining whether an object exists in the rectangular frame according to the depth feature;
in the case that the object is determined to exist in the rectangular frame, correcting the size of the rectangular frame based on the depth feature;
a RoI Pool layer for:
mapping the corrected rectangular frame to a corresponding area in the depth feature, and performing downsampling processing on the corresponding area to obtain a feature vector;
a fully connected layer for:
judging whether flame exists in each rectangular frame according to the feature vectors;
further correcting the rectangular frame when flame is judged to exist in the rectangular frame;
a trained neural network unit for:
judging whether flame exists in the image;
outputting the further corrected rectangular frame as the position of the identified flame when flame is judged to exist in the image;
and determining that no flame exists in the image when flame is judged not to exist in the image.
Optionally, the first of the first convolutional layers is further configured to receive an image to be identified.
Optionally, the second convolutional layers are further configured to delete the rectangular frame when it is determined that no object exists in the rectangular frame.
Optionally, the fully connected layer is further configured to delete the rectangular frame when it is judged that no flame exists in the rectangular frame.
In yet another aspect, the invention also provides a flame identification system comprising a processor for performing a flame identification method as described in any one of the above.
In yet another aspect, the present disclosure also provides a storage medium storing instructions for reading by a machine to cause the machine to perform a method of flame identification as described in any one of the above.
Through the above technical solution, the flame identification method, the flame identification system, and the neural network for identifying flame can analyze a captured picture of a scene to judge whether flame is present at the scene and thus whether a fire has occurred and, where a fire is judged to have occurred, can further determine the position of the ignition point. This overcomes the alarm lag of smoke detectors in the prior art and helps to safeguard on-site equipment and personnel.
Additional features and advantages of embodiments of the invention will be set forth in the detailed description which follows.
Drawings
The accompanying drawings, which are included to provide a further understanding of the embodiments of the invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description serve to explain the embodiments of the invention without limiting the embodiments of the invention. In the drawings:
FIG. 1 is a flow diagram of a method of flame identification according to an embodiment of the invention;
FIG. 2 is a block diagram of a neural network according to an embodiment of the present invention; and
FIG. 3 is a schematic diagram of a workflow of a neural network according to one embodiment of the present invention.
Detailed Description
The following detailed description of embodiments of the invention refers to the accompanying drawings. It should be understood that the detailed description and specific examples, while indicating embodiments of the invention, are given by way of illustration and explanation only, not limitation.
In the embodiments of the present invention, unless otherwise specified, directional terms such as "upper", "lower", "top", and "bottom" are generally used with respect to the orientation shown in the drawings or the positional relationship of the components in the vertical or gravitational direction.
In addition, if descriptions of "first", "second", etc. appear in the embodiments of the present invention, they are for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features concerned. Thus, a feature defined as "first" or "second" may explicitly or implicitly include at least one such feature. Technical solutions of the various embodiments may be combined with each other, provided that the combination can be realized by a person skilled in the art; where the combined technical solutions contradict each other or cannot be realized, the combination should be considered not to exist and to fall outside the protection scope of the present invention.
FIG. 1 shows a flow diagram of a method for flame identification according to an embodiment of the invention. In fig. 1, the flame recognition method may include:
in step S10, depth features about the flame are extracted from the image using the plurality of first convolution layers. The first Convolutional layer may be, for example, a Convolutional layer in a fast-RCNN (Region-Convolutional Neural Network) model. For this image, it may be, for example, a picture taken live or a picture taken from a video taken live.
In step S11, an image mask is generated according to a preset color model of flame. Specifically, the image mask may be generated by setting the mask value to 1 in regions of the image that meet a preset condition and to 0 in regions that do not, thereby completing the binarization of the color model. The preset condition may be, for example, the one shown in formula (1): when the red value of a point is greater than its green value, the green value is greater than the blue value, and the red value is greater than a preset threshold, the mask value at that point is set to 1,

$$M(x,y)=\begin{cases}1, & f_R(x,y)>f_G(x,y)>f_B(x,y)\ \text{and}\ f_R(x,y)>T_R\\0, & \text{otherwise}\end{cases}\qquad(1)$$

where M(x, y) is the value of the image mask at coordinate point (x, y), 1 denotes a region of interest and 0 a region of non-interest, f_R(x, y), f_G(x, y) and f_B(x, y) are the red, green and blue color values at (x, y), and T_R is a preset color threshold.
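As an illustration of this binarization, the following is a minimal NumPy sketch of formula (1); the function name and the concrete threshold value are assumptions, since the patent does not fix T_R.

```python
import numpy as np

def flame_color_mask(image: np.ndarray, t_r: int = 135) -> np.ndarray:
    """Binarize an RGB image with the color model of formula (1).

    image: H x W x 3 uint8 array in RGB order.
    t_r:   red-channel threshold T_R (the patent leaves its value open;
           135 is an illustrative choice).
    Returns an H x W mask with 1 for candidate flame pixels, 0 elsewhere.
    """
    r = image[..., 0].astype(np.int32)
    g = image[..., 1].astype(np.int32)
    b = image[..., 2].astype(np.int32)
    mask = (r > g) & (g > b) & (r > t_r)
    return mask.astype(np.uint8)
```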
In step S12, a plurality of regions where flame may exist are formed in the image according to the image mask, and a plurality of rectangular frames are formed on those regions. Specifically, step S12 may form a plurality of regions of interest where flame may exist on the depth features (feature maps) according to the image mask and form a plurality of rectangular boxes (anchors) on those regions; the rectangular boxes are then mapped back into the image, giving the anchors of the regions of the image where flame may exist.
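The patent does not spell out how the masked regions are turned into anchors; one plausible sketch, using connected-component labelling from SciPy and a hypothetical set of anchor scales and aspect ratios, could look like this.

```python
import numpy as np
from scipy import ndimage

def candidate_boxes(mask: np.ndarray, scales=(32, 64, 128), ratios=(0.5, 1.0, 2.0)):
    """Derive rectangular candidate boxes (anchors) from the image mask.

    Each connected foreground component contributes anchors of several
    scales and aspect ratios centred on it (an assumption; the patent
    only says anchors are formed on the masked regions).
    Boxes are returned as (x1, y1, x2, y2).
    """
    labels, _ = ndimage.label(mask)
    boxes = []
    for obj in ndimage.find_objects(labels):
        cy = (obj[0].start + obj[0].stop) / 2.0   # component centre, rows
        cx = (obj[1].start + obj[1].stop) / 2.0   # component centre, columns
        for s in scales:
            for r in ratios:
                h, w = s * np.sqrt(r), s / np.sqrt(r)
                boxes.append((cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2))
    return np.array(boxes)
```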
In step S13, the second convolutional layer is used to determine, according to the depth features, whether an object exists in each rectangular frame. Specifically, the second convolutional layer may perform a convolution over the depth features inside each rectangular frame (anchor) to obtain a confidence for each anchor. To judge whether an object exists in a rectangular frame, a confidence threshold may be set: rectangular frames whose confidence is greater than or equal to the threshold are judged as possibly containing an object; a preset number of rectangular frames are then selected from these, in descending order of confidence, as the rectangular frames judged to contain an object. The remaining rectangular frames are judged as containing no object.
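A minimal sketch of this selection rule (threshold first, then the top N in descending confidence); the threshold of 0.7 and the count of 300 are illustrative values, not figures taken from the patent.

```python
import numpy as np

def select_anchors(boxes: np.ndarray, scores: np.ndarray,
                   conf_thresh: float = 0.7, top_n: int = 300):
    """Keep boxes whose objectness score clears conf_thresh, then take
    the top_n of those in descending score order (step S13)."""
    keep = scores >= conf_thresh
    boxes, scores = boxes[keep], scores[keep]
    order = np.argsort(-scores)[:top_n]   # descending confidence
    return boxes[order], scores[order]
```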
In step S15, if it is determined that an object is present in the rectangular frame, the size of the rectangular frame is corrected based on the depth features. The correction may be performed by a regression mechanism obtained by training the neural network, which is known to those skilled in the art and is not described further here. Accordingly, when it is determined that no object exists in a rectangular frame, the rectangular frame may be deleted, i.e., step S14 is performed.
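The patent leaves the correction mechanism to the trained network; for reference, the standard Faster R-CNN box parameterization applies predicted offsets (dx, dy, dw, dh) to each anchor roughly as follows (a common convention, not the patent's own formulas).

```python
import numpy as np

def apply_deltas(boxes: np.ndarray, deltas: np.ndarray) -> np.ndarray:
    """Refine (x1, y1, x2, y2) boxes with (dx, dy, dw, dh) regression
    outputs, using the usual Faster R-CNN parameterization."""
    w = boxes[:, 2] - boxes[:, 0]
    h = boxes[:, 3] - boxes[:, 1]
    cx = boxes[:, 0] + 0.5 * w
    cy = boxes[:, 1] + 0.5 * h

    cx = cx + deltas[:, 0] * w          # shift the centre
    cy = cy + deltas[:, 1] * h
    w = w * np.exp(deltas[:, 2])        # rescale width and height
    h = h * np.exp(deltas[:, 3])

    return np.stack([cx - 0.5 * w, cy - 0.5 * h,
                     cx + 0.5 * w, cy + 0.5 * h], axis=1)
```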
In step S16, the corrected rectangular frames are mapped to the corresponding regions of the depth features using the RoI Pool layer, and the corresponding regions are down-sampled to obtain feature vectors. The feature vectors may have a preset fixed length.
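For illustration, torchvision's roi_pool operator performs the kind of mapping and down-sampling described here; the feature-map size, the stride of 16, and the 7x7 output are assumptions, not values stated in the patent.

```python
import torch
from torchvision.ops import roi_pool

# feature_map: (1, C, H/16, W/16) tensor from the first convolution layers
# boxes:       (N, 4) tensor of corrected boxes in image coordinates
feature_map = torch.randn(1, 512, 38, 50)
boxes = torch.tensor([[120.0, 80.0, 260.0, 200.0]])
rois = torch.cat([torch.zeros(len(boxes), 1), boxes], dim=1)  # prepend batch index

pooled = roi_pool(feature_map, rois, output_size=(7, 7), spatial_scale=1.0 / 16)
feature_vectors = pooled.flatten(start_dim=1)  # fixed-length vector per box
```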
In step S17, the fully connected layer is used to determine whether there is a flame in each rectangular frame according to the feature vector.
In step S19, when it is judged that flame exists in a rectangular frame, that rectangular frame is further corrected. Since there are a plurality of rectangular frames, step S17 may judge each rectangular frame separately; a frame judged to contain flame is further corrected in step S19, while a frame judged to contain no flame may be deleted, i.e., step S18 is performed.
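Steps S17 to S19 can be pictured as a small fully connected head with a classification branch and a regression branch; the layer widths below are assumed, and the sketch is not the patent's implementation.

```python
import torch
from torch import nn

class FlameHead(nn.Module):
    """Fully connected head: flame/background score plus box refinement."""
    def __init__(self, in_features: int = 512 * 7 * 7, hidden: int = 1024):
        super().__init__()
        self.shared = nn.Sequential(
            nn.Linear(in_features, hidden), nn.ReLU(inplace=True),
            nn.Linear(hidden, hidden), nn.ReLU(inplace=True),
        )
        self.cls = nn.Linear(hidden, 2)   # flame / no flame per box
        self.reg = nn.Linear(hidden, 4)   # further (dx, dy, dw, dh) correction

    def forward(self, feature_vectors: torch.Tensor):
        x = self.shared(feature_vectors)
        return self.cls(x), self.reg(x)
```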
In step S20, the trained neural network unit is used to judge whether flame exists in the image. The trained neural network unit may be a network such as MobileNetV2. The specific process of training MobileNetV2 is known to those skilled in the art.
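As a sketch of the global judgement in step S20, MobileNetV2 with a two-class output can be instantiated from torchvision as follows; the input size and the label convention are assumptions, and the training procedure is omitted.

```python
import torch
from torchvision.models import mobilenet_v2

# Two output classes: "flame somewhere in the image" / "no flame".
global_net = mobilenet_v2(num_classes=2)

image_batch = torch.randn(1, 3, 224, 224)          # resized input image
has_flame = global_net(image_batch).argmax(dim=1)  # 1 = flame present (assumed label)
```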
In step S21, when it is judged that flame exists in the image, the further corrected rectangular frame is output as the position of the identified flame. In this embodiment, when the neural network unit judges that flame exists in the image, the image can be considered to contain flame, but the neural network unit cannot determine the flame's position; the rectangular frame further corrected by the aforementioned fully connected layer is therefore used as the position of the flame.
In step S22, when the neural network unit judges that no flame exists in the image, it is determined that the image contains no flame.
The sequence of the individual steps of the flame identification method as shown in fig. 1 is only to explain and supplement the contents of the present invention, and does not limit the scope of the present invention. Under the same technical concept, it should also be within the scope of the present invention to simply adjust only some of the steps of the present invention (e.g., exchange the order of steps S10 to S19 with step S20).
In addition, in view of the completeness of the method, the step of receiving the image to be recognized may be performed before the step S10 is performed.
In another aspect, the present invention provides a neural network for flame identification, which may include a plurality of first convolution layers 11, a mask generation layer 12, a plurality of second convolution layers 13, a RoI Pool layer 14, a fully connected layer 15, and a trained neural network unit 21, as shown in fig. 2.
The neural network may be used to perform the method illustrated in fig. 1. In particular, the workflow of the neural network may be as shown in fig. 3. In fig. 3, the plurality of first convolution layers 11 (the feature extractor) may be used to extract depth features (feature maps) relating to flame from an image (the input image). The mask generation layer 12 may be used to generate an image mask according to a preset color model of flame. The plurality of second convolution layers 13 (the masked RPN) may be used to form a plurality of regions in the image where flame may exist according to the image mask and to form a plurality of rectangular frames on those regions, to determine whether an object exists in each rectangular frame according to the depth features, and to correct the size of a rectangular frame based on the depth features when an object is determined to exist in it. The RoI Pool layer 14 may be used to map the corrected rectangular frames to the corresponding regions of the depth features and down-sample those regions to obtain feature vectors. The fully connected layer 15 (the classification and regression network) may be used to judge, from the feature vectors, whether flame exists in each rectangular frame, and to further correct a rectangular frame judged to contain flame. The trained neural network unit 21 (the global information network) may be used to judge whether flame exists in the image; when flame is judged to exist in the image, a voting strategy is adopted and the further corrected rectangular frames are output as the positions of the identified flame; when no flame is judged to exist, it is determined that the image contains no flame.
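The patent does not detail the voting strategy; read as a simple gate on the global judgement, the combination of the two branches might look like the sketch below, in which detector and global_net are hypothetical callables standing for the modified Faster-RCNN branch and the global information network.

```python
def identify_flames(image, detector, global_net):
    """Combine the two branches: report the refined boxes only when the
    global network also judges that the image contains a flame.
    `detector` and `global_net` are placeholders, not names from the patent."""
    boxes = detector(image)      # refined rectangular frames (may be empty)
    if global_net(image):        # global judgement: flame present?
        return boxes             # positions of the identified flames
    return []                    # no flame in the image
```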
In addition, the first of the plurality of first convolution layers 11 may be used to receive the image to be identified. The second convolution layers 13 may further be used to delete a rectangular frame when it is determined that no object exists in it. Accordingly, the fully connected layer 15 may further be used to delete a rectangular frame when it is judged that no flame exists in it.
In this embodiment, the first convolution layers 11, the mask generation layer 12, the second convolution layers 13, the RoI Pool layer 14, and the fully connected layer 15 may be regarded as being integrated into a modified Faster-RCNN model, and the neural network unit 21 may be regarded as a global information network. The neural network can thus also be seen as consisting of a modified Faster-RCNN model 10 and a global information network 20. In this embodiment, the Faster-RCNN model 10 is used to determine the locations of flames in the image, while the global information network 20 is used to judge whether flame is present in the image. Since the steps executed by the two are independent, when the running device is a single processor each may be executed in its own thread, and when the running device has a plurality of processors each may be executed by at least one processor.
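On a multi-core device, the independence of the two branches allows them to run concurrently, for example with one thread each, as in this sketch (detector and global_net are again hypothetical callables, not names used in the patent).

```python
from concurrent.futures import ThreadPoolExecutor

def identify_flames_parallel(image, detector, global_net):
    """Run the localization branch and the global-judgement branch
    concurrently, as suggested for a multi-processor deployment."""
    with ThreadPoolExecutor(max_workers=2) as pool:
        boxes_future = pool.submit(detector, image)     # box localization
        flame_future = pool.submit(global_net, image)   # global flame judgement
        return boxes_future.result() if flame_future.result() else []
```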
In yet another aspect, the present invention also provides a flame identification system, which may include a processor, which may be configured to perform any of the flame identification methods described above.
In yet another aspect, the present disclosure also provides a storage medium storing instructions that can be read by a machine to cause the machine to perform the flame identification method illustrated in fig. 1.
Through the above technical solution, the flame identification method, the flame identification system, and the neural network for identifying flame can analyze a captured picture of a scene to judge whether flame is present at the scene and thus whether a fire has occurred and, where a fire is judged to have occurred, can further determine the position of the ignition point. This overcomes the alarm lag of smoke detectors in the prior art and helps to safeguard on-site equipment and personnel.
Although the embodiments of the present invention have been described in detail with reference to the accompanying drawings, the embodiments of the present invention are not limited to the details of the above embodiments, and various simple modifications can be made to the technical solution of the embodiments of the present invention within the technical idea of the embodiments of the present invention, and the simple modifications all belong to the protection scope of the embodiments of the present invention.
It should be noted that the various features described in the above embodiments may be combined in any suitable manner without departing from the scope of the invention. In order to avoid unnecessary repetition, the embodiments of the present invention will not be described separately for the various possible combinations.
Those skilled in the art can understand that all or part of the steps of the methods of the above embodiments may be implemented by a program instructing related hardware. The program is stored in a storage medium and includes several instructions for enabling a device (which may be a single-chip microcomputer, a chip, or the like) or a processor to execute all or part of the steps of the methods of the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
In addition, various different embodiments of the present invention may be arbitrarily combined with each other, and the embodiments of the present invention should be considered as disclosed in the disclosure of the embodiments of the present invention as long as the embodiments do not depart from the spirit of the embodiments of the present invention.
Claims (10)
1. A flame identification method, characterized in that the flame identification method comprises:
extracting depth features relating to the flame from an image using a plurality of first convolutional layers;
generating an image mask according to a preset color model of flame;
forming, according to the image mask, a plurality of regions in the image where the flame may exist, and forming a plurality of rectangular frames on the regions;
determining, using a second convolutional layer and according to the depth features, whether an object exists in the rectangular frame;
correcting the size of the rectangular frame based on the depth features when the object is determined to exist in the rectangular frame;
mapping the corrected rectangular frame to the corresponding region of the depth features using a RoI Pool layer, and down-sampling the corresponding region to obtain a feature vector;
judging, using a fully connected layer and according to the feature vectors, whether flame exists in each rectangular frame;
further correcting the rectangular frame when flame is judged to exist in the rectangular frame;
judging whether flame exists in the image using a trained neural network unit;
outputting the further corrected rectangular frame as the position of the identified flame when flame is judged to exist in the image;
and determining that no flame exists in the image when flame is judged not to exist in the image.
2. The flame identification method of claim 1, further comprising:
acquiring an image to be identified before performing the step of extracting depth features relating to the flame from the image using a plurality of first convolution layers.
3. The flame identification method of claim 1, further comprising:
deleting the rectangular frame in the case that it is determined that the object does not exist in the rectangular frame.
4. The flame identification method of claim 1, further comprising:
and deleting the rectangular frame when it is judged that no flame exists in the rectangular frame.
5. A neural network for identifying flames, the neural network comprising:
a plurality of first convolution layers for extracting depth features for the flame from an image;
a mask generation layer for generating an image mask according to a preset color model of the flame;
a plurality of second convolutional layers for:
forming a plurality of regions in the image where the flames may exist according to the image mask, and forming a plurality of rectangular frames on the regions;
determining whether an object exists in the rectangular frame according to the depth feature;
in the case that the object is determined to exist in the rectangular frame, correcting the size of the rectangular frame based on the depth feature;
a RoI Pool layer for:
mapping the corrected rectangular frame to the corresponding region of the depth features, and down-sampling the corresponding region to obtain a feature vector;
a fully connected layer for:
judging whether flame exists in each rectangular frame according to the feature vectors;
further correcting the rectangular frame when flame is judged to exist in the rectangular frame;
a trained neural network unit for:
judging whether flame exists in the image;
outputting the further corrected rectangular frame as the position of the identified flame when flame is judged to exist in the image;
and determining that no flame exists in the image when flame is judged not to exist in the image.
6. The neural network of claim 5, wherein a first of the first convolutional layers is further configured to receive an image to be identified.
7. The neural network of claim 5, wherein the second convolutional layer is further configured to delete the rectangular frame if it is determined that no object is present in the rectangular frame.
8. The neural network of claim 5, wherein the fully-connected layer is further configured to delete the rectangular box if it is determined that no flame is present in the rectangular box.
9. A flame identification system, characterized in that the flame identification system comprises a processor for performing the flame identification method as claimed in any of claims 1 to 4.
10. A storage medium storing instructions for reading by a machine to cause the machine to perform a method of flame identification as claimed in any one of claims 1 to 4.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910963981.2A CN110941988B (en) | 2019-10-11 | 2019-10-11 | Flame identification method, system and neural network for identifying flame |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110941988A true CN110941988A (en) | 2020-03-31 |
CN110941988B CN110941988B (en) | 2023-06-13 |
Family
ID=69906044
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910963981.2A Active CN110941988B (en) | 2019-10-11 | 2019-10-11 | Flame identification method, system and neural network for identifying flame |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110941988B (en) |
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2019079446A (en) * | 2017-10-27 | 2019-05-23 | ホーチキ株式会社 | Fire monitoring system |
CN110135269A (en) * | 2019-04-18 | 2019-08-16 | 杭州电子科技大学 | A kind of fire image detection method based on blend color model and neural network |
Non-Patent Citations (2)
Title |
---|
YAN Yunyang; ZHU Xiaoyu; LIU Yi'an; GAO Shangbing: "Flame detection based on the Faster R-CNN model" *
ZHANG Kaisheng; ZHANG Mengmeng: "Research on a forest fire identification method based on K-means and a color model" *
Also Published As
Publication number | Publication date |
---|---|
CN110941988B (en) | 2023-06-13 |
Legal Events
Date | Code | Title | Description
---|---|---|---
 | PB01 | Publication | 
 | SE01 | Entry into force of request for substantive examination | 
 | CB02 | Change of applicant information | Address after: No. 397, Tongcheng South Road, Baohe District, Hefei City, Anhui Province 230061. Applicant after: Super high voltage branch of State Grid Anhui Electric Power Co.,Ltd.; State Grid Anhui Electric Power Company. Address before: No. 397, Tongcheng South Road, Baohe District, Hefei City, Anhui Province 230061. Applicant before: STATE GRID ANHUI POWER SUPPLY COMPANY OVERHAUL BRANCH; State Grid Anhui Electric Power Company
 | GR01 | Patent grant | 