NL2030149B1 - A method and system for closed-loop automatic control for whole-process intelligent monitoring of fire activity operations - Google Patents
- Publication number
- NL2030149B1 (application NL2030149A)
- Authority
- NL
- Netherlands
- Prior art keywords
- fire
- closed
- time
- area
- deep learning
- Prior art date
Links
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
- G06T7/73—Determining position or orientation of objects or cameras using feature-based methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/60—Extraction of image or video features relating to illumination properties, e.g. using a reflectance or lighting model
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/82—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/52—Surveillance or monitoring of activities, e.g. for recognising suspicious objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02P—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
- Y02P90/00—Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
- Y02P90/02—Total factory control, e.g. smart factories, flexible manufacturing systems [FMS] or integrated manufacturing systems [IMS]
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Evolutionary Computation (AREA)
- Artificial Intelligence (AREA)
- Software Systems (AREA)
- Data Mining & Analysis (AREA)
- Life Sciences & Earth Sciences (AREA)
- Health & Medical Sciences (AREA)
- General Engineering & Computer Science (AREA)
- General Health & Medical Sciences (AREA)
- Computing Systems (AREA)
- Biophysics (AREA)
- Molecular Biology (AREA)
- Mathematical Physics (AREA)
- Computational Linguistics (AREA)
- Biomedical Technology (AREA)
- Multimedia (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Databases & Information Systems (AREA)
- Medical Informatics (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Bioinformatics & Computational Biology (AREA)
- Evolutionary Biology (AREA)
- Image Analysis (AREA)
Abstract
The invention discloses a method and system for closed-loop automatic control and management for whole-process intelligent monitoring of fire activity operations. A safety loss function for fire operations is established to provide a measure of the safety of the operation, built around the three elements defined by the fire operation ticket: (1) the fire operation area, (2) the fire operation type, and (3) the spatial distance of the fire operation from the hazard source. On this basis, machine vision detection and recognition technology is used to detect and classify the fire operation area, the fire operation behavior and the neighboring hazard sources. The detection and classification results are fed back into the safety loss function of the fire operation, and according to the regularized closed-loop control rules of the fire operation, a command to terminate or continue the fire operation is issued, realizing closed-loop automatic control of the fire operation.
Description
1 001874P-NL
A method and system for closed-loop automatic control for whole-process intelligent monitoring of fire activity operations
The present invention relates to a method and system of closed-loop automatic control for whole-process intelligent monitoring of fire activity operations, belonging to the technical field of fire operation control.
Background Technology
The materials, intermediate parts and final products in the production processes of chemical enterprises are flammable, explosive, toxic and harmful, and the safety requirements for production processes and equipment are extremely demanding.
With the increasing scale of chemical production, production equipment operates continuously for long periods in high-temperature, high-pressure and strongly corrosive environments, which inevitably causes equipment aging and structural defects. The regular maintenance that chemical equipment therefore urgently requires inevitably involves fire operations. Fire work at a chemical production site is a high-risk operation, and strict control of it is the key to preventing poisoning, fire, explosion and other serious accidents.
At present, fire activity is controlled mainly through the issuance of fire operation tickets for preventive operation review, and through management and control by fire supervisors. This prevention strategy requires substantial manpower and resources and places severe demands on the competence of operating and monitoring personnel; in addition, personnel negligence and fatigue can themselves create fire activity safety hazards.
With the development of artificial intelligence and machine vision technology, automated intelligent imaging monitoring has become an effective technical means of replacing many kinds of manual observation work. It has been applied to target detection, classification and behavior recognition, offers high monitoring efficiency, timeliness and accuracy, and can operate for long periods without interruption. However, in the field of fire operation control these technologies face an application gap: there is still no technical means for automated control of fire operations. The main problems are: (i) the lack of a regularized description of fire activity safety measurement, (ii) the lack of a regularized description of the fire activity specification, and (iii) the lack of area detection, behavior recognition and target detection technology for fire activity.
Content of the Invention
Objective of the invention: To address the problems and shortcomings of the prior art, the invention provides a method and system for whole-process intelligent monitoring of fire activity operations, which identifies the location, behavior and hazard-source targets of fire activity through artificial intelligence technology, and measures the safety of fire activity through a safety loss function at time t to achieve closed-loop automatic control of fire activity.
Technical solution: A closed-loop automatic control method for whole-process intelligent monitoring of fire operations. Combining the fire operation ticket, machine vision technology and deep learning technology based on convolutional neural networks, a safety loss function for fire operations at time t is established. Machine vision technology is used to detect the area where the fire is generated, a behavior-recognition deep learning model is used to determine the type of fire operation behavior at time t, and a target-detection deep learning model is used to detect and classify hazard sources; the safety measure of the fire operation at time t is then calculated. According to the closed-loop control rules established on the basis of the fire operation safety loss function, the command to terminate or continue the fire operation is sent, completing the closed-loop automatic control of the fire operation.
The said safety loss function of the fire operation at time t is used to measure the safety of the fire operation; it is defined as:
E(t) = ∫_{τ₁}^{τ₂} [ α₁·(l(t) − λ)² + α₂·(h(t) − η)² + Σ_{i=1}^{4} w_i·s_i(t)⁻¹ ] dt

where l(t) is the location of the fire operation at time t, λ is the location of the fire operation area marked by the fire operation ticket, h(t) is the type of fire operation at time t, η is the type of fire operation marked by the fire operation ticket, w_i is the weight of the hazard source, i is the level of the hazard source, s_i(t) is the distance of the fire operation area within the monitoring range from the level-i hazard source at time t, τ₁ and τ₂ are the starting and ending moments of the fire operation marked by the fire operation ticket, and α₁, α₂ are the loss weights. The safety of the fire operation increases as the value of the fire operation safety loss function decreases.
The said i is the level of the hazard source; hazard sources are divided into 4 levels, with weights w₁ = 1, w₂ = 0.7, w₃ = 0.6, w₄ = 0.5.
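As a minimal sketch of how the loss could be evaluated in discrete time (assumptions not fixed by the description: unit frame spacing, scalar encodings of location and behavior type, and a hazard term that penalizes inverse distance w_i/s_i(t), which is consistent with absent hazards being given a very large default distance):

```python
import numpy as np

# Hazard-source weights by level, as given in the description.
W = {1: 1.0, 2: 0.7, 3: 0.6, 4: 0.5}

def safety_loss(l, lam, h, eta, s, alpha1=1.0, alpha2=1.0, dt=1.0):
    """Discretized safety loss over the ticketed interval [tau1, tau2].

    l, h : per-frame detected location / behaviour type (scalar encodings)
    lam, eta : ticketed location / behaviour type
    s : dict mapping hazard level i -> per-frame distances s_i(t)
    """
    loc_term = alpha1 * (np.asarray(l, float) - lam) ** 2
    beh_term = alpha2 * (np.asarray(h, float) - eta) ** 2
    # Inverse-distance hazard penalty, summed over the levels in view.
    haz_term = sum(W[i] / np.asarray(s_i, float) for i, s_i in s.items())
    return float(np.sum((loc_term + beh_term + haz_term) * dt))
```

With the ticketed location and behavior matched exactly and only a very distant level-1 hazard, the loss is close to zero, as expected.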
A seed-point selection method based on optical features, combined with a random walk, is used to detect the location of the fire at time t:
Optical feature 1: The color channel variance features of the RGB color image of the fire operation area within the monitoring range are calculated as:

V_r(x) = ( I_r(x) − (I_r(x) + I_g(x) + I_b(x))/3 )²
V_g(x) = ( I_g(x) − (I_r(x) + I_g(x) + I_b(x))/3 )²
V_b(x) = ( I_b(x) − (I_r(x) + I_g(x) + I_b(x))/3 )²

where V_r(x), V_g(x) and V_b(x) are the variances of the red, green and blue channels of the optical information at pixel point x, I_r(x) is the red channel light intensity at point x, I_g(x) is the green channel light intensity at point x, and I_b(x) is the blue channel light intensity at point x.
Optical feature 2: The global contrast feature of the red channel is calculated as:

C_r(x) = (1/|I_e|) · Σ_{y∈I_e} | I_r(x) − I_r(y) |

where C_r(x) is the global contrast feature of the red channel, I_r(x) is the red channel light intensity at point x, I_r(y) is the red channel light intensity at any point y in the environment, and I_e is the red channel component of the ambient optical information.
Principle of fire seed point selection:

J(x) = ( V_r(x) − V_g(x) ) + ( V_r(x) − V_b(x) ) + C_r(x)

The first k maximum J(x) values are taken, and their corresponding points form the seed point set Φ.
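A rough illustration of the seed-point scoring (the score symbol, rendered ambiguously in the source, is written J(x) here; the O(N log N) prefix-sum computation of the global contrast is an implementation choice, not part of the description):

```python
import numpy as np

def seed_points(img, k=50):
    """Score each pixel with J(x) = (V_r - V_g) + (V_r - V_b) + C_r and
    return the k highest-scoring (row, col) positions as fire seed points."""
    img = img.astype(np.float64)
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    mean = (r + g + b) / 3.0
    vr, vg, vb = (r - mean) ** 2, (g - mean) ** 2, (b - mean) ** 2
    # Global contrast of the red channel: mean |I_r(x) - I_r(y)| over all y,
    # computed with sorted prefix sums instead of an O(N^2) double loop.
    flat = np.sort(r.ravel())
    n = flat.size
    prefix = np.concatenate(([0.0], np.cumsum(flat)))
    x = r.ravel()
    idx = np.searchsorted(flat, x, side="right")  # number of y <= x
    contrast = (idx * x - prefix[idx]) + ((prefix[n] - prefix[idx]) - (n - idx) * x)
    cr = (contrast / n).reshape(r.shape)
    score = (vr - vg) + (vr - vb) + cr
    order = np.argsort(score.ravel())[::-1][:k]
    ys, xs = np.unravel_index(order, score.shape)
    return list(zip(ys.tolist(), xs.tolist()))
```

On a toy frame with a single saturated-red pixel, that pixel dominates both the red-variance margin and the red-channel contrast, so it is selected first.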
According to the seed point set Φ, a random walk model is used to obtain the complete fire area; the minimum bounding rectangle of the fire area is established, and the center of the rectangle is marked as the fire location l(t).
Combined with the fire location l(t), the behavior-recognition deep learning model is used to determine the type of fire operation behavior at time t.
The imaging area with radius δ centered at the fire location l(t) is designated as the area s(t) in which the fire operation behavior occurs, where δ = 0.5N and N is the image length.
By deploying video sensors in the hazardous chemical plant area, images of different scenes, different plant areas, different lighting conditions and different types of fire operations are captured and stored. The types of fire operations in the images are manually labeled to determine the label of each image, and a typical fire operation image database is constructed by combining the images with their labels. This database is input to the behavior-recognition deep learning model for training, and the trained model is used to classify the type of fire operation in the fire operation behavior occurrence area at time t.
Through on-site image acquisition of the four types of hazard sources and crawling of public online images, images of the four types of hazard sources in different scenes, different plants and different lighting conditions are captured and stored. The hazard source types in the images are manually labeled to determine the label of each image, and an image database of the four types of hazard sources is constructed by combining the images with their labels. This database is input to the target detection and recognition deep learning model for training. The trained model is used to detect and classify the major hazard sources within the imaging field of view at time t, segment the regions of the four different classes of hazard sources, and mark the center of the minimum bounding rectangle of each region as the hazard source location d_i(t), i = 1, 2, 3, 4.
If there is a level-i hazard source in the imaging field of view:

s_i(t) = || l(t) − d_i(t) ||

where || · || is the Euclidean distance metric between two locations.
If there is no level-i hazard source in the imaging field of view, the default value s_i(t) = 10¹⁰ is used.
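The per-hazard distance term can be sketched as follows (the exact exponent of the large default is garbled in the source; 1e10 is assumed here, which makes an absent hazard class contribute negligibly to the inverse-distance loss):

```python
import numpy as np

DEFAULT_DISTANCE = 1e10  # assumed large default when no level-i hazard is in view

def hazard_distance(fire_loc, hazard_loc):
    """s_i(t): Euclidean distance between the fire location l(t) and the
    centre d_i(t) of the detected level-i hazard region, or the large
    default when that hazard class is absent from the field of view."""
    if hazard_loc is None:
        return DEFAULT_DISTANCE
    return float(np.linalg.norm(np.asarray(fire_loc, float) - np.asarray(hazard_loc, float)))
```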
According to the closed-loop control rules of the fire operation based on the safety loss function of the fire operation, the fire operation is controlled at time t:
When E(t) > ε, the fire operation is immediately terminated at time t + 1.
Otherwise, the fire activity is allowed to continue, and E(t+1) is calculated.
So far, the closed-loop control of the fire activity at time t is completed.
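The termination rule above amounts to a per-frame threshold check over a stream of loss values; a minimal sketch (function and parameter names are illustrative, not from the description):

```python
def run_closed_loop(loss_stream, epsilon):
    """Issue the control command frame by frame: terminate at the first
    time step whose safety loss E(t) exceeds the threshold epsilon,
    otherwise let the fire operation continue to the end of the stream."""
    for t, E_t in enumerate(loss_stream):
        if E_t > epsilon:
            return t, "terminate"
    return len(loss_stream) - 1, "continue"
```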
A system for realizing the above closed-loop automatic control method for whole-process intelligent monitoring of fire activity operations includes a fire activity safety loss function calculation module, a fire area detection module, a fire activity closed-loop control module, a module for determining the type of fire activity behavior, and a module for marking the location of the hazard source.
The said fire activity safety loss function calculation module combines the fire activity ticket, machine vision technology and deep learning technology based on convolutional neural networks to establish the fire activity safety loss function at time t.
Said fire area detection module uses machine vision technology to detect the area where the fire is generated.
The said module for determining the type of fire operation behavior uses the behavior-recognition deep learning model to determine the type of fire operation behavior at time t.
The said module for marking the location of the hazard source uses the target-detection deep learning model to detect and classify hazard sources, and finally calculates the safety metric of the fire operation at time t.
Said fire activity closed-loop control module, according to the closed-loop control rules based on the established fire activity safety loss function, sends the command to terminate or continue the fire activity, thereby completing the closed-loop automatic control of the fire activity.
A computer device, including a memory, a processor and a computer program stored in the memory and runnable on the processor, wherein the processor executes the computer program to implement the closed-loop automatic control method for whole-process intelligent monitoring of fire activity operations described above.
A computer-readable storage medium storing a computer program that executes the closed-loop automatic control method for whole-process intelligent monitoring of fire operations described above.
Beneficial effect: Compared with the prior art, the present invention provides a closed-loop automatic control method and system for whole-process intelligent monitoring of fire operations, which can automatically measure the safety of fire operations and automatically monitor and control the whole process of fire operations, realizing intelligent monitoring and automatic control of fire operations.
Description of the attached drawings
Figure 1 is a flow chart of the method of the embodiment of the present invention.
Specific implementation
The present invention is further elucidated below in conjunction with specific embodiments. It should be understood that these embodiments serve only to illustrate the present invention and not to limit its scope; after reading the present disclosure, various equivalent modifications by a person skilled in the art fall within the scope defined by the claims appended to this application.
As shown in FIG. 1, in a closed-loop automatic control method for whole-process intelligent monitoring of fire operations, the core of the method is the safety loss function of the fire operation at time t, used to measure the safety of the fire operation:

E(t) = ∫_{τ₁}^{τ₂} [ α₁·(l(t) − λ)² + α₂·(h(t) − η)² + Σ_{i=1}^{4} w_i·s_i(t)⁻¹ ] dt

where l(t) is the location of the fire operation at time t, λ is the location of the fire operation area marked by the fire operation ticket, h(t) is the type of fire operation at time t, η is the type of fire operation marked by the ticket, w_i is the weight of the hazard source and i is its level (hazard sources are divided into 4 levels, with w₁ = 1, w₂ = 0.7, w₃ = 0.6, w₄ = 0.5), s_i(t) is the distance of the fire operation area within the monitoring range from the level-i hazard source at time t, τ₁ and τ₂ are the starting and ending moments of the fire operation marked by the ticket, and α₁, α₂ are the loss weights. The safety of the fire operation increases as the value of the safety loss function decreases.
The specific fire activity control process is as follows:
Step 1: Detect the location of the fire at time t, using the optical-feature-based seed point selection and random walk method:
Optical feature 1: The channel variance features are calculated as:

V_r(x) = ( I_r(x) − (I_r(x) + I_g(x) + I_b(x))/3 )²
V_g(x) = ( I_g(x) − (I_r(x) + I_g(x) + I_b(x))/3 )²
V_b(x) = ( I_b(x) − (I_r(x) + I_g(x) + I_b(x))/3 )²

where V_r(x), V_g(x) and V_b(x) are the variances of the red, green and blue channels of the optical information at point x, I_r(x) is the red channel light intensity at point x, I_g(x) is the green channel light intensity at point x, and I_b(x) is the blue channel light intensity at point x.
Optical feature 2: The global contrast feature of the red channel is calculated as:

C_r(x) = (1/|I_e|) · Σ_{y∈I_e} | I_r(x) − I_r(y) |

where C_r(x) is the global contrast feature of the red channel, I_r(x) is the red channel light intensity at point x, I_r(y) is the red channel light intensity at any point y in the environment, and I_e is the red channel component of the environmental optical information.
Seed point selection principle:

J(x) = ( V_r(x) − V_g(x) ) + ( V_r(x) − V_b(x) ) + C_r(x)

The first k maximum J(x) values are taken, and their corresponding points form the seed point set Φ.
The complete fire region is obtained using the random walk model according to the seed point set Φ. Target region extraction based on the random walk method is described in the literature [Ramadan H, Tairi H. Pattern mining-based video saliency detection: application to moving object segmentation. Computers & Electrical Engineering, 2018, 70: 567-579.], and is not repeated here.
The minimum bounding rectangle of this fire region is established, and the center point of this rectangle is marked as the fire occurrence location l(t).
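Given a binary mask of the segmented fire region, the marked location l(t) is simply the centre of its axis-aligned bounding rectangle; a minimal sketch:

```python
import numpy as np

def fire_location(mask):
    """Centre of the minimum axis-aligned bounding rectangle of the fire
    region mask (True where the random walk labelled a pixel as fire)."""
    ys, xs = np.nonzero(mask)
    if ys.size == 0:
        return None  # no fire region detected in this frame
    return ((ys.min() + ys.max()) / 2.0, (xs.min() + xs.max()) / 2.0)
```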
Step 2: Combining the fire location l(t), the behavior-recognition deep learning model is used to determine the type of fire operation behavior at time t.
The imaging area with radius δ centered at the fire location l(t) is designated as the area s(t) of the fire operation behavior, where δ = 0.5N and N is the image length.
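The behavior region can be realized as a square crop of radius δ around l(t), clamped to the frame (a sketch; the radius symbol is garbled in the source, and whether "image length" N means height or width is not specified, so height is assumed here):

```python
import numpy as np

def behavior_roi(frame, center, delta=None):
    """Crop the window of radius delta around the fire location l(t),
    clamped to the image borders; delta defaults to 0.5 * N with N
    taken as the image height."""
    h, w = frame.shape[:2]
    if delta is None:
        delta = int(0.5 * h)
    cy, cx = int(round(center[0])), int(round(center[1]))
    y0, y1 = max(0, cy - delta), min(h, cy + delta + 1)
    x0, x1 = max(0, cx - delta), min(w, cx + delta + 1)
    return frame[y0:y1, x0:x1]
```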
By deploying video sensors in the hazardous chemical plant area, video images of different scenes, different plant areas, different lighting conditions and different types of fire operations are captured and stored, manually annotated, and combined into a typical fire operation image database. This database is input to the behavior-recognition deep learning model for training, and the trained model is used to classify the types of fire operation behavior in the region at time t. The training and classification process of the behavior-recognition deep learning model is described in [Wang Zhongmin, Cao Hongjiang, Fan Lin. A deep learning method for human behavior recognition based on convolutional neural network. Computer Science, 2016, 43(s2): 56-58.], and is not repeated here.
Step 3: Through on-site image acquisition of the four types of hazard sources and crawling of public online images, images of the four types of hazard sources under different scenes, different plants and different lighting conditions are captured, stored and manually annotated, and an image database of the four types of hazard sources is constructed. This database is input to the target detection and recognition deep learning model for training. The trained model is used to detect and classify the major hazard sources in the imaging field of view where the fire behavior occurs at time t; the regions of the four different classes of hazard sources are classified, and the center of the minimum bounding rectangle of each region is marked as the hazard source location d_i(t), i = 1, 2, 3, 4. The training and classification process of the deep learning target detection model is referenced in [Lu, K., Chen, J., Little, J. J., & He, H. (2018). Lightweight convolutional neural networks for player detection and classification. Computer Vision and Image Understanding, 172, 77-87.], and is not repeated here.
If there is a level-i hazard source in the imaging field of view:

s_i(t) = || l(t) − d_i(t) ||

where || · || is the Euclidean distance metric between two locations, and d_i(t), i = 1, 2, 3, 4, is the center of the minimum bounding rectangle of the corresponding hazard source region, taken as the location of the hazard source.
If there is no level-i hazard source in the imaging field of view, the default value s_i(t) = 10¹⁰ is used.
Step 4: According to the closed-loop control rule based on the safety loss function of the fire activity, fire activity control is carried out at time t.
When E(t) > ε, the fire operation is immediately terminated at time t + 1.
Otherwise, the fire activity is allowed to continue, and E(t+1) is calculated.
So far, the closed-loop control of the fire activity at time t is completed.
The system for realizing the above closed-loop automatic control method for whole-process intelligent monitoring of fire activity includes a fire activity safety loss function calculation module, a fire area detection module, a fire activity closed-loop control module, a module for determining the type of fire activity behavior, and a module for marking the location of the hazard source.
The fire operation safety loss function calculation module combines fire operation tickets, machine vision technology and deep learning technology based on convolutional neural networks to establish the fire operation safety loss function at time t. The fire operation safety loss function at time t is as described above.
The fire area detection module uses machine vision technology to detect the area where the fire is generated. The specific implementation process is as described in step 1 above.
The module for determining the type of fire operation behavior uses the behavior-recognition deep learning model to determine the type of fire operation at time t. The specific implementation process is as described in step 2 above.
The location marking module of the hazard source uses the deep learning model of target detection to detect and classify the hazard source, and finally calculates the safety metric of the fire operation at time t. The specific implementation process is described in step 3 above.
The fire operation closed-loop control module, according to the closed-loop control rules based on the established fire operation safety loss function, sends the command to terminate or continue the fire operation, thereby completing the closed-loop automatic control. The specific implementation process is as described in step 4 above.
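One way the five modules could be wired together per frame is sketched below (the module interfaces and names are hypothetical; none of them come from the description):

```python
def monitor_step(frame, ticket, modules, epsilon):
    """One whole-process monitoring step chaining the modules:
    area detection -> behavior recognition -> hazard marking -> loss -> control."""
    loc = modules["area_detection"](frame)             # l(t)
    beh = modules["behavior"](frame, loc)              # h(t)
    hazards = modules["hazard_marking"](frame)         # {i: d_i(t)}
    E_t = modules["loss"](loc, beh, hazards, ticket)   # E(t)
    # Closed-loop control rule: terminate when the loss exceeds the threshold.
    return "terminate" if E_t > epsilon else "continue"
```

Each entry in `modules` would in practice wrap a trained model or the loss computation; here stubs suffice to exercise the control flow.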
Obviously, it should be understood by those skilled in the art that the steps of the above closed-loop automatic control method, or the modules of the above closed-loop automatic control system, can be implemented with common computing devices; they can be concentrated on a single computing device or distributed over a network composed of multiple computing devices. Optionally, they can be implemented in program code executable by a computing device, so that they can be stored in a storage device and executed by that device; in some cases the steps shown or described can be executed in a different order than herein. They can also be implemented separately as individual integrated circuit modules, or multiple modules or steps can be implemented as a single integrated circuit module.
In this way, embodiments of the present invention are not limited to any particular combination of hardware and software.
Claims (10)
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111080568.5A CN113792658B (en) | 2021-09-15 | 2021-09-15 | Automatic control method and system for closed loop of fire operation in whole process intelligent monitoring |
Publications (2)
Publication Number | Publication Date |
---|---|
NL2030149A NL2030149A (en) | 2022-03-15 |
NL2030149B1 true NL2030149B1 (en) | 2023-03-29 |
Family
ID=79183499
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
NL2030149A NL2030149B1 (en) | 2021-09-15 | 2021-12-16 | A method and system for closed-loop automatic control for whole-process intelligent monitoring of fire activity operations |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN113792658B (en) |
NL (1) | NL2030149B1 (en) |
Family Cites Families (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103903077A (en) * | 2012-12-27 | 2014-07-02 | 上海建工七建集团有限公司 | Danger source supervision system and method |
CN104658187B (en) * | 2015-02-09 | 2017-02-22 | 云南电网有限责任公司西双版纳供电局 | Operational risk management and control reminding method and device for power transmission line of power grid |
KR20170006095A (en) * | 2015-07-07 | 2017-01-17 | 주식회사 피식스컨설팅 | Safety managing system and method for construction site and industries |
CN109858819B (en) * | 2019-02-13 | 2021-08-27 | 重庆真趣信息科技有限公司 | Management system of fire operation ticket |
CN110427825B (en) * | 2019-07-01 | 2023-05-12 | 上海宝钢工业技术服务有限公司 | Video flame identification method based on fusion of key frame and fast support vector machine |
CN110570076A (en) * | 2019-07-19 | 2019-12-13 | 云南昆钢电子信息科技有限公司 | Dangerous work site inspection and supervision management system and method |
CN110929923B (en) * | 2019-11-08 | 2023-04-07 | 温州设计集团有限公司 | Urban safety risk management and control system based on digital twin technology |
-
2021
- 2021-09-15 CN CN202111080568.5A patent/CN113792658B/en active Active
- 2021-12-16 NL NL2030149A patent/NL2030149B1/en active
Also Published As
Publication number | Publication date |
---|---|
CN113792658B (en) | 2023-09-01 |
CN113792658A (en) | 2021-12-14 |
NL2030149A (en) | 2022-03-15 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111144263B (en) | Construction worker high-falling accident early warning method and device | |
CN111898514B (en) | Multi-target visual supervision method based on target detection and action recognition | |
CN108022235B (en) | Method for identifying defects of key components of high-voltage transmission iron tower | |
Hossain et al. | Wildfire flame and smoke detection using static image features and artificial neural network | |
CN111686392A (en) | Artificial intelligence fire extinguishing system is surveyed to full scene of vision condition | |
CN112633052A (en) | Belt tearing detection method | |
CN111062373A (en) | Hoisting process danger identification method and system based on deep learning | |
CN113713292A (en) | Method and device for carrying out accurate flame discrimination, fire extinguishing point positioning and rapid fire extinguishing based on YOLOv5 model | |
CN108566538A (en) | Based on the circular coal yard personnel safety guard of Infrared-Visible fusion tracking and the monitoring system and method for spontaneous combustion | |
CN112184773A (en) | Helmet wearing detection method and system based on deep learning | |
CN113743256A (en) | Construction site safety intelligent early warning method and device | |
CN111163294A (en) | Building safety channel monitoring system and method for artificial intelligence target recognition | |
WO2023104557A1 (en) | Machine-learning for safety rule violation determination | |
CN113191273A (en) | Oil field well site video target detection and identification method and system based on neural network | |
CN115131937A (en) | Forest fire early warning method based on sensor and deep learning | |
CN116846059A (en) | Edge detection system for power grid inspection and monitoring | |
CN113469098B (en) | Intelligent visual monitoring device for organic hazardous chemical leakage | |
NL2030149B1 (en) | A method and system for closed-loop automatic control for whole-process intelligent monitoring of fire activity operations | |
CN114241189B (en) | Ship black smoke recognition method based on deep learning | |
Yan et al. | Forest Fire Image Intelligent Recognition based on the Neural Network. | |
CN114663805A (en) | Flame positioning alarm system and method based on convertor station valve hall fire-fighting robot | |
Zhou et al. | An improved Yolov5s based real-time spontaneous combustion point detection method | |
Shrigandhi et al. | Systematic Literature Review on Object Detection Methods at Construction Sites | |
Xiang et al. | Safety helmet detection algorithm in complex scenarios based on YOLOX | |
EP4350639A1 (en) | Safety rule violation detection in a construction or constructed site |