CN111539264A - Ship flame detection positioning system and detection positioning method - Google Patents
- Publication number
- CN111539264A (application number CN202010254273.4A)
- Authority
- CN
- China
- Prior art keywords
- flame
- positioning
- module
- ship
- image
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/52—Surveillance or monitoring of activities, e.g. for recognising suspicious objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
- G06T7/246—Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/60—Analysis of geometric attributes
- G06T7/66—Analysis of geometric attributes of image moments or centre of gravity
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
- G06T7/73—Determining position or orientation of objects or cameras using feature-based methods
-
- G—PHYSICS
- G08—SIGNALLING
- G08B—SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
- G08B17/00—Fire alarms; Alarms responsive to explosion
- G08B17/10—Actuation by presence of smoke or gases, e.g. automatic alarm devices for analysing flowing fluid materials by the use of optical means
- G08B17/103—Actuation by presence of smoke or gases, e.g. automatic alarm devices for analysing flowing fluid materials by the use of optical means using a light emitting and receiving device
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10016—Video; Image sequence
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30232—Surveillance
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Data Mining & Analysis (AREA)
- Multimedia (AREA)
- General Engineering & Computer Science (AREA)
- Evolutionary Computation (AREA)
- Evolutionary Biology (AREA)
- Bioinformatics & Computational Biology (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Artificial Intelligence (AREA)
- Life Sciences & Earth Sciences (AREA)
- Geometry (AREA)
- Chemical & Material Sciences (AREA)
- Analytical Chemistry (AREA)
- Business, Economics & Management (AREA)
- Emergency Management (AREA)
- Fire-Detection Mechanisms (AREA)
Abstract
A 3D binocular camera collects video and image information and sends it to a flame recognition and tracking module and a flame positioning module; the flame recognition and tracking module recognizes and tracks flames through a machine learning algorithm, and the flame positioning module obtains the position of the flames through a visual positioning algorithm. The invention can effectively detect a target flame against a complex background, display the specific cabin and position of the flame, and output an ideal fire extinguishing point, thereby improving the success rate of fire extinguishing.
Description
Technical Field
The invention relates to a ship flame detection positioning system and a detection positioning method.
Background
A fire is a combustion event that has lost control in time or space. The mastery and use of fire marked the progress of human civilization, but fire also brings safety hazards and disasters. Fire is one of the major hazards threatening human life, and in the compactly designed, space-constrained structure of a ship it is especially hard to escape and seriously threatens property and life on board. According to incomplete statistics, ship fires account for 11 percent of all marine accidents, ranking fourth. Marine fires, while fearsome, are by no means uncontrollable. Early detection of fire is an important guarantee for making full use of fire extinguishing measures, reducing fire losses, and protecting lives and property.
At present, automatic fire alarm systems in China remain limited in intelligence and networking and have not been widely deployed. Improving the functionality of flame recognition systems is therefore a necessary step toward intelligent, networked fire alarms.
Disclosure of Invention
The invention provides a ship flame detection positioning system and a detection positioning method that can effectively detect target flames against a complex background, display the specific cabin and position of the flames, and output an "ideal fire extinguishing point", improving the success rate of fire extinguishing.
In order to achieve the above object, the present invention provides a ship flame detection positioning system, comprising:
the 3D binocular camera is used for acquiring flame video information and image information;
the flame identification and tracking module is connected with the 3D binocular camera and used for identifying and tracking the flame through a machine learning algorithm;
and the flame positioning module is connected with the 3D binocular camera and used for obtaining the position information of the flame through a visual positioning algorithm.
The ship flame detection positioning system further comprises: and the photoelectric smoke sensor is connected with the 3D binocular camera and used for detecting smoke.
The ship flame detection positioning system further comprises: and the display output module is connected with the flame identification and tracking module and the flame positioning module and is used for displaying the flame image.
The invention also provides a ship flame detection and positioning method, which comprises the following steps:
the 3D binocular camera collects video information and image information and sends the video information and the image information to the flame identification and tracking module and the flame positioning module, the flame identification and tracking module identifies and tracks flames through a machine learning algorithm, and the flame positioning module obtains position information of the flames through a visual positioning algorithm.
The display output module displays a flame image.
The machine learning algorithm adopts a YOLO algorithm.
The YOLO algorithm resizes the initial flame image acquired by the 3D camera to 448 × 448 pixels and divides the image into S × S grids.
The photoelectric smoke sensor detects smoke and sends an early warning signal to the display output module.
The fire extinguishing point determining module builds a flame coordinate system with the flame root in the first frame as the origin, converts the flame positions in the remaining frames into this coordinate system, subtracts each frame's flame position from the previous frame's to obtain a reverse vector of the flame trend, and steps a set distance along that reverse vector to obtain the position of the ideal fire extinguishing point.
By combining image recognition with a photoelectric smoke sensor, the invention can detect an abnormality and notify the crew before any open flame appears, providing more preparation time for fire extinguishing. Training the machine learning algorithm on flame recognition and tracking under different conditions allows effective detection of target flames against a complex background. Once a flame is detected, the binocular vision scheme of the 3D camera acquires its three-dimensional information; the system then displays the specific cabin and position of the flame and outputs an "ideal fire extinguishing point". Because the fire intensity can be recognized, the system can prompt the use of the carbon dioxide fire extinguishing system when the fire is too large or uncontrollable.
Drawings
FIG. 1 is a schematic view of a flame detection and positioning system for a ship provided by the invention.
FIG. 2 is a flow chart of a method for detecting and positioning a flame of a ship according to the present invention.
FIG. 3 is a flow chart of the flame identification and tracking module identifying and tracking a flame via the YOLO algorithm.
FIG. 4 is a flow chart of the flame localization module obtaining position information of the flame through a visual localization algorithm.
Fig. 5 is a schematic diagram of a 3D binocular camera visual positioning algorithm.
Detailed Description
The preferred embodiment of the present invention will be described in detail below with reference to fig. 1 to 5.
The invention applies image-based flame detection, a novel technology that overcomes the drawbacks of early temperature-, smoke-, and light-sensing flame detectors (low sensitivity, susceptibility to environmental interference, and limited detection range) and has attracted wide attention from scholars at home and abroad. Traditional flame detection proceeds through suspected-flame-region extraction, hand-crafted flame feature extraction, and classifier judgment; the process is complex and falls short of fully intelligent flame detection. Moreover, detecting small target flames remains difficult because of background noise. The invention therefore provides an intelligent method, and a complete intelligent flame monitoring system, capable of effectively detecting multi-scale flames against a complex background.
As shown in fig. 1, the present invention provides a ship flame detection positioning system, which comprises:
a photoelectric smoke sensor 1 for detecting smoke;
the 3D binocular camera 2 is connected with the smoke sensor 1 and used for acquiring flame video information and image information;
the flame identification and tracking module 3 is connected with the 3D binocular camera 2 and used for identifying and tracking the flame through a machine learning algorithm;
the flame positioning module 4 is connected with the 3D binocular camera 2 and used for obtaining the position information of flame through a visual positioning algorithm;
the display output module 5 is connected with the flame identification and tracking module 3 and the flame positioning module 4 and is used for displaying flame images;
and the fire extinguishing point determining module 6 is connected with the flame identification and tracking module 3 and the flame positioning module 4 and is used for calculating an ideal fire extinguishing point.
When smoke appears without open fire, the photoelectric smoke sensor 1 performs detection; if smoke is detected, abnormality information is passed to the 3D binocular camera 2, which automatically increases its scanning frequency and transmits image information to the display output module 5 for an orange warning, giving the crew more time to prepare before extinguishing. Once an open flame appears, the flame image information collected by the 3D binocular camera 2 is transmitted to the flame identification and tracking module 3 and the flame positioning module 4, which identify and locate the flame through the combination of the machine learning algorithm and the visual positioning algorithm. The flame's position and fire-intensity information are sent to the display output module 5: the position information locates the specific cabin on fire, and the intensity information guides the choice of extinguishing method. The position of the ideal fire extinguishing point is also calculated, and if an uncontrollable fire occurs the system can prompt the use of the onboard carbon dioxide fire extinguishing system.
As shown in fig. 2, the present invention further provides a method for detecting and positioning flame of a ship, comprising the following steps:
s1, the 3D binocular camera collects video information and image information and sends the video information and the image information to the flame identification and tracking module and the flame positioning module;
step S2, the photoelectric smoke sensor judges whether smoke is detected; if so, it sends a signal to the 3D binocular camera and the display output module, and the display output module issues a smoke warning. Under spontaneous combustion or incomplete combustion, smoke appears earlier than open fire, so the smoke warning serves as an early alert, drawing the crew's attention and buying more time for extinguishing. If open fire appears directly, the 3D binocular camera detects it directly;
step S3, the flame identification and tracking module identifies and tracks the flame through a machine learning algorithm, and the flame positioning module obtains the position information of the flame through a visual positioning algorithm;
step S4, displaying the flame image by the display output module;
and step S5, the fire extinguishing point determining module calculates an ideal fire extinguishing point according to the flame identification tracking information and the flame position information.
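The steps above can be sketched as a small decision routine. This is a minimal sketch: `Frame`, `monitoring_step`, and the doubling of the scan rate on a smoke warning are hypothetical illustrations, not interfaces defined in the patent.

```python
# Sketch of the S1-S5 monitoring loop: smoke alone raises an orange warning
# and speeds up camera scanning; a detected flame raises a red alert.
from dataclasses import dataclass

@dataclass
class Frame:
    has_smoke: bool
    flame_boxes: list  # (x, y, w, h) boxes from the recognition module

def monitoring_step(frame: Frame, scan_hz: float) -> tuple:
    """Return (warning_level, new_scan_rate) for one acquisition cycle."""
    if frame.flame_boxes:                 # S3: open fire confirmed
        return "red: flame detected", scan_hz
    if frame.has_smoke:                   # S2: smoke before open fire
        return "orange: smoke warning", scan_hz * 2  # camera scans faster
    return "normal", scan_hz

level, hz = monitoring_step(Frame(has_smoke=True, flame_boxes=[]), scan_hz=5.0)
```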
Further, in an embodiment of the present invention, the machine learning algorithm employs a YOLO algorithm. As shown in fig. 3, the method for identifying and tracking a flame by the flame identification and tracking module through the YOLO algorithm includes the following steps:
s3-1.1, inputting an initial flame image collected by a 3D camera;
s3-1.2, constructing a deep network structure to extract flame characteristics;
Establish a YOLO model and input initial image data of existing flame types (this data is loaded into the YOLO model in advance; YOLO-based flame recognition relies on a large number of manually annotated flame pictures), with the flame positions marked in the data. Resize the initial flame images to 448 × 448 pixels and divide each image into S × S grids (S equals the scale of the flame feature map produced by the convolutional neural network; the grids are used to predict the position of the flame bounding box to be detected) to obtain flame-type images, then match them against the initial image data to obtain calibrated flame-type images as feature maps;
step S3-1.3, pyramid feature fusion;
Feature maps of the flame at different scales are added step by step and fused, so the convolutional neural network can learn features at several scales simultaneously; the YOLO ("You Only Look Once") model maps the input image to several different output tensors, and logistic regression is used to detect the flame at multiple scales;
step S3-1.4, multi-scale prediction;
The video sequence of flames collected by the 3D camera is taken as the input to the YOLO model and multi-scale parameters are predicted in the video. A K-means method generates the bounding boxes of the flames to be detected in the video sequence images, iterating until the cluster centers no longer change. Then, using the per-frame flame position information together with spatio-temporal constraints and geometric knowledge, the dynamic behavior of the flame to be detected is analyzed, the spatial trend of the flame is tracked dynamically, and the flame identification and tracking results are output;
and S3-1.5, outputting the flame detection and tracking result to a display output module.
As shown in fig. 4, the method for obtaining the position information of the flame by the flame location module through the visual location algorithm includes the following steps:
step S3-2.1, YOLO detection and comparison of the left and right views;
Using the two parallel (or vertically arranged) cameras of the 3D binocular camera, calibrate the intrinsic parameters, the focal length f and the imaging principal points x1 and x2 (or y1 and y2), then photograph the same flame and obtain the corresponding coordinates xL and xR (or yL and yR) of the flame on the imaging planes of the left and right (or upper and lower) cameras;
step S3-2.2, matching left and right view targets;
Given the known parameters, namely the focal length f of the cameras, the center distance B of the left and right (or upper and lower) cameras, and the x coordinates xL and xR (or y coordinates yL and yR) of the feature point on the two imaging planes, the distance Z from the optical centers OL and OR of the two cameras to the feature point follows from similar triangles as Z = f·B / (xL − xR), in millimeters (mm), as shown in FIG. 5;
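The similar-triangles relation just stated can be written directly in code; `depth_mm` is an illustrative name and the numbers in the example are made up.

```python
# Depth from binocular disparity: for two parallel cameras with focal length
# f (in pixels) and baseline B (in mm), a feature seen at horizontal pixel
# coordinates xL and xR has depth Z = f * B / (xL - xR).
def depth_mm(f_px: float, baseline_mm: float, xL: float, xR: float) -> float:
    disparity = xL - xR          # pixels; larger disparity = closer object
    if disparity <= 0:
        raise ValueError("feature must appear shifted left in the right view")
    return f_px * baseline_mm / disparity

# e.g. f = 1000 px, B = 120 mm, disparity 20 px -> Z = 6000 mm (6 m)
z = depth_mm(1000, 120, xL=520, xR=500)
```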
s3-2.3, acquiring a flame three-dimensional coordinate;
Suppose that the image points p1 and p2 of an arbitrary spatial point P on the two cameras C1 and C2 have been detected in the two images respectively; that is, p1 and p2 are known to be corresponding points of the same spatial point P;
Assuming the C1 and C2 cameras are calibrated with projection matrices M1 and M2 respectively, the projections satisfy:

Zc1 · (u1, v1, 1)^T = M1 · (X, Y, Z, 1)^T
Zc2 · (u2, v2, 1)^T = M2 · (X, Y, Z, 1)^T

where (u1, v1, 1) and (u2, v2, 1) are the homogeneous image coordinates of the points p1 and p2 in their respective images, (X, Y, Z, 1) is the homogeneous coordinate of the point P in the world coordinate system, and m^k_ij denotes the element in row i, column j of Mk. Eliminating Zc1 and Zc2 yields four linear equations in X, Y, Z:

(u1·m^1_31 − m^1_11)X + (u1·m^1_32 − m^1_12)Y + (u1·m^1_33 − m^1_13)Z = m^1_14 − u1·m^1_34
(v1·m^1_31 − m^1_21)X + (v1·m^1_32 − m^1_22)Y + (v1·m^1_33 − m^1_23)Z = m^1_24 − v1·m^1_34
(u2·m^2_31 − m^2_11)X + (u2·m^2_32 − m^2_12)Y + (u2·m^2_33 − m^2_13)Z = m^2_14 − u2·m^2_34
(v2·m^2_31 − m^2_21)X + (v2·m^2_32 − m^2_22)Y + (v2·m^2_33 − m^2_23)Z = m^2_24 − v2·m^2_34

Given the pixel coordinates (u1, v1) and (u2, v2), the spatial coordinates (X, Y, Z) of the object point can then be obtained from this (overdetermined) system of equations.
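The linear system above can be solved in a few lines. This sketch uses the equivalent homogeneous form A·(X, Y, Z, 1)^T = 0 solved by SVD, with synthetic projection matrices as assumed inputs.

```python
# Two-view triangulation: stack the four linear equations obtained by
# eliminating Zc1, Zc2 from Zc * [u, v, 1]^T = M * [X, Y, Z, 1]^T and solve.
import numpy as np

def triangulate(M1, M2, p1, p2):
    """M1, M2: 3x4 projection matrices; p1, p2: (u, v) pixels. Returns (X, Y, Z)."""
    rows = []
    for M, (u, v) in ((M1, p1), (M2, p2)):
        rows.append(u * M[2] - M[0])   # (u*m3 - m1) . [X Y Z 1]^T = 0
        rows.append(v * M[2] - M[1])   # (v*m3 - m2) . [X Y Z 1]^T = 0
    A = np.array(rows)                 # 4x4, acting on homogeneous [X Y Z 1]
    # the solution is the right singular vector of the smallest singular value
    _, _, Vt = np.linalg.svd(A)
    h = Vt[-1]
    return h[:3] / h[3]

# two synthetic cameras 120 mm apart along X, focal length 1000 px
K = np.array([[1000, 0, 320], [0, 1000, 240], [0, 0, 1.0]])
M1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
M2 = K @ np.hstack([np.eye(3), np.array([[-120.0], [0], [0]])])
P = np.array([100.0, 50.0, 2000.0, 1.0])        # world point, mm
p1 = M1 @ P; p1 = p1[:2] / p1[2]                # its pixel in camera 1
p2 = M2 @ P; p2 = p2[:2] / p2[2]                # its pixel in camera 2
X = triangulate(M1, M2, p1, p2)                 # recovers (100, 50, 2000)
```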
In one embodiment of the present invention, the method for calculating the ideal fire-extinguishing point by the fire-extinguishing point determining module comprises the following steps:
s5.1, receiving flame identification tracking information and flame position information by a fire extinguishing point determining module;
s5.2, converting the origin of the coordinate system from the camera to the flame root in the first frame of picture, namely the center of the bottom of the flame frame, and taking the origin O of the flame coordinate system (X, Y and Z);
s5.3, converting the flame positions in other frame pictures into a flame coordinate system, subtracting the flame position in the second frame picture from the flame position in the first frame picture to obtain a reverse vector of the first flame trend, subtracting the flame position in the third frame picture from the flame position in the second frame picture to obtain a reverse vector of the second flame trend, and repeating the steps to obtain a plurality of reverse vectors changing along with the flame;
s5.4, in a flame coordinate system (X, Y, Z), enabling the Z value to be constant to be zero, combining the reverse vector of the flame trend obtained in the step S5.3, determining the direction of the reverse vector of the flame trend in an XOY plane, determining the distance of the reverse vector of the flame trend, and obtaining the position of an ideal fire-extinguishing point; the distance is determined according to the type of the fire extinguisher, such as 1m or 2 m;
and S5.5, outputting an ideal fire extinguishing point.
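Steps S5.2 to S5.4 can be sketched as follows; the track values and the `distance_m` default are illustrative assumptions, not values from the patent.

```python
# Put the origin at the flame root in frame 1, average the "reverse vectors"
# (previous-frame position minus next-frame position), flatten to the XOY
# plane (Z = 0), and step out a distance set by the extinguisher type.
import numpy as np

def ideal_point(flame_positions_cam, distance_m=2.0):
    """flame_positions_cam: Nx3 camera-frame flame positions, in frame order."""
    p = np.asarray(flame_positions_cam, dtype=float)
    root = p[0]
    rel = p - root                          # flame coordinate system, origin O
    reverse = rel[:-1] - rel[1:]            # previous minus next, per frame pair
    direction = reverse.mean(axis=0)
    direction[2] = 0.0                      # constrain to the XOY plane
    n = np.linalg.norm(direction)
    if n == 0:
        raise ValueError("flame shows no horizontal trend")
    return direction / n * distance_m       # ideal point in flame coordinates

# flame drifting toward +X and upward across four frames
track = [[0, 0, 0], [0.1, 0, 0.05], [0.2, 0, 0.1], [0.3, 0, 0.15]]
pt = ideal_point(track, distance_m=2.0)     # 2 m along -X from the flame root
```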
In one embodiment of the invention, the photoelectric smoke sensor performs detection; if an abnormality occurs, the sensor transmits the information to the display output module, which issues an orange smoke warning, giving the crew more time to prepare before extinguishing. After the 3D camera recognizes a flame, the orange warning on the display screen switches to a panel showing the flame image information, and the specific cabin is located immediately. The manager then chooses manual extinguishing or starts the carbon dioxide fire extinguishing system, according to the properties of the combustible materials in the cabin, the flame intensity shown on the display, and other conditions. If manual extinguishing is chosen, an ideal fire extinguishing point is given from the flame detection and tracking results: in the half-space formed by the XOY plane and the positive Z half-axis, the direction of the ideal fire extinguishing point is the direction, mirrored about the point O, of the component of the flame trend vector in the XOY plane, and its distance varies with the fire extinguisher type. The fire extinguishing point determining module uses a spot lamp installed beside the 3D binocular camera; when the camera judges that there is flame in the monitored area, the spot lamp is automatically activated, and once the module has calculated the ideal fire extinguishing point, the lamp projects the clearly visible words "ideal fire extinguishing point" onto the floor; extinguishing from that point is most effective.
If the manager chooses the carbon dioxide fire extinguishing system, a carbon dioxide release alarm is sounded so that personnel evacuate the cabin as quickly as possible, the doors and windows are closed, and personnel are arranged to release the fixed carbon dioxide. By combining image recognition with the photoelectric smoke sensor, the invention can detect an abnormality and notify the crew before any open flame appears, providing more preparation time for fire extinguishing. Training the machine learning algorithm on flame recognition and tracking under different conditions allows effective detection of target flames against a complex background. Once a flame is detected, the binocular vision scheme of the 3D camera acquires its three-dimensional information; the system then displays the specific cabin and position of the flame and outputs an "ideal fire extinguishing point". Because the fire intensity can be recognized, the system can prompt the use of the carbon dioxide fire extinguishing system when the fire is too large or uncontrollable.
While the present invention has been described in detail with reference to the preferred embodiments, it should be understood that the above description should not be taken as limiting the invention. Various modifications and alterations to this invention will become apparent to those skilled in the art upon reading the foregoing description. Accordingly, the scope of the invention should be determined from the following claims.
Claims (10)
1. A marine vessel flame detection and positioning system, comprising:
the 3D binocular camera is used for acquiring flame video information and image information;
the flame identification and tracking module is connected with the 3D binocular camera and used for identifying and tracking the flame through a machine learning algorithm;
and the flame positioning module is connected with the 3D binocular camera and used for obtaining the position information of the flame through a visual positioning algorithm.
2. The marine vessel flame detection and positioning system of claim 1, further comprising: and the photoelectric smoke sensor is connected with the 3D binocular camera and used for detecting smoke.
3. The marine vessel flame detection and positioning system of claim 1, further comprising: and the display output module is connected with the flame identification and tracking module and the flame positioning module and is used for displaying the flame image.
4. The marine vessel flame detection and positioning system of claim 1, further comprising: and the fire extinguishing point determining module is connected with the flame identifying and tracking module and the flame positioning module and is used for calculating an ideal fire extinguishing point.
5. A ship flame detection positioning method of a ship flame detection positioning system according to any one of claims 1-4, characterized by comprising the following steps:
the 3D binocular camera collects video information and image information and sends the video information and the image information to the flame identification and tracking module and the flame positioning module, the flame identification and tracking module identifies and tracks flames through a machine learning algorithm, and the flame positioning module obtains position information of the flames through a visual positioning algorithm.
6. The ship flame detection positioning method according to claim 5, wherein the display output module displays a flame image.
7. The ship flame detection positioning method according to claim 5, wherein the machine learning algorithm adopts a YOLO algorithm.
8. The ship flame detection positioning method of claim 7, wherein the YOLO algorithm resizes the initial flame image collected by the 3D camera to 448 × 448 pixels and divides the image into S × S grids.
9. The method for detecting and positioning flame of ship according to claim 7, characterized in that the photoelectric smoke sensor detects smoke and sends early warning signal to the display output module.
10. The ship flame detecting and positioning method according to claim 7, wherein the fire extinguishing point determining module forms a flame coordinate system with the root of the flame in the first frame of picture as an origin, converts the flame positions in the other frames of pictures into the flame coordinate system, subtracts the flame position in the next frame of picture from the flame position in the previous frame of picture to obtain a reverse vector of the flame trend, and determines the distance in the direction of the reverse vector to obtain the position of the ideal fire extinguishing point.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010254273.4A CN111539264A (en) | 2020-04-02 | 2020-04-02 | Ship flame detection positioning system and detection positioning method |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010254273.4A CN111539264A (en) | 2020-04-02 | 2020-04-02 | Ship flame detection positioning system and detection positioning method |
Publications (1)
Publication Number | Publication Date |
---|---|
CN111539264A true CN111539264A (en) | 2020-08-14 |
Family
ID=71976909
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010254273.4A Pending CN111539264A (en) | 2020-04-02 | 2020-04-02 | Ship flame detection positioning system and detection positioning method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111539264A (en) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112466083A (en) * | 2020-10-15 | 2021-03-09 | 中船重工远舟(北京)科技有限公司 | Marine fire monitoring and alarming method and system |
CN113289290A (en) * | 2021-05-11 | 2021-08-24 | 国电南瑞科技股份有限公司 | Fire-fighting robot flame automatic aiming method, device and system |
CN115691034A (en) * | 2022-11-01 | 2023-02-03 | 广东职业技术学院 | Intelligent household abnormal condition warning method, system and storage medium |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN204348014U (en) * | 2013-09-03 | 2015-05-20 | 深圳市中航装饰设计工程有限公司 | Market, hotel's visual intelligent fire alarm system |
CN207149028U (en) * | 2017-08-04 | 2018-03-27 | 一川雨歌 | A kind of fire monitoring device |
CN109903507A (en) * | 2019-03-04 | 2019-06-18 | 上海海事大学 | A kind of fire disaster intelligent monitor system and method based on deep learning |
CN110051954A (en) * | 2019-04-19 | 2019-07-26 | 辽宁科技大学 | A kind of split type Super High full-automatic fire-extinguishing machine people and its control method |
Worldwide applications
- 2020-04-02: CN CN202010254273.4A patent/CN111539264A/en, status: Pending
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109903507A (en) | A kind of fire disaster intelligent monitor system and method based on deep learning | |
CN111539264A (en) | Ship flame detection positioning system and detection positioning method | |
CN104751593B (en) | Method and system for fire detection, warning, positioning and extinguishing | |
CN111091072A (en) | YOLOv 3-based flame and dense smoke detection method | |
CN101334924B (en) | Fire hazard probe system and its fire hazard detection method | |
CN106128053A (en) | A kind of wisdom gold eyeball identification personnel stay hover alarm method and device | |
CN110837784A (en) | Examination room peeping cheating detection system based on human head characteristics | |
CN106846375A (en) | A kind of flame detecting method for being applied to autonomous firefighting robot | |
CN106295551A (en) | A kind of personal security cap wear condition real-time detection method based on video analysis | |
CN108389359B (en) | Deep learning-based urban fire alarm method | |
CN103413395A (en) | Intelligent smoke detecting and early warning method and device | |
CN106210634A (en) | A kind of wisdom gold eyeball identification personnel fall down to the ground alarm method and device | |
CN107909615A (en) | A kind of fire monitor localization method based on binocular vision | |
CN103400463B (en) | A kind of forest fires localization method based on two dimensional image and device | |
CN105879284A (en) | Fire water monitor control system comprising multiple sensors and control method | |
CN107437318A (en) | A kind of visible ray Intelligent Recognition algorithm | |
CN113299035A (en) | Fire identification method and system based on artificial intelligence and binocular vision | |
CN112184773A (en) | Helmet wearing detection method and system based on deep learning | |
CN110136172A (en) | The detection method that safeguard is worn before a kind of miner goes into the well | |
CN109741565B (en) | Coal mine fire disaster recognition system and method | |
CN113449675B (en) | Method for detecting crossing of coal mine personnel | |
CN114202646A (en) | Infrared image smoking detection method and system based on deep learning | |
CN112906674A (en) | Mine fire identification and fire source positioning method based on binocular vision | |
WO2016103173A1 (en) | Method and device for detecting an overhead cable from an aerial vessel | |
CN209657454U (en) | Coal-mine fire identifying system |
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |