CN109903507A - Intelligent fire monitoring system and method based on deep learning - Google Patents

- Publication number: CN109903507A (application number CN201910160596.4A)
- Authority: CN (China)
- Prior art keywords: flame, fire, deep learning, point, location information
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Abstract
The present invention discloses an intelligent fire monitoring system and method based on deep learning, comprising: an image acquisition module, which monitors scenes in real time through a binocular camera; an image analysis module, which detects flames in the surveillance video images in real time using a deep learning model; an alarm module, which reminds monitoring personnel to observe the fire situation in the surveillance video images; a flame localization module, which receives the flame pixel position information detected by the deep learning model and maps it into space using a binocular stereo-vision localization algorithm to obtain the flame's spatial position; and a fixed-point fire-extinguishing module, which, when the alarm receives no response and continues to sound, receives the flame's spatial position after a preset interval. The invention combines deep learning with a camera localization algorithm for flame recognition, proposes a new convolution kernel module with stronger feature-extraction ability, and can, when a flame alarm goes unanswered by monitoring personnel, automatically direct extinguishing at the flame region.
Description
Technical field
The present invention relates to the fields of image recognition and computer vision, and in particular to an intelligent fire monitoring system and method based on deep learning.
Background art

Fire is one of the most common disasters. If a fire is not controlled promptly at an early stage, it can spread over a large area and cause irreparable loss of life and property. This is especially true in smoking-prohibited places such as bus stations, gas stations, and logistics warehouses, where a fire started by smoking or other causes can have very serious consequences. Monitoring smoking behavior and detecting fires early is therefore essential.

Flame is the physical phenomenon of the early stage of a fire. Early flame detection technologies are mainly of three kinds: heat-sensing, smoke-sensing, and light-sensing. Although widely deployed, they still suffer from defects such as low sensitivity, susceptibility to environmental interference, and limited detection range, and are difficult to adapt to flame detection in large spaces or complex environments.

Image-based flame detection is a newer detection technique that remedies the defects of early flame detection, effectively improving the accuracy of fire detection and reducing missed and false alarms. It mainly relies on pattern-recognition methods, which require hand-engineered flame feature extractors; but because flames are variable and complex, the extracted features are often not representative, making it difficult to detect small-target flames and flames of multiple types.

Chinese patent application CN201510319446.5 proposes a "fire monitoring system based on the BeiDou system," intended to protect the ecological environment by effectively monitoring forest fire hazards. However, the image processing unit of that monitoring system still relies on manually extracted morphological and color features of flames as the basis for classification, and is ill-suited to effectively monitoring small fires in indoor spaces and smoking-prohibited places.

In summary, it is highly necessary to overcome the shortcomings of conventional image-based fire detection and to propose a complete intelligent fire monitoring system based on deep learning.
Summary of the invention
The purpose of the present invention is to provide an intelligent fire monitoring system and method based on deep learning, which combines deep learning with a camera localization algorithm for flame recognition and further provides a new convolution kernel module with stronger feature-extraction ability. A complete intelligent fire monitoring system for smoking-prohibited places is also designed, which can detect flames and raise an alarm, and automatically carry out fixed-point extinguishing of the flame region when monitoring personnel do not respond.
In order to achieve the above object, the invention is realized by the following technical scheme:
An intelligent fire monitoring system based on deep learning includes:

an image acquisition module, which monitors different places in real time through a binocular camera and acquires surveillance video image data;

an image analysis module, which detects flames in the surveillance video images in real time using a trained deep learning model, judges whether a flame appears, and if so determines the flame region, marks the flame region with a bounding box, and takes the bounding-box corner points as matching points to obtain flame pixel position information;

a flame localization module, which receives the flame pixel position information detected by the deep learning model and maps it into space using a binocular localization mapping method to obtain the flame's spatial position, finally locating the flame.
Preferably, the binocular localization mapping method further includes:

for any flame point P in space, the projection of P on the image plane of a first camera C1 is a point C_L, and the projection of P on the image plane of a second camera C2 is a point C_R; the line through C1 and C_L is l1, and the line through C2 and C_R is l2; the flame point P is then the intersection of l1 and l2, so the spatial position of the flame point P is uniquely determined.
Preferably, when a flame is detected, the alarm in the monitoring room is triggered to remind monitoring personnel to observe the fire situation in the surveillance video images.
Preferably, the intelligent fire monitoring system further includes a fixed-point fire-extinguishing module, which, when the alarm receives no response and continues to sound, receives the flame's spatial position after a preset interval and controls the fire-fighting system through a controller to carry out fixed-point extinguishing.
Preferably, the deep learning model is a Mask R-CNN model.
Preferably, the deep learning model includes:

a feature extraction layer, which extracts feature information from the target image and generates multi-scale feature maps;

a region proposal network, which generates candidate regions of the target image;

a fully connected layer, used for target classification and bounding-box regression;

an RoIAlign layer, which maps the obtained candidate regions onto the corresponding regions of the feature maps, downsamples those regions into fixed-size feature vectors, and passes the feature vectors to the fully connected layer, which organizes the information to finally perform target classification and bounding-box regression.
Preferably, the feature extraction layer uses a feature pyramid network built from a ResNet convolutional neural network, and the ResNet convolutional neural network combines parallel branches with long-skip connections.
Preferably, the deep learning model adds a mask branch after the RoIAlign layer to generate a segmentation mask of the target image.
The present invention also provides an intelligent fire monitoring method based on the intelligent fire monitoring system described above, comprising the following steps:

monitoring different places in real time through a binocular camera, acquiring surveillance video image data, and passing the acquired surveillance video image data to a trained deep learning model on a server;

detecting flames in the surveillance video image data in real time using the trained deep learning model, judging whether a flame appears, and if so determining the flame region, marking the flame region with a bounding box, and taking the bounding-box corner points as matching points to obtain flame pixel position information;

mapping the flame pixel position information into space using the binocular localization mapping method to obtain the flame's spatial position;

transmitting the flame's spatial position to a controller, which controls the fire-fighting system to carry out fixed-point extinguishing.

Preferably, the intelligent fire monitoring method further includes: when a flame is detected, triggering the monitoring-room alarm to remind monitoring personnel to observe and handle the fire situation in the surveillance video; and, when the alarm receives no response and continues to sound, receiving the flame's spatial position after a preset interval and carrying out fixed-point extinguishing.
Compared with the prior art, the invention has the following benefits: (1) it proposes a flame detection method based on the deep learning model Mask R-CNN, remedying the situation in which the features extracted by traditional pattern-recognition methods are incomplete and unrepresentative, so that changeable flames are hard to recognize; it also innovatively proposes a new convolution module combining the characteristics of the existing ResNet and Inception designs: compared with an Inception module alone, it prevents the network from overfitting when the number of layers is very deep, and compared with a ResNet module alone, its parallel branches extract features with convolution kernels of different sizes and then fuse the channels, giving stronger feature-extraction ability; (2) it proposes a matching algorithm based on deep learning that combines the deep learning model with a binocular stereo-vision algorithm: considering that conventional stereo matching struggles to balance matching accuracy and matching speed, it performs stereo matching using the coordinates of the detection boxes output by the deep learning model, preserving matching accuracy while improving matching speed, whereas conventional mapping methods depend heavily on finding matching points, which is difficult, complex, and time-consuming against cluttered backgrounds; (3) it designs a complete intelligent fire monitoring system that can not only detect flames and raise an alarm, but also, when monitoring personnel do not respond, automatically carry out fixed-point extinguishing of the flame region.
Brief description of the drawings
Fig. 1 is a schematic diagram of the overall design of the intelligent fire monitoring system based on deep learning of the present invention;
Fig. 2 is a schematic diagram of the structure of the deep learning model of the present invention;
Fig. 3 is a schematic diagram of the improved ResNet network structure of the present invention;
Fig. 4 is a schematic diagram of the loss curve of the present invention;
Fig. 5 is a schematic diagram of the binocular camera localization principle of the present invention;
Fig. 6 is a schematic image containing a flame captured by a camera of the present invention;
Fig. 7 is a schematic image in which the deep learning model of the present invention detects a flame;
Fig. 8a is a schematic diagram of the pixel coordinates of the fire detection box from the viewpoint of the first camera of the present invention;
Fig. 8b is a schematic diagram of the pixel coordinates of the fire detection box from the viewpoint of the second camera of the present invention.
Detailed description of the embodiments
To make the present invention easier to understand, it is further described below with reference to the drawings and specific embodiments.
As shown in Fig. 1, the invention discloses an intelligent fire monitoring system based on deep learning, mainly comprising five modules: an image acquisition module, an image analysis module, an alarm module, a flame localization module, and a fixed-point fire-extinguishing module.

The image acquisition module monitors different places in real time through a binocular camera and acquires surveillance video image data. The image analysis module mainly uses a trained Mask R-CNN deep learning model to detect flames in the surveillance video image data in real time and obtain flame pixel position information. In the alarm module, once a flame is detected, the monitoring-room alarm is triggered to remind monitoring personnel to observe and handle the fire situation in the surveillance video. The flame localization module receives the flame pixel position information detected by the deep learning model and maps it into space using the binocular localization mapping algorithm to obtain the flame's spatial position. When the alarm module receives no response and continues to sound, the fixed-point fire-extinguishing module, after a preset interval, receives the flame's spatial position from the flame localization module and transmits it to a controller, which controls the fire-fighting system (such as a fire monitor cannon) to carry out fixed-point extinguishing.
The purpose of the deep learning model of the invention is therefore to judge whether a flame appears in the video image and to determine the flame's position in the picture. The cameras mainly monitor smoking-prohibited places in real time and acquire video image data, which is passed to the trained deep learning model on the server; the flame's position in the image obtained by the deep learning model is then used to find the position of the flame in space.
Fig. 2 is a schematic diagram of the structure of the deep learning model of the invention. A deep learning object detection model such as Mask R-CNN mainly comprises a feature extraction layer, a region proposal network (RPN), an RoIAlign layer, and a fully connected layer.

The feature extraction layer uses a feature pyramid network (FPN) to extract feature information from the target image and generate multi-scale feature maps. The region proposal network generates candidate regions of the target image. The RoIAlign layer maps the obtained candidate regions onto the corresponding regions of the feature maps and downsamples those regions into fixed-size feature vectors, which are then passed to the fully connected layer to organize the information and finally perform target classification and bounding-box regression. Meanwhile, a mask branch can also be added to the deep learning model after the RoIAlign layer to generate a segmentation mask of the target image.
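The RoIAlign step can be sketched as follows: a candidate box given in image pixels is mapped onto the feature map and bilinearly sampled into a fixed-size grid, avoiding the coordinate rounding of RoIPool. This is a minimal NumPy illustration of the idea only, not the model's actual implementation; the feature-map size, stride, and box coordinates are invented for the example.

```python
import numpy as np

def bilinear(fmap, y, x):
    """Bilinearly sample one channel vector from feature map fmap of shape (H, W, C)."""
    h, w, _ = fmap.shape
    y0, x0 = int(np.floor(y)), int(np.floor(x))
    y1, x1 = min(y0 + 1, h - 1), min(x0 + 1, w - 1)
    y0, x0 = max(y0, 0), max(x0, 0)
    dy, dx = y - y0, x - x0
    return ((1 - dy) * (1 - dx) * fmap[y0, x0]
            + (1 - dy) * dx * fmap[y0, x1]
            + dy * (1 - dx) * fmap[y1, x0]
            + dy * dx * fmap[y1, x1])

def roi_align(fmap, box, out_size=7, stride=16):
    """Map a box (x1, y1, x2, y2) in image pixels onto fmap and sample an
    out_size x out_size grid at non-integer positions (the key difference from RoIPool)."""
    x1, y1, x2, y2 = [c / stride for c in box]   # image coords -> feature-map coords
    ys = np.linspace(y1, y2, out_size)
    xs = np.linspace(x1, x2, out_size)
    return np.stack([[bilinear(fmap, y, x) for x in xs] for y in ys])

fmap = np.random.rand(50, 50, 256)            # hypothetical FPN feature map
feat = roi_align(fmap, (120, 80, 360, 240))   # hypothetical candidate box
print(feat.shape)                             # fixed-size (7, 7, 256) output
```

Whatever the size of the candidate box, the output has a fixed size, which is what allows a fully connected layer to follow.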
The feature pyramid network is the core component of the Mask R-CNN model and can be built from a general convolutional neural network (CNN) such as ResNet. The learning mode of a convolutional neural network is based on hierarchical representation; it has highly nonlinear expressive power and can adaptively construct feature extractors from the data.

Illustratively, the feature pyramid network of the invention can be built from an improved ResNet network, which combines parallel branches with long-skip connections and therefore has both strong feature-extraction ability and the advantage of preventing the network from overfitting when the number of layers is very deep.
Fig. 3 is a schematic diagram of the improved ResNet structure of the invention, as follows:

(1) First, the improved ResNet network of the invention uses parallel branches to enhance the feature-extraction ability of the network: one stream is split into three branches that are finally merged. The benefit is that each branch can carry a convolution kernel of a different shape and extract different features, which are finally fused along the channel dimension, making feature extraction stronger. The role of the 1×1 convolution kernels is dimensionality reduction (controlling the number of output channels); the activation function increases the model's nonlinear expressive power; 3×3, 5×5, etc. are convolution kernels of different sizes that extract different features, which are finally spliced and fused.

(2) At the same time, the improved ResNet network structure of the invention uses long-skip connections, retaining the advantage of the original ResNet, where F(x) − x is the residual mapping and x is the identity mapping. As model depth increases (a deeper model usually has more parameters and stronger expressive or feature-extraction ability), the vanishing-gradient problem can appear and model performance can actually decline; the invention effectively resolves this through the long-skip connections, which also make the model converge faster.
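The block described in (1) and (2) can be sketched as follows: parallel branches, channel fusion, then a long-skip residual add. For brevity this NumPy illustration uses only 1×1 kernels (a per-pixel linear map over channels), whereas the patent's block also uses 3×3 and 5×5 kernels; the branch widths and input size are invented for the example.

```python
import numpy as np

def conv1x1(x, w):
    """1x1 convolution as a per-pixel linear map over channels, followed by ReLU.
    x: (H, W, Cin), w: (Cin, Cout)."""
    return np.maximum(x @ w, 0.0)

def parallel_residual_block(x, weights):
    """Three parallel branches -> channel concatenation -> long-skip (residual) add."""
    branches = [conv1x1(x, w) for w in weights]   # different branches, different widths
    fused = np.concatenate(branches, axis=-1)     # channel fusion
    assert fused.shape == x.shape                 # widths chosen so F(x) matches x
    return fused + x                              # identity mapping x plus residual F(x)

rng = np.random.default_rng(0)
cin = 64
x = rng.standard_normal((32, 32, cin))
# branch output widths 16 + 16 + 32 = 64, matching the input channels for the skip add
weights = [rng.standard_normal((cin, c)) * 0.01 for c in (16, 16, 32)]
y = parallel_residual_block(x, weights)
print(y.shape)   # (32, 32, 64)
```

Because the output is x plus a learned residual, gradients can flow through the identity path even when the residual branches are deep, which is the convergence advantage described above.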
As an embodiment of the present invention, the deep learning model is trained as follows:

(a) acquire and process the deep learning training data and test data;
(b) build the deep learning model;
(c) annotate the data (for example with the labelme tool);
(d) based on transfer learning, initialize the parameters of the deep learning model with parameters pre-trained on the large-scale ImageNet dataset, pass the annotated training data and test data to the deep learning model, and complete the training and evaluation of the deep learning model;
(e) store the trained deep learning model on the server.
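The role of the loss during training in steps (a)-(e) can be illustrated with a toy gradient-descent loop. This is a generic stand-in, not the patent's Mask R-CNN training: a made-up two-class dataset replaces the flame images, a logistic-regression model replaces the network, and the ImageNet initialization of step (d) is replaced by zeros.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy stand-in for the flame / non-flame classification task: 2-D points, two classes.
X = rng.standard_normal((200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

w = np.zeros(2)   # in step (d) these would instead start from pre-trained values
b = 0.0
losses = []
for epoch in range(100):
    z = X @ w + b
    p = 1.0 / (1.0 + np.exp(-z))                    # sigmoid prediction
    loss = -np.mean(y * np.log(p + 1e-9) + (1 - y) * np.log(1 - p + 1e-9))
    losses.append(loss)                              # tracked per epoch, as in Fig. 4
    grad = p - y                                     # d(loss)/dz for cross-entropy
    w -= 0.1 * (X.T @ grad) / len(y)                 # gradient-descent update
    b -= 0.1 * grad.mean()

print(round(losses[0], 3), round(losses[-1], 3))     # loss shrinks as training converges
```

The recorded losses shrink with the epoch count, which is exactly the convergence behavior the loss curve of Fig. 4 is read for.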
Fig. 4 shows the change of the loss value during training of the deep learning model of the invention. The abscissa (epoch) is the number of training iterations, and the ordinate (loss value) is the difference between the predicted value and the expected value (the predicted value is the output of the model, the expected value is the ground-truth value; "validation loss" is the loss on the validation set and "train loss" the loss on the training set). The loss value becomes smaller and smaller, showing that the model gradually converges and training is complete. The ground-truth values are determined when the data are manually annotated; they reflect the position and class of the targets in the images.
Binocular vision is similar to obtaining three-dimensional information about the environment with two eyes. Since binocular localization must find matching points between photographs of the target taken from two different angles, the process is very complicated and is also subject to interference from environmental factors.
As noted above, the role of the deep learning model of the invention is to detect and judge whether a flame appears in the camera image and where the flame region is, as shown in Fig. 6. If there is a flame, the deep learning model marks the flame region with a rectangular box (i.e., a bounding box), as shown in Fig. 7. The invention thus novelly proposes to use the bounding-box points detected by the deep learning model directly as matching points for the subsequent localization mapping, where the coordinates of the bounding box are the coordinates of its four corner points.
As shown in Figs. 8a and 8b, with binocular camera localization the pixel coordinates of the fire detection box can be obtained from two different viewpoints; the pixel coordinates of the detection boxes are used directly as matching points, and the binocular localization mapping algorithm maps them into space to find the flame's position. Fig. 5 is a schematic diagram of the binocular camera localization principle of the invention, which works as follows:

the projection of a flame point P in space on the image plane of one camera C1 is C_L; every point on the line connecting P and C_L projects onto C_L, so the depth of P cannot be obtained from the coordinates of C_L alone. With a second camera C2, on whose image plane C_R is the projection of P, P is the intersection of the line l1 (through C1 and C_L) and the line l2 (through C2 and C_R). The spatial position of the flame point P can therefore be uniquely determined.
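The intersection of the rays l1 and l2 can be computed by linear (DLT) triangulation from the two cameras' projection matrices. A sketch with an invented calibrated camera pair, verifying that a known flame point is recovered from its two projections C_L and C_R:

```python
import numpy as np

def triangulate(P1, P2, uv1, uv2):
    """Linear (DLT) triangulation: intersect the rays through C_L and C_R.
    P1, P2 are 3x4 projection matrices; uv1, uv2 the matched pixel coordinates."""
    (u1, v1), (u2, v2) = uv1, uv2
    A = np.stack([u1 * P1[2] - P1[0],
                  v1 * P1[2] - P1[1],
                  u2 * P2[2] - P2[0],
                  v2 * P2[2] - P2[1]])
    _, _, Vt = np.linalg.svd(A)      # null-space vector minimizes the residual
    X = Vt[-1]
    return X[:3] / X[3]              # homogeneous -> Euclidean coordinates

# Hypothetical calibrated pair: identical intrinsics, second camera shifted in x.
K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-0.1], [0.0], [0.0]])])

P = np.array([0.2, -0.1, 3.0])                # an invented flame point in space
project = lambda Pm, X: (Pm @ np.append(X, 1.0))[:2] / (Pm @ np.append(X, 1.0))[2]
uv1, uv2 = project(P1, P), project(P2, P)     # its projections C_L and C_R
print(np.round(triangulate(P1, P2, uv1, uv2), 6))   # recovers P
```

With noise-free projections the recovered point matches P exactly; with real detections the SVD gives the least-squares intersection of the two rays.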
The present invention provides an intelligent fire monitoring method based on deep learning, comprising the following steps:

the image acquisition module monitors different places in real time through a binocular camera, acquires surveillance video image data, and passes the data to the trained deep learning model on the server;

the image analysis module detects flames in the surveillance video image data in real time using the trained deep learning model and obtains flame pixel position information;

the flame localization module receives the flame pixel position information detected by the deep learning model and maps it into space using the binocular stereo-vision localization algorithm to obtain the flame's spatial position; once a flame is detected, the monitoring-room alarm is triggered to remind monitoring personnel to observe and handle the fire situation in the surveillance video;

when the alarm receives no response and continues to sound, the fixed-point fire-extinguishing module, after a preset interval, receives the flame's spatial position from the flame localization module and transmits it to a controller, which controls the fire-fighting system to carry out fixed-point extinguishing.
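The escalation flow above (alarm, wait for a response, preset interval, then fixed-point extinguishing) can be sketched as a small decision function. The state names and structure are illustrative only, not the patent's actual controller logic:

```python
def escalation(flame_detected, operator_acknowledged, interval_elapsed):
    """Decision logic for the alarm / fixed-point extinguishing flow.
    Returns the action the system takes at the current moment."""
    if not flame_detected:
        return "monitor"                   # normal surveillance, no flame in view
    if operator_acknowledged:
        return "manual_handling"           # personnel observe and handle the fire
    if not interval_elapsed:
        return "keep_alarming"             # alarm continues, preset timer running
    return "fixed_point_extinguish"        # no response after the preset interval

print(escalation(False, False, False))     # monitor
print(escalation(True, False, False))      # keep_alarming
print(escalation(True, True, False))       # manual_handling
print(escalation(True, False, True))       # fixed_point_extinguish
```

The preset interval gives monitoring personnel priority over the automatic system; the fire-fighting system is only actuated once that window has passed without acknowledgement.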
Although the contents of the present invention have been described in detail through the preferred embodiments above, it should be understood that the description above is not to be considered a limitation of the invention. After those skilled in the art have read the above, various modifications and substitutions of the invention will be apparent. The protection scope of the invention should therefore be limited by the appended claims.
Claims (10)
1. An intelligent fire monitoring system based on deep learning, characterized by comprising:

an image acquisition module, which monitors different places in real time through a binocular camera and acquires surveillance video image data;

an image analysis module, which detects flames in the surveillance video images in real time using a trained deep learning model, judges whether a flame appears, and if so determines the flame region, marks the flame region with a bounding box, and takes the bounding-box corner points as matching points to obtain flame pixel position information;

a flame localization module, which receives the flame pixel position information detected by the deep learning model and maps it into space using a binocular localization mapping method to obtain the flame's spatial position, finally locating the flame.
2. The intelligent fire monitoring system of claim 1, characterized in that the binocular localization mapping method further includes:

for any flame point P in space, the projection of P on the image plane of a first camera C1 is a point C_L, and the projection of P on the image plane of a second camera C2 is a point C_R; the line through C1 and C_L is l1, and the line through C2 and C_R is l2; the flame point P is then the intersection of l1 and l2, so the spatial position of the flame point P is uniquely determined.
3. The intelligent fire monitoring system of claim 1, characterized by further comprising: when a flame is detected, triggering the monitoring-room alarm to remind monitoring personnel to observe the fire situation in the surveillance video images.
4. The intelligent fire monitoring system of claim 3, characterized by further comprising a fixed-point fire-extinguishing module, which, when the alarm receives no response and continues to sound, receives the flame's spatial position after a preset interval and controls the fire-fighting system through a controller to carry out fixed-point extinguishing.
5. The intelligent fire monitoring system of claim 1, characterized in that the deep learning model is a Mask R-CNN model.
6. The intelligent fire monitoring system of claim 1 or 5, characterized in that the deep learning model includes:

a feature extraction layer, which extracts feature information from the target image and generates multi-scale feature maps;

a region proposal network, which generates candidate regions of the target image;

a fully connected layer, used for target classification and bounding-box regression;

an RoIAlign layer, which maps the obtained candidate regions onto the corresponding regions of the feature maps, downsamples those regions into fixed-size feature vectors, and passes the feature vectors to the fully connected layer, which organizes the information to finally perform target classification and bounding-box regression.
7. The intelligent fire monitoring system of claim 6, characterized in that the feature extraction layer uses a feature pyramid network built from a ResNet convolutional neural network, and the ResNet convolutional neural network combines parallel branches with long-skip connections.
8. The intelligent fire monitoring system of claim 6, characterized in that the deep learning model adds a mask branch after the RoIAlign layer to generate a segmentation mask of the target image.
9. An intelligent fire monitoring method based on the intelligent fire monitoring system of any one of claims 1-8, characterized in that the method includes the following steps:

monitoring different places in real time through a binocular camera, acquiring surveillance video image data, and passing the acquired surveillance video image data to the trained deep learning model on the server;

detecting flames in the surveillance video image data in real time using the trained deep learning model, judging whether a flame appears, and if so determining the flame region, marking the flame region with a bounding box, and taking the bounding-box corner points as matching points to obtain flame pixel position information;

mapping the flame pixel position information into space using the binocular localization mapping method to obtain the flame's spatial position;

transmitting the flame's spatial position to a controller, which controls the fire-fighting system to carry out fixed-point extinguishing.
10. The intelligent fire monitoring method of claim 9, characterized by further including:

when a flame is detected, triggering the monitoring-room alarm to remind monitoring personnel to observe and handle the fire situation in the surveillance video;

when the alarm receives no response and continues to sound, receiving the flame's spatial position after a preset interval and carrying out fixed-point extinguishing.
Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
---|---|---|---
CN201910160596.4A | 2019-03-04 | 2019-03-04 | Intelligent fire monitoring system and method based on deep learning
Publications (1)

Publication Number | Publication Date
---|---
CN109903507A (en) | 2019-06-18

Family ID: 66946243

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
---|---|---|---
CN201910160596.4A (pending) | Intelligent fire monitoring system and method based on deep learning | 2019-03-04 | 2019-03-04

Country Status (1)

Country | Link
---|---
CN | CN109903507A (en)
Worldwide Applications (1)
Filing Date | Country | Application Number | Publication | Status |
---|---|---|---|---|
2019-03-04 | CN | CN201910160596.4A | CN109903507A | Pending |
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102298818A (en) * | 2011-08-18 | 2011-12-28 | 中国科学技术大学 | Binocular shooting fire detecting and positioning device and fire positioning method thereof |
EP3561788A1 (en) * | 2016-12-21 | 2019-10-30 | Hochiki Corporation | Fire monitoring system |
CN108229442A (en) * | 2018-02-07 | 2018-06-29 | 西南科技大学 | Face fast and stable detection method in image sequence based on MS-KCF |
CN109376681A (en) * | 2018-11-06 | 2019-02-22 | 广东工业大学 | A kind of more people's Attitude estimation method and system |
Non-Patent Citations (1)
Title |
---|
曹之乐 (Cao Zhile): "Research and Application of a Focus Localization Method Based on Binocular Vision" (《基于双目视觉的焦点定位方法研究与应用》), China Masters' Theses Full-text Database, Information Science and Technology series * |
Cited By (32)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20210017137A (en) * | 2019-08-07 | 2021-02-17 | 주식회사 엘지유플러스 | Real time fire detection system and fire detection method using the same |
KR102265291B1 (en) * | 2019-08-07 | 2021-06-16 | 주식회사 엘지유플러스 | Real time fire detection system and fire detection method using the same |
CN110599489A (en) * | 2019-08-26 | 2019-12-20 | 华中科技大学 | Target space positioning method |
CN110675449A (en) * | 2019-09-02 | 2020-01-10 | 山东科技大学 | Binocular camera-based offshore flow detection method |
CN110728186A (en) * | 2019-09-11 | 2020-01-24 | 中国科学院声学研究所南海研究站 | Fire detection method based on multi-network fusion |
CN110765937A (en) * | 2019-10-22 | 2020-02-07 | 新疆天业(集团)有限公司 | Coal yard spontaneous combustion detection method based on transfer learning |
CN110975206A (en) * | 2019-12-06 | 2020-04-10 | 北京南瑞怡和环保科技有限公司 | Intelligent water mist fire extinguishing system |
CN110975206B (en) * | 2019-12-06 | 2021-06-29 | 北京南瑞怡和环保科技有限公司 | Intelligent water mist fire extinguishing system |
CN111145275A (en) * | 2019-12-30 | 2020-05-12 | 重庆市海普软件产业有限公司 | Intelligent automatic control forest fire prevention monitoring system and method |
CN111258309A (en) * | 2020-01-15 | 2020-06-09 | 上海锵玫人工智能科技有限公司 | Fire extinguishing method for urban fire-fighting robot |
CN111289112A (en) * | 2020-02-25 | 2020-06-16 | 安徽炬视科技有限公司 | Relation network-based power monitoring video flame detection system and method |
CN111539264A (en) * | 2020-04-02 | 2020-08-14 | 上海海事大学 | Ship flame detection positioning system and detection positioning method |
CN111582353A (en) * | 2020-04-30 | 2020-08-25 | 恒睿(重庆)人工智能技术研究院有限公司 | Image feature detection method, system, device and medium |
CN111582353B (en) * | 2020-04-30 | 2022-01-21 | 恒睿(重庆)人工智能技术研究院有限公司 | Image feature detection method, system, device and medium |
CN113713294A (en) * | 2020-05-25 | 2021-11-30 | 中国石油化工股份有限公司 | Method, device and system for carrying out safety protection on mobile refueling equipment |
CN113515989A (en) * | 2020-07-20 | 2021-10-19 | 阿里巴巴集团控股有限公司 | Moving object, smoke and fire detection method, device and storage medium |
CN111898523A (en) * | 2020-07-29 | 2020-11-06 | 电子科技大学 | Remote sensing image special vehicle target detection method based on transfer learning |
CN112052797A (en) * | 2020-09-07 | 2020-12-08 | 合肥科大立安安全技术有限责任公司 | MaskRCNN-based video fire identification method and system |
CN112396026A (en) * | 2020-11-30 | 2021-02-23 | 北京华正明天信息技术股份有限公司 | Fire image feature extraction method based on feature aggregation and dense connection |
CN112465119A (en) * | 2020-12-08 | 2021-03-09 | 武汉理工光科股份有限公司 | Fire-fighting dangerous case early warning method and device based on deep learning |
CN112633231A (en) * | 2020-12-30 | 2021-04-09 | 珠海大横琴科技发展有限公司 | Fire disaster identification method and device |
CN112801148A (en) * | 2021-01-14 | 2021-05-14 | 西安电子科技大学 | Fire recognition and positioning system and method based on deep learning |
CN113705539A (en) * | 2021-09-29 | 2021-11-26 | 内江师范学院 | Intelligent fire monitor fire extinguishing control method and control system |
CN113705539B (en) * | 2021-09-29 | 2023-05-05 | 内江师范学院 | Intelligent fire monitor control fire extinguishing method and control system |
CN114399719A (en) * | 2022-03-25 | 2022-04-26 | 合肥中科融道智能科技有限公司 | Transformer substation fire video monitoring method |
CN114870312A (en) * | 2022-04-28 | 2022-08-09 | 南通阳鸿石化储运有限公司 | Intelligent fire extinguishing method and system for reservoir area hose station based on digital model |
CN114943884A (en) * | 2022-06-10 | 2022-08-26 | 慧之安信息技术股份有限公司 | Equipment protection method based on deep learning |
CN114943884B (en) * | 2022-06-10 | 2022-11-18 | 慧之安信息技术股份有限公司 | Equipment protection method based on deep learning |
CN115797439A (en) * | 2022-11-11 | 2023-03-14 | 中国消防救援学院 | Flame space positioning system and method based on binocular vision |
CN116665034A (en) * | 2022-11-11 | 2023-08-29 | 中国消防救援学院 | Three-dimensional matching and flame space positioning method based on edge characteristics |
CN116665034B (en) * | 2022-11-11 | 2023-10-31 | 中国消防救援学院 | Three-dimensional matching and flame space positioning method based on edge characteristics |
CN117726948A (en) * | 2024-02-07 | 2024-03-19 | 成都白泽智汇科技有限公司 | Binocular image processing method and system based on neural network model |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109903507A (en) | A kind of fire disaster intelligent monitor system and method based on deep learning | |
CN110569772B (en) | Method for detecting state of personnel in swimming pool | |
CN109559320A (en) | Realize that vision SLAM semanteme builds the method and system of figure function based on empty convolution deep neural network | |
CN107437318B (en) | Visible light intelligent recognition algorithm | |
CN109543601A (en) | A kind of unmanned vehicle object detection method based on multi-modal deep learning | |
CN107580199A (en) | The target positioning of overlapping ken multiple-camera collaboration and tracking system | |
CN111679695B (en) | Unmanned aerial vehicle cruising and tracking system and method based on deep learning technology | |
CN107392965A (en) | A kind of distance-finding method being combined based on deep learning and binocular stereo vision | |
CN101167086A (en) | Human detection and tracking for security applications | |
CN111339905B (en) | CIM well lid state visual detection system based on deep learning and multiple visual angles | |
CN109544548A (en) | Defect inspection method, device, server, equipment and the storage medium of cutlery box | |
CN109784278A (en) | The small and weak moving ship real-time detection method in sea based on deep learning | |
CN106846375A (en) | A kind of flame detecting method for being applied to autonomous firefighting robot | |
CN103413395A (en) | Intelligent smoke detecting and early warning method and device | |
CN107295230A (en) | A kind of miniature object movement detection device and method based on thermal infrared imager | |
KR20180133745A (en) | Flying object identification system using lidar sensors and pan/tilt zoom cameras and method for controlling the same | |
CN111163290B (en) | Method for detecting and tracking night navigation ship | |
CN109342423A (en) | A kind of urban discharging pipeline acceptance method based on the mapping of machine vision pipeline | |
CN114202646A (en) | Infrared image smoking detection method and system based on deep learning | |
CN111539325A (en) | Forest fire detection method based on deep learning | |
CN113713292A (en) | Method and device for carrying out accurate flame discrimination, fire extinguishing point positioning and rapid fire extinguishing based on YOLOv5 model | |
CN115880231A (en) | Power transmission line hidden danger detection method and system based on deep learning | |
CN109697426B (en) | Flight based on multi-detector fusion shuts down berth detection method | |
US20210201542A1 (en) | Building maintaining method and system | |
CN111539264A (en) | Ship flame detection positioning system and detection positioning method |
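Several of the similar documents above (e.g. the entries combining deep learning with binocular stereo vision for ranging and flame positioning) rely on the standard rectified-stereo triangulation relation Z = f·B/d. As background only, here is a minimal sketch of that relation; the function name and all numeric values are illustrative assumptions, not taken from any patent listed here:

```python
def stereo_to_point(u_l, v_l, u_r, f, baseline, cx, cy):
    """Map a matched pixel pair from a calibrated, rectified stereo rig
    to a 3-D point in the left camera frame.

    Pinhole/disparity model: depth Z = f * B / d, where d = u_l - u_r
    is the horizontal disparity in pixels, f and (cx, cy) are the focal
    length and principal point in pixels, and B is the baseline in metres.
    """
    d = u_l - u_r
    if d <= 0:
        # Zero or negative disparity means the point is at infinity or
        # behind the rig; no finite depth can be recovered.
        raise ValueError("disparity must be positive")
    z = f * baseline / d          # depth along the optical axis
    x = (u_l - cx) * z / f        # back-project the left-image pixel
    y = (v_l - cy) * z / f
    return x, y, z
```

For example, with an assumed 700 px focal length, 0.12 m baseline, principal point (320, 240), and a match at (390, 240) left / (376, 240) right, the disparity is 14 px and the recovered point is (0.6, 0.0, 6.0) m.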
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | Application publication date: 2019-06-18 ||