CN112711996A - System for detecting occupancy of fire fighting access - Google Patents
System for detecting occupancy of fire fighting access
- Publication number
- CN112711996A (application number CN202011527880.XA)
- Authority
- CN
- China
- Prior art keywords
- fire fighting
- loss
- camera
- module
- channel
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/52—Surveillance or monitoring of activities, e.g. for recognising suspicious objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G08—SIGNALLING
- G08B—SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
- G08B25/00—Alarm systems in which the location of the alarm condition is signalled to a central station, e.g. fire or police telegraphic systems
- G08B25/01—Alarm systems in which the location of the alarm condition is signalled to a central station, e.g. fire or police telegraphic systems characterised by the transmission medium
- G08B25/08—Alarm systems in which the location of the alarm condition is signalled to a central station, e.g. fire or police telegraphic systems characterised by the transmission medium using communication transmission lines
-
- G—PHYSICS
- G08—SIGNALLING
- G08B—SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
- G08B3/00—Audible signalling systems; Audible personal calling systems
- G08B3/10—Audible signalling systems; Audible personal calling systems using electric transmission; using electromagnetic transmission
Abstract
The invention provides a system for fire fighting access occupation detection. The system can quickly and accurately detect illegal occupation of a fire fighting access and promptly notify background management personnel to handle it. Compared with traditional manual or automatic inspection methods, it offers a higher degree of automation and intelligence and a low false alarm rate, is suitable for complex scenes, and helps keep the fire fighting access clear.
Description
Technical Field
The invention relates to a fire fighting access occupation detection system based on an artificial intelligence image recognition technology.
Background
In recent years, safety problems have become prominent, and people pay increasing attention to the safety of living and working environments. A fire fighting access is an emergency passage provided in public areas. When a fire occurs, firefighters, fire engines and fire fighting equipment enter through the fire fighting access; if the access is occupied by vehicles or other hard-to-move objects, rescue can be delayed, causing greater economic loss and casualties. At present, most organizations only place warning signs in front of fire fighting accesses and rely on people to observe them voluntarily. In practice, however, drivers often take their chances and park motor vehicles in the fire fighting access, and security personnel frequently fail to discover this in time. This phenomenon easily leads to safety accidents and causes great losses to the public.
At present, automatic identification of fire fighting access occupation mainly uses 2 methods:
(1) Sensor-based methods: obstacles in the fire fighting access are detected by infrared, ultrasonic, geomagnetic or similar sensors, and an alarm is issued when an obstacle is detected. However, in many fire fighting accesses, vehicles or pedestrians may pass through briefly; in such scenes the sensors detect obstacles continuously, producing frequent false alarms or misjudgments, so this method is impractical.
(2) Methods based on comparing the difference between the monitoring picture and a reference picture: a picture of the unoccupied fire fighting access is taken as the reference and compared with the picture captured by real-time monitoring; if the percentage of differing area continuously exceeds a threshold, the access is considered occupied. This approach cannot accurately identify the object occupying the fire fighting access and causes a series of false alarms: if pedestrian or vehicle traffic is heavy, the differing area may continuously exceed the threshold and trigger a false alarm; likewise, scenes that do not substantially block the access, such as several people or animals lingering in the access for a period of time, may also trigger false alarms.
For the above reasons, there is a need for a more accurate and intelligent fire fighting access occupation automatic identification and detection system.
Disclosure of Invention
The purpose of the invention is as follows: the invention provides an intelligent fire fighting access detection system that accurately identifies both the object occupying a fire fighting access and the occupied position. Specifically, it provides a system for detecting occupation of a fire fighting access by vehicles or other obstructions. The system comprises a camera deployment control module, a fire fighting access calibration module, an intelligent recognition module for objects appearing in the fire fighting access, a fire fighting access occupation detection module and an alarm module;
the camera deployment control module erects a camera to monitor the fire fighting access;
the fire fighting channel calibration module is used for reading the camera view of the camera and calibrating the range of the fire fighting channel in a block diagram form;
the intelligent object identification module is used for detecting targets occupying the fire fighting channel;
the fire fighting access occupation detection module is used for identifying and classifying objects in the photos, read one frame per minute from the camera, by means of the intelligent recognition module for objects appearing in the fire fighting access;
and the alarm module is used for giving an alarm to management personnel.
The camera comprises a monitoring camera, a data transmission module and a voice warning module; the data transmission module reads one captured photo every minute and transmits the data to the management background system via 4G or 5G wireless signals, and the voice warning module can broadcast warnings according to instructions sent by the management background system, demanding that targets such as vehicles occupying the fire fighting access leave it.
When the camera is installed, the camera view is required to cover the whole fire fighting access.
The intelligent recognition module for the objects appearing in the fire fighting channel comprises a training sample knowledge base and an intelligent recognition model;
a training sample knowledge base is established for people or objects that frequently appear in the fire fighting access (for example, 3 times in a day), including large automobiles, small and medium automobiles, electric bicycles, motorcycles, people, cats, dogs, boxes and tricycles; the knowledge base comprises sample pictures, their storage locations, names and sizes, and the positions, sizes, classes and image channel numbers of the identified objects;
the yolov3 deep neural network model is improved by introducing a mixed domain attention mechanism; the improved model is trained on the training sample knowledge base to obtain an intelligent recognition model, which detects the target occupying the fire fighting access, its position, and its object range block diagram.
The specific structure of the improved yolov3 deep neural network model fusing the mixed domain attention mechanism comprises the following steps:
the backbone network adopts darknet53 for extracting image detail features;
feature maps of 3 different layers are extracted from the backbone network, of sizes 13 × 13, 26 × 26 and 52 × 52, each suited to detecting targets of a different scale; points on these feature maps are mapped back to the original image to obtain candidate box positions for target detection and classification (the feature map is an attribute of the yolov3 model; the model is then continuously optimized by training under the loss constraint);
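The mapping from a feature-map grid cell back to original-image coordinates can be sketched as follows. This is an illustrative assumption based on the standard yolov3 geometry (416 × 416 input; the 13 × 13, 26 × 26 and 52 × 52 maps correspond to strides of 32, 16 and 8 pixels), not code from the patent:

```python
# Hypothetical sketch: map a YOLOv3-style grid cell to the image position
# it is responsible for. Each cell of an SxS feature map covers a square
# of side input_size / S pixels in the original image.
def cell_to_image_center(grid_size, row, col, input_size=416):
    """Return the (x, y) image coordinates of the center of grid cell (row, col)."""
    stride = input_size / grid_size          # pixels covered by one cell
    return ((col + 0.5) * stride, (row + 0.5) * stride)

# coarse 13x13 cells detect large targets; fine 52x52 cells detect small ones
centers = {s: cell_to_image_center(s, 0, 0) for s in (13, 26, 52)}
```

A predicted box offset within a cell would be added to this center before scaling to the original image.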
a mixed domain attention mechanism is added, and the loss function Loss is composed of 4 parts: a localization loss $l_{box}$, a classification loss $l_{cls}$, a confidence loss $l_{obj}$ and a target occlusion loss $l_{se}$; the localization, confidence and occlusion losses use squared error, and the classification loss uses cross entropy. Specifically:

$\mathrm{Loss} = l_{box} + l_{se} + l_{cls} + l_{obj}$

where $S^2$ is the grid size, taking the values 13 × 13, 26 × 26 and 52 × 52; $B$ is the number of candidate boxes generated by each grid cell; $x_i, y_i$ are the abscissa and ordinate of the upper-left corner of the ground-truth box, and $w_i, h_i$ its width and height; $\hat{x}_i, \hat{y}_i$ are the predicted values of $x_i, y_i$, and $\hat{w}_i, \hat{h}_i$ the predicted values of $w_i, h_i$; $x_0, y_0, w_0, h_0$ describe the calibrated fire fighting access area to be monitored (i.e. the minimum convex-hull quadrilateral of the pre-calibrated fire fighting access area, which is used to construct the $l_{se}$ loss and makes the model localize targets more accurately); Loss is the total error of the whole network;

the parameter $\mathbb{1}_{ij}^{obj}$ indicates whether the $j$-th prediction box of the $i$-th grid cell is responsible for the target: its value is 1 if so, and 0 otherwise;

the parameter $\mathbb{1}_{ij}^{noobj}$ is 0 if there is an object at the $j$-th prediction box of the $i$-th grid cell, and 1 otherwise;

the parameter $c_i$ is the confidence of the prediction box at the $i$-th grid cell, computed as the product of the probability that the current prediction box contains an object and the intersection-over-union of the prediction box and the ground-truth box; $\hat{c}_i$ is the corresponding predicted value;

the parameter $p_i$ is the object class of the prediction box at the $i$-th grid cell, and $\hat{p}_i$ is the predicted class information;

$\lambda_{coord}$, $\lambda_{noobj}$ and $\lambda_{class}$ are loss coefficients. The improved yolov3 deep neural network model fused with the mixed domain attention mechanism is optimized by training (i.e. the process of improving the network's performance through training combined with result-driven hyperparameter tuning), finally yielding the intelligent recognition model for detecting targets occupying the fire fighting access.
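The composition of the four loss terms can be illustrated for a single prediction/ground-truth pair. This is a hypothetical sketch, not the patent's exact formula: the $l_{se}$ construction here (penalizing a predicted center that falls outside the calibrated lane rectangle) is an assumed instantiation of the occlusion term, and the box format is simplified:

```python
import math

# Illustrative sketch of the 4-part loss: squared error for localization,
# confidence and occlusion; cross entropy for classification.
def detection_loss(pred, truth, lane, lambda_coord=5.0):
    """pred/truth: dicts with keys x, y, w, h, conf, probs; lane: (x0, y0, w0, h0)."""
    # l_box: squared error on box coordinates, weighted by lambda_coord
    l_box = lambda_coord * sum((pred[k] - truth[k]) ** 2 for k in ("x", "y", "w", "h"))
    # l_obj: squared error on confidence
    l_obj = (pred["conf"] - truth["conf"]) ** 2
    # l_cls: cross entropy over class probabilities
    l_cls = -sum(t * math.log(p) for t, p in zip(truth["probs"], pred["probs"]) if t > 0)
    # l_se (assumed form): squared distance of the predicted center from
    # the calibrated lane rectangle, zero when the center lies inside it
    x0, y0, w0, h0 = lane
    dx = max(x0 - pred["x"], 0, pred["x"] - (x0 + w0))
    dy = max(y0 - pred["y"], 0, pred["y"] - (y0 + h0))
    l_se = dx ** 2 + dy ** 2
    return l_box + l_se + l_cls + l_obj
```

In training, this per-box quantity would be summed over the $S^2$ grid cells and $B$ boxes, gated by the $\mathbb{1}_{ij}^{obj}$ / $\mathbb{1}_{ij}^{noobj}$ indicators.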
The fire fighting access occupation detection module identifies and classifies the objects in each photo (one frame read from the camera per minute) using the intelligent recognition module for objects appearing in the fire fighting access. Specifically: people and animals are not processed further; for vehicles and other objects that may actually block the fire fighting access (such as boxes), the module judges whether the object's range diagram intersects the calibrated fire fighting access range diagram. If they intersect, the object class, object center position and range diagram data are stored in a pending table; the data of 3 consecutive photos stored in that table are taken, and if the class is consistent across the three photos and the mean error of the object center positions and range diagram sizes (the range diagram is the position diagram obtained when the model detects the object) does not exceed 5%, the fire fighting access is judged to be occupied.
The alarm module alerts management personnel. Specifically: when the fire fighting access is found to be occupied, it immediately pops up a window for the manager with the relevant photos and the location of the deployed camera, and uses the camera's attached voice warning module to broadcast a demand that the occupying vehicle leave; if the camera's monitored area is still occupied after N minutes (typically N = 5), the occupation photos, location and occupation-time information are sent via WeChat dispatch to the managers nearest the fire fighting access for on-site handling.
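The two-stage escalation can be sketched as a small state machine. The notification actions here (`popup_and_voice`, `wechat_dispatch`) are hypothetical labels standing in for the popup/broadcast/dispatch integrations; only the timing logic follows the text:

```python
# Two-stage alarm escalation: immediate popup + voice broadcast on first
# detection, WeChat dispatch if the occupation persists for n_minutes.
class AlarmEscalator:
    def __init__(self, n_minutes=5):
        self.n_minutes = n_minutes
        self.first_seen = None          # minute at which occupation was first seen
        self.actions = []               # log of triggered actions

    def observe(self, minute, occupied):
        if not occupied:
            self.first_seen = None      # access cleared; reset the timer
            return
        if self.first_seen is None:
            self.first_seen = minute
            self.actions.append(("popup_and_voice", minute))
        elif minute - self.first_seen >= self.n_minutes:
            self.actions.append(("wechat_dispatch", minute))
            self.first_seen = None      # handed off to on-site managers
```

One observation per minute matches the one-frame-per-minute capture cycle of the camera module.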
Beneficial effects: the system can accurately and effectively detect occupation of the fire fighting access, cooperates well with background management personnel, and can supplement the training data set with newly observed occupation cases, increasing the effectiveness and robustness of the model's detection.
Drawings
The foregoing and/or other advantages of the invention will become further apparent from the following detailed description of the invention when taken in conjunction with the accompanying drawings.
FIG. 1 is a schematic diagram of the present invention;
FIG. 2 is a schematic diagram of fire fighting access calibration;
FIG. 3a is a schematic diagram of a yolov3 detection model network structure of the fusion mixed domain attention mechanism according to the present invention;
FIG. 3b is a detailed architecture diagram of yolov3 detection model network.
FIG. 4 is a schematic diagram of a detection constraint relationship according to the present invention.
Detailed Description
As shown in fig. 1, the present example provides a system for fire fighting access occupancy detection, comprising:
the camera control module is used for controlling the monitored fire fighting access area in the early stage of a camera, so that the fire fighting access area to be monitored can be completely displayed in the visual field of the camera. One frame of photo which can be read every minute is shot, and data is transmitted to a management background system through 4G or 5G wireless signals.
The fire fighting access calibration module is a software module that reads the camera's field of view and is used to manually calibrate the range of the fire fighting access in block diagram form, as shown in fig. 2.
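The downstream intersection test between a detected object's range diagram and the calibrated region can be sketched as follows. This is a simplification under an assumption: the patent calibrates a convex quadrilateral, while here the lane is an axis-aligned rectangle `(x, y, w, h)` in camera coordinates:

```python
# Test whether a detected object's bounding box overlaps the calibrated
# fire fighting access region (both as axis-aligned rectangles).
def rects_intersect(a, b):
    """a, b: (x, y, w, h) rectangles with (x, y) the top-left corner."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

lane = (100, 200, 400, 80)               # manually calibrated access region
car = (150, 210, 60, 40)                 # detected object's range diagram
blocked = rects_intersect(lane, car)
```

For the actual quadrilateral calibration, a polygon-intersection test (e.g. separating-axis) would replace this rectangle check.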
The training sample knowledge base of the intelligent recognition module for objects appearing in the fire fighting access initially contains about 10000 samples, comprising sample pictures of large automobiles, small and medium automobiles, electric bicycles, motorcycles, people, cats, dogs, boxes and tricycles, together with each picture's storage location, name and size, and the position, size and class label of each identified object.
The intelligent recognition model of the module is shown in fig. 3a. In the figure, common convolution-plus-normalization blocks are denoted Convolutional and Convs; filters denotes the number of convolution kernels, size the kernel size, and output the output size after the operation; /2 denotes 2 × 2 max pooling; Residual is a residual convolution block whose specific structure is shown at the bottom right; the coefficient in front of a network block composed of Convolutional and Residual denotes how many times that block is repeated; sigmoid denotes the activation function, Maxpool max pooling, AVGPool average pooling, and scale a scaling function, generally applied per dimension; CBAM denotes the mixed domain attention mechanism, comprising a channel attention mechanism and a spatial attention mechanism, with the detailed structure shown in fig. 3b. The labeled sample data is trained with the yolov3 target detection model to obtain the fire fighting access occupation detection model; the input of the network model is a 416 × 416 image with 3 channels, and the output is the prediction boxes on the image together with their confidence and class information. The network backbone adopts darknet53 to extract image detail features, and feature maps of 3 different layers, of sizes 13 × 13, 26 × 26 and 52 × 52, are extracted; meanwhile, the mixed domain attention mechanism is added to the darknet53 backbone to improve the model's feature extraction, and the feature maps are mapped back to the original image to detect and classify targets. The loss is constructed from the localization loss, classification loss, confidence loss and target occlusion loss; the localization, confidence and occlusion losses use squared error and the classification loss uses cross entropy, computed as follows:
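The data flow of a CBAM-style mixed domain attention block (channel attention followed by spatial attention) can be sketched in numpy. This is a deliberately simplified illustration: the real CBAM uses a shared two-layer MLP for the channel branch and a 7 × 7 convolution for the spatial branch, whereas here both branches reduce to untrained avg/max pooling plus a sigmoid:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Simplified CBAM-style block: reweight channels, then reweight spatial
# positions, leaving the feature map's shape unchanged.
def cbam_sketch(x):
    """x: feature map of shape (C, H, W); returns an array of the same shape."""
    # channel attention: pool each channel over H, W; combine avg and max descriptors
    avg_c = x.mean(axis=(1, 2))
    max_c = x.max(axis=(1, 2))
    ch_att = sigmoid(avg_c + max_c)                  # shape (C,)
    x = x * ch_att[:, None, None]
    # spatial attention: pool each position over channels; combine avg and max maps
    avg_s = x.mean(axis=0)
    max_s = x.max(axis=0)
    sp_att = sigmoid(avg_s + max_s)                  # shape (H, W)
    return x * sp_att[None, :, :]

out = cbam_sketch(np.ones((4, 5, 5)))
```

In the patent's model, such blocks are inserted into the darknet53 backbone so that informative channels and lane regions contribute more to the extracted features.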
$\mathrm{Loss} = l_{box} + l_{se} + l_{cls} + l_{obj}$

where $S$ is the grid size, $S^2$ taking the values 13 × 13, 26 × 26 and 52 × 52; $B$ is the number of candidate boxes generated by each grid cell; $x_i, y_i$ are the abscissa and ordinate of the upper-left corner of the ground-truth box, and $w_i, h_i$ its width and height; $\hat{x}_i, \hat{y}_i$ are the predicted values of $x_i, y_i$, and $\hat{w}_i, \hat{h}_i$ the predicted values of $w_i, h_i$; $x_0, y_0, w_0, h_0$ describe the calibrated fire fighting access area to be monitored; Loss is the total error of the whole network;

the parameter $\mathbb{1}_{ij}^{obj}$ indicates whether the $j$-th prediction box of the $i$-th grid cell is responsible for the target: its value is 1 if so, and 0 otherwise;

the parameter $\mathbb{1}_{ij}^{noobj}$ is 0 if there is an object at the $j$-th prediction box of the $i$-th grid cell, and 1 otherwise;

the parameter $c_i$ is the confidence of the prediction box at the $i$-th grid cell, computed as the product of the probability that the current prediction box contains an object and the intersection-over-union of the prediction box and the ground-truth box; $\hat{c}_i$ is the corresponding predicted value;

the parameter $p_i$ is the object class of the prediction box at the $i$-th grid cell, and $\hat{p}_i$ is the predicted class information;

$\lambda_{coord}$, $\lambda_{noobj}$ and $\lambda_{class}$ are loss coefficients. The yolov3 neural network model fused with the mixed domain attention mechanism is optimally trained to obtain the final detection model, which can accurately detect occupation within the camera's field of view.
The training of the network model is constrained by this loss function, and an ideal detection model is finally obtained that can accurately detect the classes, positions and confidence of vehicles, obstructions and other objects in the camera's field of view.
The fire fighting access occupation detection module identifies and classifies the objects in each photo (one frame read from the camera per minute) using the intelligent recognition module for objects appearing in the fire fighting access. People and animals are not processed further; for vehicles and other objects that can actually block the fire fighting access, the module judges whether the object's range diagram intersects the calibrated fire fighting access range diagram. If they intersect, the object class, object center position and range diagram data are stored in a pending table, and the data of 3 consecutive photos stored there are taken. If the objects in the three photos are classified consistently, and the mean error of the positions and range diagram sizes does not exceed 5%, the fire fighting access is considered occupied. As shown in fig. 4, for the 3 consecutive photos, the areas $S_1, S_2, S_3$ of the block diagram identifying the vehicle in each photo and the center positions $(X_1, Y_1), (X_2, Y_2), (X_3, Y_3)$ of those block diagrams are used in the calculation; if the calculated result does not exceed 5%, occupation is determined.
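The exact formula behind fig. 4 is not reproduced in the text; one plausible reconstruction of the 5% check on $S_1, S_2, S_3$ and the centers $(X_k, Y_k)$ is the mean relative deviation of each sequence from its own mean. This sketch states that assumption explicitly:

```python
# Hypothetical reconstruction of the fig. 4 stability criterion: the mean
# relative deviation of the three areas, the three X coordinates and the
# three Y coordinates must each stay within the 5% tolerance.
def within_tolerance(areas, centers, tolerance=0.05):
    """areas: [S1, S2, S3]; centers: [(X1, Y1), (X2, Y2), (X3, Y3)]."""
    def mean_rel_dev(vals):
        m = sum(vals) / len(vals)
        return sum(abs(v - m) for v in vals) / (len(vals) * m)
    xs = [c[0] for c in centers]
    ys = [c[1] for c in centers]
    return all(mean_rel_dev(seq) <= tolerance for seq in (areas, xs, ys))
```

A stationary parked car passes this check across minute-spaced frames, while moving traffic shifts the centers enough to fail it.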
After the fire fighting access is found occupied, the alarm module first pops up a window for the manager with the relevant photos and the location of the deployed camera, and uses the camera's attached voice warning module to broadcast a demand that the occupying vehicle leave. If the camera's monitored area is still occupied after 5 minutes, the occupation photos, location and occupation-time information are sent via WeChat dispatch to the managers nearest the fire fighting access for on-site handling.
The present invention provides a system for fire fighting access occupation detection; there are many specific methods and ways to implement the technical solution, and the above is merely a preferred embodiment of the invention. It should be noted that a person skilled in the art can make several improvements and refinements without departing from the principle of the invention, and these should also be regarded as falling within the protection scope of the invention. All components not specified in this embodiment can be realized by the prior art.
Claims (7)
1. A system for detecting occupation of a fire fighting access, characterized by comprising a camera deployment control module, a fire fighting access calibration module, an intelligent recognition module for objects appearing in the fire fighting access, a fire fighting access occupation detection module and an alarm module;
the camera deployment control module erects a camera to monitor the fire fighting access;
the fire fighting channel calibration module is used for reading the camera view of the camera and calibrating the range of the fire fighting channel in a block diagram form;
the intelligent object identification module is used for detecting targets occupying the fire fighting channel;
the fire fighting access occupation detection module is used for identifying and classifying objects in the photos, read one frame per minute from the camera, by means of the intelligent recognition module for objects appearing in the fire fighting access;
and the alarm module is used for giving an alarm to management personnel.
2. The system of claim 1, wherein the camera comprises a monitoring camera, a data transmission module and a voice warning module; the data transmission module reads one captured photo every minute and transmits it to the management background system, and the voice warning module can broadcast warnings according to instructions sent by the management background system, demanding that the target occupying the fire fighting access leave it.
3. The system of claim 2, wherein the camera is installed such that its field of view covers the entire fire fighting access.
4. The system of claim 3, wherein the intelligent recognition module for the object appearing in the fire fighting tunnel comprises a training sample knowledge base and an intelligent recognition model;
establishing a training sample knowledge base for people or objects frequently appearing in the fire fighting channel, wherein the training sample knowledge base comprises sample pictures, storage positions, names, sample picture sizes, identification object positions, sizes, classification and image channel numbers;
the yolov3 deep neural network model is improved by introducing a mixed domain attention mechanism; the improved model is trained on the training sample knowledge base to obtain an intelligent recognition model, which detects the target occupying the fire fighting access, its position, and its object range block diagram.
5. The system of claim 4, wherein the specific structure of the improved yolov3 deep neural network model of the fused mixed domain attention mechanism comprises:
the backbone network adopts darknet53 for extracting image detail features;
extracting feature maps of 3 different layers, wherein the sizes of the feature maps are 13 × 13,26 × 26 and 52 × 52 respectively, and mapping an original image through the feature maps to detect and classify the target;
adding a mixed domain attention mechanism, wherein the loss function Loss is composed of 4 parts: a localization loss $l_{box}$, a classification loss $l_{cls}$, a confidence loss $l_{obj}$ and a target occlusion loss $l_{se}$; the localization, confidence and occlusion losses use squared error, and the classification loss uses cross entropy; specifically:

$\mathrm{Loss} = l_{box} + l_{se} + l_{cls} + l_{obj}$

wherein $S^2$ is the grid size, taking the values 13 × 13, 26 × 26 and 52 × 52; $B$ is the number of candidate boxes generated by each grid cell; $x_i, y_i$ are the abscissa and ordinate of the upper-left corner of the ground-truth box, and $w_i, h_i$ its width and height; $\hat{x}_i, \hat{y}_i$ are the predicted values of $x_i, y_i$, and $\hat{w}_i, \hat{h}_i$ the predicted values of $w_i, h_i$; $x_0, y_0, w_0, h_0$ describe the calibrated fire fighting access area to be monitored; Loss is the total error of the whole network;

the parameter $\mathbb{1}_{ij}^{obj}$ indicates whether the $j$-th prediction box of the $i$-th grid cell is responsible for the target: its value is 1 if so, and 0 otherwise;

the parameter $\mathbb{1}_{ij}^{noobj}$ is 0 if there is an object at the $j$-th prediction box of the $i$-th grid cell, and 1 otherwise;

the parameter $c_i$ is the confidence of the prediction box at the $i$-th grid cell, computed as the product of the probability that the current prediction box contains an object and the intersection-over-union of the prediction box and the ground-truth box; $\hat{c}_i$ is the corresponding predicted value;

the parameter $p_i$ is the object class of the prediction box at the $i$-th grid cell, and $\hat{p}_i$ is the predicted class information;

$\lambda_{coord}$, $\lambda_{noobj}$ and $\lambda_{class}$ are loss coefficients; the improved yolov3 deep neural network model fused with the mixed domain attention mechanism is optimized by training, finally obtaining the intelligent recognition model for detecting targets occupying the fire fighting access.
6. The system according to claim 5, wherein the fire fighting access occupation detection module identifies and classifies the objects in each photo (one frame read from the camera per minute) using the intelligent recognition module for objects appearing in the fire fighting access, specifically: people and animals are not processed further; for vehicles and other objects that can block the fire fighting access, it is judged whether the object's range diagram intersects the calibrated fire fighting access range diagram; if they intersect, the object class, object center position and range diagram data are stored in a pending table, the data of 3 consecutive photos stored in the table are taken, and if the three photos are classified consistently and the mean error of the object center positions and range diagram sizes does not exceed 5%, the fire fighting access is judged occupied.
7. The system of claim 6, wherein the alarm module is configured to alert managers, and comprises: when the fire fighting access is found to be occupied, the related photos and the position information of the monitoring camera are first presented to the managers in a pop-up window, and the voice warning module attached to the camera broadcasts a notice asking the occupying vehicle to leave; if the fire fighting access in the camera's monitoring area is still occupied after N minutes, the occupancy photos, position information and occupancy duration are sent via WeChat dispatch to managers near the fire fighting access for on-site handling.
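The two-stage escalation of claim 7 can be sketched as a small control flow with the notification channels injected as callbacks. All four hook names (`notify_popup`, `voice_broadcast`, `wechat_dispatch`, `still_occupied`) are hypothetical stand-ins for whatever pop-up, camera-speaker, WeChat and re-check interfaces the deployed system actually uses.

```python
def alarm_flow(event, notify_popup, voice_broadcast, wechat_dispatch,
               still_occupied, n_minutes=5):
    """Stage 1: pop-up alert to managers plus on-camera voice broadcast.
    Stage 2: if the channel is still occupied after n_minutes, dispatch
    photo, location and duration to nearby managers via WeChat.
    """
    notify_popup(event["photo"], event["location"])
    voice_broadcast("Fire fighting access occupied - please move the vehicle.")
    if still_occupied(event["camera_id"], n_minutes):
        wechat_dispatch(event["photo"], event["location"], n_minutes)
        return "dispatched"
    return "cleared"
```

Separating the cheap automatic stage (broadcast) from the expensive human stage (field dispatch) gives the occupying driver a window to self-correct before a manager is sent out.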
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011527880.XA CN112711996A (en) | 2020-12-22 | 2020-12-22 | System for detecting occupancy of fire fighting access |
Publications (1)
Publication Number | Publication Date |
---|---|
CN112711996A true CN112711996A (en) | 2021-04-27 |
Family
ID=75545187
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011527880.XA Pending CN112711996A (en) | 2020-12-22 | 2020-12-22 | System for detecting occupancy of fire fighting access |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112711996A (en) |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113670269A (en) * | 2021-08-12 | 2021-11-19 | 北京航空航天大学 | Large-view-field foreign matter detection device and method |
CN113869290A (en) * | 2021-12-01 | 2021-12-31 | 中化学交通建设集团有限公司 | Fire fighting access occupation identification method and device based on artificial intelligence technology |
CN114092858A (en) * | 2021-11-24 | 2022-02-25 | 浙江浩腾电子科技股份有限公司 | AI-based community fire fighting access occupation detection and identification method |
CN114582129A (en) * | 2022-03-11 | 2022-06-03 | 浙江电马云车科技有限公司 | Fire fighting access early warning system based on 5G |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110636715A (en) * | 2019-08-27 | 2019-12-31 | 杭州电子科技大学 | Self-learning-based automatic welding and defect detection method |
CN110796168A (en) * | 2019-09-26 | 2020-02-14 | 江苏大学 | Improved YOLOv 3-based vehicle detection method |
US20200098428A1 (en) * | 2018-09-20 | 2020-03-26 | University Of Utah Research Foundation | Digital rram-based convolutional block |
CN111178451A (en) * | 2020-01-02 | 2020-05-19 | 中国民航大学 | License plate detection method based on YOLOv3 network |
CN111476827A (en) * | 2019-01-24 | 2020-07-31 | 曜科智能科技(上海)有限公司 | Target tracking method, system, electronic device and storage medium |
CN111597902A (en) * | 2020-04-16 | 2020-08-28 | 浙江工业大学 | Motor vehicle illegal parking monitoring method |
CN111629181A (en) * | 2020-05-19 | 2020-09-04 | 辽宁云盾网力科技有限公司 | Fire-fighting life passage monitoring system and method |
Worldwide applications (2020): 2020-12-22 CN CN202011527880.XA patent/CN112711996A/en active Pending
Non-Patent Citations (1)
Title |
---|
Chen Jun: "Research and Implementation of Object Detection Based on the YOLOv3 Algorithm", CNKI Outstanding Master's Theses Full-text Database *
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN112711996A (en) | System for detecting occupancy of fire fighting access | |
CN108062349B (en) | Video monitoring method and system based on video structured data and deep learning | |
CN110348312A (en) | A kind of area video human action behavior real-time identification method | |
CN108009473B (en) | Video structuralization processing method, system and storage device based on target behavior attribute | |
CN108053427B (en) | Improved multi-target tracking method, system and device based on KCF and Kalman | |
CN109670404B (en) | Road ponding image detection early warning method based on hybrid model | |
CN112819068B (en) | Ship operation violation behavior real-time detection method based on deep learning | |
CN107437318B (en) | Visible light intelligent recognition algorithm | |
KR20090054522A (en) | Fire detection system and method basedon visual data | |
CN104463253B (en) | Passageway for fire apparatus safety detection method based on adaptive background study | |
KR102122850B1 (en) | Solution for analysis road and recognition vehicle license plate employing deep-learning | |
CN107330414A (en) | Act of violence monitoring method | |
CN112733690A (en) | High-altitude parabolic detection method and device and electronic equipment | |
CN112183472A (en) | Method for detecting whether test field personnel wear work clothes or not based on improved RetinaNet | |
CN110255318B (en) | Method for detecting idle articles in elevator car based on image semantic segmentation | |
CN113963301A (en) | Space-time feature fused video fire and smoke detection method and system | |
CN114898261A (en) | Sleep quality assessment method and system based on fusion of video and physiological data | |
CN112131951A (en) | System for automatically identifying behaviors of illegal ladder use in construction | |
CN114155470A (en) | River channel area intrusion detection method, system and storage medium | |
CN116129343A (en) | Fire-fighting channel occupation detection method and device and electronic equipment | |
EP4287147A1 (en) | Training method, use, software program and system for the detection of unknown objects | |
CN115995097A (en) | Deep learning-based safety helmet wearing standard judging method | |
CN101540891A (en) | Luggage delivery warehouse human body detecting system based on monitoring video | |
CN115798133A (en) | Flame alarm method, device, equipment and storage medium | |
CN115171214A (en) | Construction site abnormal behavior detection method and system based on FCOS target detection |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||