CN111629181B - Fire-fighting life passage monitoring system and method - Google Patents
- Publication number
- CN111629181B (application CN202010423684.1A)
- Authority
- CN
- China
- Prior art keywords
- hidden danger
- occupied
- fire
- monitoring
- vehicle
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N7/00—Television systems
- H04N7/18—Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/52—Surveillance or monitoring of activities, e.g. for recognising suspicious objects
-
- G—PHYSICS
- G08—SIGNALLING
- G08G—TRAFFIC CONTROL SYSTEMS
- G08G1/00—Traffic control systems for road vehicles
- G08G1/01—Detecting movement of traffic to be counted or controlled
- G08G1/017—Detecting movement of traffic to be counted or controlled identifying vehicles
- G08G1/0175—Detecting movement of traffic to be counted or controlled identifying vehicles by photographing vehicles, e.g. when violating traffic rules
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Theoretical Computer Science (AREA)
- Signal Processing (AREA)
- Alarm Systems (AREA)
Abstract
The invention discloses a fire-fighting life passage monitoring system and method, mainly comprising an AI intelligent module, a monitoring center platform, and a third-party supervision system. The AI intelligent module monitors a fire-fighting life passage area, acquires video images of the area, identifies hidden danger events in the video images, generates alarm information, and uploads the alarm information and hidden danger event images to the monitoring center platform. The monitoring center platform generates hidden danger event records and distributes the hidden danger events to the third-party supervision system. The third-party supervision system receives the hidden danger event records and handles the hidden danger events. Video-based intelligent analysis automatically discovers violations in which the fire truck passage is blocked or the fire truck working surface is occupied, issues timely automatic early warnings, and dynamically tracks the hidden dangers until they are eliminated, thereby realizing automatic hazard detection, automatic notification of the responsible person, and hazard elimination.
Description
Technical Field
The invention relates to the technical field of fire-protection monitoring, and in particular to a fire-fighting life passage monitoring system and method.
Background
Keeping the fire truck passage clear and the fire truck parking/working surface free of illegal occupation is the most basic condition for effective firefighting. However, with the rapid growth of urban population and vehicles, illegally parked vehicles block fire truck passages and occupy parking/working surfaces with increasing severity. In several recent fires, the serious consequence of blocked passages was that fire trucks could not get through, so the fires could not be extinguished in time. Although every social unit is responsible for keeping the fire truck passage and the parking/working surface clear, this guarantee is difficult to achieve because technical means are lacking and supervision depends entirely on manual inspection.
Disclosure of Invention
(I) Objects of the invention
To overcome at least one defect in the prior art, the invention discloses the following technical solution: video-based intelligent analysis automatically discovers violations in which a fire truck passage is blocked or the working surface is occupied, issues timely automatic early warnings, and dynamically tracks the hidden dangers until they are eliminated, thereby realizing automatic hazard detection, automatic notification of the responsible person, and hazard elimination.
(II) Technical solution
As a first aspect, the invention discloses a fire-fighting life passage monitoring method, comprising the following steps:
monitoring a fire-fighting life passage area and acquiring video images of the fire-fighting life passage area;
identifying occupied objects in the video images and judging whether they meet alarm rules; if so, generating alarm information and a hidden danger image and uploading them to a monitoring center platform;
generating a hidden danger event record and distributing the hidden danger event to a third-party supervision system;
receiving the hidden danger event and handling the hidden danger event.
In a possible embodiment, identifying an occupied object in the video images includes:
identifying whether the occupied object is a vehicle through its contour and license plate elements.
In a possible embodiment, when the occupied object is identified as a vehicle, the method includes:
judging the vehicle dwell time and, when the dwell time exceeds a dwell-time threshold, extracting the vehicle license plate data and transmitting it to the monitoring center platform.
In a possible embodiment, after the transmission to the monitoring center platform, the method includes:
packaging, by the monitoring center platform, the vehicle license plate data and the violation image into standard data;
and uploading the standard data to the public security traffic police punishment private network.
In a possible embodiment, after the hidden danger event is handled, the method includes:
analyzing the hidden danger frequency, hidden danger occurrence time, and hidden danger handling efficiency of each monitored area.
As a second aspect, the invention also discloses a fire-fighting life passage monitoring system, comprising an AI intelligent module, a monitoring center platform, and a third-party supervision system;
the AI intelligent module is configured to monitor a fire-fighting life passage area, acquire video images of the area, identify occupied objects in the video images, and judge whether they meet the alarm rules; if so, it generates alarm information and a hidden danger image and uploads them to the monitoring center platform;
the monitoring center platform is configured to generate a hidden danger event record and distribute it to the third-party supervision system;
and the third-party supervision system is configured to receive the hidden danger event record and handle the hidden danger event.
In a possible embodiment, the AI intelligent module includes an identification submodule configured to identify whether the occupied object is a vehicle through its contour and license plate elements.
In a possible embodiment, the identification submodule further includes a judgment unit; when the vehicle's dwell time in the fire-fighting life passage area exceeds a threshold, the identification submodule extracts the vehicle license plate data.
In a possible embodiment, the monitoring center platform is connected to the public security traffic police punishment private network through a dedicated application interface.
In a possible embodiment, the monitoring center platform further includes an intelligent analysis unit configured to analyze the hidden danger frequency, hidden danger occurrence time, and hidden danger handling efficiency of each monitored area.
(III) Advantageous effects
The fire-fighting life passage monitoring system and method disclosed by the invention have the following beneficial effects:
1. Video monitoring devices are arranged in the fire-fighting life passage area, and video-based intelligent analysis automatically discovers violations in which the fire truck passage is blocked or the working surface is occupied, issues timely automatic early warnings, and dynamically tracks and eliminates the hidden dangers, thereby realizing automatic hazard detection, automatic notification of the responsible person, and hazard elimination.
2. The monitoring center platform is linked with the fire-fighting and traffic police departments to carry out focused tracking, supervision, and law enforcement on areas where hidden dangers remain unresolved for a long time or occur frequently.
3. The monitoring center platform performs statistical analysis on the hidden danger data, models the areas, places, and times where violations frequently occur, searches for patterns and trends, and provides management decision support for supervision departments.
4. Personnel of the responsible unit can view and handle real-time alarm events and query historical events through an SMS short link and a WeChat mini-program.
Drawings
The embodiments described below with reference to the drawings are exemplary; they are intended to explain and illustrate the present invention and should not be construed as limiting its scope.
FIG. 1 is a flow chart of a fire fighting life path monitoring method disclosed by the present invention;
FIG. 2 is a flow chart of identifying an occupied object in a video image according to the present disclosure;
FIG. 3 is a flow chart of uploading system data to a police system by a monitoring center platform according to the present disclosure;
FIG. 4 is a schematic diagram of a fire fighting life path monitoring system according to the present disclosure.
Reference numerals:
the system comprises an AI intelligent module 500, a monitoring center platform 600, a third party supervision system 700, a camera 510, an identification submodule 520, a judgment unit 521, a GIS screen 610, an intelligent analysis unit 620 and a public security traffic police system 710.
Detailed Description
In order to make the implementation objects, technical solutions and advantages of the present invention clearer, the technical solutions in the embodiments of the present invention will be described in more detail below with reference to the accompanying drawings in the embodiments of the present invention.
It should be noted that: in the drawings, the same or similar reference numerals denote the same or similar elements or elements having the same or similar functions throughout. The embodiments described are some embodiments of the present invention, not all embodiments, and features in embodiments and embodiments in the present application may be combined with each other without conflict. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
In the description of the present invention, it is to be understood that the terms "center", "longitudinal", "lateral", "front", "rear", "left", "right", "vertical", "horizontal", "top", "bottom", "inner", "outer", etc., indicate orientations or positional relationships based on those shown in the drawings, and are used merely for convenience in describing the present invention and for simplifying the description, and do not indicate or imply that the device or element being referred to must have a particular orientation, be constructed and operated in a particular orientation, and therefore, should not be taken as limiting the scope of the present invention.
A first embodiment of the fire-fighting life passage monitoring method disclosed by the present invention is described in detail below with reference to figs. 1 to 3. The embodiment is mainly applied to intelligent fire fighting: based on intelligent video analysis, it automatically discovers violations in which a fire truck passage is blocked or the working surface is occupied, issues timely automatic warnings, and dynamically tracks and eliminates hazards, thereby realizing automatic hazard detection, automatic notification of the responsible person, and hazard elimination.
As shown in fig. 1, the present embodiment mainly includes the following steps:
100. monitoring a fire-fighting life passage area and acquiring video images of the fire-fighting life passage area;
200. identifying occupied objects in the video images and judging whether they meet the alarm rules; if so, generating alarm information and hidden danger images and uploading them to a monitoring center platform;
300. generating a hidden danger event record and distributing the hidden danger event to a third-party supervision system;
400. receiving the hidden danger event and handling the hidden danger event.
Specifically, the fire-fighting life passage area comprises the fire truck passage and the fire truck working surface.
By monitoring the fire truck passage and the fire truck working surface, video images of the monitored area are acquired; occupied objects involved in violations are identified in the video images and judged against the alarm rules; if an object meets an alarm rule, corresponding alarm information and hidden danger image information are generated and uploaded to the monitoring center platform.
Identifying an occupied object includes identifying its category. Different alarm rules are applied according to the category of the occupied object, and the object is judged against the corresponding rule. The occupied object categories may include vehicles and sundries.
When the occupied object is identified as a vehicle, the alarm rule judges the violation by how long the object stays on the fire truck passage and/or the fire truck working surface. For example, with a dwell threshold of 15 minutes, if the object stays longer than 15 minutes, a violation image is extracted and alarm information is generated; if the stay does not exceed 15 minutes, the event is automatically filtered and no alarm information is generated.
When the occupied object is identified as sundries, the alarm rule judges the violation by the area the object occupies on the fire truck passage and/or the fire truck working surface and by how long it actually stays. For example, with an area threshold of 3 square meters and a dwell threshold of 15 minutes, if the occupied area exceeds 3 square meters and the stay exceeds 15 minutes, a violation image is extracted and alarm information is generated; if the occupied area does not exceed 3 square meters or the stay does not exceed 15 minutes, the event is automatically filtered and no alarm information is generated.
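To make the two alarm rules concrete, the following is a minimal Python sketch of the category-specific checks described above, assuming the example thresholds of 15 minutes and 3 square meters; the class and function names are illustrative and not part of the patent.

```python
from dataclasses import dataclass

# Thresholds taken from the example values in the description;
# in practice they would be configurable per monitoring area.
DWELL_THRESHOLD_S = 15 * 60       # 15 minutes
AREA_THRESHOLD_M2 = 3.0           # 3 square meters (sundries only)

@dataclass
class OccupiedObject:
    category: str        # "vehicle" or "sundries"
    dwell_seconds: float # time the object has stayed in the monitored area
    area_m2: float = 0.0 # occupied floor area, used only for sundries

def violates_alarm_rule(obj: OccupiedObject) -> bool:
    """Return True if the occupied object matches the alarm rule for its category."""
    if obj.category == "vehicle":
        # Vehicles: alarm only when the stay exceeds the dwell-time threshold.
        return obj.dwell_seconds > DWELL_THRESHOLD_S
    if obj.category == "sundries":
        # Sundries: alarm only when both the occupied area and the dwell time
        # exceed their thresholds; smaller or short-lived piles are filtered out.
        return obj.area_m2 >= AREA_THRESHOLD_M2 and obj.dwell_seconds > DWELL_THRESHOLD_S
    return False

# Example: a car parked for 20 minutes triggers an alarm; a 2 m^2 pile does not.
assert violates_alarm_rule(OccupiedObject("vehicle", dwell_seconds=20 * 60))
assert not violates_alarm_rule(OccupiedObject("sundries", dwell_seconds=20 * 60, area_m2=2.0))
```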
More specifically, the invention detects the category of occupied objects in the video images of the monitored area based on the R-CNN family of algorithms (Regions with Convolutional Neural Network features). Features are extracted with the first 13 layers of the VGG16 network (Visual Geometry Group), candidate boxes are extracted with a Region Proposal Network (RPN), each candidate box is classified with a softmax classifier, and bounding box regression yields the probability that each candidate box location is a vehicle or sundries, completing the detection of the occupied object category.
The VGG16 network comprises convolutional layers, activation layers, and pooling layers used to extract the feature map of the occupied-object image. An image of size P×Q is first scaled to a fixed size M×N and then fed into the VGG16 network for convolution. There are 13 convolutional layers, each with 3×3 kernels and padding of 1, and 13 activation layers. There are 4 pooling layers, each with a 2×2 pooling matrix and a stride of 2. Since each pooling layer halves the length and width of its input, the feature map after the 4 pooling layers has size (M/16)×(N/16).
The RPN slides a window over the feature map; each feature point corresponds to a region of the video image and generates several candidate regions of different scales. The RPN generates candidate regions (proposals), judges whether each anchor belongs to the foreground or the background with a normalized exponential function (softmax), and corrects the anchors with bounding box regression to obtain accurate candidate boxes. The normalized exponential function performs gradient log-normalization of a finite discrete probability distribution: it maps its inputs to real numbers between 0 and 1 that sum to 1, so the probabilities of the multiple classes also sum to exactly 1. Bounding box regression fine-tunes each candidate region to bring it closer to the correct region.
Using a sliding-window mechanism, the RPN generates 9 anchors of different scales for each sliding reference point; for an (M/16)×(N/16) feature map this yields (M/16)×(N/16)×9 anchors. Anchor computation predicts multiple candidate regions for each feature point: each feature point is mapped back to the center of its receptive field in the original image as a reference point, and k anchors of different scales are placed around that reference point. For example, with 3 areas {128², 256², 512²} and 3 aspect ratios {1:1, 1:2, 2:1}, 3×3 = 9 anchors of different scales are formed. Generating candidate regions with the RPN improves the generation speed. The candidate regions are then mapped onto the feature map to obtain regions of interest, which are pooled to a set size.
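The anchor-generation step described above can be sketched as follows. This is an illustrative implementation assuming the stride of 16, the three areas {128², 256², 512²}, and the three aspect ratios {1:1, 1:2, 2:1} mentioned in the text; the function and parameter names are not from the patent.

```python
import numpy as np

def generate_anchors(feat_h, feat_w, stride=16,
                     areas=(128**2, 256**2, 512**2),
                     ratios=((1, 1), (1, 2), (2, 1))):
    """Generate 3x3 = 9 anchors per feature-map cell, feat_h*feat_w*9 in total.

    Each feature point is mapped back to the centre of its receptive field in
    the original image (x = col*stride + stride/2, y = row*stride + stride/2),
    and 9 boxes with the listed areas and aspect ratios are placed around it.
    """
    base = []
    for area in areas:
        for rw, rh in ratios:
            # width/height with the requested area and width:height ratio rw:rh
            w = np.sqrt(area * rw / rh)
            h = area / w
            base.append((w, h))
    base = np.array(base)                      # (9, 2)

    cols, rows = np.meshgrid(np.arange(feat_w), np.arange(feat_h))
    cx = (cols.ravel() + 0.5) * stride         # anchor centres in image coordinates
    cy = (rows.ravel() + 0.5) * stride
    centres = np.stack([cx, cy], axis=1)       # (feat_h*feat_w, 2)

    # Combine every centre with every base box -> (feat_h*feat_w*9, 4)
    # as (x_min, y_min, x_max, y_max).
    wh = np.repeat(base[None, :, :], centres.shape[0], axis=0)
    c = np.repeat(centres[:, None, :], 9, axis=1)
    anchors = np.concatenate([c - wh / 2, c + wh / 2], axis=2).reshape(-1, 4)
    return anchors

# For an M x N input the feature map is (M/16) x (N/16); e.g. a 640x480 image
# gives a 40x30 map and 40*30*9 = 10800 anchors.
print(generate_anchors(30, 40).shape)          # (10800, 4)
```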
In Region of Interest Pooling, the candidate regions are mapped back to the (M/16)×(N/16) feature map, each candidate region is divided into n parts horizontally and n parts vertically, and max pooling is applied to each part, so the output is n×n even when the candidate regions differ in size. The regions of interest of the set size are then classified to judge whether the corresponding candidate region is a vehicle or sundries, and bounding box regression locates the candidate region to obtain the position of the vehicle or sundries in the image. Concretely, each n×n region of interest is fed into the subsequent network, classified by a fully connected layer followed by a softmax classifier that outputs the probability of vehicle or sundries, and refined by a bounding box regression operation to obtain the position of the vehicle or sundries in the image with higher precision. Based on the category of the occupied object and its position in the surveillance video image, it can then be judged whether the object occupies a fire truck passage and/or a fire truck working surface, and whether it meets the alarm rule.
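The fixed-size region-of-interest pooling described above can be illustrated with the following simplified NumPy sketch. It handles a single candidate region, ignores batching and sub-pixel quantization details, and assumes an n = 7 output grid; the function name is an illustration, not the patent's implementation.

```python
import numpy as np

def roi_max_pool(feature_map, roi, n=7):
    """Max-pool one candidate region on the feature map to a fixed n x n grid.

    feature_map: (C, H, W) array, e.g. the (M/16) x (N/16) VGG16 feature map.
    roi: (x_min, y_min, x_max, y_max) in feature-map coordinates.
    Regardless of the ROI size, the output is always (C, n, n), so candidate
    regions of different sizes can be fed to the same classification head.
    """
    c, _, _ = feature_map.shape
    x0, y0, x1, y1 = [int(round(v)) for v in roi]
    out = np.zeros((c, n, n), dtype=feature_map.dtype)

    # Split the ROI into n parts horizontally and vertically and take the
    # maximum over each cell (each cell is forced to contain at least one pixel).
    xs = np.linspace(x0, x1, n + 1).round().astype(int)
    ys = np.linspace(y0, y1, n + 1).round().astype(int)
    for i in range(n):
        for j in range(n):
            cell = feature_map[:, ys[i]:max(ys[i + 1], ys[i] + 1),
                                  xs[j]:max(xs[j + 1], xs[j] + 1)]
            out[:, i, j] = cell.reshape(c, -1).max(axis=1)
    return out

# Example: a 512-channel 30x40 feature map and one ROI pooled to 7x7.
fmap = np.random.rand(512, 30, 40).astype(np.float32)
pooled = roi_max_pool(fmap, roi=(5, 3, 25, 18), n=7)
print(pooled.shape)   # (512, 7, 7)
```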
Further, the alarm information includes the category of the occupied object and its data, such as the vehicle license plate data.
Further, the hidden danger image includes an image of the violation behavior and a violation evidence image.
The monitoring center platform receives the alarm information and hidden danger image information, generates and records a complete hidden danger event whose main content includes time, place, and photos, and packages and distributes it to the corresponding third-party supervision system. The third-party supervision systems include the public security traffic police system and the community property system, and the alarm information and hidden danger image information are sent to the corresponding system according to the category of the occupied object: a vehicle is reported to the public security traffic police system and the community property system, while sundries are reported to the community property system only. The distribution can notify the managers of the third-party supervision system by SMS, WeChat, and voice notification. For example, when an object occupies a community fire truck passage and the monitoring center platform receives the corresponding alarm information and hidden danger image, the platform's analysis sends the hidden danger event to the community property system, which then notifies the community manager of the corresponding area by SMS, WeChat, voice notification, or other means to go and handle it.
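A minimal sketch of how the monitoring center platform might route and notify a hidden danger event according to the occupied-object category, following the dispatch rules above; the data structure, system identifiers, and notification stubs are illustrative placeholders rather than the platform's actual interfaces.

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import List

@dataclass
class HiddenDangerEvent:
    category: str                 # "vehicle" or "sundries"
    time: datetime
    location: str
    photos: List[str] = field(default_factory=list)   # paths/URLs of hazard images

def route_event(event: HiddenDangerEvent) -> List[str]:
    """Decide which third-party supervision systems receive the event record."""
    if event.category == "vehicle":
        # Illegally parked vehicles go to both the traffic police and the property system.
        return ["public_security_traffic_police", "community_property"]
    # Sundries (stacked objects) are handled by the community property system only.
    return ["community_property"]

def notify(event: HiddenDangerEvent, channels=("sms", "wechat", "voice")) -> None:
    """Placeholder for notifying the managers of the receiving systems."""
    for system in route_event(event):
        for channel in channels:
            # In a real deployment this would call the SMS gateway, WeChat
            # mini-program push, or voice-call service of that system.
            print(f"[{channel}] -> {system}: hazard at {event.location} ({event.time:%H:%M})")

notify(HiddenDangerEvent("vehicle", datetime.now(), "Building 3 fire lane", ["evt_001.jpg"]))
```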
The third-party supervision system receives the hidden danger event, handles it, and feeds the handling result back to the monitoring center platform after the handling is finished.
In one embodiment, as shown in fig. 2, identifying an occupied object in the video image in step 200 includes:
210. identifying whether the occupied object is a vehicle through its contour and license plate elements.
Judging whether the occupied object is a vehicle through key elements such as the contour and the license plate gives a recognition accuracy of up to 95% and effectively avoids false alarms.
In one embodiment, after the occupied object is identified as a vehicle in step 210, the method includes:
211. judging the vehicle dwell time and, when the dwell time exceeds the dwell-time threshold, extracting the vehicle license plate data and transmitting it to the monitoring center platform.
An occupied object may stay on the fire truck passage or working surface only briefly; for example, a driver may stop temporarily to answer a phone call. In such cases the recognition process intelligently judges the vehicle's dwell time: only when a stopped vehicle exceeds the preset dwell-time threshold are its license plate data and illegal-parking image recognized and uploaded to the monitoring center platform. This avoids false alarms caused by temporarily parked vehicles.
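The dwell-time filtering described above might be implemented roughly as follows; the tracker class, its keying by a track ID, and the 15-minute default are assumptions made for illustration.

```python
import time

class DwellTracker:
    """Track how long each recognized vehicle has stayed in the monitored area,
    so that briefly stopped vehicles (e.g. a driver answering a phone call)
    are filtered out instead of raising a false alarm.

    Vehicles are keyed by a tracker ID here; the plate text could be used
    instead once it has been read."""

    def __init__(self, dwell_threshold_s=15 * 60):
        self.first_seen = {}                 # track_id -> first timestamp in the area
        self.dwell_threshold_s = dwell_threshold_s

    def update(self, track_id, now=None):
        """Register a sighting and return True once the stay exceeds the threshold."""
        now = now if now is not None else time.time()
        start = self.first_seen.setdefault(track_id, now)
        return (now - start) > self.dwell_threshold_s

    def clear(self, track_id):
        """Call when the vehicle leaves the fire lane / working surface."""
        self.first_seen.pop(track_id, None)

# Example: the same vehicle seen again 16 minutes later triggers plate extraction.
tracker = DwellTracker()
tracker.update("car_42", now=0)
print(tracker.update("car_42", now=16 * 60))   # True -> extract plate and upload
```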
When the vehicle's license plate is blocked, the illegal-parking image of the vehicle is still recognized and uploaded to the monitoring center.
As shown in fig. 3, in one embodiment, after the transmission to the monitoring center platform in step 211, the method includes:
212. the monitoring center platform packages the vehicle license plate data and the violation image into standard data;
213. uploading the standard data to the public security traffic police punishment private network, generating an illegal-occupation punishment ticket, and imposing the punishment.
Specifically, for violations in which a vehicle blocks a fire truck passage or occupies a fire truck working surface, the monitoring center platform interfaces with the public security traffic police punishment private network. When the managers of the social unit cannot resolve the illegal occupation by communication, the monitoring center platform packages the vehicle license plate data, time information, place information, and violation images in the standard data format and reports them through the dedicated communication interface to the public security traffic police punishment private network, which generates an illegal-occupation punishment ticket and punishes the violator.
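A hedged sketch of the packaging and reporting step: the record fields, endpoint URL, and JSON-over-HTTP transport are placeholders, since the actual standard format and dedicated interface of the traffic police punishment private network are not specified in this document.

```python
import base64
import json
import urllib.request
from datetime import datetime

def package_violation(plate: str, when: datetime, where: str, image_path: str) -> dict:
    """Package plate, time, location and the violation image into a single record.
    The field names are illustrative, not the network's actual standard format."""
    with open(image_path, "rb") as f:
        image_b64 = base64.b64encode(f.read()).decode("ascii")
    return {
        "plate_number": plate,
        "violation_time": when.isoformat(),
        "location": where,
        "violation_type": "occupying_fire_lane",
        "evidence_image": image_b64,
    }

def report_to_police_network(record: dict, endpoint: str) -> int:
    """POST the packaged record through the dedicated application interface.
    The endpoint URL is a placeholder for the private-network interface."""
    req = urllib.request.Request(
        endpoint,
        data=json.dumps(record).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:   # requires access to the private network
        return resp.status

# record = package_violation("苏A12345", datetime.now(), "Community X, Building 3 fire lane", "evt_001.jpg")
# report_to_police_network(record, "https://police-punishment.example/api/violations")
```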
When the vehicle's license plate is blocked, the monitoring center still generates alarm information and violation images normally and distributes them to the relevant managers of the social unit, who follow up and communicate about the violation. If communication fails, a punishment ticket cannot be generated because the plate number is blocked, and the monitoring center platform does not submit the case to the public security traffic police private network.
In one embodiment, step 400, after the hidden danger event is handled, includes:
analyzing the hidden danger frequency, hidden danger occurrence time, and hidden danger handling efficiency of each monitored area.
Specifically, after the third-party supervision personnel handle a hidden danger event and feed the handling result back to the monitoring center platform, the platform can combine the monitoring location information to aggregate the hidden danger frequency, hidden danger occurrence time, hidden danger handling efficiency, and so on for each monitoring point and perform modeling analysis. The analysis further helps prevent hidden dangers from arising; for example, in a community, property staff can be added to patrol around the fire truck passage and working surface during the after-work period to prevent vehicles from parking there.
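The aggregation behind this modeling analysis could look roughly like the following sketch; the event fields and report keys are illustrative assumptions.

```python
from collections import defaultdict
from datetime import datetime, timedelta

def summarize_hazards(events):
    """Aggregate handled hidden danger events per monitoring point.

    `events` is an iterable of dicts with keys: point (monitoring point id),
    created (datetime the hazard was detected), resolved (datetime the third
    party reported it handled). The keys are illustrative."""
    stats = defaultdict(lambda: {"count": 0, "hours": defaultdict(int), "total_handle_s": 0.0})
    for e in events:
        s = stats[e["point"]]
        s["count"] += 1                                   # hidden danger frequency
        s["hours"][e["created"].hour] += 1                # when hazards tend to occur
        s["total_handle_s"] += (e["resolved"] - e["created"]).total_seconds()

    report = {}
    for point, s in stats.items():
        peak_hour = max(s["hours"], key=s["hours"].get)
        report[point] = {
            "hazard_count": s["count"],
            "peak_hour": peak_hour,                        # e.g. extra patrols can be scheduled here
            "avg_handling_minutes": s["total_handle_s"] / s["count"] / 60,
        }
    return report

now = datetime(2020, 5, 19, 18, 30)
print(summarize_hazards([
    {"point": "gate_1", "created": now, "resolved": now + timedelta(minutes=25)},
    {"point": "gate_1", "created": now + timedelta(days=1), "resolved": now + timedelta(days=1, minutes=40)},
]))
```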
Based on the same inventive concept, a first embodiment of a fire-fighting life passage monitoring system provided by the invention is described in detail below with reference to fig. 4. Since the principle of the problem solved by the system is similar to that of the fire-fighting life passage monitoring method, the implementation of the system can refer to the implementation of the method, and repeated details are omitted. The embodiment is mainly applied to intelligent fire fighting: based on intelligent video analysis, it automatically discovers violations in which a fire truck passage is blocked or the working surface is occupied, issues timely automatic warnings, and dynamically tracks and eliminates hazards, thereby realizing automatic hazard detection, automatic notification of the responsible person, and hazard elimination.
As shown in fig. 4, the present embodiment mainly includes: an AI intelligent module 500, a monitoring center platform 600, and a third-party supervision system 700.
The AI intelligent module 500 monitors a fire-fighting life passage area, acquires video images of the area, identifies occupied objects in the video images, and judges whether they meet the alarm rules; if so, it generates alarm information and hidden danger images and uploads them to the monitoring center platform 600. The monitoring center platform 600 generates hidden danger event records and distributes the hidden danger events to the third-party supervision system 700, which receives the hidden danger event records and handles the hidden danger events.
Specifically, the AI intelligent module 500 is connected to the monitoring center platform 600 via the internet and further comprises a camera 510, an identification submodule 520, and a snapshot unit. The camera 510 monitors the fire-fighting life passage area and acquires video images of it; the identification submodule 520 identifies occupied objects involved in violations in the video images, judges whether they meet the alarm rules, and generates corresponding alarm information if they do; the snapshot unit takes a snapshot of the hidden danger event to generate hidden danger image information; and the AI intelligent module 500 automatically uploads the alarm information and hidden danger image information to the monitoring center platform 600 through the internet.
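A minimal sketch of the AI intelligent module's internal loop, wiring together the components named above; the camera, recognizer, and uploader objects are hypothetical stand-ins for the actual hardware and platform interfaces.

```python
import time

class AIModule:
    """Sketch of the AI intelligent module's processing loop:
    camera -> identification submodule -> snapshot unit -> upload over the internet.
    Component roles follow the reference numerals in the description; the
    injected objects are assumptions for illustration."""

    def __init__(self, camera, recognizer, uploader, poll_interval_s=1.0):
        self.camera = camera            # yields video frames of the fire lane area (510)
        self.recognizer = recognizer    # identification submodule (520)
        self.uploader = uploader        # pushes alarms to the monitoring center platform (600)
        self.poll_interval_s = poll_interval_s

    def run_once(self):
        frame = self.camera.read()
        for obj in self.recognizer.detect(frame):          # occupied objects in the frame
            if self.recognizer.violates_alarm_rule(obj):   # category-specific alarm rules
                snapshot = self.camera.snapshot()           # hidden danger image of the event
                self.uploader.send(alarm=obj, image=snapshot)

    def run_forever(self):
        while True:
            self.run_once()
            time.sleep(self.poll_interval_s)
```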
Further, the camera 510 used in this application is rain-proof and dust-proof and can still capture clear video at night or under insufficient light.
The monitoring center platform 600 receives the alarm information and hidden danger image information, generates and records a complete hidden danger event whose main content includes time, place, and photos, and packages and distributes it to the corresponding third-party supervision system 700. The third-party supervision system 700 includes the public security traffic police system 710 and the community property system; alarm information and hidden danger image information are sent to the corresponding supervision system according to the category of the occupied object. The distribution may notify the third-party supervision system 700 by SMS, WeChat, and voice notification. For example, if the occupied objects are items stacked in a community fire truck passage, the monitoring center platform 600 receives the corresponding alarm information and hidden danger images and, after analysis, sends the hidden danger event to the community property system.
The third-party supervision system 700 receives the hidden danger event, handles it, and feeds the handling result back to the monitoring center platform 600 after the handling is completed.
Further, the third-party supervision systems include the public security traffic police system 710 and the community property system.
The monitoring center platform further comprises a GIS screen 610, which fuses and displays, on a GIS map, the real-time hidden danger status of each monitored area, a dynamic list of hidden danger events, statistical-analysis dashboards, and so on. Statistical analysis by the monitoring center platform 600 can visually display the areas, places, and times where violations occur.
In one embodiment, the identification submodule 520 is configured to identify whether the occupied object is a vehicle through its contour and license plate elements.
The identification submodule 520 recognizes occupied objects based on image recognition technology. Target detection algorithms based on convolutional neural networks include one-stage detectors (e.g., YOLO, SSD) and two-stage detectors (e.g., Faster R-CNN, R-CNN). Judging whether the occupied object is a vehicle through key elements such as the contour and the license plate gives a recognition accuracy of up to 95% and effectively avoids false alarms. For example, the invention uses the R-CNN family of algorithms (Regions with Convolutional Neural Network features) to detect the category of occupied objects in the video images of the monitored area: features are extracted with the first 13 layers of the VGG16 network (Visual Geometry Group), candidate boxes are extracted with a Region Proposal Network (RPN), each candidate box is classified with a softmax classifier, and bounding box regression yields the probability that each candidate box location is a vehicle or sundries, completing the detection of the occupied object category.
The VGG16 network comprises convolutional layers, activation layers, and pooling layers used to extract the feature map of the occupied-object image. An image of size P×Q is first scaled to a fixed size M×N and then fed into the VGG16 network for convolution; there are 13 convolutional layers with 3×3 kernels and padding of 1, 13 activation layers, and 4 pooling layers with 2×2 pooling matrices and a stride of 2, so the resulting feature map has size (M/16)×(N/16). The RPN slides a window over the feature map, generates candidate regions of different scales, judges with a normalized exponential function (softmax) whether each anchor belongs to the foreground or the background, and corrects the anchors with bounding box regression to obtain accurate candidate boxes; softmax maps its inputs to real numbers between 0 and 1 that sum to 1, and bounding box regression fine-tunes each candidate region toward the correct region. With a sliding-window mechanism the RPN generates 9 anchors of different scales per sliding reference point, i.e. (M/16)×(N/16)×9 anchors for an (M/16)×(N/16) feature map: each feature point is mapped back to the center of its receptive field in the original image as a reference point, and k anchors of different scales are placed around it, for example 3 areas {128², 256², 512²} and 3 aspect ratios {1:1, 1:2, 2:1}, forming 3×3 = 9 anchors. Generating candidate regions with the RPN improves the generation speed.
The candidate regions are then mapped onto the feature map to obtain regions of interest, which are pooled to a set size. In Region of Interest Pooling, the candidate regions are mapped back to the (M/16)×(N/16) feature map, each candidate region is divided into n parts horizontally and n parts vertically, and max pooling is applied to each part, so the output is n×n even when the candidate regions differ in size. Each n×n region of interest is fed into the subsequent network and classified by a fully connected layer followed by a softmax classifier that outputs the probability of vehicle or sundries, and a bounding box regression operation refines the position of the vehicle or sundries in the image with higher precision. Based on the category of the occupied object and its position in the surveillance video image, it can then be judged whether the object occupies a fire truck passage and/or a fire truck working surface and whether it meets the alarm rule.
In one embodiment, the identification submodule 520 further includes a judgment unit 521; when the vehicle's dwell time in the fire-fighting life passage area exceeds a threshold, the identification submodule extracts the vehicle license plate data.
The judgment rule determines whether an occupied object is in violation by its dwell time on the fire truck passage and/or the fire truck working surface. For example, with a dwell threshold of 15 minutes, if the object stays longer than 15 minutes, a violation image is extracted and alarm information is generated; if the stay does not exceed 15 minutes, the event is automatically filtered and no alarm information is generated.
When an occupied object stops on the fire truck passage or working surface only temporarily, for example when a driver stops briefly to answer a phone call, the recognition process intelligently judges the vehicle's dwell time: only when a stopped vehicle exceeds the preset dwell-time threshold are its license plate data and illegal-parking image recognized and uploaded to the monitoring center platform 600. This avoids false alarms caused by temporarily parked vehicles.
In one embodiment, the monitoring center platform 600 is connected to the public security traffic police punishment private network through a dedicated application interface.
For violations in which a vehicle blocks a fire truck passage or occupies a fire truck working surface, the monitoring center platform 600 interfaces with the public security traffic police punishment private network. When the managers of the social unit cannot resolve the illegal occupation by communication, the monitoring center platform 600 packages the vehicle license plate data and violation images in the standard data format and reports them through the dedicated communication interface to the public security traffic police punishment private network, which generates an illegal-occupation punishment ticket and punishes the violator.
When the vehicle's license plate is blocked, the monitoring center still generates alarm information and violation images normally and distributes them to the relevant managers of the social unit, who follow up and communicate about the violation. If communication fails, the monitoring center platform 600 does not submit the case to the public security traffic police private network, because a punishment ticket cannot be generated while the plate number is blocked.
In one embodiment, the monitoring center platform 600 further includes an intelligent analysis unit 620 configured to analyze the hidden danger frequency, hidden danger occurrence time, and hidden danger handling efficiency of each monitored area.
After the third-party supervision personnel handle a hidden danger event and feed the handling result back to the monitoring center platform 600, the platform can combine the monitoring location information to aggregate the hidden danger frequency, hidden danger occurrence time, hidden danger handling efficiency, and so on for each monitoring point and perform modeling analysis, which further helps prevent hidden dangers from arising; for example, in a community, property staff can be added to patrol around the fire truck passage and working surface during the after-work period to prevent vehicles from parking there.
The above description is only for the specific embodiment of the present invention, but the scope of the present invention is not limited thereto, and any changes or substitutions that can be easily conceived by those skilled in the art within the technical scope of the present invention are included in the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the appended claims.
Claims (10)
1. A fire-fighting life passage monitoring method, characterized by comprising the following steps:
monitoring a fire-fighting life passage area, and acquiring video images of the fire-fighting life passage area;
identifying the category of an occupied object in the video images, applying different alarm rules according to the category of the occupied object, and judging whether the occupied object meets the alarm rules; if so, generating alarm information and a hidden danger image, and uploading the alarm information and the hidden danger image to a monitoring center platform; wherein the occupied object categories include vehicles and sundries; when the category of the occupied object is identified as a vehicle, the alarm rule judges whether the occupied object is in violation according to how long the occupied object stays in the fire-fighting life passage area; when the category of the occupied object is identified as sundries, the alarm rule judges whether the occupied object is in violation according to the area occupied by the occupied object in the fire-fighting life passage area and the actual length of its stay;
generating a hidden danger event record, and distributing the hidden danger event to a third-party supervision system; the hidden danger event includes time, place, and photos; the third-party supervision system includes a public security traffic police system and a community property system, and the alarm information and hidden danger image information are sent to the corresponding supervision system according to the category of the occupied object: if the occupied object is a vehicle, they are sent to the public security traffic police system and the community property system; if the occupied object is sundries, they are sent to the community property system;
receiving the hidden danger event and handling the hidden danger event;
wherein an R-CNN algorithm is used to detect the category of the occupied object in the video images, specifically: extracting features with the first 13 layers of a VGG16 network, extracting candidate boxes with a region proposal network, classifying each candidate box with a softmax classifier, and obtaining, through bounding box regression, the probability that the location of each candidate box is a vehicle or sundries, thereby completing the detection of the occupied object category.
2. The method of claim 1, wherein identifying an occupied object in the video images comprises:
identifying whether the occupied object is a vehicle through its contour and license plate elements.
3. The method of claim 2, wherein, when the occupied object is identified as a vehicle, the method comprises:
judging the vehicle dwell time, and, when the vehicle dwell time exceeds a dwell-time threshold, extracting the vehicle license plate data and transmitting it to the monitoring center platform.
4. The method of claim 3, wherein, after the transmitting to the monitoring center platform, the method comprises:
packaging, by the monitoring center platform, the vehicle license plate data and the violation image to generate standard data;
and uploading the standard data to a public security traffic police punishment private network.
5. The method of claim 1, wherein after handling the hidden danger event, comprising:
and analyzing the hidden danger frequency, hidden danger generation time and hidden danger disposal efficiency of each monitoring area.
6. A fire-fighting life passage monitoring system, characterized by comprising an AI intelligent module, a monitoring center platform, and a third-party supervision system;
the AI intelligent module is configured to monitor a fire-fighting life passage area, acquire video images of the fire-fighting life passage area, identify the category of an occupied object in the video images, apply different alarm rules according to the category of the occupied object, and judge whether the occupied object meets the alarm rules; if so, it generates alarm information and a hidden danger image and uploads them to the monitoring center platform; wherein the occupied object categories include vehicles and sundries; when the category of the occupied object is identified as a vehicle, the alarm rule judges whether the occupied object is in violation according to how long the occupied object stays in the fire-fighting life passage area; when the category of the occupied object is identified as sundries, the alarm rule judges whether the occupied object is in violation according to the area occupied by the occupied object in the fire-fighting life passage area and the actual length of its stay;
the monitoring center platform is configured to generate a hidden danger event record and distribute the hidden danger event to the third-party supervision system; the hidden danger event includes time, place, and photos; the third-party supervision system includes a public security traffic police system and a community property system, and the alarm information and hidden danger image information are sent to the corresponding supervision system according to the category of the occupied object: if the occupied object is a vehicle, they are sent to the public security traffic police system and the community property system; if the occupied object is sundries, they are sent to the community property system;
the third-party supervision system is configured to receive the hidden danger event record and handle the hidden danger event;
the AI intelligent module uses an R-CNN algorithm to detect the category of the occupied object in the video images, specifically: extracting features with the first 13 layers of a VGG16 network, extracting candidate boxes with a region proposal network, classifying each candidate box with a softmax classifier, and obtaining, through bounding box regression, the probability that the location of each candidate box is a vehicle or sundries, thereby completing the detection of the occupied object category.
7. The system of claim 6, wherein the AI intelligent module includes an identification submodule configured to identify whether the occupied object is a vehicle through its contour and license plate elements.
8. The system of claim 7, wherein the identification submodule further comprises a determination unit, wherein the identification submodule extracts the vehicle license plate data when the residence time of the vehicle in the fire-fighting life lane area is greater than a threshold value.
9. The system of claim 6, wherein the monitoring center platform is connected to a police punishment private network through a dedicated application interface.
10. The system of claim 6, wherein the monitoring center platform further comprises an intelligent analysis unit for analyzing the hidden danger frequency, the hidden danger occurrence time, and the hidden danger disposal efficiency of each monitored area.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010423684.1A CN111629181B (en) | 2020-05-19 | 2020-05-19 | Fire-fighting life passage monitoring system and method |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010423684.1A CN111629181B (en) | 2020-05-19 | 2020-05-19 | Fire-fighting life passage monitoring system and method |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111629181A CN111629181A (en) | 2020-09-04 |
CN111629181B (en) | 2021-03-16
Family
ID=72259131
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010423684.1A Active CN111629181B (en) | 2020-05-19 | 2020-05-19 | Fire-fighting life passage monitoring system and method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111629181B (en) |
Families Citing this family (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112511807A (en) * | 2020-12-16 | 2021-03-16 | 绿漫科技有限公司 | Fire fighting access monitoring linkage method |
CN112711996A (en) * | 2020-12-22 | 2021-04-27 | 中通服咨询设计研究院有限公司 | System for detecting occupancy of fire fighting access |
CN112966573A (en) * | 2021-02-19 | 2021-06-15 | 合肥海赛信息科技有限公司 | Intelligent fire fighting access occupation detection method based on video analysis |
CN112633262B (en) * | 2021-03-09 | 2021-05-11 | 微晟(武汉)技术有限公司 | Channel monitoring method and device, electronic equipment and medium |
WO2022198507A1 (en) * | 2021-03-24 | 2022-09-29 | 京东方科技集团股份有限公司 | Obstacle detection method, apparatus, and device, and computer storage medium |
CN113192222B (en) * | 2021-04-14 | 2022-11-15 | 山东理工大学 | Equipment-level alarm strategy method for power line visual alarm |
CN113239832B (en) * | 2021-05-20 | 2023-02-17 | 众芯汉创(北京)科技有限公司 | Hidden danger intelligent identification method and system based on image identification |
CN115131701A (en) * | 2022-06-27 | 2022-09-30 | 盛视科技股份有限公司 | Channel occupying object identification method, channel occupying object identification device and terminal equipment |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN205354350U (en) * | 2016-01-28 | 2016-06-29 | 广州市凯茂信息技术有限公司 | Illegal incident automatic monitoring snapshot system that parks of high definition |
CN106327878A (en) * | 2016-11-16 | 2017-01-11 | 天津市中环系统工程有限责任公司 | Movable illegal parking snapshot system and implementation method |
CN110070729A (en) * | 2019-05-06 | 2019-07-30 | 济南浪潮高新科技投资发展有限公司 | It is a kind of that vehicle detecting system and method are stopped based on the separated of mist calculating |
CN110443178A (en) * | 2019-07-29 | 2019-11-12 | 浙江工贸职业技术学院 | A kind of monitoring system and its method of vehicle violation parking |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110298837B (en) * | 2019-07-08 | 2023-03-24 | 上海天诚比集科技有限公司 | Method for detecting fire-fighting road occupation abnormal object based on interframe difference method |
CN110443196A (en) * | 2019-08-05 | 2019-11-12 | 上海天诚比集科技有限公司 | Fire-fighting road occupying detection method based on SSIM algorithm |
- 2020-05-19: Application CN202010423684.1A filed in China; granted as CN111629181B (status: Active)
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN205354350U (en) * | 2016-01-28 | 2016-06-29 | 广州市凯茂信息技术有限公司 | Illegal incident automatic monitoring snapshot system that parks of high definition |
CN106327878A (en) * | 2016-11-16 | 2017-01-11 | 天津市中环系统工程有限责任公司 | Movable illegal parking snapshot system and implementation method |
CN110070729A (en) * | 2019-05-06 | 2019-07-30 | 济南浪潮高新科技投资发展有限公司 | It is a kind of that vehicle detecting system and method are stopped based on the separated of mist calculating |
CN110443178A (en) * | 2019-07-29 | 2019-11-12 | 浙江工贸职业技术学院 | A kind of monitoring system and its method of vehicle violation parking |
Also Published As
Publication number | Publication date |
---|---|
CN111629181A (en) | 2020-09-04 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111629181B (en) | Fire-fighting life passage monitoring system and method | |
KR102122859B1 (en) | Method for tracking multi target in traffic image-monitoring-system | |
CN110348312A (en) | A kind of area video human action behavior real-time identification method | |
KR102122850B1 (en) | Solution for analysis road and recognition vehicle license plate employing deep-learning | |
KR20100119476A (en) | An outomatic sensing system for traffic accident and method thereof | |
CN102081844A (en) | Traffic video behavior analyzing and alarming server | |
KR102282800B1 (en) | Method for trackig multi target employing ridar and camera | |
CN110619277A (en) | Multi-community intelligent deployment and control method and system | |
CN117319609A (en) | Internet of things big data intelligent video monitoring system and method | |
CN114898297A (en) | Non-motor vehicle illegal behavior determination method based on target detection and target tracking | |
CN111785050A (en) | Expressway fatigue driving early warning device and method | |
CN118038674A (en) | Monitoring system real-time dynamic analysis warning system based on big data | |
CN111783700A (en) | Automatic recognition early warning method and system for road foreign matters | |
CN113850995A (en) | Event detection method, device and system based on tunnel radar vision data fusion | |
CN112598865B (en) | Monitoring method and system for preventing cable line from being damaged by external force | |
CN117978969A (en) | AI video management platform applied to aquaculture | |
CN113408319B (en) | Urban road anomaly perception processing method, device, system and storage medium | |
KR20200086015A (en) | Situation linkage type image analysis device | |
CN117676084A (en) | Intelligent event processing system and method | |
CN113076821A (en) | Event detection method and device | |
CN116208633A (en) | Artificial intelligence service platform system, method, equipment and medium | |
CN115359416A (en) | Intelligent early warning system for railway freight yard sky eye | |
CN115272924A (en) | Treatment system based on modularized video intelligent analysis engine | |
Jaspin et al. | Accident Detection and Severity Classification System using YOLO Model | |
CN111382697A (en) | Image data processing method and first electronic equipment |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |