CN112418040A - Binocular vision-based method for detecting and identifying fire fighting passage occupied by barrier - Google Patents

Binocular vision-based method for detecting and identifying fire fighting passage occupied by barrier

Info

Publication number: CN112418040A (application CN202011276966.XA)
Authority: CN (China)
Prior art keywords: fire fighting, image, camera, obstacle, distortion
Legal status: Granted (the status listed is an assumption and is not a legal conclusion)
Other languages: Chinese (zh)
Other versions: CN112418040B (en)
Inventors: 何利文 (He Liwen), 包跃 (Bao Yue)
Current assignee: Nanjing University of Posts and Telecommunications
Original assignee: Nanjing University of Posts and Telecommunications
Application filed by Nanjing University of Posts and Telecommunications
Priority to CN202011276966.XA; application granted as CN112418040B
Legal status: Active

Classifications

    • G06V 20/58 - Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • G06T 5/80 - Image enhancement or restoration; geometric correction
    • G06T 7/80 - Image analysis; analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G06T 2207/10004 - Image acquisition modality; still image; photographic image
    • G06T 2207/20084 - Special algorithmic details; artificial neural networks [ANN]
    • G06T 2207/20228 - Special algorithmic details; disparity calculation for image-based rendering
    • Y02T 10/40 - Climate change mitigation technologies related to transportation; internal combustion engine [ICE] based vehicles; engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)
  • Traffic Control Systems (AREA)

Abstract

The invention discloses a binocular vision-based method for detecting and identifying obstacles occupying a fire fighting passage, belonging to the technical field of visual recognition. The method comprises the following steps: collecting left and right images; obtaining the extrinsic parameter E, the intrinsic parameter matrices, and the distortion coefficients, and eliminating distortion; rectifying the left and right images with the intrinsic parameter matrices and the extrinsic parameter E to obtain images L2 and R2; matching L2 and R2 to obtain their disparity map D; marking out the detection area between the two road lines; obtaining the candidate-box coordinate information and category information of each obstacle; and obtaining the distance information of each obstacle. The method can monitor, detect, track, and alarm in real time on targets that are about to enter or have already entered the fire fighting passage, can obtain the distance of each obstacle, and can issue an alarm for obstacles that occupy the passage for a long time, which helps keep the fire fighting passage clear, prevents disasters, and protects the personal safety of pedestrians.

Description

Binocular vision-based method for detecting and identifying fire fighting passage occupied by barrier
Technical Field
The invention relates to the technical field of visual recognition, and in particular to a binocular vision-based method for detecting and identifying obstacles occupying a fire fighting passage.
Background
A fire fighting passage is also called a life passage. Because residents' fire safety awareness is weak, accidents in which rescue is delayed by a blocked fire fighting passage are far too common. Supervising fire fighting passages is a major difficulty in residential communities, scenic parks, shopping malls, and many other public places. Taking residential communities as an example, the common hidden dangers of fire fighting passages are:
(I) Unauthorized construction occupying the fire fighting passage. This phenomenon is most common in old communities, rural areas, and urban villages. Residents privately build makeshift carports, storage sheds, and the like, many of them made of flammable and combustible materials, which easily spread fire; such illegal structures also occupy the fire fighting passage so that fire engines cannot pass normally;
(II) Illegal parking occupying the fire fighting passage. With the popularization of private cars, parking in residential communities is difficult to solve properly, and buying or renting a parking space or garage costs residents a great deal of money. As a result, many residents, for convenience or to save money, habitually park their cars in the fire fighting passage. Over time, illegal lane occupation has become the biggest problem in managing community fire safety passages;
(III) Unauthorized stacking occupying the fire fighting passage. Community inspections often find residents stacking sundries in stairwells without permission; some pile up large amounts of decoration material and broken tiles, seriously blocking the fire fighting passage. Some residents even charge electric bicycles in shared corridors, which not only occupies the passage but also creates a serious fire safety hazard.
Keeping fire fighting passages clear cannot be achieved by one-off centralized campaigns; it must be normalized, with daily supervision done well so that hidden fire dangers are discovered and resolved in time. The main existing approaches on the market are monocular vision inspection and lidar inspection. Monocular vision is slow at processing still images and has large distance-estimation error; on noisy images, monocular obstacle detection misidentifies objects and raises frequent false alarms. Flat patterns on the ground, such as water stains, reflections, or even images painted on the road, also cause misrecognition, so the accuracy of monocular vision is low. Lidar, on the other hand, is expensive and has great difficulty classifying obstacles; there is no ready method for identifying and classifying the obstacles it detects.
Disclosure of Invention
Aiming at these problems, and at the shortcomings of monocular-vision and lidar obstacle detection in the field of fire fighting passage safety, the invention provides a binocular vision-based method for detecting and identifying whether an obstacle occupies a fire fighting passage. The method detects in real time whether an obstacle occupies the fire fighting passage area, which would create a safety hazard and disturb traffic order. If an obstacle occupies the passage, a warning is issued to prompt the relevant staff to clear the obstruction in time.
The technical scheme of the invention is as follows: a binocular vision-based method for detecting and identifying a fire fighting access occupied by an obstacle comprises the following specific steps:
step (1.1), two images of the fire fighting channel are obtained through a left binocular camera and a right binocular camera: namely, left image L1 and right image R1;
step (1.2), calibrating the left and right binocular cameras to obtain the extrinsic parameter E of the cameras; calibrating the obtained left and right images to obtain each camera's intrinsic parameter matrix and the five corresponding distortion coefficients D = (k1, k2, p1, p2, k3);
step (1.3), correcting a left image L1 and a right image R1 by using the obtained internal parameter matrix and external parameter E for each frame of a video shot by a left binocular camera and a right binocular camera to obtain a corrected left image L2 and a corrected right image R2;
step (1.4), matching the corrected left image L2 and the right image R2 to obtain a disparity map D of the left image and the right image, and calculating a depth image through a disparity and depth conversion formula;
step (1.5), performing line detection on the set of pixels with large gradient change in the image by using the LSD line segment detection algorithm with the gradient information and level lines of the image, so as to obtain the road lines on both sides of the fire fighting passage surface and mark out the detection area A between the two road lines;
step (1.6), improving the Faster-RCNN network: replacing VGG16 in the Faster-RCNN network with ResNet, and replacing the original NMS with Soft-NMS;
inputting the obtained left-camera image into the improved Faster-RCNN network model, generating candidate boxes through the RPN network, and obtaining the candidate-box coordinate information and category information of the obstacles in the candidate boxes through the classification loss and the bounding-box regression loss;
and (1.7) taking the depth value at the center point of each foreign-object window in the obtained depth image as the distance of the corresponding obstacle occupying the fire fighting passage, thereby obtaining the distance information of the obstacles.
In step (1.2), the distortion coefficient includes: radial distortion and tangential distortion;
wherein the radial distortion comprises barrel distortion and pincushion distortion; the radial distortion is eliminated by correcting according to the following formula:
x_corr = x_dis(1 + k1·r² + k2·r⁴ + k3·r⁶)
y_corr = y_dis(1 + k1·r² + k2·r⁴ + k3·r⁶)
in the formula, x_dis and y_dis denote the distorted coordinates, x_corr and y_corr denote the corrected coordinates, k1, k2, k3 denote the radial distortion parameters, and p1, p2 denote the tangential distortion parameters.
In step (1.4), the conversion formula between the camera's disparity and depth is derived from the obtained disparity map through the geometric relation of parallel binocular vision:
(b - d)/b = (z - f)/z
z·(b - d) = b·(z - f)
z = f·b/d
where z denotes the depth map, f denotes the normalized camera focal length, b denotes the baseline of the left and right cameras, and d = x_l - x_r denotes the disparity, the relationship between a pixel (x_l, y_l) in the left image and its corresponding point (x_r, y_r) in the right image; substituting the cameras' focal length and baseline and the obtained left-right disparity into the conversion formula yields the depth map z.
In step (1.6), the Faster-RCNN network is improved as follows: first, replacing the feature network, i.e. replacing VGG16 in Faster-RCNN with ResNet; second, replacing the NMS post-processing with Soft-NMS; and third, enlarging the convolution kernels, changing the original 3 × 3 kernels to 5 × 5 kernels.
In step (1.7), after the distance information of an obstacle is obtained, target tracking is used to judge whether the obstacle occupies the fire fighting passage for a long time; if so, a warning is issued; if not, the passage is judged to be only occasionally occupied.
The invention has the following beneficial effects: the method detects fire fighting passage obstacles based on binocular vision, which is cheaper than radar-based monitoring and detects better than monocular vision. It can monitor, detect, track, and alarm in real time on targets that are about to enter or have already entered the fire fighting passage, can obtain both the distance and the category of each obstacle, and can issue an alarm for obstacles occupying the passage for a long time, which helps keep the passage clear, prevents disasters, and is of great significance for protecting the personal safety of pedestrians.
Drawings
FIG. 1 is a flow chart of the architecture of the present invention;
fig. 2 is a schematic plan view of the left and right cameras for the disparity-to-depth conversion in the present invention.
Detailed Description
In order to more clearly illustrate the technical solution of the present invention, the following detailed description is made with reference to the accompanying drawings:
as shown in the figure; a binocular vision-based method for detecting and identifying a fire fighting access occupied by an obstacle comprises the following specific steps:
step (1.1), two images of the fire fighting channel are obtained through a left binocular camera and a right binocular camera: namely, left image L1 and right image R1;
step (1.2), calibrating the left and right binocular cameras to obtain the extrinsic parameter E of the cameras; calibrating the obtained left and right images to obtain each camera's intrinsic parameter matrix and the five corresponding distortion coefficients D = (k1, k2, p1, p2, k3);
the distortion coefficients comprise radial distortion and tangential distortion; tangential distortion arises because manufacturing defects leave the lens not exactly parallel to the image plane, and for the fire fighting passage setting only radial distortion is considered, because the influence of tangential distortion is far smaller than that of radial distortion;
wherein radial distortion generally takes two forms, barrel distortion and pincushion distortion; the radial distortion is eliminated by correcting according to the following formulas:
x_corr = x_dis(1 + k1·r² + k2·r⁴ + k3·r⁶)
y_corr = y_dis(1 + k1·r² + k2·r⁴ + k3·r⁶)
in the formula, x_dis and y_dis denote the distorted coordinates, x_corr and y_corr denote the corrected coordinates, k1, k2, k3 denote the radial distortion parameters, and p1, p2 denote the tangential distortion parameters.
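As an illustration of this correction (a sketch, not code from the patent), OpenCV's undistort implements exactly this radial-plus-tangential model; the intrinsic matrix and coefficient values below are assumed placeholders:

```python
import cv2
import numpy as np

# Placeholder calibration results for illustration; real values come from
# the calibration in step (1.2).
K = np.array([[700.0,   0.0, 640.0],
              [  0.0, 700.0, 360.0],
              [  0.0,   0.0,   1.0]])               # intrinsic parameter matrix
D = np.array([-0.30, 0.12, 0.001, -0.0005, -0.02])  # (k1, k2, p1, p2, k3)

img = cv2.imread("left_frame.png")       # one frame from the left camera (assumed file)
undistorted = cv2.undistort(img, K, D)   # applies the correction formulas above
```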
Step (1.3), correcting the left image L1 and the right image R1 by using the obtained internal parameter matrix and external parameter E for each frame of the video shot by the binocular camera to obtain a corrected left image L2 and a corrected right image R2;
step (1.4), matching the corrected left image L2 and the right image R2 to obtain a disparity map D of the left image and the right image, and calculating a depth image through a disparity and depth conversion formula;
according to the obtained disparity map, the conversion formula between the camera's disparity and depth is derived through the geometric relation of parallel binocular vision:
(b - d)/b = (z - f)/z
z·(b - d) = b·(z - f)
z = f·b/d
where z denotes the depth map, f denotes the normalized camera focal length, b denotes the baseline of the left and right cameras (which may be obtained from prior information or from camera calibration), and d = x_l - x_r denotes the disparity, the relationship between a pixel (x_l, y_l) in the left image and its corresponding point (x_r, y_r) in the right image; substituting the known focal length and baseline and the obtained disparity into the formula yields the depth map z.
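The same computation in code: a hedged sketch that runs OpenCV's semi-global block matcher on an already-rectified pair and then applies z = f·b/d; the matcher settings and the values of f and b are assumptions:

```python
import cv2
import numpy as np

imgL = cv2.imread("left_rect.png", cv2.IMREAD_GRAYSCALE)   # rectified left image L2
imgR = cv2.imread("right_rect.png", cv2.IMREAD_GRAYSCALE)  # rectified right image R2

sgbm = cv2.StereoSGBM_create(minDisparity=0, numDisparities=128, blockSize=5)
disparity = sgbm.compute(imgL, imgR).astype(np.float32) / 16.0  # SGBM output is fixed-point x16

f = 700.0    # focal length in pixels, from calibration (assumed here)
b = 0.12     # baseline in metres, from calibration (assumed here)

valid = disparity > 0
depth = np.zeros_like(disparity)
depth[valid] = f * b / disparity[valid]  # depth map z = f·b/d, in metres
```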
Step (1.5), performing linear detection on a pixel point set with large gradient change in an image by using an LSD linear detection algorithm and utilizing gradient information and row and column lines in the image, further obtaining road lines on two sides of a fire fighting access road surface, and marking out an inner detection area A of the two road lines;
step (1.6), improving the Faster-RCNN network: replacing VGG16 in the Faster-RCNN network with ResNet, and replacing the original NMS with Soft-NMS. First, a better feature network is adopted, replacing VGG16 in Faster-RCNN with ResNet: with ResNet-101 in place of VGG-16, mAP improves from 73.2% to 76.4% on PASCAL VOC 2007 and from 70.4% to 73.8% on PASCAL VOC 2012. Second, the NMS post-processing is improved with Soft-NMS ("Improving Object Detection With One Line of Code"): standard NMS sorts the detection boxes by score, keeps the highest-scoring box, and deletes every other box whose overlap with it exceeds a set proportion; in other words, it crudely zeroes the score of any box whose IoU with the top-scoring box exceeds the threshold, whereas Soft-NMS replaces each such score with a slightly decayed one instead of zeroing it. Third, a larger convolution kernel is adopted, changing the original 3 × 3 kernels to 5 × 5: the obstacle targets in the monitored fire fighting passage are all large, and a small kernel yields small features that are not salient for large targets.
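The Soft-NMS idea fits in a few lines of NumPy. The sketch below is the Gaussian-decay variant of the cited paper; sigma and the pruning threshold are assumed values, not parameters given by the patent:

```python
import numpy as np

def soft_nms(boxes, scores, sigma=0.5, score_thresh=0.001):
    """Gaussian Soft-NMS: decay the scores of boxes overlapping the current
    best box by exp(-IoU^2 / sigma) instead of zeroing them.
    boxes: (N, 4) array of (x1, y1, x2, y2); scores: (N,) array."""
    scores = scores.astype(np.float32).copy()
    keep, idxs = [], np.arange(len(scores))
    while len(idxs) > 0:
        best = idxs[np.argmax(scores[idxs])]
        keep.append(best)
        idxs = idxs[idxs != best]
        if len(idxs) == 0:
            break
        # IoU between the best box and all remaining boxes
        x1 = np.maximum(boxes[best, 0], boxes[idxs, 0])
        y1 = np.maximum(boxes[best, 1], boxes[idxs, 1])
        x2 = np.minimum(boxes[best, 2], boxes[idxs, 2])
        y2 = np.minimum(boxes[best, 3], boxes[idxs, 3])
        inter = np.maximum(0.0, x2 - x1) * np.maximum(0.0, y2 - y1)
        area_b = (boxes[best, 2] - boxes[best, 0]) * (boxes[best, 3] - boxes[best, 1])
        area_r = (boxes[idxs, 2] - boxes[idxs, 0]) * (boxes[idxs, 3] - boxes[idxs, 1])
        iou = inter / (area_b + area_r - inter)
        scores[idxs] *= np.exp(-(iou ** 2) / sigma)  # soft decay instead of hard zero
        idxs = idxs[scores[idxs] > score_thresh]     # prune near-zero scores
    return keep
```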
Inputting the obtained left-camera image into the improved Faster-RCNN network model yields the corresponding feature maps. Within the detection area A obtained in step (1.5), Faster-RCNN generates candidate boxes with its internal Region Proposal Network (RPN): a 5 × 5 sliding window moves over each feature map from left to right and top to bottom, each window position corresponds to a center point on the original image, and at each center point k = 9 anchors are generated by combining three scales (areas of 128 × 128, 256 × 256, and 512 × 512) with three aspect ratios {1:1, 1:2, 2:1}; for each position the class-score (cls) layer outputs 2k scores and the bounding-box regression (reg) layer outputs 4k coordinates. For a 1000 × 600 × 3 frame captured by the left camera there are therefore about 60 × 40 × 9 ≈ 20k anchors, of which roughly 6k remain after ignoring anchors that cross the image boundary. The candidate boxes generated by the RPN overlap heavily, so Soft-NMS non-maximum suppression is applied based on their cls scores, with the IoU threshold set to 0.7, leaving about 2k candidate boxes per image. Faster-RCNN takes 256 of these candidate boxes as training samples at a positive-to-negative ratio of 1:1, and the RPN-generated candidate boxes are projected onto the feature map to obtain the corresponding feature matrices. Each feature matrix is scaled by the ROI pooling layer to a 7 × 7 feature map, which is then flattened and passed through a series of fully connected layers to obtain the prediction; the RPN's classification loss is computed with binary cross-entropy and its bounding-box regression loss with the Smooth L1 loss function, which yields the predicted category of each obstacle in the fire fighting passage and the predicted bounding-box regression parameters (the size and coordinates of the box) of each obstacle occupying the passage. The obtained bounding-box coordinates of each obstacle are its window coordinates in the left camera's depth image from step (1.4);
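The patent's modified network (ResNet backbone, 5 × 5 kernels, Soft-NMS) is not published, but the surrounding flow (feed the left-camera image to a ResNet-backed Faster R-CNN and keep the scored, labelled boxes) can be approximated with torchvision's stock model; the confidence threshold below is an assumption:

```python
import torch
import torchvision

# Off-the-shelf Faster R-CNN with a ResNet-50 FPN backbone, standing in for
# the patent's modified network (requires torchvision >= 0.13).
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

img = torchvision.io.read_image("left_frame.png").float() / 255.0
with torch.no_grad():
    pred = model([img])[0]               # dict with 'boxes', 'labels', 'scores'

for box, label, score in zip(pred["boxes"], pred["labels"], pred["scores"]):
    if score > 0.7:                      # assumed confidence threshold
        print(label.item(), round(score.item(), 2), box.tolist())
```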
step (1.7), the depth value at the center point of each fire-passage foreign-object window in the obtained depth image is taken as the distance of the corresponding obstacle occupying the fire fighting passage, which gives the distance information of the obstacles; target tracking is then used to judge whether an obstacle occupies the passage for a long time, and if so a warning is issued, otherwise the passage is judged to be only occasionally occupied. Obstacles that only pass through, such as an electric bicycle riding across the passage, are filtered out by the tracking: with surveillance video at 30 frames per second, if an obstacle appears in 30,000 consecutive frames (roughly 17 minutes), it is judged to occupy the fire fighting passage for a long time and to constitute a fire safety hazard, the category of the occupying obstacle is reported, and a warning is sent to remind the relevant staff to clear the obstruction in time; otherwise, the passage is judged to be only occasionally occupied.
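The long-term-occupation rule reduces to a per-track counter of consecutive frames; here is a sketch under the assumption that detections have already been associated into tracks (for example by IoU matching or a tracker such as SORT):

```python
FPS = 30                   # frame rate of the surveillance video
LONG_TERM_FRAMES = 30_000  # threshold from the description (about 17 minutes)

consecutive = {}           # track id -> consecutive frames seen inside area A

def update(track_ids_in_area):
    """Call once per frame with the IDs of tracked obstacles inside area A."""
    for tid in track_ids_in_area:
        consecutive[tid] = consecutive.get(tid, 0) + 1
        if consecutive[tid] == LONG_TERM_FRAMES:
            minutes = LONG_TERM_FRAMES / FPS / 60
            print(f"ALARM: track {tid} has occupied the passage for {minutes:.0f} minutes")
    # a track absent in this frame loses its streak (occasional occupation)
    for tid in list(consecutive):
        if tid not in track_ids_in_area:
            del consecutive[tid]
```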
The specific application case is as follows:
an open fire channel of a certain cell is often occupied by some owners as a parking space for a long time before, and if a fire or other emergencies happen in the cell, residents in the cell can not evacuate to a safety area in time at the first time due to the blockage of the fire channel; after the method is adopted by the property of the community, a binocular camera is installed in a fire fighting channel area, an improved Faster-RCNN network model is added in an original monitoring system, a fire fighting channel is monitored, detected, tracked and alarmed in real time, and the monitoring system can timely send out an alarm for a private car or other obstacles occupying the fire fighting channel for a long time to remind related security personnel to clear and dredge. Compared with the prior art, the potential safety hazard is effectively reduced.
Finally, it should be understood that the embodiments described herein merely illustrate the principles of the invention; other variations are possible within its scope, and, by way of example and not limitation, alternative configurations consistent with these teachings may be adopted; accordingly, the invention is not limited to the embodiments explicitly described and depicted.

Claims (5)

1. A binocular vision-based method for detecting and identifying a fire fighting access occupied by an obstacle is characterized by comprising the following specific steps:
step (1.1), two images of the fire fighting channel are obtained through a left binocular camera and a right binocular camera: namely, left image L1 and right image R1;
step (1.2), calibrating the left and right binocular cameras to obtain the extrinsic parameter E of the cameras; calibrating the obtained left and right images to obtain each camera's intrinsic parameter matrix and the five corresponding distortion coefficients D = (k1, k2, p1, p2, k3);
step (1.3), correcting a left image L1 and a right image R1 by using the obtained internal parameter matrix and external parameter E for each frame of a video shot by a left binocular camera and a right binocular camera to obtain a corrected left image L2 and a corrected right image R2;
step (1.4), matching the corrected left image L2 and the right image R2 to obtain a disparity map D of the left image and the right image, and calculating a depth image through a disparity and depth conversion formula;
step (1.5), performing line detection on the set of pixels with large gradient change in the image by using the LSD line segment detection algorithm with the gradient information and level lines of the image, so as to obtain the road lines on both sides of the fire fighting passage surface and mark out the detection area A between the two road lines;
step (1.6), improving the Faster-RCNN network: replacing VGG16 in the Faster-RCNN network with ResNet, and replacing the original NMS with Soft-NMS;
inputting the obtained left-camera image into the improved Faster-RCNN network model, generating candidate boxes through the RPN network, and obtaining the candidate-box coordinate information and category information of the obstacles in the candidate boxes through the classification loss and the bounding-box regression loss;
and (1.7) taking the depth value at the center point of each foreign-object window in the obtained depth image as the distance of the corresponding obstacle occupying the fire fighting passage, thereby obtaining the distance information of the obstacles.
2. The binocular vision-based method for detecting and identifying an obstacle occupying a fire fighting passage according to claim 1, wherein in step (1.2), the distortion coefficients comprise radial distortion and tangential distortion;
wherein the radial distortion comprises barrel distortion and pincushion distortion, and is eliminated by correcting according to the following formulas:
x_corr = x_dis(1 + k1·r² + k2·r⁴ + k3·r⁶)
y_corr = y_dis(1 + k1·r² + k2·r⁴ + k3·r⁶)
in the formula, x_dis and y_dis denote the distorted coordinates, x_corr and y_corr denote the corrected coordinates, k1, k2, k3 denote the radial distortion parameters, and p1, p2 denote the tangential distortion parameters.
3. The binocular vision-based method for detecting and identifying an obstacle occupying a fire fighting passage according to claim 1, wherein in step (1.4), the conversion formula between the camera's disparity and depth is derived from the obtained disparity map through the geometric relation of parallel binocular vision:
(b - d)/b = (z - f)/z
z·(b - d) = b·(z - f)
z = f·b/d
where z denotes the depth map, f denotes the normalized camera focal length, b denotes the baseline of the left and right cameras, and d = x_l - x_r denotes the disparity, the relationship between a pixel (x_l, y_l) in the left image and its corresponding point (x_r, y_r) in the right image; substituting the cameras' focal length and baseline and the obtained disparity into the conversion formula yields the depth map z.
4. The binocular vision-based method for detecting and identifying an obstacle occupying a fire fighting passage according to claim 1, wherein in step (1.6), the Faster-RCNN network is improved as follows: first, replacing the feature network, i.e. replacing VGG16 in Faster-RCNN with ResNet; second, replacing the NMS post-processing with Soft-NMS; and third, enlarging the convolution kernels, changing the original 3 × 3 kernels to 5 × 5 kernels.
5. The binocular vision-based method for detecting and identifying an obstacle occupying a fire fighting passage according to claim 1, wherein in step (1.7), obtaining the distance information of the obstacle comprises: judging, through target tracking, whether an obstacle occupying the fire fighting passage occupies it for a long time; if so, issuing a warning; if not, judging that the passage is occasionally occupied.
Application CN202011276966.XA (priority and filing date 2020-11-16): Binocular vision-based method for detecting and identifying fire fighting passage occupied by barrier - Active, granted as CN112418040B (en)

Priority Applications (1)

Application Number: CN202011276966.XA - Priority Date: 2020-11-16 - Filing Date: 2020-11-16
Title: Binocular vision-based method for detecting and identifying fire fighting passage occupied by barrier

Publications (2)

Publication Number - Publication Date
CN112418040A - 2021-02-26
CN112418040B - 2022-08-26

Family

Family ID: 74830894

Family Applications (1)

Application Number: CN202011276966.XA (Active) - Priority Date: 2020-11-16 - Filing Date: 2020-11-16
Title: Binocular vision-based method for detecting and identifying fire fighting passage occupied by barrier

Country Status (1)

CN - CN112418040B (en)


Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105678787A (en) * 2016-02-03 2016-06-15 西南交通大学 Heavy-duty lorry driving barrier detection and tracking method based on binocular fisheye camera
CN109035322A * 2018-07-17 2018-12-18 重庆大学 Obstacle detection and recognition method based on binocular vision
CN110765922A * 2019-10-18 2020-02-07 华南理工大学 Binocular vision obstacle detection system for AGV

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
方博文 et al., "Research on obstacle distance detection in driving based on binocular vision", 《机械设计与制造》 (Machinery Design & Manufacture) *

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112633262A (en) * 2021-03-09 2021-04-09 微晟(武汉)技术有限公司 Channel monitoring method and device, electronic equipment and medium
CN112633262B (en) * 2021-03-09 2021-05-11 微晟(武汉)技术有限公司 Channel monitoring method and device, electronic equipment and medium
CN113128347A (en) * 2021-03-24 2021-07-16 北京中科慧眼科技有限公司 RGB-D fusion information based obstacle target classification method and system and intelligent terminal
WO2022198507A1 (en) * 2021-03-24 2022-09-29 京东方科技集团股份有限公司 Obstacle detection method, apparatus, and device, and computer storage medium
CN113128347B (en) * 2021-03-24 2024-01-16 北京中科慧眼科技有限公司 Obstacle target classification method and system based on RGB-D fusion information and intelligent terminal
CN113156421A (en) * 2021-04-07 2021-07-23 南京邮电大学 Obstacle detection method based on information fusion of millimeter wave radar and camera
CN113147746A (en) * 2021-05-20 2021-07-23 宝能(广州)汽车研究院有限公司 Method and device for detecting ramp parking space

Also Published As

Publication number - Publication date
CN112418040B (en) - 2022-08-26


Legal Events

Code - Description
PB01 - Publication
SE01 - Entry into force of request for substantive examination
GR01 - Patent grant