CN111950483A - Vision-based vehicle front collision prediction method - Google Patents
- Publication number
- CN111950483A (application CN202010830942.8A)
- Authority
- CN
- China
- Prior art keywords
- ttc
- key points
- vision
- prediction method
- vehicle front
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Images
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/56—Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
- G06V20/58—Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
- G06V20/584—Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads of vehicle lights or traffic lights
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60T—VEHICLE BRAKE CONTROL SYSTEMS OR PARTS THEREOF; BRAKE CONTROL SYSTEMS OR PARTS THEREOF, IN GENERAL; ARRANGEMENT OF BRAKING ELEMENTS ON VEHICLES IN GENERAL; PORTABLE DEVICES FOR PREVENTING UNWANTED MOVEMENT OF VEHICLES; VEHICLE MODIFICATIONS TO FACILITATE COOLING OF BRAKES
- B60T7/00—Brake-action initiating means
- B60T7/12—Brake-action initiating means for automatic initiation; for initiation not subject to will of driver or passenger
- B60T7/22—Brake-action initiating means for automatic initiation; for initiation not subject to will of driver or passenger initiated by contact of vehicle, e.g. bumper, with an external object, e.g. another vehicle, or by means of contactless obstacle detectors mounted on the vehicle
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60W—CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
- B60W40/00—Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models
- B60W40/02—Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models related to ambient conditions
- B60W40/04—Traffic conditions
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60W—CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
- B60W50/00—Details of control systems for road vehicle drive control not related to the control of a particular sub-unit, e.g. process diagnostic or vehicle driver interfaces
- B60W50/08—Interaction between the driver and the control system
- B60W50/14—Means for informing the driver, warning the driver or prompting a driver intervention
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60W—CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
- B60W50/00—Details of control systems for road vehicle drive control not related to the control of a particular sub-unit, e.g. process diagnostic or vehicle driver interfaces
- B60W50/08—Interaction between the driver and the control system
- B60W50/14—Means for informing the driver, warning the driver or prompting a driver intervention
- B60W50/16—Tactile feedback to the driver, e.g. vibration or force feedback to the driver on the steering wheel or the accelerator pedal
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/22—Matching criteria, e.g. proximity measures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/25—Determination of region of interest [ROI] or a volume of interest [VOI]
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60W—CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
- B60W2554/00—Input parameters relating to objects
- B60W2554/80—Spatial relation or speed relative to objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V2201/00—Indexing scheme relating to image or video recognition or understanding
- G06V2201/07—Target detection
Landscapes
- Engineering & Computer Science (AREA)
- Automation & Control Theory (AREA)
- Mechanical Engineering (AREA)
- Transportation (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Data Mining & Analysis (AREA)
- Human Computer Interaction (AREA)
- Multimedia (AREA)
- General Engineering & Computer Science (AREA)
- Evolutionary Computation (AREA)
- Evolutionary Biology (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Bioinformatics & Computational Biology (AREA)
- Mathematical Physics (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Artificial Intelligence (AREA)
- Life Sciences & Earth Sciences (AREA)
- Image Analysis (AREA)
- Traffic Control Systems (AREA)
Abstract
The invention provides a vision-based vehicle front collision prediction method, which comprises the steps of: locating the front vehicle with a visual target detection algorithm and generating a region of interest; preprocessing the image and detecting key points inside the region of interest; associating these key points with those detected in the previous frame and matching identical key points to generate matched pairs; screening the matched pairs to obtain a set of best matches; and calculating the predicted time to collision (TTC), comparing it against set thresholds, and responding according to the working condition. The invention solves the vehicle front collision early-warning problem with a pure vision scheme: the TTC is obtained through computer vision techniques, the calculated result is more accurate, and the scheme imposes no restriction on road conditions.
Description
Technical Field
The invention relates to the technical field of vehicle safety early warning, and in particular to a vision-based vehicle front collision prediction method.
Background
With the rapid growth in the number of vehicles on the road, traffic accidents are also rising year by year, and consumers' demands on vehicle safety increase daily. If the driver can be warned before a road hazard occurs, or driving assistance can be provided, most traffic accidents can be avoided. The forward collision warning system (FCWS) is a subsystem of an advanced driver assistance system (ADAS): the FCWS continuously monitors the vehicle ahead and, when a potential collision danger exists, warns the driver or even actively controls the brake pedal to apply the brakes. An FCWS mainly involves several key technologies: sensing and processing information about the ego vehicle and obstacles, evaluating the driving safety state, and controlling active braking. This patent focuses on solving the first two.
The position of the front vehicle can be obtained visually through either stereoscopic or monocular vision. A stereo camera senses distance by imitating human binocular parallax, but suffers from large size, high price, and heavy computational load; this patent therefore adopts a monocular camera, which is small, cheap, and computationally light.
Patent CN105574552A provides a vehicle ranging and collision early-warning method based on monocular vision, but that technique has a defect when calculating TTC: the quantities d and S in its formula cannot be measured accurately, so the computed TTC carries a large error, especially when the road has a slope. Vehicle front collision early warning places high accuracy requirements on the TTC calculation, so other methods of calculating the TTC need to be developed.
Disclosure of Invention
The invention provides a vision-based vehicle front collision prediction method, applied to the perception and decision system of an intelligent electrically driven vehicle. A front collision early-warning function is realized by continuously estimating the time to collision (TTC) with a vision system; because the speeds of the ego vehicle and the front vehicle are jointly taken into account, the calculated TTC is more accurate, improving the accuracy and effectiveness of the front collision early warning.
In order to achieve the purpose, the technical scheme adopted by the application is as follows:
A vision-based vehicle front collision prediction method, comprising the following steps:
S1, locate the front vehicle with a visual target detection algorithm and generate a region of interest; the target detection algorithm may be a convolutional-neural-network-based detector, such as SSD or YOLO, for fast detection.
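S1 leaves the choice of detector open. As a sketch (the function name, the tuple layout, and the centre-line heuristic below are illustrative assumptions, not part of the patent), the region of interest for the front vehicle might be selected from generic detector outputs like this:

```python
# Hypothetical sketch of S1: choose the front-vehicle ROI from a list of
# detector outputs. The detector itself (e.g. YOLO or SSD) is assumed to run
# elsewhere; each detection here is (x, y, w, h, confidence) with a top-left
# corner convention.

def front_vehicle_roi(detections, image_width, min_conf=0.5):
    """Pick the detection closest to the image centre-line as the front vehicle.

    Returns the chosen (x, y, w, h) box, or None if nothing qualifies.
    """
    centre = image_width / 2.0
    candidates = [d for d in detections if d[4] >= min_conf]
    if not candidates:
        return None
    # Assumption: the front vehicle sits nearest the camera's optical axis.
    best = min(candidates, key=lambda d: abs((d[0] + d[2] / 2.0) - centre))
    return best[:4]

boxes = [(100, 200, 80, 60, 0.9),   # off to the left
         (300, 220, 90, 70, 0.95),  # near the centre of a 640-px image
         (500, 210, 70, 50, 0.4)]   # low confidence, ignored
print(front_vehicle_roi(boxes, 640))  # -> (300, 220, 90, 70)
```

In a real system the ROI would simply be the detector's box for the tracked lead vehicle; the heuristic above only stands in for that association step.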
S2, preprocess the image, including grayscale conversion, Gaussian smoothing, and gradient processing, to facilitate subsequent key-point detection;
S3, detect key points in the region of interest at the current moment;
S4, associate the key points with those detected in the previous frame and match identical key points to generate matched pairs; during matching, cross-checking and non-maximum suppression are used to improve the matching accuracy;
S5, screen the obtained matched pairs to retain a set of best matches;
S6, calculate the predicted time to collision (TTC) using a constant-velocity model:
h0 = f * L_AB / d0
h1 = f * L_CD / d1
L_AB = L_CD
d1 = d0 - V0 * Δt = d1 * h1 / h0 - V0 * Δt
TTC = d1 / V0 = Δt / (h1 / h0 - 1)
where f is the focal length of the camera; h0 is the distance between the two key points detected and screened in the previous frame, as projected on the imaging plane; h1 is the distance between the corresponding two key points at the current moment, as projected on the imaging plane; L_AB and L_CD are the real distances between the two key points at the respective moments; d0 is the distance of the front vehicle from the camera at the previous moment and d1 the distance at the current moment; V0 is the relative approach speed between the ego vehicle and the front vehicle at the previous moment; Δt is the time between acquiring two adjacent frames; TTC is the predicted time to collision.
S7, set TTC thresholds A and B, where A marks the onset of collision danger and B marks an urgent situation:
if TTC >= A, the situation is temporarily safe;
if A > TTC >= B, a collision danger exists and the driver is warned by voice, vibration, or similar means;
if TTC < B, the situation is urgent: a voice reminder is issued while the brake pedal is actively controlled to brake and the seat belt is tightened.
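The S7 decision logic can be sketched minimally as follows. The default thresholds mirror the first embodiment (A = 2 s, B = 0.5 s); they are tuning parameters rather than values fixed by the method, and the function name is illustrative:

```python
# Minimal sketch of S7: map a predicted time to collision to an FCWS response.
# Thresholds a and b correspond to A and B in the text; the defaults follow
# the first embodiment and are assumptions, not prescriptive values.

def warning_level(ttc, a=2.0, b=0.5):
    """Map a predicted time to collision (seconds) to an FCWS response."""
    if ttc >= a:
        return "safe"    # temporarily safe: keep monitoring
    if ttc >= b:
        return "warn"    # warn the driver by voice / vibration
    return "brake"       # urgent: active braking and seat-belt pre-tension

print(warning_level(6.0), warning_level(1.5), warning_level(0.3))
# -> safe warn brake
```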
Further, the above steps are repeated in a loop for continuous detection.
The invention has the technical effects that:
the problem of front collision early warning is solved through vision, a matching sub is obtained by using a key point detector and a descriptor matching method, the TTC obtained through further calculation is more accurate, and the road using condition is not limited.
Drawings
FIG. 1 is a flow chart of the present invention;
FIG. 2 is a schematic illustration of the TTC calculation of the present invention;
FIG. 3 is a schematic illustration of an intelligent electrically driven vehicle of an embodiment;
FIG. 4 is a schematic illustration of object detection of an embodiment;
FIG. 5 is a schematic diagram of a Gaussian smoothing image processing technique of an embodiment;
FIG. 6 is a schematic diagram of a gradient processing image processing technique of an embodiment;
FIG. 7 is a schematic diagram of two previous and subsequent image associations according to an embodiment.
Detailed Description
The specific technical scheme of the invention is described by combining the embodiment.
Referring to fig. 3, the embodiment uses an intelligent electrically driven vehicle 1 as the carrier. A monocular camera 2 is mounted on the front window for image acquisition, and an industrial personal computer 3 carrying the front collision early-warning system is mounted at the rear of the vehicle to process the acquired images; the vehicle travels on an urban road at 40 km/h. As shown in fig. 4, when a vehicle appears ahead it is first located by a target detection algorithm; weighing detection accuracy against speed, this embodiment adopts the YOLOv3 target detection algorithm. As in fig. 5, the image is Gaussian smoothed; as shown in fig. 6, it is gradient processed to facilitate the detection of image key points, for which this embodiment adopts the FAST key-point detector. As shown in fig. 7, the key points of two consecutive frames are associated, and this embodiment uses the BRISK descriptor to generate matched pairs. The matched pairs are screened to obtain a set of best matches for calculating the predicted time to collision (TTC). In this embodiment the interval between two frames is 0.1 s; assuming the front vehicle moves uniformly within 0.1 s, a constant-velocity model is adopted. A is set to 2 s and B to 0.5 s; the calculated predicted TTC is 1.5 s, so the vehicle reminds the driver by voice. The front collision early-warning system keeps detecting and cancels the alarm once the TTC returns to 2 s.
In another embodiment, the vehicle front collision prediction method is loaded on the industrial personal computer of an experimental vehicle travelling on an expressway at 80 km/h, using the SSD target detection algorithm, the FAST key-point detector, and the BRIEF descriptor. A constant-velocity model is still adopted and the interval between two frames is 0.1 s; A is set to 3 s and B to 1.5 s. The predicted time to collision with the front vehicle is calculated as 6 s, so the vehicle is in a safe state and the front collision early-warning system simply continues monitoring.
Patent CN105574552A provides a vehicle distance measurement and collision early-warning method based on monocular vision which must obtain the distance to the front vehicle at every moment when calculating TTC, using the same monocular distance-measurement method as the literature it cites. The TTC calculation method of the present application avoids computing the distance to the front vehicle altogether, converting the problem into distances between key points on the imaging plane; key points can be located more precisely than the midpoint of the vehicle's bottom edge line used in CN105574552A, so the accuracy of the TTC calculation is improved.
As shown in fig. 1, the present application adopts the following technical solution:
S1, locate the front vehicle with a visual target detection algorithm and generate a region of interest, represented on the image as a bounded closed region or bounding box.
S2, preprocess the image, including grayscale conversion, Gaussian smoothing, and gradient processing, to facilitate subsequent key-point detection.
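The preprocessing chain of S2 can be sketched in miniature. The 3-tap kernel and the central-difference gradient below are simplified, illustrative stand-ins for the real filters, and the image is assumed to be a nested list of 8-bit RGB tuples:

```python
# Miniature sketch of S2 (assumptions: nested-list RGB image; a separable
# 3-tap kernel stands in for the full Gaussian; central differences stand in
# for the gradient operator).

def to_gray(rgb):
    """Luminance grayscale: 0.299 R + 0.587 G + 0.114 B per pixel."""
    return [[0.299 * r + 0.587 * g + 0.114 * b for (r, g, b) in row]
            for row in rgb]

def smooth_row(row, k=(0.25, 0.5, 0.25)):
    """3-tap Gaussian blur of one row, clamping at the borders."""
    n = len(row)
    return [k[0] * row[max(i - 1, 0)] + k[1] * row[i] + k[2] * row[min(i + 1, n - 1)]
            for i in range(n)]

def gradient_x(gray):
    """Horizontal central-difference gradient, clamped at the borders."""
    return [[(row[min(i + 1, len(row) - 1)] - row[max(i - 1, 0)]) / 2.0
             for i in range(len(row))] for row in gray]

print(smooth_row([0, 0, 4, 0, 0]))   # -> [0.0, 1.0, 2.0, 1.0, 0.0]
print(gradient_x([[0, 2, 4]]))       # -> [[1.0, 2.0, 1.0]]
```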
S3, detect key points in the region of interest at the current moment.
Key points (keypoints) are a computer-vision concept used to locate feature points in an image: because of their local characteristics, they can be precisely localized along both coordinate directions. Common key-point detectors include Shi-Tomasi, Harris, FAST, BRISK, ORB, and SIFT. As shown in fig. 2, C and D are two of the key points detected at the current moment. Many feature points can be detected at any one moment; their number depends on the image content and the detection method, and is typically several tens to several hundreds.
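To make the idea of a key-point test concrete, here is a much-simplified, illustrative FAST-style check. The real FAST detector tests a contiguous arc on a radius-3 ring of 16 pixels; this toy version uses the 8 immediate neighbours and a simple count instead:

```python
# Illustrative, much-simplified FAST-style corner test (assumption: real FAST
# uses a radius-3 ring of 16 pixels and a contiguous-arc criterion; this toy
# uses the 8 immediate neighbours and a count, which suffices as a sketch).

def fast_like_corner(img, x, y, t=20):
    """True if at least 6 of the 8 neighbours of (x, y) are all brighter,
    or all darker, than the centre pixel by more than threshold t."""
    c = img[y][x]
    ring = [img[y + dy][x + dx]
            for dx, dy in ((-1, -1), (0, -1), (1, -1), (1, 0),
                           (1, 1), (0, 1), (-1, 1), (-1, 0))]
    brighter = sum(1 for p in ring if p > c + t)
    darker = sum(1 for p in ring if p < c - t)
    return brighter >= 6 or darker >= 6

flat = [[10] * 5 for _ in range(5)]    # uniform patch: no key point
spot = [[200] * 5 for _ in range(5)]
spot[2][2] = 10                        # dark pixel on a bright background
print(fast_like_corner(flat, 2, 2), fast_like_corner(spot, 2, 2))
# -> False True
```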
S4, associate the key points with those detected in the previous frame and match identical key points to generate matched pairs. Referring to fig. 2, C and D correspond to the key points A and B of the previous moment respectively, so A-C and B-D are two matched pairs. Associating two key points again uses computer-vision techniques and requires a descriptor, a vector that characterizes a key point; currently popular descriptors include BRISK, BRIEF, ORB, FREAK, and SIFT. Once a key-point detector and a descriptor are chosen, key-point matching can be carried out, and its accuracy is improved during matching by cross-checking and non-maximum suppression.
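Descriptor matching with cross-checking, as just described, can be sketched on binary descriptors. Here plain Python ints stand in for the bit strings BRIEF or BRISK would produce, and the function names are illustrative; a cross-checked pair is kept only when each descriptor is the other's nearest neighbour:

```python
# Sketch of S4 matching (assumption: binary descriptors represented as ints;
# real systems use e.g. a brute-force Hamming matcher with cross-check).

def hamming(a, b):
    """Number of differing bits between two binary descriptors."""
    return bin(a ^ b).count("1")

def nearest(desc, pool):
    """Index of the descriptor in `pool` closest to `desc`."""
    return min(range(len(pool)), key=lambda j: hamming(desc, pool[j]))

def cross_match(prev_desc, curr_desc):
    """Return (i, j) index pairs that agree in both matching directions."""
    matches = []
    for i, d in enumerate(prev_desc):
        j = nearest(d, curr_desc)
        if nearest(curr_desc[j], prev_desc) == i:   # cross-check
            matches.append((i, j))
    return matches

prev = [0b1111_0000, 0b0000_1111]   # descriptors from the previous frame
curr = [0b0000_1101, 0b1110_0000]   # descriptors from the current frame
print(cross_match(prev, curr))      # -> [(0, 1), (1, 0)]
```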
S5, screen the obtained matched pairs. Each matched pair consists of two key points with image coordinates, so the distance between the two key points of every pair can be computed. All these distances are calculated and averaged, and the pairs whose distance lies far from the average are deleted, leaving a set of best matched pairs; the outliers are thus removed by simple statistical means.
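The statistical screening of S5 can be sketched as follows. The 1.5-standard-deviation cutoff is an assumed tuning value; the text only specifies deleting pairs that are far from the mean distance:

```python
# Sketch of S5 (assumption: a match is a pair of (x, y) pixel coordinates,
# one point per frame; the k = 1.5 sigma cutoff is an illustrative choice).
import math

def screen_matches(matches, k=1.5):
    """Keep matches whose point-to-point distance lies within k standard
    deviations of the mean distance over all matches."""
    def dist(m):
        (x0, y0), (x1, y1) = m
        return math.hypot(x1 - x0, y1 - y0)
    dists = [dist(m) for m in matches]
    mean = sum(dists) / len(dists)
    std = math.sqrt(sum((d - mean) ** 2 for d in dists) / len(dists))
    return [m for m, d in zip(matches, dists) if abs(d - mean) <= k * std]

pairs = [((0, 0), (3, 4)),      # distance 5
         ((10, 0), (13, 4)),    # distance 5
         ((0, 10), (3, 14)),    # distance 5
         ((0, 0), (12, 16))]    # distance 20: outlier, dropped
print(len(screen_matches(pairs)))  # -> 3
```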
S6, calculate the TTC using a constant-velocity model (the front vehicle is assumed to travel at constant speed; since the detection interval is usually under 0.1 s this assumption is reasonable, and a constant-acceleration model can be substituted if needed), still illustrated by fig. 2:
h0 = f * L_AB / d0
h1 = f * L_CD / d1
L_AB = L_CD
d1 = d0 - V0 * Δt = d1 * h1 / h0 - V0 * Δt
TTC = d1 / V0 = Δt / (h1 / h0 - 1)
where f is the focal length of the camera; h0 is the distance between the two key points (A, B) detected and screened in the previous frame, as projected on the imaging plane; h1 is the distance between the corresponding key points (C, D) at the current moment, as projected on the imaging plane; L_AB and L_CD are the real distances between the two key points at the respective moments; d0 and d1 are the distances of the front vehicle from the camera at the previous and current moments; V0 is the relative approach speed between the ego vehicle and the front vehicle at the previous moment; Δt is the time between acquiring two adjacent frames; TTC is the predicted time to collision.
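Numerically, the formula needs only the ratio of the projected key-point separations in two consecutive frames and the frame interval; neither the focal length nor the metric distance to the front vehicle ever has to be known. A minimal sketch (function name illustrative):

```python
# Sketch of S6: TTC from the constant-velocity model, TTC = dt / (h1/h0 - 1).
import math

def ttc_from_keypoints(h0, h1, dt):
    """Predicted time to collision from the projected key-point separation in
    the previous frame (h0) and current frame (h1), both in pixels, and the
    frame interval dt in seconds. Returns math.inf when the separation is not
    growing, i.e. the front vehicle is not approaching."""
    ratio = h1 / h0
    if ratio <= 1.0:
        return math.inf
    return dt / (ratio - 1.0)

# A key-point pair that widens from 100 px to 105 px between frames 0.1 s
# apart gives TTC = 0.1 / 0.05 = 2 s.
print(round(ttc_from_keypoints(100.0, 105.0, 0.1), 6))  # -> 2.0
```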
S7, set TTC thresholds A and B, where A marks the onset of collision danger and B marks an urgent situation. If TTC >= A, the situation is temporarily safe; if A > TTC >= B, a collision danger exists and the driver is warned by voice, vibration, or similar means; if TTC < B, the situation is urgent and, alongside a voice reminder, the brake pedal is actively controlled to brake and the seat belt is tightened.
The FCWS runs these steps periodically in a loop for continuous detection; the flow of one cycle is shown in figure 1.
Claims (6)
1. A vision-based vehicle front collision prediction method is characterized by comprising the following steps:
S1, positioning the front vehicle through a visual target detection algorithm and generating a region of interest;
S2, preprocessing the image;
S3, detecting key points in the region of interest at the current moment;
S4, associating the key points with those detected in the previous frame of image, and matching identical key points to generate matched pairs;
S5, screening the obtained matched pairs;
S6, calculating the predicted time to collision (TTC);
S7, setting TTC thresholds A and B, wherein A indicates that collision danger exists and B indicates that the situation is urgent:
if TTC >= A, the situation is temporarily safe;
if A > TTC >= B, a collision danger exists and the driver is warned by voice, vibration, or similar means;
if TTC < B, the situation is urgent: a voice reminder is issued while the brake pedal is actively controlled to brake and the seat belt is tightened.
2. The vision-based vehicle front collision prediction method according to claim 1, characterized in that the target detection algorithm of S1 adopts a convolutional neural network-based target detection algorithm for fast detection.
3. The vision-based vehicle front collision prediction method according to claim 1, wherein the preprocessing flow of S2 includes performing gray scale processing, gaussian smoothing, and gradient processing to facilitate the detection of the key points.
4. The vision-based vehicle front collision prediction method according to claim 1, wherein the matching process of S4 is implemented by cross detection and non-maximum suppression technology to improve the matching accuracy.
5. The vision-based vehicle front collision prediction method according to claim 1, wherein the screening of S5 uses statistical means: the distance of each matched pair is calculated and the distances are averaged, and the matched pairs lying too far from the average are deleted, leaving a set of best matched pairs.
6. A vision-based vehicle front collision prediction method according to claim 1, characterized in that in S6 the predicted time to collision TTC is calculated, using a constant velocity model:
h0 = f * L_AB / d0
h1 = f * L_CD / d1
L_AB = L_CD
d1 = d0 - V0 * Δt = d1 * h1 / h0 - V0 * Δt
TTC = d1 / V0 = Δt / (h1 / h0 - 1)
where f is the focal length of the camera; h0 is the distance between the two key points detected and screened in the previous frame, as projected on the imaging plane; h1 is the distance between the corresponding two key points at the current moment, as projected on the imaging plane; L_AB and L_CD are the distances between the two key points at the respective moments; d0 and d1 are the distances of the front vehicle from the camera at the previous and current moments; V0 is the relative approach speed between the ego vehicle and the front vehicle at the previous moment; Δt is the time between acquiring two adjacent frames; TTC is the predicted time to collision.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010830942.8A CN111950483A (en) | 2020-08-18 | 2020-08-18 | Vision-based vehicle front collision prediction method |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010830942.8A CN111950483A (en) | 2020-08-18 | 2020-08-18 | Vision-based vehicle front collision prediction method |
Publications (1)
Publication Number | Publication Date |
---|---|
CN111950483A true CN111950483A (en) | 2020-11-17 |
Family
ID=73343650
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010830942.8A Pending CN111950483A (en) | 2020-08-18 | 2020-08-18 | Vision-based vehicle front collision prediction method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111950483A (en) |
Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20100305857A1 (en) * | 2009-05-08 | 2010-12-02 | Jeffrey Byrne | Method and System for Visual Collision Detection and Estimation |
CN102642510A (en) * | 2011-02-17 | 2012-08-22 | 汽车零部件研究及发展中心有限公司 | Image-based vehicle anti-collision early warning method |
CN103400388A (en) * | 2013-08-06 | 2013-11-20 | 中国科学院光电技术研究所 | Method for eliminating Brisk (binary robust invariant scale keypoint) error matching point pair by utilizing RANSAC (random sampling consensus) |
CN103713655A (en) * | 2014-01-17 | 2014-04-09 | 中测新图(北京)遥感技术有限责任公司 | Rotary-deflection-angle correction system and rotary-deflection-angle correction method of digital aerial-surveying camera |
CN105574552A (en) * | 2014-10-09 | 2016-05-11 | 东北大学 | Vehicle ranging and collision early warning method based on monocular vision |
CN105844222A (en) * | 2016-03-18 | 2016-08-10 | 上海欧菲智能车联科技有限公司 | System and method for front vehicle collision early warning based on visual sense |
CN106156725A (en) * | 2016-06-16 | 2016-11-23 | 江苏大学 | A kind of method of work of the identification early warning system of pedestrian based on vehicle front and cyclist |
CN107972662A (en) * | 2017-10-16 | 2018-05-01 | 华南理工大学 | To anti-collision warning method before a kind of vehicle based on deep learning |
CN108229500A (en) * | 2017-12-12 | 2018-06-29 | 西安工程大学 | A kind of SIFT Mismatching point scalping methods based on Function Fitting |
- 2020-08-18: application CN202010830942.8A filed; patent CN111950483A/en, status Pending
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112744174A (en) * | 2021-01-18 | 2021-05-04 | 深圳广联赛讯股份有限公司 | Vehicle collision monitoring method, device, equipment and computer readable storage medium |
CN113188521A (en) * | 2021-05-11 | 2021-07-30 | 江晓东 | Monocular vision-based vehicle collision early warning method |
CN113306566A (en) * | 2021-06-16 | 2021-08-27 | 上海大学 | Vehicle and pedestrian collision early warning method and device based on sniffing technology |
CN113306566B (en) * | 2021-06-16 | 2023-12-12 | 上海大学 | Vehicle pedestrian collision early warning method and device based on sniffing technology |
CN113124819A (en) * | 2021-06-17 | 2021-07-16 | 中国空气动力研究与发展中心低速空气动力研究所 | Monocular distance measuring method based on plane mirror |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11062167B2 (en) | Object detection using recurrent neural network and concatenated feature map | |
CN111950483A (en) | Vision-based vehicle front collision prediction method | |
CN109334563B (en) | Anti-collision early warning method based on pedestrians and riders in front of road | |
JP7499256B2 (en) | System and method for classifying driver behavior - Patents.com | |
US20180211403A1 (en) | Recurrent Deep Convolutional Neural Network For Object Detection | |
JP5938569B2 (en) | Advanced driver support system considering azimuth information and operation method thereof | |
US20190325597A1 (en) | Simultaneous Localization And Mapping Constraints In Generative Adversarial Networks For Monocular Depth Estimation | |
EP2960858B1 (en) | Sensor system for determining distance information based on stereoscopic images | |
GB2561448A (en) | Free space detection using monocular camera and deep learning | |
CN106326866B (en) | Early warning method and device for vehicle collision | |
US8559727B1 (en) | Temporal coherence in clear path detection | |
JP6972797B2 (en) | Information processing device, image pickup device, device control system, mobile body, information processing method, and program | |
CN107458308B (en) | Driving assisting method and system | |
Zaarane et al. | Vehicle to vehicle distance measurement for self-driving systems | |
CN109145805B (en) | Moving target detection method and system under vehicle-mounted environment | |
CN111856510A (en) | Vehicle front collision prediction method based on laser radar | |
JP2019067115A (en) | Road surface detecting device | |
WO2023080111A1 (en) | Method and Systems for Detection Accuracy Ranking and Vehicle Instruction | |
JP2011113330A (en) | Object detection device and drive assist system | |
CN116853235A (en) | Collision early warning method, device, computer equipment and storage medium | |
JP6972798B2 (en) | Information processing device, image pickup device, device control system, mobile body, information processing method, and program | |
CN107256382A (en) | Virtual bumper control method and system based on image recognition | |
Jyothi et al. | Driver assistance for safe navigation under unstructured traffic environment | |
Neto et al. | Real-time collision risk estimation based on Pearson's correlation coefficient | |
Ahmad et al. | Comparative study of dashcam-based vehicle incident detection techniques |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | ||
Application publication date: 20201117 |