CN107796373A - Method for measuring the distance to a vehicle ahead by monocular vision, driven by a lane-plane geometric model - Google Patents
- Publication number
- CN107796373A CN107796373A CN201710930473.5A CN201710930473A CN107796373A CN 107796373 A CN107796373 A CN 107796373A CN 201710930473 A CN201710930473 A CN 201710930473A CN 107796373 A CN107796373 A CN 107796373A
- Authority
- CN
- China
- Prior art keywords
- vehicle
- longitudinal
- image
- target
- distance
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C11/00—Photogrammetry or videogrammetry, e.g. stereogrammetry; Photographic surveying
- G01C11/04—Interpretation of pictures
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C3/00—Measuring distances in line of sight; Optical rangefinders
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/285—Selection of pattern recognition techniques, e.g. of classifiers in a multi-classifier system
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
- G06T7/277—Analysis of motion involving stochastic approaches, e.g. using Kalman filters
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
- G06T7/73—Determining position or orientation of objects or cameras using feature-based methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/80—Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10004—Still image; Photographic image
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30248—Vehicle exterior or interior
- G06T2207/30252—Vehicle exterior; Vicinity of vehicle
Abstract
The invention provides a method for measuring the distance to the vehicle ahead by monocular vision, driven by a lane-plane geometric model. A CCD camera collects images of the target vehicle ahead; the target vehicle is identified in the images obtained in step 2 by combining Haar-like features with the Adaboost algorithm; the target vehicle obtained in step 3 is then tracked with a particle filter. From the target vehicle in each frame obtained above, a longitudinal vehicle-distance measurement model based on lane-plane geometry is constructed for that frame, yielding the perceived longitudinal distance y of the target source in the frame; a dynamic compensation model for the ranging error is likewise constructed from the target vehicle in each frame, yielding the longitudinal measurement error z. Finally, the longitudinal distance YW(P) between the target vehicle ahead and the ego vehicle in the frame is computed from the perceived longitudinal distance y obtained in step 5 and the longitudinal measurement error z.
Description
Technical Field
The invention belongs to the technical field of longitudinal vehicle-safety driver assistance, and relates to a method for measuring the distance to the vehicle ahead by monocular vision, driven by a lane-plane geometric model.
Background Art
Car following is one of the most basic driving behaviors in traffic. During the following phase, the main threat a vehicle faces is a longitudinal rear-end collision, which occurs when the ego vehicle fails to keep a safe distance from the vehicle ahead or misjudges the speeds of the two vehicles. Accurately determining the distance between the ego vehicle and the vehicle ahead is therefore of great significance for maintaining vehicle spacing and for collision warning.
The vehicle-distance measurement methods studied to date mainly comprise ultrasonic ranging, laser ranging, millimeter-wave radar ranging and machine-vision ranging. Ultrasonic ranging is suitable only for short distances, while laser and millimeter-wave radar ranging are expensive to deploy. By contrast, machine-vision ranging has a simple hardware structure and low cost, and the information it captures is rich and easy to obtain, so measuring vehicle distance with machine vision has better practical value and application prospects.
Comparing the various measurement approaches within machine vision, monocular vision requires the least data-processing time and can satisfy the real-time requirement of distance measurement, so the majority of vision-based systems measure vehicle distance with a single camera.
When measuring vehicle distance automatically with monocular vision, locating the vehicle ahead is critical: the accuracy of this localization directly determines the accuracy of the distance measurement. Recognition methods based on the vehicle's shadow are strongly affected by ambient light. Methods based on a mathematical model of the vehicle rear obtain the depth of the image by calibrating corresponding points, but owing to equipment and calibration limitations they cannot obtain a high-precision transformation matrix between coordinate systems, so their applicability is limited. When the vehicle ahead is recognized from a gray-scale monocular image, often only the contour of its rear can be extracted, and because the rear overhang of the vehicle sits above the ground this inevitably causes a large ranging error.
Summary of the Invention
The purpose of the present invention is to provide a method for measuring the distance to the vehicle ahead by monocular vision, driven by a lane-plane geometric model, which overcomes the problems and defects of the prior art: sensitivity to lighting, the need for high-precision coordinate-system transformation, and the ranging error caused by the height of the vehicle's rear overhang above the ground.
To achieve the above purpose, the technical scheme adopted by the present invention is as follows.
The method for measuring the distance to the vehicle ahead by monocular vision, driven by a lane-plane geometric model, comprises the following steps:
Step 1: calibrate the CCD camera to obtain the effective focal length f, the camera height h and the camera pitch angle θ.
Step 2: collect images of the target vehicle ahead with the CCD camera.
Step 3: identify the target vehicle ahead in the images obtained in step 2 by combining Haar-like features with the Adaboost algorithm.
Step 4: track the target vehicle obtained in step 3 with a particle filter.
Step 5: for each frame obtained above, construct the longitudinal vehicle-distance measurement model based on lane-plane geometry from the target vehicle in that frame, and obtain the perceived longitudinal distance y of the target source in that frame.
Step 6: construct a dynamic compensation model for the vehicle-ranging error from the target vehicle in each frame obtained above, and obtain the longitudinal measurement error z.
Step 7: compute the longitudinal distance YW(P) between the target vehicle ahead and the ego vehicle in the frame from the perceived longitudinal distance y obtained in step 5 and the longitudinal measurement error z.
Preferably, in step 3, the specific method of identifying the target vehicle ahead in the images obtained in step 2 by combining Haar-like features with the Adaboost algorithm is:
S1: build a sample set from the images of the target vehicle obtained in step 2; use the Adaboost algorithm to select the effective Haar-like features of the vehicle training samples in the set; generate a corresponding weak classifier for each effective Haar-like feature; combine the weighted weak classifiers into strong classifiers; and finally cascade the strong classifiers in a waterfall structure to obtain a cascade classifier of feature samples.
S2: take a large number of vehicle training samples, extract their effective Haar-like features, and feed these features into the cascade classifier of feature samples for vehicle-presence detection, obtaining the Adaboost cascade classifier.
S3: according to the fixed position of the CCD camera, determine the region of interest in the image of the target vehicle ahead; apply the Adaboost cascade classifier obtained in S2 to the region of interest for vehicle-presence detection; and finally obtain the target vehicle ahead in the image.
Preferably, in step 5, based on the calibration of the CCD camera in step 1, the longitudinal vehicle-distance measurement model based on lane-plane geometry is defined as follows: the optical center of the CCD camera is point C; the projection of C onto the road surface is the origin O of the world coordinate system; the ranging feature point of the vehicle ahead is P; the direction of forward travel is the XW axis of the world coordinate system; the ZW axis of the world coordinate system is perpendicular to the road surface, pointing downward; the imaging plane of the CCD camera is A′B′F′E′; the far-view plane is CEF; the plane containing the optical-axis center is CMN; and the plane containing the ranging feature point is CC2D. The intersection of the optical axis CC1 with the imaging plane A′B′F′E′ is point C0, so CC0 is the focal length of the CCD sensor, i.e. CC0 = f. The lower edge point A of the near-field image of the target vehicle intersects the imaging plane A′B′F′E′ at point G′, and the lower edge point B of the far field intersects the imaging plane A′B′F′E′ at point H. The projection of the ranging feature point P onto the XW axis of the world coordinate system is P′, and the perceived longitudinal distance of the target source is the length of OP′.
Preferably, in step 5, the perceived longitudinal distance y of the target source is calculated as:
where v0 is the longitudinal image coordinate of the optical center, v(P0) is the longitudinal image coordinate of the feature point, and dy is the longitudinal length of a unit pixel.
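The formula itself is rendered as an image in the original document and is not reproduced in this text. Under the definitions above (flat road, camera height h, pitch angle θ, focal length f, pixel length dy), a commonly used flat-road pinhole relation can be sketched as follows; this is an assumed stand-in for the patent's exact expression, which may differ.

```python
import math

def perceived_longitudinal_distance(h, theta, f, dy, v_p0, v0):
    """Flat-road pinhole estimate of the longitudinal distance y.
    Assumed form, not necessarily the patent's exact formula.

    h     : camera height above the road (m)
    theta : camera pitch angle below the horizontal (rad)
    f     : effective focal length (m)
    dy    : longitudinal length of a unit pixel (m)
    v_p0  : longitudinal image coordinate of the feature point (pixels)
    v0    : longitudinal image coordinate of the optical center (pixels)
    """
    # Angle of the ray to the feature point, measured below the optical axis.
    phi = math.atan((v_p0 - v0) * dy / f)
    # Total depression angle below the horizontal, projected onto the road.
    return h / math.tan(theta + phi)

# Example: camera 1.3 m high, pitched 2 degrees down, f = 8 mm, 10 um pixels.
y_near = perceived_longitudinal_distance(1.3, math.radians(2.0), 8e-3, 1e-5, 260, 240)
y_far = perceived_longitudinal_distance(1.3, math.radians(2.0), 8e-3, 1e-5, 300, 240)
```

As expected for this geometry, feature points lower in the image (larger v) map to shorter ground distances.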
Preferably, in step 6, the construction of the dynamic compensation model for the ranging error of the target vehicle ahead specifically comprises the following steps:
First, fix the CCD vision sensor, and calibrate and record the internal and external parameters of the CCD camera using the calibration method of step 1.
Second, fix the height of the target source above the ground and move it longitudinally along the road in 5 m increments so that its distance from the CCD vision sensor varies within [10 m, 100 m], recording an image of the target source at each position with the CCD camera.
Third, adjust the height of the target source above the ground so that it varies within [0.2 m, 1 m], and repeat the second step.
Fourth, process the images obtained in the third step with MATLAB, and analyze the measurement error of the target source at the different heights above the ground and the different longitudinal positions.
Preferably, in step 6, the longitudinal measurement error z is calculated as:
z = 118 - 1124x - 3.133y + 3522x^2 + 34.6xy - 0.0399y^2 - 4795x^3 - 86.22x^2y - 0.1845xy^2 + 0.0017y^3 + 2676x^4 + 98.62x^3y + 0.1428x^2y^2 + 0.001313xy^3 - 2.077e-5 y^4 + 8.835e-8 y^5 - 7.658e-6 xy^4 + 0.0004x^2y^3 - 0.1114x^3y^2 - 38.23x^4y - 391.1x^5
where x is the height of the target source above the ground.
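The compensation surface above is a degree-5 bivariate polynomial in the target height x and the perceived distance y. It can be evaluated directly with the coefficients transcribed verbatim from the patent text; the sample point used below is purely illustrative.

```python
def longitudinal_error_z(x, y):
    """Evaluate the patent's degree-5 error-compensation polynomial.

    x : height of the target source above the ground (m)
    y : perceived longitudinal distance of the target source (m)
    Coefficients are transcribed verbatim from the patent text.
    """
    return (118 - 1124*x - 3.133*y + 3522*x**2 + 34.6*x*y - 0.0399*y**2
            - 4795*x**3 - 86.22*x**2*y - 0.1845*x*y**2 + 0.0017*y**3
            + 2676*x**4 + 98.62*x**3*y + 0.1428*x**2*y**2 + 0.001313*x*y**3
            - 2.077e-5*y**4 + 8.835e-8*y**5 - 7.658e-6*x*y**4
            + 0.0004*x**2*y**3 - 0.1114*x**3*y**2 - 38.23*x**4*y - 391.1*x**5)

# With both arguments zero, only the constant term survives: z(0, 0) = 118.
z_origin = longitudinal_error_z(0.0, 0.0)
```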
Preferably, in step 7, the longitudinal distance YW(P) between the target vehicle ahead and the ego vehicle is calculated as:
Compared with the prior art, the beneficial effects of the present invention are as follows.
The method provided by the present invention collects images of the target vehicle ahead with a CCD camera, identifies the target vehicle in the images obtained in step 2 by combining Haar-like features with the Adaboost algorithm, tracks the target vehicle obtained in step 3 with a particle filter, constructs for each frame a longitudinal vehicle-distance measurement model based on lane-plane geometry to obtain the perceived longitudinal distance y of the target source in that frame, constructs a dynamic compensation model for the ranging error to obtain the longitudinal measurement error z, and computes from y and z the longitudinal distance YW(P) between the target vehicle ahead and the ego vehicle in the frame. The invention has a simple hardware structure, low cost, a flexible software algorithm and higher measurement accuracy; it avoids the influence of interference such as road shadows on both sides of the lane and vehicles outside the ego lane, improving the detection robustness of the system; and it effectively accounts for the height of the vehicle's rear overhang above the ground, improving the identification of the longitudinal position of the vehicle ahead.
Brief Description of the Drawings
Fig. 1 is a schematic diagram of the installation of the CCD camera;
Fig. 2 is a flow chart of the distance-measurement method of the present invention;
Fig. 3 is a schematic diagram of the calibration of the internal parameters of the CCD camera;
Fig. 4 is a schematic diagram of the calibration of the external parameters of the CCD camera;
Fig. 5 is a structural diagram of the vehicle-recognition algorithm based on Haar-like features and Adaboost;
Fig. 6 is a schematic diagram of the construction of the feature-sample cascade classifier;
Fig. 7 is a schematic diagram of vehicle-contour extraction from the target image;
Fig. 8 is a diagram of the spatial geometric constraints of CCD camera imaging;
Fig. 9 is a side view of the lane-plane-constrained ranging model;
Fig. 10 is a schematic diagram of the analysis of error sources in longitudinal vehicle-distance measurement.
Detailed Description
The present invention is described in further detail below with reference to the accompanying drawings.
As shown in Fig. 1, the monocular-vision distance-measuring device for the vehicle ahead, driven by a lane-plane geometric model, comprises a CCD camera 1, which is fixed with a suction cup at the upper middle of the front windshield 2 of the vehicle. The CCD camera 1 is connected to the host computer through a BNC video cable and a video capture card.
As shown in Fig. 2, the specific steps of the method are as follows.
Step 1: calibration of the CCD camera.
Calibration of the vision sensor is a key problem in machine vision. Its purpose is to obtain the internal and external parameters of the CCD camera, which are used in subsequent steps to convert the two-dimensional image into the three-dimensional scene. Specifically:
First, the internal parameters of the CCD camera are obtained with the Camera Calibration Toolbox module of MATLAB and a planar calibration target, as follows (Fig. 3): the angle of the planar target is varied, 20 frames of the target in different orientations are collected with the CCD camera, and these images are fed into the Camera Calibration Toolbox in MATLAB to solve for the internal parameters, yielding the effective focal length f of the CCD camera.
Second, a method based on the vanishing point of the road image is adopted, as follows (Fig. 4): on the planar-target image captured by the CCD camera, the left and right lane lines are marked, and the pixel coordinates of their intersection O are recorded; one point is then taken on each of the left and right lane lines, as far from the intersection as possible, and the pixel coordinates of these two points A and B are recorded. From the internal-parameter calibration result and the actual distance between the two parallel lines in the three-dimensional world, the external parameters of the CCD camera 1 are obtained with the Calibration module of the calibration toolbox in the HALCON software, solving for the camera height h and the camera pitch angle θ.
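The patent delegates this step to HALCON's Calibration module. As an illustrative sketch of the underlying geometry (an assumption, not the patent's implementation): parallel lane lines meet at a vanishing point on the image of the horizon, so the pitch angle follows from the row offset of that point relative to the optical center.

```python
import math

def intersect(p1, p2, p3, p4):
    """Intersection of the line through p1, p2 with the line through p3, p4
    (pixel coordinates); this locates the lane-line vanishing point."""
    (x1, y1), (x2, y2), (x3, y3), (x4, y4) = p1, p2, p3, p4
    d = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
    px = ((x1*y2 - y1*x2) * (x3 - x4) - (x1 - x2) * (x3*y4 - y3*x4)) / d
    py = ((x1*y2 - y1*x2) * (y3 - y4) - (y1 - y2) * (x3*y4 - y3*x4)) / d
    return px, py

def pitch_from_vanishing_point(v_vp, v0, f, dy):
    """Pitch angle below the horizontal from the vanishing-point row v_vp,
    given the optical-center row v0, focal length f and pixel length dy.
    Illustrative relation only."""
    return math.atan((v0 - v_vp) * dy / f)

# Toy lane lines crossing at (1, 1) in pixel coordinates.
vp = intersect((0, 0), (2, 2), (0, 2), (2, 0))
# Vanishing point 40 rows above the optical center, f = 8 mm, 10 um pixels.
theta = pitch_from_vanishing_point(200, 240, 8e-3, 1e-5)
```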
Step 2: acquisition and transmission of the target-vehicle images.
The CCD camera 1 collects images of the target vehicle ahead and transmits them through the BNC video cable and the video capture card to the image-processing software on the host computer, yielding images of the target vehicle that can be analyzed and processed.
Step 3: identify the target vehicle ahead in the images obtained in step 2 by combining Haar-like features with the Adaboost algorithm. The specific flow is as follows (Fig. 5).
First, a sample set is built from the images of the target vehicle obtained in step 2, and the vehicle and non-vehicle training samples in the set are preprocessed: the Adaboost algorithm selects the effective Haar-like features of the vehicle training samples, each effective feature generates a corresponding weak classifier, the weighted weak classifiers are combined into strong classifiers, and finally the cascade classifier of feature samples is constructed, as follows.
First step: convert the image of the target vehicle into an integral image. After the conversion with the integral-image method, each point of the integral image holds the sum of the pixels in the rectangle from the upper-left corner of the converted image to that point, as shown in Equation 1.
where i(x, y) is the integral value at point (x, y) of the integral image, and i(x′, y′) is the pixel value at point (x′, y′) of the original image.
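The rectangle sum of Equation 1 can be computed in a single pass over the image, after which the sum of any axis-aligned rectangle needs only four lookups, which is what makes Haar-like features cheap. A minimal sketch:

```python
def integral_image(img):
    """Integral image per Equation 1: ii[y][x] is the sum of img over the
    rectangle from the upper-left corner (0, 0) through (x, y) inclusive."""
    h, w = len(img), len(img[0])
    ii = [[0] * w for _ in range(h)]
    for yy in range(h):
        row_sum = 0
        for xx in range(w):
            row_sum += img[yy][xx]
            ii[yy][xx] = row_sum + (ii[yy - 1][xx] if yy > 0 else 0)
    return ii

def rect_sum(ii, x0, y0, x1, y1):
    """Sum over the rectangle [x0..x1] x [y0..y1] using four lookups."""
    total = ii[y1][x1]
    if x0 > 0:
        total -= ii[y1][x0 - 1]
    if y0 > 0:
        total -= ii[y0 - 1][x1]
    if x0 > 0 and y0 > 0:
        total += ii[y0 - 1][x0 - 1]
    return total

ii = integral_image([[1, 2], [3, 4]])
```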
Second step: apply the Adaboost algorithm to the converted integral image for fast and effective Haar-like feature extraction, generating strong classifiers of feature samples. Specifically, in each round the Adaboost algorithm extracts one corresponding effective Haar-like feature, each effective feature generates a corresponding weak classifier, and the weighted weak classifiers are combined into a strong classifier.
Third step: construct the cascade classifier of feature samples. Most regions of the image to be examined usually contain no target vehicle, so a cascade classifier is used to rapidly reject non-vehicle regions and speed up target detection. The present invention cascades classic waterfall classifiers: each stage is an Adaboost strong classifier trained with the Adaboost algorithm, and each strong classifier in turn contains several weak classifiers. The sample image to be recognized is examined stage by stage; if it is judged a negative sample (a non-vehicle image) at any stage, it does not pass to the subsequent strong classifiers, which gives the later classifiers more time to recognize positive sample windows, i.e. target-vehicle images. The specific flow is shown in Fig. 6. Considering the reliability of vehicle detection and the real-time requirement of the algorithm, the cascade used in the present invention has 8 stages; for the final detection rate to exceed 0.9 with a false-alarm rate of 0.5 per stage, the per-stage detection rate must exceed 0.99.
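The stage figures quoted above compose multiplicatively: a window must pass every stage to be accepted, so the overall rates are the per-stage rates raised to the number of stages. A quick check of the arithmetic:

```python
stages = 8
per_stage_detection = 0.99   # per-stage detection rate quoted in the text
per_stage_false_alarm = 0.5  # per-stage false-alarm rate quoted in the text

# A window is accepted only if all 8 stages accept it.
overall_detection = per_stage_detection ** stages      # about 0.923, above 0.9
overall_false_alarm = per_stage_false_alarm ** stages  # 1/256, about 0.0039
```

This is why a very weak per-stage false-alarm rate of 0.5 still yields a usable detector: rejection compounds across stages while detection degrades only slightly.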
Next, Haar-like features are extracted from a large number of test samples and fed into the Adaboost cascade classifier for vehicle-presence detection, ensuring detection accuracy, detection speed and real-time performance. The two major steps above constitute the forward-vehicle recognition algorithm combining Haar-like features with Adaboost.
Finally, according to the fixed position of the CCD camera, the ROI of the forward-vehicle image is taken as the lower half of the image, and this region of interest is processed to obtain the vehicle-recognition result, as follows: first, the region of interest (ROI) is processed with the combined Haar-like/Adaboost method in the same way as in the training process above, i.e. the ROI of the target image is preprocessed and its integral image is computed; next, the Haar-like feature information selected during training, including structure, position and type, is used to extract the Haar-like feature values of the ROI and assemble them into a feature vector; finally, the Adaboost cascade classifier obtained by massive offline training performs vehicle-presence detection on the ROI and outputs the vehicle-recognition result, as shown in Fig. 7. While ensuring detection accuracy and real-time performance, this method effectively avoids interference such as road shadows on both sides of the lane and vehicles outside the ego lane, improving the detection robustness of the system.
Step 4. Track the target vehicle ahead using the particle filter method:
When images are captured in practice, the complex and varied background of the target vehicle easily causes missed detections or false alarms. Therefore, to guarantee the real-time performance and robustness of the system, the present invention tracks the vehicle ahead with a particle filter algorithm. The specific tracking steps are: particle initialization, time update, observation update, resampling, and state update. Here the particle count N of the particle filter is set to 100, with 30 samples per iteration. This keeps the tracker accurate and stable; the tracking step takes about 20 ms on average, and the tracker is largely immune to uncertain factors such as vehicle type, attitude change, and environmental interference, satisfying the real-time and robustness requirements of an on-board system.
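The five tracking stages listed above (initialization, time update, observation update, resampling, state update) can be sketched for a one-dimensional state as follows. This is a minimal illustrative filter under assumed motion and observation noise levels, not the patent's implementation; only the particle count N = 100 comes from the text.

```python
import numpy as np

rng = np.random.default_rng(0)

def track_step(particles, weights, measurement, motion_std=2.0, obs_std=5.0):
    # Time update: propagate particles with a random-walk motion model
    particles = particles + rng.normal(0.0, motion_std, size=particles.shape)
    # Observation update: reweight each particle by measurement likelihood
    weights = weights * np.exp(-0.5 * ((particles - measurement) / obs_std) ** 2)
    weights /= weights.sum()
    # Resampling: draw N particles with probability proportional to weight
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    particles = particles[idx]
    weights = np.full(len(particles), 1.0 / len(particles))
    # State update: the weighted mean is the track estimate
    return particles, weights, particles.mean()

N = 100  # particle count, as in the text
particles = rng.uniform(0.0, 200.0, N)     # initialization over the state range
weights = np.full(N, 1.0 / N)
for z in [50.0, 52.0, 54.0]:               # successive measurements
    particles, weights, estimate = track_step(particles, weights, z)
```

After a few measurements the particle cloud collapses around the target state, which is what makes the tracker robust to clutter between detections.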
Step 5. Construct the longitudinal vehicle-distance measurement model based on lane plane geometry:
For each processable frame obtained above, the longitudinal vehicle-distance measurement model based on lane plane geometry is evaluated. First, the model is set up according to the mounting position of the CCD camera, as shown in Figure 8. Let C be the optical center of the CCD camera; its projection onto the road surface is the origin O of the world coordinate system. P is the ranging feature point of the vehicle ahead; the direction of forward travel is the XW axis of the world coordinate system, and the ZW axis points downward, perpendicular to the road surface. The imaging plane of the CCD camera is A′B′F′E′, the far-view plane is CEF, the plane containing the optical-axis center is CMN, and the plane containing the ranging feature point is CC2D. The optical axis CC1 of the CCD camera intersects the imaging plane A′B′F′E′ at point C0, so CC0 is the focal length of the CCD sensor, i.e. CC0 = f. The lower edge point A of the near-field image of the target vehicle maps to point G′ on the imaging plane A′B′F′E′, and the lower edge point B of the far field maps to point H. The ranging feature point P of the vehicle ahead projects onto the XW axis of the world coordinate system at point P′, and the longitudinal perceived distance of the target is the length of OP′.
Next, the position of the vehicle ahead in the world coordinate system is derived from the vehicle ranging model under the road-plane constraint described above, as shown in Figure 9. Given C0, CC0, θ, and the side lengths of the imaging plane A′B′F′E′, the longitudinal world coordinate of the ranging feature point P is obtained as follows. From the side view of the lane-plane-constrained ranging model in Figure 8:
CC0 = f, ∠OC1C = θ, OC = h (2)

In ΔC0CP′0,

tan∠C0CP′0 = P′0C0 / CC0 = P′0C0 / f

where P′0C0 may take a positive or a negative value, and:

P′0C0 = y(C0) − y(P′0) = [v0 − v(P′0)]×dy = [v0 − v(P0)]×dy

In ΔOCP′, OP′ = OC × tan∠OCP′, where ∠OCP′ = 90° − θ + ∠C0CP′0.

It can therefore be deduced that

y = OP′ = h × tan(90° − θ + arctan([v0 − v(P0)]×dy / f))

where v0 is the vertical image coordinate of the optical center, v(P0) is the vertical image coordinate of the feature point, dy is the vertical length of a unit pixel, and y is the longitudinal perceived distance of the target.
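The mapping from the feature point's image row to the longitudinal distance OP′ = h·tan(90° − θ + arctan([v0 − v(P0)]·dy/f)) can be written directly in code. The camera parameters below (8 mm focal length, 10 µm pixel pitch, 1.2 m mounting height, 5° pitch) are illustrative assumptions, not values from the patent.

```python
import math

def longitudinal_distance(v_p, v0, dy, f, h, theta):
    """Longitudinal perceived distance OP' from the lane-plane geometry model.

    v_p   : vertical image coordinate of the ranging feature point P
    v0    : vertical image coordinate of the optical center
    dy    : vertical size of a unit pixel (m)
    f     : focal length CC0 (m)
    h     : camera mounting height OC (m)
    theta : angle between the optical axis and the road surface (rad)
    """
    phi = math.atan((v0 - v_p) * dy / f)             # signed angle C0-C-P'0
    return h * math.tan(math.pi / 2 - theta + phi)   # OP' = OC * tan(angle OCP')

# Example with the assumed parameters: a point 20 rows below the image center
d = longitudinal_distance(v_p=260, v0=240, dy=1e-5, f=0.008,
                          h=1.2, theta=math.radians(5.0))
```

As expected from the geometry, feature points lower in the image (larger v_p) map to shorter longitudinal distances.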
Step 6. Construct the dynamic compensation model for the vehicle-ranging error:
As shown in Figure 10, the rear region of the target vehicle ahead is divided by the lower edge of the vehicle contour into region A and region B. The vertical height of region A is the rear-overhang height hα of the target vehicle; point D is the midpoint of the lower edge of the vehicle's rear region, and point P is the midpoint of the projection of the rear region onto the road surface. Because the front-vehicle recognition algorithm can only detect the rear contour of the target vehicle, point D is selected as the longitudinal-distance measurement feature point, which makes the measured longitudinal distance too large.
To reduce the error that the rear-overhang height introduces into the longitudinal distance measurement, and to bring the measured value as close as possible to the true value, the present invention uses a red target source of size 2 m × 1 m to simulate the rear region of the vehicle ahead. The target source is fixed on a bracket with adjustable ground clearance. Statistical analysis of a large number of vehicle samples shows that the rear-overhang height varies within [0.2 m, 1 m].
The specific implementation steps are:

First, fix the CCD vision sensor, then calibrate and record the intrinsic and extrinsic parameters of the CCD camera using the calibration method described in step 1;

Second, fix the ground clearance of the target source and move it along the road in 5 m increments, so that its distance from the CCD vision sensor varies within [10 m, 100 m], recording an image of the target source at each position with the CCD camera;

Third, adjust the ground clearance of the target source so that it varies within [0.2 m, 1 m], and repeat the second step;
Fourth, process the images from the third step in MATLAB and analyze the measurement error of the target source at each ground clearance and longitudinal position. From the error data of the target source at different ground clearances and different longitudinal positions, the dynamic compensation model of the longitudinal measurement error is regressed as:

where x is the ground clearance of the target source, y is the longitudinal perceived distance of the target source, and z is the longitudinal measurement error.
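The regressed surface z = f(x, y) itself is rendered as an image in the original, so its exact form is not reproduced here. The regression step can still be sketched with an assumed low-order polynomial basis in x and y; both the basis and the synthetic error data below are illustrative only.

```python
import numpy as np

def fit_error_surface(x, y, z):
    # Least-squares fit of z ~ c0 + c1*x + c2*y + c3*x*y (assumed basis;
    # the patent's actual regression form is not reproduced in the text)
    A = np.column_stack([np.ones_like(x), x, y, x * y])
    coeffs, *_ = np.linalg.lstsq(A, z, rcond=None)
    return coeffs

def predict_error(coeffs, x, y):
    return coeffs[0] + coeffs[1] * x + coeffs[2] * y + coeffs[3] * x * y

# Grid mimicking the experiment: clearances in [0.2, 1] m, distances in [10, 100] m
xs, ys = np.meshgrid(np.linspace(0.2, 1.0, 5), np.arange(10.0, 101.0, 5.0))
xs, ys = xs.ravel(), ys.ravel()
zs = 0.9 * xs * ys / 20.0          # synthetic error data for the demo
c = fit_error_surface(xs, ys, zs)
```

The same least-squares machinery works for whatever polynomial basis the measured error data supports; the point is only that the error grows with both ground clearance and distance.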
Step 7. Vehicle-distance calculation:
The vehicle distance is reconstructed from the longitudinal vehicle-distance measurement model based on lane plane geometry built in step 5 and the dynamic ranging-error compensation model built in step 6; the reconstructed longitudinal vehicle-distance measurement model is:
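The reconstructed formula is rendered as an image in the original. Plausibly it corrects the geometric estimate by the regressed error (since step 6 established that the raw measurement is biased large); the sign and exact form below are assumptions, and `predict_error_fn` is a hypothetical stand-in for the fitted compensation model.

```python
def compensated_distance(d_geometric, rear_overhang_height, predict_error_fn):
    # Reconstructed longitudinal distance: the lane-plane geometric estimate
    # minus the longitudinal error z = f(x, y) predicted by the compensation
    # model (x: target ground clearance, y: perceived distance).
    return d_geometric - predict_error_fn(rear_overhang_height, d_geometric)
```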
Claims (7)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710930473.5A CN107796373B (en) | 2017-10-09 | 2017-10-09 | Distance measurement method based on monocular vision of front vehicle driven by lane plane geometric model |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710930473.5A CN107796373B (en) | 2017-10-09 | 2017-10-09 | Distance measurement method based on monocular vision of front vehicle driven by lane plane geometric model |
Publications (2)
Publication Number | Publication Date |
---|---|
CN107796373A true CN107796373A (en) | 2018-03-13 |
CN107796373B CN107796373B (en) | 2020-07-28 |
Family
ID=61532879
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710930473.5A Expired - Fee Related CN107796373B (en) | 2017-10-09 | 2017-10-09 | Distance measurement method based on monocular vision of front vehicle driven by lane plane geometric model |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107796373B (en) |
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108759667A (en) * | 2018-05-29 | 2018-11-06 | 福州大学 | Front truck distance measuring method based on monocular vision and image segmentation under vehicle-mounted camera |
CN109272536A (en) * | 2018-09-21 | 2019-01-25 | 浙江工商大学 | A lane line vanishing point tracking method based on Kalman filtering |
CN109752709A (en) * | 2019-01-22 | 2019-05-14 | 武汉鸿瑞达信息技术有限公司 | A kind of distance measurement method and device based on image |
CN112365741A (en) * | 2020-10-23 | 2021-02-12 | 淮阴工学院 | Safety early warning method and system based on multilane vehicle distance detection |
CN112686209A (en) * | 2021-01-25 | 2021-04-20 | 深圳市艾为智能有限公司 | Vehicle rear blind area monitoring method based on wheel identification |
CN112880642A (en) * | 2021-03-01 | 2021-06-01 | 苏州挚途科技有限公司 | Distance measuring system and distance measuring method |
CN113221739A (en) * | 2021-05-12 | 2021-08-06 | 中国科学技术大学 | Monocular vision-based vehicle distance measuring method |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102661733A (en) * | 2012-05-28 | 2012-09-12 | 天津工业大学 | Front vehicle ranging method based on monocular vision |
CN202614463U (en) * | 2012-05-09 | 2012-12-19 | 齐齐哈尔大学 | A temperature drift calibration device of a piezoresistive pressure sensor |
CN102865824A (en) * | 2012-09-18 | 2013-01-09 | 北京经纬恒润科技有限公司 | Method and device for calculating relative distance between vehicles |
KR20160037424A (en) * | 2014-09-29 | 2016-04-06 | 동명대학교산학협력단 | A Novel Multi-view Face Detection Method Based on Improved Real Adaboost Algorithm |
CN105574552A (en) * | 2014-10-09 | 2016-05-11 | 东北大学 | Vehicle ranging and collision early warning method based on monocular vision |
CN106524908A (en) * | 2016-10-17 | 2017-03-22 | 湖北文理学院 | Measurement method for machine tool total travel space errors |
CN107194045A (en) * | 2017-05-08 | 2017-09-22 | 北京航空航天大学 | Ripple modeling method is disturbed before a kind of refueled aircraft for air refuelling |
-
2017
- 2017-10-09 CN CN201710930473.5A patent/CN107796373B/en not_active Expired - Fee Related
Patent Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN202614463U (en) * | 2012-05-09 | 2012-12-19 | 齐齐哈尔大学 | A temperature drift calibration device of a piezoresistive pressure sensor |
CN102661733A (en) * | 2012-05-28 | 2012-09-12 | 天津工业大学 | Front vehicle ranging method based on monocular vision |
CN102865824A (en) * | 2012-09-18 | 2013-01-09 | 北京经纬恒润科技有限公司 | Method and device for calculating relative distance between vehicles |
KR20160037424A (en) * | 2014-09-29 | 2016-04-06 | 동명대학교산학협력단 | A Novel Multi-view Face Detection Method Based on Improved Real Adaboost Algorithm |
CN105574552A (en) * | 2014-10-09 | 2016-05-11 | 东北大学 | Vehicle ranging and collision early warning method based on monocular vision |
CN106524908A (en) * | 2016-10-17 | 2017-03-22 | 湖北文理学院 | Measurement method for machine tool total travel space errors |
CN107194045A (en) * | 2017-05-08 | 2017-09-22 | 北京航空航天大学 | Ripple modeling method is disturbed before a kind of refueled aircraft for air refuelling |
Non-Patent Citations (2)
Title |
---|
Yang Wei, Wei Lang: "Research on longitudinal inter-vehicle distance detection based on monocular vision", 《自动化测试技术》 * |
Duan Xiang, Yang Wei: "Research on front vehicle image recognition and longitudinal safety domain control methods", 《交通运输》 * |
Cited By (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108759667A (en) * | 2018-05-29 | 2018-11-06 | 福州大学 | Front truck distance measuring method based on monocular vision and image segmentation under vehicle-mounted camera |
CN109272536A (en) * | 2018-09-21 | 2019-01-25 | 浙江工商大学 | A lane line vanishing point tracking method based on Kalman filtering |
CN109272536B (en) * | 2018-09-21 | 2021-11-09 | 浙江工商大学 | Lane line vanishing point tracking method based on Kalman filtering |
CN109752709A (en) * | 2019-01-22 | 2019-05-14 | 武汉鸿瑞达信息技术有限公司 | A kind of distance measurement method and device based on image |
CN112365741A (en) * | 2020-10-23 | 2021-02-12 | 淮阴工学院 | Safety early warning method and system based on multilane vehicle distance detection |
CN112365741B (en) * | 2020-10-23 | 2021-09-28 | 淮阴工学院 | Safety early warning method and system based on multilane vehicle distance detection |
CN112686209A (en) * | 2021-01-25 | 2021-04-20 | 深圳市艾为智能有限公司 | Vehicle rear blind area monitoring method based on wheel identification |
CN112880642A (en) * | 2021-03-01 | 2021-06-01 | 苏州挚途科技有限公司 | Distance measuring system and distance measuring method |
CN113221739A (en) * | 2021-05-12 | 2021-08-06 | 中国科学技术大学 | Monocular vision-based vehicle distance measuring method |
Also Published As
Publication number | Publication date |
---|---|
CN107796373B (en) | 2020-07-28 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107796373B (en) | Distance measurement method based on monocular vision of front vehicle driven by lane plane geometric model | |
CN109100741B (en) | A target detection method based on 3D lidar and image data | |
WO2021004548A1 (en) | Vehicle speed intelligent measurement method based on binocular stereo vision system | |
CN108444390B (en) | Unmanned automobile obstacle identification method and device | |
CN103559791B (en) | A kind of vehicle checking method merging radar and ccd video camera signal | |
US20200041284A1 (en) | Map road marking and road quality collecting apparatus and method based on adas system | |
CN114359181B (en) | Intelligent traffic target fusion detection method and system based on image and point cloud | |
CN113156421A (en) | Obstacle detection method based on information fusion of millimeter wave radar and camera | |
CN113850102B (en) | Vehicle visual inspection method and system based on millimeter wave radar assistance | |
CN108596058A (en) | Running disorder object distance measuring method based on computer vision | |
CN107590438A (en) | A kind of intelligent auxiliary driving method and system | |
CN107609486A (en) | To anti-collision early warning method and system before a kind of vehicle | |
CN103499337B (en) | Vehicle-mounted monocular camera distance and height measuring device based on vertical target | |
CN105225482A (en) | Based on vehicle detecting system and the method for binocular stereo vision | |
CN104700414A (en) | Rapid distance-measuring method for pedestrian on road ahead on the basis of on-board binocular camera | |
US11783507B2 (en) | Camera calibration apparatus and operating method | |
CN112232139B (en) | An obstacle avoidance method based on the combination of Yolo v4 and Tof algorithm | |
CN107463890A (en) | A kind of Foregut fermenters and tracking based on monocular forward sight camera | |
Sehestedt et al. | Robust lane detection in urban environments | |
CN107688174A (en) | A kind of image distance-finding method, system, storage medium and vehicle-mounted visually-perceptible equipment | |
CN112683228A (en) | Monocular camera ranging method and device | |
CN114155511A (en) | Environmental information acquisition method for automatically driving automobile on public road | |
CN118038226A (en) | A road safety monitoring method based on LiDAR and thermal infrared visible light information fusion | |
CN112733678A (en) | Ranging method, ranging device, computer equipment and storage medium | |
CN111256651B (en) | Week vehicle distance measuring method and device based on monocular vehicle-mounted camera |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication ||
SE01 | Entry into force of request for substantive examination ||
GR01 | Patent grant ||
TR01 | Transfer of patent right |
Effective date of registration: 20210129 Address after: 201600 no.216 GANGYE Road, Xiaokunshan Town, Songjiang District, Shanghai Patentee after: Shanghai Yingdong Technology Development Co.,Ltd. Address before: 710064 No. 33, South Second Ring Road, Shaanxi, Xi'an Patentee before: CHANG'AN University |
|
CF01 | Termination of patent right due to non-payment of annual fee |
Granted publication date: 20200728 Termination date: 20211009 |
|