CN103559791B - Vehicle detection method fusing radar and CCD camera signals - Google Patents

Vehicle detection method fusing radar and CCD camera signals

Info

Publication number
CN103559791B
Authority
CN
China
Prior art keywords
vehicle
image
coordinate system
radar
coordinates
Prior art date
Application number
CN201310530503.5A
Other languages
Chinese (zh)
Other versions
CN103559791A (en)
Inventor
鲍泓
徐成
田仙仙
张璐璐
Original Assignee
北京联合大学
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 北京联合大学
Priority to CN201310530503.5A
Publication of CN103559791A
Application granted
Publication of CN103559791B

Abstract

The present invention discloses a vehicle detection method that fuses radar and CCD camera signals, comprising: inputting radar and CCD camera signals; performing camera calibration to obtain the projection matrix between road-plane coordinates and image coordinates, and converting road-plane world coordinates into image-plane coordinates; building positive and negative sample sets suited to a vehicle HOG descriptor; performing batch feature extraction on the vehicle sample set to build the HOG sample set; building a linear support vector machine (SVM) classification model and training the SVM; extracting the region of interest in the video image corresponding to each obstacle detected by the radar and feeding it into the SVM classifier to determine the target type; outputting the recognition result; and measuring, with the radar, the distance to each target judged to be a vehicle. By jointly detecting with radar and CCD camera signals, the invention not only obtains the depth information of the vehicle but also detects its contour information well, improving the reliability and accuracy of vehicle detection and localization.

Description

Vehicle detection method fusing radar and CCD camera signals

Technical Field

[0001] The present invention belongs to the field of computer vision and relates to a method for detecting vehicles ahead of an intelligent vehicle based on multi-sensor information fusion, and in particular to a vehicle detection method fusing radar and CCD camera signals.

Background Art

[0002] In intelligent driving technology, the mainstream environment-sensing sensors are inertial navigation, laser radar, millimeter-wave radar, infrared cameras and color CCD cameras.

[0003] Radar can quickly and accurately acquire the distance to vehicles on the two-dimensional horizontal plane ahead of the intelligent vehicle; it has a high operating frequency, high distance-measurement accuracy and a relatively low price, and can meet the real-time requirements of vehicle detection. However, radar acquires only a small amount of information about a vehicle: used alone, it can only detect depth information on the scanning plane.

[0004] A video sensor provides two-dimensional visible-light images. By detecting known salient features of a specific object (a vehicle), such as edges, corners, texture, position and shape, it can detect vehicles effectively under certain conditions. Vehicle detection with a video sensor usually involves three stages: first the target object is located in the image, then the object is classified and recognized, and finally the vehicle is tracked. The drawback of a video sensor is that it cannot provide the distance to the vehicle.

[0005] In practical intelligent-vehicle detection applications, it is difficult for any single detection sensor to detect and localize vehicles completely and accurately, so it is necessary to fuse data from multiple sensors so that their strengths complement each other and the reliability and accuracy of vehicle detection and localization are improved.

Summary of the Invention

[0006] To address the problem in the prior art that a single detection sensor can hardly detect and localize vehicles completely and accurately, the present invention proposes a vehicle detection method fusing radar and CCD camera signals.

[0007] The multi-sensor setup of the present invention comprises a single-line laser radar and a video sensor. The single-line laser radar can quickly and accurately acquire the distance to vehicles on the two-dimensional horizontal plane ahead of the intelligent vehicle; the video sensor provides two-dimensional visible-light images of the vehicle, from which the type of a target can be determined.

[0008] A vehicle detection method fusing radar and CCD camera signals comprises the following steps:

[0009] Step one: input the road-obstacle information signal from the radar and the road-plane image signal from the CCD camera.

[0010] Step two: calibrate the video-sensor camera to obtain the projection matrix between road-plane coordinates and image coordinates, and convert road-plane world coordinates into image-plane coordinates. Transform the road-plane coordinate system in which the single-line laser radar lies into the reference frame of the selected calibration board, so that the image coordinates corresponding to the road-plane vehicle coordinates detected by the radar are obtained.

[0011] Step three: build positive and negative sample sets suited to a vehicle HOG (Histogram of Oriented Gradients) feature descriptor.

[0012] The HOG feature descriptor is a feature descriptor used in the fields of computer vision and image processing for object detection; it computes statistics of the direction information of local image gradients.

[0013] Step four: apply the HOG algorithm to the vehicle sample set for batch feature extraction, thereby building the HOG feature sample set.

[0014] Step five: build a linear support vector machine (SVM) classification model and train the SVM with the feature sample set.

[0015] Step six: extract the region of the video image corresponding to each obstacle detected by the radar, perform HOG feature extraction on it, and feed the features into the trained SVM classifier to determine the target type.

[0016] Step seven: output the target object recognition result of the SVM classifier.

[0017] Step eight: output the distance, measured by the single-line laser radar, to each target ahead of the intelligent vehicle that has been judged to be a vehicle.

[0018] The beneficial effects of the present invention are mainly as follows: joint detection is performed using the image information of the vehicle obtained by the video sensor and the distance information of the vehicle measured by the single-line laser radar, which not only obtains the depth information of the vehicle but also detects the contour information of the vehicle well, so that the strengths of the two sensors complement each other and the reliability and accuracy of vehicle detection and localization are improved.

Brief Description of the Drawings

[0019] FIG. 1 is a flowchart of the vehicle detection method fusing radar and CCD camera signals;

[0020] FIG. 2 is a schematic diagram of the transformation from the world coordinate system to the camera coordinate system;

[0021] FIG. 3 is a schematic diagram of the transformation from the camera coordinate system to the image coordinate system;

[0022] FIG. 4 is a schematic diagram of the horizontal and vertical gradients of a vehicle image.

Detailed Description of the Embodiments

[0023] The present invention is further described below with reference to the accompanying drawings and embodiments.

[0024] The vehicle detection method of the present invention is implemented by software stored in a computer mounted in the trunk of the vehicle. The radar is fixed horizontally at the front license-plate position of the vehicle, and the CCD camera is mounted at the interior rear-view mirror position.

[0025] FIG. 1 is a flowchart of the vehicle detection method fusing radar and CCD camera signals, which comprises the following steps:

[0026] Step one: input the radar and CCD camera signals.

[0027] Step two: perform camera calibration (including the intrinsic and extrinsic camera parameters) to obtain the projection matrix between road-plane coordinates and image coordinates, and convert road-plane world coordinates into image-plane coordinates.

[0028] The projection of a point in the world coordinate system to a point in image coordinates is carried out in two steps:

[0029] (1) Transform the coordinates (Xw, Yw, Zw) of a point in the world coordinate system (or reference coordinate system) into the camera coordinate system (Xc, Yc, Zc). The transformation process is shown in FIG. 2, and the transformation formula is:

[0030] [Xc, Yc, Zc]^T = R · [Xw, Yw, Zw]^T + T

[0031] where R is the rotation matrix from the world coordinate system to the camera coordinate system and T is the translation matrix from the world coordinate system to the camera coordinate system.

[0032] (2) Transform from the camera coordinate system to the image coordinate system. The transformation process is shown in FIG. 3, and the transformation formula is:

[0033] s · [u, v, 1]^T = [[fx, 0, u0], [0, fy, v0], [0, 0, 1]] · [Xc, Yc, Zc]^T

[0034] where fx and fy are the focal lengths in pixels in the horizontal and vertical directions, u0 and v0 are the horizontal and vertical coordinates of the principal point (the intersection of the camera's principal axis with the image plane), and s is the projection parameter (an intermediate quantity that cancels out during the computation).

[0035] The projection from a point in the world coordinate system to a point in image coordinates is therefore:

[0036] s · [u, v, 1]^T = [[fx, 0, u0], [0, fy, v0], [0, 0, 1]] · (R · [Xw, Yw, Zw]^T + T)

[0037] The coordinate system defined by the checkerboard is chosen as the reference coordinate system, and a corresponding rigid-body transformation is established for each viewing angle. Given initial camera intrinsic parameters as the initial values of the solving process, the intrinsic parameters are refined so that the reprojection error is as small as possible. After the intrinsic camera parameters have been calibrated, the checkerboard coordinate system of an image of a calibration board lying on the ground plane is finally chosen as the reference coordinate system.
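The two-step projection above can be written down in a few lines of Python/NumPy. This is an illustrative sketch only, not part of the original disclosure: the function and variable names are assumptions, and R, T, fx, fy, u0 and v0 stand for the extrinsic and intrinsic parameters obtained from the checkerboard calibration.

```python
import numpy as np

def project_world_to_image(Pw, R, T, fx, fy, u0, v0):
    """Project a world point Pw = (Xw, Yw, Zw) to pixel coordinates (u, v).

    R (3x3) and T (3,) are the world-to-camera rotation and translation;
    fx, fy, u0, v0 are the intrinsic parameters in pixels.
    """
    Pw = np.asarray(Pw, dtype=float)
    # Step (1): world coordinate system -> camera coordinate system
    Pc = R @ Pw + T
    # Step (2): camera coordinate system -> image coordinate system
    K = np.array([[fx, 0.0, u0],
                  [0.0, fy, v0],
                  [0.0, 0.0, 1.0]])
    p = K @ Pc                    # homogeneous pixel coordinates (s*u, s*v, s)
    s = p[2]                      # the projection parameter s cancels out here
    return p[0] / s, p[1] / s
```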

[0038] By transforming between the road-plane coordinate system in which the radar lies and the coordinate system of the selected calibration board, the transformation matrix between the road-plane vehicle coordinates detected by the radar and the image coordinates, and the transformation matrix between vehicle height and image coordinates, are obtained, so that the position of the vehicle on the image plane and the height of the vehicle at that position can be determined.
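If the calibration-board reference frame is placed on the road surface, the radar detections satisfy Zw = 0 and the projection reduces to a 3×3 homography, which is a convenient form for the road-plane-to-image transformation matrix mentioned above. A minimal sketch under that assumption (helper names are illustrative):

```python
import numpy as np

def road_plane_homography(K, R, T):
    """Homography mapping road-plane points (Xw, Yw, 1) to pixels; the third
    column of [R | T] is dropped because Zw = 0 on the road plane."""
    return K @ np.column_stack((R[:, 0], R[:, 1], T))

def radar_point_to_pixel(H, xw, yw):
    """Map a radar detection given in the calibration-board frame to pixel coordinates."""
    p = H @ np.array([xw, yw, 1.0])
    return p[0] / p[2], p[1] / p[2]
```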

[0039] Step three: build positive and negative sample sets suited to the vehicle HOG feature descriptor.

[0040] This embodiment uses the vehicle sample library provided by CaltechGraz; all vehicle samples are of size 64×64, the HOG features of the training samples are extracted in batch, and the HOG feature data come from the CCD.

[0041] Step four: apply the HOG algorithm to the vehicle sample set for batch feature extraction, thereby building the HOG feature sample set.
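A hedged sketch of this batch-extraction step is given below, using scikit-image's hog implementation as a stand-in for the descriptor detailed in paragraphs [0042] to [0056]; the directory layout, file pattern and labels are illustrative assumptions.

```python
import glob

import cv2
import numpy as np
from skimage.feature import hog

def build_hog_sample_set(pos_dir, neg_dir):
    """Batch-extract HOG features for positive (vehicle) and negative samples.

    Samples are resized to the 64x64 template; 16x16 blocks of 8x8 cells with
    9 orientation bins match the layout described in the text."""
    X, y = [], []
    for label, folder in ((1, pos_dir), (0, neg_dir)):
        for path in glob.glob(folder + "/*.png"):
            img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
            if img is None:
                continue
            img = cv2.resize(img, (64, 64))
            feat = hog(img, orientations=9, pixels_per_cell=(8, 8),
                       cells_per_block=(2, 2), block_norm="L1")
            X.append(feat)
            y.append(label)
    return np.array(X), np.array(y)
```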

[0042] HOG features are statistics of the gradient strength over the gradient directions within rectangular regions. A schematic diagram of the horizontal and vertical gradients of a vehicle image is shown in FIG. 4.

[0043] A vehicle template of size 64×64 is used, and the template samples are divided into blocks of size 16×16. With the block height denoted H and the width denoted W, the present invention uses a block feature extraction method with H:W = 1:1: each block is divided into 4 identical cells of size 8×8, and the feature of each cell is the sum of the feature vectors of its 64 interior pixels.

[0044] Let I(x, y) denote the gray value of image I at the pixel (x, y). The statistical features of gradient strength over the gradient directions in a rectangular region are computed as follows:

[0045] Gx(x, y) = I(x+1, y) - I(x-1, y)

[0046] Gy(x, y) = I(x, y+1) - I(x, y-1)

[0047] G(x, y) = sqrt(Gx(x, y)^2 + Gy(x, y)^2)

[0048] α(x, y) = arctan(Gy(x, y), Gx(x, y))

[0049] where Gx(x, y) and Gy(x, y) denote the gradient amplitudes at pixel (x, y) in the horizontal and vertical directions, G(x, y) is the gradient strength at pixel (x, y), and α(x, y) is the gradient direction at pixel (x, y).
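A minimal NumPy sketch of the gradient formulas above (illustrative only; the image is assumed to be a grayscale array indexed as I[y, x], and border pixels are left at zero):

```python
import numpy as np

def image_gradients(I):
    """Horizontal/vertical gradients, gradient strength and gradient direction
    of a grayscale image, following the formulas above."""
    I = I.astype(np.float64)
    Gx = np.zeros_like(I)
    Gy = np.zeros_like(I)
    Gx[:, 1:-1] = I[:, 2:] - I[:, :-2]     # Gx(x,y) = I(x+1,y) - I(x-1,y)
    Gy[1:-1, :] = I[2:, :] - I[:-2, :]     # Gy(x,y) = I(x,y+1) - I(x,y-1)
    G = np.sqrt(Gx ** 2 + Gy ** 2)         # gradient strength
    alpha = np.arctan2(Gy, Gx)             # gradient direction (radians)
    return Gx, Gy, G, alpha
```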

[0050] The HOG feature divides the gradient directions uniformly into 9 bins. The gradient amplitude Ak(x, y) in the k-th direction is:

[0051] Ak(x, y) = G(x, y) if α(x, y) ∈ bink(x, y), and Ak(x, y) = 0 otherwise

[0052] where bink(x, y) denotes the k-th direction interval of the gradient direction. In this way, the gradient feature over all directions at pixel (x, y) can be represented by a 9-dimensional vector Ak(x, y).

[0053] To eliminate the influence of illumination and other factors, each cell within a block is normalized:

[0054] f(Cm, k) = Σ_{(x,y)∈Cm} Ak(x, y) / ( Σ_{(x,y)∈B} Σ_{j=1..9} Aj(x, y) + ε )

[0055] where f(Cm, k) is the normalized strength of the k-th bin in the m-th cell Cm, i.e. the proportion of the cumulative strength of the k-th bin of Cm within the block B to which Cm belongs, and ε is a small number introduced to avoid a zero denominator.

[0056] From the expression for f(Cm, k), the feature vector extracted for each cell is 9-dimensional, and the feature of each block is the 36-dimensional vector obtained by concatenating the features of its 4 cells.
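The binning and per-block normalization can be sketched as follows; this reuses the image_gradients helper from the earlier sketch, tiles the 64x64 template into non-overlapping 16x16 blocks as described above, and all names are illustrative rather than part of the original disclosure.

```python
import numpy as np

def hog_descriptor(I, cell=8, bins=9, eps=1e-6):
    """HOG feature vector for a 64x64 grayscale template: 8x8 cells, 16x16
    blocks (2x2 cells), 9 orientation bins, per-block normalization."""
    _, _, G, alpha = image_gradients(I)            # helper from the earlier sketch
    ang = np.rad2deg(alpha) % 180.0                # unsigned gradient direction
    k = np.minimum((ang / (180.0 / bins)).astype(int), bins - 1)

    h, w = I.shape
    cy, cx = h // cell, w // cell
    cell_hist = np.zeros((cy, cx, bins))
    for i in range(cy):
        for j in range(cx):
            ks = k[i*cell:(i+1)*cell, j*cell:(j+1)*cell].ravel()
            gs = G[i*cell:(i+1)*cell, j*cell:(j+1)*cell].ravel()
            # Ak(x,y): accumulate gradient strength into the bin of its direction
            np.add.at(cell_hist[i, j], ks, gs)

    feats = []
    for i in range(0, cy, 2):                      # non-overlapping 2x2-cell blocks
        for j in range(0, cx, 2):
            block = cell_hist[i:i+2, j:j+2, :]
            # f(Cm,k): each cell bin divided by the cumulative strength of its block
            feats.append((block / (block.sum() + eps)).ravel())   # 36 values per block
    return np.concatenate(feats)
```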

[0057] Step five: build a linear SVM classifier and train it with the feature sample set from step four.
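A training sketch using scikit-learn's LinearSVC as a stand-in for the linear SVM classifier described here; X and y are assumed to come from the batch-extraction sketch above, and the directory names are illustrative.

```python
from sklearn.model_selection import train_test_split
from sklearn.svm import LinearSVC

# HOG feature matrix and vehicle/non-vehicle labels from the batch-extraction sketch
X, y = build_hog_sample_set("samples/vehicles", "samples/negatives")

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

clf = LinearSVC(C=1.0)             # linear SVM classification model
clf.fit(X_train, y_train)          # train on the HOG feature sample set
print("hold-out accuracy:", clf.score(X_test, y_test))
```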

[0058] Step six: target type determination. The position information in the road plane of a vehicle obtained by the radar is converted into image coordinate information through the matrix transformation, HOG features are extracted from the target-region image, and the SVM classifier trained in step five is used for prediction to judge whether the target is a vehicle.
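An end-to-end sketch of this step, combining the earlier helper sketches (radar_point_to_pixel, hog_descriptor) with the trained classifier. It is illustrative only: the fixed 64x64 region of interest is a simplification, whereas the method described above derives the region height from the vehicle-height transformation matrix.

```python
import cv2
import numpy as np

def classify_radar_detection(frame_gray, H_plane, clf, xw, yw, roi_size=64):
    """Judge whether a radar detection at road-plane coordinates (xw, yw) is a vehicle.

    H_plane is the road-plane-to-image homography and clf the trained linear SVM;
    radar_point_to_pixel and hog_descriptor are the helpers sketched earlier."""
    u, v = radar_point_to_pixel(H_plane, xw, yw)
    u, v = int(round(u)), int(round(v))
    half = roi_size // 2
    # Take a region above the ground-contact point returned by the radar
    roi = frame_gray[max(v - roi_size, 0):v, max(u - half, 0):u + half]
    if roi.size == 0:
        return False
    roi = cv2.resize(roi, (64, 64))
    feat = hog_descriptor(roi)
    return clf.predict(feat.reshape(1, -1))[0] == 1
```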

[0059] Step seven: output the target object recognition result of the SVM classifier.

[0060] Step eight: output the distance, measured by the single-line laser radar, to each target ahead of the intelligent vehicle that has been judged to be a vehicle.

Claims (1)

1. A vehicle detection method fusing radar and CCD camera signals, characterized in that it comprises the following steps:

Step one: input the road-obstacle information signal from the radar and the road-plane image signal from the CCD camera.

Step two: perform camera calibration, including the intrinsic and extrinsic camera parameters, to obtain the projection matrix between road-plane coordinates and image coordinates, and convert road-plane world coordinates into image-plane coordinates.

The projection of a point in the world coordinate system to a point in image coordinates is carried out in two steps:

(1) Transform the coordinates (Xw, Yw, Zw) of a point in the world coordinate system into the camera coordinate system (Xc, Yc, Zc):

[Xc, Yc, Zc]^T = R · [Xw, Yw, Zw]^T + T

where R is the rotation matrix from the world coordinate system to the camera coordinate system and T is the translation matrix from the world coordinate system to the camera coordinate system.

(2) Transform from the camera coordinate system to the image coordinate system:

s · [u, v, 1]^T = [[fx, 0, u0], [0, fy, v0], [0, 0, 1]] · [Xc, Yc, Zc]^T

where fx and fy are the focal lengths in pixels in the horizontal and vertical directions, u0 and v0 are the horizontal and vertical coordinates of the intersection of the camera's principal axis with the image plane, and s is the projection parameter.

The projection from a point in the world coordinate system to a point in image coordinates is therefore:

s · [u, v, 1]^T = [[fx, 0, u0], [0, fy, v0], [0, 0, 1]] · (R · [Xw, Yw, Zw]^T + T)

The coordinate system defined by the checkerboard is chosen as the reference coordinate system, and a corresponding rigid-body transformation is established for each viewing angle; given initial camera intrinsic parameters as the initial values of the solving process, the intrinsic parameters are solved so that the reprojection error is as small as possible. After the intrinsic camera parameters have been calibrated, the checkerboard coordinate system of an image of a calibration board lying on the ground plane is finally chosen as the reference coordinate system.

By transforming between the road-plane coordinate system in which the radar lies and the coordinate system of the selected calibration board, the transformation matrix between the road-plane vehicle coordinates detected by the radar and the image coordinates, and the transformation matrix between vehicle height and image coordinates, are obtained, so that the position of the vehicle on the image plane and the height of the vehicle at that position can be determined.

Step three: build positive and negative sample sets suited to the vehicle HOG feature descriptor.

Step four: apply the HOG algorithm to the vehicle sample set for batch feature extraction, thereby building the HOG feature sample set.

HOG features are statistics of the gradient strength over the gradient directions within rectangular regions.

A vehicle template of size 64×64 is used, and the template samples are divided into blocks of size 16×16; with the block height denoted H and the width denoted W, a block feature extraction method with H:W = 1:1 is used: each block is divided into 4 identical cells of size 8×8, and the feature of each cell is the sum of the feature vectors of its 64 interior pixels.

Let I(x, y) denote the gray value of image I at the pixel (x, y); the statistical features of gradient strength over the gradient directions in a rectangular region are computed as follows:

Gx(x, y) = I(x+1, y) - I(x-1, y)
Gy(x, y) = I(x, y+1) - I(x, y-1)
G(x, y) = sqrt(Gx(x, y)^2 + Gy(x, y)^2)
α(x, y) = arctan(Gy(x, y), Gx(x, y))

where Gx(x, y) and Gy(x, y) denote the gradient amplitudes at pixel (x, y) in the horizontal and vertical directions, G(x, y) is the gradient strength at pixel (x, y), and α(x, y) is the gradient direction at pixel (x, y).

The HOG feature divides the gradient directions uniformly into 9 bins; the gradient amplitude Ak(x, y) in the k-th direction is:

Ak(x, y) = G(x, y) if α(x, y) ∈ bink(x, y), and Ak(x, y) = 0 otherwise

where bink(x, y) denotes the k-th direction interval of the gradient direction; in this way, the gradient feature over all directions at pixel (x, y) can be represented by a 9-dimensional vector Ak(x, y).

To eliminate the influence of illumination, each cell within a block is normalized:

f(Cm, k) = Σ_{(x,y)∈Cm} Ak(x, y) / ( Σ_{(x,y)∈B} Σ_{j=1..9} Aj(x, y) + ε )

where f(Cm, k) is the normalized strength of the k-th bin in the m-th cell Cm, i.e. the proportion of the cumulative strength of the k-th bin of Cm within the block B to which Cm belongs, and ε is a small number introduced to avoid a zero denominator; from the expression for f(Cm, k), the feature vector extracted for each cell is 9-dimensional, and the feature of each block is the 36-dimensional vector obtained by concatenating the features of its 4 cells.

Step five: build a linear SVM classifier and train it with the feature sample set from step four.

Step six: target type determination; the position information in the road plane of a vehicle obtained by the radar is converted into image coordinate information through the matrix transformation, features are extracted from the region-of-interest image, and the SVM classifier trained in step five is used for prediction to judge whether the target is a vehicle.

Step seven: output the target object recognition result of the SVM classifier.

Step eight: use the single-line laser radar to measure the distance to each target ahead of the intelligent vehicle that has been judged to be a vehicle.
CN201310530503.5A 2013-10-31 2013-10-31 Vehicle detection method fusing radar and CCD camera signals CN103559791B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310530503.5A CN103559791B (en) 2013-10-31 2013-10-31 Vehicle detection method fusing radar and CCD camera signals

Publications (2)

Publication Number Publication Date
CN103559791A CN103559791A (en) 2014-02-05
CN103559791B (en) 2015-11-18

Family

ID=50014028

Country Status (1)

Country Link
CN (1) CN103559791B (en)

Legal Events

Date Code Title Description
C06 Publication
C14 Grant of patent or utility model