CN106023270A - Video vehicle detection method based on locally symmetric features - Google Patents
- Publication number
- CN106023270A (application number CN201610338718.0A)
- Authority
- CN
- China
- Prior art keywords
- vehicle
- feature
- shadow
- corner point
- corner
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/46—Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
- G06V10/462—Salient features, e.g. scale invariant feature transforms [SIFT]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20092—Interactive image processing based on input by user
- G06T2207/20104—Interactive definition of region of interest [ROI]
Abstract
The invention discloses a video vehicle detection method based on locally symmetric features. A set of local invariant features is used to describe the vehicle target, which effectively avoids the segmentation problem. Compared with the prior art, the method uses the vehicle's symmetry as the key cue for clustering its local features and thereby locates the vehicle's center position, avoiding the high algorithmic complexity incurred by conventional clustering algorithms. The method offers high detection accuracy, a simple computation process, and strong real-time performance, and can effectively detect and recognize vehicle targets in both static images and video, so it has broad application prospects.
Description
Technical Field
The invention belongs to the technical field of video detection, and in particular relates to a video vehicle detection method based on locally symmetric features.
Background Art
With rapid economic development and the sharp growth in the number of vehicles, both developed and developing countries suffer from traffic problems. Obtaining traffic information, guiding traffic effectively, relieving congestion, and thereby preventing traffic accidents are becoming ever more important. As an important research topic in computer vision and image processing, video-based vehicle detection has attracted increasing attention because of its convenience and speed.
To detect and recognize vehicle targets, the targets usually must first be extracted. A common extraction approach uses the vehicle's motion characteristics to separate the target from the traffic background; however, this approach is susceptible to changes in external conditions such as lighting, weather, and vehicle occlusion, and segmenting moving targets from complex backgrounds is very difficult. Another approach segments using the vehicle's low-level features (such as edge, color, and shape features), but it rarely achieves ideal segmentation results. Both approaches rely mainly on the vehicle's global characteristics and inevitably require segmenting suspected vehicle targets, so segmentation accuracy directly affects the recognition result. In real traffic scenes, moving vehicles are easily affected by weather and illumination changes, congestion, occlusion, and shadows, which makes segmentation of the target vehicle even more difficult.
Summary of the Invention
In view of the deficiencies and defects of the prior art, the object of the present invention is to propose a video vehicle detection method based on locally symmetric features. The method describes the vehicle target with a set of local invariant features, which effectively avoids the segmentation problem, and uses the vehicle's symmetry as the key cue for clustering its local features, thereby locating the vehicle's center position.
To achieve the above object, the present invention adopts the following technical scheme: a video vehicle detection method based on locally symmetric features, comprising the following steps:
Step 1: on the initial frame, manually set the ROI region along the lane lines, and record the coordinates of every pixel on the ROI boundary;
Step 2: inside the ROI region, compute the feature corners p_i of the video image in the current frame, and record the position coordinates of each p_i;
Step 3: centering on each feature corner obtained in Step 2, construct a square region, and use a feature-description operator to construct the feature vector V_{p_i} of each feature corner;
Step 4: construct the feature vector V_{q_i} of the horizontally symmetric counterpart q_i of each feature corner;
Step 5: for any feature corner p_i, compute the distances between it and the horizontally symmetric counterparts q_j of all other feature corners; denote the minimum distance Min(p_i) and the second-smallest distance Min_Sec(p_i), with Min(p_i) computed as shown in formula (1.1):
Min(p_i) = min_{j≠i} d(V_{p_i}, V_{q_j})  (1.1)
where V_{p_i} denotes the feature vector of the i-th feature corner, V_{q_j} denotes the feature vector of the j-th horizontally symmetric corner q_j, q_j being the horizontally symmetric counterpart of feature corner p_j, and d denotes the distance between two feature vectors;
Step 6: if feature corner p_i satisfies condition (1.2), then p_i and p_j form a symmetric corner pair, where p_j is the feature corner whose horizontally symmetric counterpart q_j is nearest to p_i;
Step 7: traverse all feature corners p_i, repeating Steps 5 and 6 until all symmetric corner pairs have been found;
Step 8: denoting each symmetric corner pair by ⟨p_i, p_j⟩, compute the x-coordinate of each pair's midpoint, build the statistical histogram of these midpoint x-coordinates, take the peak of the histogram as the initial position x_vehicle of the candidate vehicle centerline, and compute the variance σ² of the histogram;
Step 9: judge whether each symmetric corner pair belongs to a single vehicle; if the x-coordinate of the pair's midpoint satisfies formula (1.3), retain the pair, otherwise delete it;
Step 10: compute the mean μ_vehicle of the midpoint positions of all symmetric corner pairs retained in Step 9; μ_vehicle is the centerline position of the candidate vehicle;
Step 11: select several frames of the current video, manually select the shadow region under the vehicle, model it using the shadow's geometric, brightness, and color information, and train the mean μ_shadow and variance σ_shadow of the shadow samples, together with the pixel counts l and h of the shadow region in the x and y directions;
Step 12: using the Gaussian mixture model of formula (1.4), test the pixels of the suspected vehicle region on both sides of the centerline, where the suspected vehicle region is x ∈ [μ_vehicle − l, μ_vehicle + l]; here p_i denotes a tested pixel and T_shadow is the mean of G_shadow(p_i) over the shadow sample set. A tested pixel satisfying formula (1.5) is judged a shadow point. If the shadow points are contiguous and satisfy |N_x − l| < 0.2·l and |N_y − h| < 0.1·h, where N_x and N_y are the numbers of contiguous shadow points in the x and y directions respectively, then the centerline corresponding to that shadow region is judged to be the centerline of a vehicle; otherwise it is not, and detection ends;
G_shadow(p_i) > T_shadow  (1.5)
Step 13: suppose the vehicle target region is expressed as R = (l_R, r_R, u_R, b_R), where l_R and r_R are the left and right boundary values of the target region centered on μ_vehicle, and u_R and b_R are its upper and lower boundary values; l_R, r_R, u_R, and b_R are determined as follows:
Computing b_R: count the number of shadow pixels in each row of the under-vehicle shadow region and take the maximum as the width of b_R; bounded by this width, use the vertical Sobel operator to extract the image edges in the ROI region above the under-vehicle shadow, and take the centerline between the vehicle's rear bumper and the under-vehicle shadow region as the position of b_R;
Computing r_R and l_R: the values of r_R and l_R are determined from the vehicle centerline position and the width of b_R, as shown in formula (1.6);
l_R = μ_vehicle − width(b_R)/2,  r_R = μ_vehicle + width(b_R)/2  (1.6)
Computing u_R: suppose the vehicle height is h, with height and width satisfying the proportional relation h = γ(r_R − l_R), where γ is trained from a set of vehicle data; use the vertical Sobel operator to extract the horizontal edges of the image within [l_R, r_R] and compute the horizontal projection of the grayscale image; letting P(y) denote the horizontal projection histogram, the maximum of P(y) within y ∈ [b_R + 0.5h, b_R + 1.5h] gives the value of u_R;
With the above methods, b_R, l_R, r_R, and u_R are obtained in turn, thereby determining the target vehicle.
Further, the feature corner extraction method in Step 2 is the Harris corner detection algorithm.
Further, the feature vector V_{p_i} of each feature corner in Step 3 is constructed as follows: use Haar wavelet templates to compute response values over the square-region image, obtain the Haar wavelet responses along the X and Y directions, and accumulate the responses ∑dx, ∑dy, ∑|dx|, ∑|dy|, thereby obtaining the feature vector V_{p_i} of each feature corner p_i.
Further, the square region constructed in Step 3 is of size 5×5.
Further, the Haar wavelet template in Step 3 is of size 2×2.
Further, the feature vector V_{q_i} of the horizontally symmetric counterpart q_i of each feature corner in Step 4 is constructed from the same accumulated Haar wavelet responses.
Compared with the prior art, the present invention has at least the following beneficial effects: the video vehicle detection method based on locally symmetric features describes the vehicle target with a set of local invariant features, which effectively avoids the segmentation problem; it uses the vehicle's symmetry as the key cue for clustering its local features and thereby locates the vehicle's center position, avoiding the high algorithmic complexity incurred by conventional clustering algorithms; moreover, the method offers high detection accuracy, a simple computation process, and strong real-time performance, and can effectively detect and recognize vehicle targets in static images and video, so it has broad application prospects.
Brief Description of the Drawings
Fig. 1 shows the ROI region set manually along the lane lines.
Fig. 2 is a schematic diagram of the image region used to construct the Harris-corner feature vector and of the Haar wavelet templates, where (a) is the 5×5 image region centered on a Harris corner, and (b) and (c) are the Haar wavelet response templates in the X and Y directions, respectively.
Fig. 3 is a schematic diagram of the vehicle target-region parameters: (a) the original vehicle image; (b) the vehicle's horizontal-edge extraction and horizontal projection.
Fig. 4 shows the vehicle target detection results.
Detailed Description
The present invention is further described below with reference to the accompanying drawings and specific embodiments.
A video vehicle detection method based on locally symmetric features comprises the following steps:
Step 1: as shown in Fig. 1, on the initial frame, manually set the ROI region along the lane lines, i.e., the rectangular (red) frame in Fig. 1, and record the coordinates of every pixel on the ROI boundary;
Step 2: inside the ROI region, use the Harris corner detection algorithm to compute the feature corners p_i of the video image in the current frame, and record the position coordinates of each p_i;
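Step 2's Harris corner extraction can be illustrated with a minimal pure-Python sketch of the corner response R = det(M) − k·trace(M)². The central-difference gradients, the 3×3 summation window, and k = 0.04 are illustrative choices; the patent does not specify these parameters.

```python
def harris_response(img, k=0.04):
    """Harris response R = det(M) - k*trace(M)^2 with a 3x3 summation window."""
    h, w = len(img), len(img[0])
    ix = [[0.0] * w for _ in range(h)]
    iy = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            # Central-difference image gradients.
            ix[y][x] = (img[y][x + 1] - img[y][x - 1]) / 2.0
            iy[y][x] = (img[y + 1][x] - img[y - 1][x]) / 2.0
    R = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            sxx = syy = sxy = 0.0
            for dy in (-1, 0, 1):          # sum gradient products over 3x3 window
                for dx in (-1, 0, 1):
                    gx, gy = ix[y + dy][x + dx], iy[y + dy][x + dx]
                    sxx += gx * gx
                    syy += gy * gy
                    sxy += gx * gy
            R[y][x] = sxx * syy - sxy * sxy - k * (sxx + syy) ** 2
    return R
```

Pixels where R is large and positive are kept as the feature corners p_i; on a straight edge R goes negative, and in flat regions it is near zero.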
Step 3: as shown in Fig. 2, centering on each feature corner obtained in Step 2, construct a 5×5 square region (Fig. 2a) and use 2×2 Haar wavelet templates to compute response values over it, obtaining the Haar wavelet responses along the X direction (Fig. 2b) and the Y direction (Fig. 2c); accumulate the responses ∑dx, ∑dy, ∑|dx|, ∑|dy|, thereby constructing the feature vector V_{p_i} of each feature corner p_i;
Step 4: construct the feature vector V_{q_i} of the horizontally symmetric counterpart q_i of each feature corner;
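Steps 3 and 4 can be sketched as follows: the descriptor (∑dx, ∑dy, ∑|dx|, ∑|dy|) is accumulated from 2×2 Haar responses over the 5×5 patch, and a mirrored descriptor is derived for the corner's horizontal counterpart. The mirrored form (negating ∑dx only) is an assumption consistent with common symmetric-SURF practice, since the patent's construction formula is not reproduced in this copy.

```python
def haar_feature_vector(patch):
    """patch: 5x5 grid of grayscale values; returns (sum_dx, sum_dy, sum|dx|, sum|dy|)."""
    sdx = sdy = sadx = sady = 0.0
    for y in range(4):              # slide the 2x2 Haar template over the 5x5 patch
        for x in range(4):
            # X template: right column minus left column.
            dx = (patch[y][x + 1] + patch[y + 1][x + 1]) - (patch[y][x] + patch[y + 1][x])
            # Y template: bottom row minus top row.
            dy = (patch[y + 1][x] + patch[y + 1][x + 1]) - (patch[y][x] + patch[y][x + 1])
            sdx += dx
            sdy += dy
            sadx += abs(dx)
            sady += abs(dy)
    return (sdx, sdy, sadx, sady)

def mirrored_feature_vector(v):
    """Assumed mirror descriptor: a horizontal flip negates the signed x response."""
    sdx, sdy, sadx, sady = v
    return (-sdx, sdy, sadx, sady)
```

On a patch with a pure left-to-right brightness ramp, the x responses accumulate while the y responses cancel, and the mirrored vector flips only the signed x sum.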
Step 5: for any feature corner p_i, compute the distances between it and the horizontally symmetric counterparts q_j of all other feature corners; denote the minimum distance Min(p_i) and the second-smallest distance Min_Sec(p_i), with Min(p_i) computed as shown in formula (1.1):
Min(p_i) = min_{j≠i} d(V_{p_i}, V_{q_j})  (1.1)
where V_{p_i} denotes the feature vector of the i-th feature corner, V_{q_j} denotes the feature vector of the j-th horizontally symmetric corner q_j, q_j being the horizontally symmetric counterpart of feature corner p_j, and d denotes the distance between two feature vectors;
Step 6: if feature corner p_i satisfies condition (1.2), then p_i and p_j form a symmetric corner pair, where p_j is the feature corner whose horizontally symmetric counterpart q_j is nearest to p_i;
Step 7: traverse all feature corners p_i, repeating Steps 5 and 6 until all symmetric corner pairs have been found;
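Steps 5 through 7 can be sketched as a nearest-versus-second-nearest search over the mirrored descriptors. Condition (1.2) is not reproduced in this copy of the patent, so the sketch assumes a distance-ratio test (accept when the best match is clearly better than the runner-up) with Euclidean distance and an illustrative 0.6 threshold.

```python
import math

def find_symmetric_pairs(vectors, ratio=0.6):
    """vectors: list of descriptors V_{p_i}; returns accepted index pairs (i, j)."""
    # Mirrored descriptor: negate the signed x response (an assumption, see above).
    mirrored = [(-v[0], v[1], v[2], v[3]) for v in vectors]
    pairs = []
    for i, vp in enumerate(vectors):
        # Distances Min(p_i) and Min_Sec(p_i) over all other corners (Step 5).
        dists = sorted((math.dist(vp, mirrored[j]), j)
                       for j in range(len(vectors)) if j != i)
        d_min, j_min = dists[0]
        d_sec = dists[1][0] if len(dists) > 1 else float("inf")
        if d_min < ratio * d_sec:      # assumed form of condition (1.2)
            pairs.append((i, j_min))   # Step 6: (p_i, p_j) is a symmetric pair
    return pairs
```

Traversing every corner (Step 7) is simply the outer loop; a corner with no clearly best mirror match yields no pair.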
Step 8: denoting each symmetric corner pair by ⟨p_i, p_j⟩, compute the x-coordinate of each pair's midpoint, build the statistical histogram of these midpoint x-coordinates, take the peak of the histogram as the initial position x_vehicle of the candidate vehicle centerline, and compute the variance σ² of the histogram;
Step 9: judge whether each symmetric corner pair belongs to a single vehicle; if the x-coordinate of the pair's midpoint satisfies formula (1.3), retain the pair, otherwise delete it;
Step 10: compute the mean μ_vehicle of the midpoint positions of all symmetric corner pairs retained in Step 9; μ_vehicle is the centerline position of the candidate vehicle;
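Steps 8 through 10 can be sketched as follows. Formula (1.3) is not reproduced in this copy of the patent, so the retention rule below (keep pairs whose midpoint lies within one standard deviation of the histogram peak) is an illustrative assumption; 1-pixel histogram bins are likewise assumed.

```python
from collections import Counter
import math

def candidate_centerline(pair_midpoints_x):
    """pair_midpoints_x: midpoint x-coordinates of the symmetric corner pairs."""
    hist = Counter(pair_midpoints_x)           # Step 8: 1-pixel-wide histogram bins
    x_vehicle, _ = hist.most_common(1)[0]      # histogram peak = initial centerline
    n = len(pair_midpoints_x)
    mean = sum(pair_midpoints_x) / n
    sigma = math.sqrt(sum((x - mean) ** 2 for x in pair_midpoints_x) / n)
    # Step 9: retain pairs near the peak (assumed stand-in for formula (1.3)).
    kept = [x for x in pair_midpoints_x if abs(x - x_vehicle) <= sigma]
    mu_vehicle = sum(kept) / len(kept)         # Step 10: refined centerline
    return x_vehicle, mu_vehicle, kept
```

Outlier midpoints from pairs on different vehicles fall outside the window and do not pull the final centerline μ_vehicle off the dominant vehicle.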
Step 11: select several frames of the current video, manually select the shadow region under the vehicle, model it using the shadow's geometric, brightness, and color information, and train the mean μ_shadow and variance σ_shadow of the shadow samples, together with the pixel counts l and h of the shadow region in the x and y directions;
Step 12: using the Gaussian mixture model of formula (1.4), test the pixels of the suspected vehicle region on both sides of the centerline, where the suspected vehicle region is x ∈ [μ_vehicle − l, μ_vehicle + l]; here p_i denotes a tested pixel and T_shadow is the mean of G_shadow(p_i) over the shadow sample set. A tested pixel satisfying formula (1.5) is judged a shadow point. If the shadow points are contiguous and satisfy |N_x − l| < 0.2·l and |N_y − h| < 0.1·h, where N_x and N_y are the numbers of contiguous shadow points in the x and y directions respectively, then the centerline corresponding to that shadow region is judged to be the centerline of a vehicle; otherwise it is not, and detection ends;
G_shadow(p_i) > T_shadow  (1.5)
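Steps 11 and 12 can be sketched as follows. A single Gaussian likelihood stands in for the patent's Gaussian mixture model (formula (1.4) is not reproduced in this copy); T_shadow is taken as the mean of G_shadow over the training samples, as the description states. The sample intensities are synthetic.

```python
import math

def g_shadow(p, mu, sigma):
    """Shadow likelihood of pixel value p under a single-Gaussian stand-in model."""
    return math.exp(-((p - mu) ** 2) / (2 * sigma ** 2))

def is_vehicle_shadow(run_x, run_y, l, h):
    """Contiguity check of Step 12: |N_x - l| < 0.2*l and |N_y - h| < 0.1*h."""
    return abs(run_x - l) < 0.2 * l and abs(run_y - h) < 0.1 * h

# Step 11: train the model from hand-labelled under-vehicle shadow pixels.
samples = [52, 55, 49, 51, 53]                       # synthetic shadow intensities
mu = sum(samples) / len(samples)
sigma = math.sqrt(sum((s - mu) ** 2 for s in samples) / len(samples))
# T_shadow = mean of G_shadow over the shadow sample set.
t_shadow = sum(g_shadow(s, mu, sigma) for s in samples) / len(samples)
```

A tested pixel passes condition (1.5) when g_shadow(p, mu, sigma) > t_shadow, and a centerline is confirmed only when the run lengths of contiguous shadow points match the trained extents l and h.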
Step 13: as shown in Fig. 3, suppose the vehicle target region is expressed as R = (l_R, r_R, u_R, b_R), where l_R and r_R are the left and right boundary values of the target region centered on μ_vehicle, and u_R and b_R are its upper and lower boundary values; l_R, r_R, u_R, and b_R are determined as follows:
Computing b_R: count the number of shadow pixels in each row of the under-vehicle shadow region and take the maximum as the width of b_R; bounded by this width, use the vertical Sobel operator to extract the image edges in the ROI region above the under-vehicle shadow, and take the centerline between the vehicle's rear bumper and the under-vehicle shadow region as the position of b_R;
Computing r_R and l_R: the values of r_R and l_R are determined from the vehicle centerline position and the width of b_R, as shown in formula (1.6);
l_R = μ_vehicle − width(b_R)/2,  r_R = μ_vehicle + width(b_R)/2  (1.6)
Computing u_R: as shown in Fig. 3, suppose the vehicle height is h, with height and width satisfying the proportional relation h = γ(r_R − l_R), where γ is trained from a set of vehicle data; use the vertical Sobel operator to extract the horizontal edges of the image within [l_R, r_R] (as shown in Fig. 3b) and compute the horizontal projection of the grayscale image; letting P(y) denote the horizontal projection histogram, the maximum of P(y) within y ∈ [b_R + 0.5h, b_R + 1.5h] gives the value of u_R;
As shown in Fig. 4, with the above methods, b_R, l_R, r_R, and u_R are obtained in turn, thereby determining the target vehicle.
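Step 13 can be sketched as follows: given the centerline μ_vehicle, the measured shadow width width(b_R), the row b_R, and a horizontal-projection histogram P, the side boundaries follow formula (1.6) and the top boundary is the projection maximum in the search band. The value of the trained aspect ratio γ used here is illustrative, not taken from the patent.

```python
def vehicle_region(mu_vehicle, width_br, b_r, proj, gamma=0.8):
    """Return (l_R, r_R, u_R, b_R) for one candidate vehicle.

    proj: horizontal-projection histogram P(y), indexed by row y."""
    l_r = mu_vehicle - width_br / 2.0          # formula (1.6)
    r_r = mu_vehicle + width_br / 2.0
    h = gamma * (r_r - l_r)                    # assumed height/width relation
    lo, hi = int(b_r + 0.5 * h), int(b_r + 1.5 * h)
    # u_R: row with maximal projection within y in [b_R + 0.5h, b_R + 1.5h].
    u_r = max(range(lo, min(hi + 1, len(proj))), key=lambda y: proj[y])
    return l_r, r_r, u_r, b_r
```

The search band [b_R + 0.5h, b_R + 1.5h] follows the patent's image coordinates literally, so rows are indexed in the same direction as P(y).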
Claims (6)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610338718.0A CN106023270A (en) | 2016-05-19 | 2016-05-19 | Video vehicle detection method based on locally symmetric features |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610338718.0A CN106023270A (en) | 2016-05-19 | 2016-05-19 | Video vehicle detection method based on locally symmetric features |
Publications (1)
Publication Number | Publication Date |
---|---|
CN106023270A true CN106023270A (en) | 2016-10-12 |
Family
ID=57096134
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201610338718.0A Pending CN106023270A (en) | 2016-05-19 | 2016-05-19 | Video vehicle detection method based on locally symmetric features |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN106023270A (en) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108304750A (en) * | 2017-01-13 | 2018-07-20 | 比亚迪股份有限公司 | Front vehicles recognition methods, device and vehicle |
CN108319910A (en) * | 2018-01-30 | 2018-07-24 | 海信集团有限公司 | A kind of vehicle identification method, device and terminal |
CN108846395A (en) * | 2018-04-13 | 2018-11-20 | 西藏民族大学 | The vehicle checking method merged based on vehicle Local Symmetries and shadow character |
Non-Patent Citations (4)
Title |
---|
- B. F. Momin et al., "Vehicle Detection in Video Surveillance System using Symmetrical SURF", IEEE International Conference on Electrical, Computer and Communication Technologies * |
- Lu Shengnan et al., "Tunnel traffic background extraction algorithm based on block-matching confidence", Video Engineering * |
- Lu Shengnan et al., "Design and implementation of a traffic flow detection algorithm based on virtual detection windows", Computer Knowledge and Technology * |
- Zhang Yajuan, "Research on image and video stitching technology based on SURF features", China Masters' Theses Full-text Database * |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108304750A (en) * | 2017-01-13 | 2018-07-20 | 比亚迪股份有限公司 | Front vehicles recognition methods, device and vehicle |
CN108304750B (en) * | 2017-01-13 | 2020-11-06 | 比亚迪股份有限公司 | Front vehicle identification method and device and vehicle |
CN108319910A (en) * | 2018-01-30 | 2018-07-24 | 海信集团有限公司 | A kind of vehicle identification method, device and terminal |
CN108319910B (en) * | 2018-01-30 | 2021-11-16 | 海信集团有限公司 | Vehicle identification method and device and terminal |
CN108846395A (en) * | 2018-04-13 | 2018-11-20 | 西藏民族大学 | The vehicle checking method merged based on vehicle Local Symmetries and shadow character |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication |
Application publication date: 20161012 |