CN101226592A - Part-Based Object Tracking Method - Google Patents
- Publication number
- CN101226592A (application CNA200810033733XA)
- Authority
- CN
- China
- Prior art keywords
- tracking
- parts
- point
- tracking unit
- component
- Prior art date
- 2008-02-21
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Landscapes
- Image Analysis (AREA)
Abstract
A part-based object tracking method in the technical field of image processing, comprising the following steps: first, for a tracking object that appears, the tracking parts of the object are located using FAST (Features from Accelerated Segment Test) corner detection; the parts are then described by gray-level histograms and tracked in subsequent frames with a Kalman filter, whose parameters are corrected in each frame from the measured observations, and the parts are updated; finally, the tracked object is marked. The present invention tracks the target accurately at a very high tracking speed, and by using small windows centered on object corner points as tracking parts it effectively overcomes problems such as occlusion.
Description
Technical Field
The present invention relates to a method in the technical field of image processing, and specifically to a part-based object tracking method.
Background Art
Object tracking is an important research area in computer vision, with wide applications in video surveillance, image compression, 3D reconstruction, and other fields. A moving object inevitably encounters occlusion during its motion; occlusion is the main factor affecting tracking stability, and overcoming it is one of the difficulties of tracking algorithms.
A search of the prior art literature found that Robust Fragments-based Tracking using the Integral Histogram, published by Amit Adam et al. in Computer Vision and Pattern Recognition (January 2006: 798-805), proposes dividing the tracking window into several sub-windows and compensating the histograms of the occluded sub-windows. Its shortcomings are that matching the two windows by exhaustive search reduces the real-time performance of tracking, and that the result cannot reflect the motion angle of the tracked object.
Summary of the Invention
The purpose of the present invention is to overcome the above deficiencies of the prior art by proposing a part-based tracking method that takes multiple parts of the target as tracking objects, describes each part with a kernel-based gray-level histogram, predicts the part parameters with a Kalman filter, and then corrects them using the kernel-based gray-level histogram to complete the tracking. The method not only effectively overcomes the occlusion problem, but also handles relative motion inside the object and non-rigid deformation, with good real-time performance and a good tracking effect.
The present invention is realized through the following technical solution, comprising the following steps:
First, for a tracking object that appears, the tracking parts of the object are located using FAST (Features from Accelerated Segment Test) corner detection;
Then the tracking parts are described by gray-level histograms and tracked in subsequent frames by Kalman filtering; in each frame the Kalman filter parameters are corrected from the measured observations, and the parts are updated;
Finally, the tracked object is marked.
Locating the tracking parts of the tracking object means: detecting corner points in the moving object with the FAST corner detection method and taking a rectangular window centered on each corner point as a tracking part. The FAST corner detection method is specifically: examine the circle of pixels around the candidate point c and find the longest arc on it; if the gray values of all points on the arc exceed the gray value of c by more than t gray levels (t is set by the user as needed), or are all below it by more than t gray levels, then c is judged to be a corner point.
Describing the tracking parts by gray-level histograms means: a kernel-based gray-level histogram is used to describe each tracking part. The histogram is an n-dimensional vector, with n set by the user as needed. The color space is first mapped from 256 levels down to n levels; each point is then weighted with the Biweight kernel function so that pixels farther from the center receive smaller weights, which reduces the influence of background noise and improves the stability of the histogram.
Tracking the parts by Kalman filtering is specifically: the Kalman filter comprises a prediction part and a correction part. The prediction part uses the prediction equations, making a prediction from the state value and prediction error of the previous moment to obtain the position of each tracking part at the current moment. Since the prediction contains a certain error, the correction part uses the correction equations, correcting the prediction with the observation obtained at the current moment.
Correcting the parameters of the Kalman filter from the measured observations means: a spiral search is carried out around the part position predicted by the prediction equations to find a point such that the Euclidean distance between the histogram of the window around that point and the histogram of the original part is below a set threshold. The position of that point serves as the observation for the current frame; the correction equations use this observation to correct the current prediction of the Kalman filter, yielding the corrected state estimate and noise variance estimate.
Updating the parts is specifically: if no qualifying point can be found around the predicted point, a decision method marks the object and retains the parts lying inside the marked region, that is, parts that have disappeared due to object rotation or occlusion but are still within the tracked object, while eliminating parts that lie outside the tracked object because tracking has failed. The gray-level histogram is updated by a weighted sum of the original part's histogram and the histogram at the part's current position.
Marking the tracked object means: after the parts of the tracked object have been determined, the object is marked with the minimum-area rectangle that contains the center points of all tracking parts, so that the parts are unified into one object. Specifically: the Graham method is used to determine the convex hull of the object's corner-point set; once the hull is obtained, the extension of one hull edge is taken as one side of a rectangle, the rectangle containing all points of the set is found, and the edges are rotated through in turn to find the rectangular region of minimum area.
Compared with the prior art, the present invention has the following beneficial effects: it tracks the target accurately and at a very high tracking speed, and by using small windows centered on object corner points as tracking parts it effectively overcomes problems such as occlusion. The object description uses a kernel-based gray-level histogram to accommodate changes in object size and rotation and to reduce the influence of background noise. Predicting the part parameters in the next frame with a Kalman filter, and correcting the filter parameters with the histogram, guarantees the accuracy of part tracking. Finally, the Graham method determines the rectangle containing the part centers, so that the rectangle's inclination and size match the shape of the target well. Moreover, the invention offers good real-time performance and a stable tracking effect, and effectively overcomes problems such as occlusion and non-uniform speed within the object.
Brief Description of the Drawings
Fig. 1 is a schematic diagram of the FAST corner detection method in an embodiment of the present invention;
Fig. 2 is a flow chart of the Kalman filtering in the present invention;
Fig. 3 is a schematic diagram of determining the minimum rectangle area in the present invention;
Fig. 4 shows the initial positions of the parts and their positions in subsequent frames in the experiments of an embodiment of the present invention;
Fig. 5 shows the results of experiment 1 in an embodiment of the present invention;
Fig. 6 shows the results of experiment 2 in an embodiment of the present invention;
Fig. 7 shows the results of the occlusion experiment in an embodiment of the present invention;
Fig. 8 compares the effect of the present invention with the mean-shift tracking method.
Detailed Description of the Embodiments
An embodiment of the present invention is described in detail below with reference to the drawings. The embodiment is implemented on the premise of the technical solution of the present invention, and detailed implementation modes and specific operating procedures are given, but the protection scope of the present invention is not limited to the following embodiment.
This embodiment tracks a moving object in a video sequence and comprises the following specific steps:
1. Part location: The tracking parts in this embodiment are rectangular regions of the tracking object centered on corner points. Corner detection uses the FAST corner detection method; FAST is a simple and intuitive corner detection method that can meet the real-time requirement of the tracking system. A corner detected by FAST is a point with enough surrounding points lying in a region different from it. Specifically: examine the circle of pixels around the candidate point c and find the longest arc on it; if the gray values of all points on the arc exceed the gray value of c by more than t gray levels (t = 15), or are all below it by more than t gray levels, c is judged to be a corner point.
As shown in Fig. 1, if two opposite points on the circle (for example pixels 1 and 9) both have gray values close to that of c, then clearly not all 12 points need to be examined to decide whether c is a corner; the above method can therefore be optimized by first testing pixels 1 and 9 and then pixels 5 and 13. Experimental statistics show that after this optimization, only 3.8 surrounding points need to be examined on average to decide whether a candidate point in the image is a corner.
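A minimal sketch of this segment test follows. It assumes the standard 16-pixel Bresenham circle of radius 3 and a required arc of 12 pixels, since the patent text fixes only the threshold t = 15; the function name is illustrative.

```python
# The 16 pixel offsets of a Bresenham circle of radius 3 around the
# candidate point (standard FAST layout; an assumption, since the
# patent text does not fix the circle size).
CIRCLE = [(0, 3), (1, 3), (2, 2), (3, 1), (3, 0), (3, -1), (2, -2), (1, -3),
          (0, -3), (-1, -3), (-2, -2), (-3, -1), (-3, 0), (-3, 1), (-2, 2), (-1, 3)]

def is_fast_corner(img, x, y, t=15, arc_len=12):
    """Segment test: (x, y) is a corner if some contiguous arc of
    `arc_len` circle pixels is entirely brighter than I(c) + t or
    entirely darker than I(c) - t. img is a 2-D array of gray values;
    the caller must keep (x, y) at least 3 pixels from the border."""
    c = int(img[y, x])
    # Quick rejection from the optimization described above: test the
    # opposite pair 1/9 first, then 5/13 (indices 0, 8, 4, 12 here).
    # A 12-pixel arc needs at least 3 of these 4 pixels to be extreme.
    quick = [int(img[y + CIRCLE[i][1], x + CIRCLE[i][0]]) for i in (0, 8, 4, 12)]
    if sum(v > c + t for v in quick) < 3 and sum(v < c - t for v in quick) < 3:
        return False
    # Full test: walk the circle twice so wrap-around arcs are counted.
    vals = [int(img[y + dy, x + dx]) for dx, dy in CIRCLE]
    for sign in (+1, -1):
        run = 0
        for v in vals + vals:
            run = run + 1 if sign * (v - c) > t else 0
            if run >= arc_len:
                return True
    return False
```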
2. Describing parts with kernel-based gray-level histograms: A kernel-based gray-level histogram vector q = {q_u}, u = 1, ..., n, describes each tracking part. Let {x_i*}, i = 1...m, denote the normalized coordinates, relative to c, of all pixels inside the circle of radius r around the part's central corner point c. The distribution of the target colors is reduced from the original 256 levels to n levels (n = 16 in this embodiment), and the function b(x) maps the pixel at position x to its dimension in the n-dimensional color space. The gray-level histogram is then computed as

q_u = C · Σ_{i=1..m} k(‖x_i*‖) · δ[b(x_i*) − u],  u = 1, ..., n,

where C is a normalization constant and δ is the Kronecker delta; b(x) = I(x)/n, with I(x) the gray value of the pixel at position x; and k(x) is the kernel function, in this embodiment the Biweight kernel, whose standard expression is

k(x) = (15/16)·(1 − x²)²  for |x| ≤ 1,  and k(x) = 0 otherwise.
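A minimal sketch of this kernel-weighted histogram, assuming a circular part window and illustrative function names; the Biweight profile is the standard form given above:

```python
import numpy as np

def biweight(d):
    """Standard Biweight kernel k(d) = (15/16)(1 - d^2)^2 for |d| <= 1."""
    return (15.0 / 16.0) * (1.0 - d * d) ** 2 if abs(d) <= 1.0 else 0.0

def kernel_histogram(img, cx, cy, r, n_bins=16):
    """Kernel-based gray histogram of the disc of radius r around the
    part center (cx, cy); img is a 2-D array of 8-bit gray values."""
    h, w = img.shape
    q = np.zeros(n_bins)
    for y in range(max(0, cy - r), min(h, cy + r + 1)):
        for x in range(max(0, cx - r), min(w, cx + r + 1)):
            d = np.hypot(x - cx, y - cy) / r      # normalized distance to center
            if d > 1.0:
                continue
            u = int(img[y, x]) * n_bins // 256    # b(x): 256 gray levels -> n bins
            q[u] += biweight(d)                   # far pixels get smaller weight
    return q / q.sum()                            # normalize so sum(q) = 1
```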
3. As shown in Fig. 2, the tracking parts are tracked with a Kalman filter. The Kalman filtering step is divided into a prediction part and a correction part:
Prediction: the prediction equations use the state value and prediction error of the previous moment to obtain the predicted value at the current moment:
x_k′ = A·x_{k−1} + B·u_{k−1}   (B = 0 in this embodiment)
P_k′ = A·P_{k−1}·Aᵀ + Q
where x_k′ is the predicted state variable at time k, x_{k−1} is the state variable at time k−1, A is the state transition matrix, P_k′ is the prediction-error covariance matrix at time k, P_{k−1} is the corrected error covariance matrix at time k−1, and Q is the motion (process) noise covariance matrix.
Correction: after the predicted value and its associated error are obtained, the correction equations correct them using the observation at the current moment:
K_k = P_k′·Hᵀ·(H·P_k′·Hᵀ + R)⁻¹
x_k = x_k′ + K_k·(z_k − H·x_k′)
P_k = (I − K_k·H)·P_k′
where K_k is the Kalman gain matrix, H is the measurement matrix, R is the measurement noise covariance matrix, z_k is the observation variable, P_k is the corrected error covariance matrix at time k, and x_k is the state variable at time k.
The correction equations correct the current prediction using the observation z_k, yielding the corrected state estimate and noise variance estimate.
In this embodiment, the observation is set to the position of the tracking part's center, and the state variables are set to the position, velocity, and acceleration of the part center. Assuming the part moves with uniform acceleration, the parameters are set as follows:
z = [s_x, s_y]ᵀ,  x = [s_x, v_x, a_x, s_y, v_y, a_y]ᵀ  (z is the observation; the subscripts x and y denote the horizontal and vertical coordinates)
The state transition matrix for the uniform-acceleration model, with the inter-frame interval Δt as the time unit, is

A = [ 1  Δt  Δt²/2  0  0   0
      0  1   Δt     0  0   0
      0  0   1      0  0   0
      0  0   0      1  Δt  Δt²/2
      0  0   0      0  1   Δt
      0  0   0      0  0   1 ]

The measurement matrix, which extracts the position components from the state, is

H = [ 1  0  0  0  0  0
      0  0  0  1  0  0 ]
The initial values of the parameter P_k, the motion noise covariance matrix Q, and the measurement noise covariance matrix R must be determined from prior knowledge. In this embodiment the observation is assumed to fall with equal probability anywhere within the part window, so the initial value of the measurement noise covariance matrix is set to the variance of a uniform distribution over the window:

R = diag(N_x²/12, N_y²/12)

where N_x and N_y are the numbers of pixels along the horizontal and vertical axes of the part window, respectively.
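Under these settings, the prediction and correction equations can be sketched as follows, taking Δt = 1 frame as the time unit (an assumption) and using illustrative function names:

```python
import numpy as np

dt = 1.0  # frame interval used as the time unit (assumption)

# Constant-acceleration block for one axis: s' = s + v*dt + a*dt^2/2, etc.
blk = np.array([[1.0, dt, dt * dt / 2],
                [0.0, 1.0, dt],
                [0.0, 0.0, 1.0]])
A = np.kron(np.eye(2), blk)           # state transition for [sx,vx,ax,sy,vy,ay]
H = np.zeros((2, 6))
H[0, 0] = H[1, 3] = 1.0               # measurement matrix: z = [sx, sy]^T

def predict(x_prev, P_prev, Q):
    """Prediction equations: x_k' = A x_{k-1} (B = 0), P_k' = A P_{k-1} A^T + Q."""
    return A @ x_prev, A @ P_prev @ A.T + Q

def correct(x_pred, P_pred, z, R):
    """Correction equations with the observation z_k found by the search."""
    K = P_pred @ H.T @ np.linalg.inv(H @ P_pred @ H.T + R)   # Kalman gain K_k
    x = x_pred + K @ (z - H @ x_pred)
    P = (np.eye(6) - K @ H) @ P_pred
    return x, P

def initial_R(Nx, Ny):
    """Initial R under the equal-probability-over-the-window assumption:
    the variance of a uniform distribution over N pixels is N^2 / 12."""
    return np.diag([Nx * Nx / 12.0, Ny * Ny / 12.0])
```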
4. Measurement of the observation and part update: A spiral search is carried out in a small neighborhood centered on the predicted displacement coordinates (s_x′, s_y′) to find the coordinates of the observation quickly. The first coordinate found by the spiral search that satisfies ρ[p_k, q_{k−1}] < l is taken as z_k, where ρ[p_k, q_{k−1}] is the distance between the histogram at the current coordinate and the object color model, and l is the distance threshold.
If no qualifying point can be found around the predicted point, there are three main causes: (1) tracking of this part has failed, i.e., the part is already outside the tracked object; (2) the part has disappeared due to object rotation or the like, but the tracking window is still within the tracked object; (3) the part has disappeared due to occlusion. To keep tracking stable and retain a sufficient number of parts, the parts in the first case must be eliminated and those in the latter two cases retained. The decision method adopted in this embodiment is: after the object has been marked, parts lying inside the marked region are retained, and the others are eliminated.
In addition, since the tracked object changes continuously and the images contain illumination changes, noise, occlusion, and similar disturbances, the color model must be updated. This embodiment updates the color model with a weighted histogram of the current frame's observation: q_k = (1 − α)·q_{k−1} + α·p_k, where q_k and q_{k−1} are the kernel-based gray-level histograms of frames k and k−1 respectively, and α is the update factor, set to 0.02 in this embodiment.
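A sketch of the measurement and model-update steps, reusing kernel_histogram from the earlier sketch; the spiral generator, the search radius, and the function names are assumptions:

```python
import numpy as np

def spiral_offsets(max_r):
    """Integer offsets (dx, dy) in an outward square spiral from (0, 0)."""
    x = y = 0
    dx, dy = 1, 0
    yield (0, 0)
    step = 1
    while step <= 2 * max_r + 1:
        for _ in range(2):                    # two legs per step length
            for _ in range(step):
                x, y = x + dx, y + dy
                if max(abs(x), abs(y)) <= max_r:
                    yield (x, y)
            dx, dy = -dy, dx                  # turn 90 degrees
        step += 1

def measure(img, q_model, sx_pred, sy_pred, r, search_r, l):
    """Spiral-search around the predicted center (sx', sy') and return the
    first point whose part histogram is within distance l of the model,
    as the observation z_k; (None, None) means the part is occluded or lost."""
    for dx, dy in spiral_offsets(search_r):
        cx, cy = sx_pred + dx, sy_pred + dy
        p = kernel_histogram(img, cx, cy, r)    # from the earlier sketch
        if np.linalg.norm(p - q_model) < l:     # rho[p_k, q_{k-1}] < l
            return np.array([cx, cy]), p
    return None, None

def update_model(q_prev, p_obs, alpha=0.02):
    """Weighted model update q_k = (1 - alpha) q_{k-1} + alpha p_k."""
    return (1.0 - alpha) * q_prev + alpha * p_obs
```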
5. Marking the object: After the parts of the tracked object have been determined, the object is marked with the minimum-area rectangle containing the center points of all tracking parts, so that the parts are unified into one object. Since object edges are rich in corner points, this method can identify the size, position, and inclination of the object fairly accurately.
To find the minimum-area rectangle containing the center points of all tracking parts, the convex hull of the corner-point set must first be determined. This embodiment uses the Graham method to determine the convex hull of the object's corner-point set. The convex hull of a planar point set is defined as the smallest convex set containing the point set, i.e., a convex polygon whose vertices are points of the set such that, for any edge of the polygon, all points of the set not on that edge lie on the same side of it.
The Graham method (Zhou Peide, Computational Geometry: Algorithm Design and Analysis, 2nd ed., Tsinghua University Press, 2005) is specifically as follows (a code sketch is given after the steps):
① Let p₁ be the point with the smallest coordinates in the set. Connect p₁ to every other point of the set with line segments, compute the angles these segments make with the horizontal, and sort by angle and, secondarily, by distance to p₁, giving a sequence p₁, p₂, ..., p_n. Point p₁ is a vertex of the hull boundary, and p₂ and p_n must be as well;
② Decide which points are hull vertices, deleting those among p₃, p₄, ..., p_{n−1} that are not on the hull, as follows:
[1] Set k = 4;
[2] Set j = 2;
[3] If p₁ and p_k lie on opposite sides of segment p_{k−j+1}p_{k−j}, delete p_{k−j+1}, decrement the indices of the subsequent vertices by 1, and set k = k − 1, j = j − 1, n = n − 1; otherwise p_{k−j+1} is provisionally a hull vertex and is recorded;
[4] Set j = j + 1 and execute [3] until j = k − 2;
[5] Set k = k + 1 and execute [2] until k = n.
Here a vector cross product decides whether two points lie on the same side of a segment or on opposite sides: for two points P, Q and a segment AB, compute the cross products PA×PB and QA×QB; if they have the same sign, the two points are on the same side of the segment, otherwise on opposite sides.
③ Output the hull vertices in order.
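A compact sketch of the Graham scan: it replaces the explicit index bookkeeping of steps [1]-[5] with the equivalent cross-product turn test, so it illustrates the idea rather than transcribing the steps literally.

```python
import math

def cross(o, a, b):
    """z-component of (a - o) x (b - o); positive means a left (CCW) turn."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def graham_hull(points):
    """Convex hull of 2-D points: sort by angle around the lowest point p1,
    then prune every vertex that makes a non-left turn."""
    p1 = min(points, key=lambda p: (p[1], p[0]))        # lowest, leftmost point
    rest = sorted((p for p in points if p != p1),
                  key=lambda p: (math.atan2(p[1] - p1[1], p[0] - p1[0]),
                                 (p[0] - p1[0]) ** 2 + (p[1] - p1[1]) ** 2))
    hull = [p1]
    for p in rest:
        while len(hull) >= 2 and cross(hull[-2], hull[-1], p) <= 0:
            hull.pop()                                  # not a hull vertex: delete
        hull.append(p)
    return hull                                         # CCW order, as in step 3
```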
Once the convex hull of the point set is obtained, the extension of one hull edge is taken as one side of a rectangle, the rectangle containing all points of the set is found, and the edges are rotated through in turn to find the rectangular region of minimum area.
As shown in Fig. 3, suppose the hull vertices determined are ABCDE. Taking u as the unit vector along AB and n as its unit normal, the area of the rectangle with AB as one side is

S_AB = (max_i⟨p_i, u⟩ − min_i⟨p_i, u⟩) · (max_i⟨p_i, n⟩ − min_i⟨p_i, n⟩)

where p_i ranges over the hull vertices.
Similarly, the areas of the rectangles with BC, CD, DE, and EA as sides can be computed, and the rectangle of minimum area among them is selected to mark the tracked object.
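The edge-rotation step can be sketched with the projection formula above; the function name and the returned representation are illustrative.

```python
import math

def min_area_rect(hull):
    """Try each hull edge as one rectangle side and keep the smallest:
    for unit edge direction u and normal n, the area is the product of
    the projection extents of the hull points on u and on n."""
    best_area, best_edge = None, None
    m = len(hull)
    for i in range(m):
        (x1, y1), (x2, y2) = hull[i], hull[(i + 1) % m]
        length = math.hypot(x2 - x1, y2 - y1)
        if length == 0.0:
            continue
        ux, uy = (x2 - x1) / length, (y2 - y1) / length   # along the edge
        nx, ny = -uy, ux                                  # edge normal
        pu = [ux * px + uy * py for px, py in hull]       # projections on u
        pn = [nx * px + ny * py for px, py in hull]       # projections on n
        area = (max(pu) - min(pu)) * (max(pn) - min(pn))
        if best_area is None or area < best_area:
            best_area, best_edge = area, i
    return best_area, best_edge   # smallest area and the edge that gives it
```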
Figs. 5, 6, and 7 show the results of the three experiments of this embodiment. As the figures show, even when the video sequences contain a large amount of noise from illumination changes and camera shake, the tracking method of this embodiment still describes the position, size, and rotation angle of the object accurately, and the tracking results remain stable while the target approaches from a distance and turns.
In Fig. 4, panel (a) shows the initial positions of the parts in experiments 1 and 2, and panel (b) shows their positions in subsequent frames;
Fig. 5 shows the results of experiment 1; panels (a), (b), (c), and (d) are the images of frames 185, 259, 574, and 704 respectively. In frame 574 (panel (c)) a person steps out of the car, and by frame 704 (panel (d)) that person has moved away from the tracked vehicle, demonstrating that the method of this embodiment can cope with occlusion and recover from it.
Fig. 6 shows the results of experiment 2; panels (a), (b), (c), and (d) are the images of frames 38, 91, 118, and 171 respectively. The background trees sway in the wind, and the body parts of the tracked person move at different speeds, which shows that the method can cope with complex background changes and non-uniform speed within the tracked object.
Fig. 7 shows the occlusion experiment, in which an occluding region with pixel value 250 was artificially inserted; panels (a) and (b) are the images of frames 230 and 260 respectively. The results show that the method of this embodiment has a stable resistance to occlusion.
As shown in Table 1, comparing the speed of the method of this embodiment with that of the particle filter tracking method, the speed of this method is greatly improved, which provides a real-time guarantee for further recognition and analysis of the target.
Table 1: speed comparison between the method of this embodiment and the particle filter tracking method.
As shown in Fig. 8, panels (a) and (b) show the results of the method of this embodiment, and panels (c) and (d) show the results of the Mean-Shift tracking method. Compared with the mean-shift method, this method locates the target more accurately, changes the size and rotation angle of the tracking window more flexibly, and is more stable.
Claims (8)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CNA200810033733XA CN101226592A (en) | 2008-02-21 | 2008-02-21 | Part-Based Object Tracking Method |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CNA200810033733XA CN101226592A (en) | 2008-02-21 | 2008-02-21 | Part-Based Object Tracking Method |
Publications (1)
Publication Number | Publication Date |
---|---|
CN101226592A true CN101226592A (en) | 2008-07-23 |
Family
ID=39858577
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CNA200810033733XA Pending CN101226592A (en) | 2008-02-21 | 2008-02-21 | Part-Based Object Tracking Method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN101226592A (en) |
Cited By (16)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101727570B (en) * | 2008-10-23 | 2012-05-23 | 华为技术有限公司 | Tracking method, detection tracking processing equipment and monitoring system |
CN102930245A (en) * | 2012-09-24 | 2013-02-13 | 深圳市捷顺科技实业股份有限公司 | Method and system for tracking vehicles |
CN102930245B (en) * | 2012-09-24 | 2015-03-18 | 深圳市捷顺科技实业股份有限公司 | Method and system for tracking vehicles |
CN102930557A (en) * | 2012-10-16 | 2013-02-13 | 苏州大学 | Particle filter tracking method for adaptive adjustment of tracking window size |
CN103106659A (en) * | 2013-01-28 | 2013-05-15 | 中国科学院上海微系统与信息技术研究所 | Open area target detection and tracking method based on binocular vision sparse point matching |
CN103279952A (en) * | 2013-05-17 | 2013-09-04 | 华为技术有限公司 | Target tracking method and device |
CN103279952B (en) * | 2013-05-17 | 2017-10-17 | 华为技术有限公司 | A kind of method for tracking target and device |
CN105898196A (en) * | 2014-11-24 | 2016-08-24 | 北京高尔智达科技有限公司 | Multi-spectral photoelectric automatic recognition and tracking system |
CN107851318B (en) * | 2015-08-18 | 2021-08-17 | 高通股份有限公司 | System and method for object tracking |
CN107851318A (en) * | 2015-08-18 | 2018-03-27 | 高通股份有限公司 | System and method for Object tracking |
CN106683120A (en) * | 2016-12-28 | 2017-05-17 | 杭州趣维科技有限公司 | Image processing method being able to track and cover dynamic sticker |
CN106683120B (en) * | 2016-12-28 | 2019-12-13 | 杭州趣维科技有限公司 | image processing method for tracking and covering dynamic sticker |
CN106952295A (en) * | 2017-03-17 | 2017-07-14 | 公安部第三研究所 | A Vision-Based Implementation Method of Rotor UAV Tracking Moving Target |
CN110633731A (en) * | 2019-08-13 | 2019-12-31 | 杭州电子科技大学 | A single-stage anchor-free object detection method based on interlaced perceptual convolution |
CN110633731B (en) * | 2019-08-13 | 2022-02-25 | 杭州电子科技大学 | Single-stage anchor-frame-free target detection method based on staggered sensing convolution |
CN115037869A (en) * | 2021-03-05 | 2022-09-09 | Oppo广东移动通信有限公司 | Automatic focusing method and device, electronic equipment and computer readable storage medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN101226592A (en) | Part-Based Object Tracking Method | |
CN108122247B (en) | A kind of video object detection method based on saliency and feature prior model | |
CN108010067A (en) | A kind of visual target tracking method based on combination determination strategy | |
CN101354254B (en) | Method for tracking aircraft course | |
CN104200495A (en) | Multi-target tracking method in video surveillance | |
CN104992451A (en) | Improved target tracking method | |
CN108647694A (en) | Correlation filtering method for tracking target based on context-aware and automated response | |
CN107944354B (en) | Vehicle detection method based on deep learning | |
CN107844739B (en) | Robust target tracking method based on self-adaptive simultaneous sparse representation | |
Xiao et al. | Traffic sign detection based on histograms of oriented gradients and boolean convolutional neural networks | |
CN112613565B (en) | Anti-occlusion tracking method based on multi-feature fusion and adaptive learning rate updating | |
NaNa et al. | Optimization of face tracking based on KCF and Camshift | |
Zhao et al. | APPOS: An adaptive partial occlusion segmentation method for multiple vehicles tracking | |
Dou et al. | Robust visual tracking based on generative and discriminative model collaboration | |
CN111539987A (en) | Occlusion detection system and method based on discriminant model | |
Yang et al. | A light CNN based method for hand detection and orientation estimation | |
CN107292910A (en) | Moving target detecting method under a kind of mobile camera based on pixel modeling | |
Zakaria et al. | Particle swarm optimization and support vector machine for vehicle type classification in video stream | |
CN104517300A (en) | Vision judgment tracking method based on statistical characteristic | |
CN107886060A (en) | Pedestrian's automatic detection and tracking based on video | |
CN111681266A (en) | Ship tracking method, system, device and storage medium | |
CN108985216B (en) | Pedestrian head detection method based on multivariate logistic regression feature fusion | |
CN111160190B (en) | Vehicle-mounted pedestrian detection-oriented classification auxiliary kernel correlation filtering tracking method | |
Kavitha et al. | Performance analysis towards GUI-based vehicle detection and tracking using YOLOv3 and SORT algorithm | |
Zhu et al. | Visual tracking with dynamic model update and results fusion |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
C02 | Deemed withdrawal of patent application after publication (patent law 2001) | ||
WD01 | Invention patent application deemed withdrawn after publication | | Open date: 20080723 |