CN107392936B - A target tracking method based on meanshift - Google Patents
A target tracking method based on meanshift
- Publication number
- CN107392936B CN107392936B CN201710434697.7A CN201710434697A CN107392936B CN 107392936 B CN107392936 B CN 107392936B CN 201710434697 A CN201710434697 A CN 201710434697A CN 107392936 B CN107392936 B CN 107392936B
- Authority
- CN
- China
- Prior art keywords
- target
- rectangle
- pixel
- pixels
- candidate
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Expired - Fee Related
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
- G06T7/246—Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
- G06T7/248—Analysis of motion using feature-based methods, e.g. the tracking of corners or segments involving reference images or patches
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/40—Scenes; Scene-specific elements in video content
- G06V20/41—Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
- G06V20/42—Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items of sport video content
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10016—Video; Image sequence
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Theoretical Computer Science (AREA)
- Computational Linguistics (AREA)
- Software Systems (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Image Analysis (AREA)
Abstract
The invention discloses a target tracking method based on meanshift, comprising the following steps: 1, initialize the target image and select the initial position of the rectangle A_1 containing the tracked target; 2, perform background judgment on all pixels of the target rectangle A_n; 3, compute the probability density q_u of the target rectangle A_n; 4, for the candidate target region of the moving target in frame n+1, compute the probability density p_u of the candidate region; 5, compute the weight ω_i of each pixel in the candidate region; 6, compute the new position y_new of the candidate region; 7, if ||y_0 − y_new|| < ε or the number of iterations exceeds a threshold, stop iterating; otherwise continue iterating until a candidate target position satisfies the termination condition. The meanshift-based tracking method of the invention judges whether each pixel in the target box belongs to the background; background pixels take no part in the subsequent calculations, so the true moving target is modeled more faithfully and the tracking result is improved.
Description
Technical Field
The invention relates to the technical field of target tracking, and in particular to a target tracking method based on meanshift.
Background Art
In meanshift target tracking, all pixels inside the rectangular box containing the target are usually modeled. This raises a problem: the pixels in the box do not all belong to the target; part of them are background, and this background information is mixed into the target model. In particular, when the box is chosen too large, or when the colors of the background and the target differ strongly, the target model carries considerable error. Modeling the true moving target more faithfully is therefore a key step affecting the tracking result.
Summary of the Invention
The purpose of the present invention is to provide a meanshift-based target tracking method that uses background modeling to decide whether each pixel inside the rectangular box belongs to the background; background pixels are excluded from the subsequent calculations. This solves the problem raised in the background art above.
To achieve the above object, the present invention provides the following technical solution:
A target tracking method based on meanshift, in which the target is filmed with a camera tool to obtain a video sequence of target images, characterized in that the tracking method comprises the following steps (a code sketch of the whole iteration follows the list):
Step 1: initialize the target image and select the initial position of the rectangle A_1 containing the tracked target;
Step 2: denote by A_n the target rectangle of the n-th frame image and perform background judgment on all pixels of the target rectangle; if a pixel is judged to be background, its indicator function BI_n(x) is set to 1, otherwise to 0;
Step 3: for the target rectangle A_n of the n-th frame image, compute the probability density q_u of the target rectangle using the indicator function BI_n(x);
Step 4: for the candidate target region of the moving target in frame n+1, compute the probability density p_u of the candidate region starting from the target rectangle position y_0 of frame n;
Step 5: compute the weight ω_i of each pixel in the candidate target region;
Step 6: compute the new position y_new of the candidate target region;
Step 7: if ||y_0 − y_new|| < ε or the number of iterations exceeds a threshold, stop iterating; otherwise set y_0 = y_new and return to Step 4, continuing the iteration until a candidate target position satisfies the termination condition.
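The following Python sketch makes the loop of Steps 1-7 concrete for grayscale frames. It is a minimal, non-authoritative illustration: the function names (`gray_histogram`, `track`), the square window, the 8-bit bin mapping and the omission of image-boundary handling are our own simplifying assumptions, and the Gaussian kernel profile follows the embodiment described later in this document.

```python
import numpy as np

M = 32  # number of gray-level histogram bins (the patent uses m = 32)

def gray_histogram(frame, center, half, mask=None):
    """Kernel-weighted gray histogram of the (2*half+1)^2 window at `center`.
    `mask`, if given, marks background pixels (1 = background) excluded per Step 2."""
    cy, cx = center
    ys, xs = np.mgrid[cy - half:cy + half + 1, cx - half:cx + half + 1]
    patch = frame[cy - half:cy + half + 1, cx - half:cx + half + 1]
    bins = (patch.astype(np.int64) * M) // 256            # bin index b(x_i), 8-bit gray
    d2 = ((ys - cy) ** 2 + (xs - cx) ** 2) / float(half ** 2)
    w = np.exp(-d2 / 2.0)                                 # Gaussian profile k(x) = exp(-x/2)
    if mask is not None:
        w = w * (mask == 0)                               # drop pixels judged background
    hist = np.bincount(bins.ravel(), weights=w.ravel(), minlength=M)
    return hist / max(hist.sum(), 1e-12)                  # normalization constant C_q

def track(frame, y0, q, half, eps=0.5, max_iter=20):
    """Steps 4-7: iterate the mean shift update from y0 until convergence."""
    cy, cx = y0
    for _ in range(max_iter):
        p = gray_histogram(frame, (cy, cx), half)         # Step 4: candidate density p_u
        ys, xs = np.mgrid[cy - half:cy + half + 1, cx - half:cx + half + 1]
        patch = frame[cy - half:cy + half + 1, cx - half:cx + half + 1]
        bins = (patch.astype(np.int64) * M) // 256
        d2 = ((ys - cy) ** 2 + (xs - cx) ** 2) / float(half ** 2)
        w = np.sqrt(q[bins] / np.maximum(p[bins], 1e-12)) # Step 5: weights w_i
        g = 0.5 * np.exp(-d2 / 2.0)                       # g(x) = -k'(x) for the Gaussian
        wg = w * g
        ny = float((wg * ys).sum() / wg.sum())            # Step 6: new position y_new
        nx = float((wg * xs).sum() / wg.sum())
        if np.hypot(ny - cy, nx - cx) < eps:              # Step 7: ||y_0 - y_new|| < eps
            cy, cx = int(round(ny)), int(round(nx))
            break
        cy, cx = int(round(ny)), int(round(nx))
    return cy, cx
```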
As a further improvement, Step 2, in which the target rectangle A_n of the n-th frame image is recorded and all of its pixels undergo background judgment (indicator function BI_n(x) = 1 for background, 0 otherwise), specifically comprises the following steps:
Step 21: judge the pixels in the edge portion of the target rectangle A_n. Let the size of A_n be w × d; the four edges of A_n are the pixel regions to be judged, each of width h. Edge positions 1, 2, 3 and 4 are arranged clockwise, with edge position 1 at the top of the target rectangle A_n.
The pixels in the central portion of the target rectangle A_n are assumed to belong to the target by default, i.e., the indicator function BI_n(x) of the central pixels is set directly to 0.
Step 22: starting from the vertex A at the upper-left corner of the target rectangle A_n, select a 3 × 3 rectangle a with A as its upper-left vertex. This patch contains 9 pixels; fit their gray-level distribution with a Gaussian model, computing the mean μ and variance σ²:

$$\mu = \frac{1}{9}\sum_{x \in a} \mathrm{gray}(x), \qquad \sigma^2 = \frac{1}{9}\sum_{x \in a} \bigl(\mathrm{gray}(x) - \mu\bigr)^2$$

where gray(x) denotes the gray value of pixel x.
For every pixel inside edge position 1 and edge position 4, decide whether it belongs to the background by computing its probability under the Gaussian model:

$$f(x) = \frac{1}{\sqrt{2\pi}\,\sigma}\exp\!\left(-\frac{(\mathrm{gray}(x)-\mu)^2}{2\sigma^2}\right)$$

where f(x) is the probability that pixel x belongs to the Gaussian model. The indicator function BI_n(x) can then be computed by thresholding this probability at a preset value T:

$$BI_n(x) = \begin{cases} 1, & f(x) \ge T \\ 0, & f(x) < T \end{cases}$$

With the above method every pixel inside edge position 1 and edge position 4 can be judged, yielding the corresponding indicator function BI_n(x).
Step 23: in the same way as Step 22, starting from the vertex B at the upper-right corner of the target rectangle A_n, judge all pixels inside edge position 1 and edge position 2; if a pixel of edge position 1 was already judged to be background in Step 22, its judgment is skipped in this step.
Step 24: in the same way as Step 22, starting from the vertex D at the lower-right corner of the target rectangle A_n, judge the pixels of edge position 2 and edge position 3; if a pixel of edge position 2 was already judged to be background in Step 23, its judgment is skipped in this step.
Step 25: in the same way as Step 22, starting from the vertex C at the lower-left corner of the target rectangle A_n, judge the pixels of edge position 3 and edge position 4; if a pixel of edge position 3 was already judged to be background in Step 24, its judgment is skipped in this step.
Thus the indicator function BI_n(x) is obtained for every pixel of the target rectangle A_n; a hedged code sketch of this procedure follows.
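Below is a minimal sketch of Steps 21-25, assuming grayscale input. The function name `background_indicator` and the probability threshold value `t` are our own assumptions (the patent does not reproduce its threshold), not part of the claimed method.

```python
import numpy as np

def background_indicator(patch, h=10, t=0.05):
    """Steps 21-25: mark edge pixels of the target rectangle as background (1)
    when they fit a Gaussian model estimated from a 3x3 corner patch.

    patch : grayscale target rectangle A_n as a 2-D array (rows x cols)
    h     : edge width (Embodiment 2 uses h = 10)
    t     : probability threshold for the background decision (assumed value)
    """
    bi = np.zeros(patch.shape, dtype=np.uint8)   # central pixels default to target (0)

    def gauss_prob(values, mu, var):
        var = max(float(var), 1e-12)
        return np.exp(-(values - mu) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)

    # corner patch -> the two edge strips judged from that corner
    # (edges clockwise: 1 = top, 2 = right, 3 = bottom, 4 = left)
    corners = [
        (patch[:3, :3],   [np.s_[:h, :],  np.s_[:, :h]]),    # A: edges 1 and 4
        (patch[:3, -3:],  [np.s_[:h, :],  np.s_[:, -h:]]),   # B: edges 1 and 2
        (patch[-3:, -3:], [np.s_[:, -h:], np.s_[-h:, :]]),   # D: edges 2 and 3
        (patch[-3:, :3],  [np.s_[-h:, :], np.s_[:, :h]]),    # C: edges 3 and 4
    ]
    for corner, strips in corners:
        mu, var = corner.mean(), corner.var()    # fit Gaussian to the 9 corner pixels
        for s in strips:
            f = gauss_prob(patch[s].astype(float), mu, var)
            # OR-accumulation keeps already-marked pixels marked, mirroring the
            # "skip pixels already judged background" rule of Steps 23-25
            bi[s] |= (f >= t).astype(np.uint8)
    return bi
```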
As a further improvement, Step 3, computing the probability density q_u of the target rectangle A_n of the n-th frame image using the indicator function BI_n(x), specifically comprises the following:
Gray-level information is chosen as the feature space of the Mean Shift tracker. The gray-level histogram of the feature space is computed, dividing the feature space into m = 32 bins, each bin being one feature value of the feature space. Let x_0 be the center coordinate of the target template region, and let {x_i}, i = 1, …, n, be all pixel positions in the target template region that do not belong to the background, i.e., whose indicator function BI_n(x) equals 0. The probability density function of the target template over the gray features u = 1, …, m is then:

$$q_u = C_q \sum_{i=1}^{n} K\!\left(\left\|\frac{x_i - x_0}{h}\right\|^2\right)\delta\bigl(b(x_i) - u\bigr)$$

where C_q is the normalization constant of the target template, K(·) is the kernel function, and b(x_i) denotes the histogram bin index of the gray value of pixel x_i. A short usage sketch combining Steps 2 and 3 follows.
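Tying the two sketches above together, the template density of Step 3 can be obtained by passing the indicator as a mask; `frame`, the center and the half-size below are assumed inputs, not values taken from the patent.

```python
# assumed inputs: a grayscale frame (numpy array) and the rectangle A_n of frame n
cy, cx, half = 120, 160, 20                           # assumed center and half-size
patch = frame[cy - half:cy + half + 1, cx - half:cx + half + 1]
bi = background_indicator(patch, h=10)                # Step 2: indicator BI_n(x)
q = gray_histogram(frame, (cy, cx), half, mask=bi)    # Step 3: q_u over m = 32 bins
```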
As a further improvement, Step 4, computing the probability density p_u of the candidate target region of the moving target in frame n+1 from the target rectangle position y_0 of frame n, specifically comprises the following:
The calculation starts from the position of the target template in the previous frame, i.e., frame n. Let y_0 be the center of the candidate target region, and let {y_i}, i = 1, …, n, denote the pixels of this region corresponding to the pixel positions {x_i}, i = 1, …, n, of the previous frame. Computed in the same way as the probability density function of the target template, the probability density function of the candidate region is:

$$p_u(y_0) = C_p \sum_{i=1}^{n} K\!\left(\left\|\frac{y_i - y_0}{h}\right\|^2\right)\delta\bigl(b(y_i) - u\bigr)$$

where C_p is the corresponding normalization constant.
As a further improvement, Step 5 computes the weight ω_i of each pixel in the candidate target region as:

$$\omega_i = \sum_{u=1}^{m} \sqrt{\frac{q_u}{p_u(y_0)}}\;\delta\bigl(b(y_i) - u\bigr)$$
As a further improvement, Step 6, computing the new position y_new of the candidate target region, specifically comprises the following:
The similarity between the histograms of the target template and of the candidate target region is measured by the Bhattacharyya coefficient. Following the principle of maximizing the similarity of the two histograms, the search window is moved along the direction of maximal density increase toward the true position of the target,
where q_u is the target template and p_u the candidate target template, and the Bhattacharyya coefficient is defined as:

$$\rho(y) \equiv \rho\bigl[p(y), q\bigr] = \sum_{u=1}^{m} \sqrt{p_u(y)\, q_u}$$

Expanding ρ(y) in a Taylor series and taking the derivative yields the update formula for the center position of the candidate target region:

$$y_{new} = \frac{\displaystyle\sum_{i=1}^{n} y_i\, \omega_i\, g\!\left(\left\|\frac{y_0 - y_i}{h}\right\|^2\right)}{\displaystyle\sum_{i=1}^{n} \omega_i\, g\!\left(\left\|\frac{y_0 - y_i}{h}\right\|^2\right)}$$

where g(x) = -k'(x), k is the profile of the kernel K, and ω_i is the weight of each pixel. A small helper for the Bhattacharyya coefficient is sketched below.
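For reference, a minimal sketch of the Bhattacharyya coefficient of two normalized histograms (the function name is ours):

```python
import numpy as np

def bhattacharyya(p, q):
    """Bhattacharyya coefficient rho = sum_u sqrt(p_u * q_u) of two
    normalized histograms; 1.0 means identical distributions."""
    return float(np.sqrt(np.asarray(p, dtype=float) * np.asarray(q, dtype=float)).sum())
```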
Beneficial effects of the present invention: ordinary Meanshift does not consider whether the pixels in the target box belong to the background and uses all pixels of the box directly in the subsequent calculation. The meanshift-based tracking method of the present invention judges for each pixel in the target box whether it belongs to the background; background pixels take no part in the subsequent calculations, so the true moving target is modeled more faithfully and the tracking result is improved.
The present invention is described in further detail below with reference to the accompanying drawings and specific embodiments.
Brief Description of the Drawings
Figure 1 is a flow chart of the meanshift-based target tracking method;
Figure 2 is a schematic diagram of the target rectangle A_n in Step 2 of Embodiment 2.
Detailed Description
Embodiment 1: referring to Figure 1, the meanshift-based target tracking method provided by this embodiment films the target with a camera tool to obtain the video sequence of target images {P_n(x, y) | n = 1, 2, …, N}. The tracking method comprises the following steps:
Step 1: initialize the target image and select the initial position of the rectangle A_1 containing the tracked target;
Step 2: denote by A_n the target rectangle of the n-th frame image and perform background judgment on all pixels of the target rectangle; if a pixel is judged to be background, its indicator function BI_n(x) is set to 1, otherwise to 0;
Step 3: for the target rectangle A_n of the n-th frame image, compute the probability density q_u of the target rectangle using the indicator function BI_n(x);
Step 4: for the candidate target region of the moving target in frame n+1, compute the probability density p_u of the candidate region starting from the target rectangle position y_0 of frame n;
Step 5: compute the weight ω_i of each pixel in the candidate target region;
Step 6: compute the new position y_new of the candidate target region;
Step 7: if ||y_0 − y_new|| < ε or the number of iterations exceeds a threshold, stop iterating; otherwise set y_0 = y_new and return to Step 4, continuing the iteration until a candidate target position satisfies the termination condition.
Embodiment 2: referring to Figures 1 and 2, the meanshift-based target tracking method provided by this embodiment takes two adjacent frames, the n-th (n ≥ 1) and the (n+1)-th, as an example to describe in detail how target tracking is performed with the meanshift idea, i.e., how the position of the tracking rectangle in frame n+1 is computed from the position of the target rectangle in frame n.
First the target is filmed with a camera tool to obtain the video sequence of target images {P_n(x, y) | n = 1, 2, …, N}. The tracking method comprises the following steps:
Step 1: initialize the target image and manually select the initial position of the rectangle A_1 containing the tracked target;
Step 2: denote by A_n the target rectangle of the n-th frame image and perform background judgment on all pixels of the target rectangle; if a pixel is judged to be background, its indicator function BI_n(x) is set to 1, otherwise to 0. This specifically comprises the following steps:
Step 21: judge the pixels in the edge portion of the target rectangle A_n. As shown in Figure 2, let the size of A_n be w × d; the four edges of A_n are the pixel regions to be judged, each of width h, with h = 10. Edge positions 1, 2, 3 and 4 are arranged clockwise, with edge position 1 at the top of the target rectangle A_n.
The pixels in the central portion of the target rectangle A_n are assumed to belong to the target by default, i.e., the indicator function BI_n(x) of the central pixels is set directly to 0.
Step 22: starting from the vertex A at the upper-left corner of the target rectangle A_n, select a 3 × 3 rectangle a with A as its upper-left vertex. This patch contains 9 pixels; fit their gray-level distribution with a Gaussian model, computing the mean μ and variance σ²:

$$\mu = \frac{1}{9}\sum_{x \in a} \mathrm{gray}(x), \qquad \sigma^2 = \frac{1}{9}\sum_{x \in a} \bigl(\mathrm{gray}(x) - \mu\bigr)^2$$

where gray(x) denotes the gray value of pixel x.
For every pixel inside edge position 1 and edge position 4, decide whether it belongs to the background by computing its probability under the Gaussian model:

$$f(x) = \frac{1}{\sqrt{2\pi}\,\sigma}\exp\!\left(-\frac{(\mathrm{gray}(x)-\mu)^2}{2\sigma^2}\right)$$

where f(x) is the probability that pixel x belongs to the Gaussian model. The indicator function BI_n(x) can then be computed by thresholding this probability at a preset value T:

$$BI_n(x) = \begin{cases} 1, & f(x) \ge T \\ 0, & f(x) < T \end{cases}$$

With the above method every pixel inside edge position 1 and edge position 4 can be judged, yielding the corresponding indicator function BI_n(x).
Step 23: in the same way as Step 22, starting from the vertex B at the upper-right corner of the target rectangle A_n, judge all pixels inside edge position 1 and edge position 2; if a pixel of edge position 1 was already judged to be background in Step 22, its judgment is skipped in this step.
Step 24: in the same way as Step 22, starting from the vertex D at the lower-right corner of the target rectangle A_n, judge the pixels of edge position 2 and edge position 3; if a pixel of edge position 2 was already judged to be background in Step 23, its judgment is skipped in this step.
Step 25: in the same way as Step 22, starting from the vertex C at the lower-left corner of the target rectangle A_n, judge the pixels of edge position 3 and edge position 4; if a pixel of edge position 3 was already judged to be background in Step 24, its judgment is skipped in this step.
Thus the indicator function BI_n(x) is obtained for every pixel of the target rectangle A_n.
Step 3: for the target rectangle A_n of the n-th frame image, compute the probability density q_u of the target rectangle using the indicator function BI_n(x). Specifically:
Gray-level information is chosen as the feature space of the Mean Shift tracker. The gray-level histogram of the feature space is computed, dividing the feature space into m = 32 bins, each bin being one feature value of the feature space. Let x_0 be the center coordinate of the target template region, and let {x_i}, i = 1, …, n, be all pixel positions in the target template region that do not belong to the background, i.e., whose indicator function BI_n(x) equals 0. The probability density function of the target template over the gray features u = 1, …, m is then:

$$q_u = C_q \sum_{i=1}^{n} K\!\left(\left\|\frac{x_i - x_0}{h}\right\|^2\right)\delta\bigl(b(x_i) - u\bigr)$$

where C_q is the normalization constant of the target template and K(·) is the kernel function.
The kernel function K(·) accounts for occlusion and background interference: it assigns larger weights to pixels near the target center and smaller weights to pixels far from the center of the target template, thereby distinguishing the contributions that pixels at different positions of the target region make to the estimated target probability density function. In this embodiment a Gaussian kernel is chosen for K(·), written here in profile form with the normalization absorbed into C_q:

$$K(x) = \exp\!\left(-\frac{x}{2}\right)$$

where h is the kernel bandwidth appearing in the argument ||(x_i − x_0)/h||², and δ(x) is the Kronecker delta function, used to judge whether the gray value of pixel x_i in the target region belongs to the color index value of the u-th bin: it equals 1 if so and 0 otherwise.
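A minimal sketch of the kernel profile, its derivative-based weight g(x) = -k'(x), and the Kronecker delta over gray bins; the helper names are ours, and the profile convention k(x) = exp(-x/2) matches the histogram sketches above:

```python
import numpy as np

def k_profile(x):
    """Gaussian kernel profile k(x) = exp(-x/2); x is the squared,
    bandwidth-normalized distance ||(x_i - x_0)/h||**2."""
    return np.exp(-x / 2.0)

def g_profile(x):
    """g(x) = -k'(x): the weight appearing in the y_new update formula."""
    return 0.5 * np.exp(-x / 2.0)

def delta_bin(gray_value, u, m=32):
    """Kronecker delta of b(x) - u: 1 if the gray value falls in bin u of m."""
    return 1 if (int(gray_value) * m) // 256 == u else 0
```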
Step 4: for the candidate target region of the moving target in frame n+1, compute the probability density p_u of the candidate region starting from the target rectangle position y_0 of frame n. Specifically:
The calculation starts from the position of the target template in the previous frame, i.e., frame n. Let y_0 be the center of the candidate target region, and let {y_i}, i = 1, …, n, denote the pixels of this region corresponding to the pixel positions {x_i}, i = 1, …, n, of the previous frame. Computed in the same way as the probability density function of the target template, the probability density function of the candidate region is:

$$p_u(y_0) = C_p \sum_{i=1}^{n} K\!\left(\left\|\frac{y_i - y_0}{h}\right\|^2\right)\delta\bigl(b(y_i) - u\bigr)$$

Step 5: compute the weight ω_i of each pixel in the candidate target region:

$$\omega_i = \sum_{u=1}^{m} \sqrt{\frac{q_u}{p_u(y_0)}}\;\delta\bigl(b(y_i) - u\bigr)$$

Step 6: compute the new position y_new of the candidate target region. Specifically:
The similarity between the histograms of the target template and of the candidate target region is measured by the Bhattacharyya coefficient. Following the principle of maximizing the similarity of the two histograms, the search window is moved along the direction of maximal density increase toward the true position of the target,
where q_u is the target template and p_u the candidate target template, and the Bhattacharyya coefficient is defined as:

$$\rho(y) \equiv \rho\bigl[p(y), q\bigr] = \sum_{u=1}^{m} \sqrt{p_u(y)\, q_u}$$

Expanding ρ(y) in a Taylor series and taking the derivative yields the update formula for the center position of the candidate target region:

$$y_{new} = \frac{\displaystyle\sum_{i=1}^{n} y_i\, \omega_i\, g\!\left(\left\|\frac{y_0 - y_i}{h}\right\|^2\right)}{\displaystyle\sum_{i=1}^{n} \omega_i\, g\!\left(\left\|\frac{y_0 - y_i}{h}\right\|^2\right)}$$

where g(x) = -k'(x) and ω_i is the weight of each pixel.
Step 7: if ||y_0 − y_new|| < ε or the number of iterations exceeds a threshold, stop iterating; otherwise set y_0 = y_new and return to Step 4, continuing the iteration until a candidate target position satisfies the termination condition. A usage sketch tying the pieces together over a frame sequence follows.
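Putting the sketches above together over a frame sequence; `frames`, the initial center `y0` and the half-size `half` are assumed inputs, and re-estimating the template each frame is one plausible reading of applying Steps 2-3 to every A_n:

```python
# assumed: frames is a list of grayscale numpy arrays, y0 = (row, col) is the
# manually selected center of A_1, half is the half-size of the rectangle
y = y0
patch = frames[0][y[0] - half:y[0] + half + 1, y[1] - half:y[1] + half + 1]
q = gray_histogram(frames[0], y, half, mask=background_indicator(patch))
for frame in frames[1:]:
    y = track(frame, y, q, half)      # Steps 4-7: locate the target in frame n+1
    patch = frame[y[0] - half:y[0] + half + 1, y[1] - half:y[1] + half + 1]
    q = gray_histogram(frame, y, half, mask=background_indicator(patch))  # Steps 2-3
    print("target center:", y)
```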
Compared with ordinary Meanshift, which directly uses all pixels in the target box in the subsequent calculation, the meanshift-based tracking method of the present invention judges for each pixel in the target box whether it belongs to the background; background pixels take no part in the subsequent calculations, so the true moving target is modeled more faithfully and the tracking result is improved.
The present invention is not limited to the above embodiments; other meanshift-based target tracking methods obtained by the same or similar means as the above embodiments of the present invention all fall within the protection scope of the present invention.
Claims (5)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710434697.7A CN107392936B (en) | 2017-06-09 | 2017-06-09 | A target tracking method based on meanshift |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710434697.7A CN107392936B (en) | 2017-06-09 | 2017-06-09 | A target tracking method based on meanshift |
Publications (2)
Publication Number | Publication Date |
---|---|
CN107392936A CN107392936A (en) | 2017-11-24 |
CN107392936B true CN107392936B (en) | 2020-06-05 |
Family
ID=60332350
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710434697.7A Expired - Fee Related CN107392936B (en) | 2017-06-09 | 2017-06-09 | A target tracking method based on meanshift |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107392936B (en) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110458862A (en) * | 2019-05-22 | 2019-11-15 | 西安邮电大学 | A Tracking Method for Moving Objects in Occluded Background |
CN111275740B (en) * | 2020-01-19 | 2021-10-22 | 武汉大学 | A satellite video target tracking method based on high-resolution twin network |
- 2017-06-09 CN CN201710434697.7A patent/CN107392936B/en not_active Expired - Fee Related
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101727570A (en) * | 2008-10-23 | 2010-06-09 | 华为技术有限公司 | Tracking method, track detection processing unit and monitor system |
CN101783015A (en) * | 2009-01-19 | 2010-07-21 | 北京中星微电子有限公司 | Equipment and method for tracking video |
CN102270346A (en) * | 2011-07-27 | 2011-12-07 | 宁波大学 | Method for extracting target object from interactive video |
CN103366163A (en) * | 2013-07-15 | 2013-10-23 | 北京丰华联合科技有限公司 | Human face detection system and method based on incremental learning |
CN104077779A (en) * | 2014-07-04 | 2014-10-01 | 中国航天科技集团公司第五研究院第五一三研究所 | Moving object statistical method with Gaussian background model and mean value shift tracking combined |
Non-Patent Citations (3)
Title |
---|
Research on Moving Target Detection and Tracking Based on Panoramic Vision; Zhen Jinglei; China Master's Theses Full-text Database, Information Science and Technology; 2010-06-15 (No. 6); pp. I138-491, Sections 5.3-5.4 *
Mean Shift Target Tracking Algorithm Based on a Discriminative Sequence Table; Jiang Liangwei et al.; Journal of Huazhong University of Science and Technology (Natural Science Edition); 2011-11-30; Vol. 39 (Suppl. 2); pp. 204-219 *
Research on Moving Target Trajectory Tracking Technology Based on Machine Vision; Guan Chunmiao; China Master's Theses Full-text Database, Information Science and Technology; 2016-02-15 (No. 2); pp. I138-1552 *
Also Published As
Publication number | Publication date |
---|---|
CN107392936A (en) | 2017-11-24 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11763485B1 (en) | Deep learning based robot target recognition and motion detection method, storage medium and apparatus | |
CN112926410B (en) | Target tracking method, device, storage medium and intelligent video system | |
CN109784333B (en) | Three-dimensional target detection method and system based on point cloud weighted channel characteristics | |
US8335348B2 (en) | Visual object tracking with scale and orientation adaptation | |
CN106845621B (en) | Dense crowd counting method and system based on deep convolutional neural network | |
CN106570867B (en) | Movable contour model image fast segmentation method based on gray scale morphology energy method | |
CN105139420B (en) | A kind of video target tracking method based on particle filter and perception Hash | |
CN110276785B (en) | Anti-shielding infrared target tracking method | |
CN108038435B (en) | Feature extraction and target tracking method based on convolutional neural network | |
CN107330357A (en) | Vision SLAM closed loop detection methods based on deep neural network | |
CN108682039B (en) | Binocular stereo vision measuring method | |
CN102982545B (en) | A kind of image depth estimation method | |
CN106570874B (en) | Image marking method combining image local constraint and object global constraint | |
CN110084836A (en) | Method for tracking target based on the response fusion of depth convolution Dividing Characteristics | |
CN104200485A (en) | Video-monitoring-oriented human body tracking method | |
CN105279769B (en) | A kind of level particle filter tracking method for combining multiple features | |
WO2013012091A1 (en) | Information processing apparatus, object tracking method, and program storage medium | |
JP2021060868A5 (en) | ||
CN107452015A (en) | A kind of Target Tracking System with re-detection mechanism | |
CN103345760B (en) | A kind of automatic generation method of medical image object shapes template mark point | |
CN112614154B (en) | Target tracking track acquisition method and device and computer equipment | |
CN108428249A (en) | A kind of initial position and orientation estimation method based on optical flow tracking and double geometrical models | |
CN103279961A (en) | Video segmentation method based on depth recovery and motion estimation | |
CN106296732B (en) | A moving target tracking method in complex background | |
CN114898434B (en) | Training method, device, equipment and storage medium for mask recognition model |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| GR01 | Patent grant | |
2020-07-06 | TR01 | Transfer of patent right | Patentee after: ANHUI GUANGZHEN PHOTOELECTRIC TECHNOLOGY Co.,Ltd., 230000 west side of Xianghe North Road, Feidong Economic Development Zone, Feidong County, Hefei City, Anhui Province. Patentee before: GUANGDONG LITE ARRAY Co.,Ltd., 523000 Guangdong province Dongguan Yinxing Industrial Zone Qingxi Town. |
| CF01 | Termination of patent right due to non-payment of annual fee | Granted publication date: 20200605 |