CN103106666A - Moving object detection method based on sparsity and smoothness - Google Patents

Info

Publication number
CN103106666A
Authority
CN
China
Prior art keywords
moving target
smoothness
foreground
regression model
sparsity
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN2013100298035A
Other languages
Chinese (zh)
Other versions
CN103106666B (en)
Inventor
宋利
薛耿剑
孙军
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Jiao Tong University
Original Assignee
Shanghai Jiao Tong University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Jiao Tong University filed Critical Shanghai Jiao Tong University
Priority to CN201310029803.5A priority Critical patent/CN103106666B/en
Publication of CN103106666A publication Critical patent/CN103106666A/en
Application granted granted Critical
Publication of CN103106666B publication Critical patent/CN103106666B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Image Analysis (AREA)

Abstract

The invention discloses a moving object detection method based on sparsity and smoothness in the technical field of video image processing. The method designs a regression model suited to moving object detection and, when using this model to estimate the moving object, imposes sparsity and smoothness constraints on the moving-object term to obtain the final detection result. The detection results of the invention under the complex conditions of a dynamic background are accurate and reliable.

Description

Moving Object Detection Method Based on Sparsity and Smoothness

Technical Field

The present invention relates to a method in the technical field of video image processing, and in particular to a moving object detection method based on sparsity and smoothness.

Background Art

The research and application of moving object detection methods is an active branch of computer vision and intelligent video analysis, and plays an important role in practical applications such as video surveillance, automatic control, and security inspection. Accurate and reliable moving object detection results are the basis for higher-level information processing such as object tracking, object recognition, and behavior analysis.

Current moving object detection methods achieve fairly stable and reliable results in ordinary environments, but their performance in complex scenes is often unsatisfactory. Moving object detection against a dynamic background, one of the difficult cases of object detection in complex scenes, has received wide attention for many years, and devising a moving object detection method suited to dynamic backgrounds is therefore of great significance.

Existing moving object detection methods fall mainly into three categories: optical flow methods, inter-frame difference methods, and background subtraction methods.

Optical flow methods separate moving objects by computing the motion vectors of pixels; they are computationally expensive and complex, and are currently applied mainly in moving-camera settings. Inter-frame difference methods detect moving objects from the intensity changes of corresponding pixels between adjacent frames; although simple, they usually extract only the outline of the object and are sensitive to noise, so their practicality is limited.

Background subtraction is currently the most commonly used moving object detection method. Its basic idea is to build a description of the background by learning from video frames and then to compare each newly acquired video image against the background model: when a pixel in the new frame does not fit the current background description it is judged to be a foreground point, otherwise a background point, and the moving object is detected by this point-by-point computation. Representative background subtraction methods include the Gaussian mixture background model proposed by C. Stauffer and W. E. L. Grimson in "Adaptive background mixture models for real-time tracking" (Proc. Conf. Computer Vision and Pattern Recognition, 1999), which assumes that pixel values follow Gaussian distributions and models each pixel as a weighted combination of several adaptive Gaussians, thereby building a Gaussian mixture background model for moving object detection; the kernel density estimation method proposed by A. Elgammal, R. Duraiswami, D. Harwood and L. S. Davis in "Background and foreground modeling using non-parametric kernel density estimation for visual surveillance" (Proc. IEEE, 2002), which makes no assumption about the distribution of pixel values but instead gathers pixel statistics over time to estimate the parameters of a kernel function and builds the background model from these estimates; and the sparse-theory-based foreground detection method proposed by Mert Dikmen and Thomas S. Huang in "Robust Estimation of Foreground in Surveillance Videos by Sparse Error Estimation" (19th International Conference on Pattern Recognition, 2008). The latter treats background and foreground detection as a signal separation problem: the background signal varies slowly over time, while the foreground signal differs from the background signal and is sparse. By reformulating the problem in this way, existing sparse theory can be used to estimate the sparse signal, i.e. the foreground object.
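For orientation only (not part of this disclosure), the Gaussian mixture background model cited above is available as a standard routine in OpenCV; a minimal Python sketch of that style of background subtraction, assuming an OpenCV installation and a hypothetical video path, is:

```python
import cv2

# Minimal illustration of the mixture-of-Gaussians background subtraction
# approach of Stauffer and Grimson, using OpenCV's standard MOG2 routine.
cap = cv2.VideoCapture("test_sequence.avi")       # hypothetical input path
subtractor = cv2.createBackgroundSubtractorMOG2(history=200, varThreshold=16)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    fg_mask = subtractor.apply(frame)              # per-pixel foreground mask
cap.release()
```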

The above background subtraction methods are often unsatisfactory in dynamic-background scenes, mistakenly detecting some background points as foreground points. A moving object detection method that works in a dynamic background environment is therefore needed.

The Chinese invention patent with application number CN201110223892.8 provides an object detection method based on a linear regression model. Although that method can detect moving objects in the complex environment of a dynamic background, its results can be further improved.

Summary of the Invention

In view of the above deficiencies of the prior art, the present invention provides a moving object detection method based on sparsity and smoothness whose detection results are accurate and reliable in the complex environment of a dynamic scene.

The present invention is achieved through the following technical solution: a regression model suited to object detection is first designed, and when this model is used to estimate the moving object, sparsity and smoothness constraints are imposed on the moving object according to its characteristics, yielding the final detection result.

The regression model suited to object detection can be expressed in the following form:

y = Xw + t + n    (1)

where y = (y_1, y_2, …, y_m)^T is the dependent variable; X = (x_1, x_2, …, x_m)^T is the independent variable, with x_1, x_2, …, x_m all p-dimensional vectors, p the number of elements in each vector, and m the number of observations with m > p; w = (w_1, w_2, …, w_p)^T are the coefficients of the regression model; t = (t_1, t_2, …, t_m)^T is the part that differs from the independent variables; and n = (n_1, n_2, …, n_m)^T is the random error term. Following the assumption of the classical linear regression model on the random error term, n_1, n_2, …, n_m all follow a Gaussian distribution with mean 0 and variance σ², denoted N(0, σ²).

Using this model for moving object detection means, corresponding to formula (1): take the current frame as the dependent variable y, several history frames as the independent variable X, the foreground part as the part t that differs from the independent variables, and the motion of the background itself as the noise n. In the spirit of the linear regression model, the regression coefficients must be estimated from the data of the history frames and the current frame; to estimate them more accurately, two properties the foreground part usually has, sparsity and smoothness, are exploited, and a new objective function is defined to obtain the regression coefficients w and thereby estimate the part t that differs from the independent variables.
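As an illustration of this correspondence (a minimal numpy sketch, not taken from the original disclosure; array layouts and the helper name are assumptions), the data can be arranged as follows:

```python
import numpy as np

def build_regression_data(history_frames, current_frame):
    # Columns of X are vectorized history frames (p = number of history
    # frames); y is the vectorized current frame, so m is the number of
    # pixels and m > p as required by model (1).
    X = np.stack([f.ravel().astype(float) for f in history_frames], axis=1)
    y = current_frame.ravel().astype(float)
    return X, y
```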

The new objective function assumes that the foreground part is sparse and smooth, and is expressed as:

(ŵ, t̂) = arg min_{w,t} ||y − Xw − t||_2^2 + λ_1||t||_0 + λ_2||t′||_0    (2)

where ||·||_0 denotes the 0-norm of a vector; t′ = (t_2 − t_1, t_3 − t_2, …, t_m − t_{m−1})^T is a new vector formed by the differences of adjacent elements of the foreground signal, used to describe its smoothness; ŵ and t̂ denote the estimates of the model coefficients and of the part that differs from the independent variables, respectively; and λ_1 and λ_2 are tuning coefficients, λ_1 controlling the sparsity of the foreground signal and λ_2 controlling its smoothness. Since formula (2) contains two unknowns at once, and in the practical object detection problem the main concern is the part t̂ that differs from the independent variables, a simplified treatment is applied.
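For concreteness, a small sketch of evaluating objective (2) for given w and t (an illustrative helper under assumed vectorized grayscale data, not part of the original disclosure) could read:

```python
import numpy as np

def objective_eq2(y, X, w, t, lam1, lam2):
    # Squared residual plus 0-norm (count of non-zeros) penalties on the
    # foreground vector t and on its adjacent differences t'.
    residual = y - X @ w - t
    t_diff = np.diff(t)                      # t' = (t2 - t1, ..., tm - tm-1)
    return (residual @ residual
            + lam1 * np.count_nonzero(t)
            + lam2 * np.count_nonzero(t_diff))
```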

Preferably, a simplified treatment of the above method is as follows: first compute the partial derivative of formula (2) with respect to the variable w and set the resulting expression to zero; this gives the value of w at which the objective function (2) attains its minimum, that is:

∂(||y − Xw − t||_2^2 + λ_1||t||_0 + λ_2||t′||_0) / ∂w = 0    (3)

Working out formula (3) gives the following formula:

ŵ = X^+(y − t)    (4)

where X^+ is the pseudo-inverse of X, given by X^+ = (X^T X)^(-1) X^T. Substituting formula (4) into formula (2) and approximating the 0-norm of the vectors by the 1-norm yields the following simplified objective function:

t̂ = arg min_t ||W(y − t)||_2^2 + λ_1||t||_1 + λ_2||t′||_1    (5)

where W = (I_m − XX^+) and I_m is the identity matrix of size m.
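A minimal sketch of solving the simplified objective (5), using numpy and the cvxpy modeling package as stand-ins for the sparse solution algorithms referred to below (an assumption; the original disclosure does not name a solver, and forming W densely is only practical for small y):

```python
import numpy as np
import cvxpy as cp

def estimate_foreground(X, y, lam1, lam2):
    # W = I - X X^+ removes the component of y explained by the history
    # frames; the 1-norm terms encourage a sparse and piecewise-smooth t.
    m = y.shape[0]
    W = np.eye(m) - X @ np.linalg.pinv(X)
    t = cp.Variable(m)
    objective = (cp.sum_squares(W @ (y - t))
                 + lam1 * cp.norm1(t)
                 + lam2 * cp.norm1(t[1:] - t[:-1]))   # differences of adjacent elements
    cp.Problem(cp.Minimize(objective)).solve()
    return t.value
```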

With existing sparse theory and its solution algorithms, t̂ can be estimated, giving the estimate of the part that differs from the independent variables. At the same time, considering the loss of spatial information caused by vectorizing the image, the image data is vectorized in different ways and the resulting estimates are fused.

Vectorizing the image data in different ways and fusing the estimated results means: the image is straightened into a vector both horizontally and vertically, the foreground object is estimated separately from the data under each of these two arrangements, and the two estimates are then combined by a logical OR to obtain the final result.
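A small sketch of the two straightening directions and the OR fusion (helper names and the use of numpy memory orders are assumptions made for illustration):

```python
import numpy as np

def straighten(frame, direction):
    # Horizontal straightening scans row by row ("C" order); vertical
    # straightening scans column by column ("F" order).
    return frame.ravel(order="C" if direction == "horizontal" else "F")

def fuse(mask_horizontal, mask_vertical):
    # Logical OR of the two binary foreground masks gives the final result.
    return np.logical_or(mask_horizontal, mask_vertical)
```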

Compared with the prior art, the present invention has the following beneficial effects:

The present invention provides a moving object detection method for dynamic backgrounds that can extract a relatively sparse moving object in an environment with background disturbance; the discrete background noise that accompanies the extraction is separated and removed by exploiting the local smoothness of the moving object, so that in the resulting detection the structure of the moving object is clear, complete, accurate and reliable, while the background noise is essentially suppressed. Compared with invention patent CN201110223892.8, the present invention not only exploits the sparsity of the object when estimating the moving object with the linear regression model, but also imposes a local smoothness constraint on it. The invention offers an effective solution to the technical difficulty of detecting moving objects against a dynamic background.

Brief Description of the Drawings

Other features, objects and advantages of the present invention will become more apparent from the following detailed description of non-limiting embodiments, read with reference to the accompanying drawings:

Fig. 1 is a flowchart of an embodiment of the present invention.

Fig. 2 illustrates the results of an embodiment of the present invention.

Detailed Description of the Embodiments

The present invention is described in detail below with reference to specific embodiments. The following embodiments will help those skilled in the art to further understand the present invention, but do not limit it in any form. It should be noted that those of ordinary skill in the art can make several variations and improvements without departing from the concept of the present invention, all of which fall within the scope of protection of the present invention.

As shown in Fig. 1, this embodiment provides a moving object detection method based on sparsity and smoothness. The specific implementation details are as follows; for parts not described in detail, refer to the Summary of the Invention:

(1) Construct the independent variable matrix: from the first 200 images of the current video frame sequence, extract one image every 10 frames, 20 images in total, as the training video images for the current frame; straighten each frame horizontally (or vertically) into a vector and add that vector to the i-th column of the corresponding independent variable matrix X, where 0 < i ≤ 20.

(2) Compute the pseudo-inverse X^+ of the independent variable matrix X.

(3) For a non-training frame, convert it into the vector y using the straightening method of step (1).

(4) Estimate the parameters λ_1 and λ_2 of formula (5) from the available data X and y. The specific steps are as follows:

(a) Estimate a regression coefficient ŵ with the classical linear regression model;

(b) Compute the error vector e = y − Xŵ from the obtained coefficients;

(c) Estimate an initial deviation value σ̂ from the error vector e by the median absolute deviation method, med(·) denoting the median operation;

(d) Set the parameters λ_1 and λ_2 to λ_1 = (σ̂/15)·√(2·log(m)) and λ_2 = 25·(σ̂/15)·√(2·log(m)).

(5) Estimate the part t that differs from the independent variables according to formula (5), and threshold the estimated image: pixels greater than the threshold are judged to be foreground points, otherwise background points. The threshold here is set to 25.

(6) Following steps (1)-(5) with the vectors formed by horizontal straightening and by vertical straightening respectively, obtain the estimates corresponding to the two straightening methods.

(7) Combine the above two estimates by a logical OR to obtain the final result.
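The following Python sketch strings steps (1)-(7) together (illustrative only, not part of the original disclosure; it reuses estimate_foreground from the sketch after formula (5), and the median-absolute-deviation scaling and the λ formulas of step (4) are assumed readings of the damaged originals):

```python
import numpy as np

def build_training_matrix(frames, order):
    # Step (1): every 10th frame from the first 200 (20 frames in total),
    # each straightened into one column of X ("C" = horizontal, "F" = vertical).
    training = [frames[i] for i in range(0, 200, 10)]
    return np.stack([f.ravel(order=order).astype(float) for f in training], axis=1)

def estimate_lambdas(X, y):
    # Step (4): least-squares fit, error vector, robust deviation estimate,
    # then the two regularization weights.
    w_ls = np.linalg.pinv(X) @ y                            # (a) classical regression
    e = y - X @ w_ls                                        # (b) error vector
    sigma = np.median(np.abs(e - np.median(e))) / 0.6745    # (c) assumed MAD scaling
    lam1 = sigma / 15.0 * np.sqrt(2.0 * np.log(y.size))     # (d) assumed reading
    return lam1, 25.0 * lam1

def detect_moving_object(frames, current, threshold=25.0):
    # Steps (5)-(7): estimate t under both straightening directions,
    # threshold at 25, and fuse the two binary masks by logical OR.
    masks = []
    for order in ("C", "F"):
        X = build_training_matrix(frames, order)            # steps (1)-(2)
        y = current.ravel(order=order).astype(float)        # step (3)
        lam1, lam2 = estimate_lambdas(X, y)                  # step (4)
        t_hat = estimate_foreground(X, y, lam1, lam2)        # step (5), see earlier sketch
        mask = np.abs(t_hat) > threshold                     # thresholding (abs is assumed)
        masks.append(mask.reshape(current.shape, order=order))
    return np.logical_or(masks[0], masks[1])                 # steps (6)-(7)
```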

Implementation Effect

Following the above steps, experiments were carried out on a public dynamic-background test sequence available on the Internet. The scene is a campus environment through which motor vehicles and people pass one after another; throughout the sequence the leaves shake in the wind. The method was tested on the detection of the moving objects.

As shown in Fig. 2, panel (a) is an input frame of the sequence, i.e. a training image; panels (b) and (c) are the test image of frame 1204 of the moving object and its detection result; panels (d) and (e) are the test image and detection result of frame 1385; panels (f) and (g) are the test image and detection result of frame 1668; and panels (h) and (i) are the test image and detection result of frame 1812. The figure shows that the detection results of the method of the present invention are accurate and reliable under a dynamic background, demonstrating the effectiveness and value of the invention.

To demonstrate the advance made by the present invention, the method was quantitatively compared with the traditional Gaussian mixture background model method (proposed by C. Stauffer et al., GMM for short), the kernel density estimation method (proposed by A. Elgammal et al., KDE for short), and the method of invention patent CN201110223892.8, "Object detection method based on a linear regression model" (the linear regression model method for short). The present invention uses the F_score as the metric for evaluating the detection results of the methods:

F_score = 2 × Precision × Recall / (Precision + Recall)

where the precision and recall are respectively defined as:

Precision = (number of correctly detected foreground pixels) / (total number of detected foreground pixels)

Recall = (number of correctly detected foreground pixels) / (total number of foreground pixels in the ground truth)

A higher F_score indicates a more effective method.
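A small sketch of this evaluation metric on binary masks (illustrative only; the ground-truth masks themselves are not part of this document):

```python
import numpy as np

def f_score(detected, ground_truth):
    # Precision: fraction of detected foreground pixels that are correct.
    # Recall: fraction of ground-truth foreground pixels that are detected.
    tp = float(np.logical_and(detected, ground_truth).sum())
    fp = float(np.logical_and(detected, ~ground_truth).sum())
    fn = float(np.logical_and(~detected, ground_truth).sum())
    precision = tp / (tp + fp) if tp + fp > 0 else 0.0
    recall = tp / (tp + fn) if tp + fn > 0 else 0.0
    if precision + recall == 0:
        return 0.0
    return 2.0 * precision * recall / (precision + recall)
```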

Testing the above methods on 10 frames arbitrarily selected from the test sequence, the evaluation results were compared as follows:

[Table: F_score comparison of GMM, KDE, the linear regression model method and the method of the present invention on the selected frames]

The comparison of the quantitative evaluation results shows that the present method outperforms the two traditional methods above as well as the linear regression model method in detection performance, further demonstrating the value of the method of the present invention.

Specific embodiments of the present invention have been described above. It should be understood that the present invention is not limited to the particular embodiments described, and those skilled in the art may make various variations or modifications within the scope of the claims without affecting the substance of the present invention.

Claims (2)

1. A moving object detection method based on sparsity and smoothness, characterized in that a regression model suited to moving object detection is designed and, when the model is used to estimate the moving object, sparsity and smoothness constraints are imposed on the moving-object term to obtain the final detection result;

the regression model suited to moving object detection is expressed in the following form:

y = Xw + t + n    (1)

where y = (y_1, y_2, …, y_m)^T is the dependent variable; X = (x_1, x_2, …, x_m)^T is the independent variable, with x_1, x_2, …, x_m all p-dimensional vectors, p the number of elements in each vector, and m the number of observations with m > p; w = (w_1, w_2, …, w_p)^T are the coefficients of the regression model; t = (t_1, t_2, …, t_m)^T is the part that differs from the independent variables; and n = (n_1, n_2, …, n_m)^T is the random error term, where under the assumption of the classical linear regression model on the random error term, n_1, n_2, …, n_m all follow a Gaussian distribution with mean 0 and variance σ², denoted N(0, σ²);

using the model to detect the moving object means, corresponding to formula (1): taking the current frame as the dependent variable y, several history frames as the independent variable X, the foreground part as the part t that differs from the independent variables, and the motion of the background itself as the noise n; in the spirit of the linear regression model the regression coefficients are estimated from the data of the history frames and the current frame, and to estimate them more accurately, two properties of the foreground part, sparsity and smoothness, are exploited by defining a new objective function to obtain the regression coefficients w and thereby estimate the part t that differs from the independent variables; considering the loss of spatial information caused by vectorizing the image, the image data is vectorized in different ways and the resulting estimates are fused;

the new objective function assumes that the foreground part is sparse and smooth and is expressed as:

(ŵ, t̂) = arg min_{w,t} ||y − Xw − t||_2^2 + λ_1||t||_0 + λ_2||t′||_0    (2)

where ||·||_0 denotes the 0-norm of a vector; t′ = (t_2 − t_1, t_3 − t_2, …, t_m − t_{m−1})^T is a new vector formed by the differences of adjacent elements of the foreground signal, used to describe its smoothness; ŵ and t̂ denote the estimates of the model coefficients and of the part that differs from the independent variables, respectively; and λ_1 and λ_2 are tuning coefficients, λ_1 controlling the sparsity of the foreground signal and λ_2 controlling its smoothness;

vectorizing the image data in different ways and fusing the estimated results means: the image is straightened into a vector both horizontally and vertically, the foreground object is estimated separately from the data under each of these two arrangements, and the two estimates are then combined by a logical OR to obtain the final result.
2. The moving object detection method based on sparsity and smoothness according to claim 1, characterized in that a simplified treatment of the method is: first compute the partial derivative of formula (2) with respect to the variable w and set the resulting expression to zero, so as to obtain the value of w at which the objective function (2) attains its minimum, that is:

∂(||y − Xw − t||_2^2 + λ_1||t||_0 + λ_2||t′||_0) / ∂w = 0    (3)

working out formula (3) gives the following formula:

ŵ = X^+(y − t)    (4)

where X^+ is the pseudo-inverse of X, given by X^+ = (X^T X)^(-1) X^T; then substitute formula (4) into formula (2) and approximate the 0-norm of the vectors by the 1-norm, obtaining the following simplified objective function:

t̂ = arg min_t ||W(y − t)||_2^2 + λ_1||t||_1 + λ_2||t′||_1    (5)

where W = (I_m − XX^+) and I_m is the identity matrix of size m.
CN201310029803.5A 2013-01-25 2013-01-25 Moving object detection method based on sparsity and smoothness Expired - Fee Related CN103106666B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310029803.5A CN103106666B (en) 2013-01-25 2013-01-25 Moving object detection method based on sparsity and smoothness

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201310029803.5A CN103106666B (en) 2013-01-25 2013-01-25 Moving object detection method based on sparsity and smoothness

Publications (2)

Publication Number Publication Date
CN103106666A true CN103106666A (en) 2013-05-15
CN103106666B CN103106666B (en) 2015-10-28

Family

ID=48314493

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310029803.5A Expired - Fee Related CN103106666B (en) 2013-01-25 2013-01-25 Moving object detection method based on sparsity and smoothness

Country Status (1)

Country Link
CN (1) CN103106666B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108154486A (en) * 2017-12-25 2018-06-12 电子科技大学 Remote sensing image time series cloud detection method of optic based on p norm regression models
CN113837967A (en) * 2021-09-27 2021-12-24 南京林业大学 Wild animal image denoising method based on sparse error constraint representation

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120134597A1 (en) * 2010-11-26 2012-05-31 Microsoft Corporation Reconstruction of sparse data
CN102509346A (en) * 2011-09-30 2012-06-20 北京航空航天大学 Object illumination migration method based on edge retaining

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120134597A1 (en) * 2010-11-26 2012-05-31 Microsoft Corporation Reconstruction of sparse data
CN102509346A (en) * 2011-09-30 2012-06-20 北京航空航天大学 Object illumination migration method based on edge retaining

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
LATECKI,L.J. ET AL.: "Tracking motion objects in infrared videos", 《IEEE CONFERENCE ON ADVANCED VIDEO AND SIGNAL BASED SURVEILLANCE》, 16 September 2005 (2005-09-16), pages 99 - 104, XP010881157, DOI: 10.1109/AVSS.2005.1577250 *
曲云腾 et al.: "Human moving target tracking based on Kalman prediction" (基于Kalman预测的人体运动目标跟踪), Computer Systems & Applications (《计算机系统应用》), vol. 20, no. 1, 31 December 2011 (2011-12-31), pages 137-140 *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108154486A (en) * 2017-12-25 2018-06-12 电子科技大学 Remote sensing image time series cloud detection method of optic based on p norm regression models
CN108154486B (en) * 2017-12-25 2021-07-16 电子科技大学 Cloud detection method for optical remote sensing image time series based on p-norm regression model
CN113837967A (en) * 2021-09-27 2021-12-24 南京林业大学 Wild animal image denoising method based on sparse error constraint representation
CN113837967B (en) * 2021-09-27 2023-11-17 南京林业大学 Wildlife image denoising method based on sparse error constrained representation

Also Published As

Publication number Publication date
CN103106666B (en) 2015-10-28

Similar Documents

Publication Publication Date Title
CN103971386B (en) A kind of foreground detection method under dynamic background scene
CN105069472B (en) A kind of vehicle checking method adaptive based on convolutional neural networks
Maddalena et al. The 3dSOBS+ algorithm for moving object detection
CN104537647B (en) A kind of object detection method and device
CN103729854B (en) A kind of method for detecting infrared puniness target based on tensor model
Sheng et al. Siamese denoising autoencoders for joints trajectories reconstruction and robust gait recognition
CN104408742B (en) A kind of moving target detecting method based on space time frequency spectrum Conjoint Analysis
CN110097115B (en) Video salient object detection method based on attention transfer mechanism
CN107945210B (en) Target tracking method based on deep learning and environment self-adaption
CN110211157A (en) A kind of target long time-tracking method based on correlation filtering
CN111814816A (en) A target detection method, device and storage medium thereof
CN110390308B (en) Video behavior identification method based on space-time confrontation generation network
CN107301376B (en) A Pedestrian Detection Method Based on Deep Learning Multi-layer Stimulation
CN103500345A (en) Method for learning person re-identification based on distance measure
CN110555870A (en) DCF tracking confidence evaluation and classifier updating method based on neural network
Guo et al. Partially-sparse restricted boltzmann machine for background modeling and subtraction
CN103699874A (en) Crowd abnormal behavior identification method based on SURF (Speed-Up Robust Feature) stream and LLE (Locally Linear Embedding) sparse representation
CN106127112A (en) Data Dimensionality Reduction based on DLLE model and feature understanding method
Ma et al. Scene invariant crowd counting using multi‐scales head detection in video surveillance
CN104867162A (en) Motion object detection method based on multi-component robustness PCA
CN113033356B (en) A scale-adaptive long-term correlation target tracking method
Yang et al. End-to-end background subtraction via a multi-scale spatio-temporal model
Liu et al. HPN-SOE: Infrared small target detection and identification algorithm based on heterogeneous parallel networks with similarity object enhancement
Zhou Video expression recognition method based on spatiotemporal recurrent neural network and feature fusion
CN103106666B (en) Moving object detection method based on sparsity and smoothness

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20151028

Termination date: 20220125