CN111104948A - Target tracking method based on adaptive fusion of double models - Google Patents

Target tracking method based on adaptive fusion of double models

Info

Publication number
CN111104948A
Authority
CN
China
Prior art keywords: response, target, correlation filter, adaptive fusion, color
Legal status
Pending
Application number
CN201811259843.8A
Other languages
Chinese (zh)
Inventor
戴伟聪
金龙旭
李国宁
Current Assignee
Changchun Institute of Optics Fine Mechanics and Physics of CAS
Original Assignee
Changchun Institute of Optics Fine Mechanics and Physics of CAS
Priority date
Filing date
Publication date
Application filed by Changchun Institute of Optics Fine Mechanics and Physics of CAS
Priority to CN201811259843.8A
Publication of CN111104948A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/56 Extraction of image or video features relating to colour
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/25 Fusion techniques
    • G06F18/254 Fusion techniques of classification results, e.g. of results related to same input data
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00 Indexing scheme relating to image or video recognition or understanding
    • G06V2201/07 Target detection

Abstract

An embodiment of the present invention proposes a target tracking method based on the adaptive fusion of two models. The method uses an adaptive fusion coefficient based on relative confidence so that the correlation filter response and the color classifier response are fused optimally, allowing each model to contribute its tracking strengths. This addresses a limitation of the existing Staple tracking method, in which the fusion coefficient of the correlation filter and the color classifier is a constant, so the advantages of the correlation filter tracking model and the color classifier tracking model are not fully exploited.

Description

A Target Tracking Method Based on Dual-Model Adaptive Fusion

Technical Field

The invention relates to the technical field of computer vision, and in particular to a target tracking method based on dual-model adaptive fusion.

Background

Target tracking is one of the main research directions in computer vision. It draws on digital image processing, machine learning, pattern recognition, neural networks, deep learning, and related fields, and has broad application prospects in video surveillance, intelligent robotics, and many other areas.

In recent years, detection-based tracking methods have advanced considerably, and one of the most active research directions is tracking with correlation filters. In 2014, Henriques et al. extended the single-channel grayscale features used by MOSSE and CSK to multi-channel Histogram of Oriented Gradients (HOG) features and mapped the features into a high-dimensional space with the kernel trick, yielding the KCF algorithm. KCF triggered the rapid development of correlation-filter trackers. In 2015, the SRDCF of Danelljan et al. applied spatial regularization to mitigate the boundary effect inherent to correlation filters and ranked among the top methods in the VOT2015 challenge, but its heavy computational cost limits its practicality. In 2016, Luca Bertinetto et al. proposed the Staple algorithm based on DCF, the linear-kernel version of KCF; Staple improves tracking performance by solving two ridge regression problems to combine a correlation filter with a color classifier, achieving very strong results. However, the fusion coefficient between the correlation filter and the color classifier in Staple is a constant, so Staple does not fully exploit the respective advantages of the correlation filter tracking model and the color classifier tracking model.

Therefore, in view of the problem that the existing Staple algorithm does not fully exploit the advantages of the correlation filter tracking model and the color classifier tracking model, a tracking method that can fully exploit the advantages of both models is needed.

Summary of the Invention

Aiming at the problem that the existing Staple algorithm does not fully exploit the advantages of the correlation filter tracking model and the color classifier tracking model, an embodiment of the present invention proposes a target tracking method based on dual-model adaptive fusion. The method uses an adaptive fusion coefficient based on relative confidence, so that the correlation filter and the color classifier are fused optimally and each model contributes its tracking strengths.

The specific scheme of the method is as follows. A target tracking method based on dual-model adaptive fusion comprises:
Step S1: acquire the target's initial information from the initial frame;
Step S2: extract color histograms from the foreground region and the background region respectively, and solve a ridge regression problem to train a color classifier;
Step S3: extract features from the correlation filter region and train a correlation filter;
Step S4: initialize a scale filter and train it on image patches extracted at different scales;
Step S5: detect the target with the color classifier and obtain the color classifier's response;
Step S6: detect the target with the correlation filter within the correlation filter region and obtain the correlation filter's response;
Step S7: compute a relative confidence from the correlation filter's response, compute an adaptive fusion coefficient from the relative confidence, and fuse the correlation filter's response and the color classifier's response with the adaptive fusion coefficient to obtain the detected target position;
Step S8: extract the target's features and update the correlation filter and the color classifier;
Step S9: detect the scale change and update the target, the foreground region, the background region, and the scale filter;
Step S10: repeat steps S5 to S9 until the video ends.
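As a minimal illustration, the per-frame loop of steps S5 to S10 can be sketched as follows. The callables color_detect, cf_detect, fuse, and update are hypothetical stand-ins for the two detectors, the adaptive fusion of step S7, and the update steps S8 and S9; they are not the patent's implementation.

```python
def run_tracker(frames, init_pos, color_detect, cf_detect, fuse, update):
    """Per-frame loop of steps S5-S10: detect with both models, fuse the
    responses to locate the target, then update the models and the scale."""
    pos = init_pos                       # S1-S4 are assumed done by the caller
    trajectory = [pos]
    for frame in frames:
        resp_p = color_detect(frame, pos)    # S5: color classifier response
        resp_cf = cf_detect(frame, pos)      # S6: correlation filter response
        pos = fuse(resp_cf, resp_p)          # S7: adaptive fusion -> position
        update(frame, pos)                   # S8/S9: update filters and scale
        trajectory.append(pos)
    return trajectory                        # S10: loop until the video ends
```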

Preferably, the target's initial information includes the target position and the target's length and width.

Preferably, the color histogram in step S2 is extracted as follows: the color space is divided evenly into a number of color intervals, each interval is defined as one bin of the histogram, and the number of pixels of the foreground region or the background region falling into each bin is counted.

Preferably, the bin width of the color histogram is 8.

Preferably, the ridge regression problem is

    β_t = argmin_β { L_hist(β; χ_t) + λ_hist·‖β‖² }

where χ_t denotes the training samples and their corresponding regression targets, β is the color classifier to be solved for, L_hist is the classifier's loss function, and λ_hist is the regularization coefficient.

Preferably, the correlation filter is trained by minimizing

    ε = ‖ ∑_{l=1}^{d} h^l ∗ f^l − g ‖² + λ·∑_{l=1}^{d} ‖h^l‖²

where f is the sample, d is the number of feature channels of f, h is the correlation filter, g is the desired filter output (a Gaussian function), ∗ denotes convolution, and λ is the regularization coefficient.

Preferably, the relative confidence is

    r_t = APCE_t / ( (1/t)·∑_{i=1}^{t} APCE_i )

where r_t is the confidence of the correlation filter's detection at frame t relative to its global average, and APCE_t is the average peak-to-correlation energy of the frame-t response y_t:

    APCE_t = | y_t,max − y_t,min |² / mean( ∑_{w,h} ( y_{w,h} − y_t,min )² )

Preferably, the adaptive fusion coefficient is

    α_t = f(α, ρ, r_t)    [formula image in the original, expressing α_t in terms of α, ρ, and r_t]

where α_t is the adaptive fusion coefficient at frame t, ρ is the influence factor of the relative confidence, r_t is the relative confidence of the correlation filter's detection at frame t, and α is a constant weighting coefficient.

Preferably, the adaptive fusion coefficient fuses the response of the correlation filter and the response of the color classifier as

    response = (1 − α_t)·response_cf + α_t·response_p

where response_cf is the correlation filter's response, response_p is the color classifier's response, α_t is the adaptive fusion coefficient at frame t, and response is the final response.

Preferably, the influence factor ρ adjusts the relative weight given to the correlation filter's result and the color classifier's result.

As can be seen from the above technical solutions, the embodiments of the present invention have the following advantage:

An embodiment of the present invention proposes a target tracking method based on dual-model adaptive fusion. Through an adaptive fusion coefficient based on relative confidence, the correlation filter and the color classifier are fused optimally, and each model contributes its tracking strengths.

Brief Description of the Drawings

FIG. 1 is a schematic flowchart of a target tracking method based on dual-model adaptive fusion provided in an embodiment of the present invention;

FIG. 2 compares the experimental results on OTB2013 of the proposed dual-model adaptive-fusion tracking method and other tracking methods;

FIG. 3 is a qualitative comparison of the proposed method with DSST and KCF when tracking in different image sequences;

FIG. 4 is an alternative flowchart of the embodiment shown in FIG. 1.

Detailed Description

To help those skilled in the art better understand the solution of the present invention, the technical solutions in the embodiments are described below clearly and completely with reference to the accompanying drawings. The described embodiments are only a part of the embodiments of the present invention, not all of them. All other embodiments obtained by persons of ordinary skill in the art from these embodiments without creative effort fall within the protection scope of the present invention.

The terms "first", "second", "third", "fourth", and so on (if present) in the description, the claims, and the drawings are used to distinguish similar objects and do not necessarily describe a particular order or sequence. Data so described may be interchanged where appropriate, so that the embodiments described here can be practiced in orders other than those illustrated or described. Furthermore, the terms "comprising" and "having" and their variants are intended to cover non-exclusive inclusion: a process, method, system, product, or device comprising a series of steps or units is not necessarily limited to the steps or units expressly listed, but may include other steps or units not expressly listed or inherent to the process, method, product, or device.

As shown in FIG. 1, an embodiment of the present invention provides a target tracking method based on dual-model adaptive fusion. The method comprises ten steps, described in detail below.

Step S1: Acquire the target's initial information from the initial frame. The initial information includes the target position and the target's length and width. Step S1 also includes routine initialization of parameters and regions.

Step S2: Extract color histograms from the foreground region and the background region respectively, and solve a ridge regression problem to train the color classifier.

A color histogram is extracted from the foreground or background region as follows: the color space is divided evenly into a number of color intervals, each interval is defined as one bin of the histogram, and the number of pixels of the region falling into each bin is counted. In one embodiment, the bin width of the color histogram is 8.
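As an illustration, a binned RGB histogram with the bin width of 8 described above can be computed as follows. This is a sketch assuming 8-bit RGB input; the function name and array layout are illustrative, not taken from the patent.

```python
import numpy as np

def color_histogram(region, bin_width=8):
    """Count the pixels of a region falling into each RGB bin.
    With bin_width = 8, each 8-bit channel is split into 256 // 8 = 32
    intervals, so the histogram has 32 x 32 x 32 bins."""
    bins = 256 // bin_width
    pixels = region.reshape(-1, 3).astype(np.int64)
    idx = pixels // bin_width                      # per-channel bin index
    hist = np.zeros((bins, bins, bins), dtype=np.int64)
    np.add.at(hist, (idx[:, 0], idx[:, 1], idx[:, 2]), 1)  # unbuffered counts
    return hist
```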

The response of the color classifier is obtained by solving a ridge regression problem, Equation 1:

    β_t = argmin_β { L_hist(β; χ_t) + λ_hist·‖β‖² }    (Equation 1)

where χ_t denotes the training samples and their corresponding regression targets, β is the color classifier to be solved for, L_hist is the classifier's loss function, and λ_hist is the regularization coefficient.

Let (q, y) ∈ W denote a set of rectangular sample boxes q and their corresponding regression labels y ∈ [0, 1], including the positive sample (p, 1), and let x denote the image. Summing the loss over all sampled boxes converts Equation 1 into Equation 2:

    L_hist(x, p, β) = ∑_{(q,y)∈W} ( β^T [ ∑_{u∈H} ψ_{T(x,q)}[u] ] − y )²    (Equation 2)

ψ_{T(x,q)} denotes an M-channel feature transform and β is the model. Applying linear regression at each pixel u, with regression target 0 for pixels belonging to the background region B and 1 for pixels belonging to the foreground region O, and abbreviating ψ_{T(x,q)} as ψ, the loss for a single image can be written as Equation 3:

    L_hist(x, p, β) = (1/|O|)·∑_{u∈O} ( β^T ψ[u] − 1 )² + (1/|B|)·∑_{u∈B} ( β^T ψ[u] )²    (Equation 3)

where O is the rectangular foreground region tightly enclosing the target and B is the rectangular background region containing the target.

In this embodiment the color images are in the RGB color space, so an RGB color histogram is used as the feature. The loss function decomposes into a sum over the histogram bins; the preferred value in this embodiment is M = 32.

β^T ψ[u] can be obtained quickly by building a lookup table k that maps a pixel value u to the index of its histogram bin, i.e., by back-projection with the color histogram. Writing β^T ψ[u] = β_{k(u)} gives Equation 4:

    L_hist(x, p, β) = (1/|O|)·∑_j N_j(O)·(β_j − 1)² + (1/|B|)·∑_j N_j(B)·β_j²    (Equation 4)

where N_j(A) = |{ u ∈ A : k(u) = j }| is the number of pixels of region A falling in the j-th bin.

Therefore, the solution of the ridge regression problem of Equation 1 is Equation 5:

    β_j = ρ_j(O) / ( ρ_j(O) + ρ_j(B) + λ_hist )    (Equation 5)

where ρ_j(A) = N_j(A) / |A|. Evaluating β_{k(u)} per pixel and summing with an integral image yields the response of the color classifier.
Step S3: Extract features from the correlation filter region and train the correlation filter. A sample template x is extracted around the target center, and a large set of training samples x_i is constructed by cyclic shifts. Multi-channel HOG features are extracted to train the correlation filter.

The correlation filter is obtained by solving a ridge regression problem. For a sample f composed of d feature channels, a d-dimensional correlation filter h is trained by minimizing Equation 6:

    ε = ‖ ∑_{l=1}^{d} h^l ∗ f^l − g ‖² + λ·∑_{l=1}^{d} ‖h^l‖²    (Equation 6)

where f is the sample, d is the number of feature channels of f, h is the correlation filter, g is the desired filter output (a Gaussian function), ∗ denotes convolution, and λ is a regularization coefficient that prevents overfitting.
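A single-channel sketch of the closed-form solution and its use for detection is shown below (the d-channel case of Equation 6 sums conj(F^k)·F^k over channels in the denominator). This is a minimal MOSSE/DCF-style sketch with illustrative names, not the patent's code.

```python
import numpy as np

def train_filter(f, g, lam=1e-2):
    """Single-channel ridge solution in the Fourier domain (MOSSE-style):
    returns Hconj = G * conj(F) / (F * conj(F) + lam)."""
    F = np.fft.fft2(f)
    G = np.fft.fft2(g)
    return G * np.conj(F) / (F * np.conj(F) + lam)

def detect(Hconj, z):
    """Filter response on a new patch z: inverse FFT of Hconj * Z; the
    target is located at the peak of the real-valued response map."""
    Z = np.fft.fft2(z)
    return np.real(np.fft.ifft2(Hconj * Z))
```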

Step S4: Initialize the scale filter and train it on image patches extracted at different scales. Centered at the target position determined in the previous frame, a series of image patch features at different scales is extracted to build a feature pyramid. With target size H × W, a total of S patches of size aⁿH × aⁿW is extracted around the target position, where a is the scale factor and n is given by Equation 7:

    n ∈ { −⌊(S−1)/2⌋, …, ⌊(S−1)/2⌋ }    (Equation 7)

In this embodiment S = 33; in other embodiments S may take other values.
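The scale pyramid can be sketched as follows. Only S = 33 is stated in the text; the scale step a = 1.02 is a hypothetical value of the kind commonly used in DSST-style trackers.

```python
def scale_factors(a=1.02, S=33):
    """Return the S relative scales a**n for n in the symmetric range
    {-(S-1)//2, ..., (S-1)//2}, so the middle level is the current size."""
    half = (S - 1) // 2
    return [a ** n for n in range(-half, half + 1)]
```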

Step S5: Detect the target with the color classifier and obtain the color classifier's response.

After minimizing Equation 6 and transforming it to the frequency domain, the filter H^l in the frequency domain is Equation 8:

    H^l = conj(G)·F^l / ( ∑_{k=1}^{d} conj(F^k)·F^k + λ )    (Equation 8)

where the capital letters in Equation 8 denote the discrete Fourier transforms of the corresponding quantities, and conj(F^k) denotes the complex conjugate of F^k.

Step S6: Detect the target with the correlation filter within the correlation filter region and obtain the correlation filter's response, which is computed as the inverse Fourier transform based on Equation 8.

Step S7: Compute the relative confidence from the correlation filter's response, compute the adaptive fusion coefficient from the relative confidence, and fuse the correlation filter's response and the color classifier's response with the adaptive fusion coefficient to obtain the detected target position.

To combine the two tracking models so that their strengths complement each other, the Staple tracking method integrates the correlation filter response response_cf and the color classifier response response_p by a weighted average with a constant coefficient α, Equation 9:

    response = (1 − α)·response_cf + α·response_p    (Equation 9)

where α is a constant coefficient. Although this weighted combination effectively fuses two complementary models, a single fixed fusion coefficient prevents the correlation filter and the color classifier from being combined optimally. To address this, the embodiment of the present invention proposes an adaptive fusion coefficient based on relative confidence.

The embodiment uses the average peak-to-correlation energy (APCE) to adjust the fusion coefficient adaptively. APCE is an index for evaluating the confidence of a correlation filter's detection result: the larger the APCE, the higher the confidence. The APCE of the frame-t response y_t is Equation 10:

    APCE_t = | y_t,max − y_t,min |² / mean( ∑_{w,h} ( y_{w,h} − y_t,min )² )    (Equation 10)

where y_t,max denotes the maximum value of the response y_t, y_t,min denotes its minimum value, and mean denotes averaging.
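APCE can be computed directly from a response map. The sketch below follows the definition in Equation 10; `apce` is an illustrative name.

```python
import numpy as np

def apce(y):
    """Average peak-to-correlation energy of a response map y:
    |y_max - y_min|**2 / mean((y - y_min)**2). Sharp, unimodal peaks give
    large values; flat or multi-peak maps give small ones."""
    y = np.asarray(y, dtype=float)
    y_min = y.min()
    return (y.max() - y_min) ** 2 / np.mean((y - y_min) ** 2)
```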

The relative confidence of the correlation filter's detection result at frame t with respect to the global history is Equation 11:

    r_t = APCE_t / ( (1/t)·∑_{i=1}^{t} APCE_i )    (Equation 11)

where r_t denotes the relative confidence of the correlation filter's detection result at frame t with respect to the global history.

Therefore, the constant coefficient α in Equation 9 is replaced by the adaptive fusion coefficient α_t, given by Equation 12:

    α_t = f(α, ρ, r_t)    [Equation 12: formula image in the original]

where α_t is the adaptive fusion coefficient at frame t, ρ is the influence factor of the relative confidence, r_t is the relative confidence of the correlation filter's detection at frame t, and α is a constant weighting coefficient. The influence factor ρ adjusts the relative weight of the correlation filter's result and the color classifier's result: when the relative confidence of the correlation filter's detection exceeds 1, more trust is placed in the correlation filter's result; otherwise more trust is placed in the color classifier's result.
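Since the exact form of Equation 12 is not recoverable from this text, the sketch below uses one hypothetical rule, alpha_t = alpha / r_t**rho, that matches the qualitative behaviour described: alpha_t falls below the constant alpha when r_t > 1 (trusting the correlation filter more) and rises when r_t < 1. The rule and all names are assumptions, not the patent's definition.

```python
import numpy as np

def fuse_responses(resp_cf, resp_p, r_t, alpha=0.3, rho=0.5):
    """Fuse the two response maps with an adaptive coefficient:
    response = (1 - alpha_t) * resp_cf + alpha_t * resp_p."""
    alpha_t = alpha / (r_t ** rho)           # hypothetical stand-in for Eq. 12
    alpha_t = min(max(alpha_t, 0.0), 1.0)    # keep the mixing weight in [0, 1]
    return (1.0 - alpha_t) * np.asarray(resp_cf) + alpha_t * np.asarray(resp_p)
```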

Step S8: Extract the target's features and update the correlation filter and the color classifier.

Step S9: Detect the scale change and update the target, the foreground region, the background region, and the scale filter.

At the new position, 33 image patches of different scales are extracted and resized to a common size, and candidate scale images are generated by cyclic shift. The scale correlation filter is applied to the candidate scale images, and the scale with the largest response is selected as the new scale. The target, foreground region, and background region are then updated.

Step S10: Repeat steps S5 to S9 until the video ends.

The embodiment of the present invention thus proposes a target tracking method based on dual-model adaptive fusion: through an adaptive fusion coefficient based on relative confidence, the correlation filter and the color classifier are fused optimally, and each model contributes its tracking strengths.

On a computer with an i7-4710HQ 2.5 GHz processor and 8 GB of RAM, the proposed method runs at up to 28 frames per second in Matlab R2016a.

FIG. 2 compares the experimental results of the proposed method and other tracking methods on OTB2013. The comparison shows that the proposed method (labeled "Our" in the figure) improves precision and success rate by 1.9% and 2%, respectively, over the original Staple algorithm. FIG. 3 shows a qualitative comparison of the proposed method with DSST and KCF on different image sequences; as can be seen, the proposed method tracks the target more accurately.

FIG. 4 shows an alternative flowchart of the embodiment of FIG. 1. After tracking starts, initialization is performed, and three models are trained: the scale filter, the color classifier, and the correlation filter. The trained correlation filter detects the target to obtain the correlation filter response, and the trained color classifier detects the target to obtain the classifier response. The adaptive fusion coefficient is computed and used to fuse the correlation filter response and the classifier response, yielding the fused target position. The trained scale filter then detects the scale change, a series of updates is performed, and the process repeats until the video ends.

In the description of this specification, reference to the terms "one embodiment", "some embodiments", "an example", "a specific example", or "some examples" means that a specific feature, structure, material, or characteristic described in connection with that embodiment or example is included in at least one embodiment or example of the present invention. In this specification, schematic references to the above terms do not necessarily refer to the same embodiment or example. Furthermore, the specific features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. In addition, those skilled in the art may combine the different embodiments or examples described in this specification, and features thereof, provided they do not contradict each other.

Although embodiments of the present invention have been shown and described above, it should be understood that the above embodiments are exemplary and should not be construed as limiting the present invention; those of ordinary skill in the art may make changes, modifications, substitutions, and variations to the above embodiments within the scope of the present invention.

Claims (10)

1. A target tracking method based on dual-model adaptive fusion, characterized in that the method comprises:
Step S1: obtaining initial information of the target from the initial frame;
Step S2: extracting color histograms from the foreground region and the background region respectively, and solving a ridge regression equation to train a color classifier;
Step S3: extracting features from the correlation filter region, and training a correlation filter;
Step S4: initializing a scale filter, and extracting image patches at different scales to train the scale filter;
Step S5: detecting the target with the color classifier to obtain the response of the color classifier;
Step S6: detecting the target with the correlation filter within the correlation filter region to obtain the response of the correlation filter;
Step S7: computing a relative confidence from the response of the correlation filter, computing an adaptive fusion coefficient based on the relative confidence, and fusing the response of the correlation filter and the response of the color classifier with the adaptive fusion coefficient to obtain the position of the detected target;
Step S8: extracting features of the target, and updating the correlation filter and the color classifier;
Step S9: detecting the scale change, and updating the target, the foreground region, the background region, and the scale filter;
Step S10: repeating steps S5 to S9 until the video ends.
2. The target tracking method based on dual-model adaptive fusion according to claim 1, characterized in that the initial information of the target includes the position of the target, the length of the target, and the width of the target.
3. The target tracking method based on dual-model adaptive fusion according to claim 1, characterized in that the process of extracting a color histogram in step S2 is: dividing the color space evenly into several color intervals, defining each color interval as one bin of the histogram, and counting the number of pixels of the foreground region or the background region that fall into each bin.
4. The target tracking method based on dual-model adaptive fusion according to claim 3, characterized in that the bin width of the color histogram is 8.
5. The target tracking method based on dual-model adaptive fusion according to claim 1, characterized in that the ridge regression equation is expressed as:
[Equation image: ridge regression objective defining the color classifier β via the loss L_hist and the regularization coefficient λ_hist]
where χ_t denotes the training samples and their corresponding regression values, β is the color classifier to be solved, L_hist denotes the loss function of the classifier, and λ_hist is the regularization coefficient.
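Claims 3 to 5 describe the color model: per-region histograms with bin width 8 (32 bins per 8-bit channel), and a ridge-regression classifier. The exact loss L_hist is rendered as an image in the original; the closed form below is the per-bin ridge solution used by the Staple tracker that this method builds on, stated here as an assumption rather than a quote, and the (H, W, C) region layout is likewise assumed for illustration.

```python
import numpy as np

def color_histogram(region, bin_width=8):
    """Count the region's pixels per color bin (claims 3-4).

    region: (H, W, C) uint8 array; returns (C, 256 // bin_width) counts,
    one row of bin counts per color channel.
    """
    n_bins = 256 // bin_width
    hist = np.zeros((region.shape[2], n_bins), dtype=np.int64)
    for c in range(region.shape[2]):
        idx = region[:, :, c] // bin_width            # pixel value -> bin index
        hist[c] = np.bincount(idx.ravel(), minlength=n_bins)
    return hist

def train_color_classifier(fg_counts, bg_counts, lambda_hist=1e-3):
    """Per-bin ridge weights beta (Staple-style closed form, assumed):
    beta_j = rho_fg(j) / (rho_fg(j) + rho_bg(j) + lambda_hist)."""
    rho_fg = fg_counts / max(fg_counts.sum(), 1)      # foreground bin proportions
    rho_bg = bg_counts / max(bg_counts.sum(), 1)      # background bin proportions
    return rho_fg / (rho_fg + rho_bg + lambda_hist)
```

A bin whose pixels appear only in the foreground gets a weight near 1, one seen only in the background gets 0, so the classifier's per-pixel response directly scores foreground likelihood.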
6. The target tracking method based on dual-model adaptive fusion according to claim 1, characterized in that the correlation filter is trained by minimizing the following formula:
ε = || Σ_{l=1..d} h^l * f^l − g ||^2 + λ Σ_{l=1..d} ||h^l||^2
where f denotes the sample, d is the number of feature dimensions of the sample f, h is the correlation filter, g denotes the desired output of the correlation filter and is a Gaussian function, * denotes the convolution operation, and λ is the regularization coefficient.
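The minimization of claim 6 admits a well-known closed-form solution in the Fourier domain, as in MOSSE/DSST-style trackers: with F and G the DFTs of the sample and of the desired Gaussian output, the (conjugate) filter is H* = G·conj(F) / (Σ_k conj(F^k)·F^k + λ). A single-channel sketch under that reading follows; the multi-channel case sums over the d feature channels in the denominator.

```python
import numpy as np

def train_correlation_filter(f, g, lam=1e-2):
    """Closed-form correlation filter (conjugate form H*) in the Fourier domain."""
    F = np.fft.fft2(f)
    G = np.fft.fft2(g)
    return G * np.conj(F) / (F * np.conj(F) + lam)

def detect(H_star, z):
    """Correlation response of the trained filter on a new sample z."""
    return np.real(np.fft.ifft2(np.fft.fft2(z) * H_star))
```

Detecting on the training sample itself reproduces (up to the regularizer's damping) the Gaussian label g, so the response peaks at the labeled target position.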
7. The target tracking method based on dual-model adaptive fusion according to claim 1, characterized in that the relative confidence is expressed as:
[Equation image: relative confidence r_t of the correlation filter response at frame t]
where r_t is the relative confidence of the detection result of the correlation filter at frame t with respect to the global history, and APCE_t is the average peak-to-correlation energy of the response y_t at frame t, computed as:
APCE_t = |y_max − y_min|^2 / mean_{i,j}( (y_{i,j} − y_min)^2 )
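The APCE of claim 7 compares the peak-to-minimum gap of the response map against its mean squared deviation from the minimum: sharp single-peak maps score high, noisy or multi-peak maps score low. The exact expression for r_t is an image in the original; taking the current APCE relative to its historical average is one plausible reading and is assumed in `relative_confidence` below.

```python
import numpy as np

def apce(y):
    """Average peak-to-correlation energy of a response map y (claim 7)."""
    d = y - y.min()
    return (d.max() ** 2) / np.mean(d ** 2)

def relative_confidence(apce_t, apce_history):
    """Assumed form: current APCE relative to its running historical mean."""
    return apce_t / np.mean(apce_history)
```

A response map with one sharp peak yields a large APCE, so r_t > 1 signals a more reliable correlation-filter detection than usual and shifts the fusion weight toward it.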
8. The target tracking method based on dual-model adaptive fusion according to claim 7, characterized in that the adaptive fusion coefficient is expressed as:
[Equation image: adaptive fusion coefficient α_t as a function of r_t, ρ, and α]
where α_t is the adaptive fusion coefficient at frame t, ρ is the influence factor of the relative confidence, r_t is the relative confidence of the detection result of the correlation filter at frame t, and α is a constant weighting coefficient.
9. The target tracking method based on dual-model adaptive fusion according to claim 8, characterized in that the response of the correlation filter and the response of the color classifier are fused with the adaptive fusion coefficient according to:
response = (1 − α_t)·response_cf + α_t·response_p,
where response_cf is the response of the correlation filter, response_p is the response of the color classifier, α_t is the adaptive fusion coefficient at frame t, and response is the final response.
10. The target tracking method based on dual-model adaptive fusion according to claim 8, characterized in that the influence factor ρ is used to adjust the weights of the discrimination result of the correlation filter and the discrimination result of the color classifier.
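The fusion of claim 9 is a convex combination of the two response maps; taking the peak of the fused map as the target position is the usual reading and is assumed here.

```python
import numpy as np

def fuse_responses(response_cf, response_p, alpha_t):
    """Fuse the correlation-filter and color-classifier responses (claim 9)
    and return the fused map together with the position of its peak."""
    response = (1 - alpha_t) * response_cf + alpha_t * response_p
    return response, np.unravel_index(np.argmax(response), response.shape)
```

Because α_t is recomputed each frame from the relative confidence, the tracker leans on the correlation filter when its response is sharp and on the color classifier when it is not.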
CN201811259843.8A 2018-10-26 2018-10-26 Target tracking method based on adaptive fusion of double models Pending CN111104948A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811259843.8A CN111104948A (en) 2018-10-26 2018-10-26 Target tracking method based on adaptive fusion of double models


Publications (1)

Publication Number Publication Date
CN111104948A true CN111104948A (en) 2020-05-05

Family

ID=70419143

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811259843.8A Pending CN111104948A (en) 2018-10-26 2018-10-26 Target tracking method based on adaptive fusion of double models

Country Status (1)

Country Link
CN (1) CN111104948A (en)


Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030200065A1 (en) * 2001-04-20 2003-10-23 Li Luo Wen Maneuvering target tracking method via modifying the interacting multiple model (IMM) and the interacting acceleration compensation (IAC) algorithms
US20130051613A1 (en) * 2011-08-29 2013-02-28 International Business Machines Corporation Modeling of temporarily static objects in surveillance video data
US20130084006A1 (en) * 2011-09-29 2013-04-04 Mediatek Singapore Pte. Ltd. Method and Apparatus for Foreground Object Detection
CN103116896A (en) * 2013-03-07 2013-05-22 中国科学院光电技术研究所 Automatic detection tracking method based on visual saliency model
US20130156299A1 (en) * 2011-12-17 2013-06-20 Motorola Solutions, Inc. Method and apparatus for detecting people within video frames based upon multiple colors within their clothing
CN103186230A (en) * 2011-12-30 2013-07-03 北京朝歌数码科技股份有限公司 Man-machine interaction method based on color identification and tracking
CN104833357A (en) * 2015-04-16 2015-08-12 中国科学院光电研究院 Multisystem multi-model mixing interactive information fusion positioning method
CN108646725A (en) * 2018-07-31 2018-10-12 河北工业大学 Dual model method for diagnosing faults based on dynamic weighting


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Xiong Changzhen et al., "Robust dual-model adaptive switching real-time tracking algorithm", Acta Optica Sinica *
Wang Yanchuan et al., "Adaptive target tracking algorithm based on dual-model fusion", Application Research of Computers *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113888586A (en) * 2021-09-01 2022-01-04 河北汉光重工有限责任公司 A method and device for target tracking based on correlation filtering
CN113888586B (en) * 2021-09-01 2024-10-29 河北汉光重工有限责任公司 Target tracking method and device based on correlation filtering

Similar Documents

Publication Publication Date Title
CN107169994B (en) Correlation filtering tracking method based on multi-feature fusion
CN111260738A (en) Multi-scale target tracking method based on relevant filtering and self-adaptive feature fusion
CN107909005A (en) Personage's gesture recognition method under monitoring scene based on deep learning
CN108986140A (en) Target scale adaptive tracking method based on correlation filtering and color detection
CN107481264A (en) A kind of video target tracking method of adaptive scale
CN107194317B (en) A violent behavior detection method based on grid clustering analysis
CN103971386A (en) Method for foreground detection in dynamic background scenario
CN107169985A (en) A kind of moving target detecting method based on symmetrical inter-frame difference and context update
CN106446890B (en) A candidate region extraction method based on window scoring and superpixel segmentation
CN107220640A (en) Character identifying method, device, computer equipment and computer-readable recording medium
CN102682303A (en) Crowd exceptional event detection method based on LBP (Local Binary Pattern) weighted social force model
CN105095870A (en) Pedestrian re-recognition method based on transfer learning
CN108596951A (en) A kind of method for tracking target of fusion feature
CN110866287A (en) Point attack method for generating countercheck sample based on weight spectrum
CN107564035B (en) Video Tracking Method Based on Important Region Recognition and Matching
CN105488811A (en) Depth gradient-based target tracking method and system
CN113111878B (en) Infrared weak and small target detection method under complex background
CN108256462A (en) A kind of demographic method in market monitor video
CN106557750A (en) It is a kind of based on the colour of skin and the method for detecting human face of depth y-bend characteristics tree
CN113888586A (en) A method and device for target tracking based on correlation filtering
CN105678249A (en) Face identification method aiming at registered face and to-be-identified face image quality difference
CN112613565B (en) Anti-occlusion tracking method based on multi-feature fusion and adaptive learning rate updating
CN110827319B (en) Improved Staple target tracking method based on local sensitive histogram
CN110751671B (en) Target tracking method based on kernel correlation filtering and motion estimation
CN114202643A (en) Apple leaf disease identification terminal and method based on multi-sensor fusion

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20200505