WO2018068718A1 - Target tracking method and target tracking device - Google Patents

Target tracking method and target tracking device

Info

Publication number
WO2018068718A1
WO2018068718A1 (PCT/CN2017/105652)
Authority
WO
WIPO (PCT)
Prior art keywords
area
target
region
target tracking
local
Prior art date
Application number
PCT/CN2017/105652
Other languages
English (en)
French (fr)
Inventor
陈海林 (Chen Hailin)
Original Assignee
夏普株式会社 (Sharp Corporation)
陈海林 (Chen Hailin)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 夏普株式会社 (Sharp Corporation) and 陈海林 (Chen Hailin)
Publication of WO2018068718A1

Links

Images

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/20 Analysis of motion
    • G06T7/246 Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/20 Analysis of motion
    • G06T7/277 Analysis of motion involving stochastic approaches, e.g. using Kalman filters
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2411 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on the proximity to a decision surface, e.g. support vector machines
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10016 Video; Image sequence
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10024 Color image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20112 Image segmentation details
    • G06T2207/20124 Active shape model [ASM]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30232 Surveillance

Definitions

  • the present application relates generally to image processing and, in particular, to a target tracking method and target tracking device, for example, for use in mobile devices, smart televisions, and other devices.
  • Document 1 discloses a video target tracking method under scale change and occlusion. To solve the problem that the local binary pattern (LBP) tracking algorithm performs poorly under scale change and occlusion, it proposes an optimized target tracking method combining the LBP algorithm, the normalized moment of inertia (NMI) feature, and Kalman filtering.
  • Document 2 discloses a video target tracking method based on neighbourhood components analysis and scale-space theory. With this target tracking method, the position and size of the target can be located more accurately, illumination and color changes of the target can be adapted to more effectively, and target occlusion is handled robustly.
  • Document 3 discloses a target tracking method in video, comprising: presetting the target position of the current frame in the video and initializing a sparse matrix; acquiring high-dimensional feature vectors of samples around the target position and projecting the high-dimensional vectors to low-dimensional vectors using the sparse matrix; and updating the classifier parameters to obtain a new classifier for the next frame, selecting the sample position with the largest classification value as the target position of the next frame. This method increases the speed of the target tracking method.
  • Document 4 discloses a system and method for tracking targets, for detecting and tracking objects in a sequence of images. First, a first set of parameters for a parameterized shape model is generated based on a first image. A second set of parameters is then generated by fitting the parameterized shape model to an object in a second image of the plurality of images. Especially in the case of occlusion, it is not easy to realize a parameterized shape model for an arbitrary non-rigid object.
  • However, each of these solutions has drawbacks. Document 1 mainly addresses video target tracking under scale change and occlusion and cannot be used to handle non-rigid targets; Document 2 addresses illumination and color changes of the target and improves the tracked position, but is not suited to non-rigid targets, especially under occlusion; Document 3 aims at improving tracking speed and is likewise unsuited to non-rigid targets; and Document 4 cannot easily realize a parameterized shape model for an arbitrary non-rigid object, especially in the case of occlusion.
  • a target tracking method is proposed, which may include: designating a target to be tracked in the first frame of an image frame sequence; determining, by using a local region model, whether at least one candidate local region exists in the current frame; if at least one candidate local region exists, performing primary matching between the candidate local regions and the corresponding local regions in previous frames; and, if it is determined from the result of the primary matching that at least one depth matching region exists in the current frame, performing neighborhood growth on the at least one depth matching region to obtain a target tracking result.
  • the target tracking method may further comprise updating the local region model using the target tracking result.
  • updating the local region model using the target tracking result may include: training a plurality of classifiers online using the initially designated information of the target to be tracked and the target tracking result of the current frame, respectively; merging the learning results; and updating the local region model with the merged learning results, so that the updated local region model is used for the target tracking operation of the next frame.
  • determining whether at least one candidate local region exists in the current frame may include: performing superpixel-based image segmentation on the current frame to obtain a plurality of local regions; and determining, by using the local region model, whether the plurality of local regions include a candidate local region that meets a predetermined condition.
  • determining whether there is a candidate local region that meets the predetermined condition may include estimating the probability that each of the plurality of local regions belongs to the tracked target by using the formula $p = \sum_{i=1}^{N} w_i p_i$, where $w_i$ is a coefficient, $p_i$ is the probability, estimated by the i-th learning method, that the corresponding local region belongs to the tracked target, $p$ is the integrated probability that the corresponding local region belongs to the tracked target, and $N$ is a natural number greater than 2; determining a corresponding local region whose integrated probability $p$ is greater than the third threshold as a candidate local region; and, if there is no corresponding local region whose integrated probability $p$ is greater than the third threshold, determining that the target is lost. Preferably, $\sum_{i=1}^{N} w_i = 1$.
  • the primary matching may include: calculating a first similarity between each of the candidate local regions and the corresponding local region of a previous frame, and, if the first similarity is greater than the first threshold, setting the current candidate local region as a depth matching region.
  • the neighborhood growth may include: for each depth matching region, calculating a second similarity between each of all adjacent, other unmatched local regions of the depth matching region and the depth matching region; if the second similarity is greater than the second threshold, considering the unmatched local region as belonging to the target region and setting the unmatched local region as a grown region; and outputting the grown regions as the target tracking result.
  • when there are multiple depth matching regions, the step of calculating the second similarity is performed iteratively on the grown neighborhood regions until the grown regions adjoin the other depth matching regions.
  • a target tracking device may include: a primary matching unit configured to receive an image frame sequence, determine, by using a local region model, whether at least one candidate local region exists in the current frame, and, if at least one candidate local region exists, perform primary matching between each of the at least one candidate local region and the corresponding local region in previous frames to obtain at least one depth matching region; a neighborhood growing unit configured to perform neighborhood growth on the at least one depth matching region from the primary matching unit to obtain a target tracking result; and a model updating unit configured to update the local region model using the target tracking result from the neighborhood growing unit and send the updated local region model to the primary matching unit.
  • primary matching is used to obtain the depth matching regions between the target in the current frame and the target in previous frames, and neighborhood growth is performed on the depth matching regions to obtain the target tracking result.
  • an integrated learning method is used to update the local region model, and the updated local region model is used to derive at least one candidate local region.
  • FIG. 1 shows a flow chart of a target tracking method according to an embodiment of the present invention
  • FIG. 2 illustrates an example flow chart for determining candidate local regions in a target tracking method in accordance with an embodiment of the present invention
  • FIG. 3 illustrates an example flow diagram of performing a primary match in a target tracking method in accordance with an embodiment of the present invention
  • FIG. 4 illustrates an example of neighborhood growth in a target tracking method in accordance with an embodiment of the present invention.
  • FIG. 5 illustrates an example flowchart of updating a local area model in a target tracking method according to an embodiment of the present invention
  • FIG. 6 shows a schematic block diagram of a target tracking device in accordance with an embodiment of the present invention.
  • a target tracking method and a target tracking device are provided, in which a technique called "recurrent graph tracking" is utilized: primary matching is used to obtain the depth matching regions between the target in the current frame and in previous frames, and neighborhood growth is performed on the depth matching regions to obtain the target tracking result.
  • in addition, an online learning method, for example, is used to update the local region model, and the updated local region model is used to obtain at least one candidate local region.
  • FIG. 1 shows a flow chart of a target tracking method in accordance with an embodiment of the present invention.
  • the serial numbers of the steps in the following methods serve only to denote the steps for description and should not be regarded as indicating the order in which the steps are executed.
  • unless explicitly stated, the methods need not be performed exactly in the order shown; similarly, multiple blocks may be executed in parallel rather than sequentially. It should also be understood that the methods can likewise be implemented on a variety of devices.
  • a video object tracking method 100 may include the following steps.
  • in step S110, the target to be tracked is designated in a previous frame of the image frame sequence.
  • the "previous frame" herein may be the first frame of the sequence of image frames, the frame immediately preceding the current frame, or one or more intermediate frames. Further, the "previous frame" may include at least one frame.
  • in step S120, the local region model is used to determine whether at least one candidate local region exists in the current frame. If a candidate local region exists, the method 100 proceeds to step S130. If no candidate local region exists, the method 100 proceeds to step S170 and determines "target lost".
  • in step S130, each of the at least one candidate local region is primarily matched with the corresponding local region in the previous frames.
  • in step S140, it is determined from the result of the primary matching whether at least one depth matching region exists in the current frame. If so, the method 100 proceeds to step S150; in step S150, neighborhood growth is performed on the at least one depth matching region to obtain a target tracking result. If not, the method 100 proceeds to step S170 and determines "target lost".
  • FIG. 2 is a flow chart showing an example of determining a candidate partial region in a target tracking method according to an embodiment of the present invention
  • FIG. 3 is a flowchart showing an example of performing a primary matching in a target tracking method according to an embodiment of the present invention.
  • An example of neighborhood growth in the target tracking method according to an embodiment of the present invention is shown
  • FIG. 5 shows an example flow chart of updating the local region model in the target tracking method according to an embodiment of the present invention.
  • in accordance with an embodiment of the present invention, step S120 of determining candidate local regions may include the following steps.
  • superpixel-based image segmentation is performed on the current frame #t in the image frame sequence to obtain a plurality of local regions.
  • a superpixel-based segmentation method such as Simple Linear Iterative Clustering (SLIC) can be used.
  • in sub-step S223, the local region model is used to determine whether a candidate local region satisfying a predetermined condition exists among the plurality of local regions. If such a candidate local region is determined to exist, step S130 in FIG. 1 is performed; otherwise, step S170 in FIG. 1 is performed.
  • determining whether there is a candidate local region that meets the predetermined condition may include computing features for each local region and using each classifier learned online to estimate the probability that the local region belongs to the target. Then, the probability that each local region belongs to the tracked target is estimated using the formula $p = \sum_{i=1}^{N} w_i p_i$, where $w_i$ is a coefficient, $p_i$ is the probability, estimated by the i-th classifier, that the corresponding local region belongs to the tracked target, $p$ is the integrated probability that the corresponding local region belongs to the tracked target, and $N$ is a natural number greater than 2.
  • a corresponding local region whose integrated probability $p$ is greater than the third threshold may be determined as a candidate local region.
  • the third threshold may be set according to the actual situation, for example, in the range of 0.6 to 0.8. Of course, those skilled in the art will appreciate that the present invention is not limited thereto.
  • a local region model is created for the target to be tracked; the local region model reflects a mapping between the features of each local region and the classifiers to be used, and the classifiers are configured to determine the probability that the corresponding local region belongs to the target to be tracked.
  • with the local region features, the probability that a local region belongs to the target to be tracked can be obtained from the features of that local region.
  • the mapping relationship in the local region model can be updated using the target tracking result of the previous frame, continuously updating the coefficients of the classifiers, as will be described in detail below.
  • in sub-step S223, if it is determined that at least one candidate local region exists, step S130 in FIG. 1 is performed.
  • FIG. 3 shows an example of performing primary matching.
  • the previous frames may be set to the frame #t-1 immediately preceding the current frame #t and the first frame #1 of the image frame sequence, and may also include one or more intermediate frames #m, where m is a natural number greater than 1 and less than t-1.
  • an intermediate frame #m is used for primary matching in the example of FIG. 3.
  • the first similarity between each candidate local region of the current frame #t obtained in step S223 and the corresponding local regions of the first frame #1, the intermediate frame #m, and the previous frame #t-1 is calculated; if the first similarity is greater than the first threshold, the current candidate local region is set as a depth matching region.
  • the current frame #t is matched with the previous frame #t-1 and the first frame #1, respectively, to obtain two pairwise matching sequences. It is also possible to match the current frame #t with one or more intermediate frames #m to obtain a corresponding pairwise matching sequence.
  • the intermediate frame can be randomly selected.
  • the similarity may be calculated from one or several features (e.g., a histogram); the pair having the largest similarity, which is also greater than the first threshold, is considered a match, and the candidate local region is selected as a local region strongly matching the corresponding previous frame.
  • for example, the first threshold may be set to 0.9.
  • the first threshold may be set by a person skilled in the art according to the actual situation.
  • all of the strongly matched local regions are then merged. There may be multiple ways of merging; for example, the union of the candidate local regions that strongly match any one of the previous frames (in this example, the previous frame #t-1, the first frame #1, and the intermediate frame #m) may be taken as the depth matching regions.
  • the present invention is not limited thereto, and other ways of merging strongly matched local regions may be set according to the actual situation.
  • in step S130, if no depth matching region exists between the candidate local regions of the current frame and any previous frame, step S170 is performed, determining "tracking lost". If it is determined from the result of the primary matching that at least one depth matching region exists, step S150 is performed for neighborhood growth.
  • the neighborhood growth may include: for each depth matching region, calculating a second similarity between each of all adjacent, other unmatched local regions of the depth matching region and the depth matching region; if the second similarity is greater than the second threshold, considering the unmatched local region as belonging to the target region and setting it as a grown region; performing the step of calculating the second similarity iteratively on the grown regions; and outputting the grown regions as the target tracking result.
  • the second threshold may be set according to the actual situation; for example, the second threshold may be set to 0.9. Of course, the present invention is not limited thereto, and the second threshold may be set by a person skilled in the art according to the actual situation.
  • FIG. 4 illustrates an example of neighborhood growth in a target tracking method in accordance with an embodiment of the present invention.
  • a base superpixel graph is first constructed using the depth matching regions between the current frame (Frame #89) and the previous frame (Frame #88); three solid arrows indicate depth matching regions. Then, the depth matching regions between the current frame (Frame #89) and an intermediate frame (for example, Frame #38) and the first frame (Frame #1) can be used (as indicated by the dashed arrows and the dash-dotted arrows, respectively).
  • the depth matching regions are considered to be local regions in the current frame that belong to the target to be tracked; for each depth matching region, the similarity of each of its adjacent, unmatched local regions to the depth matching region is calculated, as described above.
  • the methods for calculating the similarity may be various and are not repeated here. If the similarity is greater than a certain threshold, such as 0.9, the unmatched local region is also considered to belong to the target region.
  • the neighborhood growth results are the superpixels containing dots. The image containing a rectangular box corresponds to each tracking result, where a manually annotated rectangular box can be used for the first frame.
  • the step of calculating the second similarity is performed iteratively on the grown regions until the grown regions adjoin the other depth matching regions.
  • the target tracking area including the depth matching regions and the grown regions is obtained as the target tracking result of the current frame.
  • the local region model can then be updated with the target tracking results of the current frame.
  • FIG. 5 illustrates an example flow diagram of updating a local area model in a target tracking method in accordance with an embodiment of the present invention.
  • the first classifier and the second classifier are trained to perform online learning using the information of the initially specified target to be tracked and the target tracking result of the current frame, respectively.
  • the learning results are combined, the combined result is used to update the local area model, and then the updated local area model is used to perform the target tracking operation of the next frame.
  • although two classifiers are shown in FIG. 5, those skilled in the art will appreciate that more classifiers can be used to improve tracking accuracy, although tracking speed may be reduced.
  • learning may be performed using, for example, an online SVM (Support Vector Machine), RDA (Regularized Dual Averaging), or the Composite Objective Mirror Descent (COMID) algorithm.
  • when the updated local region model is used to obtain candidate local regions for the next frame, the two learned classifiers can be used to estimate the probability that each local region belongs to the tracked object.
  • various joint probability calculation methods well known in the art can be used to merge the learning results.
  • in order to better track non-rigid visual objects with occlusion, a target tracking method is proposed. First, local region online learning is performed based on previous target tracking results to update the local region model; then, primary matching of local regions is performed based on the local region model to obtain the depth matching regions; next, neighboring local regions are grown based on the depth matching regions to obtain the target tracking result, and the local region model is updated with the target tracking result for target tracking of the next frame.
  • during tracking, primary matching achieves the tracking of local regions with small deformation between two selected frames. After primary matching, the neighboring local regions around the primarily matched local regions are grown, thereby achieving the tracking of local regions with large deformation between the two selected frames.
  • FIG. 6 shows a schematic block diagram of a target tracking device in accordance with an embodiment of the present invention.
  • the target tracking device 600 may include: a primary matching unit 610 configured to receive an image frame sequence, determine, by using a local region model, whether at least one candidate local region exists in the current frame, and, if at least one candidate local region exists, perform primary matching between each of the at least one candidate local region and the corresponding local region in previous frames to obtain at least one depth matching region; a neighborhood growing unit 620 configured to perform neighborhood growth on the at least one depth matching region from the primary matching unit 610 to obtain a target tracking result; and a model updating unit 630 configured to update the local region model using the target tracking result from the neighborhood growing unit 620 and send the updated local region model to the primary matching unit 610.
  • the target tracking device can be implemented in various manners. Numerous embodiments of the devices and/or processes have been illustrated by the use of block diagrams, flowcharts, and/or examples. Insofar as such block diagrams, flowcharts, and/or examples contain one or more functions and/or operations, those skilled in the art will appreciate that each function and/or operation in such block diagrams, flowcharts, or examples can be implemented, individually and/or collectively, by various hardware, software, firmware, or virtually any combination thereof. In one embodiment, portions of the subject matter of the present disclosure may be implemented in an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA), a digital signal processor (DSP), or other integrated formats.
  • aspects of the embodiments disclosed herein may be implemented, in whole or in part, in an integrated circuit, as one or more computer programs running on one or more computers (e.g., as one or more programs running on one or more computer systems), as one or more programs running on one or more processors (e.g., as one or more programs running on one or more microprocessors), as firmware, or substantially as any combination of the above; and, in light of this disclosure, those skilled in the art will have the ability to design circuits and/or write software and/or firmware code.
  • examples of signal bearing media include, but are not limited to: recordable media such as floppy disks, hard disk drives, compact discs (CDs), digital versatile discs (DVDs), digital tape, computer memory, and the like; and transmission-type media such as digital and/or analog communication media (e.g., fiber optic cables, waveguides, wired communication links, wireless communication links, etc.).

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

A target tracking method and a target tracking device. The method includes: designating a target to be tracked in the first frame of an image frame sequence (S110); determining, by using a local region model, whether at least one candidate local region exists in the current frame (S120); if at least one candidate local region exists, performing primary matching between the candidate local regions and the corresponding local regions in previous frames (S130); and, if it is determined from the result of the primary matching that at least one depth matching region exists in the current frame (S140), performing neighborhood growth on the at least one depth matching region to obtain a target tracking result (S150). With this method, target tracking of non-rigid objects can be achieved effectively in the presence of occlusion.

Description

Target tracking method and target tracking device
This application claims priority to Chinese Patent Application No. 201610892797.X, filed on October 13, 2016, the entire contents of which are incorporated herein by reference.
Technical Field
The present application relates generally to image processing and, in particular, to a target tracking method and a target tracking device applied, for example, to mobile devices, smart televisions, and other devices.
Background
In practice, there are various kinds of deformable non-rigid visual objects, whose visual appearance and shape may vary considerably. It is therefore difficult to achieve visual tracking of such non-rigid targets, especially in some complex environments.
Several technical solutions have been proposed for target tracking.
For example, Document 1 (CN103325126A) discloses a video target tracking method under scale change and occlusion. To solve the problem that the local binary pattern (LBP) tracking algorithm performs poorly under scale change and occlusion, it proposes an optimized target tracking method combining the LBP algorithm, the normalized moment of inertia (NMI) feature, and Kalman filtering.
For example, Document 2 (CN103413312A) discloses a video target tracking method based on neighbourhood components analysis and scale-space theory. With this target tracking method, the position and size of the target can be located more accurately, illumination and color changes of the target can be adapted to more effectively, and target occlusion is handled robustly.
For example, Document 3 (CN104318590A) discloses a target tracking method in video, including: presetting the target position of the current frame in the video and initializing a sparse matrix; acquiring high-dimensional feature vectors of samples around the target position and projecting the high-dimensional vectors to low-dimensional vectors using the sparse matrix; and updating the classifier parameters to obtain a new classifier for the next frame, selecting the sample position with the largest classification value as the target position of the next frame. This method increases the speed of the target tracking method.
As another example, Document 4 (AU20140253687) discloses a system and method for tracking targets, for detecting and tracking objects in a sequence of images. First, a first set of parameters for a parameterized shape model is generated based on a first image. A second set of parameters is then generated by fitting the parameterized shape model to an object in a second image of the plurality of images. Especially in the case of occlusion, it is not easy to realize a parameterized shape model for an arbitrary non-rigid object.
However, all of the above technical solutions have drawbacks. For example, Document 1 mainly addresses video target tracking under scale change and occlusion and cannot be used to handle non-rigid targets; Document 2 addresses illumination and color changes of the target in target tracking and improves the tracked position, but is not suited to non-rigid targets, especially with occlusion; Document 3 aims at improving tracking speed and is likewise unsuited to non-rigid targets; Document 4 cannot easily realize a parameterized shape model for an arbitrary non-rigid object, especially in the case of occlusion.
Summary
The present invention proposes a target tracking method and a target tracking device, so that target tracking of non-rigid objects can be performed in the presence of occlusion. According to one aspect of the present invention, a target tracking method is proposed, which may include: designating a target to be tracked in the first frame of an image frame sequence; determining, by using a local region model, whether at least one candidate local region exists in the current frame; and, if at least one candidate local region exists, performing primary matching between the candidate local regions and the corresponding local regions in previous frames;
if it is determined from the result of the primary matching that at least one depth matching region exists in the current frame, performing neighborhood growth on the at least one depth matching region to obtain a target tracking result.
In an embodiment, the target tracking method may further include updating the local region model using the target tracking result.
In an embodiment, updating the local region model using the target tracking result may include: training a plurality of classifiers online using the initially designated information of the target to be tracked and the target tracking result of the current frame, respectively; merging the learning results; and updating the local region model with the merged learning results, so that the updated local region model is used for the target tracking operation of the next frame.
In an embodiment, determining whether at least one candidate local region exists in the current frame may include: performing superpixel-based image segmentation on the current frame to obtain a plurality of local regions; and determining, by using the local region model, whether the plurality of local regions include a candidate local region that meets a predetermined condition.
In an embodiment, determining whether there is a candidate local region that meets the predetermined condition may include: estimating the probability that each of the plurality of local regions belongs to the tracked target by using the following formula
$p = \sum_{i=1}^{N} w_i p_i$
where $w_i$ is a coefficient, $p_i$ is the probability, estimated by the i-th learning method, that the corresponding local region belongs to the tracked target, $p$ is the integrated probability that the corresponding local region belongs to the tracked target, and $N$ is a natural number greater than 2;
determining a corresponding local region whose integrated probability $p$ is greater than a third threshold as a candidate local region; and, if there is no corresponding local region whose integrated probability $p$ is greater than the third threshold, determining that the target is lost, wherein
$\sum_{i=1}^{N} w_i = 1$.
In an embodiment, the primary matching may include: calculating a first similarity between each of the candidate local regions and the corresponding local region of a previous frame, and, if the first similarity is greater than a first threshold, setting the current candidate local region as a depth matching region.
In an embodiment, the neighborhood growth may include: for each depth matching region, calculating a second similarity between each of all adjacent, other unmatched local regions of the depth matching region and the depth matching region; if the second similarity is greater than a second threshold, considering the unmatched local region as belonging to the target region and setting it as a grown region; and outputting the grown regions as the target tracking result.
In an embodiment, when there are multiple depth matching regions, the step of calculating the second similarity is performed iteratively on the grown neighborhood regions until the grown regions adjoin the other depth matching regions.
According to another aspect of the present invention, a target tracking device is provided, which may include: a primary matching unit configured to receive an image frame sequence, determine, by using a local region model, whether at least one candidate local region exists in the current frame, and, if at least one candidate local region exists, perform primary matching between each of the at least one candidate local region and the corresponding local region in previous frames to obtain at least one depth matching region; a neighborhood growing unit configured to perform neighborhood growth on the at least one depth matching region from the primary matching unit to obtain a target tracking result; and a model updating unit configured to update the local region model using the target tracking result from the neighborhood growing unit and send the updated local region model to the primary matching unit.
According to embodiments of the present invention, primary matching is used to obtain the depth matching regions between the target in the current frame and in previous frames, and neighborhood growth is performed on the depth matching regions to obtain the target tracking result. In addition, an integrated learning method is used to update the local region model, and the updated local region model is used to obtain at least one candidate local region. With the technical solutions according to embodiments of the present invention, target tracking of non-rigid objects can be achieved effectively in the presence of occlusion.
Brief Description of the Drawings
The features and advantages of embodiments of the present invention will be understood more clearly with reference to the accompanying drawings, which are schematic and should not be construed as limiting the present invention in any way. In the drawings:
FIG. 1 shows a flowchart of a target tracking method according to an embodiment of the present invention;
FIG. 2 shows an example flowchart of determining candidate local regions in a target tracking method according to an embodiment of the present invention;
FIG. 3 shows an example flowchart of performing primary matching in a target tracking method according to an embodiment of the present invention;
FIG. 4 shows an example of neighborhood growth in a target tracking method according to an embodiment of the present invention;
FIG. 5 shows an example flowchart of updating the local region model in a target tracking method according to an embodiment of the present invention; and
FIG. 6 shows a schematic block diagram of a target tracking device according to an embodiment of the present invention.
Detailed Description
According to embodiments of the present invention, a target tracking method and a target tracking device are provided, in which a technique called "recurrent graph tracking" is utilized: primary matching is used to obtain the depth matching regions between the target in the current frame and in previous frames, and neighborhood growth is performed on the depth matching regions to obtain the target tracking result. In addition, an online learning method, for example, is used to update the local region model, and the updated local region model is used to obtain at least one candidate local region. With the technical solutions according to embodiments of the present invention, target tracking of non-rigid objects can be achieved effectively in the presence of occlusion.
To make the objects, technical solutions, and advantages of the present invention clearer, embodiments of the present invention are described in further detail below with reference to specific embodiments and the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention without creative effort fall within the protection scope of the present invention.
FIG. 1 shows a flowchart of a target tracking method according to an embodiment of the present invention. It should be noted that the numbering of the steps in the following methods serves only to denote the steps for description and should not be regarded as indicating their order of execution. Unless explicitly stated, the methods need not be performed exactly in the order shown; similarly, multiple blocks may be executed in parallel rather than sequentially. It should also be understood that the methods may likewise be implemented on a variety of devices.
As shown in FIG. 1, a video target tracking method 100 according to an embodiment of the present invention may include the following steps.
In step S110, the target to be tracked is designated in a previous frame of the image frame sequence. The "previous frame" here may be the first frame of the image frame sequence, the frame immediately preceding the current frame, or one or more intermediate frames. Furthermore, the "previous frame" may include at least one frame.
In step S120, a local region model is used to determine whether at least one candidate local region exists in the current frame. If a candidate local region exists, the method 100 proceeds to step S130. If no candidate local region exists, the method 100 proceeds to step S170 and determines "target lost".
In step S130, each of the at least one candidate local region is primarily matched with the corresponding local region in the previous frames.
Then, in step S140, it is determined from the result of the primary matching whether at least one depth matching region exists in the current frame. If so, the method 100 proceeds to step S150; in step S150, neighborhood growth is performed on the at least one depth matching region to obtain a target tracking result. If not, the method 100 proceeds to step S170 and determines "target lost".
FIG. 2 shows an example flowchart of determining candidate local regions in a target tracking method according to an embodiment of the present invention; FIG. 3 shows an example flowchart of performing primary matching in a target tracking method according to an embodiment of the present invention; FIG. 4 shows an example of neighborhood growth in a target tracking method according to an embodiment of the present invention; and FIG. 5 shows an example flowchart of updating the local region model in a target tracking method according to an embodiment of the present invention. The target tracking method according to embodiments of the present invention is described in detail below with reference to FIGS. 1-5.
As shown in FIG. 2, according to an embodiment of the present invention, step S120 of determining candidate local regions may include the following steps. First, in sub-step S221, superpixel-based image segmentation is performed on the current frame #t in the image frame sequence to obtain a plurality of local regions. For example, a superpixel-based segmentation method such as Simple Linear Iterative Clustering (SLIC) may be used; a minimal sketch follows below.
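The following is a minimal sketch of the superpixel segmentation of sub-step S221, using the SLIC implementation from scikit-image; the file name and the `n_segments`/`compactness` values are illustrative assumptions, not parameters specified by the patent:

```python
# Sub-step S221 (sketch): segment the current frame #t into superpixels,
# each of which serves as one local region. Values below are illustrative.
import numpy as np
from skimage.io import imread
from skimage.segmentation import slic

frame = imread("frame_t.png")  # hypothetical current frame #t, shape (H, W, 3)
labels = slic(frame, n_segments=300, compactness=10.0, start_label=0)

# Collect the pixel indices of each local region (superpixel).
local_regions = [np.flatnonzero(labels.ravel() == k) for k in np.unique(labels)]
```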
Next, in sub-step S223, the local region model is used to determine whether the plurality of local regions include a candidate local region that meets a predetermined condition. If it is determined that such a candidate local region exists among the plurality of local regions, step S130 in FIG. 1 is performed; otherwise, step S170 in FIG. 1 is performed. In a preferred embodiment, determining whether there is a candidate local region that meets the predetermined condition may include: computing features for each local region, and estimating, with each classifier learned online, the probability that the local region belongs to the target. Then, the probability that each local region belongs to the tracked target is estimated using the following formula:
$p = \sum_{i=1}^{N} w_i p_i$
where $w_i$ is a coefficient, $p_i$ is the probability, estimated by the i-th classifier, that the corresponding local region belongs to the tracked target, $p$ is the integrated probability that the corresponding local region belongs to the tracked target, and $N$ is a natural number greater than 2. Preferably,
$\sum_{i=1}^{N} w_i = 1$.
A corresponding local region whose integrated probability $p$ is greater than a third threshold may be determined as a candidate local region. The third threshold may be set according to the actual situation, for example, in the range of 0.6 to 0.8. Of course, those skilled in the art will appreciate that the present invention is not limited thereto. A sketch of this integration and selection step is given below.
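As an illustration only, the integration and selection step might be sketched as follows; the feature extraction and the classifier objects (anything exposing `predict_proba`, as in scikit-learn) are assumptions, and only the weighted sum and the third threshold come from the text above:

```python
# Sub-step S223 (sketch): integrate N classifier outputs per local region as
# p = sum_i w_i * p_i and keep regions with p above the third threshold.
import numpy as np

def integrated_probability(features, classifiers, weights):
    # p_i: probability, per classifier, that the region belongs to the target.
    p_i = np.array([clf.predict_proba(features.reshape(1, -1))[0, 1]
                    for clf in classifiers])
    return float(np.dot(weights, p_i))  # p = sum_i w_i * p_i

def select_candidates(region_features, classifiers, weights, third_threshold=0.7):
    # An empty result corresponds to "target lost" (step S170).
    return [idx for idx, f in enumerate(region_features)
            if integrated_probability(np.asarray(f), classifiers, weights) > third_threshold]
```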
According to embodiments of the present invention, a local region model is created for the target to be tracked. The local region model reflects the mapping between the features of each local region and the classifiers to be used, and the classifiers are used to determine the probability that the corresponding local region belongs to the target to be tracked. With the local region features, the probability that a local region belongs to the target to be tracked can be obtained from the features of that local region. The mapping in the local region model can be updated using the target tracking result of the previous frame, continuously updating the coefficients of the classifiers; this operation is described in detail below.
In sub-step S223, if it is determined that at least one candidate local region exists, step S130 in FIG. 1 is performed. FIG. 3 shows one example of performing primary matching. According to a preferred embodiment, the previous frames may be set to the frame #t-1 immediately preceding the current frame #t and the first frame #1 of the image frame sequence, and may also include one or more intermediate frames #m, where m is a natural number greater than 1 and less than t-1. Those skilled in the art will appreciate that using intermediate frames for primary matching can further improve matching accuracy, but also increases the amount of computation and affects tracking speed. Those skilled in the art may choose whether and how many intermediate frames to use according to the actual situation. In the example of FIG. 3, one intermediate frame #m is used for primary matching.
As shown in FIG. 3, the first similarity between each of the candidate local regions of the current frame #t obtained in step S223 and the corresponding local regions of the first frame #1, the intermediate frame #m, and the previous frame #t-1 is first calculated; if the first similarity is greater than a first threshold, the current candidate local region is set as a depth matching region. Specifically, the current frame #t is matched with the previous frame #t-1 and the first frame #1, respectively, to obtain two pairwise matching sequences. The current frame #t may also be matched with one or more intermediate frames #m to obtain corresponding pairwise matching sequences. The intermediate frames may be selected randomly. The similarity may be calculated from one or several features (for example, a histogram); the pair with the largest similarity, which is also greater than the first threshold, is considered a match, and the candidate local region is selected as a local region strongly matching the corresponding previous frame. For example, the first threshold may be set to 0.9; of course, the present invention is not limited thereto, and the first threshold may be set by those skilled in the art according to the actual situation. Then, all of the strongly matched local regions obtained are merged. There may be multiple ways of merging; for example, the union of the candidate local regions that strongly match any one of the previous frames (in this example, the previous frame #t-1, the first frame #1, and the intermediate frame #m) may be taken as the depth matching regions. Of course, the present invention is not limited thereto, and other ways of merging strongly matched local regions may be set according to the actual situation. A sketch of this matching step is given below.
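A sketch of the primary matching, under the assumption that each local region is described by a normalized color histogram and that similarity is measured by histogram intersection (one possible choice; the patent only mentions "a histogram" as an example feature):

```python
# Step S130 (sketch): a candidate region becomes a depth matching region when
# its best similarity against the corresponding regions of some previous frame
# (#t-1, #1, and optionally #m) exceeds the first threshold (e.g. 0.9).
import numpy as np

def hist_similarity(h1, h2):
    return float(np.minimum(h1, h2).sum())  # histogram intersection, in [0, 1]

def primary_match(candidate_hists, previous_frames_hists, first_threshold=0.9):
    depth_matching = set()  # indices into candidate_hists
    for i, h in enumerate(candidate_hists):
        for frame_hists in previous_frames_hists:
            sims = [hist_similarity(h, h_prev) for h_prev in frame_hists]
            if sims and max(sims) > first_threshold:  # strong match in this frame
                depth_matching.add(i)                 # union over all previous frames
    return depth_matching
```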
In step S130, if no depth matching region exists between the candidate local regions of the current frame and any of the previous frames, step S170 is performed, determining "tracking lost". If it is determined from the result of the primary matching that at least one depth matching region exists, step S150 is performed for neighborhood growth. Preferably, the neighborhood growth may include: for each depth matching region, calculating a second similarity between each of all adjacent, other unmatched local regions of the depth matching region and the depth matching region; if the second similarity is greater than a second threshold, considering the unmatched local region as belonging to the target region and setting it as a grown region; performing the step of calculating the second similarity iteratively on the grown regions; and outputting the grown regions as the target tracking result. The second threshold may be set according to the actual situation; for example, the second threshold may be set to 0.9. Of course, the present invention is not limited thereto, and the second threshold may be set by those skilled in the art according to the actual situation.
FIG. 4 shows an example of neighborhood growth in a target tracking method according to an embodiment of the present invention. As shown in FIG. 4, for example, a base superpixel graph is first constructed using the depth matching regions between the current frame (Frame #89) and the previous frame (Frame #88); the three solid arrows indicate depth matching regions. Then, the depth matching regions between the current frame (Frame #89) and an intermediate frame (for example, Frame #38) and the first frame (Frame #1) may be used (as indicated by the dashed arrows and the dash-dotted arrows, respectively). The depth matching regions are considered to be local regions in the current frame that belong to the target to be tracked; for each depth matching region, the similarity of each of its adjacent, unmatched local regions to the depth matching region is calculated. As mentioned above, there are various methods of calculating similarity, which are not repeated here. If the similarity is greater than a certain threshold, such as 0.9, the unmatched local region is considered to also belong to the target region. The neighborhood growth results are the superpixels containing dots. The image containing a rectangular box corresponds to each tracking result, where a manually annotated rectangular box can be used for the first frame. When there are multiple depth matching regions, the step of calculating the second similarity is performed iteratively on the grown regions until the grown regions adjoin the other depth matching regions. A sketch of this growth procedure follows below.
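As an illustration only, the growth loop might be sketched as follows, with the region adjacency map and the similarity function assumed to be supplied by the caller:

```python
# Step S150 (sketch): absorb adjacent unmatched regions whose similarity to a
# matched/grown region exceeds the second threshold (e.g. 0.9), iterating on
# newly grown regions until no neighbor qualifies.
from collections import deque

def grow_neighborhood(depth_matching, adjacency, similarity, second_threshold=0.9):
    target = set(depth_matching)
    frontier = deque(depth_matching)
    while frontier:
        region = frontier.popleft()
        for neighbor in adjacency[region]:
            if neighbor not in target and similarity(region, neighbor) > second_threshold:
                target.add(neighbor)       # grown region, now part of the target
                frontier.append(neighbor)  # iterate the similarity step on it
    return target  # depth matching regions plus grown regions
```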
Through neighborhood growth, a target tracking area including the depth matching regions and the grown regions is obtained as the target tracking result of the current frame. The local region model can then be updated using the target tracking result of the current frame.
FIG. 5 shows an example flowchart of updating the local region model in a target tracking method according to an embodiment of the present invention. As shown in FIG. 5, the initially designated information of the target to be tracked and the target tracking result of the current frame are used, respectively, to train a first classifier and a second classifier for online learning. The learning results are merged, the merged result is used to update the local region model, and the updated local region model is then used to perform the target tracking operation of the next frame.
Although two classifiers are shown in FIG. 5, those skilled in the art will appreciate that more classifiers can be used to improve tracking accuracy, although tracking speed may be reduced. In addition, learning may be performed using, for example, an online SVM (support vector machine), RDA (regularized dual averaging), or the Composite Objective Mirror Descent (COMID) algorithm. When the updated local region model is used to obtain candidate local regions for the next frame, the two learned classifiers can be used to estimate the probability that each local region belongs to the tracked object. As for merging the learning results, various joint probability calculation methods well known in the art can be used. A sketch of this update step is given below.
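As a sketch only: the patent names online SVM, RDA, and COMID as possible learners, none of which ships with scikit-learn under those names; `SGDClassifier` with logistic loss is used below merely as a readily available online learner with probabilistic output (hinge loss would give a linear SVM but no direct `predict_proba`):

```python
# FIG. 5 (sketch): incrementally retrain the classifiers on the current
# frame's tracking result (label 1 = region in the result, 0 = outside).
import numpy as np
from sklearn.linear_model import SGDClassifier

def make_online_classifiers(n_classifiers=2):
    # Stand-ins for the online learners; differing seeds decorrelate them.
    return [SGDClassifier(loss="log_loss", random_state=i)
            for i in range(n_classifiers)]

def update_local_region_model(classifiers, region_features, region_labels):
    X = np.asarray(region_features)
    y = np.asarray(region_labels)
    for clf in classifiers:
        clf.partial_fit(X, y, classes=[0, 1])  # online update, one frame at a time
    return classifiers
```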
According to embodiments of the present invention, in order to better track non-rigid visual objects with occlusion, a target tracking method is proposed. First, local region online learning is performed based on previous target tracking results to update the local region model; then, primary matching of local regions is performed based on the local region model to obtain the depth matching regions; next, neighboring local regions are grown based on the depth matching regions to obtain the target tracking result, and the local region model is updated with the target tracking result for target tracking of the next frame. With the above recursive target tracking method, even if the appearance and structure of the tracked target vary considerably, since the local region model is updated in real time through local region online learning, primary matching achieves, during tracking, the tracking of local regions with small deformation between two selected frames. After primary matching, the neighboring local regions around the primarily matched local regions are grown, thereby achieving the tracking of local regions with large deformation between the two selected frames. A sketch tying these steps together is given below.
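A high-level sketch composing the illustrative helpers above; `describe_regions` (per-superpixel features, histograms, and adjacency) is a hypothetical helper, and the whole loop is an assumption about one way to arrange the steps, not an API defined by the patent:

```python
# FIG. 1 (sketch): the recursive loop over steps S110-S170, reusing
# select_candidates, primary_match, grow_neighborhood, hist_similarity, and
# update_local_region_model from the sketches above.
def track_sequence(frames, initial_target, classifiers, weights):
    feats0, hists0, _ = describe_regions(frames[0])      # hypothetical helper
    prev_hists = [[hists0[i] for i in initial_target]]   # first frame #1 (step S110)
    results = [set(initial_target)]
    for frame in frames[1:]:
        feats, hists, adjacency = describe_regions(frame)
        candidates = select_candidates(feats, classifiers, weights)        # step S120
        if not candidates:
            return results, "target lost"                                  # step S170
        matched = primary_match([hists[i] for i in candidates],
                                [prev_hists[0], prev_hists[-1]])           # frames #1, #t-1
        depth = {candidates[i] for i in matched}                           # step S140
        if not depth:
            return results, "target lost"
        target = grow_neighborhood(depth, adjacency,
                                   lambda a, b: hist_similarity(hists[a], hists[b]))  # step S150
        results.append(target)
        prev_hists.append([hists[i] for i in target])
        update_local_region_model(classifiers, feats,
                                  [1 if i in target else 0 for i in range(len(feats))])
    return results, "ok"
```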
According to an embodiment of the present invention, a target tracking device implementing the target tracking method is also provided. FIG. 6 shows a schematic block diagram of a target tracking device according to an embodiment of the present invention. As shown in FIG. 6, the target tracking device 600 may include: a primary matching unit 610 configured to receive an image frame sequence, determine, by using a local region model, whether at least one candidate local region exists in the current frame, and, if at least one candidate local region exists, perform primary matching between each of the at least one candidate local region and the corresponding local region in previous frames to obtain at least one depth matching region; a neighborhood growing unit 620 configured to perform neighborhood growth on the at least one depth matching region from the primary matching unit 610 to obtain a target tracking result; and a model updating unit 630 configured to update the local region model using the target tracking result from the neighborhood growing unit 620 and send the updated local region model to the primary matching unit 610.
In the above embodiments, those skilled in the art should understand that the target tracking device according to embodiments of the present invention can be implemented in various ways. Numerous embodiments of the devices and/or processes have been set forth through the use of block diagrams, flowcharts, and/or examples. Insofar as such block diagrams, flowcharts, and/or examples contain one or more functions and/or operations, those skilled in the art will understand that each function and/or operation in such block diagrams, flowcharts, or examples can be implemented, individually and/or collectively, by a wide range of hardware, software, firmware, or virtually any combination thereof. In one embodiment, several portions of the subject matter described in this disclosure may be implemented via application-specific integrated circuits (ASICs), field-programmable gate arrays (FPGAs), digital signal processors (DSPs), or other integrated formats. However, those skilled in the art will recognize that some aspects of the embodiments disclosed herein, in whole or in part, can equally be implemented in integrated circuits, as one or more computer programs running on one or more computers (e.g., as one or more programs running on one or more computer systems), as one or more programs running on one or more processors (e.g., as one or more programs running on one or more microprocessors), as firmware, or as virtually any combination thereof; and, in light of this disclosure, those skilled in the art will have the ability to design circuits and/or write software and/or firmware code. Furthermore, those skilled in the art will recognize that the mechanisms of the subject matter described in this disclosure can be distributed as program products in a variety of forms, and that an illustrative embodiment applies regardless of the particular type of signal bearing medium actually used to carry out the distribution. Examples of signal bearing media include, but are not limited to: recordable-type media such as floppy disks, hard disk drives, compact discs (CDs), digital versatile discs (DVDs), digital tape, computer memory, etc.; and transmission-type media such as digital and/or analog communication media (e.g., fiber optic cables, waveguides, wired communication links, wireless communication links, etc.).
The specific embodiments described above further explain the objects, technical solutions, and beneficial effects of the present invention in detail. It should be understood that the above are merely specific embodiments of the present invention and are not intended to limit the present invention. Any modification, equivalent replacement, improvement, etc. made within the spirit and principles of the present invention shall fall within the protection scope of the present invention.

Claims (16)

  1. A target tracking method, comprising:
    designating a target to be tracked in the first frame of an image frame sequence;
    determining, by using a local region model, whether at least one candidate local region exists in the current frame;
    if at least one candidate local region exists, performing primary matching between the candidate local regions and the corresponding local regions in previous frames; and
    if it is determined from the result of the primary matching that at least one depth matching region exists in the current frame, performing neighborhood growth on the at least one depth matching region to obtain a target tracking result.
  2. The method according to claim 1, further comprising updating the local region model using the target tracking result.
  3. The method according to claim 2, wherein updating the local region model using the target tracking result comprises:
    training a plurality of classifiers online using the initially designated information of the target to be tracked and the target tracking result of the current frame, respectively;
    merging the learning results; and
    updating the local region model with the merged learning results, so that the updated local region model is used for the target tracking operation of the next frame.
  4. The method according to claim 1, wherein determining whether at least one candidate local region exists in the current frame comprises:
    performing superpixel-based image segmentation on the current frame to obtain a plurality of local regions; and
    determining, by using the local region model, whether the plurality of local regions include a candidate local region that meets a predetermined condition.
  5. The method according to claim 4, wherein determining whether there is a candidate local region that meets the predetermined condition comprises: estimating the probability that each of the plurality of local regions belongs to the tracked target by using the following formula
    $p = \sum_{i=1}^{N} w_i p_i$
    where $w_i$ is a coefficient, $p_i$ is the probability, estimated by the i-th learning method, that the corresponding local region belongs to the tracked target, $p$ is the integrated probability that the corresponding local region belongs to the tracked target, and $N$ is a natural number greater than 2;
    determining a corresponding local region whose integrated probability $p$ is greater than a third threshold as a candidate local region; and
    if there is no corresponding local region whose integrated probability $p$ is greater than the third threshold, determining that the target is lost.
  6. The method according to claim 5, wherein
    $\sum_{i=1}^{N} w_i = 1$.
  7. The method according to claim 1, wherein the primary matching comprises: calculating a first similarity between each of the candidate local regions and the corresponding local region of a previous frame, and, if the first similarity is greater than a first threshold, setting the current candidate local region as a depth matching region.
  8. The method according to claim 1, wherein the neighborhood growth comprises:
    for each depth matching region, calculating a second similarity between each of all adjacent, other unmatched local regions of the depth matching region and the depth matching region, and, if the second similarity is greater than a second threshold, considering the unmatched local region as belonging to the target region and setting the unmatched local region as a grown region; and
    outputting the grown regions as the target tracking result.
  9. The method according to claim 8, wherein, when there are multiple depth matching regions, the step of calculating the second similarity is performed iteratively on the grown neighborhood regions until the grown regions adjoin the other depth matching regions.
  10. A target tracking device, comprising:
    a primary matching unit configured to receive an image frame sequence, determine, by using a local region model, whether at least one candidate local region exists in the current frame, and, if at least one candidate local region exists, perform primary matching between each of the at least one candidate local region and the corresponding local region in previous frames to obtain at least one depth matching region;
    a neighborhood growing unit configured to perform neighborhood growth on the at least one depth matching region from the primary matching unit to obtain a target tracking result; and
    a model updating unit configured to update the local region model using the target tracking result from the neighborhood growing unit and send the updated local region model to the primary matching unit.
  11. The device according to claim 10, wherein the primary matching unit is further configured to:
    perform superpixel-based image segmentation on the current frame to obtain a plurality of local regions; and
    determine, by using the local region model, whether the plurality of local regions include a candidate local region that meets a predetermined condition.
  12. The device according to claim 11, wherein the primary matching unit is further configured to: estimate the probability that each of the plurality of local regions belongs to the tracked target by using the following formula
    $p = \sum_{i=1}^{N} w_i p_i$
    where $w_i$ is a coefficient, $p_i$ is the probability, estimated by the i-th learning method, that the corresponding local region belongs to the tracked target, $p$ is the integrated probability that the corresponding local region belongs to the tracked target, and $N$ is a natural number greater than 2;
    determine a corresponding local region whose integrated probability $p$ is greater than a third threshold as a candidate local region; and
    if there is no corresponding local region whose integrated probability $p$ is greater than the third threshold, determine that the target is lost.
  13. The device according to claim 10, wherein the primary matching unit is further configured to: calculate a first similarity between each of the candidate local regions and the corresponding local region of a previous frame, and, if the first similarity is greater than a first threshold, set the current candidate local region as a depth matching region.
  14. The device according to claim 10, wherein the neighborhood growing unit is further configured to:
    for each depth matching region, calculate a second similarity between each of all adjacent, other unmatched local regions of the depth matching region and the depth matching region, and, if the second similarity is greater than a second threshold, consider the unmatched local region as belonging to the target region and set the unmatched local region as a grown region; and
    output the grown regions as the target tracking result.
  15. The device according to claim 14, wherein the neighborhood growing unit is further configured to: when there are multiple depth matching regions, perform the calculation of the second similarity iteratively on the grown neighborhood regions until the grown regions adjoin the other depth matching regions.
  16. The device according to claim 10, wherein the model updating unit is further configured to:
    train a plurality of online classifiers using the initially designated information of the target to be tracked and the target tracking result of the current frame, respectively;
    merge the learning results; and
    update the local region model with the merged learning results, so that the updated local region model is used for the target tracking operation of the next frame.
PCT/CN2017/105652 2016-10-13 2017-10-11 Target tracking method and target tracking device WO2018068718A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201610892797.X 2016-10-13
CN201610892797.XA CN107945208A (zh) 2016-10-13 2016-10-13 Target tracking method and target tracking device

Publications (1)

Publication Number Publication Date
WO2018068718A1 (zh) 2018-04-19

Family

ID=61905170

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/105652 WO2018068718A1 (zh) 2016-10-13 2017-10-11 目标跟踪方法和目标跟踪设备

Country Status (2)

Country Link
CN (1) CN107945208A (zh)
WO (1) WO2018068718A1 (zh)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109584269A (zh) * 2018-10-17 2019-04-05 龙马智芯(珠海横琴)科技有限公司 Target tracking method
CN109544598B (zh) * 2018-11-21 2021-09-24 电子科技大学 Target tracking method and device, and readable storage medium

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102129690A (zh) * 2011-03-21 2011-07-20 西安理工大学 Method for tracking human motion targets resistant to environmental interference
US20160071287A1 (en) 2013-04-19 2016-03-10 Commonwealth Scientific And Industrial Research Organisation System and method of tracking an object
CN103325126A (zh) * 2013-07-09 2013-09-25 中国石油大学(华东) Video target tracking method under scale change and occlusion
CN103413312A (zh) * 2013-08-19 2013-11-27 华北电力大学 Video target tracking method based on neighbourhood components analysis and scale-space theory
CN104318590A (zh) * 2014-11-10 2015-01-28 成都信升斯科技有限公司 Target tracking method in video

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112489085A (zh) * 2020-12-11 2021-03-12 北京澎思科技有限公司 Target tracking method, target tracking apparatus, electronic device, and storage medium
CN112711997A (zh) * 2020-12-24 2021-04-27 上海寒武纪信息科技有限公司 Method and device for processing data streams
CN113610895A (zh) * 2021-08-06 2021-11-05 烟台艾睿光电科技有限公司 Target tracking method and apparatus, electronic device, and readable storage medium
CN113676775A (zh) * 2021-08-27 2021-11-19 苏州因塞德信息科技有限公司 Method for advertisement placement in videos and games using artificial intelligence
CN114092515A (zh) * 2021-11-08 2022-02-25 国汽智控(北京)科技有限公司 Target tracking and detection method, apparatus, device, and medium for obstacle occlusion
CN114092515B (zh) * 2021-11-08 2024-03-05 国汽智控(北京)科技有限公司 Target tracking and detection method, apparatus, device, and medium for obstacle occlusion

Also Published As

Publication number Publication date
CN107945208A (zh) 2018-04-20

Similar Documents

Publication Publication Date Title
WO2018068718A1 (zh) 2018-04-19 Target tracking method and target tracking device
US10902243B2 (en) Vision based target tracking that distinguishes facial feature targets
WO2019228211A1 (zh) 基于车道线的智能驾驶控制方法和装置、电子设备
US9613298B2 (en) Tracking using sensor data
Tissainayagam et al. Object tracking in image sequences using point features
KR102153607B1 (ko) 영상에서의 전경 검출 장치 및 방법
CN111860414B (zh) 一种基于多特征融合检测Deepfake视频方法
US20140185924A1 (en) Face Alignment by Explicit Shape Regression
US20130216127A1 (en) Image segmentation using reduced foreground training data
JP2015167017A (ja) マルチタスク学習を使用したラベル付けされていないビデオのための自己学習オブジェクト検出器
US11720745B2 (en) Detecting occlusion of digital ink
WO2013012091A1 (ja) 情報処理装置、物体追跡方法およびプログラム記憶媒体
EP3343507A1 (en) Producing a segmented image of a scene
Wang et al. An active contour model based on local pre-piecewise fitting bias corrections for fast and accurate segmentation
Xiao et al. An enhanced adaptive coupled-layer LGTracker++
Ait Abdelali et al. An adaptive object tracking using Kalman filter and probability product kernel
Zhou et al. Object Tracking Based on Camshift with Multi-feature Fusion.
JP7392488B2 (ja) 遺留物誤検出の認識方法、装置及び画像処理装置
CN117115117B (zh) 基于小样本下的病理图像识别方法、电子设备及存储介质
EP3343504B1 (en) Producing a segmented image using markov random field optimization
CN103065302B (zh) 一种基于离群数据挖掘的图像显著性检测方法
CN107704864B (zh) 基于图像对象性语义检测的显著目标检测方法
Sasi et al. Shadow detection and removal from real images: state of art
Ghosh et al. Robust simultaneous registration and segmentation with sparse error reconstruction
US20210279506A1 (en) Systems, methods, and devices for head pose determination

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17860054

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 17860054

Country of ref document: EP

Kind code of ref document: A1