WO2017185688A1 - Online target tracking method and apparatus - Google Patents
Online target tracking method and apparatus
- Publication number
- WO2017185688A1 (PCT/CN2016/103141)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- target
- predicted
- prediction
- trajectory
- video frame
- Prior art date
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
Definitions
- The present invention relates to the field of target tracking, and in particular to an online target tracking method and apparatus.
- Online target tracking is a hot research topic in computer vision. It is of great significance for high-level visual research such as motion recognition, behavior analysis, and scene understanding, and it has broad application prospects in video surveillance, intelligent robots, human-computer interaction, and other fields. In complex environments, however, high-frequency occlusions between targets and static background objects or between targets themselves, together with large numbers of false observation objects, remain difficult problems for online multi-target tracking.
- To at least partially solve the above problems, the present invention proposes an online target tracking method. The method includes: performing target detection on the current video frame to obtain observation objects; acquiring a fuzzy membership matrix between the set of observation objects and the set of predicted targets, where the set of predicted targets is the set of predicted target states obtained by prediction using at least the set of target states of the previous video frame; associating the observation objects with the predicted targets according to the fuzzy membership matrix to obtain valid target trajectories; performing trajectory management on unassociated observation objects and unassociated predicted targets to establish temporary target trajectories and delete invalid targets; and filtering all valid target trajectories and temporary target trajectories to obtain the set of target states of the current video frame, which is then used for prediction.
- Acquiring the fuzzy membership matrix between the set of observation objects and the set of predicted targets includes: taking the set of predicted targets as cluster centers and acquiring the first membership degree between each predicted target in the set of predicted targets and each observation object in the set of observation objects; taking the set of observation objects as cluster centers and acquiring the second membership degree between each observation object in the set of observation objects and each predicted target in the set of predicted targets; and acquiring the fuzzy membership matrix using the first membership degree and the second membership degree.
- The set of predicted targets is O = {o1, ..., ol} and the set of observation objects is Z = {z1, ..., zr}. Obtaining the first membership degree u_ik takes the set of predicted targets as cluster centers, with fuzzy exponent m = 2, where g(o_i, z_k) denotes the feature distance between predicted target o_i and observation object z_k; obtaining the second membership degree u′_ki takes the set of observation objects as cluster centers, likewise with m = 2.
- Acquiring the fuzzy membership matrix using the first membership degree and the second membership degree includes: using the first membership degree and the second membership degree to acquire the comprehensive membership degree s_ik = α × u_ik + (1 − α) × u′_ki (3) between each observation object in the set of observation objects and each predicted target in the set of predicted targets, where α is a constant coefficient and α ∈ [0, 1], and acquiring the fuzzy membership matrix S = [s_ik]l×r using the comprehensive membership degree s_ik.
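- The closed-form expressions for u_ik, u′_ki, and the two objective functions appear only as images in the published text. The sketch below is therefore a minimal illustration, assuming the standard Fuzzy C-Means closed form that the Lagrange multiplier method yields for m = 2; the normalization axes and the default α are choices of this sketch, not values fixed by the patent.

```python
import numpy as np

def fcm_memberships(dist, axis):
    # dist[i, k] = g(o_i, z_k): feature distances between l predicted targets
    # and r observation objects; standard FCM closed form with m = 2
    inv = 1.0 / np.maximum(dist, 1e-12) ** 2
    return inv / inv.sum(axis=axis, keepdims=True)

def membership_matrix(dist, alpha=0.5):
    # first membership u_ik: predicted targets as cluster centers,
    # each observation's memberships sum to 1 over the targets (axis 0)
    u = fcm_memberships(dist, axis=0)
    # second membership u'_ki: observation objects as cluster centers,
    # each target's memberships sum to 1 over the observations (axis 1)
    u_prime = fcm_memberships(dist, axis=1)
    # comprehensive membership, equation (3): s_ik = alpha*u_ik + (1-alpha)*u'_ki
    return alpha * u + (1.0 - alpha) * u_prime
```

- Combining the two normalization directions is what later prevents a lone outlier observation, or a lone predicted target, from receiving spuriously large memberships, as discussed for the constraints of equations (10) and (11) below.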
- f 1 ( ⁇ ) is the spatial distance feature similarity measure function
- f 2 ( ⁇ ) is the geometric size feature similarity measure function
- f 3 ( ⁇ ) is the motion direction feature similarity measure function
- f 4 ( ⁇ ) is the color
- f 5 ( ⁇ ) is a gradient direction feature similarity measure function, which is defined as follows:
- For the color feature similarity measure function f4(·), the target image consists of n pixels {x_i}, i = 1, ..., n, and may correspond to the predicted target o_i or the observation object z_k. The target image is split along its dividing line S_T into an upper half sub-block T1 and a lower half sub-block T2, and the gray levels of the target image are discretized into m levels.
- Here b(x_i) is the quantized value of the pixel at x_i; if the quantized value b(x_i) of the pixel at x_i corresponds to pixel level u, then δ[b(x_i) − u] takes 1, otherwise δ[b(x_i) − u] takes 0.
- ρ(·) denotes the Bhattacharyya coefficient.
- Associating the observation objects with the predicted targets according to the fuzzy membership matrix to obtain valid target trajectories includes: finding the maximum value s_pq among all unmarked elements in the fuzzy membership matrix S; marking all elements of the p-th row and all elements of the q-th column of S; judging whether the spatial distance feature similarity measure function f1(o_p, z_q) of predicted target o_p and observation object z_q is greater than a threshold constant β; if f1(o_p, z_q) > β, judging that predicted target o_p is correctly associated with observation object z_q, giving a valid target trajectory; and performing the above steps cyclically until all rows or all columns of the fuzzy membership matrix S are marked.
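- A minimal sketch of this greedy association loop follows; marking a row or column is emulated by overwriting it with -inf so it can never be selected again, and β and the input matrices are placeholders.

```python
import numpy as np

def associate(S, f1, beta):
    # greedy maximum-membership association on the comprehensive matrix S (l x r);
    # f1[p][q] is the spatial-distance similarity f1(o_p, z_q)
    S = S.astype(float).copy()
    pairs = []
    for _ in range(min(S.shape)):
        p, q = np.unravel_index(np.argmax(S), S.shape)   # largest unmarked s_pq
        if np.isneginf(S[p, q]):
            break                                        # all rows or columns marked
        if f1[p][q] > beta:                              # spatial gate f1(o_p, z_q) > beta
            pairs.append((p, q))                         # o_p correctly associated with z_q
        S[p, :] = -np.inf                                # mark all of row p
        S[:, q] = -np.inf                                # mark all of column q
    return pairs
```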
- Performing trajectory management on unassociated observation objects and unassociated predicted targets to establish temporary target trajectories and delete invalid targets includes: for each unassociated observation object, using the occlusion degree between the unassociated observation object and the predicted targets to obtain the discriminant function of the unassociated observation object, and deciding according to the discriminant function whether to establish a temporary target trajectory for it; and, for each unassociated predicted target, if the target corresponding to the unassociated predicted target remains unassociated for λ1 consecutive frames, judging the target invalid and deleting the invalid target, where λ1 is an integer greater than 1.
- The set of unassociated observation objects is Ω = {z1, ..., zm} and the set of predicted targets is O = {o1, ..., ol}. Using the occlusion degree between an unassociated observation object and the predicted targets to obtain its discriminant function includes: obtaining the occlusion degree ω(z, o) between the unassociated observation object z ∈ Ω and each predicted target o ∈ O, where r(·) denotes area, and then obtaining the discriminant function of each unassociated observation object z ∈ Ω, where γ is a constant parameter and 0 < γ < 1.
- Deciding according to the discriminant function whether to establish a temporary target trajectory for an unassociated observation object includes: for each unassociated observation object, if its discriminant function is 1, establishing a temporary target trajectory for it; if its discriminant function is 0, establishing none.
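- The expressions for the occlusion degree ω(z, o) and for the discriminant function appear only as images in the source. The sketch below assumes ω(z, o) is the overlapped area divided by the area of the observation object, r(z ∩ o)/r(z), and that the discriminant returns 1 only when no predicted target occludes z beyond γ; both forms are assumptions, not the published formulas.

```python
def occlusion(z, o):
    # assumed form: overlapped area over the observation's area, r(z "and" o) / r(z);
    # boxes are (x, y, w, h)
    ix = max(0.0, min(z[0] + z[2], o[0] + o[2]) - max(z[0], o[0]))
    iy = max(0.0, min(z[1] + z[3], o[1] + o[3]) - max(z[1], o[1]))
    return (ix * iy) / max(z[2] * z[3], 1e-12)

def discriminant(z, predicted, gamma=0.3):
    # returns 1 (start a temporary trajectory) only if no predicted target
    # occludes z beyond the constant gamma, 0 < gamma < 1; the thresholding is an assumption
    return 1 if all(occlusion(z, o) <= gamma for o in predicted) else 0
```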
- Filtering all valid target trajectories and temporary target trajectories to obtain the set of target states of the current video frame, and performing prediction using that set, includes filtering and predicting the valid target trajectories and the temporary target trajectories using a Kalman filter.
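- The patent fixes neither the state vector nor the noise model of the Kalman filter; the sketch below uses a generic constant-velocity model over image coordinates purely for illustration, with arbitrary dt, q, and r.

```python
import numpy as np

class CVKalman:
    # constant-velocity Kalman filter over (x, y, vx, vy); a generic sketch,
    # not the patent's specific filter design
    def __init__(self, x, y, dt=1.0, q=1e-2, r=1.0):
        self.x = np.array([x, y, 0.0, 0.0])
        self.P = np.eye(4)
        self.F = np.array([[1, 0, dt, 0], [0, 1, 0, dt],
                           [0, 0, 1, 0], [0, 0, 0, 1]], float)
        self.H = np.array([[1, 0, 0, 0], [0, 1, 0, 0]], float)
        self.Q = q * np.eye(4)
        self.R = r * np.eye(2)

    def predict(self):
        # produces the predicted target state used by the next frame's association
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[:2]

    def update(self, z):
        # filter the trajectory with the associated observation (x, y)
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ (np.asarray(z, float) - self.H @ self.x)
        self.P = (np.eye(4) - K @ self.H) @ self.P
```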
- Performing target detection on the current video frame includes: performing target detection on the current video frame using a mixed Gaussian background model.
- To at least partially solve the above problems, the present invention also proposes an online target tracking device. The device includes: a detection module configured to perform target detection on the current video frame to obtain observation objects; a matrix acquisition module configured to acquire the fuzzy membership matrix between the set of observation objects and the set of predicted targets, where the set of predicted targets is the set of predicted target states obtained by prediction using at least the set of target states of the previous video frame; an association module configured to associate the observation objects with the predicted targets according to the fuzzy membership matrix to obtain valid target trajectories; a trajectory management module configured to perform trajectory management on unassociated observation objects and unassociated predicted targets to establish temporary target trajectories and delete invalid targets; and a filter prediction module configured to filter all valid target trajectories and temporary target trajectories to obtain the set of target states of the current video frame and to perform prediction using that set.
- The present invention further proposes an online target tracking device comprising a processor and a camera, the processor being connected to the camera. The processor is configured to: perform target detection on the current video frame acquired from the camera to obtain observation objects; acquire the fuzzy membership matrix between the set of observation objects and the set of predicted targets, where the set of predicted targets is the set of predicted target states obtained by prediction using at least the set of target states of the previous video frame; associate the observation objects with the predicted targets according to the fuzzy membership matrix to obtain valid target trajectories; perform trajectory management on unassociated observation objects and unassociated predicted targets to establish temporary target trajectories and delete invalid targets; and filter all valid target trajectories and temporary target trajectories to obtain the set of target states of the current video frame, performing prediction with that set.
- The beneficial effects of the present invention are: a fuzzy membership matrix is constructed and used to associate the observation objects with the predicted targets, which solves the complex association problem that arises when the number of predicted targets and the number of observation objects differ, as happens with missed detections or newly appearing targets; trajectory management of unassociated observation objects and unassociated predicted targets determines whether they are new targets and establishes temporary target trajectories for new targets, reducing false target trajectory initiations and achieving highly robust online target tracking.
- FIG. 1 is a flow chart of a first embodiment of an online target tracking method of the present invention
- FIG. 2 is a flow chart of a second embodiment of the online target tracking method of the present invention.
- FIG. 3 is a flowchart of establishing a temporary target trajectory in the third embodiment of the online target tracking method of the present invention.
- FIG. 4 is a schematic structural diagram of a first embodiment of an online target tracking device according to the present invention.
- FIG. 5 is a schematic structural diagram of a second embodiment of the online target tracking device of the present invention.
- As shown in FIG. 1, the first embodiment of the online target tracking method of the present invention includes:
- S1: Perform target detection on the current video frame to obtain observation objects.
- Moving target detection algorithms such as the frame difference method, the optical flow method, and background subtraction are used to detect targets in the image of the current video frame, finding the moving pixels in the image, supplemented by median filtering and simple morphological processing; the moving targets finally obtained from the image serve as the observation objects.
- Generally, an observation object is a rectangular or other-shaped region of the image.
- In one embodiment of the online target tracking method of the present invention, background subtraction based on a mixed Gaussian background model is used to perform target detection on the image of the current video frame.
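- A minimal sketch of such a detector built on OpenCV's mixed-Gaussian background subtractor is shown below; the history, variance threshold, kernel size, and minimum contour area are illustrative choices, not values from the patent.

```python
import cv2

subtractor = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16,
                                                detectShadows=True)
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))

def detect(frame):
    # mixed-Gaussian background subtraction, median filtering, simple morphology
    mask = subtractor.apply(frame)
    mask = cv2.medianBlur(mask, 5)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    mask = cv2.threshold(mask, 200, 255, cv2.THRESH_BINARY)[1]  # drop shadow pixels (127)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    # observation objects as rectangular regions (x, y, w, h)
    return [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) > 100]
```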
- S2: Acquire the fuzzy membership matrix between the set of observation objects and the set of predicted targets.
- The Fuzzy C-Means clustering algorithm is used to obtain the fuzzy membership matrix between the set of observation objects and the set of predicted targets, where the set of predicted targets is the set of predicted target states of the current video frame obtained by prediction using at least the set of target states of the previous video frame.
- The fuzzy membership degrees between the predicted targets and the observation objects can be computed with the set of predicted targets as cluster centers to obtain the fuzzy membership matrix; the fuzzy membership degrees between the observation objects and the predicted targets can likewise be computed with the set of observation objects as cluster centers; or the two kinds of fuzzy membership degrees can be combined to obtain the fuzzy membership matrix.
- S3: Associate the observation objects with the predicted targets according to the fuzzy membership matrix to obtain valid target trajectories.
- Based on the maximum-membership criterion, the observation objects and the predicted targets are associated according to the fuzzy membership matrix; each correctly associated pair of predicted target and observation object constitutes a valid target trajectory.
- S4: Perform trajectory management on unassociated observation objects and unassociated predicted targets to establish temporary target trajectories and delete invalid targets.
- In complex environments, false observation objects may appear owing to background interference, deformation of the targets themselves, and other factors: multiple observation objects may be detected for the same target, or several targets, or a target together with background, may be detected as one observation object. An unassociated observation object may therefore be either a newly appearing target or a false observation; it is necessary to judge whether an unassociated observation object is a false observation, and an unassociated observation object that is not a false observation is judged to be a new target, for which a temporary trajectory is established.
- Unassociated predicted targets may occur when a target moves out of the camera's field of view or is occluded by the background or by other targets. For an unassociated predicted target, the predicted value is used as the target's state in the current video frame; if the target remains unassociated for several consecutive frames, it is judged invalid and the invalid target is deleted.
- S5: Filter all valid target trajectories and temporary target trajectories to obtain the set of target states of the current video frame, and perform prediction using that set.
- the set of target states of the current video frame includes the state of all targets in the current video frame.
- The result of prediction using the set of target states of the current video frame serves as the set of predicted targets of the next video frame, for use in tracking on the next frame.
- In one embodiment of the online target tracking method of the present invention, the valid target trajectories and the temporary target trajectories are filtered and predicted using a Kalman filter.
- The video output by the camera is processed frame by frame according to the above steps to achieve online target tracking.
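- Tying the steps together, a structural sketch of the per-frame loop S1-S5 follows. Here detect, distance, and spatial_sim stand for the detector, the feature-distance computation, and the f1 similarity sketched elsewhere in this description; CVKalman, membership_matrix, associate, and discriminant refer to the earlier sketches, and β, γ, λ1, and λ2 are placeholder values, not parameters fixed by the patent.

```python
def run_tracker(frames, detect, distance, spatial_sim,
                beta=0.6, gamma=0.3, lam1=5, lam2=3):
    # each track: Kalman filter, last bounding box, consecutive-miss and hit counts
    confirmed, tentative = [], []
    for frame in frames:
        obs = detect(frame)                                    # S1: observation objects
        tracks = confirmed + tentative
        preds = []
        for t in tracks:                                       # predictions from last frame
            cx, cy = t['kf'].predict()
            w, h = t['box'][2], t['box'][3]
            preds.append((cx - w / 2, cy - h / 2, w, h))
        pairs = []
        if tracks and obs:
            S = membership_matrix(distance(preds, obs))        # S2: fuzzy memberships
            pairs = associate(S, spatial_sim(preds, obs), beta)    # S3: association
        matched_t = {p for p, _ in pairs}
        matched_o = {q for _, q in pairs}
        for p, q in pairs:                                     # S5: filter matched tracks
            x, y, w, h = obs[q]
            tracks[p]['kf'].update((x + w / 2, y + h / 2))
            tracks[p].update(box=obs[q], miss=0, hit=tracks[p]['hit'] + 1)
        for i, t in enumerate(tracks):                         # S4: unassociated predictions
            if i not in matched_t:
                t['miss'] += 1
        confirmed = [t for t in confirmed if t['miss'] < lam1]        # delete invalid targets
        confirmed += [t for t in tentative if t['hit'] >= lam2]       # promote tentative tracks
        tentative = [t for t in tentative if t['hit'] < lam2 and t['miss'] == 0]
        for q, z in enumerate(obs):                            # S4: new temporary trajectories
            if q not in matched_o and discriminant(z, preds, gamma) == 1:
                tentative.append({'kf': CVKalman(z[0] + z[2] / 2, z[1] + z[3] / 2),
                                  'box': z, 'miss': 0, 'hit': 1})
    return confirmed
```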
- Through the implementation of the above embodiment, a fuzzy membership matrix is constructed and used to associate the observation objects with the predicted targets, which solves the complex association problem that arises when the number of predicted targets and the number of observation objects differ, as with missed detections or newly appearing targets; trajectory management of unassociated observation objects and unassociated predicted targets determines whether they are new targets and establishes temporary target trajectories for new targets, reducing false target trajectory initiations and achieving highly robust online target tracking.
- As shown in FIG. 2, the second embodiment of the online target tracking method of the present invention is based on the first embodiment and further specifies that steps S2 and S3 include:
- S21: Taking the set of predicted targets as cluster centers, acquire the first membership degree between each predicted target in the set of predicted targets and each observation object in the set of observation objects.
- The set of predicted targets is O = {o1, ..., ol} and the set of observation objects is Z = {z1, ..., zr}. Taking the set of predicted targets as cluster centers, a first objective function is constructed, where m = 2, u_ik is the first membership degree, i.e., the fuzzy membership degree between a predicted target and an observation object, and g(o_i, z_k) denotes the feature distance between predicted target o_i and observation object z_k.
- In one embodiment of the online target tracking method of the present invention, based on space-time cues, the distance between a predicted target and an observation object is measured with space-time multi-attribute features, including the spatial distance feature, geometric size feature, color feature, gradient direction feature, and motion direction feature, defining the feature distance between predicted target o_i and observation object z_k as g(o_i, z_k) = 1 − f1(o_i, z_k) × f2(o_i, z_k) × f3(o_i, z_k) × f4(o_i, z_k) × f5(o_i, z_k) (4).
- f 1 ( ⁇ ) is the spatial distance feature similarity measure function
- f 2 ( ⁇ ) is the geometric size feature similarity measure function
- f 3 ( ⁇ ) is the motion direction feature similarity measure function
- f 4 ( ⁇ ) is the color
- f 5 ( ⁇ ) is a gradient direction feature similarity measure function, which is defined as follows:
- For the color feature similarity measure function f4(·), the objects being tracked are pedestrians. In general, a pedestrian's dress can be divided into two relatively independent parts: the color features of the upper part of the pedestrian and those of the lower part are relatively independent. A pedestrian target is therefore split into upper and lower sub-blocks, the color features of the two sub-blocks are described separately, and sub-block color histograms are used to compute the color feature similarity between the predicted target and the observation object.
- The target image consists of n pixels {x_i}, i = 1, ..., n, and may correspond to the predicted target o_i or the observation object z_k. The target image is split along its dividing line S_T into an upper half sub-block T1 and a lower half sub-block T2, and the gray levels of the target image are discretized into m levels, giving the color histogram of the upper half sub-block T1, where b(x_i) is the quantized value of the pixel at x_i; if the quantized value b(x_i) of the pixel at x_i corresponds to pixel level u, then δ[b(x_i) − u] takes 1, otherwise δ[b(x_i) − u] takes 0.
- To compute the color feature similarity between predicted target o_i and observation object z_k, the predicted target o_i is split into upper and lower sub-blocks and the color histograms of its upper half and lower half sub-blocks are computed with equations (6) and (7); the observation object z_k is split into upper and lower sub-blocks and its two color histograms are computed in the same way; the color feature similarity measure function between o_i and z_k is then computed from the color histograms of the sub-blocks, where ρ(·) denotes the Bhattacharyya coefficient.
- H g ( ⁇ ) in f 5 ( ⁇ ) represents the histogram of the block gradient direction, The variance constant for the gradient direction.
- S22: Taking the set of observation objects as cluster centers, acquire the second membership degree between each observation object in the set of observation objects and each predicted target in the set of predicted targets.
- With the set of observation objects as cluster centers, a second objective function is constructed, where m = 2, u′_ki is the second membership degree, i.e., the fuzzy membership degree between an observation object and a predicted target, and g(o_i, z_k) denotes the feature distance between predicted target o_i and observation object z_k. Using the Lagrange multiplier method, the second membership degree is obtained.
- S23: Acquire the fuzzy membership matrix from the first membership degree and the second membership degree. From the first membership degree u_ik computed by equation (1) and the second membership degree u′_ki computed by equation (2), the comprehensive membership degree is s_ik = α × u_ik + (1 − α) × u′_ki (3), where α is a constant coefficient and α ∈ [0, 1]; the fuzzy membership matrix S = [s_ik]l×r is acquired from the comprehensive membership degrees.
- S24: Find the maximum value s_pq among all unmarked elements of the fuzzy membership matrix S. The predicted targets are associated with the observation objects based on the maximum-membership criterion.
- S25: Mark all elements of the p-th row and all elements of the q-th column of S.
- S26: Judge whether the spatial distance feature similarity measure function f1(o_p, z_q) of predicted target o_p and observation object z_q is greater than the threshold constant β, where 0 < β < 1 and a larger β imposes a stricter requirement on the spatial distance similarity. If f1(o_p, z_q) > β, go to step S27; otherwise go to step S28.
- S27: Predicted target o_p is correctly associated with observation object z_q, giving a valid target trajectory. Proceed to step S28.
- S28: Judge whether no unmarked rows or columns remain in the fuzzy membership matrix S. If all rows or all columns of S have been marked, end the procedure; otherwise return to step S24.
- When a new target appears, or a target is missed owing to occlusion or leaving the monitored area, the number of predicted targets and the number of observation objects may differ.
- If only the set of predicted targets is used as cluster centers, the observation object corresponding to a newly appearing target should behave as an outlier whose fuzzy memberships to all predicted targets are small; under the constraint in equation (10), however, the computation may assign that observation object large memberships to several predicted targets, contradicting the real situation. Moreover, when there is only one predicted target, the constraint in equation (10) forces every observation object's membership to that predicted target to be 1, which is also inconsistent with reality.
- Conversely, if only the set of observation objects is used as cluster centers, when a target is missed because of occlusion or similar factors, the fuzzy memberships of its predicted target to all current observation objects should be small; under the constraint in equation (11), the computation may nevertheless assign the predicted target large memberships to several observation objects near it, contradicting the real situation. Likewise, when there is only one observation object, the constraint in equation (11) forces every predicted target's membership to that observation object to be 1, which is inconsistent with reality.
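- A small numeric check of this single-center degeneracy, reusing fcm_memberships and membership_matrix from the earlier sketch (the distances are made up):

```python
import numpy as np

# three predicted targets, one observation object: under the second objective's
# constraint every predicted target gets membership 1 to the lone observation
dist = np.array([[0.2], [0.9], [0.95]])
print(fcm_memberships(dist, axis=1).ravel())       # -> [1. 1. 1.], distances ignored
# the comprehensive membership restores the contrast the distances carry
print(membership_matrix(dist, alpha=0.5).ravel())  # -> approx [0.96, 0.52, 0.52]
```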
- Through the implementation of the above embodiment, the ambiguity of the association between predicted targets and observation objects in complex environments is taken into account, and the comprehensive membership degree is computed from the first and second membership degrees. This solves the complex association problem that arises when the number of predicted targets and the number of observation objects differ, as with missed detections or newly appearing targets, and thereby helps solve online target tracking under high-frequency occlusion and large numbers of false observations.
- The third embodiment of the online target tracking method of the present invention is based on the first embodiment and further specifies that step S4 includes:
- For each unassociated observation object, the occlusion degree between the unassociated observation object and the predicted targets is used to obtain its discriminant function, and whether a temporary target trajectory is established for it is decided according to the discriminant function. Further, if a temporary target trajectory is associated for λ2 consecutive frames, it is converted into a valid target trajectory; otherwise the temporary target trajectory is deleted, where λ2 is an integer greater than 1.
- For each unassociated predicted target, if the corresponding target remains unassociated for λ1 consecutive frames, the target is judged invalid and the invalid target is deleted, where λ1 is an integer greater than 1.
- As shown in FIG. 3, using the occlusion degree between an unassociated observation object and the predicted targets to obtain the discriminant function, and deciding accordingly whether to establish a temporary target trajectory, includes:
- S41: Obtain the occlusion degree between each unassociated observation object and the predicted targets. The set of unassociated observation objects is Ω = {z1, ..., zm} and the set of predicted targets is O = {o1, ..., ol}; the occlusion degree ω(z, o) between an unassociated observation object z ∈ Ω and a predicted target o ∈ O is obtained, where r(·) denotes area, 0 ≤ ω(z, o) ≤ 1, and ω(z, o) > 0 indicates that occlusion occurs between observation object z and predicted target o.
- S42: Obtain the discriminant function of each unassociated observation object z ∈ Ω, where γ is a constant parameter and 0 < γ < 1.
- For each unassociated observation object, if its discriminant function is 1, go to step S43, in which a temporary target trajectory is established for the unassociated observation object; if its discriminant function is 0, go to step S44, in which no temporary target trajectory is established. This ends the procedure for that observation object.
- Through the implementation of the above embodiment, the occlusion degree between unassociated observation objects and predicted targets is analyzed, and the discriminant function obtained from it decides whether to establish a temporary target trajectory for each unassociated observation object. This effectively prevents false observation objects from being taken as new targets and improves the accuracy of target tracking.
- The following table shows the results of experiments on the public test video sequence PETS.S2L1 using an embodiment of the online target tracking method of the present invention. This embodiment combines the first, second, and third embodiments of the present invention and uses a Kalman filter to filter and predict the valid target trajectories and temporary target trajectories.
- The algorithm of reference [1] is a multi-target tracking algorithm based on K-shortest-paths optimization proposed by Berclaz et al.; see J. Berclaz, F. Fleuret, E. Türetken, et al., "Multiple Object Tracking Using K-Shortest Paths Optimization," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 33, no. 9, 2011: 1806-1819.
- The algorithm of reference [2] is an online multi-target tracking algorithm based on tracklet confidence and online learning of discriminative appearance models; see S. Bae, K. Yoon, "Robust Online Multi-object Tracking Based on Tracklet Confidence and Online Discriminative Appearance Learning," IEEE CVPR, 2014: 1218-1225.
- The column headers are the evaluation metrics used to assess the experimental results: multi-target tracking precision (MOTP↑), multi-target tracking accuracy (MOTA↑), number of target label changes (IDS↓), ratio of accurately tracked targets (MT↑), ratio of lost targets (ML↓), and number of target trajectory interruptions (FG↓). An up arrow ↑ indicates that a larger value means better tracking; a down arrow ↓ indicates that a smaller value means better tracking.
- The definition of MOTP is given in terms of r(·), which denotes the area of a region, and η_t, the number of target states output by the tracking algorithm at time t that match the true target states.
- The definition of MOTA is given in terms of FP_t, the number of false states output by the tracking algorithm at time t; FN_t, the number of true targets missed in the output of the tracking algorithm at time t; IDS_t, the number of target label changes at time t; and μ_t, the number of targets at time t.
- MT is defined as the number of target trajectories in the tracking algorithm's output whose matching rate with the true target states exceeds 80%. ML is defined as the number of target trajectories in the tracking algorithm's output whose matching rate with the true target states is below 20%. FG is defined as the number of times the output target trajectories are interrupted.
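- The equations for MOTP and MOTA are images in the source, but the per-term descriptions above match the standard CLEAR-MOT forms, which the following sketch assumes.

```python
def motp(overlaps, eta):
    # MOTP: total matched-region overlap divided by the total number of matches eta_t
    return sum(overlaps) / float(sum(eta))

def mota(fp, fn, ids, mu):
    # MOTA = 1 - sum_t (FP_t + FN_t + IDS_t) / sum_t mu_t
    return 1.0 - (sum(fp) + sum(fn) + sum(ids)) / float(sum(mu))
```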
- The test video sequence PETS.S2L1 contains a variety of tracking difficulties: crossing target trajectories and high-frequency inter-target occlusion caused by targets moving close to one another; long periods of complete occlusion by a street lamp when a target stays at the center of the scene; and sudden stops, turns, and other motion changes during a target's travel, together with drastic changes in target posture.
- As the table shows, the method of this embodiment outperforms the comparison algorithms on the multi-target tracking accuracy (MOTA) metric; among the comparisons, the algorithm of reference [2] is an online tracking algorithm and the algorithm of reference [1] is an offline tracking algorithm. On the target label change (IDS) metric, the present method is slightly worse than the algorithm of reference [2] but clearly better than the algorithm of reference [1], which demonstrates the effectiveness of the proposed fuzzy data association method. Because this embodiment uses a Kalman filter, the states of nonlinearly moving targets cannot be estimated and predicted accurately, so the method scores below the comparison algorithms on the trajectory interruption (FG) and multi-target tracking precision (MOTP) metrics. Even so, the MOTA of this embodiment remains superior to the comparison algorithms, which fully demonstrates that the method effectively reduces false target trajectory initiations while ensuring accurate data association.
- As shown in FIG. 4, the first embodiment of the online target tracking device of the present invention includes:
- The detection module 10 is configured to perform target detection on the current video frame to obtain observation objects.
- The matrix acquisition module 20 is configured to acquire the fuzzy membership matrix between the set of observation objects and the set of predicted targets, where the set of predicted targets is the set of predicted target states obtained by prediction using at least the set of target states of the previous video frame.
- The association module 30 is configured to associate the observation objects with the predicted targets according to the fuzzy membership matrix to obtain valid target trajectories.
- The trajectory management module 40 is configured to perform trajectory management on unassociated observation objects and unassociated predicted targets to establish temporary target trajectories and delete invalid targets.
- The filter prediction module 50 is configured to filter all valid target trajectories and temporary target trajectories to obtain the set of target states of the current video frame, and to perform prediction using that set.
- The modules included in the online target tracking device of this embodiment are used to execute the steps of the first embodiment of the online target tracking method of the present invention corresponding to FIG. 1. For details, please refer to FIG. 1 and the corresponding first embodiment of the online target tracking method; they are not repeated here.
- As shown in FIG. 5, the second embodiment of the online target tracking device of the present invention includes a processor 110 and a camera 120.
- the camera 120 can be a local camera, the processor 110 is connected to the camera 120 via a bus; the camera 120 can also be a remote camera, and the processor 110 is connected to the camera 120 via a local area network or the Internet.
- the processor 110 controls the operation of the online target tracking device, and the processor 110 may also be referred to as a CPU (Central Processing Unit).
- Processor 110 may be an integrated circuit chip with signal processing capabilities.
- The processor 110 may also be a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
- The general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
- The online target tracking device may further include a memory (not shown) for storing the instructions and data necessary for the operation of the processor 110, and for storing the video data captured by the camera 120.
- The processor 110 is configured to: perform target detection on the current video frame acquired from the camera 120 to obtain observation objects; acquire the fuzzy membership matrix between the set of observation objects and the set of predicted targets, where the set of predicted targets is the set of predicted target states obtained by prediction using at least the set of target states of the previous video frame; associate the observation objects with the predicted targets according to the fuzzy membership matrix to obtain valid target trajectories; perform trajectory management on unassociated observation objects and unassociated predicted targets to establish temporary target trajectories and delete invalid targets; and filter all valid target trajectories and temporary target trajectories to obtain the set of target states of the current video frame, performing prediction using that set.
- The above is merely an embodiment of the present invention and does not thereby limit the patent scope of the present invention; any equivalent structural or process transformation made using the contents of the description and drawings of the present invention, whether applied directly or indirectly in other related technical fields, is likewise included within the patent protection scope of the present invention.
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Image Analysis (AREA)
Abstract
An online target tracking method and a tracking device, the tracking method comprising: performing target detection on a current video frame to obtain observation objects (S1); acquiring a fuzzy membership matrix between the set of observation objects and the set of predicted targets (S2), where the set of predicted targets is the set of predicted target states obtained by prediction using at least the set of target states of the previous video frame; associating the observation objects with the predicted targets according to the fuzzy membership matrix to obtain valid target trajectories (S3); performing trajectory management on unassociated observation objects and unassociated predicted targets to establish temporary target trajectories and delete invalid targets (S4); and filtering all valid target trajectories and temporary target trajectories to obtain the set of target states of the current video frame, and performing prediction using that set (S5).
Claims (12)
- An online target tracking method, comprising: performing target detection on a current video frame to obtain observation objects; acquiring a fuzzy membership matrix between the set of observation objects and a set of predicted targets, wherein the set of predicted targets is a set of predicted target states obtained by prediction using at least the set of target states of a previous video frame; associating the observation objects with the predicted targets according to the fuzzy membership matrix to obtain valid target trajectories; performing trajectory management on unassociated observation objects and unassociated predicted targets to establish temporary target trajectories and delete invalid targets; and filtering all valid target trajectories and temporary target trajectories to obtain the set of target states of the current video frame, and performing prediction using the set of target states of the current video frame.
- The method according to claim 1, wherein acquiring the fuzzy membership matrix between the set of observation objects and the set of predicted targets comprises: taking the set of predicted targets as cluster centers, acquiring a first membership degree between each predicted target in the set of predicted targets and each observation object in the set of observation objects; taking the set of observation objects as cluster centers, acquiring a second membership degree between each observation object in the set of observation objects and each predicted target in the set of predicted targets; and acquiring the fuzzy membership matrix using the first membership degree and the second membership degree.
- The method according to claim 2, wherein the set of predicted targets is O = {o1, ..., ol} and the set of observation objects is Z = {z1, ..., zr}; acquiring the first membership degree takes the set of predicted targets as cluster centers, with m = 2, where g(o_i, z_k) denotes the feature distance between predicted target o_i and observation object z_k; acquiring the second membership degree takes the set of observation objects as cluster centers, likewise with m = 2; and acquiring the fuzzy membership matrix using the first membership degree and the second membership degree comprises: acquiring the comprehensive membership degree s_ik = α × u_ik + (1 − α) × u′_ki (3) between each observation object in the set of observation objects and each predicted target in the set of predicted targets, where α is a constant coefficient and α ∈ [0, 1], and acquiring the fuzzy membership matrix S = [s_ik]l×r using the comprehensive membership degree s_ik.
- The method according to claim 3, wherein the feature distance between predicted target o_i and observation object z_k is g(o_i, z_k) = 1 − f1(o_i, z_k) × f2(o_i, z_k) × f3(o_i, z_k) × f4(o_i, z_k) × f5(o_i, z_k) (4), where f1(·) is the spatial distance feature similarity measure function, f2(·) is the geometric size feature similarity measure function, f3(·) is the motion direction feature similarity measure function, f4(·) is the color feature similarity measure function, and f5(·) is the gradient direction feature similarity measure function.
- The method according to claim 4, wherein the target image consists of n pixels {x_i}, i = 1, ..., n, and may correspond to the predicted target o_i or the observation object z_k; the target image is split into an upper half sub-block T1 and a lower half sub-block T2 along the dividing line S_T of the target image, and the gray levels of the target image are discretized into m levels, where b(x_i) is the quantized value of the pixel at x_i, and if the quantized value b(x_i) of the pixel at x_i corresponds to pixel level u, δ[b(x_i) − u] takes 1, otherwise δ[b(x_i) − u] takes 0; the predicted target o_i is split into upper and lower sub-blocks and the color histograms of its upper half and lower half sub-blocks are computed with equations (6) and (7); the observation object z_k is split into upper and lower sub-blocks and its color histograms are computed likewise; and the color feature similarity measure function between predicted target o_i and observation object z_k is computed using the color histograms of the sub-blocks.
- The method according to claim 2, wherein associating the observation objects with the predicted targets according to the fuzzy membership matrix to obtain valid target trajectories comprises: finding the maximum value s_pq among all unmarked elements in the fuzzy membership matrix S; marking all elements of the p-th row and all elements of the q-th column of S; judging whether the spatial distance feature similarity measure function f1(o_p, z_q) of predicted target o_p and observation object z_q is greater than a threshold constant β; if f1(o_p, z_q) > β, judging that predicted target o_p is correctly associated with observation object z_q as a valid target trajectory; and performing the above steps cyclically until all rows or all columns of the fuzzy membership matrix S are marked.
- The method according to claim 1, wherein performing trajectory management on unassociated observation objects and unassociated predicted targets to establish temporary target trajectories and delete invalid targets comprises: for an unassociated observation object, obtaining a discriminant function of the unassociated observation object using the occlusion degree between the unassociated observation object and the predicted targets, and judging according to the discriminant function whether to establish a temporary target trajectory for the unassociated observation object; and, for an unassociated predicted target, if the target corresponding to the unassociated predicted target remains unassociated for λ1 consecutive frames, judging the target invalid and deleting the invalid target, λ1 being an integer greater than 1.
- The method according to claim 7, wherein the set of unassociated observation objects is Ω = {z1, ..., zm} and the set of predicted targets is O = {o1, ..., ol}; obtaining the discriminant function of the unassociated observation object using the occlusion degree between the unassociated observation object and the predicted targets comprises: obtaining the occlusion degree between the unassociated observation object z ∈ Ω and a predicted target o ∈ O, where r(·) denotes area, and obtaining the discriminant function of each unassociated observation object z ∈ Ω, where γ is a constant parameter and 0 < γ < 1; and judging according to the discriminant function whether to establish a temporary target trajectory for the unassociated observation object comprises: for each unassociated observation object, establishing the temporary target trajectory for it if its discriminant function is 1, and not establishing one if its discriminant function is 0.
- The method according to any one of claims 1-8, wherein filtering all valid target trajectories and temporary target trajectories to obtain the set of target states of the current video frame, and performing prediction using the set of target states of the current video frame, comprises: filtering and predicting the valid target trajectories and the temporary target trajectories using a Kalman filter.
- The method according to any one of claims 1-8, wherein performing target detection on the current video frame comprises: performing target detection on the current video frame using a mixed Gaussian background model.
- An online target tracking device, comprising: a detection module configured to perform target detection on a current video frame to obtain observation objects; a matrix acquisition module configured to acquire a fuzzy membership matrix between the set of observation objects and a set of predicted targets, wherein the set of predicted targets is a set of predicted target states obtained by prediction using at least the set of target states of a previous video frame; an association module configured to associate the observation objects with the predicted targets according to the fuzzy membership matrix to obtain valid target trajectories; a trajectory management module configured to perform trajectory management on unassociated observation objects and unassociated predicted targets to establish temporary target trajectories and delete invalid targets; and a filter prediction module configured to filter all valid target trajectories and temporary target trajectories to obtain the set of target states of the current video frame and to perform prediction using the set of target states of the current video frame.
- An online target tracking device, comprising a processor and a camera, the processor being connected to the camera; the processor being configured to: perform target detection on the current video frame acquired from the camera to obtain observation objects; acquire a fuzzy membership matrix between the set of observation objects and a set of predicted targets, wherein the set of predicted targets is a set of predicted target states obtained by prediction using at least the set of target states of a previous video frame; associate the observation objects with the predicted targets according to the fuzzy membership matrix to obtain valid target trajectories; perform trajectory management on unassociated observation objects and unassociated predicted targets to establish temporary target trajectories and delete invalid targets; and filter all valid target trajectories and temporary target trajectories to obtain the set of target states of the current video frame, and perform prediction using the set of target states of the current video frame.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610269208.2 | 2016-04-26 | ||
CN201610269208.2A CN105894542B (zh) | 2016-04-26 | 2016-04-26 | Online target tracking method and apparatus
Publications (1)
Publication Number | Publication Date |
---|---|
WO2017185688A1 (zh) | 2017-11-02 |
Family
ID=56704760
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2016/103141 WO2017185688A1 (zh) | 2016-04-26 | 2016-10-25 | Online target tracking method and apparatus
Country Status (2)
Country | Link |
---|---|
CN (1) | CN105894542B (zh) |
WO (1) | WO2017185688A1 (zh) |
Cited By (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109916407A (zh) * | 2019-02-03 | 2019-06-21 | Henan University of Science and Technology | Indoor mobile robot combined positioning method based on an adaptive Kalman filter |
CN111274336A (zh) * | 2019-12-18 | 2020-06-12 | Zhejiang Dahua Technology Co., Ltd. | Target trajectory processing method and apparatus, storage medium, and electronic apparatus |
CN111986230A (zh) * | 2019-05-23 | 2020-11-24 | Beijing Horizon Robotics Technology R&D Co., Ltd. | Method and apparatus for tracking the posture of a target object in video |
CN112084372A (zh) * | 2020-09-14 | 2020-12-15 | Beijing Shuyan Technology Co., Ltd. | Pedestrian trajectory updating method and apparatus |
CN112116634A (zh) * | 2020-07-30 | 2020-12-22 | Xi'an Jiaotong University | Multi-target tracking method with a semi-online mechanism |
CN112632463A (zh) * | 2020-12-22 | 2021-04-09 | AVIC Shenyang Aircraft Design and Research Institute | Multi-attribute-based target data association method and apparatus |
CN113111142A (zh) * | 2021-03-23 | 2021-07-13 | Unit 91388 of the Chinese People's Liberation Army | Real-time processing method for outliers in underwater target tracks on a command display platform |
CN113139417A (zh) * | 2020-11-24 | 2021-07-20 | Shenzhen Intellifusion Technologies Co., Ltd. | Moving object tracking method and related device |
CN113177470A (zh) * | 2021-04-28 | 2021-07-27 | Huazhong University of Science and Technology | Pedestrian trajectory prediction method, apparatus, device, and storage medium |
CN113281760A (zh) * | 2021-05-21 | 2021-08-20 | Apollo Intelligent Technology (Beijing) Co., Ltd. | Obstacle detection method and apparatus, electronic device, vehicle, and storage medium |
CN113534135A (zh) * | 2021-06-30 | 2021-10-22 | Naval Aviation University of the Chinese People's Liberation Army | Track association method and apparatus based on a dispersion linear trend test |
CN114066944A (zh) * | 2022-01-17 | 2022-02-18 | Tianjin Juxin Guanghe Technology Co., Ltd. | Pedestrian-tracking-based analysis method for worker station behavior in an optical module production workshop |
CN116718197A (zh) * | 2023-08-09 | 2023-09-08 | Tencent Technology (Shenzhen) Co., Ltd. | Trajectory processing method and apparatus, electronic device, and storage medium |
CN117455955A (zh) * | 2023-12-14 | 2024-01-26 | Wuhan Textile University | Pedestrian multi-target tracking method from a UAV perspective |
Families Citing this family (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105894542B (zh) * | 2016-04-26 | 2019-06-11 | Shenzhen University | Online target tracking method and apparatus |
CN106327526B (zh) * | 2016-08-22 | 2020-07-07 | Hangzhou Baoxin Technology Co., Ltd. | Image target tracking method and system |
WO2018107492A1 (zh) * | 2016-12-16 | 2018-06-21 | Shenzhen University | Target tracking method and apparatus based on an intuitionistic fuzzy random forest |
CN107169996A (zh) * | 2017-05-15 | 2017-09-15 | Huaqiao University | Dynamic face recognition method in video |
WO2018227491A1 (zh) * | 2017-06-15 | 2018-12-20 | Shenzhen University | Video multi-target fuzzy data association method and apparatus |
WO2019006633A1 (zh) * | 2017-07-04 | 2019-01-10 | Shenzhen University | Fuzzy-logic-based video multi-target tracking method and apparatus |
CN109426791B (zh) * | 2017-09-01 | 2022-09-16 | Shenzhen Genvict Technologies Co., Ltd. | Multi-site multivariate vehicle matching method, server, and system |
CN110349184B (zh) * | 2019-06-06 | 2022-08-09 | Nanjing Institute of Technology | Multi-pedestrian tracking method based on iterative filtering and observation discrimination |
CN110363165B (zh) * | 2019-07-18 | 2023-04-14 | Shenzhen University | TSK-fuzzy-system-based multi-target tracking method, apparatus, and storage medium |
CN110349188B (zh) * | 2019-07-18 | 2023-10-27 | Shenzhen University | TSK-fuzzy-model-based multi-target tracking method, apparatus, and storage medium |
CN113247720A (zh) * | 2021-06-02 | 2021-08-13 | Zhejiang Xinzailing Technology Co., Ltd. | Video-based intelligent elevator control method and system |
CN113534127B (zh) * | 2021-07-13 | 2023-10-27 | Shenzhen University | Multi-target data association method, apparatus, and computer-readable storage medium |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20080187175A1 (en) * | 2007-02-07 | 2008-08-07 | Samsung Electronics Co., Ltd. | Method and apparatus for tracking object, and method and apparatus for calculating object pose information |
CN102853836A (zh) * | 2012-09-10 | 2013-01-02 | University of Electronic Science and Technology of China | Feedback weighted fusion method based on track quality |
CN103679753A (zh) * | 2013-12-16 | 2014-03-26 | Shenzhen University | Trajectory labeling method and system for a probability hypothesis density filter |
CN103955892A (zh) * | 2014-04-03 | 2014-07-30 | Shenzhen University | Target tracking method and extended truncated unscented Kalman filtering method and apparatus |
CN105205313A (zh) * | 2015-09-07 | 2015-12-30 | Shenzhen University | Fuzzy Gaussian sum particle filtering method and apparatus, and target tracking method and apparatus |
CN105894542A (zh) * | 2016-04-26 | 2016-08-24 | Shenzhen University | Online target tracking method and apparatus |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103632376A (zh) * | 2013-12-12 | 2014-03-12 | Jiangsu University | Two-stage-framework method for eliminating partial vehicle occlusion |
CN103942774A (zh) * | 2014-01-20 | 2014-07-23 | Tianjin University | Multi-target collaborative salient region detection method based on similarity propagation |
CN104851112B (zh) * | 2015-04-28 | 2017-03-01 | Beijing Institute of Technology | Evaluation method for moving target detection and tracking algorithms based on dataset compensation |
CN104899590B (zh) * | 2015-05-21 | 2019-08-09 | Shenzhen University | UAV visual target following method and system |
- 2016-04-26: CN application CN201610269208.2A filed, granted as CN105894542B (active)
- 2016-10-25: PCT application PCT/CN2016/103141 filed (WO2017185688A1, application filing)
Non-Patent Citations (1)
- ZHANG, GANG et al., "An Improved Multi-target Tracking Data Association Algorithm Based on FCM," Journal of Air Force Engineering University (Natural Science Edition), vol. 11, no. 1, 28 February 2010.
Cited By (25)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109916407A (zh) * | 2019-02-03 | 2019-06-21 | Henan University of Science and Technology | Indoor mobile robot combined positioning method based on an adaptive Kalman filter |
CN109916407B (zh) * | 2019-02-03 | 2023-03-31 | Henan University of Science and Technology | Indoor mobile robot combined positioning method based on an adaptive Kalman filter |
CN111986230A (zh) * | 2019-05-23 | 2020-11-24 | Beijing Horizon Robotics Technology R&D Co., Ltd. | Method and apparatus for tracking the posture of a target object in video |
CN111274336A (zh) * | 2019-12-18 | 2020-06-12 | Zhejiang Dahua Technology Co., Ltd. | Target trajectory processing method and apparatus, storage medium, and electronic apparatus |
CN111274336B (zh) * | 2019-12-18 | 2023-05-09 | Zhejiang Dahua Technology Co., Ltd. | Target trajectory processing method and apparatus, storage medium, and electronic apparatus |
CN112116634B (zh) * | 2020-07-30 | 2024-05-07 | Xi'an Jiaotong University | Multi-target tracking method with a semi-online mechanism |
CN112116634A (zh) * | 2020-07-30 | 2020-12-22 | Xi'an Jiaotong University | Multi-target tracking method with a semi-online mechanism |
CN112084372B (zh) * | 2020-09-14 | 2024-01-26 | Beijing Shuyan Technology Co., Ltd. | Pedestrian trajectory updating method and apparatus |
CN112084372A (zh) * | 2020-09-14 | 2020-12-15 | Beijing Shuyan Technology Co., Ltd. | Pedestrian trajectory updating method and apparatus |
CN113139417A (zh) * | 2020-11-24 | 2021-07-20 | Shenzhen Intellifusion Technologies Co., Ltd. | Moving object tracking method and related device |
CN113139417B (zh) * | 2020-11-24 | 2024-05-03 | Shenzhen Intellifusion Technologies Co., Ltd. | Moving object tracking method and related device |
CN112632463A (zh) * | 2020-12-22 | 2021-04-09 | AVIC Shenyang Aircraft Design and Research Institute | Multi-attribute-based target data association method and apparatus |
CN112632463B (zh) * | 2020-12-22 | 2024-06-11 | AVIC Shenyang Aircraft Design and Research Institute | Multi-attribute-based target data association method and apparatus |
CN113111142A (zh) * | 2021-03-23 | 2021-07-13 | Unit 91388 of the Chinese People's Liberation Army | Real-time processing method for outliers in underwater target tracks on a command display platform |
CN113111142B (zh) * | 2021-03-23 | 2024-02-02 | Unit 91388 of the Chinese People's Liberation Army | Real-time processing method for outliers in underwater target tracks on a command display platform |
CN113177470A (zh) * | 2021-04-28 | 2021-07-27 | Huazhong University of Science and Technology | Pedestrian trajectory prediction method, apparatus, device, and storage medium |
CN113281760A (zh) * | 2021-05-21 | 2021-08-20 | Apollo Intelligent Technology (Beijing) Co., Ltd. | Obstacle detection method and apparatus, electronic device, vehicle, and storage medium |
CN113534135B (zh) * | 2021-06-30 | 2024-04-12 | Naval Aviation University of the Chinese People's Liberation Army | Track association method and apparatus based on a dispersion linear trend test |
CN113534135A (zh) * | 2021-06-30 | 2021-10-22 | Naval Aviation University of the Chinese People's Liberation Army | Track association method and apparatus based on a dispersion linear trend test |
CN114066944B (zh) * | 2022-01-17 | 2022-04-12 | Tianjin Juxin Guanghe Technology Co., Ltd. | Pedestrian-tracking-based analysis method for worker station behavior in an optical module production workshop |
CN114066944A (zh) * | 2022-01-17 | 2022-02-18 | Tianjin Juxin Guanghe Technology Co., Ltd. | Pedestrian-tracking-based analysis method for worker station behavior in an optical module production workshop |
CN116718197A (zh) * | 2023-08-09 | 2023-09-08 | Tencent Technology (Shenzhen) Co., Ltd. | Trajectory processing method and apparatus, electronic device, and storage medium |
CN116718197B (zh) * | 2023-08-09 | 2023-10-24 | Tencent Technology (Shenzhen) Co., Ltd. | Trajectory processing method and apparatus, electronic device, and storage medium |
CN117455955A (zh) * | 2023-12-14 | 2024-01-26 | Wuhan Textile University | Pedestrian multi-target tracking method from a UAV perspective |
CN117455955B (zh) * | 2023-12-14 | 2024-03-08 | Wuhan Textile University | Pedestrian multi-target tracking method from a UAV perspective |
Also Published As
Publication number | Publication date |
---|---|
CN105894542B (zh) | 2019-06-11 |
CN105894542A (zh) | 2016-08-24 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2017185688A1 (zh) | Online target tracking method and apparatus |
US11455735B2 (en) | Target tracking method, device, system and non-transitory computer readable storage medium | |
CN108447078B (zh) | Interference-aware tracking algorithm based on visual saliency |
US20220180534A1 (en) | Pedestrian tracking method, computing device, pedestrian tracking system and storage medium | |
CN113192105B (zh) | Method and apparatus for indoor multi-person tracking and pose estimation |
CN106570490B (zh) | Real-time pedestrian tracking method based on fast clustering |
CN101344965A (zh) | Tracking system based on binocular cameras |
CN111862145B (zh) | Target tracking method based on multi-scale pedestrian detection |
KR101023951B1 (ko) | Behavior recognition system and method |
CN107194950B (zh) | Multi-person tracking method based on slow feature analysis |
WO2018227491A1 (zh) | Video multi-target fuzzy data association method and apparatus |
CN114926859A (zh) | Pedestrian multi-target tracking method in dense scenes combining head tracking |
He et al. | Fast online multi-pedestrian tracking via integrating motion model and deep appearance model | |
CN106447698A (zh) | Multi-pedestrian tracking method and system based on distance sensors |
CN111986237A (zh) | Real-time multi-target tracking algorithm independent of the number of people |
Xue et al. | Multiple pedestrian tracking under first-person perspective using deep neural network and social force optimization | |
Zhong et al. | DynaTM-SLAM: Fast filtering of dynamic feature points and object-based localization in dynamic indoor environments | |
Shi et al. | Recognition of abnormal human behavior in elevators based on CNN | |
Yuan et al. | Multiple object detection and tracking from drone videos based on GM-YOLO and multi-tracker | |
Li et al. | Loitering detection based on trajectory analysis | |
CN112767438B (zh) | Multi-target tracking method combining spatiotemporal motion |
Li et al. | Improved CAMShift object tracking based on Epanechnikov Kernel density estimation and Kalman filter | |
Tan et al. | Sequence-tracker: Multiple object tracking with sequence features in severe occlusion scene | |
Tabassum et al. | Anonymous person tracking across multiple camera using color histogram and body pose estimation | |
Yan | Using the Improved SSD Algorithm to Motion Target Detection and Tracking |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| NENP | Non-entry into the national phase | Ref country code: DE |
| 121 | Ep: the EPO has been informed by WIPO that EP was designated in this application | Ref document number: 16900196; Country of ref document: EP; Kind code of ref document: A1 |
| 32PN | Ep: public notification in the EP bulletin as address of the addressee cannot be established | Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 12/03/2019) |
| 122 | Ep: PCT application non-entry in European phase | Ref document number: 16900196; Country of ref document: EP; Kind code of ref document: A1 |