WO2019006632A1 - Video multi-target tracking method and device - Google Patents

Video multi-target tracking method and device

Info

Publication number
WO2019006632A1
Authority
WO
WIPO (PCT)
Prior art keywords
trajectory
frame
new
prediction
result
Prior art date
Application number
PCT/CN2017/091574
Other languages
English (en)
French (fr)
Inventor
李良群
张富有
湛西羊
谢维信
刘宗香
Original Assignee
深圳大学
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳大学 filed Critical 深圳大学
Priority to PCT/CN2017/091574 priority Critical patent/WO2019006632A1/zh
Publication of WO2019006632A1 publication Critical patent/WO2019006632A1/zh

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/246Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/254Analysis of motion involving subtraction of images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/269Analysis of motion using gradient-based methods

Definitions

  • the present invention relates to the field of target tracking, and in particular to a video multi-target tracking method and apparatus.
  • Online target tracking is a hot research topic in computer vision. It is of great significance for high-level visual research such as action recognition, behavior analysis and scene understanding, and has wide application prospects in fields such as video surveillance, intelligent robots and human-computer interaction. In complex scenes, owing to deformation of the targets themselves, mutual occlusion between targets, or occlusion of targets by static background objects, and especially when a target is occluded for a long time or its trajectory has not been updated for a long time, target trajectories tend to break into fragments, which reduces tracking accuracy.
  • The technical problem to be solved by the present invention is to provide a video multi-target tracking method and device that can solve the problem of low tracking accuracy caused by the fragmentation of target trajectories in the prior art.
  • A technical solution adopted by the present invention is to provide a video multi-target tracking method, including: performing motion detection on the current video frame, and taking detected possible moving objects as observation results; performing data association between the observation results and prediction results of targets, where a prediction result is obtained by prediction using at least the trajectory of a target in the previous video frame; performing trajectory management on unassociated observation results and prediction results, the trajectory management including obtaining final trajectories from unassociated prediction results and obtaining new trajectories from unassociated observation results; performing trajectory association between the final trajectories and the new trajectories; performing trajectory fusion on the associated final trajectories and new trajectories; and deleting unassociated final trajectories that meet a preset condition.
  • Another technical solution adopted by the present invention is to provide a video multi-target tracking device, including a processor and a camera, the processor being connected to the camera; when operating, the processor is configured to implement the method described above.
  • The beneficial effects of the present invention are as follows: the present invention performs motion detection on the current video frame to obtain observation results, performs data association between the observation results and the prediction results of targets, performs trajectory management on unassociated observation results and prediction results to obtain final trajectories and new trajectories, performs trajectory association between the final trajectories and the new trajectories, performs trajectory fusion on the associated final trajectories and new trajectories, and deletes unassociated final trajectories that meet a preset condition. In this way, temporarily unassociated prediction results and observation results can generate final trajectories and new trajectories, and trajectory association and trajectory fusion are performed on the final and new trajectories, so that fragmented target trajectories are reconnected. The trajectories of the same target at different times are thus connected, improving the accuracy of target tracking and improving target tracking performance.
  • FIG. 1 is a schematic flow chart of a first embodiment of a video multi-target tracking method according to the present invention
  • FIG. 2 is a schematic flow chart of a second embodiment of a video multi-target tracking method according to the present invention.
  • FIG. 3 is a schematic diagram of the principle of using a fuzzy inference system to calculate the association probability of observation and prediction results;
  • FIG. 4 is a schematic diagram of the specific process of step S121 in FIG. 2;
  • FIG. 5 is a schematic diagram of a specific process of step S122 in FIG. 2;
  • FIG. 6 is a schematic diagram of the membership functions of the input fuzzy sets;
  • FIG. 7 is a schematic diagram of the membership functions of the output fuzzy sets;
  • FIG. 8 is a schematic flow chart of a third embodiment of a video multi-target tracking method according to the present invention.
  • FIG. 9 is a schematic diagram of the specific process of step S141 in FIG. 8;
  • FIG. 10 is a schematic diagram of a specific process of step S144 in FIG. 8;
  • FIG. 11 is a schematic flow chart of a fourth embodiment of a video multi-target tracking method according to the present invention.
  • FIG. 12 is a schematic flowchart diagram of a fifth embodiment of a video multi-target tracking method according to the present invention.
  • FIG. 13 is a schematic flowchart of a sixth embodiment of a video multi-target tracking method according to the present invention.
  • FIG. 14 is a schematic flow chart of an embodiment of a video multi-target tracking apparatus according to the present invention.
  • the first embodiment of the video multi-target tracking method of the present invention includes:
  • Step S11 Perform motion detection on the current video frame, and detect the possible moving object as an observation result
  • Specifically, motion detection algorithms such as the frame difference method, the optical flow method and the background subtraction method may be used to perform motion detection on the current video frame to find the pixels belonging to the moving foreground; supplemented by median filtering and simple morphological processing, the possible moving objects in the current video frame are finally obtained as observation results.
  • One observation is an image block in the current video frame.
  • In general, the shape of an observation is a rectangle.
  • In one application example, a Gaussian mixture detection method is used to perform motion detection on the current video frame, and the acquired observation result may include state information such as the x coordinate, y coordinate, height and width of the image block.
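  • As an illustration of this detection step, the following minimal sketch (not part of the original disclosure) uses the OpenCV Gaussian-mixture background subtractor together with median filtering and a morphological opening to turn one video frame into rectangular observations (x, y, width, height); the history length, the area threshold and the 3×3 kernel are illustrative assumptions.

```python
import cv2

# Gaussian-mixture background model (the history length is an illustrative choice)
subtractor = cv2.createBackgroundSubtractorMOG2(history=500, detectShadows=False)

def detect_observations(frame, min_area=100):
    """Return a list of (x, y, w, h) observations for one video frame."""
    fg = subtractor.apply(frame)                       # foreground mask of moving pixels
    fg = cv2.medianBlur(fg, 5)                         # median filtering to suppress noise
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))
    fg = cv2.morphologyEx(fg, cv2.MORPH_OPEN, kernel)  # simple morphological processing
    contours, _ = cv2.findContours(fg, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)  # OpenCV 4.x API
    # Every sufficiently large connected foreground region becomes one rectangular observation.
    return [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) >= min_area]
```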
  • Step S12 performing data association on the observation result and the prediction result of the target
  • The prediction result is the predicted state of a target obtained by prediction using at least the trajectory of the target in the previous video frame; it may include state information such as the x coordinate and y coordinate of the predicted target position, the velocity in the x and/or y direction, the height and the width. The prediction method may be determined according to actual needs and is not specifically limited herein.
  • Specifically, in one application example, a fuzzy inference system is used to perform data association between the observation results and the prediction results of targets: the fuzzy membership degree of each observation result and prediction result is obtained, the fuzzy membership degree is used in place of the association probability of the observation result and the prediction result, and an association probability matrix is established. Based on the maximum membership degree criterion, observation results and prediction results are associated according to the association probability matrix, and each correctly associated pair of prediction result and observation result constitutes a valid target trajectory.
  • In addition, for each successfully associated valid target trajectory, Kalman filtering is applied to its prediction result using the associated observation result so as to update the valid target trajectory.
  • Of course, in other application examples, other association methods may also be used to perform data association between the observation results and the prediction results of targets, which is not specifically limited herein.
  • Step S13 performing trajectory management on the unassociated observation results and prediction results
  • The trajectory management includes acquiring a final trajectory by using the unassociated prediction result, and acquiring a new trajectory by using the unassociated observation result;
  • In complex environments, false observations may occur owing to background interference, deformation of the targets themselves and other factors, for example multiple observations being detected for the same target, or multiple targets, or a target together with part of the background, being detected as a single observation.
  • An unassociated observation may be a newly appearing target or a false observation. It is therefore necessary to judge whether an unassociated observation is a false observation; an unassociated observation that is not a false observation is judged to be a new target, and a new trajectory is established for it.
  • Specifically, in one application example, the unassociated observation result may be taken as the starting point of a trajectory for prediction and data association with the observation results of the next frame; if the association succeeds for multiple consecutive frames (for example, 5 consecutive frames), the associated observation is judged to be a new target and a new trajectory is established for it.
  • Of course, in other application examples, other methods may also be used to determine whether an unassociated observation result is a new target, which is not specifically limited herein.
  • Unassociated prediction results may occur when a target moves out of the camera's field of view or is occluded by the background or by other targets.
  • For an unassociated prediction result, extrapolation prediction is continued and data association with the observation results of the next frame is attempted; if the target remains unassociated for multiple consecutive frames (for example, 5 consecutive frames), the target trajectory corresponding to the unassociated prediction result is taken as a final trajectory.
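  • A minimal sketch of this trajectory-management rule (an illustrative rendering, not the patent's implementation): each track keeps a hit counter and a miss counter, an unassociated observation becomes a confirmed new trajectory only after it keeps associating for several consecutive frames, and a trajectory whose prediction stays unassociated for several consecutive frames is moved to the set of final trajectories. The 5-frame thresholds follow the examples given above.

```python
from dataclasses import dataclass

@dataclass
class Track:
    track_id: int
    state: list                      # e.g. [x, y, vx, vy, h, w]
    hits: int = 0                    # consecutive frames with an associated observation
    misses: int = 0                  # consecutive frames without an associated observation
    confirmed: bool = False

def manage_tracks(tracks, associated_ids, confirm_frames=5, miss_frames=5):
    """Split tracks into (active, final) sets after one frame of data association."""
    active, final = [], []
    for t in tracks:
        if t.track_id in associated_ids:
            t.hits += 1
            t.misses = 0
            if t.hits >= confirm_frames:         # tentative track associated long enough
                t.confirmed = True               # -> treated as a new (valid) trajectory
            active.append(t)
        else:
            t.misses += 1
            t.hits = 0
            if t.misses >= miss_frames:          # unassociated for too many frames
                final.append(t)                  # -> becomes a final (terminated) trajectory
            else:
                active.append(t)                 # keep extrapolating for now
    return active, final
```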
  • Step S14 performing trajectory association on the final trajectory and the new trajectory
  • Specifically, in one application example, the fuzzy similarity between a final trajectory and a new trajectory is calculated using the average fuzzy similarities of features such as motion, shape, color, local binary pattern and velocity of the final trajectory and the new trajectory, and a fuzzy credibility matrix is established. Based on the maximum credibility principle, trajectory association is performed between final trajectories and new trajectories according to the fuzzy credibility matrix, and a pair consisting of a final trajectory and a new trajectory that are successfully associated for multiple consecutive frames (for example, 3 consecutive frames) is judged to be the trajectory of the same target.
  • Step S15 performing trajectory fusion on the associated final trajectory and the new trajectory
  • Specifically, in one application example, a linear interpolation method is used to insert video frames between the associated final trajectory and new trajectory, so that the associated final trajectory and new trajectory are merged into the same trajectory; at the same time, the track identifier (track ID) of the new trajectory can be set to the track ID of the associated final trajectory, so that the final trajectory is restored to an existing valid trajectory on which prediction and association can proceed normally, and target tracking continues.
  • Of course, in other application examples, other methods may also be used to perform trajectory fusion on the associated final trajectory and new trajectory, which is not specifically limited herein.
  • Step S16 Delete the unassociated final trajectory that meets the preset condition.
  • the preset condition is a preset invalid track condition, which can be set according to specific requirements, and is not specifically limited herein.
  • Specifically, in one application example, a final trajectory whose number of consecutive non-updated frames exceeds 70 is set as an invalid trajectory; when such a final trajectory has remained unassociated for 71 consecutive frames, it is treated as an invalid trajectory and is deleted, thereby saving storage space and controlling the amount of computation.
  • Of course, in other application examples, a final trajectory whose number of consecutive non-updated frames exceeds 50, 60 or 65 frames may also be set as an invalid trajectory, which is not specifically limited herein.
  • In this embodiment, observation results are obtained by performing motion detection on the current video frame, data association is performed between the observation results and the prediction results of targets, trajectory management is performed on the unassociated observation results and prediction results to obtain final trajectories and new trajectories, trajectory association is performed between the final trajectories and the new trajectories, trajectory fusion is performed on the associated final trajectories and new trajectories, and unassociated final trajectories that meet the preset condition are deleted. In this way, temporarily unassociated prediction results and observation results can generate final trajectories and new trajectories, and trajectory association and trajectory fusion are performed on them so that fragmented target trajectories are reconnected, thereby connecting the trajectories of the same target at different times, improving the accuracy of target tracking and improving target tracking performance.
  • As shown in FIG. 2, the second embodiment of the video multi-target tracking method of the present invention is based on the first embodiment, and step S12 further includes:
  • S121 Acquire the prediction error and the prediction error change of the target by using the observation results and the prediction results;
  • Specifically, as shown in FIG. 3, in this embodiment the similarities between the motion features, shape features, color features (i.e., RGB color histogram features) and local binary pattern (LBP) features of the observation results and the prediction results are used to construct the input variables E i,j (k) and ΔE i,j (k) of the fuzzy inference system, where E i,j (k) and ΔE i,j (k) respectively denote the prediction error and the prediction error change of the target.
  • Assume that the i-th prediction result in the k-th frame contains x i (k) and y i (k), the x coordinate and y coordinate of the i-th predicted target position, the velocities of the i-th target in the x and y directions, and h i (k) and w i (k), the height and width of the i-th target; and that the j-th observation in the current k-th frame contains the x coordinate, y coordinate, height and width of the j-th observation.
  • step S121 specifically includes:
  • where the state of the i-th prediction result of the (k-1)-th frame comprises its x coordinate, its velocity in the x direction, its y coordinate and its velocity in the y direction, H k is the observation matrix and F k is the state transition matrix; H k and F k are defined by the corresponding formulas, with τ = 1.
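  • The observation and state transition matrices themselves appear only in the drawings of the original application; the sketch below therefore assumes a standard constant-velocity model over the state [x, vx, y, vy] with sampling interval τ = 1, which is consistent with the description but is an assumption rather than the patent's exact matrices.

```python
import numpy as np

tau = 1.0  # sampling interval, per the description

# Assumed constant-velocity model over the state [x, vx, y, vy]
F = np.array([[1, tau, 0, 0],       # state transition matrix
              [0, 1,   0, 0],
              [0, 0,   1, tau],
              [0, 0,   0, 1]], dtype=float)
H = np.array([[1, 0, 0, 0],         # observation matrix: only x and y are measured
              [0, 0, 1, 0]], dtype=float)

def predict_position(state_k_minus_1):
    """Propagate the (k-1)-frame state and return the predicted (x, y) in frame k."""
    state_k = F @ np.asarray(state_k_minus_1, dtype=float)
    return H @ state_k

def motion_error(observation_xy, state_k_minus_1):
    """2-norm distance between an observation's position and the propagated prediction."""
    return float(np.linalg.norm(np.asarray(observation_xy, dtype=float) - predict_position(state_k_minus_1)))
```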
  • where h i (k) is the height of the i-th prediction result in the k-th frame, the corresponding observation term is the height of the j-th observation result in the k-th frame, and σ s is the height standard deviation.
  • In this embodiment, the height standard deviation σ s = 6; in other embodiments, the height standard deviation σ s may also take other values according to actual requirements, for example 8, which is not specifically limited herein.
  • Appearance is an important cue in multi-target tracking; in this embodiment, to make the appearance representation more robust, the RGB color feature and the LBP feature are used to extract the statistical and texture information of the target.
  • where N is the number of color spaces, H m (x i ) is the number of pixels of the i-th prediction result in the m-th color space, H n (z j ) is the number of pixels of the j-th observation result in the n-th color space, and the remaining two terms are the average numbers of pixels of the i-th prediction result and the j-th observation result over the N color spaces, respectively.
  • In this embodiment, the number of color spaces N = 16; in other embodiments, N may also take other values according to actual requirements, for example 8, which is not specifically limited herein.
  • where the three quantities are, respectively, the change in the motion model prediction error between the i-th prediction result and the j-th observation result in the k-th frame, the motion model prediction error between the i-th prediction result and the j-th observation result in the k-th frame, and the motion model prediction error between the i-th prediction result and the j-th observation result in the (k-1)-th frame;
  • where E i,j (k) is the prediction error between the i-th prediction result and the j-th observation result in the k-th frame, ΔE i,j (k) is the change in prediction error between the i-th prediction result and the j-th observation result in the k-th frame, and both E i,j (k) and ΔE i,j (k) take values in the range [0, 1].
  • In this embodiment, formulas (1)-(7) and formulas (8)-(11) are used to compute, respectively, the similarities between the motion features, shape features, color features (i.e., RGB color histogram features) and local binary pattern (LBP) features of the prediction results and the observation results, that is, the prediction error of each feature and the change in each prediction error, and the input variables E i,j (k) and ΔE i,j (k) of the fuzzy inference system are then constructed by formulas (12) and (13) for use in the subsequent fuzzy inference.
  • Of course, in other embodiments, other features of the observation results and the prediction results, such as velocity features, may also be used to calculate the prediction error and the prediction error change of the target, which is not specifically limited herein.
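  • How the four per-feature errors are folded into E i,j (k) and ΔE i,j (k) is specified by formulas (12) and (13), which appear only as drawings here; the sketch below therefore uses a simple mean of the per-feature terms as a stand-in combination. The averaging choice is an assumption for illustration only, and the per-feature errors are assumed to be pre-normalized to [0, 1].

```python
import numpy as np

def fuzzy_inputs(errors_k, errors_k_minus_1):
    """Build the fuzzy-inference inputs E_ij(k) and dE_ij(k).

    errors_k / errors_k_minus_1: dicts of per-feature prediction errors in [0, 1],
    e.g. {"motion": 0.2, "shape": 0.1, "color": 0.3, "lbp": 0.25}, for frames k and k-1.
    The mean used below is an illustrative stand-in for formulas (12)-(13).
    """
    feats = ("motion", "shape", "color", "lbp")
    E = float(np.mean([errors_k[f] for f in feats]))
    dE = float(np.mean([abs(errors_k[f] - errors_k_minus_1[f]) for f in feats]))
    return float(np.clip(E, 0.0, 1.0)), float(np.clip(dE, 0.0, 1.0))
```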
  • S122 Perform fuzzy inference by using prediction error and prediction error variation, and obtain an association probability between the observation result and the prediction result;
  • the fuzzy inference system mainly includes four basic elements: input fuzzification, establishment of fuzzy rule base, fuzzy inference engine, defuzzifier (fuzzy information precision output).
  • E i,j (k) and ⁇ E i,j (k) are used as input variables of the fuzzy inference machine, and the fuzzy membership degree of the observation result and the prediction result is obtained by fuzzy inference, and then the fuzzy membership degree is used instead.
  • step S122 specifically includes:
  • S1221 Determine an input fuzzy membership function corresponding to the prediction error and the amount of change of the prediction error according to the fuzzy rule;
  • In general, the accuracy of the output is affected by the number of fuzzy sets: the more fuzzy sets there are, the more accurate the output, but also the higher the computational complexity, so the number of fuzzy sets is usually chosen empirically.
  • In this embodiment, the input variables E i,j (k) and ΔE i,j (k) are fuzzified using five linguistic fuzzy sets {ZE, SP, MP, LP, VP}, as shown in FIG. 6.
  • Their fuzzy membership functions are denoted μ 0,ZE (k), μ 0,SP (k), μ 0,MP (k), μ 0,LP (k) and μ 0,VP (k), and the five fuzzy sets respectively represent zero, small positive, medium positive, large positive and very large. The output uses six fuzzy sets {ZE, SP, MP, LP, VP, EP}, where EP denotes the extremely large fuzzy set; as shown in FIG. 7, their membership functions are denoted μ 1,ZE (k), μ 1,SP (k), μ 1,MP (k), μ 1,LP (k), μ 1,VP (k) and μ 1,EP (k).
  • the fuzzy inference rules can be as shown in Table 1 below:
  • S1222 Obtain a fuzzy membership value corresponding to the prediction error and the change amount of the prediction error by using an input fuzzy membership function
  • Specifically, take rule 1 as an example: if E i,j (k) is ZE and ΔE i,j (k) is ZE, then μ i,j (k) is EP. According to rule 1, the fuzzy set corresponding to the fuzzy input variable E i,j (k) is ZE, so the corresponding fuzzy membership value can be obtained from the value of E i,j (k) using the fuzzy membership function shown in FIG. 6; the fuzzy membership value corresponding to the fuzzy input variable ΔE i,j (k) can be found in the same way.
  • S1223 Obtain the firing strength (degree of applicability) of each fuzzy rule from the fuzzy membership values. Specifically, the firing strength of rule 1 is obtained from the two fuzzy membership values with the take-small operation, where ∧ denotes taking the smaller value.
  • S1224 Obtain the fuzzy output of each fuzzy rule from its firing strength. According to rule 1, the corresponding output fuzzy set is EP, and the output of rule 1 is calculated from the firing strength of rule 1 accordingly; the fuzzy outputs of all the rules can be calculated in the same way.
  • As can be seen from Table 1, the number of rules in this embodiment is M = 25. Of course, in other embodiments, the number of rules may be set according to actual requirements, and is not specifically limited herein.
  • S1225 Obtain the association probability of the observation result and the prediction result from the fuzzy outputs of all the fuzzy rules. Specifically, the largest of the fuzzy outputs of all the fuzzy rules is taken, where ∨ denotes taking the larger value; since this is still a fuzzified output, a defuzzified output is obtained by weighting with the centroids of the output fuzzy sets corresponding to the fuzzy rules.
  • The defuzzified output result μ i,j (k) is the association probability of the j-th observation result and the i-th prediction result.
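  • The membership functions and the 25-rule table exist only as drawings (FIG. 6, FIG. 7, Table 1), so the following sketch is a generic Mamdani-style inference over the two inputs: triangular input memberships, min (take-small) firing strengths, max (take-large) aggregation per output set, and centroid-weighted defuzzification. The membership breakpoints, the rule table and the output centroids are illustrative assumptions, not the patent's values.

```python
import numpy as np

IN_SETS = ["ZE", "SP", "MP", "LP", "VP"]            # input fuzzy sets over [0, 1]
CENTERS = dict(zip(IN_SETS, [0.0, 0.25, 0.5, 0.75, 1.0]))
OUT_CENTROIDS = {"ZE": 0.0, "SP": 0.2, "MP": 0.4, "LP": 0.6, "VP": 0.8, "EP": 1.0}  # assumed centroids

def membership(x, name, width=0.25):
    """Triangular membership of x in input set `name` (assumed shape)."""
    return max(0.0, 1.0 - abs(x - CENTERS[name]) / width)

def rule_output_set(e_set, de_set):
    """Assumed rule table: small error and small change -> large association."""
    rank = (IN_SETS.index(e_set) + IN_SETS.index(de_set)) // 2
    return ["EP", "VP", "LP", "SP", "ZE"][rank]

def association_probability(E, dE):
    """Mamdani inference: min firing, max aggregation, centroid defuzzification."""
    aggregated = {name: 0.0 for name in OUT_CENTROIDS}
    for e_set in IN_SETS:
        for de_set in IN_SETS:                                            # 5 x 5 = 25 rules
            strength = min(membership(E, e_set), membership(dE, de_set))  # take-small
            out = rule_output_set(e_set, de_set)
            aggregated[out] = max(aggregated[out], strength)              # take-large
    num = sum(s * OUT_CENTROIDS[o] for o, s in aggregated.items())
    den = sum(aggregated.values())
    return num / den if den > 0 else 0.0
```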
  • S123 Establish an association probability matrix with the association probabilities as its elements. Specifically, by repeating the above steps S121 and S122, the association probabilities of all observation results and prediction results in the current frame are obtained, and the obtained association probabilities are taken as the elements of the association probability matrix.
  • S124 Select the unmarked element with the largest association probability from the association probability matrix, mark all elements in the row and column of that element, and associate the observation result and the prediction result corresponding to that element;
  • S125 Repeat the previous step until no unmarked row/column remains, then record the unassociated observation results to the unassociated observation set and the unassociated prediction results to the unassociated prediction set.
  • Specifically, step S124 is repeated until no unmarked row or no unmarked column remains in the association matrix U; the unassociated observation results corresponding to the columns that remain unmarked are recorded to the unassociated observation set, and the unassociated prediction results corresponding to the rows that remain unmarked are recorded to the unassociated prediction set.
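  • The row/column marking procedure of steps S123-S125 can be rendered as the following greedy-assignment sketch (illustrative, assuming the association probability matrix has predictions as rows and observations as columns): repeatedly pick the largest unmarked entry, pair its prediction and observation, mark its row and column, and finally report the rows and columns that were never paired as the unassociated sets.

```python
import numpy as np

def associate(prob, gate=0.0):
    """Greedy maximum-membership association over an (n_pred x n_obs) probability matrix."""
    U = np.array(prob, dtype=float)
    pairs = []
    while np.isfinite(U).any() and U.max() > gate:
        i, j = np.unravel_index(np.argmax(U), U.shape)   # largest unmarked element
        pairs.append((i, j))                             # prediction i <-> observation j
        U[i, :] = -np.inf                                # mark row i
        U[:, j] = -np.inf                                # mark column j
    matched_preds = {i for i, _ in pairs}
    matched_obs = {j for _, j in pairs}
    unassoc_preds = [i for i in range(U.shape[0]) if i not in matched_preds]
    unassoc_obs = [j for j in range(U.shape[1]) if j not in matched_obs]
    return pairs, unassoc_preds, unassoc_obs
```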
  • As shown in FIG. 8, the third embodiment of the video multi-target tracking method of the present invention is based on the first embodiment, and step S14 further includes:
  • S141 Calculate the fuzzy similarity of the final trajectory and the new trajectory;
  • ⁇ ij represents the fuzzy similarity of the final trajectory i and the new trajectory j.
  • the fuzzy similarity of the final trajectory and the new trajectory is calculated by using the similarity of the motion, the shape, the RGB color histogram, the LBP and the velocity feature of the final trajectory and the new trajectory.
  • other features may also be utilized, which are not specifically limited herein.
  • step S141 specifically includes:
  • where f M (i,j) is the motion feature similarity of the k-th frame final trajectory i and new trajectory j, computed from the motion model prediction error of the k-th frame final trajectory i and new trajectory j and the variance of the motion feature.
  • In this embodiment, the variance of the motion feature takes a fixed empirical value; in other embodiments, the variance of the motion feature can take other values according to actual needs, for example 4, which is not specifically limited herein.
  • where f S (i,j) is the shape feature similarity of the k-th frame final trajectory i and new trajectory j, computed from the shape model prediction error of the k-th frame final trajectory i and new trajectory j and the variance of the shape feature.
  • In this embodiment, the variance of the shape feature takes a fixed empirical value; in other embodiments, the variance of the shape feature can take other values according to actual needs, for example 4, which is not specifically limited herein.
  • where the color feature similarity of the k-th frame final trajectory i and new trajectory j is computed from the color feature prediction error of the k-th frame final trajectory i and new trajectory j and the variance of the color feature. In this embodiment, the variance of the color feature takes a fixed empirical value; in other embodiments, the variance of the color feature can take other values according to actual needs, for example 4, which is not specifically limited herein.
  • where the local binary pattern feature similarity of the k-th frame final trajectory i and new trajectory j is computed from the local binary pattern feature prediction error of the k-th frame final trajectory i and new trajectory j and the variance of the local binary pattern feature.
  • In this embodiment, the variance of the local binary pattern feature takes a fixed empirical value; in other embodiments, the variance of the local binary pattern feature can take other values according to actual needs, for example 4, which is not specifically limited herein.
  • where f V (i,j) is the velocity feature similarity of the k-th frame final trajectory i and new trajectory j, x i and y i are the x coordinate and y coordinate of the k-th frame final trajectory i, x j and y j are the x coordinate and y coordinate of the k-th frame new trajectory j, and the remaining terms are the velocities of the k-th frame final trajectory i in the x and y directions and the variance of the velocity feature.
  • In this embodiment, the variance of the velocity feature takes a fixed empirical value; in other embodiments, the variance of the velocity feature can take other values according to actual needs, for example 80, which is not specifically limited herein.
  • ⁇ ij S k ( ⁇ k (i,j)) is the fuzzy similarity of the k-frame end trajectory i and the new trajectory j, For the k-th frame, the average similarity of the features of the track i and the new track j is terminated, ⁇ k (i, j) is the similarity vector, and ⁇ k (i, j) is defined as follows:
  • the average similarity of the motion, shape, color, local binary mode, and velocity characteristics of the k-th frame ending track i and the new track j are respectively defined as:
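  • A sketch of the track-to-track similarity used here (illustrative): each feature distance is turned into a Gaussian-kernel similarity exp(-d²/σ²), the per-frame similarities are averaged over the overlapping frames, and the per-feature averages are combined into one score. The variances and the simple mean used for the final combination are assumptions; the exact combination S k (Λ k (i,j)) of formula (19) is given only in the drawings.

```python
import numpy as np

# Assumed per-feature variances (using the example values 4 and 80 from the text)
VARIANCES = {"motion": 4.0, "shape": 4.0, "color": 4.0, "lbp": 4.0, "velocity": 80.0}

def feature_similarity(distance, feature):
    """Gaussian-kernel similarity for one feature distance."""
    return float(np.exp(-(distance ** 2) / VARIANCES[feature]))

def fuzzy_track_similarity(per_frame_distances):
    """Combine per-feature, per-frame distances between a final track i and a new track j.

    per_frame_distances: dict feature -> list of distances over the compared frames,
    e.g. {"motion": [1.2, 0.8, 1.0], "shape": [...], ...}.
    """
    avg_sims = {f: float(np.mean([feature_similarity(d, f) for d in ds]))
                for f, ds in per_frame_distances.items()}
    # Illustrative combination: plain mean of the per-feature average similarities.
    return float(np.mean(list(avg_sims.values()))), avg_sims
```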
  • S142 Determine whether the maximum fuzzy similarity of the final trajectory and the new trajectory is not less than a credibility threshold
  • Specifically, the maximum fuzzy similarity μ ij* of the final trajectory and the new trajectory is selected; if μ ij* ≥ ε, final trajectory i is associated with new trajectory j*, and new trajectory j* is no longer associated with other final trajectories, where ε denotes the credibility threshold. In this embodiment, 0.5 ≤ ε ≤ 1.
  • Of course, in other embodiments, the credibility threshold ε may also adopt other value ranges according to requirements, for example 0.4 ≤ ε ≤ 0.9, which is not specifically limited herein.
  • Further, as shown in FIG. 10, step S144 (determining whether the final trajectory and the new trajectory remain associated for a preset number of consecutive frames) specifically includes:
  • the track quality is defined as follows:
  • m ij* (k) is the trajectory quality of the k-th frame ending trajectory i and the new trajectory j
  • m ij* (k-1) is the trajectory quality of the k-1th frame ending trajectory i and the new trajectory j.
  • In this embodiment, the preset threshold is 3. If final trajectory i is associated with new trajectory j, the trajectory quality m ij* (k) is increased by 1; if m ij* (k) ≥ 3, final trajectory i and new trajectory j are considered to be the same trajectory, and the subsequent step of performing trajectory fusion on the associated final trajectory and new trajectory can be executed.
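  • The gating of steps S142-S145 and S1441-S1442 can be sketched as below (illustrative): the best-matching new trajectory is accepted only when its fuzzy similarity reaches the credibility threshold ε, a per-pair trajectory-quality counter is incremented on each associated frame, and fusion is triggered once the counter reaches the preset threshold of 3.

```python
def update_track_association(mu_row, quality, epsilon=0.5, quality_threshold=3):
    """One frame of final-track-to-new-track association for a single final track i.

    mu_row: dict new_track_id -> fuzzy similarity mu_ij at this frame.
    quality: dict new_track_id -> accumulated trajectory quality m_ij (mutated in place).
    Returns the new_track_id to fuse with, or None.
    """
    if not mu_row:
        return None
    j_star = max(mu_row, key=mu_row.get)          # most similar new trajectory
    if mu_row[j_star] >= epsilon:                 # credibility threshold (0.5 <= eps <= 1 here)
        quality[j_star] = quality.get(j_star, 0) + 1
        if quality[j_star] >= quality_threshold:  # associated for enough frames
            return j_star                         # -> judged the same target, trigger fusion
    return None
```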
  • This embodiment can be combined with the first and/or second embodiment of the video multi-target tracking method of the present invention.
  • As shown in FIG. 11, the fourth embodiment of the video multi-target tracking method of the present invention is based on the first embodiment, and step S13 further includes:
  • S131 Perform prediction for the unassociated prediction result, and continue to associate it with the observation results of the next video frame;
  • S132 Determine whether the number of consecutive frames in which the unassociated prediction result remains unassociated is greater than a first threshold;
  • S133 If it is greater than the first threshold, the trajectory of the target corresponding to the unassociated prediction result is taken as a final trajectory.
  • Specifically, in one application example, extrapolation prediction is continued for the unassociated prediction result and data association with the observation results of the next frame is attempted; if the number of consecutive frames in which the target is not associated is greater than the first threshold (for example, 5), the target trajectory corresponding to the unassociated prediction result is taken as the final trajectory.
  • the specific value of the first threshold may be set according to actual requirements, and is not specifically limited herein.
  • S134 Perform prediction starting from the unassociated observation result, and continue to associate it with the observation results of the next video frame;
  • S135 determining whether the number of consecutively associated frames of the unrelated observation result is not less than a second threshold
  • S136 If it is not less than the second threshold, the trajectory of the target corresponding to the unassociated observation result is taken as a new trajectory.
  • Specifically, in one application example, the unassociated observation result may be taken as the starting point of a trajectory for prediction, and data association with the observation results of the next frame is continued; if the number of consecutively and successfully associated frames is not less than the second threshold (for example, 4), the associated observation is judged to be a new target, for which a new trajectory is established.
  • the specific value of the second threshold may be set according to actual requirements, and is not specifically limited herein.
  • This embodiment can also be combined with the second and/or third embodiment of the video multi-target tracking method of the present invention.
  • As shown in FIG. 12, the fifth embodiment of the video multi-target tracking method of the present invention is based on the first embodiment, and step S15 further includes:
  • S151 Perform linear interpolation between the associated final trajectory and the new trajectory to connect the associated final trajectory and the new trajectory.
  • Specifically, in one application example, a linear interpolation method is used to evenly insert video frames between the associated final trajectory and the new trajectory, so that the associated final trajectory and the new trajectory are merged into the same trajectory.
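  • A minimal sketch of this fusion step (illustrative): the bounding boxes in the gap between the last frame of the final trajectory and the first frame of the new trajectory are filled in by linear interpolation, and the new trajectory inherits the track ID of the final trajectory. The (x, y, w, h) box representation is an assumption.

```python
import numpy as np

def fuse_tracks(final_track, new_track):
    """Merge an associated (final_track, new_track) pair into one trajectory.

    Each track is a dict: {"id": int, "frames": [k0, k1, ...], "boxes": [(x, y, w, h), ...]}.
    """
    last_frame, last_box = final_track["frames"][-1], np.array(final_track["boxes"][-1], float)
    first_frame, first_box = new_track["frames"][0], np.array(new_track["boxes"][0], float)
    gap = first_frame - last_frame
    # Evenly interpolate boxes for the missing frames between the two track segments.
    for step in range(1, gap):
        alpha = step / gap
        box = (1.0 - alpha) * last_box + alpha * first_box
        final_track["frames"].append(last_frame + step)
        final_track["boxes"].append(tuple(box))
    # Append the new segment and let it inherit the final track's ID.
    final_track["frames"].extend(new_track["frames"])
    final_track["boxes"].extend(new_track["boxes"])
    new_track["id"] = final_track["id"]
    return final_track
```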
  • This embodiment can also be combined with any of the second to fourth embodiments of the video multi-target tracking method of the present invention or a combination thereof.
  • As shown in FIG. 13, the sixth embodiment of the video multi-target tracking method of the present invention is based on the first embodiment, and step S16 further includes:
  • S161 Determine whether the number of non-updated frames of the unassociated final trajectory is greater than a preset number of frames;
  • S162 If it is greater, delete the unassociated final trajectory.
  • Specifically, in one application example, if the number of non-updated frames of the unassociated final trajectory is greater than the preset number of frames (for example, 70 frames), the final trajectory is considered to be an invalid trajectory and is deleted, thereby saving storage space and controlling the amount of computation.
  • the preset number of frames can be set according to actual requirements, and is not specifically limited herein.
  • This embodiment can also be combined with any of the second to fifth embodiments of the video multi-target tracking method of the present invention or a combination thereof.
  • an embodiment of the video multi-target tracking device of the present invention includes a processor 110 and a camera 120.
  • The camera 120 can be a local camera, in which case the processor 110 is connected to the camera 120 via a bus; the camera 120 can also be a remote camera, in which case the processor 110 is connected to the camera 120 via a local area network or the Internet.
  • The processor 110 controls the operation of the video multi-target tracking device; the processor 110 may also be referred to as a CPU (Central Processing Unit).
  • Processor 110 may be an integrated circuit chip with signal processing capabilities.
  • The processor 110 can also be a general purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components.
  • The general purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
  • The video multi-target tracking device may further include a memory (not shown in the figure) for storing instructions and data necessary for the operation of the processor 110, and for storing the video data captured by the camera 120.
  • When operating, the processor 110 is configured to implement the method according to any one of the first to sixth embodiments of the video multi-target tracking method of the present invention, or any non-contradictory combination thereof; for the specific functions, reference may be made to the descriptions in the corresponding embodiments of the video multi-target tracking method of the present invention, which are not repeated here.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The present invention discloses a video multi-target tracking method and device. The method includes: performing motion detection on the current video frame, and taking detected possible moving objects as observation results; performing data association between the observation results and the prediction results of targets, where the prediction results are obtained by prediction using at least the trajectories of the targets in the previous video frame; performing trajectory management on unassociated observation results and prediction results, the trajectory management including obtaining final trajectories from unassociated prediction results and obtaining new trajectories from unassociated observation results; performing trajectory association between the final trajectories and the new trajectories; performing trajectory fusion on the associated final trajectories and new trajectories; and deleting unassociated final trajectories that meet a preset condition. In this way, the present invention can improve the accuracy of target tracking.

Description

一种视频多目标跟踪方法及装置 【技术领域】
本发明涉及目标跟踪领域,特别是涉及一种视频多目标跟踪方法及装置。
【背景技术】
在线目标跟踪是计算机视觉中的一个热点研究课题,其对于动作识别、行为分析、场景理解等高层次的视觉研究具有重要意义,并且在视频监控、智能机器人、人机交互等领域有着广泛的应用前景。
在复杂场景下,由于目标自身形变、目标间相互遮挡或者背景静物对目标的遮挡等因素的影响,尤其是目标被长时间遮挡或目标轨迹长时间没有更新的情况下,目标轨迹会出现断批现象,降低目标跟踪精度。
【发明内容】
本发明主要解决的技术问题是提供一种视频多目标跟踪方法及装置,能够解决现有技术中由于目标轨迹的断批现象导致跟踪精度低的问题。
为了解决上述技术问题,本发明采用的一个技术方案是:提供一种视频多目标跟踪方法,包括:对当前视频帧进行运动检测,检测得到的可能运动对象作为观测结果;对观测结果和目标的预测结果进行数据关联,其中预测结果是至少利用前一视频帧的目标的轨迹进行预测而得到的;对未被关联的观测结果和预测结果进行轨迹管理,轨迹管理包括利用未被关联的预测结果获取终结轨迹,利用未被关联的观测结果获取新轨迹;对终结轨迹和新轨迹进行轨迹关联;对被关联的终结轨迹和新轨迹进行轨迹融合;删除符合预设条件的未被关联的终结轨迹。
为了解决上述技术问题,本发明采用的另一个技术方案是:提供一种视频多目标跟踪装置,包括:处理器和摄像机,处理器连接摄像机;处理器工作时,用于实现如上所述的方法。
本发明的有益效果是:本发明对当前视频帧进行运动检测获得观测结果,对观测结果和目标的预测结果进行数据关联,对未被关联的观测结果和预测结果进行轨迹管理,以获得终结轨迹和新轨迹,对终结轨迹和新轨迹进行轨迹关联,对被关联的终结轨迹和新轨迹进行轨迹融合,并删除符合预设条件的未被关联的终结轨迹,使得暂时未被关联的预测结果和观测结果可以生成终结轨迹和新轨迹,利用终结轨迹和新轨迹进行轨迹关联和轨迹融合,使得断批的目标轨迹重新连接,从而实现同一目标不同时刻的轨迹连接,提高目标跟踪的精度,改善目标跟踪的性能。
【附图说明】
图1是本发明视频多目标跟踪方法第一实施例的流程示意图;
图2是本发明视频多目标跟踪方法第二实施例的流程示意图;
图3是利用模糊推理系统计算获取观测结果和预测结果的关联概率的原理 示意图;
图4是图2中步骤S121的具体流程示意图;
图5是图2中步骤S122的具体流程示意图;
图6是输入模糊集的隶属度函数示意图;
图7是输出模糊集的隶属度函数示意图;
图8是本发明视频多目标跟踪方法第三实施例的流程示意图;
图9是图8中步骤S141的具体流程示意图;
图10是图8中步骤S144的具体流程示意图;
图11是本发明视频多目标跟踪方法第四实施例的流程示意图;
图12是本发明视频多目标跟踪方法第五实施例的流程示意图;
图13是本发明视频多目标跟踪方法第六实施例的流程示意图;
图14是本发明视频多目标跟踪装置一实施例的流程示意图。
【具体实施方式】
如图1所示,本发明视频多目标跟踪方法第一实施例包括:
步骤S11:对当前视频帧进行运动检测,检测得到的可能运动对象作为观测结果;
具体地,可以使用帧差法、光流法、背景减除法等运动检测算法对当前视频帧进行运动检测,以从中找出属于运动前景的像素,辅以中值滤波和简单的形态学处理,最终得到当前视频帧中的可能运动对象作为观测结果。一个观测结果是当前视频帧中的一个图像块,一般而言,观测结果的形状为矩形。在一个应用例中,采用混合高斯检测方法对当前视频帧进行运动检测,获取的观测结果可以包括该图像块的x坐标、y坐标、高度和宽度等状态信息。
步骤S12:对观测结果和目标的预测结果进行数据关联;
其中,预测结果是至少利用前一视频帧的目标的轨迹进行预测而得到的目标的预测状态,预测结果可以包括目标预测位置的x坐标、y坐标、x和/或y方向的速度、高度、宽度等状态信息,该预测方法可以根据实际需求而定,此处不做具体限定。
具体地,在一个应用例中,利用模糊推理系统对观测结果和目标的预测结果进行数据关联,获取观测结果和预测结果的模糊隶属度,采用该模糊隶属度代替观测结果和预测结果的关联概率,建立关联概率矩阵,基于最大隶属度准则,根据关联概率矩阵对观测结果和预测结果进行关联,正确关联上的一对预测结果与观测结果为有效目标轨迹。此外,对关联成功的该有效目标轨迹利用其关联的观测结果对其预测结果进行卡尔曼滤波以更新该有效目标轨迹。
当然,在其他应用例中,也可以采用其他关联方法对观测结果和目标的预测结果进行数据关联,此处不做具体限定。
步骤S13:对未被关联的观测结果和预测结果进行轨迹管理;
其中,轨迹管理包括利用未被关联的预测结果获取终结轨迹,利用未被关联的观测结果获取新轨迹;
复杂环境下,由于背景干扰、目标自身形变等多种因素的影响,可能出现 虚假观测结果,例如对同一个目标检测出多个观测结果,将多个目标或者目标与背景作为观测结果等。未被关联的观测结果可能是新出现的目标,也可能是虚假观测结果,因此需要判断未被关联的观测结果是否为虚假观测结果,不是虚假观测结果的未被关联的观测结果被判定为新的目标,为其建立新轨迹。
具体地,在一个应用例中,可以将该未被关联的观测结果作为轨迹起始点进行预测,并与下一帧的观测结果进行数据关联,若连续多帧(例如连续5帧)关联成功,则该被关联的观测结果被判定为新的目标,为其建立新轨迹。当然,在其他应用例中,也可以采用其他方法判定该未被关联的观测结果是否为新的目标,此处不做具体限定。
当目标移动出摄像机的拍摄范围、被背景或者其他目标遮挡时,可能会出现未被关联的预测结果。对未被关联的预测结果,继续进行外推预测,并与下一帧观测结果进行数据关联,若该目标连续多帧(例如连续5帧)未被关联,则将该未被关联的预测结果对应的目标轨迹作为终结轨迹。
步骤S14:对终结轨迹和新轨迹进行轨迹关联;
具体地,在一个应用例中,利用终结轨迹和新轨迹的运动、形状、颜色、局部二进制模式、速度等各个特征的平均模糊相似度,计算终结轨迹和新轨迹的模糊相似度,建立模糊可信度矩阵,基于最大可信度原则,根据模糊可信度矩阵对终结轨迹和新轨迹进行轨迹关联,连续多帧(例如连续3帧)成功关联上的一对终结轨迹和新轨迹被判定为同一目标的轨迹。
当然,在其他应用例,也可以采用其他方法对终结轨迹和新轨迹进行轨迹关联,此处不做具体限定。
步骤S15:对被关联的终结轨迹和新轨迹进行轨迹融合;
具体地,在一个应用例中,采用线性插值方法在被关联的终结轨迹和新轨迹之间插入视频帧,以使得被关联的终结轨迹和新轨迹融合为同一轨迹,同时,可以将新轨迹的轨迹标识(轨迹ID)设置为关联的终结轨迹的轨迹ID,从而使得该终结轨迹恢复为可正常预测关联的已存在的有效轨迹,继续进行目标跟踪。
当然,在其他应用例中,也可以采用其他方法对被关联的终结轨迹和新轨迹进行轨迹融合,此处不做具体限定。
步骤S16:删除符合预设条件的未被关联的终结轨迹。
其中,预设条件是预先设置的无效轨迹条件,可以根据具体需求设置,此处不做具体限定。
具体地,在一个应用例中,将连续未更新帧数超过70帧的终结轨迹设置为无效轨迹,则该终结轨迹连续71帧未被关联时,该终结轨迹为无效轨迹,予以删除,从而节省存储空间,控制计算量。
当然,在其他应用例中,也可以将连续未更新帧数超过50/60/65帧的终结轨迹设置为无效轨迹,此处不做具体限定。
本实施方式中,通过对当前视频帧进行运动检测获得观测结果,对观测结果和目标的预测结果进行数据关联,对未被关联的观测结果和预测结果进行轨迹管理,以获得终结轨迹和新轨迹,对终结轨迹和新轨迹进行轨迹关联,对被关联的终结轨迹和新轨迹进行轨迹融合,并删除符合预设条件的未被关联的终 结轨迹,使得暂时未被关联的预测结果和观测结果可以生成终结轨迹和新轨迹,利用终结轨迹和新轨迹进行轨迹关联和轨迹融合,使得断批的目标轨迹重新连接,从而实现不同时刻同一目标的轨迹连接,提高目标跟踪的精度,改善目标跟踪的性能。
如图2所示,本发明视频多目标跟踪方法第二实施例是在本发明视频多目标跟踪方法第一实施例的基础上,步骤S12进一步包括:
S121:利用观测结果和预测结果获取目标的预测误差和预测误差变化量;
具体地,结合图3所示,在本实施例中,利用观测结果和预测结果的运动特征、形状特征、颜色特征(即RGB颜色直方图特征)和局部二进制模式特征(Local Binary Patterns,LBP)之间的相似性来构建模糊推理系统的输入变量Ei,j(k)和ΔEi,j(k),其中Ei,j(k)和ΔEi,j(k)分别表示该目标的预测误差和误差变化量。
其中,假设第k帧中第i个预测结果为
Figure PCTCN2017091574-appb-000001
其中
Figure PCTCN2017091574-appb-000002
xi(k)表示第i个目标预测位置的x坐标,yi(k)表示第i个目标预测位置的y坐标,
Figure PCTCN2017091574-appb-000003
Figure PCTCN2017091574-appb-000004
分别表示第i个目标在x和y方向的速度,hi(k)和wi(k)分别表示第i个目标的高度和宽度;同时假设当前第k帧中第j个观测结果为
Figure PCTCN2017091574-appb-000005
其中
Figure PCTCN2017091574-appb-000006
Figure PCTCN2017091574-appb-000007
Figure PCTCN2017091574-appb-000008
分别表示第j个观测结果的x坐标、y坐标、高度和宽度。
进一步地,如图4所示,步骤S121具体包括:
S1211:利用公式(1)计算目标的运动模型预测误差:
Figure PCTCN2017091574-appb-000009
其中,||·||2为二范数,
Figure PCTCN2017091574-appb-000010
为当前第k帧中第i个预测结果和第j个观测结果之间的运动模型预测误差,
Figure PCTCN2017091574-appb-000011
为第k帧中第j个观测结果的x坐标和y坐标,
Figure PCTCN2017091574-appb-000012
为第k-1帧中第i个预测结果,
Figure PCTCN2017091574-appb-000013
为第k帧中第j个预测结果,定义为:
Figure PCTCN2017091574-appb-000014
其中,
Figure PCTCN2017091574-appb-000015
为第k-1帧第i个预测结果的x坐标、在x方向的速度、y坐标以及在y方向的速度,Hk为观测矩阵,
Figure PCTCN2017091574-appb-000016
为状态转移矩阵,Hk
Figure PCTCN2017091574-appb-000017
分别定义如下:
Figure PCTCN2017091574-appb-000018
其中τ=1。
S1212:利用公式(4)计算目标的形状模型预测误差:
Figure PCTCN2017091574-appb-000019
其中,
Figure PCTCN2017091574-appb-000020
为当前第k帧中第i个预测结果和第j个观测结果之间的形状模型预测误差,hi(k)为第k帧中第i个预测结果的高度,
Figure PCTCN2017091574-appb-000021
为第k帧中第j个观测结果的高度,σs为高度标准方差;
本实施例中,高度标准方差σs=6,在其他实施例中,该高度标准方差σs也可以根据实际需求取其他值,例如8,此处不做具体限定。
表观是多目标跟踪中的一个重要的特征,本实施例中,为了使得表观更具鲁棒性,利用RGB颜色特征和LBP特征提取目标的统计和纹理信息。
S1213:利用公式(5)计算目标的颜色特征预测误差:
Figure PCTCN2017091574-appb-000022
其中,
Figure PCTCN2017091574-appb-000023
为当前第k帧中第i个预测结果和第j个观测结果之间的颜色特征预测误差,N为颜色空间的个数,Hm(xi)为第i个预测结果在第m个颜色空间中的像素个数,Hn(zj)为第j个观测结果在第n个颜色空间中的像素个数,
Figure PCTCN2017091574-appb-000024
Figure PCTCN2017091574-appb-000025
分别为第i个预测结果和第j个观测结果在N个颜色空间中的平均像素个数;
本实施例中,颜色空间的个数N=16,在其他实施例中,该颜色空间的个数N也可以根据实际需求取其他值,例如8,此处不做具体限定。
S1214:利用公式(6)计算目标的局部二进制模式特征预测误差:
Figure PCTCN2017091574-appb-000026
其中,
Figure PCTCN2017091574-appb-000027
为当前第k帧中第i个预测结果和第j个观测结果之间的局部二进制模式特征预测误差,
Figure PCTCN2017091574-appb-000028
Figure PCTCN2017091574-appb-000029
分别为第i个预测结果和第j个观测结果的局部二进制模式特征直方图,ρ为
Figure PCTCN2017091574-appb-000030
Figure PCTCN2017091574-appb-000031
的巴氏系数,定义如下:
Figure PCTCN2017091574-appb-000032
S1215:利用公式(8)计算目标的运动模型预测误差变化量:
Figure PCTCN2017091574-appb-000033
其中,
Figure PCTCN2017091574-appb-000034
为第k帧中第i个预测结果和第j个观测结果之间的运动模型预测误差变化量,
Figure PCTCN2017091574-appb-000035
为第k帧中第i个预测结果和第j个观测结果之间的运动模型预测误差,
Figure PCTCN2017091574-appb-000036
为第k-1帧中第i个预测结果和第j个观测结果之间的运动模型预测误差;
S1216:利用公式(9)计算目标的形状模型预测误差变化量:
Figure PCTCN2017091574-appb-000037
其中,
Figure PCTCN2017091574-appb-000038
为第k帧中第i个预测结果和第j个观测结果之间的形状模型预测误差变化量,
Figure PCTCN2017091574-appb-000039
为第k帧中第i个预测结果和第j个观测结果之间的形状模型预测误差,
Figure PCTCN2017091574-appb-000040
为第k-1帧中第i个预测结果和第j个观测结果之间的形状模型预测误差;
S1217:利用公式(10)计算目标的颜色特征预测误差变化量:
Figure PCTCN2017091574-appb-000041
其中,
Figure PCTCN2017091574-appb-000042
为第k帧中第i个预测结果和第j个观测结果之间的颜色特 征预测误差变化量,
Figure PCTCN2017091574-appb-000043
为第k帧中第i个预测结果和第j个观测结果之间的颜色特征预测误差,
Figure PCTCN2017091574-appb-000044
为第k-1帧中第i个预测结果和第j个观测结果之间的颜色特征预测误差;
S1218:利用公式(11)计算目标的局部二进制模式特征预测误差变化量:
Figure PCTCN2017091574-appb-000045
其中,
Figure PCTCN2017091574-appb-000046
为第k帧中第i个预测结果和第j个观测结果之间的局部二进制模式特征预测误差变化量,
Figure PCTCN2017091574-appb-000047
为第k帧中第i个预测结果和第j个观测结果之间的局部二进制模式特征预测误差,
Figure PCTCN2017091574-appb-000048
为第k-1帧中第i个预测结果和第j个观测结果之间的局部二进制模式特征预测误差;
S1219:利用公式(12)和(13)计算目标的预测误差和预测误差变化量:
Figure PCTCN2017091574-appb-000049
Figure PCTCN2017091574-appb-000050
其中,Ei,j(k)为第k帧中第i个预测结果和第j个观测结果之间的预测误差,ΔEi,j(k)为第k帧中第i个预测结果和第j个观测结果之间的预测误差变化量,Ei,j(k)和ΔEi,j(k)的取值范围为[0,1]。
本实施例中,利用公式(1-7)和公式(8-11)分别计算出预测结果和观测结果的运动特征、形状特征、颜色特征(即RGB颜色直方图特征)和局部二进制模式特征(Local Binary Patterns,LBP)之间的相似性,即各特征的预测误差和预测误差变化量,再通过公式(11-12)来构建模糊推理系统的输入变量Ei,j(k)和ΔEi,j(k),以便于后续模糊推理。
当然,在其他实施例中,也可以采用观测结果和预测结果的其他特征,例如速度特征等计算目标的预测误差和预测误差变化量,此处不做具体限定。
S122:利用预测误差和预测误差变化量进行模糊推理,获取观测结果和预测结果的关联概率;
其中,模糊推理系统主要包含四个基本要素:输入的模糊化、建立模糊规则库、模糊推理机、去模糊器(模糊信息精确化输出)。在本实施例中,利用Ei,j(k)和ΔEi,j(k)作为模糊推理机的输入变量,通过模糊推理得到观测结果与预测结果 的模糊隶属度,然后用模糊隶属度代替观测结果与预测结果之间的关联概率,从而实现数据关联和不确定性的处理。
进一步地,如图5所示,步骤S122具体包括:
S1221:根据模糊规则分别确定预测误差和预测误差变化量对应的输入模糊隶属度函数;
一般来说,输出的精度受到模糊集数量的影响,模糊集越多,输出就越精确;但同时,模糊集越多,计算复杂度就越大,所以通常模糊集数量是由经验选取的。
本实施例中,输入变量Ei,j(k)和ΔEi,j(k)利用5个语言模糊集{ZE,SP,MP,LP,VP}进行模糊化,如图6所示,其模糊隶属函数分别用μ0,ZE(k)、μ0,SP(k)、μ0,MP(k)、μ0,LP(k)、μ0,VP(k),5个模糊集分别表示零、正小、正中、正大和非常大。对于输出,包含五个模糊集:{ZE,SP,MP,LP,VP,EP},其中EP表示极大模糊集,如图7所示,其模糊隶属函数分别用μ1,ZE(k)、μ1,SP(k)、μ1,MP(k)、μ1,LP(k)、μ1,VP(k)、μ1,EP(k)表示。
根据上述定义的输入和输出模糊集,模糊推理规则可以如下表1所示:
表1模糊规则
Figure PCTCN2017091574-appb-000051
S1222:利用输入模糊隶属度函数分别获取预测误差和预测误差变化量对应的模糊隶属度值;
具体地,以规则1:如果Ei,j(k)是ZE,并且ΔEi,j(k)是ZE,则μi,j(k)是EP为例,根据规则1,模糊输入变量Ei,j(k)对应的模糊集为ZE,则可以根据图6所示的模糊隶属函数,利用Ei,j(k)的值求出对应的模糊隶属度值
Figure PCTCN2017091574-appb-000052
可以采用同样的方法求出模糊输入变量ΔEi,j(k)对应的模糊隶属度值
Figure PCTCN2017091574-appb-000053
S1223:利用模糊隶属度值获取模糊规则的适用度;
具体地,利用如下公式计算出规则1的适用度:
Figure PCTCN2017091574-appb-000054
其中,∧表示取小。
S1224:利用适用度获取模糊规则的模糊输出;
具体地,根据规则1,对应的模糊输出为EP,则规则1的输出可以用如下公式计算:
Figure PCTCN2017091574-appb-000055
采用同样的方法,可以计算出所有规则的模糊输出
Figure PCTCN2017091574-appb-000056
其中,根据表1可知,本实施例中,规则的数量M=25。当然,在其他实施例中,规则的 数量可以根据实际需求设置,此处不做具体限定。
S1225:利用所有模糊规则的模糊输出获取观测结果和预测结果的关联概率。
具体地,所有模糊规则的模糊输出中最大的模糊输出为:
Figure PCTCN2017091574-appb-000057
其中,∨表示取大。由于上述公式得到的是一个模糊化的输出,可以利用如下公式获得去模糊化的输出结果:
Figure PCTCN2017091574-appb-000058
其中,
Figure PCTCN2017091574-appb-000059
表示模糊规则m对应输出模糊集合的质心。
该去模糊化的输出结果μi,j(k)即为第j个观测结果和第i个预测结果的关联概率。
S123:以关联概率为元素建立关联概率矩阵;
具体地,重复上述步骤S121和S122,可以获取的当前帧中所有观测结果和预测结果的关联概率,然后将获取的关联概率作为元素,建立关联概率矩阵
Figure PCTCN2017091574-appb-000060
S124:从关联概率矩阵中选取未被标记的关联概率最大值的元素,标记关联概率最大值的元素所在行和列的所有元素,并将关联概率最大值的元素对应的观测结果和预测结果关联;
具体地,基于最大隶属度原则,从关联矩阵U中未被标记的所有元素中找出最大值μpq=max([μij(k)],i=1,2,...,m;j=1,2,...,n;并标记关联矩阵U中的第p行所有元素以及第q列所有元素,即标记该最大值μpq对应的行和列的所有元素,同时确认第p个预测结果与第q个观测结果关联。
S125:循环前一步骤直至不存在未被标记的行/列后,将未被关联的观测结果记录到未关联观测集,将未被关联的预测结果记录到未关联预测集。
具体地,重复上述步骤S124,直至该关联矩阵U的行/列均被标记后,则将未被标记的列对应的未被关联的观测结果记录到未关联观测集
Figure PCTCN2017091574-appb-000061
将未被标记的行对应的未被关联的预测结果记录到未关联预测集
Figure PCTCN2017091574-appb-000062
如图8所示,本发明视频多目标跟踪方法第三实施例是在本发明视频多目标跟踪方法第一实施例的基础上,步骤S14进一步包括:
S141:计算终结轨迹和新轨迹的模糊相似度;
具体地,假设第k帧,
Figure PCTCN2017091574-appb-000063
表示终结轨迹集,
Figure PCTCN2017091574-appb-000064
表示新轨迹集,其中,
Figure PCTCN2017091574-appb-000065
Figure PCTCN2017091574-appb-000066
分别表示终结轨迹的数量和新轨迹的数量,l0,i表示终结轨迹i终结的时刻,l1,j表示新轨迹j的起始时间,如果l1,j=k,则新轨迹
Figure PCTCN2017091574-appb-000067
为一 个新的未被关联的观测结果。设
Figure PCTCN2017091574-appb-000068
可信度矩阵,μij表示终结轨迹i和新轨迹j的模糊相似度。
本实施例中,利用终结轨迹和新轨迹的运动、形状、RGB颜色直方图、LBP和速度特征的相似度,计算终结轨迹和新轨迹的模糊相似度。当然,在其他实施例中,也可以利用其他特征,此处不做具体限定。
进一步地,如图9所示,步骤S141具体包括:
S1411:利用公式(14)计算终结轨迹和新轨迹的运动特征相似度:
Figure PCTCN2017091574-appb-000069
其中,fM(i,j)为第k帧终结轨迹i和新轨迹j的运动特征相似度,
Figure PCTCN2017091574-appb-000070
为第k帧终结轨迹i和新轨迹j的运动模型预测误差,
Figure PCTCN2017091574-appb-000071
为运动特征的方差;
本实施例中,运动特征的方差
Figure PCTCN2017091574-appb-000072
在其他实施例中,该运动特征的方差
Figure PCTCN2017091574-appb-000073
可以根据实际需求取其他值,例如4,此处不做具体限定。
S1412:利用公式(15)计算终结轨迹和新轨迹的形状特征相似度:
Figure PCTCN2017091574-appb-000074
其中,fS(i,j)为第k帧终结轨迹i和新轨迹j的形状特征相似度,
Figure PCTCN2017091574-appb-000075
为第k帧终结轨迹i和新轨迹j的形状模型预测误差,
Figure PCTCN2017091574-appb-000076
为形状特征的方差;
本实施例中,形状特征的方差
Figure PCTCN2017091574-appb-000077
在其他实施例中,该形状特征的方差
Figure PCTCN2017091574-appb-000078
可以根据实际需求取其他值,例如4,此处不做具体限定。
S1413:利用公式(16)计算终结轨迹和新轨迹的颜色特征相似度:
Figure PCTCN2017091574-appb-000079
其中,
Figure PCTCN2017091574-appb-000080
为第k帧终结轨迹i和新轨迹j的颜色特征相似度,
Figure PCTCN2017091574-appb-000081
为第k帧终结轨迹i和新轨迹j的颜色特征预测误差,
Figure PCTCN2017091574-appb-000082
为颜色特征的方差;
本实施例中,颜色特征的方差
Figure PCTCN2017091574-appb-000083
在其他实施例中,该颜色特征的方差
Figure PCTCN2017091574-appb-000084
可以根据实际需求取其他值,例如4,此处不做具体限定。
S1414:利用公式(17)计算终结轨迹和新轨迹的局部二进制模式特征相似度:
Figure PCTCN2017091574-appb-000085
其中,
Figure PCTCN2017091574-appb-000086
为第k帧终结轨迹i和新轨迹j的局部二进制模式特征相似度,
Figure PCTCN2017091574-appb-000087
为第k帧终结轨迹i和新轨迹j的局部二进制模式特征预测误差,
Figure PCTCN2017091574-appb-000088
为局 部二进制模式特征的方差;
本实施例中,局部二进制模式特征的方差
Figure PCTCN2017091574-appb-000089
在其他实施例中,该局部二进制模式特征的方差
Figure PCTCN2017091574-appb-000090
可以根据实际需求取其他值,例如4,此处不做具体限定。
S1415:利用公式(18)计算终结轨迹和新轨迹的速度特征相似度:
Figure PCTCN2017091574-appb-000091
其中,fV(i,j)为第k帧终结轨迹i和新轨迹j的速度特征相似度,xi和yi分别为第k帧终结轨迹i的x坐标和y坐标,xj和yj分别为第k帧新轨迹j的x坐标和y坐标,
Figure PCTCN2017091574-appb-000092
Figure PCTCN2017091574-appb-000093
分别为第k帧终结轨迹i的x方向和y方向的速度,
Figure PCTCN2017091574-appb-000094
为速度特征的方差;
本实施例中,局部二进制模式特征的方差
Figure PCTCN2017091574-appb-000095
在其他实施例中,该局部二进制模式特征的方差
Figure PCTCN2017091574-appb-000096
可以根据实际需求取其他值,例如80,此处不做具体限定。
S1416:利用公式(19)计算终结轨迹和新轨迹的模糊相似度:
Figure PCTCN2017091574-appb-000097
其中,∧为取小操作,∨为取大操作,μij=Skk(i,j))为第k帧终结轨迹i和新轨迹j的模糊相似度,
Figure PCTCN2017091574-appb-000098
为第k帧终结轨迹i和新轨迹j各特征的平均相似度,Λk(i,j)为相似度矢量,Λk(i,j)定义如下:
Figure PCTCN2017091574-appb-000099
其中,
Figure PCTCN2017091574-appb-000100
分别为第k帧终结轨迹i和新轨迹j的运动、形状、颜色、局部二进制模式和速度特征的平均相似度。
具体地,第k帧终结轨迹i和新轨迹j的运动、形状、颜色、局部二进制模式和速度各特征平均相似度分别定义为:
Figure PCTCN2017091574-appb-000101
Figure PCTCN2017091574-appb-000102
Figure PCTCN2017091574-appb-000103
Figure PCTCN2017091574-appb-000104
Figure PCTCN2017091574-appb-000105
其中,κ=k-l1,j+3,
Figure PCTCN2017091574-appb-000106
为第m帧终结轨迹i和第n帧新轨迹j的运动特征相似度,
Figure PCTCN2017091574-appb-000107
为第m帧终结轨迹i和第n帧新轨迹j的形状特征相似度,
Figure PCTCN2017091574-appb-000108
为第m帧终结轨迹i和第n帧新轨迹j的颜色特征相似度,
Figure PCTCN2017091574-appb-000109
为第m帧终结轨迹i和第n帧新轨迹j的局部二进制模式特征相似度,
Figure PCTCN2017091574-appb-000110
为第m帧终结轨迹i和第n帧新轨迹j的速度特征相似度。
S142:判断终结轨迹和新轨迹的最大模糊相似度是否不小于可信度阈值;
S143:若不小于,则关联相似度最大的终结轨迹和新轨迹;
具体地,终结轨迹和新轨迹的最大模糊相似度为:
Figure PCTCN2017091574-appb-000111
如果μij*≥ε,则终结轨迹i与新航迹j关联,同时新轨迹j不再与其它终结轨迹进行关联,其中,ε表示可信度阈值,本实施例中0.5≤ε≤1。
当然,在其他实施例中,该可信度阈值ε也可以根据需求采用其他取值范围,例如0.4≤ε≤0.9,此处不做具体限定。
S144:判断终结轨迹和新轨迹是否连续预设帧数关联;
S145:若是,则执行对被关联的终结轨迹和新轨迹进行轨迹融合的步骤。
进一步地,如图10所示,步骤S144具体包括:
S1441:判断终结轨迹和新轨迹的轨迹质量是否不小于预设阈值;
S1442:若是,则判定终结轨迹和新轨迹连续预设帧数关联。
具体地,轨迹质量定义如下:
Figure PCTCN2017091574-appb-000112
其中,mij*(k)为第k帧终结轨迹i和新轨迹j的轨迹质量,mij*(k-1)为第k-1帧终结轨迹i和新轨迹j的轨迹质量。
本实施例中,该预设阈值为3,若终结轨迹i和新轨迹j关联,则轨迹质量mij*(k)加1,如果mij*(k)≥3,则认为终结轨迹i与新轨迹j为同一轨迹,则可以执行后续对被关联的终结轨迹和新轨迹进行轨迹融合的步骤。
本实施例可以与本发明视频多目标跟踪方法第一和/或第二实施例相结合。
如图11所示,本发明视频多目标跟踪方法第四实施例是在本发明视频多目标跟踪方法第一实施例的基础上,步骤S13进一步包括:
S131:对未被关联的预测结果进行预测,并继续与下一视频帧的观测结果进行关联;
S132:判断未被关联的预测结果连续未被关联的帧数是否大于第一阈值;
S133:若大于第一阈值,则未被关联的预测结果对应目标的轨迹为终结轨迹;
具体地,在一个应用例中,对未被关联的预测结果,继续进行外推预测,并与下一帧观测结果进行数据关联,若该目标连续未被关联的帧数是否大于第一阈值(例如5),则将该未被关联的预测结果对应的目标轨迹作为终结轨迹。 其中,第一阈值的具体取值可以根据实际需求设置,此处不做具体限定。
S134:以未被关联的观测结果为起点进行预测,并继续与下一视频帧的观测结果进行关联;
S135:判断未被关联的观测结果连续被关联的帧数是否不小于第二阈值;
S136:若不小于第二阈值,则未被关联的观测结果对应目标的轨迹为新轨迹。
具体地,在一个应用例中,可以将该未被关联的观测结果作为轨迹起始点进行预测,并继续与下一帧的观测结果进行数据关联,若连续关联成功的帧数不小于第二阈值(例如4),则该被关联的观测结果被判定为新的目标,为其建立新轨迹。其中,第二阈值的具体取值可以根据实际需求设置,此处不做具体限定。
本实施例还可以与本发明视频多目标跟踪方法第二和/或第三实施例相结合。
如图12所示,本发明视频多目标跟踪方法第五实施例是在本发明视频多目标跟踪方法第一实施例的基础上,步骤S15进一步包括:
S151:在被关联的终结轨迹和新轨迹之间进行线性插值,以连接被关联的终结轨迹和新轨迹。
具体地,在一个应用例中,采用线性插值方法在被关联的终结轨迹和新轨迹之间均匀插入视频帧,以使得被关联的终结轨迹和新轨迹融合为同一轨迹。
本实施例还可以与本发明视频多目标跟踪方法第二至第四任一实施例或者其组合相结合。
如图13所示,本发明视频多目标跟踪方法第六实施例是在本发明视频多目标跟踪方法第一实施例的基础上,步骤S16进一步包括:
S161:判断未被关联的终结轨迹的未更新帧数是否大于预设帧数;
S162:若大于,则将未被关联的终结轨迹删除。
具体地,在一个应用例中,若未被关联的终结轨迹的未更新帧数大于预设帧数(例如70帧),则认为该终结轨迹是无效轨迹,予以删除,从而节省存储空间,控制计算量。其中,该预设帧数可以根据实际需求设置,此处不做具体限定。
本实施例还可以与本发明视频多目标跟踪方法第二至第五任一实施例或者其组合相结合。
如图14所示,本发明视频多目标跟踪装置一实施例包括:处理器110和摄像机120。摄像机120可以为本地摄像机,处理器110通过总线连接摄像机120;摄像机120也可以为远程摄像机,处理器110通过局域网或互联网连接摄像机120。
处理器110控制视频多目标跟踪装置的操作,处理器110还可以称为CPU(Central Processing Unit,中央处理单元)。处理器110可能是一种集成电路芯片,具有信号的处理能力。处理器110还可以是通用处理器、数字信号处理器(DSP)、专用集成电路(ASIC)、现成可编程门阵列(FPGA)或者其他可编程逻辑器件、分立门或者晶体管逻辑器件、分立硬件组件。通用处理器可以是微处理器或者该处理器也可以是任何常规的处理器等。
该视频多目标跟踪装置可以进一步包括存储器(图中未画出),存储器用于存储处理器110工作所必需的指令及数据,也可以存储传输器120拍摄的视频数据。
处理器110工作时,用于实现如本发明视频多目标跟踪方法第一至第六任一实施例或者其不矛盾的组合所述的方法,具体功能可以参考本发明视频多目标跟踪方法各对应实施例中的描述,此处不再重复。
以上所述仅为本发明的实施方式,并非因此限制本发明的专利范围,凡是利用本发明说明书及附图内容所作的等效结构或等效流程变换,或直接或间接运用在其他相关的技术领域,均同理包括在本发明的专利保护范围内。

Claims (13)

  1. 一种视频多目标跟踪方法,其特征在于,包括:
    对当前视频帧进行运动检测,检测得到的可能运动对象作为观测结果;
    对所述观测结果和目标的预测结果进行数据关联,其中所述预测结果是至少利用前一视频帧的目标的轨迹进行预测而得到的;
    对未被关联的所述观测结果和所述预测结果进行轨迹管理,所述轨迹管理包括利用未被关联的所述预测结果获取终结轨迹,利用未被关联的所述观测结果获取新轨迹;
    对所述终结轨迹和所述新轨迹进行轨迹关联;
    对被关联的所述终结轨迹和所述新轨迹进行轨迹融合;
    删除符合预设条件的未被关联的所述终结轨迹。
  2. 根据权利要求1所述的方法,其特征在于,
    所述对所述观测结果和目标的预测结果进行数据关联包括:
    利用所述观测结果和所述预测结果获取目标的预测误差和预测误差变化量;
    利用所述预测误差和所述预测误差变化量进行模糊推理,获取所述观测结果和所述预测结果的关联概率;
    以所述关联概率为元素建立关联概率矩阵;
    从所述关联概率矩阵中选取未被标记的所述关联概率最大值的元素,标记所述关联概率最大值的元素所在行和列的所有元素,并将所述关联概率最大值的元素对应的所述观测结果和所述预测结果关联;
    循环前一步骤直至不存在未被标记的行/列后,将未被关联的所述观测结果记录到未关联观测集,将未被关联的所述预测结果记录到未关联预测集。
  3. 根据权利要求2所述的方法,其特征在于,
    所述利用所述观测结果和所述预测结果获取目标的预测误差和预测误差变化量包括:
    利用如下公式(1)计算目标的运动模型预测误差:
    Figure PCTCN2017091574-appb-100001
    其中,||·||2为二范数,
    Figure PCTCN2017091574-appb-100002
    为当前第k帧中第i个预测结果和第j个观测结果之间的运动模型预测误差,
    Figure PCTCN2017091574-appb-100003
    为第k帧中第j个观测结果的x坐标和y坐标,
    Figure PCTCN2017091574-appb-100004
    为第k-1帧中第i个预测结果,
    Figure PCTCN2017091574-appb-100005
    为第k帧中第j个预测结果,定义为:
    Figure PCTCN2017091574-appb-100006
    其中,
    Figure PCTCN2017091574-appb-100007
    为第k-1帧第i个预测 结果的x坐标、x方向的速度、y坐标以及y方向的速度,Hk为观测矩阵,
    Figure PCTCN2017091574-appb-100008
    为状态转移矩阵,Hk
    Figure PCTCN2017091574-appb-100009
    分别定义如下:
    Figure PCTCN2017091574-appb-100010
    其中τ=1。
    利用如下公式(4)计算目标的形状模型预测误差:
    Figure PCTCN2017091574-appb-100011
    其中,
    Figure PCTCN2017091574-appb-100012
    为当前第k帧中第i个预测结果和第j个观测结果之间的形状模型预测误差,hi(k)为第k帧中第i个预测结果的高度,
    Figure PCTCN2017091574-appb-100013
    为第k帧中第j个观测结果的高度,σs为高度标准方差;
    利用如下公式(5)计算目标的颜色特征预测误差:
    Figure PCTCN2017091574-appb-100014
    其中,
    Figure PCTCN2017091574-appb-100015
    为当前第k帧中第i个预测结果和第j个观测结果之间的颜色特征预测误差,N为颜色空间的个数,Hm(xi)为第i个预测结果在第m个颜色空间中的像素个数,Hn(zj)为第j个观测结果在第n个颜色空间中的像素个数,
    Figure PCTCN2017091574-appb-100016
    Figure PCTCN2017091574-appb-100017
    分别为第i个预测结果和第j个观测结果在N个颜色空间中的平均像素个数;
    利用如下公式(6)计算目标的局部二进制模式特征预测误差:
    Figure PCTCN2017091574-appb-100018
    其中,
    Figure PCTCN2017091574-appb-100019
    为当前第k帧中第i个预测结果和第j个观测结果之间的局部二进制模式特征预测误差,
    Figure PCTCN2017091574-appb-100020
    Figure PCTCN2017091574-appb-100021
    分别为第i个预测结果和第j个观测结果的局部二进制模式特征直方图,ρ为
    Figure PCTCN2017091574-appb-100022
    Figure PCTCN2017091574-appb-100023
    的巴氏系数,定义如下:
    Figure PCTCN2017091574-appb-100024
    利用如下公式(8)计算目标的运动模型预测误差变化量:
    Figure PCTCN2017091574-appb-100025
    其中,
    Figure PCTCN2017091574-appb-100026
    为第k帧中第i个预测结果和第j个观测结果之间的运动模型预测误差变化量,
    Figure PCTCN2017091574-appb-100027
    为第k帧中第i个预测结果和第j个观测结果之间的运动模型预测误差,
    Figure PCTCN2017091574-appb-100028
    为第k-1帧中第i个预测结果和第j个观测结果之间的运动模型预测误差;
    利用如下公式(9)计算目标的形状模型预测误差变化量:
    Figure PCTCN2017091574-appb-100029
    其中,
    Figure PCTCN2017091574-appb-100030
    为第k帧中第i个预测结果和第j个观测结果之间的形状模型预测误差变化量,
    Figure PCTCN2017091574-appb-100031
    为第k帧中第i个预测结果和第j个观测结果之间的形状模型预测误差,
    Figure PCTCN2017091574-appb-100032
    为第k-1帧中第i个预测结果和第j个观测结果之间的形状模型预测误差;
    利用如下公式(10)计算目标的颜色特征预测误差变化量:
    Figure PCTCN2017091574-appb-100033
    其中,
    Figure PCTCN2017091574-appb-100034
    为第k帧中第i个预测结果和第j个观测结果之间的颜色特征预测误差变化量,
    Figure PCTCN2017091574-appb-100035
    为第k帧中第i个预测结果和第j个观测结果之间的颜色特征预测误差,
    Figure PCTCN2017091574-appb-100036
    为第k-1帧中第i个预测结果和第j个观测结果之间的颜色特征预测误差;
    利用如下公式(11)计算目标的局部二进制模式特征预测误差变化量:
    Figure PCTCN2017091574-appb-100037
    其中,
    Figure PCTCN2017091574-appb-100038
    为第k帧中第i个预测结果和第j个观测结果之间的局部二进制模式特征预测误差变化量,
    Figure PCTCN2017091574-appb-100039
    为第k帧中第i个预测结果和第j个观测结果之间的局部二进制模式特征预测误差,
    Figure PCTCN2017091574-appb-100040
    为第k-1帧中第i个预测结果和第j个观测结果之间的局部二进制模式特征预测误差;
    利用如下公式(12)和(13)计算目标的预测误差和预测误差变化量:
    Figure PCTCN2017091574-appb-100041
    Figure PCTCN2017091574-appb-100042
    其中,Ei,j(k)为第k帧中第i个预测结果和第j个观测结果之间的预测误差,ΔEi,j(k)为第k帧中第i个预测结果和第j个观测结果之间的预测误差变化量。
  4. 根据权利要求2所述的方法,其特征在于,
    所述利用所述预测误差和所述预测误差变化量进行模糊推理,获取所述观测结果和所述预测结果的关联概率包括:
    根据模糊规则分别确定所述预测误差和所述预测误差变化量对应的输入模糊隶属度函数;
    利用所述输入模糊隶属度函数分别获取所述预测误差和所述预测误差变化量对应的模糊隶属度值;
    利用所述模糊隶属度值获取所述模糊规则的适用度;
    利用所述适用度获取所述模糊规则的模糊输出;
    利用所有所述模糊规则的模糊输出获取所述观测结果和所述预测结果的关联概率。
  5. 根据权利要求1所述的方法,其特征在于,
    所述对所述终结轨迹和所述新轨迹进行轨迹关联包括:
    计算所述终结轨迹和所述新轨迹的模糊相似度;
    判断所述终结轨迹和所述新轨迹的最大模糊相似度是否不小于可信度阈值;
    若不小于,则关联所述相似度最大的所述终结轨迹和所述新轨迹;
    判断所述终结轨迹和所述新轨迹是否连续预设帧数关联;
    若是,则执行所述对被关联的所述终结轨迹和所述新轨迹进行轨迹融合的 步骤。
  6. 根据权利要求5所述的方法,其特征在于,
    所述计算所述终结轨迹和所述新轨迹的模糊相似度包括:
    利用如下公式(14)计算所述终结轨迹和所述新轨迹的运动特征相似度:
    Figure PCTCN2017091574-appb-100043
    其中,fM(i,j)为第k帧终结轨迹i和新轨迹j的运动特征相似度,
    Figure PCTCN2017091574-appb-100044
    为第k帧终结轨迹i和新轨迹j的运动模型预测误差,
    Figure PCTCN2017091574-appb-100045
    为运动特征的方差;
    利用如下公式(15)计算所述终结轨迹和所述新轨迹的形状特征相似度:
    Figure PCTCN2017091574-appb-100046
    其中,fS(i,j)为第k帧终结轨迹i和新轨迹j的形状特征相似度,
    Figure PCTCN2017091574-appb-100047
    为第k帧终结轨迹i和新轨迹j的形状模型预测误差,
    Figure PCTCN2017091574-appb-100048
    为形状特征的方差;
    利用如下公式(16)计算所述终结轨迹和所述新轨迹的颜色特征相似度:
    Figure PCTCN2017091574-appb-100049
    其中,
    Figure PCTCN2017091574-appb-100050
    为第k帧终结轨迹i和新轨迹j的颜色特征相似度,
    Figure PCTCN2017091574-appb-100051
    为第k帧终结轨迹i和新轨迹j的颜色特征预测误差,
    Figure PCTCN2017091574-appb-100052
    为颜色特征的方差;
    利用如下公式(17)计算所述终结轨迹和所述新轨迹的局部二进制模式特征相似度:
    Figure PCTCN2017091574-appb-100053
    其中,
    Figure PCTCN2017091574-appb-100054
    为第k帧终结轨迹i和新轨迹j的局部二进制模式特征相似度,
    Figure PCTCN2017091574-appb-100055
    为第k帧终结轨迹i和新轨迹j的局部二进制模式特征预测误差,
    Figure PCTCN2017091574-appb-100056
    为局部二进制模式特征的方差;
    利用如下公式(18)计算所述终结轨迹和所述新轨迹的速度特征相似度:
    Figure PCTCN2017091574-appb-100057
    其中,fV(i,j)为第k帧终结轨迹i和新轨迹j的速度特征相似度,xi和yi分别为第k帧终结轨迹i的x坐标和y坐标,xj和yj分别为第k帧新轨迹j的x坐标和y坐标,
    Figure PCTCN2017091574-appb-100058
    Figure PCTCN2017091574-appb-100059
    分别为第k帧终结轨迹i的x方向和y方向的速度,
    Figure PCTCN2017091574-appb-100060
    为速度特征的方差;
    利用如下公式(19)计算所述终结轨迹和所述新轨迹的模糊相似度:
    Figure PCTCN2017091574-appb-100061
    其中,∧为取小操作,∨为取大操作,μij=Skk(i,j))为第k帧终结轨迹i和新轨迹j的模糊相似度,
    Figure PCTCN2017091574-appb-100062
    为第k帧终结轨迹i和新轨迹j各特征的平均相似度,Λk(i,j)为相似度矢量,Λk(i,j)定义如下:
    Figure PCTCN2017091574-appb-100063
    其中,
    Figure PCTCN2017091574-appb-100064
    分别为第k帧终结轨迹i和新轨迹j的运动、形状、颜色、局部二进制模式和速度特征的平均相似度。
  7. 根据权利要求6所述的方法,其特征在于,
    所述第k帧终结轨迹i和新轨迹j的运动特征平均相似度定义为:
    Figure PCTCN2017091574-appb-100065
    其中,κ=k-l1,j+3,
    Figure PCTCN2017091574-appb-100066
    为第m帧终结轨迹i和第n帧新轨迹j的运动特征相似度;
    所述第k帧终结轨迹i和新轨迹j的形状特征平均相似度定义为:
    Figure PCTCN2017091574-appb-100067
    其中,
    Figure PCTCN2017091574-appb-100068
    为第m帧终结轨迹i和第n帧新轨迹j的形状特征相似度;
    所述第k帧终结轨迹i和新轨迹j的颜色特征平均相似度定义为:
    Figure PCTCN2017091574-appb-100069
    其中,
    Figure PCTCN2017091574-appb-100070
    为第m帧终结轨迹i和第n帧新轨迹j的颜色特征相似度;
    所述第k帧终结轨迹i和新轨迹j的局部二进制模式特征平均相似度定义为:
    Figure PCTCN2017091574-appb-100071
    其中,
    Figure PCTCN2017091574-appb-100072
    为第m帧终结轨迹i和第n帧新轨迹j的局部二进制模式特征相似度;
    所述第k帧终结轨迹i和新轨迹j的速度特征平均相似度定义为:
    Figure PCTCN2017091574-appb-100073
    其中,
    Figure PCTCN2017091574-appb-100074
    为第m帧终结轨迹i和第n帧新轨迹j的速度特征相似度。
  8. 根据权利要求5所述的方法,其特征在于,
    所述判断所述终结轨迹和所述新轨迹是否连续预设帧数关联包括:
    判断所述终结轨迹和所述新轨迹的轨迹质量是否不小于预设阈值;
    若是,则判定所述终结轨迹和所述新轨迹连续预设帧数关联。
  9. 根据权利要求8所述的方法,其特征在于,
    所述轨迹质量定义如下:
    Figure PCTCN2017091574-appb-100075
    其中,
    Figure PCTCN2017091574-appb-100076
    为第k帧终结轨迹i和新轨迹j的轨迹质量,
    Figure PCTCN2017091574-appb-100077
    为第k-1帧终结轨迹i和新轨迹j的轨迹质量。
  10. 根据权利要求1-9中任一项所述的方法,其特征在于,
    所述对被关联的所述终结轨迹和所述新轨迹进行轨迹融合包括:
    在被关联的所述终结轨迹和所述新轨迹之间进行线性插值,以连接被关联的所述终结轨迹和所述新轨迹。
  11. 根据权利要求1-9中任一项所述的方法,其特征在于,
    所述删除符合预设条件的未被关联的所述终结轨迹包括:
    判断所述未被关联的所述终结轨迹的未更新帧数是否大于预设帧数;
    若大于,则将所述未被关联的所述终结轨迹删除。
  12. 根据权利要求1-9中任一项所述的方法,其特征在于,
    所述利用未被关联的所述预测结果获取终结轨迹包括:
    对未被关联的所述预测结果进行预测,并继续与下一视频帧的观测结果进行关联;
    判断未被关联的所述预测结果连续未被关联的帧数是否大于第一阈值;
    若大于所述第一阈值,则未被关联的所述预测结果对应目标的轨迹为所述终结轨迹;
    所述利用未被关联的所述观测结果获取新轨迹包括:
    以未被关联的所述观测结果为起点进行预测,并继续与下一视频帧的观测结果进行关联;
    判断未被关联的所述观测结果连续被关联的帧数是否不小于第二阈值;
    若不小于所述第二阈值,则未被关联的所述观测结果对应目标的轨迹为所述新轨迹。
  13. 一种视频多目标跟踪装置,其特征在于,包括:处理器和摄像机,所述处理器连接所述摄像机;
    所述处理器工作时,用于实现如权利要求1-12任一项所述的方法。
PCT/CN2017/091574 2017-07-04 2017-07-04 一种视频多目标跟踪方法及装置 WO2019006632A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/CN2017/091574 WO2019006632A1 (zh) 2017-07-04 2017-07-04 一种视频多目标跟踪方法及装置

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2017/091574 WO2019006632A1 (zh) 2017-07-04 2017-07-04 一种视频多目标跟踪方法及装置

Publications (1)

Publication Number Publication Date
WO2019006632A1 true WO2019006632A1 (zh) 2019-01-10

Family

ID=64949596

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/091574 WO2019006632A1 (zh) 2017-07-04 2017-07-04 一种视频多目标跟踪方法及装置

Country Status (1)

Country Link
WO (1) WO2019006632A1 (zh)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112396628A (zh) * 2019-08-14 2021-02-23 上海开域信息科技有限公司 一种基于目标高度预测的抗轨迹抖动方法
CN113012203A (zh) * 2021-04-15 2021-06-22 南京莱斯电子设备有限公司 一种复杂背景下高精度多目标跟踪方法
CN113674317A (zh) * 2021-08-10 2021-11-19 深圳市捷顺科技实业股份有限公司 一种高位视频的车辆跟踪方法及装置
CN114972417A (zh) * 2022-04-02 2022-08-30 江南大学 动态轨迹质量量化和特征重规划的多目标跟踪方法
CN115311329A (zh) * 2019-10-11 2022-11-08 杭州云栖智慧视通科技有限公司 一种基于双环节约束的视频多目标跟踪方法

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101763645A (zh) * 2009-08-03 2010-06-30 北京智安邦科技有限公司 目标轨迹拼接的方法及装置
CN104915970A (zh) * 2015-06-12 2015-09-16 南京邮电大学 一种基于轨迹关联的多目标跟踪方法
CN106846361A (zh) * 2016-12-16 2017-06-13 深圳大学 基于直觉模糊随机森林的目标跟踪方法及装置
CN106846355A (zh) * 2016-12-16 2017-06-13 深圳大学 基于提升直觉模糊树的目标跟踪方法及装置

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101763645A (zh) * 2009-08-03 2010-06-30 北京智安邦科技有限公司 目标轨迹拼接的方法及装置
CN104915970A (zh) * 2015-06-12 2015-09-16 南京邮电大学 一种基于轨迹关联的多目标跟踪方法
CN106846361A (zh) * 2016-12-16 2017-06-13 深圳大学 基于直觉模糊随机森林的目标跟踪方法及装置
CN106846355A (zh) * 2016-12-16 2017-06-13 深圳大学 基于提升直觉模糊树的目标跟踪方法及装置

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
LIANG-QUN LI ET AL.: "Online Visual Multi-object Tracking Based on Fuzzy Logic", 2016 12TH INTERNATIONAL CONFERENCE ON NATURAL COMPUTATION, FUZZY SYSTEM AND KNOWLEDGE DISCOVERY, August 2016 (2016-08-01), pages 1001 - 1005, XP055558657 *

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112396628A (zh) * 2019-08-14 2021-02-23 上海开域信息科技有限公司 一种基于目标高度预测的抗轨迹抖动方法
CN112396628B (zh) * 2019-08-14 2024-05-17 上海开域信息科技有限公司 一种基于目标高度预测的抗轨迹抖动方法
CN115311329A (zh) * 2019-10-11 2022-11-08 杭州云栖智慧视通科技有限公司 一种基于双环节约束的视频多目标跟踪方法
CN115311329B (zh) * 2019-10-11 2023-05-23 杭州云栖智慧视通科技有限公司 一种基于双环节约束的视频多目标跟踪方法
CN113012203A (zh) * 2021-04-15 2021-06-22 南京莱斯电子设备有限公司 一种复杂背景下高精度多目标跟踪方法
CN113012203B (zh) * 2021-04-15 2023-10-20 南京莱斯电子设备有限公司 一种复杂背景下高精度多目标跟踪方法
CN113674317A (zh) * 2021-08-10 2021-11-19 深圳市捷顺科技实业股份有限公司 一种高位视频的车辆跟踪方法及装置
CN113674317B (zh) * 2021-08-10 2024-04-26 深圳市捷顺科技实业股份有限公司 一种高位视频的车辆跟踪方法及装置
CN114972417A (zh) * 2022-04-02 2022-08-30 江南大学 动态轨迹质量量化和特征重规划的多目标跟踪方法

Similar Documents

Publication Publication Date Title
CN107516321B (zh) 一种视频多目标跟踪方法及装置
CN114972418B (zh) 基于核自适应滤波与yolox检测结合的机动多目标跟踪方法
CN110660082B (zh) 一种基于图卷积与轨迹卷积网络学习的目标跟踪方法
WO2019006632A1 (zh) 一种视频多目标跟踪方法及装置
CN111127513B (zh) 一种多目标跟踪方法
CN109344725B (zh) 一种基于时空关注度机制的多行人在线跟踪方法
WO2022217840A1 (zh) 一种复杂背景下高精度多目标跟踪方法
CN110288627B (zh) 一种基于深度学习和数据关联的在线多目标跟踪方法
CN110782483B (zh) 基于分布式相机网络的多视图多目标跟踪方法及系统
WO2021007984A1 (zh) 基于tsk模糊分类器的目标跟踪方法、装置及存储介质
WO2018107488A1 (zh) 基于提升直觉模糊树的目标跟踪方法及装置
CN112052802B (zh) 一种基于机器视觉的前方车辆行为识别方法
CN112989889B (zh) 一种基于姿态指导的步态识别方法
CN111626194A (zh) 一种使用深度关联度量的行人多目标跟踪方法
CN110555870A (zh) 基于神经网络的dcf跟踪置信度评价与分类器更新方法
CN111476238A (zh) 一种基于区域尺度感知技术的害虫图像检测方法
Yang et al. A probabilistic framework for multitarget tracking with mutual occlusions
CN110717934A (zh) 一种基于strcf的抗遮挡目标跟踪方法
WO2018227491A1 (zh) 视频多目标模糊数据关联方法及装置
CN116645396A (zh) 轨迹确定方法、装置、计算机可读存储介质及电子设备
CN112233145A (zh) 一种基于rgb-d时空上下文模型的多目标遮挡跟踪方法
CN111986237A (zh) 一种人数无关的实时多目标跟踪算法
CN108765459A (zh) 基于小轨迹图关联模型的半在线视觉多目标跟踪方法
CN112507859A (zh) 一种用于移动机器人的视觉跟踪方法
CN110349184B (zh) 基于迭代滤波和观测判别的多行人跟踪方法

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17917149

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 12.06.2020)

122 Ep: pct application non-entry in european phase

Ref document number: 17917149

Country of ref document: EP

Kind code of ref document: A1