CN115311330A - Video multi-target tracking method based on position prediction - Google Patents
- Publication number: CN115311330A
- Application number: CN202211130202.9A
- Authority: CN (China)
- Legal status: Granted
Classifications
- G06T7/246: Analysis of motion using feature-based methods, e.g. the tracking of corners or segments (G06T7/00 Image analysis; G06T7/20 Analysis of motion)
- G06T2207/10016: Video; image sequence (G06T2207/00 Indexing scheme for image analysis or image enhancement; G06T2207/10 Image acquisition modality)
Abstract
The invention discloses a video multi-target tracking method based on position prediction, in the technical field of video and image processing. The method comprises the following steps: recording the time information, position information and content features of each detection target; calculating the content feature similarity between the detection target and all track containers, and, if the maximum similarity value is greater than a first threshold, treating the detection target and the corresponding track container as a valid match; for unmatched track containers, further calculating the change ratio between the historical track speed and the real-time speed, the angle between the motion direction of the detection target and that of the historical track, and the width change ratio, and treating the pair as a valid match if all three fall within their given threshold intervals; and storing each validly matched detection target into its corresponding track container. The matching results of the invention are true and reliable, and the method effectively reduces the ID-switch phenomenon caused by track fragments, occlusion and similar conditions.
Description
Cross Reference to Related Applications
This application is a divisional application of the application with application number 201910964726X, filed on October 11, 2019, entitled: a video multi-target tracking method based on multi-dimensional feature fusion.
Technical Field
The invention relates to the technical field of video and image processing, and in particular to a video multi-target tracking method based on position prediction.
Background
Multi-target tracking is an important part of video analysis technology. It processes input detection targets in order of appearance, using each target's historical track to obtain its motion direction and predicted position, and performs matching and linking by combining the content feature similarity between targets. The key to the technology is how to effectively match and link the targets in each video frame with the track targets in historical frames. Current multi-target tracking methods generally predict the target position (Kalman trajectory prediction being a relatively mature choice), match the actual position of the detected target against the predicted position, and then match the remaining unmatched tracks and targets by feature similarity; or they match tracks and targets by feature similarity first, to ensure that fast-moving targets can still be matched, and then match the actual position of the detected target against the predicted position.
In everyday surveillance video, mutual occlusion between targets causes target overlap and missed detections, and rapid target motion is common. Simple position matching and feature similarity matching therefore cannot handle these cases well, leaving excessive track fragments and a severe ID-switch (IDSwitch) phenomenon within tracks.
Disclosure of Invention
The invention aims to provide a video multi-target tracking method based on position prediction whose matching results are true and reliable, and which effectively reduces the ID-switch phenomenon caused by track fragments, occlusion and similar conditions.
In order to achieve the purpose, the invention provides the following technical scheme:
a video multi-target tracking method based on position prediction is characterized by comprising the following steps:
S1, recording the time information, position information and content features of a detection target;
S2, calculating the content feature similarity between the detection target and all track containers, and, if the maximum similarity value is greater than a first threshold, treating the detection target and the corresponding track container as a valid match;
S3, for unmatched track containers, further calculating the change ratio between the historical track speed and the real-time speed, the angle between the motion direction of the detection target and that of the historical track, and the width change ratio, and treating the pair as a valid match if all three fall within their given threshold intervals;
S4, storing each validly matched detection target into its corresponding track container.
Further, the specific content of S2 is as follows:
extracting the content features of the detection target and calculating their similarity against the content features stored in all track containers, yielding a corresponding sequence of similarity values; finding the maximum similarity value in that sequence and comparing it against the first threshold, while also calculating the distance between the detection target and the last frame in the track container and comparing it against a distance threshold; if the maximum similarity value is greater than the first threshold and the distance is within the distance threshold, the pair is treated as a valid match.
Further, the specific content of S2 may instead be as follows:
if a track container holds historical record information for the detection target, predicting in real time the position at which the detection target should appear in the current frame, obtaining a predicted rectangular area;
taking the rectangular area of the actual detection target, calculating the intersection-over-union of the predicted rectangular area and that rectangular area, and taking the difference between 1 and the intersection-over-union as the distance between the actual detection target and the predicted detection target; calculating the content feature similarity between the track container corresponding to the minimum distance and the detection target and comparing it against the first threshold; if the similarity is greater than the first threshold, the pair is treated as a valid match.
Further, the specific content of S3 is as follows:
for the unmatched track containers, taking the track container PTrack_Cont with the maximum content feature similarity to the detection target, and recording the corresponding similarity value Simi_Cont; calculating the distances between the detection target and all track containers, taking the track container PTrack_Euc corresponding to the minimum distance, and calculating the content feature similarity value Simi_Euc between PTrack_Euc and the detection target;
S31, if PTrack_Cont and PTrack_Euc are the same, and Simi_Cont is greater than or equal to the similarity threshold of a valid match, calculating the change ratio between the historical track speed and the real-time speed, the angle between the motion direction of the detection target and that of the historical track, and the width change ratio; if all three fall within their corresponding given threshold intervals, the pair is treated as a valid match;
S32, if PTrack_Cont and PTrack_Euc are different: when the larger of Simi_Cont and Simi_Euc is greater than or equal to a given parameter threshold, calculating the width change ratio, and treating the pair as a valid match if the width change ratio falls within the given threshold interval; when the larger of Simi_Cont and Simi_Euc is smaller than the given parameter threshold, taking the track container corresponding to the larger value, calculating the change ratio between the historical track speed and the real-time speed, the angle between the motion direction of the detection target and that of the historical track, and the width change ratio, and treating the pair as a valid match if all three fall within their corresponding given threshold intervals;
S33, if the detection target still has not found a validly matched track container, taking the track container corresponding to the smaller of Simi_Cont and Simi_Euc, calculating the change ratio between the historical track speed and the real-time speed, the angle between the motion direction of the detection target and that of the historical track, and the width change ratio; the pair is treated as a valid match if all three fall within their corresponding given threshold intervals.
Further, the calculation method of the historical speed and real-time speed of the track is as follows:
V_hist = (R_t - R_{t-Δt}) / Δt, where R_t is the position information of the last frame of the history information, R_{t-Δt} is the position information of the penultimate frame of the history information, and Δt is the time interval between the two history frames;
V_cur = (R_cur - R_t) / Δt_cur, where R_t is the position information of the last frame of the history information, R_cur is the position information of the detection target, and Δt_cur is the time interval between the detection target time and the last frame of the history information.
Further, the calculation method of the angle between the motion direction of the detection target and that of the historical track is as follows:
cos(Angle) = (a · b) / (|a| |b|), where a is the vector from the center point of R_t to the center point of R_cur, and b is the vector from the center point of R_{t-Δt} to the center point of R_t.
Further, in S31, if the history information of PTrack_Cont contains only one frame, the width change ratio is calculated directly, and the pair is treated as a valid match if the width change ratio falls within the given threshold interval.
Further, the distance is a Euclidean distance.
Further, the content features are extracted using a deep convolutional neural network.
Further, the position information includes a center position, an aspect ratio, and a height of the detection target.
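The position encoding above (center position, aspect ratio, height) can be sketched with a pair of conversion helpers. This is an illustrative sketch, not the patent's implementation: the function names and the (top-left x, y, width, height) box convention are assumptions.

```python
# Hypothetical helpers for the position encoding described above:
# a detection box (x, y, w, h) stored as (center x, center y, aspect, height).
def box_to_state(x, y, w, h):
    """Convert a top-left (x, y, w, h) box to (cx, cy, aspect, height)."""
    return (x + w / 2.0, y + h / 2.0, w / h, h)

def state_to_box(cx, cy, aspect, height):
    """Invert box_to_state back to a top-left (x, y, w, h) box."""
    w = aspect * height
    return (cx - w / 2.0, cy - height / 2.0, w, height)
```

Storing the aspect ratio and height (rather than width and height directly) keeps the width change ratio used in S3 recoverable as aspect times height.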
Compared with the prior art, the invention has the following beneficial effects: from the perspective of multi-feature fusion, the invention prefers feature matching, and in the first two stages adds a content feature similarity constraint to the matching result of each stage, so that the output matches are true and reliable; for the remaining detection targets not yet matched, it selects the better matching logic between the Euclidean distance space and the content feature space, and then uses the multi-dimensional constraints of time, position, speed and angle to ensure that the match is true and reliable. The invention is highly efficient, far exceeding real-time requirements, effectively reduces track fragments and the ID-switch phenomenon caused by occlusion and similar conditions, and has achieved good results in application.
Drawings
Fig. 1 is an overall schematic block diagram of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention are clearly and completely described below, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Referring to fig. 1, the present invention provides a video multi-target tracking method based on position prediction, which is characterized by comprising the following steps:
S1, recording the time information, position information and content features of a detection target.
Specifically, for a newly added detection target Detection, a new track container PathTrack is opened and the track's lifetime limit MaxAge is initialized, preferably MaxAge = 56; the state matrix of the Kalman filter of the new track container PathTrack is set to its initialization state; the time information, position information, corresponding speed information in image coordinates, and image content features of the detection target Detection are recorded and stored in the MsgBox unit stack. The position information includes the center position, aspect ratio and height of the detection target. Preferably, the image content features are abstract features extracted by a deep convolutional neural network.
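The track container described above can be sketched as a small data structure. This is a minimal sketch, not the patent's implementation; the field layout is an assumption, while the names PathTrack, MsgBox and MaxAge = 56 follow the text.

```python
# Sketch of the track container: MsgBox entries hold time, position and
# content features; MaxAge bounds how long a track survives without a match.
from dataclasses import dataclass, field

@dataclass
class MsgBox:
    time: float
    position: tuple          # (cx, cy, aspect, height) in image coordinates
    feature: list            # content feature vector

@dataclass
class PathTrack:
    max_age: int = 56        # preferred MaxAge from the text
    age: int = 0             # frames since the last valid match
    boxes: list = field(default_factory=list)  # MsgBox stack, in time order

    def push(self, box: MsgBox):
        """Store a validly matched detection and reset the life counter."""
        self.boxes.append(box)
        self.age = 0

    def miss(self) -> bool:
        """Advance the life counter; True when the track should be ended."""
        self.age += 1
        return self.age >= self.max_age
```

The push/miss pair mirrors the lifecycle in S4: a match resets the life counter to 0, and each unmatched frame increments it until MaxAge ends the track.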
S2, calculating the content feature similarity between the detection target and all track containers, and, if the maximum similarity value is greater than a first threshold, treating the detection target and the corresponding track container as a valid match. The content feature matching ensures that the output matches are true and reliable.
There are three specific embodiments of step S2, which are as follows:
In the first embodiment, the specific content of S2 is as follows:
The image content features of the detection target Detection are extracted, and content feature similarity is calculated against the image content features of the MsgBox information stored in all track containers PathTrack. Denote two N-dimensional image content features X and Y as X(x_1, x_2, ..., x_N) and Y(y_1, y_2, ..., y_N); the corresponding image content similarity between X and Y is calculated as:

Simi(X, Y) = (Σ_{i=1..N} x_i · y_i) / (√(Σ_{i=1..N} x_i²) · √(Σ_{i=1..N} y_i²))

According to this similarity formula, a sequence of similarity results between Detection and the image content features of the MsgBox information stored in each PathTrack is obtained. The maximum similarity value and the corresponding Detection-PathTrack pair are found in this sequence. Let the center position of Detection be (x_det, y_det) and the position of the last MsgBox center in PathTrack be (x_last, y_last); the corresponding distance is:

D = √((x_det - x_last)² + (y_det - y_last)²)

If the ratio of the distance D to the width of Detection is greater than the distance threshold distThr, the pair is treated as an invalid match. Otherwise, the maximum similarity value is compared against the first threshold; if it is greater than the first threshold SimiThr, the pair is treated as a valid match and the method jumps to S4; otherwise it is treated as an invalid match. Preferably, the distance threshold distThr = 1.1 and the first threshold SimiThr = 0.5.
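The first-embodiment test can be sketched as follows, under the reconstructed cosine-similarity and distance formulas. The thresholds SimiThr = 0.5 and distThr = 1.1 follow the text; the function names and argument layout are illustrative assumptions.

```python
# Sketch of the S2 first-embodiment test: cosine similarity between content
# feature vectors, gated by center distance normalized by detection width.
import math

def cosine_similarity(x, y):
    """Simi(X, Y) as reconstructed above: dot product over norms."""
    dot = sum(a * b for a, b in zip(x, y))
    nx = math.sqrt(sum(a * a for a in x))
    ny = math.sqrt(sum(b * b for b in y))
    return dot / (nx * ny)

def is_valid_match(det_feat, det_center, det_width, trk_feat, trk_center,
                   simi_thr=0.5, dist_thr=1.1):
    """True when the pair passes both the distance gate and similarity test."""
    d = math.dist(det_center, trk_center)
    if d / det_width > dist_thr:     # distance gate fails: invalid match
        return False
    return cosine_similarity(det_feat, trk_feat) > simi_thr
```

Note the distance gate runs first, so a visually similar but far-away track is rejected before the similarity comparison, matching the order described in the text.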
In the second embodiment, the specific content of S2 is as follows:
if the track container PathTrack has history information of the Detection target Detection, a Kalman filter is used to predict the history information in real time for the position information (including the central position, the aspect ratio and the height of the Detection target Detection and the corresponding speed information predictDetection in the image coordinate) of the Detection target Detection to appear in the current frame, so as to obtain a predicted rectangular area PreRect. Wherein, the Kalman filter adopts a uniform velocity model and a linear observation model.
The rectangular area Rect of the actual detection target Detection is taken, and the position distance to the predicted rectangular area PreRect of every track container holding the detection target is calculated, yielding a corresponding sequence of distance relations. The position distance calculation method of the invention is: calculate the intersection-over-union of the predicted rectangular area PreRect and the rectangular area Rect, then take the difference between 1 and the intersection-over-union as the distance dist between the actual detection target and the predicted detection target. The specific formula is:

dist = 1 - area(PreRect ∩ Rect) / area(PreRect ∪ Rect)
The content feature similarity between the detection target Detection and the track container PathTrack corresponding to the minimum distance in the sequence is calculated and compared against the first threshold SimiThr of the first embodiment; if the similarity is greater than the first threshold SimiThr, the pair is treated as a valid match and the method jumps to S4; otherwise it is treated as an invalid match.
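The reconstructed position distance can be sketched directly. Boxes are assumed to be (x, y, w, h) tuples and the helper names are illustrative, not from the patent.

```python
# Sketch of the position distance used in S2's second embodiment:
# dist = 1 - IoU(PreRect, Rect), so identical boxes have distance 0.
def iou(a, b):
    """Intersection-over-union of two (x, y, w, h) boxes."""
    ax1, ay1, ax2, ay2 = a[0], a[1], a[0] + a[2], a[1] + a[3]
    bx1, by1, bx2, by2 = b[0], b[1], b[0] + b[2], b[1] + b[3]
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = a[2] * a[3] + b[2] * b[3] - inter
    return inter / union if union > 0 else 0.0

def iou_distance(pre_rect, rect):
    """Distance between predicted and actual boxes, as 1 minus their IoU."""
    return 1.0 - iou(pre_rect, rect)
```

Because overlap drives the distance rather than center offset alone, this gate tolerates small prediction errors for large targets while staying strict for small ones.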
In the second embodiment, Kalman prediction is used to analyze the historical track data and predict the position; the predicted position is matched against the target's position, and the matched pair is then subjected to an image content similarity constraint, further improving matching reliability.
In the third embodiment, the specific content of S2 may also be a superposition of the first and second embodiments, as follows:
if the track container PathTrack has history information of the Detection target Detection, a Kalman filter is used to predict the history information in real time for the position information (including the central position, the aspect ratio and the height of the Detection target Detection and the corresponding speed information predictDetection in the image coordinate) of the Detection target Detection to appear in the current frame, so as to obtain a predicted rectangular area PreRect. Wherein, the Kalman filter adopts a constant speed model and a linear observation model.
And extracting image content characteristics of the Detection target Detection, and performing content characteristic similarity calculation with the image content characteristics of the MsgBox information stored in all track containers PathTrack. Wherein, two N-dimensional image content characteristics X and Y are recorded as follows: x (X) 1 ,x 2 ,...,x N ),Y(y 1 ,y 2 ,...,y N ) The corresponding image content similarity between X and Y is calculated by the following formula:and according to a similarity calculation formula of the content features, obtaining a similarity result sequence of the image content features corresponding to the Detection and MsgBox information stored in the PathTrack. Finding out the maximum similarity value and the corresponding relation between Detection and PathTrack from the similarity value sequence, wherein the central position of the Detection is (x) det ,y det ) The position information of the last MsgBox center in PathTrack is (x) last ,y last ) The corresponding distance calculation formula:an invalid match is considered if the ratio of the corresponding distance D to the width dimension information of Detection is greater than the corresponding distance threshold condition distThr. Otherwise, comparing the maximum similarity value with a first threshold value, if the maximum similarity value is greater than the first threshold value SimiThr, determining that the maximum similarity value is effective matching, and jumping to S4; otherwise, it is considered an invalid match. Preferably, the distance threshold distThr =1.1 and the first threshold value simitr =0.5.
If the Detection target Detection is invalid and matched, further detecting the rectangular area Rect where the actual Detection target Detection is located, and respectively calculating the position distance with the predicted rectangular area Rect of all the track containers where the Detection target Detection exists to obtain the corresponding distance relation sequence. Here, the method for calculating the position distance according to the present invention includes: calculating the intersection ratio of the prediction rectangular region PreRect and the rectangular region Rect, and then taking the difference between 1 and the intersection ratio as the distance dist between an actual detection target and a prediction detection target; the concrete formula is as follows:
calculating the content feature similarity of the track container PathTrack corresponding to the minimum distance from the distance relation sequence and the Detection target Detection, comparing the content feature similarity with the first threshold value SimiThr, if the maximum similarity is greater than the first threshold value SimiThr, determining that the content feature similarity is effective matching, and jumping to S4; otherwise, an invalid match is considered.
Compared with the first and second embodiments, the third embodiment splits the feature matching into two stages, adds a content feature similarity constraint to the matching result of each stage, and outputs matches that are true and reliable.
S3, for unmatched track containers PathTrack, further calculating the change ratio between the historical track speed and the real-time speed, the angle between the motion direction of the detection target and that of the historical track, and the width change ratio; if all three fall within their given threshold intervals, the pair is treated as a valid match. The specific content is as follows:
For the unmatched track containers PathTrack, on one hand, the track container PTrack_Cont with the maximum content feature similarity to the detection target Detection is taken, and the corresponding similarity value Simi_Cont is recorded. It is worth noting that if the similarity value Simi_Cont is less than the threshold 0.3, PTrack_Cont is judged an invalid match and the method jumps to S1.
On the other hand, the Euclidean distances between the detection target and all track containers are calculated, the track container PTrack_Euc corresponding to the minimum distance is taken, and the content feature similarity value Simi_Euc between PTrack_Euc and the detection target is calculated. If the ratio of the minimum distance to the width of Detection is greater than a given threshold MoveDist, preferably MoveDist = 3.3, PTrack_Euc is judged an invalid match and the method jumps to S1. The Euclidean distance dist here is calculated as:

dist = √((x_new - x_hist)² + (y_new - y_hist)²)

where dist is the Euclidean distance; (x_new, y_new) are the coordinates of the center point of the rectangular area Rect; and (x_hist, y_hist) are the coordinates of the center point of the predicted rectangular area PreRect.
S31, if PTrack_Cont and PTrack_Euc are the same, and Simi_Cont is greater than or equal to the similarity threshold ValidSimiThr of a valid match, preferably ValidSimiThr = 0.65: if PTrack_Cont holds two or more frames of history information MsgBox, the change ratio between the historical track speed V_hist and the real-time speed V_cur, the motion angle Angle between the detection target and the motion direction of the historical track, and the width change ratio are calculated; if all three fall within their corresponding given threshold intervals, the pair is treated as a valid match and the method jumps to S4. Specifically, the variation between V_cur and V_hist is analyzed: the change multiple must lie within the given parameter threshold interval [1/VThr, VThr], preferably VThr = 2; at the same time, the direction angle Angle must be smaller than a given parameter threshold AngleThr, preferably AngleThr = 45°. If either condition fails, the pair is treated as an invalid match; otherwise the width change ratio, i.e. the ratio of the width size information in the last MsgBox to the width of the rectangular area Rect, is calculated, and if it lies within the given change threshold interval [1/ChangeRate, ChangeRate], preferably ChangeRate = 1.5, the pair is treated as a valid match and the method jumps to S4.

If the history information of PTrack_Cont contains only one frame, the width change ratio is calculated directly, and if it lies within the given threshold interval [1/ChangeRate, ChangeRate], the pair is treated as a valid match and the method jumps to S4. The specific calculation methods for the historical speed V_hist, the real-time speed V_cur and the motion angle Angle are given below.
The calculation method of the historical and real-time track speeds is as follows:

V_hist = (R_t - R_{t-Δt}) / Δt, where R_t is the position information of the last frame of the MsgBox information, and R_{t-Δt} is the position information of the penultimate MsgBox frame. Denoting the center of R_t as (x_t, y_t) and the center of R_{t-Δt} as (x_{t-Δt}, y_{t-Δt}), the displacement is calculated as:

(R_t - R_{t-Δt}) = √((x_t - x_{t-Δt})² + (y_t - y_{t-Δt})²)

and Δt is the time interval between the two history frames.

V_cur = (R_cur - R_t) / Δt_cur, where R_t is the position information of the last frame of the MsgBox information and R_cur is the position information of Detection. Denoting the center of R_cur as (x_cur, y_cur), the displacement is calculated as:

(R_cur - R_t) = √((x_cur - x_t)² + (y_cur - y_t)²)

and Δt_cur is the time interval between the detection target time and the last frame of the history information.
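The two speed formulas reduce to one displacement-over-time helper. This is a sketch under the reconstructed formulas; the function name and the example coordinates are illustrative.

```python
# Sketch of the reconstructed speed formulas: V_hist from the last two
# history frames, V_cur from the last history frame and the detection.
import math

def speed(r_from, r_to, dt):
    """Displacement magnitude between two center points over time dt."""
    return math.dist(r_from, r_to) / dt

# V_hist = |R_t - R_{t-dt}| / dt over the last two history frames,
# V_cur = |R_cur - R_t| / dt_cur from the last history frame to the detection.
v_hist = speed((0.0, 0.0), (4.0, 3.0), 1.0)
v_cur = speed((4.0, 3.0), (12.0, 9.0), 2.0)
```

With these example points both speeds come out equal, so the S31 change-ratio gate [1/VThr, VThr] would pass.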
The calculation method of the angle between the motion direction of the detection target and that of the historical track is as follows:

cos(Angle) = (a · b) / (|a| |b|)

where a is the vector from the center point of R_t to the center point of R_cur, and b is the vector from the center point of R_{t-Δt} to the center point of R_t.
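The motion angle can be sketched directly from the reconstructed cosine formula. The function name and degree units are illustrative choices; the 45° threshold AngleThr from the text compares naturally against degrees.

```python
# Sketch of the reconstructed motion angle: the angle between the historical
# direction (R_{t-dt} -> R_t) and the new direction (R_t -> R_cur).
import math

def motion_angle(r_prev, r_t, r_cur):
    """Angle in degrees between (r_prev -> r_t) and (r_t -> r_cur)."""
    v1 = (r_t[0] - r_prev[0], r_t[1] - r_prev[1])   # historical direction
    v2 = (r_cur[0] - r_t[0], r_cur[1] - r_t[1])     # new direction
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    n1 = math.hypot(*v1)
    n2 = math.hypot(*v2)
    cos_a = max(-1.0, min(1.0, dot / (n1 * n2)))    # clamp rounding error
    return math.degrees(math.acos(cos_a))
```

A target continuing straight ahead yields 0°, and a right-angle turn yields 90°, which would fail the AngleThr = 45° gate.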
S32, if PTrack_Cont and PTrack_Euc are different: when the larger of Simi_Cont and Simi_Euc is greater than or equal to the given parameter threshold DirectMechThr, preferably DirectMechThr = 0.85, the width change ratio is calculated; if the width change ratio lies within the given threshold interval [1/ChangeRate, ChangeRate], the pair is treated as a valid match and the method jumps to S4. When the larger of Simi_Cont and Simi_Euc is smaller than the given parameter threshold, the track container corresponding to the larger value is taken as PTrack_Better. As in S31, when the historical track MsgBox in PTrack_Better holds two or more frames, the V_cur, V_hist and Angle information is extracted; the change ratio between the historical track speed V_hist and the real-time speed V_cur, the motion angle Angle between the detection target and the motion direction of the historical track, and the width change ratio are calculated, and if all three fall within their corresponding given threshold intervals, the pair is treated as a valid match and the method jumps to S4.
S33, if the detection target Detection still has not found a validly matched track container, the track container corresponding to the smaller of Simi_Cont and Simi_Euc (i.e. the track container remaining from the matching in S32) is taken and denoted PTrack_Last. As in S31, when the historical track MsgBox in PTrack_Last holds two or more frames, the V_cur, V_hist and Angle information is extracted; the change ratio between the historical track speed V_hist and the real-time speed V_cur, the motion angle Angle between the detection target and the motion direction of the historical track, and the width change ratio are calculated, and if all three fall within their corresponding given threshold intervals, the pair is treated as a valid match and the method jumps to S4. Otherwise, the method jumps to S1 and a new track container is opened.
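The multi-dimensional gate shared by S31, S32 and S33 can be sketched as a single predicate. This is a minimal sketch, assuming the threshold values named in the text (VThr = 2, AngleThr = 45°, ChangeRate = 1.5); the function name and argument order are illustrative.

```python
# Sketch of the S31-S33 gate: a candidate pair is kept only if the speed
# change ratio, the motion angle, and the width change ratio all fall
# inside their threshold intervals.
def gates_pass(v_hist, v_cur, angle_deg, w_hist, w_cur,
               v_thr=2.0, angle_thr=45.0, change_rate=1.5):
    speed_ratio = v_cur / v_hist          # must lie in [1/VThr, VThr]
    width_ratio = w_hist / w_cur          # must lie in [1/ChangeRate, ChangeRate]
    return (1.0 / v_thr <= speed_ratio <= v_thr
            and angle_deg < angle_thr
            and 1.0 / change_rate <= width_ratio <= change_rate)
```

Requiring all three constraints together is what keeps a high-similarity but physically implausible candidate (e.g. one implying a sudden reversal or a fourfold speed jump) from being accepted.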
S4, storing each validly matched detection target into its corresponding track container. Specifically, if a track container is matched with a new detection target Detection, the MsgBox recording the detection information is pushed onto the stack of the track container PathTrack in sequence; the track's life counter is reset to 0, and the state of the Kalman state matrix is updated. If no new detection target Detection is matched, the life counter is incremented by 1; when the track's life counter reaches MaxAge, the track is ended.
It will be evident to those skilled in the art that the invention is not limited to the details of the foregoing illustrative embodiments, and that the present invention may be embodied in other specific forms without departing from the spirit or essential attributes thereof. The present embodiments are therefore to be considered in all respects as illustrative and not restrictive, the scope of the invention being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein.
Claims (7)
1. A video multi-target tracking method based on position prediction is characterized by comprising the following steps:
S1, recording the time information, position information and content features of a detection target;
S2, calculating the content feature similarity between the detection target and all track containers, and, if the maximum similarity value is greater than a first threshold, treating the detection target and the corresponding track container as a valid match;
if a track container holds historical record information for the detection target, predicting in real time the position at which the detection target should appear in the current frame, obtaining a predicted rectangular area;
taking the rectangular area of the actual detection target, calculating the intersection-over-union of the predicted rectangular area and that rectangular area, and taking the difference between 1 and the intersection-over-union as the distance between the actual detection target and the predicted detection target; calculating the content feature similarity between the track container corresponding to the minimum distance and the detection target and comparing it against the first threshold, and, if the similarity is greater than the first threshold, treating the pair as a valid match;
S3, for the unmatched track containers, further calculating the change ratio between the historical track speed and the real-time speed, the angle between the motion direction of the detection target and that of the historical track, and the width change ratio, and treating the pair as a valid match if all three fall within their given threshold intervals; the width change ratio is the ratio of the width of the rectangular area of the last-frame detection target in the track container to the width of the rectangular area of the current detection target;
and S4, storing the effectively matched detection target into a corresponding track container.
2. The video multi-target tracking method based on position prediction according to claim 1, wherein S3 specifically comprises:
for the unmatched track containers, taking the track container PTrack_Cont having the maximum content feature similarity with the detection target and recording the corresponding similarity value Simi_Cont; calculating Euclidean distances between the centre-point coordinates of the detection target and the centre-point coordinates of the prediction rectangular areas of all track containers, wherein the prediction rectangular areas are obtained by predicting in real time, with a Kalman filter, the position at which the detection target should appear in the current frame from the historical record information of the track containers; taking the track container PTrack_Euc corresponding to the minimum distance value, and calculating the content feature similarity value Simi_Euc between the track container PTrack_Euc and the detection target;
S31, if PTrack_Cont and PTrack_Euc are the same and Simi_Cont meets the given parameter threshold, calculating the change ratio between the historical speed and the real-time speed of the track, the motion angle between the detection target and the direction of motion of the historical track, and the width change ratio, and if the three are within the corresponding given threshold intervals, determining a valid match;
S32, if PTrack_Cont and PTrack_Euc are different: when the larger of Simi_Cont and Simi_Euc is greater than or equal to a given parameter threshold, calculating the width change ratio, and if the width change ratio is within the given threshold interval, determining a valid match; when the larger of Simi_Cont and Simi_Euc is smaller than the given parameter threshold, taking the track container corresponding to the larger of the two, calculating the change ratio between the historical speed and the real-time speed of the track, the motion angle between the detection target and the direction of motion of the historical track, and the width change ratio, and if the three are within the corresponding given threshold intervals, determining a valid match;
S33, if the detection target still has no validly matched track container, taking the track container corresponding to the smaller of Simi_Cont and Simi_Euc, calculating the change ratio between the historical speed and the real-time speed of the track, the motion angle between the detection target and the direction of motion of the historical track, and the width change ratio, and if the three are within the corresponding given threshold intervals, determining a valid match.
3. The video multi-target tracking method based on position prediction according to claim 1 or 2, wherein the historical speed and the real-time speed of the track are calculated as follows:
V_hist = (R_t − R_{t−Δt}) / Δt, wherein R_t is the position information of the last frame of the historical record information, R_{t−Δt} is the position information of the second-to-last frame of the historical record information, and Δt is the time interval between the two frames of historical record information;
V_cur = (R_cur − R_t) / Δt_cur, wherein R_t is the position information of the last frame of the historical record information, R_cur is the position information of the detection target, and Δt_cur is the time interval between the time of the detection target and the last frame of the historical record information.
4. The video multi-target tracking method based on position prediction according to claim 3, wherein the motion angle between the detection target and the direction of motion of the historical track is calculated as follows:
5. The method according to claim 2, wherein in step S31, if the historical record information of PTrack_Cont contains only one frame, the width change ratio is calculated directly, and if the width change ratio is within the given threshold interval, a valid match is determined.
6. The video multi-target tracking method based on position prediction as claimed in claim 1, wherein the content features are extracted by a deep convolutional neural network.
7. The video multi-target tracking method based on position prediction as claimed in claim 1, wherein the position information comprises a center position, an aspect ratio and a height of a detection target.
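The distance and velocity quantities recited in claims 1 and 3 can be illustrated with a short sketch. This is an assumed formulation, not part of the claims: the `(x1, y1, x2, y2)` box format and the function names are hypothetical.

```python
def iou_distance(box_a, box_b):
    """Claim 1 distance: one minus the intersection-over-union of two
    axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return 1.0 - inter / union if union else 1.0

def velocities(r_prev, r_last, r_cur, dt_hist, dt_cur):
    """Claim 3 velocities, applied per coordinate of the centre point:
    V_hist = (R_t - R_{t-dt}) / dt, from the last two history frames;
    V_cur  = (R_cur - R_t) / dt_cur, from the detection and the last frame."""
    v_hist = tuple((a - b) / dt_hist for a, b in zip(r_last, r_prev))
    v_cur = tuple((a - b) / dt_cur for a, b in zip(r_cur, r_last))
    return v_hist, v_cur
```

Identical boxes give a distance of 0, disjoint boxes a distance of 1, so smaller values mean better positional agreement.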
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211130202.9A CN115311330B (en) | 2019-10-11 | 2019-10-11 | Video multi-target tracking method based on position prediction |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211130202.9A CN115311330B (en) | 2019-10-11 | 2019-10-11 | Video multi-target tracking method based on position prediction |
CN201910964726.XA CN110675432B (en) | 2019-10-11 | 2019-10-11 | Video multi-target tracking method based on multi-dimensional feature fusion |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910964726.XA Division CN110675432B (en) | 2019-10-11 | 2019-10-11 | Video multi-target tracking method based on multi-dimensional feature fusion |
Publications (2)
Publication Number | Publication Date |
---|---|
CN115311330A true CN115311330A (en) | 2022-11-08 |
CN115311330B CN115311330B (en) | 2023-04-07 |
Family
ID=69081635
Family Applications (3)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910964726.XA Active CN110675432B (en) | 2019-10-11 | 2019-10-11 | Video multi-target tracking method based on multi-dimensional feature fusion |
CN202211130202.9A Active CN115311330B (en) | 2019-10-11 | 2019-10-11 | Video multi-target tracking method based on position prediction |
CN202211130193.3A Active CN115311329B (en) | 2019-10-11 | 2019-10-11 | Video multi-target tracking method based on double-link constraint |
Country Status (1)
Country | Link |
---|---|
CN (3) | CN110675432B (en) |
Families Citing this family (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111914754B (en) * | 2020-08-03 | 2023-06-30 | 杭州云栖智慧视通科技有限公司 | Image content similarity measurement method and device and computer equipment |
CN112184769B (en) * | 2020-09-27 | 2023-05-02 | 上海高德威智能交通系统有限公司 | Method, device and equipment for identifying tracking abnormality |
CN114913198A (en) * | 2021-01-29 | 2022-08-16 | 清华大学 | Multi-target tracking method and device, storage medium and terminal |
CN113112526B (en) * | 2021-04-27 | 2023-09-22 | 北京百度网讯科技有限公司 | Target tracking method, device, equipment and medium |
CN114155273B (en) * | 2021-10-20 | 2024-06-04 | 浙江大立科技股份有限公司 | Video image single-target tracking method combining historical track information |
CN113971216B (en) * | 2021-10-22 | 2023-02-03 | 北京百度网讯科技有限公司 | Data processing method and device, electronic equipment and memory |
CN114329063B (en) * | 2021-10-29 | 2024-06-11 | 腾讯科技(深圳)有限公司 | Video clip detection method, device and equipment |
CN117315421A (en) * | 2023-09-26 | 2023-12-29 | Unit 91977 of the Chinese People's Liberation Army | Method and device for predicting flight path of offshore target
CN117495917B (en) * | 2024-01-03 | 2024-03-26 | 山东科技大学 | Multi-target tracking method based on JDE multi-task network model |
CN118279568A (en) * | 2024-05-31 | 2024-07-02 | 西北工业大学 | Multi-target identity judging method for distributed double-infrared sensor time sequence twin network |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20110081043A1 (en) * | 2009-10-07 | 2011-04-07 | Sabol Bruce M | Using video-based imagery for automated detection, tracking, and counting of moving objects, in particular those objects having image characteristics similar to background |
CN104094279A (en) * | 2014-04-30 | 2014-10-08 | 中国科学院自动化研究所 | Large-range-first cross-camera visual target re-identification method |
EP2858008A2 (en) * | 2013-09-27 | 2015-04-08 | Ricoh Company, Ltd. | Target detecting method and system |
CN104915970A (en) * | 2015-06-12 | 2015-09-16 | 南京邮电大学 | Multi-target tracking method based on track association |
CN109344712A (en) * | 2018-08-31 | 2019-02-15 | 电子科技大学 | A kind of road vehicle tracking |
CN110135314A (en) * | 2019-05-07 | 2019-08-16 | 电子科技大学 | A kind of multi-object tracking method based on depth Trajectory prediction |
Family Cites Families (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104424648B (en) * | 2013-08-20 | 2018-07-24 | 株式会社理光 | Method for tracing object and equipment |
CN105261035B (en) * | 2015-09-15 | 2018-05-11 | 杭州中威电子股份有限公司 | A kind of highway motion target tracking method and device |
CN107291216A (en) * | 2016-04-05 | 2017-10-24 | 中兴通讯股份有限公司 | A kind of mobile terminal method for tracking target, device and mobile terminal |
CN106845385A (en) * | 2017-01-17 | 2017-06-13 | 腾讯科技(上海)有限公司 | The method and apparatus of video frequency object tracking |
WO2019006632A1 (en) * | 2017-07-04 | 2019-01-10 | 深圳大学 | Video multi-target tracking method and device |
CN108460787B (en) * | 2018-03-06 | 2020-11-27 | 北京市商汤科技开发有限公司 | Target tracking method and apparatus, electronic device, program, and storage medium |
CN109191497A (en) * | 2018-08-15 | 2019-01-11 | 南京理工大学 | A kind of real-time online multi-object tracking method based on much information fusion |
CN110276783B (en) * | 2019-04-23 | 2021-01-08 | 上海高重信息科技有限公司 | Multi-target tracking method and device and computer system |
CN110084836B (en) * | 2019-04-26 | 2022-03-04 | 西安电子科技大学 | Target tracking method based on deep convolution characteristic hierarchical response fusion |
- 2019-10-11 CN CN201910964726.XA patent/CN110675432B/en active Active
- 2019-10-11 CN CN202211130202.9A patent/CN115311330B/en active Active
- 2019-10-11 CN CN202211130193.3A patent/CN115311329B/en active Active
Non-Patent Citations (4)
Title |
---|
NICOLAI WOJKE ET AL: "Simple Online and Realtime Tracking with a Deep Association Metric", arXiv:1703.07402v1 * |
LIU, JUNXUE ET AL: "Real-time tracking of multiple moving targets based on improved motion history images", Journal of Computer Applications * |
LI, CHAO ET AL: "A fast implementation method for template matching", Spacecraft Recovery & Remote Sensing * |
DU, YONGSHENG: "Multi-target matching and tracking method for video networks based on motion features", China Doctoral and Master's Dissertations Full-text Database, Information Science and Technology * |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115965657A (en) * | 2023-02-28 | 2023-04-14 | 安徽蔚来智驾科技有限公司 | Target tracking method, electronic device, storage medium, and vehicle |
CN115965657B (en) * | 2023-02-28 | 2023-06-02 | 安徽蔚来智驾科技有限公司 | Target tracking method, electronic device, storage medium and vehicle |
Also Published As
Publication number | Publication date |
---|---|
CN115311329A (en) | 2022-11-08 |
CN110675432A (en) | 2020-01-10 |
CN115311329B (en) | 2023-05-23 |
CN115311330B (en) | 2023-04-07 |
CN110675432B (en) | 2022-11-08 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||