CN110443829A - Anti-occlusion tracking algorithm based on motion features and similarity features - Google Patents
Anti-occlusion tracking algorithm based on motion features and similarity features
- Publication number: CN110443829A
- Application number: CN201910715630.XA
- Authority: CN (China)
- Prior art keywords: target, region, feature, information, motion
- Prior art date: 2019-08-05
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
- G06T7/246—Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Image Analysis (AREA)
Abstract
The present invention relates to the field of anti-occlusion tracking and discloses an anti-occlusion tracking algorithm based on motion features and similarity features, comprising the following steps: A, Target (target image): the object detection result contains no excess background area, and the target image is updated every frame; B, Search region. This anti-occlusion tracking algorithm based on motion features and similarity features has high robustness to occlusion: when the target is fully occluded, most occluders are first screened out by similarity information, interference from different motion trajectories is then screened out by the target's motion direction, occluding objects that share the same trajectory but move at a different speed are screened out by the target's speed information, and the position in the current frame is finally predicted from the target's own motion information; similarity matching is resumed to locate the target once the occluder has been passed.
Description
Technical field
The present invention relates to the field of anti-occlusion tracking, and in particular to an anti-occlusion tracking algorithm based on motion features and similarity features.
Background technique
Existing tracking algorithms fall roughly into two classes: one performs similarity matching on deep feature information, the other predicts the target position from motion information. The tracking algorithms with the best accuracy at present all perform similarity matching with deep-learning features, for example the currently best-performing SiamRPN family, including SiamRPN, DaSiamRPN and SiamRPN++. These networks all adopt the Siamese (twin) network structure: deep-learning features are extracted separately from the target and from the search region, and the position in the search region with the highest matching similarity to the target is taken as the tracking result. There is also a class of tracking algorithms based on motion-information prediction, of which the most representative is Kalman filtering. It performs no feature extraction, but predicts the position at which the target will appear next from the target's motion state, and it continually accumulates the target's motion information to improve prediction accuracy. Such a prediction can be computed before the next frame appears, that is, without using any features of the target itself.
Considering these two kinds of tracking algorithm, it is not hard to see that a tracker based on similarity matching is highly accurate, but it relies on the target appearing completely in the current frame. When the target is occluded, its features are lost in the current frame, such a tracker fails, and the target features can only be extracted again after the target reappears; in other words, the target is lost. Because only the image features of the target are used, training does not need video sequences as the training set; detection samples are used as the network input instead, with the target of a detection sample as one network branch and the detection sample as the region to be searched, the position with the highest matching similarity being taken as the tracking result. This training scheme also means that such networks exploit only the static features of the image and make no use of the target's motion-sequence features. Kalman filtering, which uses only the target's motion-state information, has very poor accuracy, because it is merely a prediction of the motion state and does not extract the target's own image features; tracking is therefore unstable under external interference such as illumination changes and occlusion, and when the target deforms the tracking output box contains too much background, i.e. the tracking box is positioned inaccurately.
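To make the contrast concrete, the sketch below shows a minimal constant-velocity Kalman filter of the kind referred to above: it predicts the next position purely from accumulated motion, before the frame is seen and without any image features. This is a generic illustration, not the formulation used in this invention; the state layout and the noise values are assumptions.

```python
import numpy as np

# Minimal constant-velocity Kalman filter: predicts the next target position
# purely from accumulated motion, without using any image features.
class ConstantVelocityKF:
    def __init__(self, x, y, dt=1.0):
        self.s = np.array([x, y, 0.0, 0.0], dtype=float)   # state: [x, y, vx, vy]
        self.P = np.eye(4) * 10.0                           # state covariance (assumed)
        self.F = np.array([[1, 0, dt, 0],
                           [0, 1, 0, dt],
                           [0, 0, 1, 0],
                           [0, 0, 0, 1]], dtype=float)      # constant-velocity transition model
        self.H = np.array([[1, 0, 0, 0],
                           [0, 1, 0, 0]], dtype=float)      # only the position is observed
        self.Q = np.eye(4) * 0.01                           # process noise (assumed)
        self.R = np.eye(2) * 1.0                            # measurement noise (assumed)

    def predict(self):
        self.s = self.F @ self.s
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.s[:2]                                   # predicted (x, y) before the frame is seen

    def update(self, zx, zy):
        z = np.array([zx, zy], dtype=float)
        innovation = z - self.H @ self.s
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)            # Kalman gain
        self.s = self.s + K @ innovation
        self.P = (np.eye(4) - K @ self.H) @ self.P
```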
Summary of the invention
The present invention provides an anti-occlusion tracking algorithm based on motion features and similarity features, which has the advantage of high robustness to occlusion together with the high tracking accuracy of similarity features used alone, and which solves the problems raised in the background above.
The invention provides the following technical scheme: an anti-occlusion tracking algorithm based on motion features and similarity features, comprising the following steps:
A, Target (target image): the object detection result contains no excess background area, and the target image is updated every frame;
B, Search region: an extension of the previous-frame target position, usually with the width and height each doubled; the previous-frame target position is located in the current frame, and the image region obtained by doubling the width and height is taken as the search region (a cropping sketch is given after this list);
C, SiamRPN_net (twin network): a convolutional neural network (CNN) extracts features from the target image and from the search region separately, candidate regions are then generated from the search region, and finally the target image features are matched against these candidate regions, outputting a matching score and a corresponding region position for each candidate region;
D, Pose-stage1: SiamRPN_net outputs a target position; this is the target position predicted from the similarity feature alone, not the final output, and still has to be refined by the motion-feature prediction;
E, dx, dy, w, h: the result predicted from the similarity feature is split into two parts, one being the moving distance dx, dy of the target relative to its position in the previous frame, and the other being the predicted width w and height h of the target in the current frame;
F, LSTM (long short-term memory network): the LSTM is a recurrent neural network that selectively retains part of the input information in the network, so the output of the LSTM is the joint result of the current input and the information of preceding frames; the input of the first branch is dx, dy, i.e. the target's displacement, and accumulating the displacement information yields the target's speed information; since the displacement of the target between consecutive frames is small, the target can be regarded as undergoing uniformly accelerated motion;
G, target_toward (target orientation): this branch is trained directly and separately from the target; since the target is updated every frame, a branch that predicts the target orientation is created, and the result is fed into a CNN network to extract the final semantic information;
H, pose_stage2: a CNN network extracts abstract semantic information from the three inputs of target speed, target shape and target orientation, and finally outputs the predicted position of the target in the current frame.
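As a rough illustration of step B, the following sketch crops a search region whose width and height are each double those of the previous-frame target box, centred on the previous position. The clamping to the image border is an assumption; the invention does not specify how border cases are handled.

```python
def search_region(prev_box, img_w, img_h):
    """prev_box = (cx, cy, w, h) of the target in the previous frame.
    Returns a crop (x0, y0, x1, y1) whose width and height are each
    doubled relative to the target box, centred on the previous position."""
    cx, cy, w, h = prev_box
    sw, sh = 2.0 * w, 2.0 * h                      # width and height each doubled
    x0 = max(0.0, cx - sw / 2.0)                   # clamping to the image is an assumption
    y0 = max(0.0, cy - sh / 2.0)
    x1 = min(float(img_w), cx + sw / 2.0)
    y1 = min(float(img_h), cy + sh / 2.0)
    return x0, y0, x1, y1
```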
Preferably, in step B, image regions of various positions and shapes within the search region are matched, and the position with the highest matching degree to the target is taken as the tracking position.
Preferably, in step C the target (target image) and search_region (search region) of the SiamRPN_net twin network are each fed into a CNN (convolutional neural network) for deep feature extraction, the output feature dimensions being width * height * number of feature maps; the output features are fed into four conv (convolutional) layers respectively, with output dimensions again of width * height * number of feature maps, where k denotes the number of candidate boxes, the candidate boxes being a series of regions of different positions and shapes in the search region; these are fed into a further conv layer, the first branch representing the matching scores over all candidate regions and the second branch the corresponding positions of the candidate regions.
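The SiamRPN-style matching described above can be sketched as follows: the template (target) features act as correlation kernels over the search-region features, giving 2k matching-score channels and 4k box channels for k candidate boxes per location. This follows the published SiamRPN design rather than any detail specific to this invention; it assumes a batch size of 1, and the channel counts and layer names are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SiamRPNHead(nn.Module):
    """Sketch of the two SiamRPN branches: a matching-score branch (2k channels
    per location: target / not-target) and a box branch (4k channels: offsets)."""
    def __init__(self, in_ch=256, k=5):
        super().__init__()
        self.k = k
        # Separate conv layers adapt the template and search features for each branch.
        self.cls_t = nn.Conv2d(in_ch, in_ch * 2 * k, 3)   # template -> classification kernel
        self.cls_s = nn.Conv2d(in_ch, in_ch, 3)           # search   -> classification feature
        self.reg_t = nn.Conv2d(in_ch, in_ch * 4 * k, 3)   # template -> regression kernel
        self.reg_s = nn.Conv2d(in_ch, in_ch, 3)           # search   -> regression feature

    def xcorr(self, search_f, kernel, out_ch):
        # Use the template feature as a correlation kernel over the search feature (batch size 1).
        b, c, h, w = kernel.shape
        kernel = kernel.view(out_ch, c // out_ch, h, w)
        return F.conv2d(search_f, kernel)

    def forward(self, target_f, search_f):
        cls = self.xcorr(self.cls_s(search_f), self.cls_t(target_f), 2 * self.k)
        reg = self.xcorr(self.reg_s(search_f), self.reg_t(target_f), 4 * self.k)
        return cls, reg   # per-candidate matching scores and candidate-box positions
```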
Preferably, the equation of motion in step F is S = [V0 + (Vt - V0)/2] * T, where the known quantities are S, the displacement, V0, the starting speed, and T, the time, and the unknown quantity is Vt, the current speed. The first branch finally outputs the target's speed, which is fed into a CNN network to extract the final semantic information. The second branch takes the width and height of the target as input; the LSTM selectively retains the width and height information over the whole target sequence, so the current shape of the target can be predicted and no sudden deformation, i.e. tracking-position error, occurs, the network giving a reasonable target width and height according to the accumulated shape information. The second branch finally outputs the target shape, which is fed into a CNN network to extract the final semantic information.
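As a worked illustration of the uniformly accelerated motion relation in step F, the snippet below solves the stated equation for the current speed; applying it per axis (dx and dy separately) is an assumption.

```python
def current_speed(S, V0, T):
    """Uniformly accelerated motion: S = (V0 + (Vt - V0) / 2) * T,
    i.e. displacement = average speed * time. Solving for the current speed:
        Vt = 2 * S / T - V0
    S and V0 may be applied per axis (dx and dy separately)."""
    return 2.0 * S / T - V0

# Example: a displacement of 6 px over 2 frames with a starting speed of
# 2 px/frame gives a current speed of 4 px/frame.
assert current_speed(6.0, 2.0, 2.0) == 4.0
```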
The present invention has the following beneficial effects:
1. The anti-occlusion tracking algorithm based on motion features and similarity features has high robustness to occlusion. When the target is fully occluded, most occluders are first screened out by the similarity information; interference from different motion trajectories can then be screened out by the target's motion direction; occluding objects sharing the same trajectory but moving at a different speed can be screened out by the target's speed information; finally the position in the current frame is predicted from the target's own motion information, and similarity matching is resumed to locate the target once the occluder has been passed (a schematic filtering sketch follows this list).
2. The anti-occlusion tracking algorithm based on motion features and similarity features combines the image depth features of the target with the motion information of the target, training on both as the target's features in the tracking algorithm. The target speed feature is obtained with an LSTM network structure; accumulating motion information in the network uses video sequences as the training set of the tracking network (previous tracking algorithms all used detection sample sets as the training set of the tracking network, with no target sequences entering the training), and the deformation feature of the target is obtained through the LSTM network structure, preventing instability of the predicted tracking box.
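The screening cascade of effect 1 can be sketched schematically as follows; the thresholds, the per-candidate fields and the plain difference used to compare directions are illustrative assumptions rather than values taken from the invention.

```python
def screen_candidates(cands, tgt, sim_thr=0.6, dir_thr=0.5, spd_thr=0.5):
    """Schematic occlusion-screening cascade (thresholds are assumed values).
    Each candidate is a dict with 'sim', 'direction', 'speed', 'pos';
    tgt carries the tracked target's accumulated direction and speed and a
    motion-based prediction for the current frame."""
    # 1. Screen out most occluders by appearance similarity.
    cands = [c for c in cands if c['sim'] >= sim_thr]
    # 2. Screen out distractors whose motion direction disagrees with the target's track
    #    (compared here as a plain difference for brevity).
    cands = [c for c in cands if abs(c['direction'] - tgt['direction']) <= dir_thr]
    # 3. Screen out objects on the same trajectory but moving at a different speed.
    cands = [c for c in cands if abs(c['speed'] - tgt['speed']) <= spd_thr]
    if cands:
        return max(cands, key=lambda c: c['sim'])['pos']   # best remaining match
    # 4. Fully occluded: fall back to the motion prediction until the occluder is passed.
    return tgt['predicted_pos']
```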
Description of the drawings
Fig. 1 is the CNN+LSTM tracking network of the present invention based on motion features and similarity features;
Fig. 2 shows SiamRPN of the present invention extracting the target similarity feature and the preliminary predicted position.
Specific embodiments
The technical solutions in the embodiments of the present invention will be described below clearly and completely with reference to the drawings in the embodiments of the present invention. Obviously, the described embodiments are only some of the embodiments of the present invention, not all of them. Based on the embodiments of the present invention, all other embodiments obtained by those of ordinary skill in the art without creative effort shall fall within the protection scope of the present invention.
Referring to Figs. 1-2, an anti-occlusion tracking algorithm based on motion features and similarity features comprises the following steps:
A, Target (target image): the object detection result contains no excess background area, and the target image is updated every frame;
B, Search region: an extension of the previous-frame target position, usually with the width and height each doubled; the previous-frame target position is located in the current frame and the width and height are each doubled to obtain the search-region image; by matching image regions of various positions and shapes within the search region, the position with the highest matching degree to the target is found and taken as the tracking position;
C, SiamRPN_net (twin network): a convolutional neural network (CNN) extracts features from the target image and from the search region separately, candidate regions are then generated from the search region, and finally the target image features are matched against these candidate regions, outputting a matching score and a corresponding region position for each candidate region. In the SiamRPN_net twin network, the target (target image) and search_region (search region) are each fed into a CNN (convolutional neural network) for deep feature extraction, with output feature dimensions of width * height * number of feature maps; the output features are fed into four conv (convolutional) layers respectively, with output dimensions again of width * height * number of feature maps, where k denotes the number of candidate boxes, the candidate boxes being a series of regions of different positions and shapes in the search region; these are fed into a further conv layer, the first branch representing the matching scores over all candidate regions and the second branch the corresponding positions of the candidate regions (a selection sketch follows this step).
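A small sketch of how the two outputs of step C can be combined into the stage-1 estimate of step D, i.e. taking the highest-scoring candidate region; the array layout is an assumption.

```python
import numpy as np

def best_candidate(scores, boxes):
    """scores: (N,) matching score per candidate region;
    boxes: (N, 4) candidate positions as (cx, cy, w, h).
    Returns the position and score of the highest-scoring candidate, which in
    step D serves as the similarity-only (stage-1) position estimate."""
    idx = int(np.argmax(scores))
    return boxes[idx], float(scores[idx])
```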
D, Pose-stage1: SiamRPN_net outputs a target position; this is the target position predicted from the similarity feature alone, not the final output, and still has to be refined by the motion-feature prediction;
E, dx, dy, w, h: the result predicted from the similarity feature is split into two parts, one being the moving distance dx, dy of the target relative to its position in the previous frame, and the other being the predicted width w and height h of the target in the current frame (a splitting sketch follows this step).
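Step E can be illustrated by the following sketch, which splits a stage-1 box into the displacement relative to the previous frame and the predicted size; the (cx, cy, w, h) box convention is an assumption.

```python
def split_stage1(pred_box, prev_box):
    """pred_box, prev_box: (cx, cy, w, h). Splits the stage-1 result into the
    displacement relative to the previous frame and the predicted size."""
    dx = pred_box[0] - prev_box[0]
    dy = pred_box[1] - prev_box[1]
    w, h = pred_box[2], pred_box[3]
    return (dx, dy), (w, h)   # (dx, dy) feeds the motion branch, (w, h) the shape branch
```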
F, LSTM (long short-term memory network): the LSTM is a recurrent neural network that selectively retains part of the input information in the network, so the output of the LSTM is the joint result of the current input and the information of preceding frames. The input of the first branch is dx, dy, i.e. the target's displacement; accumulating the displacement information yields the target's speed information, and since the displacement of the target between consecutive frames is small, the target can be regarded as undergoing uniformly accelerated motion, with the equation of motion S = [V0 + (Vt - V0)/2] * T, where the known quantities are S, the displacement, V0, the starting speed, and T, the time, and the unknown quantity is Vt, the current speed. The first branch finally outputs the target's speed, which is fed into a CNN network to extract the final semantic information. The second branch takes the width and height of the target as input; the LSTM selectively retains the width and height information over the whole target sequence, so the current shape of the target can be predicted and no sudden deformation, i.e. tracking-position error, occurs, the network giving a reasonable target width and height according to the accumulated shape information. The second branch finally outputs the target shape, which is fed into a CNN network to extract the final semantic information. The LSTM network has memory: given a tracking sequence as input, the motion state can be accumulated and stored in the network, something that previous deep-learning-based tracking algorithms did not use; existing tracking algorithms mostly use CNN networks, which extract features only from static images and have no way to exploit the target's dynamic motion features (a sketch of the two LSTM branches follows this step).
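A minimal sketch of the two LSTM branches of step F, written with standard PyTorch modules: one LSTM accumulates the per-frame displacements (dx, dy) into a speed feature, the other accumulates the per-frame width and height into a shape feature. The hidden sizes and the use of the final hidden state are assumptions for illustration.

```python
import torch
import torch.nn as nn

class MotionShapeLSTM(nn.Module):
    """Sketch of the two LSTM branches of step F: one accumulates the per-frame
    displacements (dx, dy) into a speed feature, the other accumulates the
    per-frame size (w, h) into a shape feature. Hidden sizes are assumptions."""
    def __init__(self, hidden=64):
        super().__init__()
        self.motion_lstm = nn.LSTM(input_size=2, hidden_size=hidden, batch_first=True)
        self.shape_lstm = nn.LSTM(input_size=2, hidden_size=hidden, batch_first=True)

    def forward(self, dxdy_seq, wh_seq):
        # dxdy_seq, wh_seq: (batch, frames, 2) sequences over the tracked video.
        _, (speed_h, _) = self.motion_lstm(dxdy_seq)   # final hidden state = accumulated speed feature
        _, (shape_h, _) = self.shape_lstm(wh_seq)      # final hidden state = accumulated shape feature
        return speed_h[-1], shape_h[-1]                # features passed on to the semantic CNNs
```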
G, target_toward (target orientation): this branch is trained directly and separately from the target; since the target is updated every frame, a branch that predicts the target orientation is created, and the result is fed into a CNN network to extract the final semantic information;
H, pose_stage2: a CNN network extracts abstract semantic information from the three inputs of target speed, target shape and target orientation, and finally outputs the predicted position of the target in the current frame (a fusion sketch follows this step). Sources of interference for a tracking algorithm include illumination, occlusion, excessively fast motion and complex backgrounds; in the multi-target tracking scenes of real applications in particular, targets shuttle back and forth across the video frame, and being occluded several times by one or more foreign objects of the same or a different kind is common. When the target is occluded, occlusion by foreign objects is screened out using the similarity and motion features, and the target position is predicted from the motion information.
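The stage-2 fusion of step H can be sketched as below. The invention describes this stage as a CNN over the speed, shape and orientation information; for brevity the sketch fuses three feature vectors with a small fully connected network, and the layer sizes are assumptions.

```python
import torch
import torch.nn as nn

class FusionHead(nn.Module):
    """Sketch of the stage-2 fusion of step H: the speed, shape and orientation
    features are combined and mapped to the final box (cx, cy, w, h).
    The invention describes this stage as a CNN; a small fully connected network
    is used here purely for brevity, and the layer sizes are assumptions."""
    def __init__(self, feat=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3 * feat, 128), nn.ReLU(),
            nn.Linear(128, 4),                     # predicted (cx, cy, w, h) in the current frame
        )

    def forward(self, speed_f, shape_f, orient_f):
        fused = torch.cat([speed_f, shape_f, orient_f], dim=-1)
        return self.net(fused)
```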
It should be noted that, in this document, relational terms such as first and second are used only to distinguish one entity or operation from another entity or operation, and do not necessarily require or imply any actual relationship or order between these entities or operations. Moreover, the terms "include", "comprise" or any other variant thereof are intended to cover a non-exclusive inclusion, so that a process, method, article or device comprising a series of elements includes not only those elements but also other elements not explicitly listed, or further includes elements inherent to such a process, method, article or device.
Although embodiments of the present invention have been shown and described, those of ordinary skill in the art will understand that various changes, modifications, replacements and variations can be made to these embodiments without departing from the principles and spirit of the present invention, and the scope of the present invention is defined by the appended claims.
Claims (4)
1. An anti-occlusion tracking algorithm based on motion features and similarity features, characterized by comprising the following steps:
A, Target (target image): the object detection result contains no excess background area, and the target image is updated every frame;
B, Search region: an extension of the previous-frame target position, usually with the width and height each doubled; the previous-frame target position is located in the current frame, and the image region obtained by doubling the width and height is taken as the search region;
C, SiamRPN_net (twin network): a convolutional neural network (CNN) extracts features from the target image and from the search region separately, candidate regions are then generated from the search region, and finally the target image features are matched against these candidate regions, outputting a matching score and a corresponding region position for each candidate region;
D, Pose-stage1: SiamRPN_net outputs a target position; this is the target position predicted from the similarity feature alone, not the final output, and still has to be refined by the motion-feature prediction;
E, dx, dy, w, h: the result predicted from the similarity feature is split into two parts, one being the moving distance dx, dy of the target relative to its position in the previous frame, and the other being the predicted width w and height h of the target in the current frame;
F, LSTM (long short-term memory network): the LSTM is a recurrent neural network that selectively retains part of the input information in the network, so the output of the LSTM is the joint result of the current input and the information of preceding frames; the input of the first branch is dx, dy, i.e. the target's displacement, and accumulating the displacement information yields the target's speed information; since the displacement of the target between consecutive frames is small, the target can be regarded as undergoing uniformly accelerated motion;
G, target_toward (target orientation): this branch is trained directly and separately from the target; since the target is updated every frame, a branch that predicts the target orientation is created, and the result is fed into a CNN network to extract the final semantic information;
H, pose_stage2: a CNN network extracts abstract semantic information from the three inputs of target speed, target shape and target orientation, and finally outputs the predicted position of the target in the current frame.
2. An anti-occlusion tracking algorithm based on motion features and similarity features, characterized in that in said step B, image regions of various positions and shapes within the search region are matched, and the position with the highest matching degree to the target is taken as the tracking position.
3. An anti-occlusion tracking algorithm based on motion features and similarity features, characterized in that in said step C, the target (target image) and search_region (search region) of the SiamRPN_net twin network are each fed into a CNN (convolutional neural network) for deep feature extraction, the output feature dimensions being width * height * number of feature maps; the output features are fed into four conv (convolutional) layers respectively, with output dimensions again of width * height * number of feature maps, where k denotes the number of candidate boxes, the candidate boxes being a series of regions of different positions and shapes in the search region; these are fed into a further conv layer, the first branch representing the matching scores over all candidate regions and the second branch the corresponding positions of the candidate regions.
4. An anti-occlusion tracking algorithm based on motion features and similarity features, characterized in that the equation of motion in said step F is S = [V0 + (Vt - V0)/2] * T, where the known quantities are S, the displacement, V0, the starting speed, and T, the time, and the unknown quantity is Vt, the current speed; the first branch finally outputs the target's speed, which is fed into a CNN network to extract the final semantic information; the second branch takes the width and height of the target as input, the LSTM selectively retains the width and height information over the whole target sequence, the current shape of the target can be predicted so that no sudden deformation, i.e. tracking-position error, occurs, the network giving a reasonable target width and height according to the accumulated shape information; the second branch finally outputs the target shape, which is fed into a CNN network to extract the final semantic information.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910715630.XA | 2019-08-05 | 2019-08-05 | Anti-occlusion tracking algorithm based on motion features and similarity features |
Publications (1)
Publication Number | Publication Date |
---|---|
CN110443829A | 2019-11-12 |
Family
ID=68433182
Family Applications (1)
Application Number | Status | Title | Priority Date | Filing Date |
---|---|---|---|---|
CN201910715630.XA | Pending | Anti-occlusion tracking algorithm based on motion features and similarity features | 2019-08-05 | 2019-08-05 |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110443829A (en) |
Patent Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2012092148A2 (en) * | 2010-12-30 | 2012-07-05 | Pelco Inc. | Scene activity analysis using statistical and semantic feature learnt from object trajectory data |
CN106022239A (en) * | 2016-05-13 | 2016-10-12 | 电子科技大学 | Multi-target tracking method based on recurrent neural network |
CN107122736A (en) * | 2017-04-26 | 2017-09-01 | 北京邮电大学 | A kind of human body based on deep learning is towards Forecasting Methodology and device |
CN107180226A (en) * | 2017-04-28 | 2017-09-19 | 华南理工大学 | A kind of dynamic gesture identification method based on combination neural net |
CN107330920A (en) * | 2017-06-28 | 2017-11-07 | 华中科技大学 | A kind of monitor video multi-target tracking method based on deep learning |
CN107705323A (en) * | 2017-10-13 | 2018-02-16 | 北京理工大学 | A kind of level set target tracking method based on convolutional neural networks |
CN108257158A (en) * | 2018-03-27 | 2018-07-06 | 福州大学 | A kind of target prediction and tracking based on Recognition with Recurrent Neural Network |
CN108520530A (en) * | 2018-04-12 | 2018-09-11 | 厦门大学 | Method for tracking target based on long memory network in short-term |
CN109740742A (en) * | 2019-01-14 | 2019-05-10 | 哈尔滨工程大学 | A kind of method for tracking target based on LSTM neural network |
CN109948611A (en) * | 2019-03-14 | 2019-06-28 | 腾讯科技(深圳)有限公司 | A kind of method and device that method, the information of information area determination are shown |
Non-Patent Citations (6)
Title |
---|
Amir Sadeghian et al.: "Tracking The Untrackable: Learning to Track Multiple Cues with Long-Term Dependencies", ICCV 2017 * |
Bo Li et al.: "High Performance Visual Tracking with Siamese Region Proposal Network", CVPR 2018 * |
Heng Fan et al.: "Siamese Cascaded Region Proposal Networks for Real-Time Visual Tracking", arXiv * |
Peter Ondruska et al.: "End-to-End Tracking and Semantic Segmentation Using Recurrent Neural Networks", arXiv * |
Peng Cong et al.: "ET-GM-PHD filtering algorithm based on SNN clustering with dynamic grid density", Journal of Projectiles, Rockets, Missiles and Guidance * |
Gao Jun et al.: "Moving target tracking method based on YOLO and RNN", Computer Engineering and Design * |
Cited By (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112308106A (en) * | 2019-11-15 | 2021-02-02 | 北京京邦达贸易有限公司 | Image labeling method and system |
CN111401285A (en) * | 2020-03-23 | 2020-07-10 | 北京迈格威科技有限公司 | Target tracking method and device and electronic equipment |
CN111401285B (en) * | 2020-03-23 | 2024-02-23 | 北京迈格威科技有限公司 | Target tracking method and device and electronic equipment |
CN111539987A (en) * | 2020-04-01 | 2020-08-14 | 上海交通大学 | Occlusion detection system and method based on discrimination model |
CN111539987B (en) * | 2020-04-01 | 2022-12-09 | 上海交通大学 | Occlusion detection system and method based on discrimination model |
CN112163473A (en) * | 2020-09-15 | 2021-01-01 | 郑州金惠计算机系统工程有限公司 | Multi-target tracking method and device, electronic equipment and computer storage medium |
CN112184769A (en) * | 2020-09-27 | 2021-01-05 | 上海高德威智能交通系统有限公司 | Tracking abnormity identification method, device and equipment |
CN112184769B (en) * | 2020-09-27 | 2023-05-02 | 上海高德威智能交通系统有限公司 | Method, device and equipment for identifying tracking abnormality |
CN112489084A (en) * | 2020-12-09 | 2021-03-12 | 重庆邮电大学 | Trajectory tracking system and method based on face recognition |
CN112489084B (en) * | 2020-12-09 | 2021-08-03 | 重庆邮电大学 | Trajectory tracking system and method based on face recognition |
CN115222771A (en) * | 2022-07-05 | 2022-10-21 | 北京建筑大学 | Target tracking method and device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
 | PB01 | Publication | |
 | SE01 | Entry into force of request for substantive examination | |
 | RJ01 | Rejection of invention patent application after publication | Application publication date: 20191112 |