CN103413295B - Video multi-target long-range tracking method - Google Patents
- Publication number
- CN103413295B CN103413295B CN201310292921.5A CN201310292921A CN103413295B CN 103413295 B CN103413295 B CN 103413295B CN 201310292921 A CN201310292921 A CN 201310292921A CN 103413295 B CN103413295 B CN 103413295B
- Authority
- CN
- China
- Prior art keywords
- target
- tracks
- tracking
- tree
- video
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Expired - Fee Related
Abstract
The present invention relates to a video multi-target long-range tracking method. The method comprises the following concrete steps. Step 1: extract target features from a large number of sample pictures to be tracked; first represent the set of feature-word term-frequency vectors of the sample target pictures with an M-tree; then perform M-tree retrieval on the targets to be matched using video search; finally verify the retrieval results with an affine transformation relationship, thereby matching the initial targets. Step 2: track the detected initial targets with a short-range tracker based on optical flow. Step 3: update the targets with an online-learned stereoscopic target modeling method, establishing a multi-pose, multi-view target model. Step 4: combine short-range tracking with target re-detection to achieve long-range tracking. By using online-learned stereoscopic target modeling, the method copes with target objects that change from moment to moment in network video and solves the detection problem posed by continually changing image characteristics, while the tracking framework that combines short-range tracking with target re-detection achieves long-range tracking of targets. The method can be widely used in many fields such as network information security, image multi-target detection, and video multi-target tracking.
Description
Technical field
The present invention relates to a video multi-target long-range tracking method.
Background technology
Internet video has increasingly become an important information source in people's lives, and its influence is gradually surpassing that of traditional television broadcasting; traditional broadcasters, in turn, are gradually moving toward Internet video. On the other hand, the freedom of distribution in Internet video has made content-safety problems increasingly prominent; these have become a focal issue of Internet video and have inhibited its sound development. Analyzing and managing the information content of network video has therefore become an important task of network information security.
Video target tracking estimates the true parameters of a target, such as position, velocity, acceleration, and motion trajectory, by analyzing a series of noisy historical observations in a video sequence. These observations may originate from the tracked target, the background, or system noise. Video target tracking provides the basic target parameters for subsequent video object classification and even for video behavior analysis and understanding, and it is one of the basic means and key technologies for guaranteeing network video content safety.
Target tracking methods are broadly divided into short-range trackers, which assume the target remains visible throughout the video, and long-range tracking, which combines tracking with detection. Short-range trackers typically follow one of two approaches: static template tracking and static discriminative tracking. Static template tracking models the target appearance with a fixed template (an image patch or a histogram) and takes the candidate image patch that best matches the template as the estimated target. Static discriminative tracking models the background of the moving target and then builds a binary classifier representing the decision boundary between target and background; the classifier is trained offline. Background modeling generally adopts one of two approaches. In the first, associated objects are searched for in the background and their motion is linked to the target's motion; such associated objects can help the tracking when the target disappears from the field of view or undergoes complex changes. In the second, the background is treated as negative samples that the tracker must distinguish from the target.
Long-range tracking methods that combine tracking and detection typically follow one of two approaches: "tracking-by-detection" and "detection-by-tracking". In tracking-by-detection, an offline-trained detector verifies the tracker's result; if the result fails verification, the target is found again by an exhaustive image search. In detection-by-tracking, the candidate targets detected in every frame undergo global trajectory optimization to determine the target position. Target detection methods themselves mainly comprise methods based on local image features and methods based on sliding windows. Detection based on local image features typically proceeds by feature detection, feature recognition, and model matching, generally using planar or 3D models. Detection based on sliding windows scans the input image with windows of different sizes and decides whether each scanned image patch contains the target object.
Although the above algorithms are relatively mature, each has problems in its respective application. Short-range trackers that assume the target is visible throughout the video require the target to be known in advance, and they fail when the target is occluded or disappears. Long-range tracking that combines tracking and detection relies on an offline-trained detector whose properties do not change during tracking; although adaptive discriminative trackers can cope with appearance changes, they cannot simultaneously hold the appearance information of a multi-pose, multi-view target, so they cannot perform robust long-range tracking in real, complex network video. Moreover, when applied to multi-target detection, the detection time of existing local-feature-based and sliding-window-based methods grows linearly with the number of sample targets to be detected; for large numbers of image targets the detection efficiency is very low.
Summary of the invention
Embodiments of the present invention provide a video multi-target long-range tracking method that can detect multiple targets quickly and accurately, establish a stable and efficient multi-pose, multi-view target model, and track targets over long ranges in real time.
The technical scheme of the present invention is a video multi-target long-range tracking method, characterized in that the method comprises the following steps:
Step 1: extract features from a large number of sample target pictures to be tracked; compute the feature-word term-frequency vectors of the sample target pictures with a clustering method; represent the set of vectorized features with an M-tree; perform M-tree retrieval on the targets to be matched using video search; and finally verify the retrieval results with an affine relationship to match the initial targets.
Step 2: track the detected initial targets with a short-range tracker based on optical flow.
Step 3: update the targets with online-learned stereoscopic target modeling, establishing a multi-pose, multi-view target model.
Step 4: combine short-range tracking with re-detection of targets in a staged tracking method to achieve real-time long-range tracking.
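The M-tree of step 1 is a metric index: each node keeps a routing object and a covering radius so that whole subtrees can be pruned with the triangle inequality during retrieval. The following is a minimal illustrative sketch, not the patent's implementation: a flat, one-level partition over term-frequency vectors with Euclidean distance, in which `build_index`, `nearest`, and all parameter names are assumptions introduced here.

```python
import numpy as np

def build_index(vectors, n_groups=4, seed=0):
    """Partition sample term-frequency vectors into groups, each described
    by a routing pivot and a covering radius (the routing entries an
    M-tree node keeps so whole subtrees can be pruned)."""
    rng = np.random.default_rng(seed)
    pivots = vectors[rng.choice(len(vectors), size=n_groups, replace=False)]
    assign = np.argmin(
        np.linalg.norm(vectors[:, None] - pivots[None], axis=2), axis=1)
    groups = []
    for g in range(n_groups):
        members = np.where(assign == g)[0]
        if len(members) == 0:
            continue
        radius = np.linalg.norm(vectors[members] - pivots[g], axis=1).max()
        groups.append((pivots[g], float(radius), members))
    return groups

def nearest(vectors, groups, query):
    """Exact 1-NN over the index: a group is skipped when the triangle
    inequality proves no member can beat the best distance found so far."""
    best_id, best_d = -1, np.inf
    for pivot, radius, members in groups:
        if np.linalg.norm(query - pivot) - radius > best_d:
            continue  # whole group is provably too far
        d = np.linalg.norm(vectors[members] - query, axis=1)
        j = int(np.argmin(d))
        if d[j] < best_d:
            best_id, best_d = int(members[j]), float(d[j])
    return best_id, best_d
```

Because the pruning bound is exact, the search returns the same result as a brute-force scan while touching fewer groups; a real M-tree applies the same idea recursively.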
The beneficial effects of the present invention are as follows. The method of the present invention first builds an M-tree index over feature-word term-frequency vectors for target detection; this is robust and insensitive to target deformation, occlusion, and illumination changes. Multiple targets can be detected in a single pass without repeated detection, enabling efficient online multi-target detection. Online-learned stereoscopic target modeling then updates the target model, allowing the video target tracking to adapt to the complex multi-pose, multi-view changes of the target. Finally, a staged tracking method that combines short-range tracking with target re-detection achieves long-range tracking of targets and reduces tracking losses when the scene switches, the target disappears, or the camera moves quickly. In short, the method can be widely used in many fields such as network information security, image multi-target detection, and video multi-target tracking.
On the basis of the above technical scheme, the present invention can be further improved as follows.
Further, the features of the sample target pictures are extracted with the SURF feature algorithm, retaining positional information.
Further, the feature-word term-frequency vectors are obtained by clustering the SURF features to vectorize the large number of sample target pictures to be detected; the vectorized features are then expressed as an M-tree, in which each node records the vector of the SURF feature-position set. The steps are as follows:
1) choose a representative image library and compute the cluster centers of all feature points of this image library (the feature-word dictionary);
2) compute the Euclidean distance between each image feature point and the cluster centers described in step 1);
3) find the cluster center nearest to the feature point and add 1 to the frequency count of that center;
4) repeat steps 2) and 3) until the frequency counts of all feature points of the image have been computed, obtaining the frequency histogram of the image; vectorizing this frequency histogram yields the feature-word term-frequency vector of the image.
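Steps 2) to 4) above amount to a standard bag-of-visual-words histogram. A minimal sketch with numpy follows; the function name is introduced here, and the final normalization is an added assumption (the patent only specifies vectorizing the histogram):

```python
import numpy as np

def term_frequency_vector(descriptors, centers):
    """Assign each feature descriptor to its nearest cluster centre
    (feature word) and accumulate a frequency histogram (steps 2-4)."""
    tf = np.zeros(len(centers))
    for d in descriptors:
        dist = np.linalg.norm(centers - d, axis=1)  # Euclidean distances
        tf[int(np.argmin(dist))] += 1               # nearest word += 1
    if tf.sum() > 0:
        tf = tf / tf.sum()  # assumed normalisation so images of different
                            # sizes remain comparable
    return tf
```

The resulting vector is what the M-tree indexes for each sample target picture.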
Further, the candidate targets retrieved from the M-tree index are verified against the image to be detected through the affine relationship of SURF feature positions. The steps are as follows:
1) obtain image patches to be detected using a coarse-to-fine, scale-by-scale video search method;
2) compute the feature-word term-frequency vector of the image patch to be detected;
3) match the feature-word term-frequency vector of the image patch against the M-tree index;
4) verify the retrieved candidate targets through the affine relationship of SURF feature positions; a candidate that passes verification is judged to be an initial target; if verification succeeds, perform step 5), otherwise return to step 1);
5) take the SURF feature-position set of the detected matching target image as the initial target for optical flow tracking.
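The coarse-to-fine, scale-by-scale search of step 1) can be pictured as enumerating candidate windows from the largest scale downward. The sketch below is illustrative only: the generator name, parameter names, and default values are assumptions, not specified by the patent.

```python
def scan_windows(img_w, img_h, min_size=32, scale_step=1.5, stride_frac=0.5):
    """Yield (x, y, size) square search windows, largest scale first,
    so coarse candidates are examined before fine ones."""
    size = min(img_w, img_h)
    while size >= min_size:
        stride = max(1, int(size * stride_frac))
        for y in range(0, img_h - size + 1, stride):
            for x in range(0, img_w - size + 1, stride):
                yield x, y, size
        size = int(size / scale_step)
```

Each yielded window would be converted to a term-frequency vector and matched against the M-tree before the affine verification of step 4).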
Further, the short-range tracking is realized with optical flow. The steps are as follows:
1) take the SURF feature-position set of the initial target as the tracking object of the optical flow tracker, and track a candidate target;
2) judge through the affine relationship whether the candidate target is tracked correctly; if so, perform step 3);
3) remove the small number of lost SURF features as background, add new salient SURF feature positions, and form a new feature-word term-frequency vector according to the clustering result;
4) combine the term-frequency vector of the new target with the initial target to form a staged-tracking M-tree for the target; after de-duplication, each correctly tracked frame is added to the staged-tracking M-tree; repeat steps 2), 3), and 4) to realize short-range tracking.
Further, the targets are updated during tracking with the online-learned stereoscopic modeling method. The steps are as follows:
1) perceive changes of the tracked target through background removal, compare and de-duplicate, and remove expired inactive target nodes;
2) use two-stage matching (fast M-tree retrieval followed by accurate affine-relationship verification) to match the target model, forming a "stereoscopic" target model that contains the multi-pose, multi-view information of the target.
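The bookkeeping of step 1), removing expired nodes and de-duplicating newly observed features, can be sketched as follows. This is a hypothetical data layout, not the patent's: `model` maps a feature position to the frame index at which it was last seen, and `ttl` and `dedup_eps` are illustrative thresholds introduced here.

```python
def update_model(model, new_feats, frame, ttl=30, dedup_eps=2.0):
    """Drop nodes not re-observed within `ttl` frames (expired/inactive),
    then insert new feature positions, refreshing near-duplicates of
    surviving nodes instead of adding them twice."""
    model = {p: t for p, t in model.items() if frame - t <= ttl}
    for p in new_feats:
        near = [q for q in model
                if (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2 <= dedup_eps ** 2]
        if near:
            for q in near:
                model[q] = frame   # duplicate: refresh its timestamp
        else:
            model[p] = frame       # genuinely new feature position
    return model
```

In the patent's scheme the surviving nodes would then be re-indexed in the M-tree as part of the stereoscopic target model.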
Further, both the judgment of tracking success in short-range tracking and the position verification in target detection use a unified judgment criterion based on the affine relationship. The steps are as follows:
1) on the basis of fast M-tree retrieval, retrieve the M-tree leaf nodes;
2) match the corresponding feature points between the vectors of the SURF feature-position sets in the M-tree leaf nodes and the candidate detection or tracking target, simultaneously regressing the affine transformation coefficients;
3) exclude discrete noise points according to the affine transformation coefficients;
4) use a regression significance test to judge whether the corresponding feature-point pairs share similar affine transformation coefficients, thereby verifying the correctness of the target.
Further, the long-range tracking combining short-range tracking and target re-detection proceeds as follows:
1) use the initial target model obtained by M-tree retrieval and affine transformation verification as the initial target of optical flow tracking, and perform short-range tracking;
2) update the target model during tracking with the online-learned stereoscopic modeling method, establishing a multi-pose, multi-view target model;
3) when the target disappears or is tracked incorrectly, detect the target again, and resume short-range tracking once the target is detected.
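The overall control flow of steps 1) to 3) is a track/verify/re-detect loop. The sketch below shows only that skeleton: the four callables stand in for the patent's M-tree detection, optical-flow step, affine check, and online model update, and their names and signatures are assumptions made for illustration.

```python
def long_range_track(frames, detect, track_step, verify, update):
    """Short-range track while affine verification succeeds; fall back
    to re-detection when it fails (the combined long-range framework)."""
    target = None
    trajectory = []
    for frame in frames:
        if target is None:                 # (re-)detection phase
            target = detect(frame)
            trajectory.append(target)
            continue
        candidate = track_step(frame, target)
        if verify(candidate, target):      # affine-relation check
            target = update(target, candidate)
        else:                              # lost: re-detect in this frame
            target = detect(frame)
        trajectory.append(target)
    return trajectory
```

With stub callables the loop can be exercised end to end, which is also how the failure-recovery path (verification fails, detection takes over) can be unit-tested in isolation.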
Brief description of the drawings
Fig. 1 is the general flowchart of the video multi-target long-range tracking method of the present invention;
Fig. 2 is the flowchart of the initial target detection steps of the present invention;
Fig. 3 is the flowchart of the online-learned stereoscopic target modeling steps of the present invention;
Fig. 4 is a schematic diagram of the tracking framework of the present invention combining short-range tracking and long-range tracking.
Detailed description of the invention
The principles and features of the present invention are described below with reference to the accompanying drawings; the examples serve only to explain the present invention, not to limit its scope.
The present invention studies a tracking target model and tracking strategy for multiple targets in network video content. Fig. 1 shows the general flowchart of the video multi-target long-range tracking method; Fig. 2 shows the initial target detection steps; Fig. 3 shows the online-learned stereoscopic target modeling steps; Fig. 4 shows the tracking framework combining short-range tracking and long-range tracking. As shown in Figs. 1, 2, 3, and 4, the video multi-target long-range tracking method comprises the following steps.
The target sample library and the images to be detected are provided by the user.
Step 1: extract features from a large number of sample target pictures to be tracked; compute the feature-word term-frequency vectors of the sample target pictures with a clustering method; represent the set of vectorized features with an M-tree; perform M-tree retrieval on the targets to be matched using video search; and finally verify the retrieval results with an affine relationship to match the initial targets. The steps are as follows:
1) choose a representative image library and compute the cluster centers of all feature points of this image library, i.e. the feature-word dictionary;
2) compute the Euclidean distance between each image feature point and the cluster centers described in step 1);
3) find the cluster center nearest to the feature point and add 1 to the frequency count of that center;
4) repeat steps 2) and 3) until the frequency counts of all feature points of the image have been computed, obtaining the frequency histogram of the image; vectorizing this frequency histogram yields the feature-word term-frequency vector of the image;
5) represent the set of feature-word term-frequency vectors with an M-tree, in which each leaf node represents one target sample and records the SURF feature positions in the target picture;
6) perform M-tree retrieval on the regions to be matched using a coarse-to-fine, scale-by-scale video search method, retrieving candidate targets;
7) verify the retrieved candidate targets through the affine relationship of SURF feature positions; a candidate that passes verification is judged to be an initial target; if verification succeeds, perform step 8), otherwise return to step 6);
8) take the SURF feature-position set of the detected matching target image as the initial target for optical flow tracking.
Step 2: track the detected initial targets with a short-range tracker based on optical flow. The steps are as follows:
1) take the SURF feature-position set of the initial target as the tracking object of the optical flow tracker, and track a candidate target;
2) judge through the affine relationship whether the candidate target is tracked correctly; if so, perform step 3), otherwise perform step 5);
3) remove the small number of lost SURF features as background and add new salient SURF feature positions, forming a new feature-word term-frequency vector according to the clustering result;
4) combine the term-frequency vector of the new target with the initial target to form a staged-tracking M-tree for the target; after de-duplication, each correctly tracked frame is added to the staged-tracking M-tree;
5) when tracking fails, search the video: perform staged-tracking M-tree retrieval on the regions to be matched, verify the retrieved candidate targets again through the affine relationship of SURF feature positions, judge a candidate that passes verification to be the reappeared tracking target, and re-execute step 1).
Step 3: update the targets with online-learned stereoscopic target modeling, establishing a multi-pose, multi-view target model. The steps of the online-learned stereoscopic target modeling are as follows:
1) filter the background, i.e. remove the small number of lost SURF features as background using the affine transformation criterion;
2) insert new feature points, i.e. insert new SURF feature positions;
3) compare and de-duplicate the newly inserted SURF feature positions, identify expired inactive target nodes, and remove the inactive nodes;
4) update the model, i.e. compute the feature-word term-frequency vector of the new target and form M-tree nodes together with the initial target, forming a "stereoscopic" target model that contains the multi-pose, multi-view information of the target.
Step 4: combine short-range tracking with re-detection of targets to achieve real-time long-range tracking:
1) take the detected target model as the initial target of optical flow tracking;
2) judge, based on the affine relationship, whether the candidate target is tracked correctly; if tracking is accurate, perform step 3); if tracking fails, perform step 4);
3) if tracking succeeds, continuously update the target model during tracking with the online-learned stereoscopic modeling method, establish the multi-pose, multi-view stereoscopic target model, and continue short-range tracking of the target with the optical flow tracker;
4) if short-range optical flow tracking fails, use the fast M-tree search method again to retrieve a new candidate initial target;
5) verify the correctness of the candidate target with the affine transformation coefficients, and take the correctly re-detected target as the new target model for optical flow tracking; repeat steps 1), 2), and 3). This realizes the tracking framework combining short-range tracking and long-range tracking.
The foregoing describes only preferred embodiments of the present invention and is not intended to limit it; any modification, equivalent substitution, or improvement made within the spirit and principles of the present invention shall be included within its scope of protection.
Claims (9)
1. A video multi-target long-range tracking method, characterized in that the method comprises the following steps:
Step 1: extract features from a large number of sample target pictures to be tracked; compute the feature-word term-frequency vectors of the sample target pictures with a clustering method; represent the set of vectorized features with an M-tree; perform M-tree retrieval on the targets to be matched using video search; and finally verify the retrieval results with an affine relationship to match the initial targets;
Step 2: track the detected initial targets with a short-range tracker based on optical flow;
Step 3: update the targets with online-learned stereoscopic target modeling, establishing a multi-pose, multi-view target model;
Step 4: combine short-range tracking with re-detection of targets to achieve real-time long-range tracking.
2. The video multi-target long-range tracking method, characterized in that the features of the sample target pictures are extracted with the SURF feature algorithm, retaining positional information.
3. The video multi-target long-range tracking method, characterized in that the feature similarity measurement comprises the following steps:
1) perform SURF feature extraction with positional information on the large number of sample target pictures to be tracked;
2) compute the cluster centers of all feature points of the image library;
3) perform term-frequency statistics on the classified features of each sample target picture according to the clustering result, thereby vectorizing the sample target picture;
4) express the set of vectorized sample target pictures as an M-tree, in which each leaf node represents one sample target picture and records the vector of the SURF feature-position set of that picture.
4. The video multi-target long-range tracking method, characterized in that the video search performs M-tree retrieval on the regions to be matched using a coarse-to-fine, scale-by-scale search method, retrieving candidate targets.
5. The video multi-target long-range tracking method, characterized in that the multi-target matching is verified through the affine relationship of SURF feature positions; a candidate that passes verification is judged to be an initial target.
6. The video multi-target long-range tracking method, characterized in that the optical flow tracking of the detected initial target comprises the following steps:
1) take the SURF feature-position set of the initial target as the tracking object of the optical flow tracker, and track a candidate target;
2) judge through the affine relationship whether the candidate target is tracked correctly;
3) if tracking is correct, remove the small number of lost SURF features as background and add new salient SURF feature positions, then form a new feature-word term-frequency vector according to the clustering result;
4) combine the term-frequency vector of the new target with the initial target to form a staged-tracking M-tree for the target; after de-duplication, each correctly tracked frame is added to the staged-tracking M-tree; repeat steps 2), 3), and 4) to realize short-range optical flow tracking.
7. The video multi-target long-range tracking method, characterized in that the online-learned stereoscopic target modeling comprises the following steps:
1) perceive changes of the tracked target through background removal, compare and de-duplicate, and remove expired inactive target nodes;
2) use two-stage matching (fast M-tree retrieval followed by accurate affine-transformation-relationship verification) to match the target model, forming a "stereoscopic" target model that contains the multi-pose, multi-view information of the target.
8. The video multi-target long-range tracking method according to claim 5 or 7, characterized in that the judgment criterion based on the affine transformation relationship is as follows: on the basis of M-tree matching, match the corresponding feature points between the vectors of the SURF feature-position sets of the M-tree leaf nodes and the candidate detection or tracking target, regressing the affine transformation coefficients; after discrete noise points are excluded, perform a regression significance test to judge whether the corresponding feature-point pairs share similar affine transformation coefficients.
9. The video multi-target long-range tracking method, characterized in that the staged tracking combining short-range tracking and target re-detection first performs short-range tracking of the stereoscopic-model target and updates the target model during tracking; when the target disappears or is tracked incorrectly, the target is detected again, and short-range tracking continues after the target is detected.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201310292921.5A CN103413295B (en) | 2013-07-12 | 2013-07-12 | Video multi-target long-range tracking method |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201310292921.5A CN103413295B (en) | 2013-07-12 | 2013-07-12 | Video multi-target long-range tracking method |
Publications (2)
Publication Number | Publication Date |
---|---|
CN103413295A CN103413295A (en) | 2013-11-27 |
CN103413295B true CN103413295B (en) | 2016-12-28 |
Family
ID=49606300
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201310292921.5A Expired - Fee Related CN103413295B (en) | 2013-07-12 | 2013-07-12 | Video multi-target long-range tracking method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN103413295B (en) |
Families Citing this family (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106875428A (en) * | 2017-01-19 | 2017-06-20 | 博康智能信息技术有限公司 | A kind of multi-object tracking method and device |
CN107194310A (en) * | 2017-04-01 | 2017-09-22 | 国家计算机网络与信息安全管理中心 | The rigid-object tracking matched based on scene change classifications and online local feature |
TWI631516B (en) * | 2017-10-16 | 2018-08-01 | 緯創資通股份有限公司 | Target tracking method and system adaptable to multi-target tracking |
CN108921872B (en) * | 2018-05-15 | 2022-02-01 | 南京理工大学 | Robust visual target tracking method suitable for long-range tracking |
CN108932509A (en) * | 2018-08-16 | 2018-12-04 | 新智数字科技有限公司 | A kind of across scene objects search methods and device based on video tracking |
CN111199179B (en) * | 2018-11-20 | 2023-12-29 | 深圳市优必选科技有限公司 | Target object tracking method, terminal equipment and medium |
CN109615641B (en) * | 2018-11-23 | 2022-11-29 | 中山大学 | Multi-target pedestrian tracking system and tracking method based on KCF algorithm |
EP3891650A1 (en) | 2018-12-03 | 2021-10-13 | Telefonaktiebolaget LM Ericsson (publ) | Distributed computation for real-time object detection and tracking |
CN110264497B (en) * | 2019-06-11 | 2021-09-17 | 浙江大华技术股份有限公司 | Method and device for determining tracking duration, storage medium and electronic device |
CN112137591B (en) * | 2020-10-12 | 2021-07-23 | 平安科技(深圳)有限公司 | Target object position detection method, device, equipment and medium based on video stream |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6826292B1 (en) * | 2000-06-23 | 2004-11-30 | Sarnoff Corporation | Method and apparatus for tracking moving objects in a sequence of two-dimensional images using a dynamic layered representation |
CN102799900A (en) * | 2012-07-04 | 2012-11-28 | 西南交通大学 | Target tracking method based on supporting online clustering in detection |
CN102855473A (en) * | 2012-08-21 | 2013-01-02 | 中国科学院信息工程研究所 | Image multi-target detecting method based on similarity measurement |
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6826292B1 (en) * | 2000-06-23 | 2004-11-30 | Sarnoff Corporation | Method and apparatus for tracking moving objects in a sequence of two-dimensional images using a dynamic layered representation |
CN102799900A (en) * | 2012-07-04 | 2012-11-28 | 西南交通大学 | Target tracking method based on supporting online clustering in detection |
CN102855473A (en) * | 2012-08-21 | 2013-01-02 | 中国科学院信息工程研究所 | Image multi-target detecting method based on similarity measurement |
Non-Patent Citations (2)
Title |
---|
A moving target tracking method based on feature optical flow detection; Li Jinzong, Yuan Lei, Li Dongdong; Systems Engineering and Electronics; 2004-03-31; Vol. 27, No. 3; pp. 422-426 *
A survey of video tracking algorithms; Yan Qingsen, Li Linsheng, Xu Xiaofeng, Wang Can; Computer Science; 2013-06-30; Vol. 40, No. 6A; pp. 205-207 *
Also Published As
Publication number | Publication date |
---|---|
CN103413295A (en) | 2013-11-27 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN103413295B (en) | Video multi-target long-range tracking method | |
Wang et al. | Towards more flexible and accurate object tracking with natural language: Algorithms and benchmark | |
Yin et al. | 3d lidar-based global localization using siamese neural network | |
Zhou et al. | Deep alignment network based multi-person tracking with occlusion and motion reasoning | |
Lynen et al. | Placeless place-recognition | |
Xie et al. | Real-time illegal parking detection system based on deep learning | |
Huang et al. | Person search in videos with one portrait through visual and temporal links | |
Cieslewski et al. | Point cloud descriptors for place recognition using sparse visual information | |
Liu et al. | Indexing visual features: Real-time loop closure detection using a tree structure | |
CN104615986B (en) | The method that pedestrian detection is carried out to the video image of scene changes using multi-detector | |
Ardeshir et al. | GIS-assisted object detection and geospatial localization | |
Lee et al. | Place recognition using straight lines for vision-based SLAM | |
CN108596010B (en) | Implementation method of pedestrian re-identification system | |
Yin et al. | A general feature-based map matching framework with trajectory simplification | |
CN102804208A (en) | Automatically mining person models of celebrities for visual search applications | |
JP2012033022A (en) | Change area detection device and method in space | |
Dymczyk et al. | Will it last? Learning stable features for long-term visual localization | |
Vishal et al. | Accurate localization by fusing images and GPS signals | |
Mishkin et al. | Place recognition with WxBS retrieval | |
CN110969648A (en) | 3D target tracking method and system based on point cloud sequence data | |
CN110533661A (en) | Adaptive real-time closed-loop detection method based on characteristics of image cascade | |
CN110084830A (en) | A kind of detection of video frequency motion target and tracking | |
Zhang et al. | Topological spatial verification for instance search | |
CN109886065A (en) | A kind of online increment type winding detection method | |
CN109034237A (en) | Winding detection method based on convolutional Neural metanetwork road sign and sequence search |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
C14 | Grant of patent or utility model | ||
GR01 | Patent grant | ||
CF01 | Termination of patent right due to non-payment of annual fee | ||
CF01 | Termination of patent right due to non-payment of annual fee |
Granted publication date: 2016-12-28; Termination date: 2017-07-12