CN104200487A - Target tracking method based on ORB characteristic point matching - Google Patents

Target tracking method based on ORB characteristic point matching

Info

Publication number
CN104200487A
CN104200487A (application CN201410376616.9A)
Authority
CN
China
Prior art keywords
target object
ORB
model
feature point
point set
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201410376616.9A
Other languages
Chinese (zh)
Inventor
王若梅
韩冠亚
陈湘萍
谢雪峰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
GUANGZHOU ZHONGDA DIGITAL HOME ENGINEERING RESEARCH CENTER Co Ltd
Sun Yat-sen University
Original Assignee
GUANGZHOU ZHONGDA DIGITAL HOME ENGINEERING RESEARCH CENTER Co Ltd
National Sun Yat Sen University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Zhongda Digital Home Engineering Research Center Co., Ltd. and Sun Yat-sen University
Priority to CN201410376616.9A priority Critical patent/CN104200487A/en
Publication of CN104200487A publication Critical patent/CN104200487A/en
Pending legal-status Critical Current


Abstract

The invention relates to a target tracking method based on ORB (Oriented FAST and Rotated BRIEF) feature point matching. The method represents the target object with a set of ORB feature points and, during tracking, iteratively updates this set by continuously adding new feature points and pruning outliers. Between two consecutive frames, a nearest-neighbor search algorithm finds matching feature point pairs, from which the motion transformation model of the tracked target is computed by a method combining random sample consensus (RANSAC) with multiple transformation models. The method is robust and efficient when processing scenes with complex appearance change, cluttered background, illumination variation, and occlusion of the target object.

Description

A target tracking method based on ORB feature point matching
Technical field
The present invention relates to the field of computer vision tracking, and specifically to a target tracking method based on ORB feature point matching.
Background technology
Target tracking is an important research subject in computer vision. Driven by the rapid development of high-performance computers, the availability of inexpensive cameras, and the growing demand for automated video analysis, visual target tracking algorithms have attracted great interest. In recent years, visual tracking has been applied widely in many fields, including gesture recognition, pedestrian tracking, intelligent surveillance, human-computer interaction, intelligent traffic detection, and vehicle navigation. With the development of the digital home industry and the maturing of its industrial chain, video tracking is gradually being applied to smart home monitoring; on the basis of a multi-service application integration platform for the digital home, it provides users with characteristic interactive services such as household and community security.
Although researchers at home and abroad have proposed many target tracking methods, several major problems remain. First, appearance change of the target object: in real scenes, the appearance of the target changes constantly under the influence of illumination variation, pose change, camera shake, and change of target size. Second, complex background interference: in scenes such as traffic detection, vehicle navigation, and pedestrian tracking, tracking is severely challenged by background clutter; real environments contain much noise, the background is disordered, and the color of the target object is easily confused with that of the background. Third, occlusion: in real video, partial or even complete occlusion of the target object is very common, and examining the robustness of a tracking method requires testing its ability to handle occlusion. Fourth, real-time performance: beyond robustness, meeting the demands of real-time processing remains a difficulty in the target tracking domain.
In the course of research and practice, the inventors found that feature point descriptors are robust to noise, detection error, and geometric and photometric deformation. The present invention therefore proposes a target tracking method based on ORB feature point matching to achieve efficient target tracking.
Summary of the invention
The invention provides a target tracking method based on ORB feature point matching that is robust in scenes with complex appearance change, background clutter, illumination variation, and occlusion of the target object, and that is efficient.
The invention provides a target tracking method based on ORB feature point matching, comprising the following steps:
(1) first extract the ORB (oFAST and rBRIEF) feature point set from the input original image frame;
(2) match it in descriptor space against the ORB feature point set of the target object at the previous frame to produce the matched feature set of the current frame;
(3) according to the position of the target in the previous frame, compute the motion transformation of the tracked target using a method that combines random sample consensus with multiple transformation models;
(4) finally, update the set by continuously adding new feature points and pruning outliers, producing the target feature set of the current frame.
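As a rough sketch, steps (1) to (4) can be organized as a per-frame loop. The helper names below (`extract_orb`, `match_features`, `estimate_motion`, `update_feature_set`) are hypothetical placeholders for the operations described above, not the patent's implementation; the trivial stand-ins at the bottom only illustrate the data flow.

```python
# Hypothetical sketch of the per-frame tracking loop of steps (1)-(4).
# All helper names are placeholders, not part of the patent.

def track(frames, initial_features,
          extract_orb, match_features, estimate_motion, update_feature_set):
    """Run the four-step loop over a sequence of frames."""
    target_features = initial_features
    models = []
    for frame in frames:
        candidates = extract_orb(frame)                        # step (1)
        matches = match_features(target_features, candidates)  # step (2)
        model = estimate_motion(matches)                       # step (3)
        target_features = update_feature_set(                  # step (4)
            target_features, candidates, matches, model)
        models.append(model)
    return models, target_features

if __name__ == "__main__":
    # Trivial stand-ins so the sketch runs end to end.
    models, feats = track(
        frames=[0, 1, 2],
        initial_features=["f0"],
        extract_orb=lambda frame: [f"f{frame}"],
        match_features=lambda tgt, cand: list(zip(tgt, cand)),
        estimate_motion=lambda matches: ("identity", len(matches)),
        update_feature_set=lambda tgt, cand, m, model: cand,
    )
    print(len(models))  # one estimated model per frame
```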
The ORB feature point descriptor is represented by a binary string. Each bit of the string is generated by comparing a randomly chosen pair of pixels inside an image patch of size S × S around the feature point; here a binary string of length n = 256 bits is used. More precisely, a test on an image patch P of size S × S is defined by a binary function, as shown in formula (1):
$$\tau(p;x,y)=\begin{cases}1, & \text{if } p(x)<p(y)\\ 0, & \text{otherwise}\end{cases}\qquad(1)$$
where p(x) and p(y) are the gray values of patch P at pixels x and y. By choosing a specific set of ⟨x, y⟩ point pairs as tests, an n-bit string is produced as the BRIEF descriptor, defined as in formula (2):
$$B_n(p)=\sum_{1\le i\le n}2^{i-1}\,\tau(p;x_i,y_i).\qquad(2)$$
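A minimal sketch of the binary test τ of formula (1) and the bit string of formula (2), using NumPy. The fixed random point pairs below stand in for ORB's sampling pattern, which the patent does not reproduce; patch size and seed are illustrative.

```python
import numpy as np

def brief_descriptor(patch, pairs):
    """Compute the n-bit BRIEF string of formula (2).

    patch : 2-D array of gray values (the S x S block P).
    pairs : sequence of ((r1, c1), (r2, c2)) pixel-pair tests.
    Returns a list of bits; bit i is tau(p; x_i, y_i) of formula (1).
    """
    bits = []
    for (r1, c1), (r2, c2) in pairs:
        # formula (1): tau = 1 iff p(x) < p(y)
        bits.append(1 if patch[r1, c1] < patch[r2, c2] else 0)
    return bits

rng = np.random.default_rng(0)
S, n = 31, 256
patch = rng.integers(0, 256, size=(S, S))
# random point pairs inside the S x S patch (illustrative, not ORB's pattern)
pairs = [tuple(map(tuple, rng.integers(0, S, size=(2, 2)))) for _ in range(n)]

desc = brief_descriptor(patch, pairs)
assert len(desc) == n and set(desc) <= {0, 1}
```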
Given two ORB feature point descriptors D<sub>i</sub> and D<sub>j</sub>, each of length n bits, their similarity ρ(D<sub>i</sub>, D<sub>j</sub>) is represented by the Hamming distance, which computers evaluate very efficiently. The similarity measure is defined as follows:
$$\rho(D_i,D_j)=\sum_{k=1}^{n}\delta\left[D_i(k)-D_j(k)\right],\qquad(3)$$
where D<sub>i</sub>(k) and D<sub>j</sub>(k) are the k-th bits of the ORB descriptors D<sub>i</sub> and D<sub>j</sub>, and the function δ[x] equals 1 when x equals 0, and 0 otherwise.
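A sketch of the similarity measure of formula (3): since δ[x] = 1 when x = 0, ρ counts the positions where the two bit strings agree, and the Hamming distance is then n − ρ. The toy bit strings are illustrative.

```python
def similarity(d_i, d_j):
    """rho(D_i, D_j) of formula (3): number of positions where the bits agree."""
    assert len(d_i) == len(d_j)
    return sum(1 if a - b == 0 else 0 for a, b in zip(d_i, d_j))

def hamming(d_i, d_j):
    """Hamming distance = number of differing bits = n - rho."""
    return len(d_i) - similarity(d_i, d_j)

d1 = [1, 0, 1, 1, 0]
d2 = [1, 1, 1, 0, 0]
print(similarity(d1, d2))  # 3 equal positions (indices 0, 2, 4)
print(hamming(d1, d2))     # 2 differing positions
```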
After the new ORB feature point set S<sub>t</sub> has been extracted from the current frame t, we attempt to find the subset of S<sub>t</sub> whose elements match the feature point set of the target object in the previous frame t−1 with a high degree.
According to the feature point set of the target object in the previous frame t−1, a nearest-neighbor search algorithm over the whole descriptor space is used to find the best matches.
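A brute-force sketch of that nearest-neighbor search: for each descriptor of the previous frame's target set, find the descriptor in S<sub>t</sub> with the smallest Hamming distance. The toy bit-list descriptors are illustrative; a distance or ratio threshold, which the text does not specify, is left out.

```python
def hamming(a, b):
    """Number of differing bits between two equal-length bit strings."""
    return sum(x != y for x, y in zip(a, b))

def nearest_neighbor_matches(prev_descs, curr_descs):
    """For each previous-frame descriptor, return (prev_idx, best_curr_idx, dist)."""
    matches = []
    for i, d in enumerate(prev_descs):
        dists = [hamming(d, c) for c in curr_descs]
        j = min(range(len(dists)), key=dists.__getitem__)
        matches.append((i, j, dists[j]))
    return matches

prev = [[0, 0, 1, 1], [1, 1, 0, 0]]
curr = [[1, 1, 0, 1], [0, 0, 1, 0]]
print(nearest_neighbor_matches(prev, curr))  # [(0, 1, 1), (1, 0, 1)]
```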
To evaluate whether a candidate transformation is suitable for describing the motion of the target object, a distance metric and a conditional probability are specified by comparison with the position of the target object in the previous frame. A model M is scored by three factors: N<sub>c</sub>(M), the number of consensus feature points; Pr(M), the probability of the candidate model; and Cp(M), the complexity of the model M. These three factors are described in detail below.
Given a specific model M, the minimum number of matched feature point pairs required by M is chosen at random from the feature point set, and the unknown parameters of the transformation model M can then be computed with comparative ease. All data in the feature point set are then tested; the points that fit the transformation model M well form a consensus point set C(M), whose size is N<sub>c</sub>(M). The complexity Cp(M) is set per model.
To assess the likelihood Pr(M) of a given candidate model M, a distance metric is first defined according to the position of the target object in the previous frame, as in formula (6):
$$\mathrm{dist}(M\mid\gamma)=\int_0^1\left\|\gamma(u)-M(\gamma(u))\right\|\,du,\qquad(6)$$
where γ(u): [0,1] → R² is the normalized boundary of the target object in the previous frame t−1, and M(γ(u)) is the target object boundary under model M. If the boundary of the tracked target object is discrete, it can be represented by a polygon, and the distance metric becomes:
$$\mathrm{dist}(M\mid Q_{t-1})=\sum_{k=1}^{n}\left\|q_{t-1}^{k}-M(q_{t-1}^{k})\right\|.\qquad(7)$$
Next, the probability Pr(M) of a particular motion transformation model M is defined as in formula (8):
$$\Pr(M)=e^{-\lambda\,\mathrm{dist}(M\mid Q_{t-1})},\qquad(8)$$
where λ is a constant parameter controlling the decay rate.
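A sketch of formulas (7) and (8) for a polygonal boundary: the distance sums the displacement of each polygon vertex under the model M, and Pr(M) decays exponentially with that distance. The transform, polygon, and λ below are illustrative.

```python
import math

def polygon_distance(model, polygon):
    """Formula (7): sum of ||q_k - M(q_k)|| over the boundary polygon vertices."""
    total = 0.0
    for (x, y) in polygon:
        mx, my = model(x, y)
        total += math.hypot(x - mx, y - my)
    return total

def model_probability(model, polygon, lam):
    """Formula (8): Pr(M) = exp(-lambda * dist(M | Q_{t-1}))."""
    return math.exp(-lam * polygon_distance(model, polygon))

square = [(0, 0), (1, 0), (1, 1), (0, 1)]
identity = lambda x, y: (x, y)
shift = lambda x, y: (x + 1.0, y)  # translation by (1, 0)

print(polygon_distance(identity, square))        # 0.0
print(model_probability(identity, square, 0.5))  # 1.0
print(polygon_distance(shift, square))           # 4.0
```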
The ORB feature point set is iteratively updated by dynamically adding new feature points and pruning outliers. A simple scheme maintains an energy value for each feature point in the set. After feature point matching and transformation model estimation, the energy value V<sub>i</sub><sup>t</sup> of the i-th feature point of the target object's feature point set at frame t is updated as defined in formula (9):
$$V_i^{t}=\begin{cases}V_i^{t-1}+\alpha, & \text{matched, consensus}\\ V_i^{t-1}-\beta, & \text{matched, outlier}\\ V_i^{t-1}, & \text{not matched}\end{cases}\qquad(9)$$
Formula (9) states that if a feature point, under the estimated transformation model, matches a point in the target object feature point set of the previous frame t−1, the point is a consensus point and its energy value is its value at frame t−1 plus an increment α. Conversely, if a matched point does not fit the estimated transformation model, it is an outlier and its energy value is decreased by β. The energy of unmatched points remains unchanged.
The feature points can be sorted by their energy value. Only the M feature points with the highest energy are retained; the others, considered unstable for tracking the target object because of their low energy, are discarded. When new feature points are added to the target object's feature point set, their initial energy is set to the median energy of the set. At the very start of tracking, all feature points in the first frame have the same energy value.
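A sketch of the energy bookkeeping of formula (9) and the subsequent pruning: consensus points gain α, matched outliers lose β, unmatched points keep their energy; the set then keeps the M highest-energy points, and new points enter with the median energy. The values of α, β, and M are illustrative.

```python
import statistics

ALPHA, BETA = 1.0, 2.0  # illustrative increment/decrement

def update_energy(energy, status):
    """Formula (9): status is 'consensus', 'outlier', or 'unmatched'."""
    if status == "consensus":
        return energy + ALPHA
    if status == "outlier":
        return energy - BETA
    return energy

def prune_and_add(points, m, n_new):
    """Keep the m highest-energy points, then add n_new points at the median energy."""
    kept = sorted(points, key=lambda p: p["energy"], reverse=True)[:m]
    median = statistics.median(p["energy"] for p in kept)
    kept += [{"id": f"new{i}", "energy": median} for i in range(n_new)]
    return kept

points = [
    {"id": "a", "energy": update_energy(3.0, "consensus")},  # 4.0
    {"id": "b", "energy": update_energy(3.0, "outlier")},    # 1.0
    {"id": "c", "energy": update_energy(3.0, "unmatched")},  # 3.0
]
survivors = prune_and_add(points, m=2, n_new=1)
print([p["id"] for p in survivors])  # ['a', 'c', 'new0']
```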
As can be seen from the above technical scheme, the embodiment of the present invention represents the image features of the target object with an ORB feature point set and, during tracking, iteratively updates this set by continuously adding new feature points and pruning outliers. Between two consecutive frames, a nearest-neighbor search finds the matched feature point pairs, from which the motion transformation model of the tracked target is computed by a method combining random sample consensus with multiple transformation models. The method can therefore handle target tracking in scenes with complex appearance change, background clutter, illumination variation, and occlusion of the target object.
Accompanying drawing explanation
To illustrate the embodiments of the present invention or the technical schemes of the prior art more clearly, the accompanying drawings used in the description of the embodiments are briefly introduced below. The drawings described below are only some embodiments of the present invention; those of ordinary skill in the art can obtain other drawings from them without creative effort.
Fig. 1 is the flow chart of the target tracking method based on ORB feature point matching in the embodiment of the present invention;
Fig. 2 shows the tracking results under appearance change and occlusion of the target object in the embodiment of the present invention;
Fig. 3 is the ORB feature point analysis of a target occlusion video sequence in the embodiment of the present invention.
Embodiment
The technical scheme of the embodiments of the present invention is described clearly and completely below with reference to the accompanying drawings. The described embodiments are only a part of the embodiments of the present invention, not all of them; all other embodiments obtained by those of ordinary skill in the art without creative effort fall within the protection scope of the present invention.
The flow of the method is shown in Fig. 1. I<sub>t</sub> denotes the original image frame input at time t. The method first extracts the ORB feature point set S<sub>t</sub>, then matches it in descriptor space against the ORB feature point set of the target object at the previous frame t−1 to produce the matched feature set of the current frame. According to the position of the target in the previous frame, the motion transformation model T<sub>t</sub> of the tracked target is computed using a method combining random sample consensus with multiple transformation models. Finally, the set is updated by continuously adding new feature points and pruning outliers, producing the target feature set of the current frame t.
After ORB feature point matching, the transformation model that best fits the moving target object can be estimated from the matched feature point pairs between two consecutive frames. Random sample consensus (RANSAC) is an iterative algorithm that estimates the parameters of a mathematical model from a set of observations containing outliers. In prior work, SIFT or SURF feature points of the target are first extracted and matched, and the matched point pairs are then used to estimate the motion model of the target object between two consecutive frames. RANSAC iteratively produces a model that fits sufficiently many feature point pairs.
However, RANSAC requires a model to be assumed in advance, together with a minimal number of feature point pairs, assumptions that are easily violated in target tracking. We therefore use the RAMOSAC algorithm for model estimation; RAMOSAC combines RANSAC with multiple motion models of different complexities. The multiple transformation model consists of four different transformation models, whose composition is described in detail below.
(1) multiple transform model representation
To better describe the motion trajectory of the target object, the four transformation models listed in Table 1 are used: the translation model M<sub>t</sub> with two unknown parameters; the similarity transformation model M<sub>s</sub> with four unknown parameters (rotation, scaling, and translation); the affine transformation model M<sub>a</sub> with six unknown parameters; and the projective transformation model M<sub>p</sub> with eight unknown parameters (a 3 × 3 matrix). The minimum numbers of feature point pairs required to compute the parameters of these models are N<sub>min</sub> = 1, 2, 3, and 4, respectively.
Table 1 Transformation motion models

Model                     Unknown parameters     N_min
Translation M_t           2                      1
Similarity M_s            4                      2
Affine M_a                6                      3
Projective M_p            8 (3 × 3 matrix)       4
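A sketch of how a model's minimal sample fixes its parameters: the similarity model M<sub>s</sub> (four parameters) is determined by N<sub>min</sub> = 2 point pairs. The closed form below treats 2-D points as complex numbers, a standard trick; it is illustrative, not the patent's solver.

```python
def fit_similarity(p1, p2, q1, q2):
    """Fit q = a * p + b over complex numbers (a encodes rotation+scale,
    b the translation) from two correspondences p1->q1, p2->q2 ((x, y) tuples)."""
    zp1, zp2 = complex(*p1), complex(*p2)
    zq1, zq2 = complex(*q1), complex(*q2)
    a = (zq1 - zq2) / (zp1 - zp2)   # rotation and scale
    b = zq1 - a * zp1               # translation
    return a, b

def apply_similarity(a, b, p):
    """Map a point (x, y) through the fitted similarity transform."""
    z = a * complex(*p) + b
    return (z.real, z.imag)

# Example: a 90-degree rotation plus a translation by (1, 0),
# recovered from just two point pairs.
a, b = fit_similarity((0, 0), (1, 0), (1, 0), (1, 1))
print(apply_similarity(a, b, (0, 1)))  # a third point follows the same motion
```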
(2) estimation of transformation model
To evaluate whether a candidate transformation T, estimated from a model M ∈ {M<sub>t</sub>, M<sub>s</sub>, M<sub>a</sub>, M<sub>p</sub>}, is suitable for describing the motion of the target object, a distance metric and a conditional probability are specified by comparison with the position of the target object in the previous frame. A model M is scored by three factors: N<sub>c</sub>(M), the number of consensus feature points; Pr(M), the probability of the candidate model; and Cp(M), the complexity of the model M. These three factors are described in detail below.
Given a specific model M, the minimum number of matched feature point pairs required by M is chosen at random from the feature point set, and the unknown parameters of the transformation model M can then be computed with comparative ease. All data in the feature point set are then tested; the points that fit the transformation model M well form a consensus point set C(M), whose size is N<sub>c</sub>(M). The complexity Cp(M) is set per model.
To assess the likelihood Pr(M) of a given candidate model M, a distance metric is first defined according to the position of the target object in the previous frame:
$$\mathrm{dist}(M\mid\gamma)=\int_0^1\left\|\gamma(u)-M(\gamma(u))\right\|\,du,$$
where γ(u): [0,1] → R² is the normalized boundary of the target object in the previous frame t−1, and M(γ(u)) is the target object boundary under model M. If the boundary of the tracked target object is discrete, it can be represented by a polygon, and the distance metric becomes:
$$\mathrm{dist}(M\mid Q_{t-1})=\sum_{k=1}^{n}\left\|q_{t-1}^{k}-M(q_{t-1}^{k})\right\|.$$
Next, the probability Pr(M) of a particular motion transformation model M is defined as:
$$\Pr(M)=e^{-\lambda\,\mathrm{dist}(M\mid Q_{t-1})},$$
where λ is a constant parameter controlling the decay rate.
Similar to RANSAC, the RAMOSAC algorithm executes the following steps:
1) randomly choose a model M from the transformation models listed in Table 1;
2) randomly choose from the ORB feature point set a subset S<sub>r</sub> of the minimum size required by M;
3) compute the parameters of model M from S<sub>r</sub>, produce the consensus feature point set C(M), and compute Score(M);
4) iterate steps 1) to 3) a fixed number of times and select the model with the maximal Score(M);
5) recompute the model parameters by least squares from the consensus point set of the selected model.
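A toy sketch of the five steps above, restricted to the translation model M<sub>t</sub> (N<sub>min</sub> = 1) for brevity. The exact Score(M) formula appears only as an image in the source, so the score used below (N<sub>c</sub>(M) · Pr(M) / Cp(M), with Cp set to the parameter count) is an assumption, as are the inlier threshold and iteration count.

```python
import math, random

def fit_translation(pairs):
    """Step 3 for M_t: one point pair fixes the two translation parameters."""
    (px, py), (qx, qy) = pairs[0]
    return ("translation", qx - px, qy - py)

def apply_model(model, p):
    _, dx, dy = model
    return (p[0] + dx, p[1] + dy)

def consensus(model, pairs, tol=1.0):
    """Pairs that the model fits to within tol form the consensus set C(M)."""
    return [(p, q) for p, q in pairs
            if math.dist(apply_model(model, p), q) < tol]

def ramosac(pairs, polygon, lam=0.1, iters=50, seed=0):
    rng = random.Random(seed)
    best, best_score = None, -1.0
    for _ in range(iters):
        sample = rng.sample(pairs, 1)        # minimal sample, N_min = 1
        model = fit_translation(sample)
        inliers = consensus(model, pairs)
        # Pr(M) from formula (8) via the boundary-polygon distance of (7)
        dist = sum(math.dist(v, apply_model(model, v)) for v in polygon)
        score = len(inliers) * math.exp(-lam * dist) / 2.0  # Cp = 2 (assumed)
        if score > best_score:
            best, best_score = model, score
    return best

def refit_translation(inliers):
    """Step 5: least-squares refit = mean displacement over the consensus set."""
    dx = sum(q[0] - p[0] for p, q in inliers) / len(inliers)
    dy = sum(q[1] - p[1] for p, q in inliers) / len(inliers)
    return ("translation", dx, dy)

# Synthetic data: true motion is a shift by (2, 3), plus one gross outlier.
pairs = [((x, y), (x + 2, y + 3)) for x in range(3) for y in range(2)]
pairs.append(((0, 0), (50, 50)))
model = ramosac(pairs, polygon=[(0, 0), (1, 0), (1, 1)])
inliers = consensus(model, pairs)
print(refit_translation(inliers))  # ('translation', 2.0, 3.0)
```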
Fig. 2 shows the tracking results of the method under appearance change and occlusion of the target object. In scenes with target appearance deformation, partial or even complete occlusion, and interference from similar objects, the method is robust, tracks the target object accurately, and obtains good tracking results. Fig. 3 shows the ORB feature point analysis of a target occlusion video sequence, which quantitatively demonstrates the superiority of the method: its robustness is proved by counting, frame by frame, the matched ORB feature points of the target object and the number of inliers among them.
It should be noted that, because the information exchange between the above devices and the units within the system and their implementation are based on the same concept as the method embodiments of the present invention, the details can be found in the description of the method embodiments and are not repeated here.
Those of ordinary skill in the art will appreciate that all or part of the steps of the methods of the above embodiments can be completed by instructing the relevant hardware through a program; the program can be stored in a computer-readable storage medium, which may include read-only memory (ROM), random access memory (RAM), a magnetic disk, an optical disc, and the like.
The target tracking method based on ORB feature point matching provided by the embodiments of the present invention has been described in detail above. Specific examples have been used herein to explain the principles and embodiments of the present invention; the description of the above embodiments is only intended to help understand the method of the present invention and its core idea. Meanwhile, for those of ordinary skill in the art, changes will be made in the specific embodiments and scope of application according to the idea of the present invention. In summary, this description should not be construed as a limitation of the present invention.

Claims (5)

1. A target tracking method based on ORB feature point matching, characterized in that the method comprises the following steps:
first extracting the ORB feature point set from the input original image frame;
matching it in descriptor space against the ORB feature point set of the target object at the previous frame to produce the matched feature set of the current frame;
computing the motion transformation of the tracked target, according to the position of the target in the previous frame, using a method combining random sample consensus with multiple transformation models;
finally updating the set by continuously adding new feature points and pruning outliers, producing the target feature set of the current frame.
2. The target tracking method based on ORB feature point matching according to claim 1, characterized in that: the ORB feature point descriptor is represented by a binary string, each bit of which is generated by comparing a randomly chosen pair of pixels inside an image patch of size S × S around the feature point; a binary string of length n = 256 bits is used; a test on an image patch P of size S × S is defined by a binary function, as shown in formula (1):
where p(x) and p(y) are the gray values of P at pixels x and y; by choosing a specific set of ⟨x, y⟩ point pairs as tests, an n-bit string is produced as the BRIEF descriptor, defined as in formula (2):
3. The target tracking method based on ORB feature point matching according to claim 1, characterized in that: given two ORB feature point descriptors D<sub>i</sub> and D<sub>j</sub>, each of length n bits, their similarity ρ(D<sub>i</sub>, D<sub>j</sub>) is represented by the Hamming distance, which computers evaluate very efficiently; the similarity measure is defined as follows:
where D<sub>i</sub>(k) and D<sub>j</sub>(k) are the k-th bits of the ORB descriptors D<sub>i</sub> and D<sub>j</sub>, and the function δ[x] equals 1 when x equals 0, and 0 otherwise;
after the new ORB feature point set S<sub>t</sub> has been extracted from the current frame t, the subset of S<sub>t</sub> whose elements match the feature point set of the target object in the previous frame t−1 with a high degree is sought;
according to the feature point set of the target object in the previous frame t−1, a nearest-neighbor search algorithm over the whole descriptor space is used to find the best matches.
4. The target tracking method based on ORB feature point matching according to claim 1, characterized in that: to evaluate whether a candidate transformation T, estimated from a model M ∈ {M<sub>t</sub>, M<sub>s</sub>, M<sub>a</sub>, M<sub>p</sub>}, is suitable for describing the motion of the target object, a distance metric and a conditional probability are specified by comparison with the position of the target object in the previous frame; a model M is evaluated as follows:
where N<sub>c</sub>(M) denotes the number of consensus feature points, Pr(M) the probability of the candidate model, and Cp(M) the complexity of the model M; given a specific model M, the minimum number of matched feature point pairs required by M is chosen at random from the feature point set; all data in the feature point set are then tested, and the points that fit the transformation model M well form a consensus point set C(M) whose size is N<sub>c</sub>(M); the complexity Cp(M) is set per model; according to the position of the target object in the previous frame, a distance metric is defined as in formula (6):
where γ(u): [0,1] → R² is the normalized boundary of the target object in the previous frame t−1, and M(γ(u)) is the target object boundary under model M; if the boundary of the tracked target object is discrete, it can be represented by a polygon, and the distance metric is:
the probability Pr(M) of a particular motion transformation model M is defined as in formula (8):
where λ is a constant parameter controlling the decay rate.
5. The target tracking method based on ORB feature point matching according to claim 1, characterized in that: the ORB feature point set is iteratively updated by dynamically adding new feature points and pruning outliers; after feature point matching and transformation model estimation, the energy value of the i-th feature point of the target object's feature point set at frame t is updated as defined in formula (9):
formula (9) states that if a feature point, under the estimated transformation model, matches a point in the target object feature point set of the previous frame t−1, the point is a consensus point and its energy value is its value at frame t−1 plus an increment α; conversely, if a matched point does not fit the estimated transformation model, it is an outlier and its energy value is decreased by β; the energy of unmatched points remains unchanged.
CN201410376616.9A 2014-08-01 2014-08-01 Target tracking method based on ORB characteristic point matching Pending CN104200487A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410376616.9A CN104200487A (en) 2014-08-01 2014-08-01 Target tracking method based on ORB characteristics point matching

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201410376616.9A CN104200487A (en) 2014-08-01 2014-08-01 Target tracking method based on ORB characteristics point matching

Publications (1)

Publication Number Publication Date
CN104200487A true CN104200487A (en) 2014-12-10

Family

ID=52085773

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410376616.9A Pending CN104200487A (en) 2014-08-01 2014-08-01 Target tracking method based on ORB characteristics point matching

Country Status (1)

Country Link
CN (1) CN104200487A (en)

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105096338A (en) * 2014-12-30 2015-11-25 天津航天中为数据系统科技有限公司 Moving object extracting method and device
CN105844663A (en) * 2016-03-21 2016-08-10 中国地质大学(武汉) Adaptive ORB object tracking method
CN105913459A (en) * 2016-05-10 2016-08-31 中国科学院自动化研究所 Moving object detection method based on high resolution continuous shooting images
WO2017049817A1 (en) * 2015-09-24 2017-03-30 北京零零无限科技有限公司 Method and apparatus for operating unmanned aerial vehicle by means of gestures
WO2017173977A1 (en) * 2016-04-05 2017-10-12 中兴通讯股份有限公司 Mobile terminal target tracking method, device, and mobile terminal
CN107437257A (en) * 2017-08-08 2017-12-05 重庆信络威科技有限公司 Moving object segmentation and dividing method under a kind of mobile background
CN108021921A (en) * 2017-11-23 2018-05-11 塔普翊海(上海)智能科技有限公司 Image characteristic point extraction system and its application
CN108227717A (en) * 2018-01-30 2018-06-29 中国人民解放军陆军装甲兵学院 Multiple mobile robot's map amalgamation method and convergence platform based on ORB features
CN109146920A (en) * 2018-06-29 2019-01-04 西北工业大学 A kind of method for tracking target that insertion type is realized
CN110147809A (en) * 2019-03-08 2019-08-20 亮风台(北京)信息科技有限公司 Image processing method and device, storage medium and vision facilities

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102855649A (en) * 2012-08-23 2013-01-02 山东电力集团公司电力科学研究院 Method for splicing high-definition image panorama of high-pressure rod tower on basis of ORB (Object Request Broker) feature point
CN103516995A (en) * 2012-06-19 2014-01-15 中南大学 A real time panorama video splicing method based on ORB characteristics and an apparatus
CN103679749A (en) * 2013-11-22 2014-03-26 北京奇虎科技有限公司 Moving target tracking based image processing method and device
CN103700069A (en) * 2013-12-11 2014-04-02 武汉工程大学 ORB (object request broker) operator-based reference-free video smoothness evaluation method

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103516995A (en) * 2012-06-19 2014-01-15 中南大学 A real time panorama video splicing method based on ORB characteristics and an apparatus
CN102855649A (en) * 2012-08-23 2013-01-02 山东电力集团公司电力科学研究院 Method for splicing high-definition image panorama of high-pressure rod tower on basis of ORB (Object Request Broker) feature point
CN103679749A (en) * 2013-11-22 2014-03-26 北京奇虎科技有限公司 Moving target tracking based image processing method and device
CN103700069A (en) * 2013-12-11 2014-04-02 武汉工程大学 ORB (object request broker) operator-based reference-free video smoothness evaluation method

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
ETHAN RUBLEE et al.: "ORB: an efficient alternative to SIFT or SURF", Computer Vision *
PETTER STRANDMARK et al.: "Joint Random Sample Consensus and Multiple Motion Models for Robust Video Tracking", Lecture Notes in Computer Science *
XIE Chengming (谢成明): "Research on Target Detection and Tracking Based on ORB Features", China Master's Theses Full-Text Database, Information Science and Technology *

Cited By (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105096338B (en) * 2014-12-30 2018-06-22 天津航天中为数据系统科技有限公司 Extracting of Moving Object and device
CN105096338A (en) * 2014-12-30 2015-11-25 天津航天中为数据系统科技有限公司 Moving object extracting method and device
WO2017049817A1 (en) * 2015-09-24 2017-03-30 北京零零无限科技有限公司 Method and apparatus for operating unmanned aerial vehicle by means of gestures
US10261507B2 (en) 2015-09-24 2019-04-16 Beijing Zero Zero Infinity Technology Co., Ltd Method and device for controlling unmanned aerial vehicle with gesture
CN105844663A (en) * 2016-03-21 2016-08-10 中国地质大学(武汉) Adaptive ORB object tracking method
CN105844663B (en) * 2016-03-21 2018-11-27 中国地质大学(武汉) Adaptive ORB target tracking method
WO2017173977A1 (en) * 2016-04-05 2017-10-12 中兴通讯股份有限公司 Mobile terminal target tracking method, device, and mobile terminal
CN105913459B (en) * 2016-05-10 2019-07-12 中国科学院自动化研究所 Moving target detecting method based on high-resolution continuous shooting image
CN105913459A (en) * 2016-05-10 2016-08-31 中国科学院自动化研究所 Moving object detection method based on high resolution continuous shooting images
CN107437257A (en) * 2017-08-08 2017-12-05 重庆信络威科技有限公司 Moving object segmentation and dividing method under a kind of mobile background
CN108021921A (en) * 2017-11-23 2018-05-11 塔普翊海(上海)智能科技有限公司 Image characteristic point extraction system and its application
CN108227717A (en) * 2018-01-30 2018-06-29 中国人民解放军陆军装甲兵学院 Multiple mobile robot's map amalgamation method and convergence platform based on ORB features
CN108227717B (en) * 2018-01-30 2021-12-03 中国人民解放军陆军装甲兵学院 Multi-mobile-robot map fusion method and fusion platform based on ORB (Oriented FAST and Rotated BRIEF) features
CN109146920A (en) * 2018-06-29 2019-01-04 西北工业大学 Target tracking method with embedded implementation
CN109146920B (en) * 2018-06-29 2021-12-28 西北工业大学 Target tracking method capable of realizing embedded implementation
CN110147809A (en) * 2019-03-08 2019-08-20 亮风台(北京)信息科技有限公司 Image processing method and device, storage medium and vision facilities

Similar Documents

Publication Publication Date Title
CN104200487A (en) Target tracking method based on ORB characteristics point matching
Chen et al. Learning context flexible attention model for long-term visual place recognition
Li et al. GradNet: Gradient-guided network for visual object tracking
Bertasius et al. Semantic segmentation with boundary neural fields
Ji et al. Encoder-decoder with cascaded CRFs for semantic segmentation
Wang et al. Inverse sparse tracker with a locally weighted distance metric
Von Stumberg et al. GN-Net: The Gauss-Newton loss for multi-weather relocalization
Wang et al. Temporal segment networks: Towards good practices for deep action recognition
Tsagkatakis et al. Online distance metric learning for object tracking
CN101470809A (en) Moving object detection method based on an expanded Gaussian mixture model
Badrinarayanan et al. Semi-supervised video segmentation using tree structured graphical models
Chen et al. Background–foreground interaction for moving object detection in dynamic scenes
Hua et al. Depth estimation with convolutional conditional random field network
Guclu et al. Integrating global and local image features for enhanced loop closure detection in RGB-D SLAM systems
Xu et al. Pose for everything: Towards category-agnostic pose estimation
Jiao et al. Magicvo: End-to-end monocular visual odometry through deep bi-directional recurrent convolutional neural network
Ding et al. Simultaneous body part and motion identification for human-following robots
Lalos et al. Efficient tracking using a robust motion estimation technique
Zhou et al. STI-Net: Spatiotemporal integration network for video saliency detection
CN112633100B (en) Behavior recognition method, behavior recognition device, electronic equipment and storage medium
Di et al. A unified framework for piecewise semantic reconstruction in dynamic scenes via exploiting superpixel relations
Jiang et al. Regularisation learning of correlation filters for robust visual tracking
Li et al. Spatiotemporal tree filtering for enhancing image change detection
Sanches et al. Recommendations for evaluating the performance of background subtraction algorithms for surveillance systems
Zhang et al. Neural guided visual slam system with Laplacian of Gaussian operator

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20141210
