CN105389833A - Target tracking method based on online iteration subspace learning - Google Patents

Target tracking method based on online iteration subspace learning

Info

Publication number
CN105389833A
Authority
CN
China
Prior art keywords
target
image
subspace
formula
iteration
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201510993106.0A
Other languages
Chinese (zh)
Other versions
CN105389833B (en)
Inventor
何军
张德娇
施蓓蓓
张玥
崔桐
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanji Agricultural Machinery Research Institute Co.,Ltd.
Original Assignee
Nanjing University of Information Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing University of Information Science and Technology filed Critical Nanjing University of Information Science and Technology
Priority to CN201510993106.0A priority Critical patent/CN105389833B/en
Publication of CN105389833A publication Critical patent/CN105389833A/en
Application granted granted Critical
Publication of CN105389833B publication Critical patent/CN105389833B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning

Landscapes

  • Image Analysis (AREA)

Abstract

The invention discloses a target tracking method based on online iterative subspace learning. The method first calibrates the target image and then updates the subspaces by iteratively updating a subspace set online. For the target to be tracked in each frame of a surveillance video, once the target image is calibrated, only an online subspace update is required; the storage occupied by the subspace set maintained during tracking is far smaller than the memory occupied by the dictionary maintained in classical target tracking methods, which reduces memory requirements and improves the efficiency of the method. The method still tracks the target effectively when it is partially occluded or affected by illumination changes, and it remains robust even when the tracked object undergoes large pose changes, severe contamination, or strong illumination variations.

Description

Target tracking method based on online iterative subspace learning
Technical field
The invention belongs to the technical field of computer vision and relates to a target tracking method, more specifically to a target tracking method based on online iterative subspace learning.
Background technology
Visual tracking refers to acquiring an image sequence with a camera, analyzing and processing the images with a computer according to a given algorithm, and, after the moving target has been detected, extracted, and identified, obtaining the motion parameters of the target object, including its position, speed, and trajectory. By processing and analyzing these priors, the computer can estimate the target's motion and track it continuously. Visual tracking has very wide applications in real life. For example, in public places with security requirements (places with large pedestrian flows, such as airports and highways), when a target person must be followed, computer vision tracking can be used to estimate the person's next move, so that tracking is not lost even when the target is occluded or the illumination changes. Classical target tracking methods occupy a large memory footprint and have low efficiency; in particular, when the tracked scene or the target's appearance changes considerably, the tracking results are usually poor.
Summary of the invention
To solve the above problems, the invention discloses a target tracking method based on online iterative subspace learning, which effectively handles partial occlusion of the target and illumination changes in the tracked image, significantly improves tracking efficiency, and reduces memory requirements.
In order to achieve the above object, the invention provides following technical scheme:
The target tracking method based on online iterative subspace learning comprises the following steps:
Step A: robust online calibration of the image sequence
Each image of the tracked target is vectorized into an n × 1 column vector, so the whole image sequence forms an n × N matrix D, where D can be decomposed into an n × N low-rank matrix L = UW, an n × N sparse matrix E, and a nonlinear image transformation τ. The robust online calibration of the image sequence is formulated as:

min_{U,W,E,τ} ||E||₁
s.t. D∘τ = UW + E, U ∈ G(d, n)    (1)

Owing to the nonlinearity of the transformed images D∘τ, formula (1) is linearized as:

min_{U^k,W,E,Δτ} ||E||₁
s.t. D∘τ^k + Σᵢ Jᵢ Δτᵢ ξᵢᵀ = U^k W + E, U^k ∈ G(d_k, n)    (2)

Formula (2) transforms the target image into an accurately calibrated target image in the canonical coordinate system, which specifically comprises the following steps:
Step A-1: First, provide the initial position parameters of the object to be tracked in the first frame. The target image I framed by these initial position parameters is uniformly perturbed by m affine transformations, yielding m perturbed target images, which are then used to initialize the L subspaces via formula (2).
Step A-2: Only one frame I_i is considered at a time. With the subspace fixed, formula (2) is rewritten as:

min_{w,e,Δτ} ||e||₁
s.t. I_i∘τ_i^k + J_i^k Δτ = U_t^k w + e    (3)

where I_i is the i-th frame, τ_i^k is the transformation parameter estimated at the k-th iteration, J_i^k is the Jacobian of the transformation at the k-th iteration, w is the weight vector, e is a sparse vector, and Δτ is the incremental transformation parameter;
The augmented Lagrangian of formula (3) is:

L(U_t^k, w, e, Δτ, λ) = ||e||₁ + λᵀ h(w, e, Δτ) + (μ/2) ||h(w, e, Δτ)||₂²    (4)

where h(w, e, Δτ) = I_i∘τ_i^k + J_i^k Δτ − U_t^k w − e and λ ∈ Rⁿ is the Lagrange multiplier vector;
The optimal parameters (w*, e*, Δτ*, λ*) are obtained by the ADMM algorithm of formula (5), in which ρ > 1 is the constant scaling factor of ADMM;
Step A-3: Formula (4) is selected as the cost function for updating the subspace U_t^k in the k-th iteration.
First, differentiate (4) with respect to U^k:

dL/dU^k = (λ* + μ h(w*, e*, Δτ*)) w*ᵀ    (6)

Projecting this onto the Grassmann manifold G(d_k, n) gives the gradient ΔL = (I − U_t^k U_t^kᵀ) dL/dU^k. Introducing Γ₁ = λ* + μ h(w*, e*, Δτ*) and Γ = (I − U_t^k U_t^kᵀ) Γ₁ gives ΔL = Γ w*ᵀ, whose SVD is:

ΔL = [Γ/||Γ||, x₂, …, x_d] · diag(σ, 0, …, 0) · [w*/||w*||, y₂, …, y_d]ᵀ    (7)

The subspace is then updated as:

U_{t+1}^k = U_t^k + (cos(ησ) − 1) U_t^k (w_t*/||w_t*||)(w_t*/||w_t*||)ᵀ − sin(ησ) (Γ/||Γ||)(w_t*/||w_t*||)ᵀ    (8)
Step B: Track the target with the online subspace update method; the subspace set {U^k} is updated online as follows:
For the current frame I_i to be tracked, L iterations are performed to complete the update of the subspace set. In each iteration, formula (5) is used to compute the optimal (w*, e*, Δτ*, λ*), where Δτ* yields the updated transformation parameter; the initial subspace set and the initial transformation parameter are those produced by the previous frame.
Subsequent frames are then processed in turn: each takes its initial transformation parameter from the previous frame, and the subspace set is updated in the same way as for the i-th frame until all subsequent frames have been processed.
Keeping the vertex position constants of the rectangular box fixed in the canonical coordinate system of the calibrated target image, the inverse transformation τ⁻¹ maps these vertices to the location parameters of the target in frame I_{i+1}.
Further, before step B, the following formula is used to judge whether the image of the tracked target is severely distorted:
If the sparse vector e* exceeds the given threshold, the image of the tracked target is severely distorted, and the subspace set of the tracked target must be reinitialized: the calibrated target of the previous frame and the distorted target of the current frame are each randomly and uniformly perturbed m times, the rank of each subspace is increased by one (d_k = d_k + 1, k = 1, …, L), and formula (2) is then used to initialize a new subspace set.
Further, ρ = 1.5.
Further, ε = 0.2.
Beneficial effects:
In the target tracking method provided by the invention, for the target to be tracked in each frame of a surveillance video, once the target image is calibrated only an online subspace update is required. The storage occupied by the subspace set maintained during tracking is far smaller than the memory occupied by the "dictionary" maintained in classical target tracking methods, which reduces memory requirements and improves the efficiency of the method. The method still tracks the target effectively when it is partially occluded or affected by illumination changes; even when the tracked object undergoes large pose changes, severe contamination, or strong illumination variations, the method can still track the target well and has strong robustness.
Brief description of the drawings
Fig. 1 is a flowchart of the steps of the invention.
Fig. 2 is a schematic diagram of image calibration.
Fig. 3 is a schematic diagram of target tracking based on image calibration.
Fig. 4 is a schematic diagram of the online update of the subspace set.
Embodiment
The technical scheme provided by the invention is described in detail below with reference to specific embodiments. It should be understood that the following embodiments are only intended to illustrate the invention, not to limit its scope.
In the target tracking method based on online iterative subspace learning provided by the invention, the image to be calibrated is processed with a robust regression method based on the ℓ1 norm; after the target image is calibrated, the subspaces are updated by iteratively updating the subspace set online. Specifically, as shown in Fig. 1, the method comprises the following steps:
Step A: robust online calibration of the image sequence
Each image of the tracked target is vectorized into an n × 1 column vector (n is the number of pixels in the image), so the whole image sequence forms an n × N matrix D (N is the number of images). D can be decomposed into an n × N low-rank matrix L = UW (where U is the column-orthogonal subspace matrix of the low-rank matrix L, of size n × d, with d the rank of L, and W is the weight coefficient matrix, of size d × N), an n × N sparse matrix E, and a nonlinear image transformation τ. The robust online calibration of the image sequence is modeled as formula (1):

min_{U,W,E,τ} ||E||₁
s.t. D∘τ = UW + E, U ∈ G(d, n)    (1)

The set of all n × d column-orthogonal matrices in n-dimensional real space is called the Grassmann manifold (Grassmannian), denoted G(d, n), i.e. U ∈ G(d, n). The complete image sequence, after a suitable geometric transformation, can therefore be expressed as the superposition of a low-rank matrix UW and a sparse outlier matrix E.
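As an illustration of the decomposition above, the following sketch builds a synthetic n × N matrix D from a rank-d subspace plus sparse outliers and splits it back with a plain truncated SVD. This is only a stand-in for the robust ℓ1 program of formula (1); the names (`L_hat`, `E_hat`) and the use of an SVD instead of the ℓ1 minimization are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n, N, d = 100, 20, 3

# Synthetic low-rank "appearance" part UW plus sparse occlusion errors E.
U_true = np.linalg.qr(rng.standard_normal((n, d)))[0]   # U in G(d, n)
W_true = rng.standard_normal((d, N))
E_true = np.zeros((n, N))
E_true[rng.integers(0, n, 30), rng.integers(0, N, 30)] = 5.0
D = U_true @ W_true + E_true

# Crude split: the rank-d SVD approximation plays the role of the
# low-rank matrix L = UW, and the residual plays the role of E.
Uf, s, Vt = np.linalg.svd(D, full_matrices=False)
L_hat = (Uf[:, :d] * s[:d]) @ Vt[:d, :]
E_hat = D - L_hat
```

By construction L_hat has rank at most d and L_hat + E_hat reproduces D exactly; the robust program of formula (1) additionally forces E to be sparse rather than merely small.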
However, because of the nonlinearity of the transformed images D∘τ, formula (1) must be solved by successive linear approximation, i.e. formula (2):

min_{U^k,W,E,Δτ} ||E||₁
s.t. D∘τ^k + Σᵢ Jᵢ Δτᵢ ξᵢᵀ = U^k W + E, U^k ∈ G(d_k, n)    (2)

At the k-th iteration, τ^k is the current estimate of the image transformation parameters, Jᵢ is the Jacobian of the i-th image with respect to the transformation, and {ξᵢ} denotes the standard basis of R^N. Because of the effect of the nonlinear transformation τ, a family of low-rank subspaces {U^k, k = 1, …, L} is needed to approximate this nonlinear transformation process (L is the total number of iterations; the stronger the nonlinearity, the larger L should be set, typically L = 10). Moreover, in different iterations the scale of image alignment differs, so the subspaces in this family have different dimensions, i.e. at different iteration stages U^k is constrained to a different Grassmann manifold G(d_k, n).
Step A-1: construct and initialize the subspaces:
For the video sequence on which target tracking is performed, first provide the initial position parameters of the object to be tracked in the first frame (i.e., three vertex coordinates). In Fig. 2, the left half is the original image before calibration, in which the object to be tracked is framed by three vertex coordinates. (Note that three coordinates are only an example; three, four, or even more vertex coordinates can be used to frame the target image as needed.) The target image I framed by these three coordinates is uniformly perturbed by m affine transformations, comprising scaling, translation, and rotation, yielding m perturbed target images. These m images are then used to initialize the L subspaces, based on formula (2).
Step A-2: solving the local linear problem by ADMM:
When U^k is known (or is a known estimate), formula (2) can be solved by the ADMM (Alternating Direction Method of Multipliers) algorithm. For online calibration, only one frame I_i is considered at a time; with the subspace fixed, formula (2) can be rewritten as:

min_{w,e,Δτ} ||e||₁
s.t. I_i∘τ_i^k + J_i^k Δτ = U_t^k w + e    (3)

Here I_i is the i-th frame, τ_i^k is the transformation parameter estimated at the k-th iteration, and J_i^k is the Jacobian of the transformation at the k-th iteration; the goal is to optimally solve for the weight vector w, the sparse vector e, and the incremental transformation parameter Δτ.
The augmented Lagrangian of formula (3) is:

L(U_t^k, w, e, Δτ, λ) = ||e||₁ + λᵀ h(w, e, Δτ) + (μ/2) ||h(w, e, Δτ)||₂²    (4)

where h(w, e, Δτ) = I_i∘τ_i^k + J_i^k Δτ − U_t^k w − e and λ ∈ Rⁿ is the Lagrange multiplier vector.
The optimal parameters (w*, e*, Δτ*, λ*) are obtained by the ADMM algorithm of formula (5), in which ρ > 1 is the constant scaling factor of ADMM, taken as ρ = 1.5.
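Formula (5) itself is not reproduced in the text, so the following is only a plausible ADMM splitting consistent with the augmented Lagrangian (4): alternating closed-form steps for w and Δτ, soft-thresholding for e, dual ascent for λ, and μ scaled by the constant factor ρ each pass. The function names (`soft`, `admm_align`) and the synthetic data are assumptions:

```python
import numpy as np

def soft(x, t):
    """Soft-thresholding: the proximal operator of the l1 norm."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def admm_align(I_warp, U, J, mu=1.0, rho=1.5, iters=50):
    """Plausible ADMM sketch for formula (3)/(4), with residual
    h(w, e, dtau) = I_warp + J @ dtau - U @ w - e."""
    n, d = U.shape
    p = J.shape[1]
    w, e = np.zeros(d), np.zeros(n)
    dtau, lam = np.zeros(p), np.zeros(n)
    Jpinv = np.linalg.pinv(J)
    for _ in range(iters):
        w = U.T @ (I_warp + J @ dtau - e + lam / mu)              # least-squares w-step
        e = soft(I_warp + J @ dtau - U @ w + lam / mu, 1.0 / mu)  # prox step for e
        dtau = Jpinv @ (U @ w + e - I_warp - lam / mu)            # least-squares dtau-step
        h = I_warp + J @ dtau - U @ w - e                         # constraint residual
        lam = lam + mu * h                                        # dual ascent
        mu *= rho                                                 # constant factor rho > 1
    return w, e, dtau, lam

# Synthetic problem whose observation lies exactly in span(U) + span(J).
rng = np.random.default_rng(2)
n, d, p = 60, 4, 3
U = np.linalg.qr(rng.standard_normal((n, d)))[0]
J = rng.standard_normal((n, p))
I_warp = U @ rng.standard_normal(d) + J @ rng.standard_normal(p)

w, e, dtau, lam = admm_align(I_warp, U, J)
h = I_warp + J @ dtau - U @ w - e
```

As μ grows geometrically, the constraint residual h is driven toward zero, which is the role of the penalty term in (4).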
Step A-3: iterative subspace update:
Determining the optimal subspace U^k of formula (2) at the k-th iteration can be regarded as an optimization problem on the Grassmann manifold. That is, the invention seeks a sequence of subspaces U_t^k that converges as t → ∞; formula (4) is therefore chosen as the cost function for updating the subspace in the k-th iteration. Once the other optimal parameters (w*, e*, Δτ*, λ*) have been solved from (5) using the previous subspace, the subspace can be updated with formula (4) as the cost function.
Using the gradient descent algorithm, first differentiate (4) with respect to U^k, that is:

dL/dU^k = (λ* + μ h(w*, e*, Δτ*)) w*ᵀ    (6)

Projecting this onto the Grassmann manifold G(d_k, n) gives the gradient ΔL = (I − U_t^k U_t^kᵀ) dL/dU^k. Introducing Γ₁ = λ* + μ h(w*, e*, Δτ*) and Γ = (I − U_t^k U_t^kᵀ) Γ₁ gives ΔL = Γ w*ᵀ. Since Γ is an n × 1 vector and w* is a d × 1 weight vector, the matrix ΔL has rank 1, so its SVD has a unique nonzero singular value σ = ||Γ|| · ||w*||, with corresponding left and right singular vectors Γ/||Γ|| and w*/||w*||. By appending mutually orthogonal vectors x₂, …, x_d and y₂, …, y_d, the SVD of ΔL can be written as formula (7):

ΔL = [Γ/||Γ||, x₂, …, x_d] · diag(σ, 0, …, 0) · [w*/||w*||, y₂, …, y_d]ᵀ    (7)

With the gradient-descent step size set to η = 0.001, the subspace update is formula (8):
U_{t+1}^k = U_t^k + (cos(ησ) − 1) U_t^k (w_t*/||w_t*||)(w_t*/||w_t*||)ᵀ − sin(ησ) (Γ/||Γ||)(w_t*/||w_t*||)ᵀ    (8)
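The update (6)-(8) can be checked numerically: because Γ is projected onto the orthogonal complement of span(U), the step of formula (8) is a geodesic rotation on the Grassmann manifold and leaves the columns of U exactly orthonormal. The tensors below are synthetic stand-ins for λ*, h, and w*:

```python
import numpy as np

rng = np.random.default_rng(3)
n, d, eta = 50, 4, 0.001                            # step size eta as in the text

U = np.linalg.qr(rng.standard_normal((n, d)))[0]    # U_t^k in G(d, n)
w = rng.standard_normal(d)                          # stand-in for w*
Gamma1 = rng.standard_normal(n)                     # stand-in for lambda* + mu * h
Gamma = Gamma1 - U @ (U.T @ Gamma1)                 # project out span(U)

sigma = np.linalg.norm(Gamma) * np.linalg.norm(w)   # unique singular value of dL
u = w / np.linalg.norm(w)
g = Gamma / np.linalg.norm(Gamma)

# Formula (8): geodesic step along the descent direction.
U_next = (U
          + (np.cos(eta * sigma) - 1.0) * np.outer(U @ u, u)
          - np.sin(eta * sigma) * np.outer(g, u))
```

Since g is orthogonal to every column of U, expanding U_nextᵀ U_next cancels the cross terms and the cos²+sin² identity restores the identity matrix, so U_next stays on G(d, n) without re-orthogonalization.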
Step B: implementation of target tracking:
Step B-1: target tracking with online subspace update
For a new frame I_{i+1}, the position of the target in I_{i+1} must be estimated because of the target's motion. First, the image transformation parameter computed for the previous frame is used as the initial transformation parameter of the current iteration, corresponding to the uncalibrated target image in I_{i+1}. The online image calibration method, i.e. formula (2), is then used to compute the optimal transformation parameter, which converts the target image into an accurately calibrated target image (shown in the right half of Fig. 2). The four vertex position constants (p₁, p₂, p₃, p₄) of the rectangular box of the calibrated target image in the canonical coordinate system are held fixed (four vertices are used in this step for clarity of presentation; three or more vertices can be used as needed). The inverse transformation τ⁻¹ then yields the target's location parameters (o₁, o₂, o₃, o₄) in I_{i+1}, i.e. o_j = τ⁻¹(p_j), as in formula (9).
Therefore, keeping the four vertex position constants (p₁, p₂, p₃, p₄) of the calibrated target image fixed in the canonical coordinate system, for subsequent frames I_{i+1}, …, I_{i+m} it suffices to calibrate the target image accurately and compute the corresponding transformation parameters τ_{i+1}, …, τ_{i+m}; formula (9) then tracks the target in each frame. The tracking process is shown in Fig. 3.
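The vertex mapping step can be sketched as follows, assuming for illustration that τ is a 2-D affine transform with linear part A and translation b mapping image coordinates to the canonical frame, so that o_j = τ⁻¹(p_j) = A⁻¹(p_j − b); the particular values of A, b, and the canonical box are hypothetical:

```python
import numpy as np

A = np.array([[1.1, 0.1], [-0.05, 0.95]])   # linear part of tau (assumed)
b = np.array([3.0, -2.0])                   # translation part of tau (assumed)

# Canonical-frame box vertices p1..p4, held constant across frames.
P = np.array([[0, 0], [32, 0], [32, 32], [0, 32]], dtype=float)

# o_j = tau^{-1}(p_j): undo translation, then apply the inverse linear map.
A_inv = np.linalg.inv(A)
O = (P - b) @ A_inv.T                       # target location o1..o4 in the frame
```

Pushing O back through τ (O @ A.T + b) recovers P exactly, which is what makes it possible to keep the canonical vertices constant while only the per-frame parameters τ_{i+1}, …, τ_{i+m} change.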
Because the target in a new frame may undergo illumination changes, deformation, and so on, after the target image calibration is completed, the subspace set characterizing the low-rank structure of the target image must be updated before the position of the tracked target in the next frame can be estimated. The online update flow of the subspace set is shown in Fig. 4.
The online update proceeds as follows:
(1) For the current frame I_i to be tracked, start from the initial subspace set and the initial transformation parameter. In the first iteration, the first subspace is updated: formula (5) is used to compute the optimal (w*, e*, Δτ*, λ*), where Δτ* yields the updated transformation parameter, and formula (8) is then used to update the subspace, completing the first iteration.
(2) In the second iteration, the second subspace is updated, taking the transformation parameter computed in the first iteration as the initial transformation parameter of the current iteration. Formula (5) is used to compute the optimal (w*, e*, Δτ*, λ*), where Δτ* yields the updated transformation parameter, and formula (8) is then used to update the subspace, completing the second iteration.
(3) Similarly, the third, fourth, …, L-th iterations are performed, completing the update of the subspace set.
(4) When processing frame I_{i+1}, its initial transformation parameter is taken from the previous frame and steps (1)-(3) are repeated to update the subspace set.
(5) Subsequent frames are processed as in step (4), until all frames have been processed.
Step B-2: subspace re-training during target tracking:
During target tracking, the image of the tracked target can change considerably because of camera zoom, changes in the target's appearance, and so on, so that it can no longer be well approximated by the original subspaces. In such a situation, online updating of the subspace set is no longer adequate. To determine whether the tracked target image has become severely distorted, the sparse vector e* estimated by formula (5) is used: if the energy of e* relative to the tracked target image exceeds a given threshold, the image of the tracked target is severely distorted and the subspace set of the tracked target must be reinitialized. The criterion is formula (10):

||e*|| / ||I_i∘τ_i|| > ε    (10)

Here ε is a preset decision threshold, set to ε = 0.2.
To reinitialize the subspace set so that it reflects the change of the tracked target, the invention uses the calibrated target of the previous frame and the distorted target of the current frame, each randomly and uniformly perturbed m times, increases the rank of each subspace by one (d_k = d_k + 1, k = 1, …, L), and then uses formula (2) to initialize a new subspace set. In this way, even when the tracked object undergoes large pose changes, severe contamination, or strong illumination variations, the method can still track the target well and has strong robustness.
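The re-initialization test of formula (10) can be sketched as a norm-ratio check; the exact norm used is not given in the text, so the Euclidean norm here is an assumption, as are the function name and the sample vectors:

```python
import numpy as np

def needs_reinit(e_star, target_vec, eps=0.2):
    """Formula (10) as a ratio test: re-train the subspace set when the
    energy of the sparse error e* exceeds the fraction eps of the
    calibrated target image's energy (Euclidean norm assumed)."""
    return bool(np.linalg.norm(e_star) > eps * np.linalg.norm(target_vec))

# Hypothetical calibrated target vector and two sparse-error scenarios.
target = np.ones(100)           # ||target|| = 10
mild = np.full(100, 0.1)        # norm ratio 0.1 < 0.2: keep updating online
severe = np.full(100, 0.5)      # norm ratio 0.5 > 0.2: reinitialize subspaces
```

A mild error vector keeps the tracker in the cheap online-update path; only a severe one triggers the more expensive re-training of step B-2.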
The method of the invention is tested as follows. First, select the video tracking database to be tested, and select the target to be tracked from the first frame; the target location can be determined by three vertex coordinates, referred to as the "region of interest". The target image I is uniformly perturbed by 30 affine transformations, comprising scaling, translation, and rotation, yielding 30 perturbed target images, which are used to initialize the L subspaces based on formula (2); here L is set to L = 10. The optimal image transformation converts the image into the calibrated image (image calibration schematic, Fig. 2), so that the complete image sequence in the database, after a suitable geometric transformation, can be expressed as the superposition of a low-rank matrix and a sparse outlier matrix.
For the i-th frame of the video tracking database, the location parameters of the target to be tracked must be estimated. First, according to the ADMM formula (5), perform L iterations to solve for the optimal (w*, e*, Δτ*, λ*) corresponding to the subspace set, obtaining the optimal transformation parameter of this frame. Then judge, according to formula (10), whether the ratio of the norm of the sparse vector e* to the norm of the target image exceeds the given threshold ε = 0.2; if it does, re-train the subspace set. If it does not, update the subspace set with online iteration according to formula (8); the schematic of the online iterative update of the subspace set is shown in Fig. 4. With the four vertex position constants (p₁, p₂, p₃, p₄) of the calibrated target image fixed in the canonical coordinate system, apply the inverse transformation according to formula (9) to obtain the target's location parameters (o₁, o₂, o₃, o₄) in this frame, thereby achieving target tracking; the principle of target tracking is shown schematically in Fig. 3.
The technical means disclosed in the scheme of the invention are not limited to those disclosed in the above embodiments but also include technical schemes formed by any combination of the above technical features. It should be pointed out that, for those skilled in the art, several improvements and modifications can be made without departing from the principles of the invention, and these improvements and modifications are also regarded as falling within the protection scope of the invention.

Claims (4)

1. A target tracking method based on online iterative subspace learning, characterized by comprising the following steps:
Step A: robust online calibration of the image sequence
Each image of the tracked target is vectorized into an n × 1 column vector, so the whole image sequence forms an n × N matrix D, where D can be decomposed into an n × N low-rank matrix L = UW, an n × N sparse matrix E, and a nonlinear image transformation τ; the robust online calibration of the image sequence is formulated as:

min_{U,W,E,τ} ||E||₁
s.t. D∘τ = UW + E, U ∈ G(d, n)    (1)

Owing to the nonlinearity of the transformed images D∘τ, formula (1) is linearized as:

min_{U^k,W,E,Δτ} ||E||₁
s.t. D∘τ^k + Σᵢ Jᵢ Δτᵢ ξᵢᵀ = U^k W + E, U^k ∈ G(d_k, n)    (2)
Formula (2) transforms the target image into an accurately calibrated target image in the canonical coordinate system, specifically comprising the following steps:
Step A-1: First, provide the initial position parameters of the object to be tracked in the first frame; the target image I framed by these initial position parameters is uniformly perturbed by m affine transformations, yielding m perturbed target images, which are then used to initialize the L subspaces via formula (2);
Step A-2: Only one frame I_i is considered at a time; with the subspace fixed, formula (2) is rewritten as:

min_{w,e,Δτ} ||e||₁
s.t. I_i∘τ_i^k + J_i^k Δτ = U_t^k w + e    (3)

where I_i is the i-th frame, τ_i^k is the transformation parameter estimated at the k-th iteration, J_i^k is the Jacobian of the transformation at the k-th iteration, w is the weight vector, e is a sparse vector, and Δτ is the incremental transformation parameter;
The augmented Lagrangian of formula (3) is:

L(U_t^k, w, e, Δτ, λ) = ||e||₁ + λᵀ h(w, e, Δτ) + (μ/2) ||h(w, e, Δτ)||₂²    (4)

where h(w, e, Δτ) = I_i∘τ_i^k + J_i^k Δτ − U_t^k w − e and λ ∈ Rⁿ is the Lagrange multiplier vector;
The optimal parameters (w*, e*, Δτ*, λ*) are obtained by the ADMM algorithm of formula (5), in which ρ > 1 is the constant scaling factor of ADMM;
Step A-3: Formula (4) is selected as the cost function for updating the subspace in the k-th iteration.
First, differentiate (4) with respect to U^k:

dL/dU^k = (λ* + μ h(w*, e*, Δτ*)) w*ᵀ    (6)

Projecting this onto the Grassmann manifold G(d_k, n) gives the gradient ΔL = (I − U^k U^kᵀ) dL/dU^k; introducing Γ₁ = λ* + μ h(w*, e*, Δτ*) and Γ = (I − U_t^k U_t^kᵀ) Γ₁ gives ΔL = Γ w*ᵀ, whose SVD is:

ΔL = [Γ/||Γ||, x₂, …, x_d] · diag(σ, 0, …, 0) · [w*/||w*||, y₂, …, y_d]ᵀ    (7)

The subspace is then updated as:

U_{t+1}^k = U_t^k + (cos(ησ) − 1) U_t^k (w_t*/||w_t*||)(w_t*/||w_t*||)ᵀ − sin(ησ) (Γ/||Γ||)(w_t*/||w_t*||)ᵀ    (8)
Step B: Track the target with the online subspace update method:
the subspace set {U^k} is updated online as follows:
for the current frame I_i to be tracked, L iterations are performed to complete the update of the subspace set; in each iteration, formula (5) is used to compute the optimal (w*, e*, Δτ*, λ*), where Δτ* yields the updated transformation parameter for k = 1, …, L; the initial subspace set and the initial transformation parameter are those produced by the previous frame;
subsequent frames are processed in turn, each taking its initial transformation parameter from the previous frame, and the subspace set is updated in the same way as for the i-th frame until all subsequent frames have been processed;
keeping the vertex position constants of the rectangular box fixed in the canonical coordinate system of the calibrated target image, the inverse transformation τ⁻¹ maps these vertices to the location parameters of the target in frame I_{i+1}.
2. The target tracking method based on online iterative subspace learning according to claim 1, characterized in that, before step B, the following formula is used to judge whether the image of the tracked target is severely distorted:
if the sparse vector e* exceeds the given threshold, the image of the tracked target is severely distorted and the subspace set of the tracked target must be reinitialized: the calibrated target of the previous frame and the distorted target of the current frame are each randomly and uniformly perturbed m times, the rank of each subspace is increased by one (d_k = d_k + 1, k = 1, …, L), and formula (2) is then used to initialize a new subspace set.
3. The target tracking method based on online iterative subspace learning according to claim 1 or 2, characterized in that ρ = 1.5.
4. The target tracking method based on online iterative subspace learning according to claim 2, characterized in that ε = 0.2.
CN201510993106.0A 2015-12-24 2015-12-24 A kind of method for tracking target based in line interation sub-space learning Active CN105389833B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510993106.0A CN105389833B (en) 2015-12-24 2015-12-24 A kind of method for tracking target based in line interation sub-space learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201510993106.0A CN105389833B (en) 2015-12-24 2015-12-24 A kind of method for tracking target based in line interation sub-space learning

Publications (2)

Publication Number Publication Date
CN105389833A true CN105389833A (en) 2016-03-09
CN105389833B CN105389833B (en) 2018-11-27

Family

ID=55422082

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510993106.0A Active CN105389833B (en) 2015-12-24 2015-12-24 A kind of method for tracking target based in line interation sub-space learning

Country Status (1)

Country Link
CN (1) CN105389833B (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106228574A (en) * 2016-07-19 2016-12-14 中国联合网络通信集团有限公司 Method for tracking target and device
CN106384356A (en) * 2016-09-22 2017-02-08 北京小米移动软件有限公司 Method and apparatus for separating foreground and background of video sequence
CN106503652A (en) * 2016-10-21 2017-03-15 南京理工大学 Based on the accident detection method that low-rank adaptive sparse is rebuild
WO2021253671A1 (en) * 2020-06-18 2021-12-23 中国科学院深圳先进技术研究院 Magnetic resonance cine imaging method and apparatus, and imaging device and storage medium

Citations (3)

Publication number Priority date Publication date Assignee Title
US20100332425A1 (en) * 2009-06-30 2010-12-30 Cuneyt Oncel Tuzel Method for Clustering Samples with Weakly Supervised Kernel Mean Shift Matrices
CN103295242A (en) * 2013-06-18 2013-09-11 南京信息工程大学 Multi-feature united sparse represented target tracking method
CN104751493A (en) * 2015-04-21 2015-07-01 南京信息工程大学 Sparse tracking method on basis of gradient texture features

Patent Citations (3)

Publication number Priority date Publication date Assignee Title
US20100332425A1 (en) * 2009-06-30 2010-12-30 Cuneyt Oncel Tuzel Method for Clustering Samples with Weakly Supervised Kernel Mean Shift Matrices
CN103295242A (en) * 2013-06-18 2013-09-11 南京信息工程大学 Multi-feature united sparse represented target tracking method
CN104751493A (en) * 2015-04-21 2015-07-01 南京信息工程大学 Sparse tracking method on basis of gradient texture features

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106228574A (en) * 2016-07-19 2016-12-14 中国联合网络通信集团有限公司 Method for tracking target and device
CN106384356A (en) * 2016-09-22 2017-02-08 北京小米移动软件有限公司 Method and apparatus for separating foreground and background of video sequence
CN106503652A (en) * 2016-10-21 2017-03-15 南京理工大学 Based on the accident detection method that low-rank adaptive sparse is rebuild
WO2021253671A1 (en) * 2020-06-18 2021-12-23 中国科学院深圳先进技术研究院 Magnetic resonance cine imaging method and apparatus, and imaging device and storage medium

Also Published As

Publication number Publication date
CN105389833B (en) 2018-11-27

Similar Documents

Publication Publication Date Title
Espiau Effect of camera calibration errors on visual servoing in robotics
CN103514366B (en) Method for recovering missing data in urban air quality concentration monitoring
Varma et al. Transformers in self-supervised monocular depth estimation with unknown camera intrinsics
CN110287819B (en) Moving target detection method based on low rank and sparse decomposition under dynamic background
CN105389833A (en) Target tracking method based on online iteration subspace learning
CN103456030B (en) Target tracking method based on scattering descriptors
CN108151713A (en) Fast pose estimation method for monocular visual odometry (VO)
CN106971189B (en) Low-resolution noisy star map recognition method
CN112967388A (en) Training method and device for a neural network model for three-dimensional time-series images
CN103310464B (en) Method for directly estimating camera ego-motion parameters based on normal flow
US20120134535A1 (en) Method for adjusting parameters of video object detection algorithm of camera and the apparatus using the same
CN112380985A (en) Real-time detection method for intrusion foreign matters in transformer substation
CN112949466A (en) Video AI smoke pollution source identification and positioning method
CN106097277B (en) Rope mass point tracking method based on vision measurement
CN113362377B (en) VO weighted optimization method based on monocular camera
CN111553954B (en) Online luminosity calibration method based on direct method monocular SLAM
CN112268564B (en) Unmanned aerial vehicle landing space position and attitude end-to-end estimation method
CN103868601B (en) Bilateral total variation regularization correction method for non-uniform response of IRFPA detectors
CN106991659B (en) Multi-frame adaptive optics image restoration method adapting to atmospheric turbulence changes
CN105205560B (en) Photovoltaic power supply power prediction method based on positive and negative error variable weights
Indelman Bundle adjustment without iterative structure estimation and its application to navigation
CN106017482B (en) Relative orbit control error calculation method for space operations based on unscented recursion
CN110533599B (en) Method for improving reconstruction quality of two-dimensional tomographic image of polluted gas concentration spatial distribution
Xu et al. Application and analysis of recurrent convolutional neural network in visual odometry
CN112949761A (en) Training method and device for three-dimensional image neural network model and computer equipment

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20210423

Address after: 210019 Area A, 4th Floor, Building A8, No. 8 Bailongjiang East Street, Nanjing, Jiangsu Province

Patentee after: Nanji Agricultural Machinery Research Institute Co.,Ltd.

Address before: 210044 No. 219, Ningliu Road, Nanjing, Jiangsu

Patentee before: Nanjing University of Information Science & Technology