CN103473790A - Online target tracking method based on incremental bilateral two-dimensional principal component analysis (Bi-2DPCA) learning and sparse representation

Info

Publication number: CN103473790A
Authority: CN (China)
Prior art keywords: target, 2dpca, frame, particle, increment
Legal status: Granted
Application number: CN2013103860768A
Other languages: Chinese (zh)
Other versions: CN103473790B (en)
Inventors: 李映, 宋旭, 李鹏程
Current Assignee: Nantong Mega new Mstar Technology Ltd
Original Assignee: Northwestern Polytechnical University
Application filed by Northwestern Polytechnical University
Priority to CN201310386076.8A
Publication of CN103473790A
Application granted
Publication of CN103473790B
Legal status: Active

Landscapes

  • Image Analysis (AREA)

Abstract

The invention relates to an online target tracking method based on incremental bilateral two-dimensional principal component analysis (Bi-2DPCA) learning and sparse representation. An incremental Bi-2DPCA learning algorithm is provided that updates the target subspace model rapidly and accurately so as to reflect appearance changes of the target during tracking. To address the degradation of tracking performance caused by frequent occlusion and noise contamination of the target, the Bi-2DPCA-based subspace model is embedded within a sparse representation framework, so that the interference that occlusion and noise introduce into target localization and subspace model updating is removed to the greatest possible extent. In addition, a new method for computing visual similarity is used; it takes the energy distribution of the Bi-2DPCA image representation into account and is more accurate than the classical reconstruction error. Tracking is carried out under a Bayesian inference framework, and the target state is estimated with a particle filter algorithm.

Description

Online target tracking method based on incremental Bi-2DPCA learning and sparse representation
Technical field
The present invention relates to an online target tracking method based on incremental Bi-2DPCA learning and sparse representation, namely an online target tracking method that combines a subspace representation model based on bilateral two-dimensional principal component analysis (Bilateral two-dimensional Principal Component Analysis, Bi-2DPCA) with sparse representation.
Background technology
Target tracking is a fundamental problem in computer vision, with a wide range of applications including video surveillance, behavior analysis, motion event detection and video retrieval. Although many researchers have devoted considerable effort to this field, visual tracking remains a challenging research topic, because the appearance of the target often changes during tracking as a result of illumination variation, occlusion, deformation, complex motion background and so on. A good target appearance model is therefore decisive for the robustness of a tracking algorithm.
As a classical unsupervised learning and data analysis technique, principal component analysis (Principal Component Analysis, PCA) has outstanding feature extraction and data representation capabilities. A PCA-based target subspace appearance model can describe the tracked target well and yields relatively robust tracking results. However, such an appearance model usually requires the two-dimensional target image to be unfolded, row by row or column by column, into a one-dimensional representation vector, which produces a very high-dimensional feature space. In this feature space it is difficult to compute the covariance matrix accurately, and computing its eigenvectors is particularly time-consuming. Moreover, PCA is suited to samples drawn from a unimodal probability model, so the accuracy of a PCA-based appearance model depends strongly on the probability distribution of the samples. To overcome these defects of PCA, the document "Bi-2DPCA: A Fast Face Coding Method for Recognition" proposed an image representation method based on Bi-2DPCA. Its main idea is to process the two-dimensional image data directly, without converting the image into a one-dimensional vector; the covariance matrices it constructs are small, so computing eigenvalues and eigenvectors is far simpler and faster than in PCA. The method also achieves a lower reconstruction error than PCA together with good real-time performance, and Bi-2DPCA has a solid theoretical foundation guaranteeing that it does not depend on the distribution of the data. Recently, tracking methods based on sparse representation have attracted increasing attention. Such a method assumes that the target is formed as a linear combination of a set of target templates and trivial templates, and obtains the sparse representation vector of a candidate target by adding a sparsity constraint. It handles noise and occlusion well, but its real-time performance is limited, and because it directly uses the target templates as the basis of the sparse dictionary, it cannot describe the structure and variation of the target feature space well.
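The size advantage of Bi-2DPCA over vectorized PCA described above can be illustrated with a short, non-authoritative numpy sketch (the array sizes and names are illustrative, not part of the invention): for 32 × 32 patches, vectorized PCA needs a 1024 × 1024 covariance matrix, while Bi-2DPCA works with an n × n column covariance and an m × m row covariance.

    import numpy as np

    T, m, n = 50, 32, 32
    A = np.random.rand(T, m, n)           # stand-in for T target patches

    # Classical PCA: flatten each patch into a length m*n vector.
    X = A.reshape(T, m * n)
    cov_pca = np.cov(X, rowvar=False)     # shape (m*n, m*n) = (1024, 1024)

    # Bi-2DPCA: covariance computed directly on the 2-D patches.
    mean = A.mean(axis=0)
    G = sum((Ai - mean).T @ (Ai - mean) for Ai in A) / T   # (n, n) column covariance
    print(cov_pca.shape, G.shape)         # (1024, 1024) versus (32, 32)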
In summary, the outstanding data representation ability of Bi-2DPCA and the robustness of sparse representation against occlusion and noise can be combined so that they complement each other.
Summary of the invention
Technical problem to be solved
To avoid the deficiencies of the prior art, the present invention proposes an online target tracking method based on incremental Bi-2DPCA learning and sparse representation.
Technical solution
An online target tracking method based on incremental Bi-2DPCA learning and sparse representation, characterized in that the steps are as follows (a code-level sketch of the complete loop is given after step 9):
Step 1: mark the target x_1 in the first frame, where x_1 is the affine transformation parameter of the target image patch in the first frame; initialize N particles and their weights {x_1^i, w_1^i}_{i=1}^{N};
Step 2: track the target in the first T frames with a classical particle filter algorithm to obtain the initial target sample set A = {A_1, A_2, ..., A_T}, where A_i denotes the target image patch matrix in frame i, normalized in size to m × n;
Step 3: apply Bi-2DPCA to the elements of A to obtain the following initial target subspace model:
Ā ∈ R^{m×n}: the mean image of the elements of A;
L ∈ R^{m×p}: the left transformation matrix, with orthogonal column vectors;
R ∈ R^{n×q}: the right transformation matrix, with orthogonal column vectors;
Step 4: input a new frame as the current frame and assume the current frame is frame t; resample the particles of the previous frame in proportion to their weights {w_{t-1}^i}, and then apply the Gaussian motion model to obtain the particles {x_t^i} of the current frame;
Step 5: obtain the appearance representation of the image patch corresponding to each particle x_t^i in the current frame, i.e. normalize it in size to an m × n image matrix A_t^i;
Step 6: compute the probability p(A_t^i | x_t^i) of the visual similarity between the appearance representation A_t^i of the image patch corresponding to each particle x_t^i and the target appearance model (this model consists of the Bi-2DPCA-based subspace model embedded in the sparse representation framework), and use this value as the new weight w_t^i of particle x_t^i; then, using the maximum a posteriori (MAP) criterion, take the state of the particle with the largest weight in the current frame as the state estimate of the target for this frame, i.e. the tracking result for the current frame; if the current frame is the last frame, stop, otherwise continue;
Step 7: judge whether M frames have been tracked; if so, go to step 8, otherwise go to step 4; M is the update frequency, 2 < M < 10;
Step 8: from these M tracking results obtain an increment matrix B = {A_{t+1}, A_{t+2}, ..., A_{t+M}}, in which each element is the appearance representation of the target image patch in a newly tracked frame, normalized in size to m × n;
Step 9: use the incremental Bi-2DPCA algorithm to update, with C = {C, B} (for the first update let C = {A, B}), the constructed Bi-2DPCA-based target subspace representation model; go to step 4.
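The following is a minimal, runnable Python skeleton of the loop described in steps 1 to 9. It is a reading aid under stated assumptions, not the patented implementation: the patch extraction and the Bi-2DPCA plus sparse-representation likelihood of step 6 are replaced by random stand-ins (extract_patch, likelihood), and all sizes (N, T, M, m, n) are illustrative.

    import numpy as np

    rng = np.random.default_rng(0)
    N, T, M, m, n = 100, 5, 5, 32, 32           # particles, initial frames, update period, patch size

    def extract_patch(frame, state, size=(m, n)):
        # Stand-in for warping the region described by the affine state to an m x n patch.
        return rng.random(size)

    def likelihood(patch, model):
        # Stand-in for the Bi-2DPCA + sparse-representation similarity of step 6.
        return rng.random() + 1e-12

    frames = [rng.random((240, 320)) for _ in range(30)]   # dummy video sequence
    state = np.zeros(6)                                    # affine parameters x_1 of step 1
    particles = np.tile(state, (N, 1))
    weights = np.full(N, 1.0 / N)
    model, samples = None, []

    for t, frame in enumerate(frames, start=1):
        if t > T:                                          # steps 4-6
            idx = rng.choice(N, size=N, p=weights)         # resample in proportion to the weights
            particles = particles[idx] + rng.normal(0.0, 0.05, particles.shape)  # Gaussian motion model
            weights = np.array([likelihood(extract_patch(frame, x), model) for x in particles])
            weights /= weights.sum()
        state = particles[np.argmax(weights)]              # MAP estimate, i.e. the tracking result
        samples.append(extract_patch(frame, state))
        if t == T or (t > T and (t - T) % M == 0):         # steps 3 and 7-9
            model = np.mean(samples, axis=0)               # placeholder for (re)building the subspace model
            samples = []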
Beneficial effect
The present invention proposes an online target tracking method based on incremental Bi-2DPCA learning and sparse representation. To reflect the appearance changes of the target during tracking, an incremental Bi-2DPCA learning algorithm is proposed that can update the target subspace model quickly and accurately. Because the target is often occluded or contaminated by noise during tracking, which degrades the tracking result, the present invention addresses this problem by embedding the Bi-2DPCA-based subspace model in the sparse representation framework, thereby removing to the greatest possible extent the interference that occlusion and noise bring to target localization and to the updating of the target subspace model. In addition, the present invention uses a new method for computing visual similarity, which takes the energy distribution of the Bi-2DPCA image representation into account and is more accurate than the classical reconstruction error. Tracking is carried out under the Bayesian inference framework, and the target state is estimated with a particle filter algorithm.
The beneficial effects of the invention are: the Bi-2DPCA-based target subspace representation inherits the advantages of Bi-2DPCA in image representation, covariance computation, and eigenvector and eigenvalue computation, which makes the tracking algorithm fast and accurate. Embedding the Bi-2DPCA-based target subspace representation in the sparse representation framework yields a noise image that indicates where noise and occlusion occur; by excluding the contaminated pixels that it identifies, the update of the target subspace model can be guided, so that noise and occlusion are handled well.
Description of the drawings
Fig. 1 is the flow chart of the method of the invention
Detailed description of the embodiments
The invention is now further described with reference to the embodiments and the accompanying drawing:
1) Mark the target x_1 in the first frame (x_1 is the affine transformation parameter of the target image patch in the first frame); initialize N particles and their weights {x_1^i, w_1^i}_{i=1}^{N};
2) Track the target in the first T frames with a classical particle filter algorithm to obtain the initial target sample set A = {A_1, A_2, ..., A_T}, where A_i denotes the target image patch matrix in frame i, normalized in size to m × n;
3) Compute the covariance matrix of A (here t denotes the number of frames accumulated so far, so initially t = T):

G_t = \frac{1}{t}\sum_{i=1}^{t}(A_i - \bar{A}_t)^\top (A_i - \bar{A}_t),

where \bar{A}_t is the mean of the elements of A. Perform an eigenvalue decomposition (EVD) of G_t and take the eigenvectors corresponding to its q largest eigenvalues {λ_R^i} to form the right transformation matrix R_t ∈ R^{n×q}. Project the elements of A onto R_t: P_i = A_i R_t. Compute the covariance matrix of the projections

F_t = \frac{1}{t}\sum_{i=1}^{t}(P_i - \bar{P}_t)(P_i - \bar{P}_t)^\top, \quad \bar{P}_t = \bar{A}_t R_t,

which is equivalent to \frac{1}{t}\sum_{i=1}^{t}(A_i - \bar{A}_t) R_t R_t^\top (A_i - \bar{A}_t)^\top. Perform an EVD of F_t and take the eigenvectors corresponding to its p largest eigenvalues {λ_L^i} to form the left transformation matrix L_t ∈ R^{m×p}. The target subspace model is the subspace centered at \bar{A}_t and spanned by L_t and R_t (see the sketch after this step);
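A minimal numpy sketch of this initialization, assuming A is a (t, m, n) array of the tracked patches and p, q are the chosen numbers of left and right components (variable names are illustrative):

    import numpy as np

    def bi2dpca_init(A, p, q):
        t = A.shape[0]                                  # number of accumulated patches
        A_mean = A.mean(axis=0)                         # mean image \bar{A}_t
        D = A - A_mean                                  # centred patches

        # Right (column) covariance G_t, size n x n.
        G = sum(Di.T @ Di for Di in D) / t
        lam_R, V = np.linalg.eigh(G)                    # ascending eigenvalues
        lam_R, V = lam_R[::-1], V[:, ::-1]              # sort descending
        R = V[:, :q]                                    # right transformation matrix, n x q

        # Project and build the left (row) covariance F_t, size m x m.
        P = D @ R                                       # (t, m, q)
        F = sum(Pi @ Pi.T for Pi in P) / t
        lam_L, U = np.linalg.eigh(F)
        lam_L, U = lam_L[::-1], U[:, ::-1]
        L = U[:, :p]                                    # left transformation matrix, m x p

        return A_mean, L, R, G, F, lam_L, lam_R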
4) Input a new frame as the current frame and assume the current frame is frame t. Resample the particles of the previous frame in proportion to their weights {w_{t-1}^i}, and then apply the Gaussian motion model to obtain the particle state parameters {x_t^i} in the current frame, i.e. add a Gaussian-distributed random perturbation to each resampled particle. The Gaussian motion model is

p(x_t \mid x_{t-1}) = \mathcal{N}(x_t;\, x_{t-1},\, \Sigma_x),

where Σ_x is a diagonal matrix whose diagonal elements are the variances of the affine transformation parameters (a sketch of this step follows);
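A short sketch of the resampling and Gaussian perturbation, assuming particles is an (N, 6) array of affine parameters, weights sums to one, and sigma holds the per-parameter standard deviations corresponding to the diagonal of Σ_x:

    import numpy as np

    def propagate_particles(particles, weights, sigma, rng=np.random.default_rng()):
        N = particles.shape[0]
        idx = rng.choice(N, size=N, replace=True, p=weights)   # resample proportional to weights
        resampled = particles[idx]
        noise = rng.normal(0.0, sigma, size=resampled.shape)   # x_t ~ N(x_{t-1}, Sigma_x)
        return resampled + noise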
5) Obtain the appearance representation of the image patch corresponding to each particle x_t^i in the current frame, i.e. normalize it in size to an m × n image matrix A_t^i (one possible realisation of the warp is sketched below);
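One possible realisation of the warp, assuming OpenCV is available and each particle state is stored as a 2 × 3 affine matrix mapping patch coordinates to frame coordinates; the invention only requires that the patch be normalised to m × n, so the library choice and the state encoding here are assumptions:

    import cv2
    import numpy as np

    def particle_patch(frame_gray, affine_2x3, m=32, n=32):
        # Invert the patch-to-frame mapping and sample an m x n patch from the frame.
        inv = cv2.invertAffineTransform(affine_2x3.astype(np.float32))
        patch = cv2.warpAffine(frame_gray, inv, (n, m))        # dsize is (width, height)
        return patch.astype(np.float64)                        # appearance A_t^i of size m x n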
6) Explain the appearance representation A_t^i of the image patch corresponding to each particle x_t^i in the current frame by means of the appearance model of the target. The present invention uses a pair of coefficient matrices (E_i, e_i) to represent the appearance A_t^i of the candidate target; they satisfy the following optimization problem based on sparse representation:

(E_i, e_i) = \arg\min_{E, e} \left\| A_t^i - \bar{A}_t - L_t E R_t^\top - e \right\|_2^2 + \lambda \|e\|_1    (1)

where E ∈ R^{p×q}, e ∈ R^{m×n}, \bar{A}_t is the mean at the current time, L_t and R_t are the left and right transformation matrices of the current subspace model, and λ is a factor balancing the sparsity of the noise image e against the reconstruction error. The present invention solves the optimization problem (1) with an iterative algorithm. From (E_i, e_i), the probability p(A_t^i | x_t^i) of the visual similarity between the appearance representation A_t^i of the image patch corresponding to particle x_t^i and the target appearance model can be computed, and used as the new particle weight w_t^i, as

p(A_t^i \mid x_t^i) \propto \exp\!\left[-\frac{1}{2\sigma^2}\left\| A_t^i - \bar{A}_t - L_t E_i R_t^\top \right\|_F\right] \times \exp\!\left[-\frac{1}{2}\sum_{k=1}^{p}\sum_{l=1}^{q}\frac{E_i^2(k,l)}{\lambda_L^k + \lambda_R^l}\right]    (2)

where \sigma^2 = \sum_{i=p+1}^{m}\lambda_L^i + \sum_{i=q+1}^{n}\lambda_R^i, \lambda_R^1 \ge \lambda_R^2 \ge \dots \ge \lambda_R^n are the n eigenvalues of G_t, \lambda_L^1 \ge \lambda_L^2 \ge \dots \ge \lambda_L^m are the m eigenvalues of F_t (with G_t and F_t defined as in step 3), ‖·‖_F is the Frobenius norm of a matrix, and E_i(k, l) denotes the element in row k and column l of the coefficient matrix E_i. Using the MAP criterion, take the state of the particle with the largest weight in the current frame as the state estimate of the target for this frame, i.e. our tracking result for the current frame (a sketch of the solver and of this similarity is given after this step). If the current frame is the last frame, stop, otherwise continue;
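A sketch of step 6 under stated assumptions: L_t and R_t have orthonormal columns, lam_L and lam_R are the descending eigenvalues of F_t and G_t returned by the initialization sketch above, the value of lam is illustrative, and the alternating update below (exact E for fixed e, then soft-thresholding of the residual for e) is one reasonable realisation of the iterative algorithm mentioned for problem (1); the patent does not spell the solver out.

    import numpy as np

    def solve_sparse_coeffs(D, L, R, lam, n_iter=10):
        # Alternating minimisation of (1) with D = A_t^i - mean; returns (E, e).
        e = np.zeros_like(D)
        for _ in range(n_iter):
            E = L.T @ (D - e) @ R                      # least-squares E for fixed e (orthonormal L, R)
            r = D - L @ E @ R.T                        # residual not explained by the subspace
            e = np.sign(r) * np.maximum(np.abs(r) - lam / 2.0, 0.0)   # soft threshold from the l1 term
        return E, e

    def similarity(A_i, A_mean, L, R, lam_L, lam_R, lam=0.1):
        p, q = L.shape[1], R.shape[1]
        D = A_i - A_mean
        E, e = solve_sparse_coeffs(D, L, R, lam)
        sigma2 = lam_L[p:].sum() + lam_R[q:].sum()     # residual energy outside the subspace
        recon = np.linalg.norm(D - L @ E @ R.T, 'fro') # first factor of equation (2), as written
        denom = lam_L[:p, None] + lam_R[None, :q]      # lambda_L^k + lambda_R^l
        energy = 0.5 * np.sum(E ** 2 / denom)          # second factor of equation (2)
        return np.exp(-recon / (2.0 * sigma2)) * np.exp(-energy)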
7) Judge whether M frames have been tracked (M is the update frequency, 2 < M < 10). If so, go to step 8), otherwise go to step 4);
8) From these M tracking results obtain an increment matrix B = {A_{t+1}, A_{t+2}, ..., A_{t+M}}, in which each element is the appearance representation of the target image patch in a newly tracked frame, normalized in size to m × n;
9) Use the incremental Bi-2DPCA algorithm to update, with C = {C, B} (for the first update let C = {A, B}), the constructed target subspace representation model. Go to step 4).
The flow of the incremental Bi-2DPCA algorithm is as follows (a code sketch follows the listing):
Input: the target subspace model before the update: mean \bar{A}_t, left and right transformation matrices L_t, R_t, covariance matrices G_t and F_t; the set of newly tracked target image patches B = {A_{t+1}, A_{t+2}, ..., A_{t+M}} and its mean \bar{A}_M.
Output: the new target subspace model: mean \bar{A}_{t+M}, left and right transformation matrices L_{t+M}, R_{t+M}, covariance matrices G_{t+M} and F_{t+M}.
1. \bar{A}_{t+M} = \frac{1}{t+M}\left(t\cdot\bar{A}_t + M\cdot\bar{A}_M\right);
2. G_{t+M} = \frac{1}{t+M}\left[t\cdot G_t + M\cdot G_M + \frac{tM}{t+M}(\bar{A}_t-\bar{A}_M)^\top(\bar{A}_t-\bar{A}_M)\right],
   where G_M = \frac{1}{M}\sum_{i=t+1}^{t+M}(A_i-\bar{A}_M)^\top(A_i-\bar{A}_M);
3. Perform an eigenvalue decomposition of G_{t+M}, G_{t+M} = R\Sigma_R R^\top; the eigenvectors corresponding to its q largest eigenvalues form the right transformation matrix R_{t+M};
4. Project the elements of B onto R_{t+M}: P_i = A_i R_{t+M}, i = t+1, t+2, ..., t+M;
5. F_{t+M} \approx \frac{1}{t+M}\left[t\cdot F_t + M\cdot P_M + \frac{tM}{t+M}(\bar{A}_t-\bar{A}_M)R_{t+M}R_{t+M}^\top(\bar{A}_t-\bar{A}_M)^\top\right],
   where P_M = \frac{1}{M}\sum_{i=t+1}^{t+M}(A_i-\bar{A}_M)R_{t+M}R_{t+M}^\top(A_i-\bar{A}_M)^\top;
6. Perform an eigenvalue decomposition of F_{t+M}, F_{t+M} = L\Sigma_L L^\top; the eigenvectors corresponding to its p largest eigenvalues form the left transformation matrix L_{t+M}.
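A numpy sketch that mirrors steps 1 to 6 of the incremental algorithm, assuming B is an (M, m, n) array of the newly tracked patches and t is the number of samples already folded into the model; it is a reading aid, not the authoritative implementation:

    import numpy as np

    def incremental_bi2dpca(A_mean_t, G_t, F_t, t, B, p, q):
        M = B.shape[0]
        B_mean = B.mean(axis=0)                                       # \bar{A}_M
        # 1. updated mean
        A_mean_new = (t * A_mean_t + M * B_mean) / (t + M)
        # 2. updated right (column) covariance
        dB = B - B_mean
        G_M = sum(d.T @ d for d in dB) / M
        dmu = A_mean_t - B_mean
        G_new = (t * G_t + M * G_M + (t * M / (t + M)) * dmu.T @ dmu) / (t + M)
        # 3. eigenvectors of the q largest eigenvalues of G_new
        lam_R, V = np.linalg.eigh(G_new)
        lam_R, V = lam_R[::-1], V[:, ::-1]
        R_new = V[:, :q]
        # 4./5. project the new patches and update the left (row) covariance (approximation)
        P_M = sum(d @ R_new @ R_new.T @ d.T for d in dB) / M
        F_new = (t * F_t + M * P_M + (t * M / (t + M)) * dmu @ R_new @ R_new.T @ dmu.T) / (t + M)
        # 6. eigenvectors of the p largest eigenvalues of F_new
        lam_L, U = np.linalg.eigh(F_new)
        lam_L, U = lam_L[::-1], U[:, ::-1]
        L_new = U[:, :p]
        return A_mean_new, L_new, R_new, G_new, F_new, lam_L, lam_R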

Claims (1)

1. An online target tracking method based on incremental Bi-2DPCA learning and sparse representation, characterized in that the steps are as follows:
Step 1: mark the target x_1 in the first frame, where x_1 is the affine transformation parameter of the target image patch in the first frame; initialize N particles and their weights {x_1^i, w_1^i}_{i=1}^{N};
Step 2: track the target in the first T frames with a classical particle filter algorithm to obtain the initial target sample set A = {A_1, A_2, ..., A_T}, where A_i denotes the target image patch matrix in frame i, normalized in size to m × n;
Step 3: apply Bi-2DPCA to the elements of A to obtain the following initial target subspace model:
Ā ∈ R^{m×n}: the mean image of the elements of A;
L ∈ R^{m×p}: the left transformation matrix, with orthogonal column vectors;
R ∈ R^{n×q}: the right transformation matrix, with orthogonal column vectors;
Step 4: input a new frame as the current frame and assume the current frame is frame t; resample the particles of the previous frame in proportion to their weights {w_{t-1}^i}, and then apply the Gaussian motion model to obtain the particles {x_t^i} of the current frame;
Step 5: obtain the appearance representation of the image patch corresponding to each particle x_t^i in the current frame, i.e. normalize it in size to an m × n image matrix A_t^i;
Step 6: compute the probability p(A_t^i | x_t^i) of the visual similarity between the appearance representation A_t^i of the image patch corresponding to each particle x_t^i and the target appearance model (this model consists of the Bi-2DPCA-based subspace model embedded in the sparse representation framework), and use this value as the new weight w_t^i of particle x_t^i; then, using the maximum a posteriori (MAP) criterion, take the state of the particle with the largest weight in the current frame as the state estimate of the target for this frame, i.e. the tracking result for the current frame; if the current frame is the last frame, stop, otherwise continue;
Step 7: judge whether M frames have been tracked; if so, go to step 8, otherwise go to step 4; M is the update frequency, 2 < M < 10;
Step 8: from these M tracking results obtain an increment matrix B = {A_{t+1}, A_{t+2}, ..., A_{t+M}}, in which each element is the appearance representation of the target image patch in a newly tracked frame, normalized in size to m × n;
Step 9: use the incremental Bi-2DPCA algorithm to update, with C = {C, B} (for the first update let C = {A, B}), the constructed Bi-2DPCA-based target subspace representation model; go to step 4.
CN201310386076.8A 2013-08-29 2013-08-29 Online target tracking method based on incremental Bi-2DPCA learning and sparse representation Active CN103473790B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310386076.8A CN103473790B (en) 2013-08-29 2013-08-29 Online target tracking method based on incremental Bi-2DPCA learning and sparse representation

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201310386076.8A CN103473790B (en) 2013-08-29 2013-08-29 Online target tracking method based on incremental Bi-2DPCA learning and sparse representation

Publications (2)

Publication Number Publication Date
CN103473790A true CN103473790A (en) 2013-12-25
CN103473790B CN103473790B (en) 2016-05-25

Family

ID=49798624

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310386076.8A Active CN103473790B (en) 2013-08-29 2013-08-29 Online target tracking method based on incremental Bi-2DPCA learning and sparse representation

Country Status (1)

Country Link
CN (1) CN103473790B (en)

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103646407A (en) * 2013-12-26 2014-03-19 中国科学院自动化研究所 Video target tracking method based on ingredient and distance relational graph
CN104331909A (en) * 2014-11-21 2015-02-04 中国矿业大学(北京) Gradient features based method of tracking video targets in dark environment in real time
CN104880708A (en) * 2015-01-30 2015-09-02 西北工业大学 Tracking method for variable number of maneuvering target
CN104899896A (en) * 2015-06-12 2015-09-09 西北工业大学 Multi-task learning target tracking method based on subspace characteristics
CN104933733A (en) * 2015-06-12 2015-09-23 西北工业大学 Target tracking method based on sparse feature selection
CN105095864A (en) * 2015-07-16 2015-11-25 西安电子科技大学 Aurora image detection method based on deep learning two-dimensional principal component analysis network
CN105635808A (en) * 2015-12-31 2016-06-01 电子科技大学 Video splicing method based on Bayesian theory
CN106327515A (en) * 2015-06-17 2017-01-11 南京理工大学 Moving object tracking method based on 2DPCA (Two-dimensional Principal Component Analysis)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7724960B1 (en) * 2006-09-08 2010-05-25 University Of Central Florida Research Foundation Inc. Recognition and classification based on principal component analysis in the transform domain
CN103093431A (en) * 2013-01-25 2013-05-08 西安电子科技大学 Compressed sensing reconstruction method based on principal component analysis (PCA) dictionary and structural priori information

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7724960B1 (en) * 2006-09-08 2010-05-25 University Of Central Florida Research Foundation Inc. Recognition and classification based on principal component analysis in the transform domain
CN103093431A (en) * 2013-01-25 2013-05-08 西安电子科技大学 Compressed sensing reconstruction method based on principal component analysis (PCA) dictionary and structural priori information

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
戚培庆等: ""基于双向二维主成分分析的运动目标跟踪"", 《计算机工程与应用》, 29 March 2013 (2013-03-29), pages 156 - 158 *
杨大为等: ""基于粒子滤波与稀疏表达的目标跟踪方法"", 《模式识别与人工智能》, vol. 26, no. 7, 15 July 2013 (2013-07-15), pages 681 - 683 *

Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103646407B (en) * 2013-12-26 2016-06-22 中国科学院自动化研究所 A kind of video target tracking method based on composition distance relation figure
CN103646407A (en) * 2013-12-26 2014-03-19 中国科学院自动化研究所 Video target tracking method based on ingredient and distance relational graph
CN104331909A (en) * 2014-11-21 2015-02-04 中国矿业大学(北京) Gradient features based method of tracking video targets in dark environment in real time
CN104880708A (en) * 2015-01-30 2015-09-02 西北工业大学 Tracking method for variable number of maneuvering target
CN104880708B (en) * 2015-01-30 2017-07-04 西北工业大学 A kind of variable number maneuvering target tracking method
CN104899896A (en) * 2015-06-12 2015-09-09 西北工业大学 Multi-task learning target tracking method based on subspace characteristics
CN104933733A (en) * 2015-06-12 2015-09-23 西北工业大学 Target tracking method based on sparse feature selection
CN104899896B (en) * 2015-06-12 2018-03-02 西北工业大学 Multi-task learning target tracking method based on subspace characteristics
CN106327515A (en) * 2015-06-17 2017-01-11 南京理工大学 Moving object tracking method based on 2DPCA (Two-dimensional Principal Component Analysis)
CN106327515B (en) * 2015-06-17 2019-04-12 南京理工大学 A kind of motion target tracking method based on 2DPCA
CN105095864A (en) * 2015-07-16 2015-11-25 西安电子科技大学 Aurora image detection method based on deep learning two-dimensional principal component analysis network
CN105095864B (en) * 2015-07-16 2018-04-17 西安电子科技大学 Aurora image detecting method based on deep learning two-dimensional principal component analysis network
CN105635808A (en) * 2015-12-31 2016-06-01 电子科技大学 Video splicing method based on Bayesian theory
CN105635808B (en) * 2015-12-31 2018-10-19 电子科技大学 A kind of video-splicing method based on bayesian theory

Also Published As

Publication number Publication date
CN103473790B (en) 2016-05-25

Similar Documents

Publication Publication Date Title
CN103473790A (en) Online target tracking method based on increment bilateral two-dimensional principal component analysis (Bi-2DPCA) learning and sparse representation
Qian et al. PUGeo-Net: A geometry-centric network for 3D point cloud upsampling
Ngo et al. A study on moving mesh finite element solution of the porous medium equation
CN106056628A (en) Target tracking method and system based on deep convolution nerve network feature fusion
CN103136520B (en) The form fit of Based PC A-SC algorithm and target identification method
CN109461172A (en) Manually with the united correlation filtering video adaptive tracking method of depth characteristic
CN101629966B (en) Particle image velocimetry (PIV) processing method
CN105741316A (en) Robust target tracking method based on deep learning and multi-scale correlation filtering
Shao et al. Integral invariants for space motion trajectory matching and recognition
CN107067410B (en) Manifold regularization related filtering target tracking method based on augmented samples
CN103310463B (en) Based on the online method for tracking target of Probabilistic Principal Component Analysis and compressed sensing
CN103440512A (en) Identifying method of brain cognitive states based on tensor locality preserving projection
CN105006003A (en) Random projection fern based real-time target tracking algorithm
CN101271520A (en) Method and device for confirming characteristic point position in image
CN104751493A (en) Sparse tracking method on basis of gradient texture features
CN103810755A (en) Method for reconstructing compressively sensed spectral image based on structural clustering sparse representation
CN101916433A (en) Denoising method of strong noise pollution image on basis of partial differential equation
CN105023013A (en) Target detection method based on local standard deviation and Radon transformation
Kwon et al. Visual tracking via particle filtering on the affine group
Rui et al. Object tracking using particle filter in the wavelet subspace
CN104899896A (en) Multi-task learning target tracking method based on subspace characteristics
CN103955951A (en) Fast target tracking method based on regularization templates and reconstruction error decomposition
Guo et al. Robust low-rank subspace segmentation with finite mixture noise
CN103514600A (en) Method for fast robustness tracking of infrared target based on sparse representation
Wang et al. Deep nrsfm++: Towards unsupervised 2d-3d lifting in the wild

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20171228

Address after: 401220 Fengcheng Pioneer Street, Changshou District, Chongqing City, No. 19

Patentee after: Wu Xiaoze

Address before: 710072 Xi'an friendship West Road, Shaanxi, No. 127

Patentee before: Northwestern Polytechnical University

TR01 Transfer of patent right

Effective date of registration: 20180904

Address after: 518000 Guangdong Shenzhen Longhua New District big wave street Longsheng community Tenglong road gold rush e-commerce incubation base exhibition hall E commercial block 706

Patentee after: Shenzhen step Technology Transfer Center Co., Ltd.

Address before: 401220 Fengcheng Pioneer Street, Changshou District, Chongqing City, No. 19

Patentee before: Wu Xiaoze

TR01 Transfer of patent right

Effective date of registration: 20191126

Address after: 226100 No. 40 East China Road, Sanchang Street, Haimen City, Nantong City, Jiangsu Province

Patentee after: Nantong Mega new Mstar Technology Ltd

Address before: 518000 Electronic Commerce Incubation Base of Tenglong Road Gold Rush, Longhua Street, Longhua New District, Shenzhen City, Guangdong Province

Patentee before: Shenzhen step Technology Transfer Center Co., Ltd.

TR01 Transfer of patent right