CN102592135A - Visual tracking method of subspace fusing target space distribution and time sequence distribution characteristics


Publication number
CN102592135A
CN102592135A (application CN201110425493XA)
Authority
CN
China
Prior art keywords
subspace
target
split
matrix
visual tracking
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201110425493XA
Other languages
Chinese (zh)
Other versions
CN102592135B (en)
Inventor
张笑钦
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wenzhou University
Original Assignee
Wenzhou University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wenzhou University filed Critical Wenzhou University
Priority to CN 201110425493 priority Critical patent/CN102592135B/en
Publication of CN102592135A publication Critical patent/CN102592135A/en
Application granted granted Critical
Publication of CN102592135B publication Critical patent/CN102592135B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Image Analysis (AREA)

Abstract

The invention discloses a subspace visual tracking algorithm that fuses target spatial distribution and temporal distribution characteristics. The method mainly comprises the following steps: first, target region features are developed from three angles and subspace learning is performed on each; second, an update strategy is applied to update the mean and subspace of the three modes online; finally, the three subspace modes are integrated through tensor reconstruction into a likelihood function used to evaluate candidate detection images. The visual tracking method achieves effective target tracking and is a general method. Experimental results show that, compared with other classical subspace tracking algorithms, the method is more effective and robust, and has good application prospects.

Description

Visual tracking method fusing target spatial distribution and temporal distribution feature subspaces
Technical field
The present invention relates to the field of computer vision, and in particular to a visual tracking method that fuses target spatial distribution and temporal distribution feature subspaces.
Background technology
Target tracking is a very important research problem in the field of computer vision, since it is the basis of higher-level vision problems such as motion analysis and behavior recognition. Generally speaking, current target tracking algorithms face two key problems: (1) the appearance model; (2) the tracking framework.
In general, the appearance model concerns how to represent the target object effectively and how to update the representation in real time. Template matching is the most direct target modeling method, but this model lacks discriminability and robustness. A color histogram of the target region, although fairly robust to target scale, rotation and non-rigid deformation, has certain defects because it ignores the spatial distribution of the target's apparent colors. Appearance models based on kernel density estimation remedy this defect well, but at the cost of increased computational and storage complexity. In addition, appearance models based on conditional random fields model the internal relations between neighboring pixels through a Markov random field, but their training cost is very large.
In recent years, appearance models based on subspace learning, thanks to their reasonable subspace-constancy assumption, have been widely applied in the field of visual tracking. However, such models require abundant samples for training, so in practice it is difficult to meet real-time requirements. To address this, Levy and Lindenbaum proposed the sequential Karhunen-Loeve (SKL) algorithm for incrementally learning the eigenbasis of images. Lim et al. extended the SKL algorithm to incrementally update both the mean and the eigenbasis of the target images, and first applied the algorithm to the visual tracking of targets. Subsequently, robust estimation strategies, Yang's data-driven reinforced adaptive method, Liao's tracking based on robust Kalman filtering, and Gai and Stevenson's method based on dynamic models achieved good tracking performance in certain specific scenes, but all share a shortcoming: these subspace-based tracking algorithms first reshape the image into a one-dimensional vector, so the spatial distribution information of the target's appearance is almost completely lost, which makes the models very sensitive to global appearance changes of the target and to noise. Against this shortcoming, Hu et al. introduced tensor ideas, which were effective to some extent; however, because the R-SVD update keeps only the eigenvectors corresponding to the R largest eigenvalues, it introduces an error, and as tracking proceeds this error gradually accumulates and causes model drift. Although a model based on dynamic tensor analysis avoids this error and obtains more accurate results, the small-sample problem prevents the computed covariance matrix from describing the distribution of the samples, so the computation of the subspace degenerates.
Summary of the invention
To overcome the fact that traditional appearance models based on subspace learning attend only to the temporal correlation between target region features while ignoring the spatial distribution information of the target region, the present invention provides a visual tracking method that fuses target spatial distribution and temporal distribution feature subspaces; compared with other classical subspace tracking algorithms, this tracking method is more effective and robust.
To achieve this goal, the present invention adopts the following technical scheme:
A visual tracking method fusing target spatial distribution and temporal distribution feature subspaces comprises the following steps:
(1) Using tensor theory, the training samples of target image observations are represented as a third-order tensor, and this tensor is unfolded into three mode (unfolding) matrices along three different directions: the width of the image, the height of the image, and the image time axis;
wherein the column sample subspaces of the first two mode matrices correspond to the spatial distribution information of the target, and the row sample subspace of the third mode matrix corresponds to the temporal distribution information of the target;
(2) For the first two mode matrices, the sample covariance matrix of each is computed, and the mode subspaces of the target in the horizontal and vertical directions are obtained by eigendecomposition of the covariance matrices; for the third mode matrix, the mode subspace of the target along the time axis is obtained with a singular value decomposition algorithm;
(3) For a given target candidate region, its reconstruction error in the target appearance subspaces is computed, and the observation likelihood function of the target candidate region is thereby defined;
(4) According to the defined observation likelihood function, the posterior probability of the target state is computed with a particle filter, and a maximum a posteriori estimation strategy is adopted to select, from a series of candidate observations, the state with the largest posterior probability as the target state at the current time;
(5) Using the tracking result, the target mode subspaces obtained in step (2) are updated, with a forgetting factor added to the update;
wherein the forgetting factor is a weight between 0 and 1, and a larger forgetting factor means the subspace model attends more to recent data.
Further, step (1) specifically comprises the following substeps:
first, representing the training samples of target image observations as a third-order tensor;
second, unfolding the resulting tensor into three mode matrices along the three different directions of width, height and time.
Further, step (2) specifically comprises the following substeps:
first, for the first two mode matrices, computing the sample covariance matrix of each, and obtaining the subspace of each mode by eigendecomposition of the covariance matrices;
second, for the third mode matrix, obtaining the mode subspace of the target along the time axis with a singular value decomposition algorithm.
Further, step (3) specifically comprises the following substeps:
first, obtaining the reconstruction error;
second, defining the likelihood function from the reconstruction error.
Further, step (4) specifically comprises the following substeps:
first, assuming that the motion of the target between two consecutive frames is affine, so that the state of the target is characterized by the affine motion parameters between the two frames, and assuming that the dynamic state-transition probability model of the target is Gaussian;
second, computing the posterior probability of the target state with the likelihood function;
finally, adopting a maximum a posteriori estimation strategy to select, from a series of candidate observations, the state with the largest posterior probability as the target state at the current time.
Further, step (5) specifically comprises the following substeps:
for the first two mode matrices:
first, updating the covariance matrices of the sample matrices obtained in step (2);
second, adding a forgetting factor to the update equation to obtain covariance matrices that better match reality;
for the third mode matrix:
first, performing a QR decomposition of the newly obtained data matrix;
second, performing a singular value decomposition of the data matrix merging the new and old data.
The beneficial effects of the invention are:
1. The proposed tracking algorithm is a general method and is suitable for any type of tracking target;
2. The adopted visual tracking algorithm fusing target spatial distribution and temporal distribution feature subspaces fully exploits the spatial distribution information in the vertical and horizontal directions of the target region and the temporal correlation between target region features, making the algorithm less sensitive than classical algorithms to global appearance changes of the target and to noise;
3. The invention proposes an update strategy that, for the two spatial modes in the horizontal and vertical directions, updates the spatial modes of the target distribution online by updating the covariance matrices, effectively improving update efficiency;
4. The invention proposes an update strategy that, for the temporal mode, updates the temporal mode of the target distribution online with an incremental singular value decomposition method, which not only solves the subspace degeneration problem arising when the number of samples is much smaller than the sample dimension, but also alleviates to some extent the model drift introduced in the computation.
Description of drawings
The present invention is further described below with reference to the accompanying drawings and embodiments.
Fig. 1 is the overall framework of the tracker of the present invention;
Fig. 2 is a schematic diagram of the decomposition of an image observation of the present invention;
Fig. 3 is a schematic diagram of the unfolding of a third-order tensor of the present invention.
Embodiment
The present invention is described concretely below through embodiments, which serve only to further illustrate the invention and cannot be interpreted as limiting its scope of protection; a person skilled in the art can make some nonessential improvements and adjustments to the invention in light of the above disclosure.
As shown in Fig. 1, Fig. 1 is the overall framework of the present invention. The present invention is a target tracking method based on multi-angle subspace fusion; the hardware and programming language for running the method are not limited, and it can be implemented in any language, so other modes of operation are not repeated here.
The embodiment of the invention uses a Pentium 4 computer with a 3.2 GHz central processing unit and 1 GB of memory, with the working program of the tracking framework written in the Matlab language, to realize the method of the present invention. The visual tracking method fusing target spatial distribution and temporal distribution feature subspaces of the present invention comprises the following steps.
The computation comprises modules for tensor subspace unfolding, covariance matrix updating, the incremental R-SVD singular value decomposition algorithm, the Condensation particle filter, and maximum a posteriori (MAP) estimation; the concrete steps are described below:
(1) The training samples of target image observations are represented as a third-order tensor $\mathcal{A} \in \mathbb{R}^{l_1 \times l_2 \times l_3}$, where $l_1$, $l_2$ and $l_3$ denote the width and height of the image observations and their length along the time axis, respectively. According to tensor theory, this third-order image observation tensor can be unfolded along the $l_1$, $l_2$ and $l_3$ directions into three mode matrices $A_{(1)}$, $A_{(2)}$ and $A_{(3)}$, corresponding to Fig. 2.
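The three-way unfolding of the observation tensor can be sketched in a few lines of NumPy (the patent's embodiment uses Matlab; this Python version, including the tensor sizes, is an illustrative assumption, not the original program):

```python
import numpy as np

# A third-order tensor of image observations: width l1, height l2, time length l3.
l1, l2, l3 = 4, 3, 5
A = np.arange(l1 * l2 * l3, dtype=float).reshape(l1, l2, l3)

def unfold(tensor, mode):
    """Mode-n unfolding: bring axis `mode` to the front and flatten the rest."""
    return np.moveaxis(tensor, mode, 0).reshape(tensor.shape[mode], -1)

A1 = unfold(A, 0)  # l1 x (l2*l3): columns are samples along the width direction
A2 = unfold(A, 1)  # l2 x (l1*l3): columns are samples along the height direction
A3 = unfold(A, 2)  # l3 x (l1*l2): each row is one vectorized frame
```

Each unfolding keeps one axis intact and flattens the other two, which is what lets the three mode subspaces below be learned with ordinary matrix decompositions.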
(2) The column sample covariance matrices of $A_{(1)}$ and $A_{(2)}$ and the row sample covariance matrix of $A_{(3)}$ are computed respectively as:

$$C_1 = \sum_i \bigl(A_{(1)}^{i} - \mu_1\bigr)\bigl(A_{(1)}^{i} - \mu_1\bigr)^{T}$$
$$C_2 = \sum_j \bigl(A_{(2)}^{j} - \mu_2\bigr)\bigl(A_{(2)}^{j} - \mu_2\bigr)^{T}$$
$$C_3 = \sum_k \bigl(A_{(3)}'^{k} - \mu_3\bigr)\bigl(A_{(3)}'^{k} - \mu_3\bigr)^{T}$$

where $A_{(1)}^{i}$, $A_{(2)}^{j}$ and $A_{(3)}'^{k}$ are the $i$-th, $j$-th and $k$-th column samples of the matrices $A_{(1)}$, $A_{(2)}$ and $A_{(3)}'$, respectively; $\mu_1$ and $\mu_2$ are the column means of the matrices $A_{(1)}$ and $A_{(2)}$, and $\mu_3$ is the column mean of $A_{(3)}'$, i.e. the row mean of the matrix $A_{(3)}$. After the sample covariance matrices are obtained, the subspace of each mode can be obtained by a simple eigendecomposition of the covariance matrix:

$$C_d = U_d S_d U_d^{T}, \qquad d = 1, 2, 3$$
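A minimal NumPy sketch of this step (the matrix sizes, and the names A1, A2, A3 for the three unfolding matrices, are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
# Illustrative unfolding matrices of a small observation tensor (assumed sizes):
# columns of A1/A2 are spatial samples, rows of A3 are vectorized frames.
A1 = rng.random((4, 15))
A2 = rng.random((3, 20))
A3 = rng.random((5, 12))

def mode_subspace(samples):
    """Column-sample covariance C = sum_i (x_i - mu)(x_i - mu)^T and C = U S U^T."""
    mu = samples.mean(axis=1, keepdims=True)
    centered = samples - mu
    C = centered @ centered.T
    S, U = np.linalg.eigh(C)          # eigh returns ascending eigenvalues
    order = np.argsort(S)[::-1]       # sort so the leading basis vectors come first
    return mu, U[:, order], S[order]

mu1, U1, S1 = mode_subspace(A1)       # horizontal (width) mode subspace
mu2, U2, S2 = mode_subspace(A2)       # vertical (height) mode subspace
mu3, U3, S3 = mode_subspace(A3.T)     # time mode: the row samples of A3
```

The patent obtains the third mode subspace with a singular value decomposition of the row samples; since the left singular vectors of the centered row samples coincide with the eigenvectors of their covariance, this sketch uses the eigendecomposition throughout for brevity.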
(3) For a given target candidate region $O_t$ and its expansion in vector form $v_t$, the reconstruction errors of the target candidate region in the target appearance subspaces can be computed as follows:

$$RE_1 = \|(O_t - u_1) - (O_t - u_1)U_1U_1^{T}\|^2$$
$$RE_2 = \|(O_t' - u_2) - (O_t' - u_2)U_2U_2^{T}\|^2$$
$$RE_3 = \|(v_t - u_3) - (v_t - u_3)U_3U_3^{T}\|^2$$
$$RE = RE_1 + RE_2 + RE_3$$

where $O_t'$ denotes the transposed candidate region, and $u_1$ and $u_2$ are the mean matrices of the first two modes, defined in terms of the mode means $\mu_1$ and $\mu_2$. Therefore, the observation likelihood function of the target candidate region can be defined as:

$$p(o_t \mid x_t) \propto \exp(-RE)$$
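A hedged NumPy sketch of the reconstruction-error likelihood (the subspace dimensions, the random bases, and the use of simple mean matrices for u1, u2, u3 are assumptions made for illustration only):

```python
import numpy as np

rng = np.random.default_rng(1)
l1, l2, r = 4, 3, 2

# Illustrative orthonormal mode bases and mean terms (assumed shapes).
U1 = np.linalg.qr(rng.random((l2, r)))[0]        # horizontal mode basis
U2 = np.linalg.qr(rng.random((l1, r)))[0]        # vertical mode basis
U3 = np.linalg.qr(rng.random((l1 * l2, r)))[0]   # time mode basis
u1 = rng.random((l1, l2))                        # mean images (assumed)
u2 = rng.random((l2, l1))
u3 = rng.random(l1 * l2)

def recon_error(x, u, U):
    """Squared norm of the residual after projecting (x - u) onto span(U)."""
    d = x - u
    return np.linalg.norm(d - d @ (U @ U.T)) ** 2

O_t = rng.random((l1, l2))                       # candidate region
v_t = O_t.reshape(-1)                            # vectorized form

RE = (recon_error(O_t, u1, U1)                   # RE_1
      + recon_error(O_t.T, u2, U2)               # RE_2: transposed region
      + recon_error(v_t, u3, U3))                # RE_3
likelihood = np.exp(-RE)                         # p(o_t | x_t) proportional to exp(-RE)
```

A candidate whose appearance lies close to the learned subspaces has a small residual and hence a likelihood near 1, which is what the particle filter in the next step exploits.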
(4) The Condensation particle filter is used to compute the posterior probability of the target state, as follows. The motion of the target between two consecutive frames is assumed to be affine, so the affine motion parameters between the two frames can be used to characterize the target state

$$x_t = (t_x, t_y, \eta_t, s_t, \beta_t, \phi_t),$$

where $t_x$, $t_y$, $\eta_t$, $s_t$, $\beta_t$ and $\phi_t$ denote the horizontal and vertical translation, the rotation angle, the scale, the aspect ratio and the skew direction, respectively. Then, given a series of observations $O_t = \{o_1, \ldots, o_t\}$, the posterior probability of the target state can be computed as:

$$p(x_t \mid O_t) \propto p(o_t \mid x_t) \int p(x_t \mid x_{t-1})\, p(x_{t-1} \mid O_{t-1})\, dx_{t-1}$$

where $p(o_t \mid x_t)$ is the probability of the observation $o_t$ occurring under the given target state $x_t$, and $p(x_t \mid x_{t-1})$ is the dynamic state-transition probability model. A maximum a posteriori (MAP) estimation strategy is adopted to select, from a series of candidate observations, the state with the largest posterior probability as the target state at the current time.
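One Condensation-style filtering step with a MAP read-out can be sketched as follows (the Gaussian noise scales and the toy likelihood are assumptions; the actual method scores each particle with the subspace reconstruction error of the image patch it selects):

```python
import numpy as np

rng = np.random.default_rng(2)
N = 200                                   # number of particles

# State: (tx, ty, rotation, scale, aspect ratio, skew), as in the patent.
x_prev = np.tile(np.array([0.0, 0.0, 0.0, 1.0, 1.0, 0.0]), (N, 1))
w_prev = np.full(N, 1.0 / N)              # previous particle weights

sigma = np.array([2.0, 2.0, 0.02, 0.01, 0.01, 0.01])  # assumed noise scales

def toy_likelihood(states):
    """Stand-in for p(o_t|x_t) = exp(-RE): prefers states near a 'true' target."""
    true_state = np.array([1.0, -1.0, 0.0, 1.0, 1.0, 0.0])
    return np.exp(-np.sum((states - true_state) ** 2, axis=1))

# 1. Resample particles according to the previous weights.
idx = rng.choice(N, size=N, p=w_prev)
# 2. Propagate through the Gaussian dynamic model p(x_t | x_{t-1}).
x_t = x_prev[idx] + rng.normal(scale=sigma, size=(N, 6))
# 3. Weight each particle by the observation likelihood and normalize.
w_t = toy_likelihood(x_t)
w_t /= w_t.sum()
# 4. MAP estimate: the particle with the largest posterior weight.
x_map = x_t[np.argmax(w_t)]
```

Because the posterior is represented by weighted samples, the MAP read-out reduces to picking the best-weighted particle rather than evaluating the integral explicitly.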
(5) According to the target tracking result, the covariance matrices of the sample matrices of the first two modes are updated, with a forgetting factor added to the update. The covariance update formula can be expressed as:

$$C_d \leftarrow \lambda C_d + \bigl(X_{(d)} - u_d'\bigr)\bigl(X_{(d)} - u_d'\bigr)^{T} + \frac{mn}{m+n}\bigl(\mu_d - \mu_d'\bigr)\bigl(\mu_d - \mu_d'\bigr)^{T}, \qquad d = 1, 2$$

where $C_d$ is the current sample covariance matrix of mode $d$ and $X_{(d)}$ is the new sample data; $\lambda$ is the forgetting factor, with $\lambda \in [0, 1]$ in the experiments, which gives recent data a larger weight in the subspace model, meaning the model attends more to recent data. After the sample covariance matrices of all modes have been updated, recomputing the eigenstructure of each covariance matrix realizes the online update of the subspace model.
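The update rule of step (5) can be sketched in NumPy (the matrix sizes and the sample counts m and n are illustrative assumptions; with the forgetting factor set to 1 the rule reduces to the exact batch merge of two scatter matrices, which the sketch relies on):

```python
import numpy as np

rng = np.random.default_rng(3)
d, m, n = 4, 30, 10                 # feature dim, old sample count, new sample count
lam = 0.95                          # forgetting factor, lambda in [0, 1]

old = rng.random((d, m))            # previously seen column samples
new = rng.random((d, n))            # new column samples X_(d)

mu_old = old.mean(axis=1, keepdims=True)
C_old = (old - mu_old) @ (old - mu_old).T

mu_new = new.mean(axis=1, keepdims=True)

# C_d <- lam*C_d + scatter of the centered new data
#        + mn/(m+n) * (mu_d - mu_d')(mu_d - mu_d')^T
C_upd = (lam * C_old
         + (new - mu_new) @ (new - mu_new).T
         + (m * n / (m + n)) * (mu_old - mu_new) @ (mu_old - mu_new).T)

# The refreshed subspace follows from re-eigendecomposing the updated covariance.
S, U = np.linalg.eigh(C_upd)
```

The mean-difference term is what makes the merge exact: it accounts for the shift between the old and new sample means, so no raw data needs to be stored between updates.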
(6) According to the target tracking result, the R-SVD algorithm is applied to the third mode matrix, thereby updating this mode subspace online. The concrete algorithm is as follows.

First, a QR decomposition is performed on the $k$ new target appearance data $E = \{I_{t+1}, \ldots, I_{t+k}\}$, yielding the orthogonal basis $\tilde{E}$; in addition, let $U' = (U \mid \tilde{E})$ and $A' = (A \mid E)$.

Second, let
$$V' = \begin{pmatrix} V & 0 \\ 0 & M_k \end{pmatrix},$$
where $M_k$ is the $k \times k$ identity matrix. Then (since $\tilde{E}$ is orthogonal to $U$, $\tilde{E}^{T}AV = \tilde{E}^{T}U\Sigma = 0$):

$$\Sigma' = U'^{T} A' V' = \begin{pmatrix} U^{T} \\ \tilde{E}^{T} \end{pmatrix} (A \mid E) \begin{pmatrix} V & 0 \\ 0 & M_k \end{pmatrix} = \begin{pmatrix} U^{T}AV & U^{T}E \\ \tilde{E}^{T}AV & \tilde{E}^{T}E \end{pmatrix} = \begin{pmatrix} \Sigma & U^{T}E \\ 0 & \tilde{E}^{T}E \end{pmatrix}$$

Finally, a singular value decomposition of $\Sigma'$ is computed, i.e. $\Sigma' = \tilde{U}\tilde{\Sigma}\tilde{V}^{T}$; the singular value decomposition of the data matrix $A' = U'\Sigma'V'^{T}$ can therefore be expressed as:

$$A' = U'\bigl(\tilde{U}\tilde{\Sigma}\tilde{V}^{T}\bigr)V'^{T} = \bigl(U'\tilde{U}\bigr)\,\tilde{\Sigma}\,\bigl(\tilde{V}^{T}V'^{T}\bigr) = \hat{U}\hat{\Sigma}\hat{V}^{T}$$

Claims (6)

1. A visual tracking method fusing target spatial distribution and temporal distribution feature subspaces, characterized in that it comprises the following steps:
(1) using tensor theory, representing the training samples of target image observations as a third-order tensor, and unfolding this tensor into three mode matrices along three different directions: the width of the image, the height of the image, and the image time axis;
wherein the column sample subspaces of the first two mode matrices correspond to the spatial distribution information of the target, and the row sample subspace of the third mode matrix corresponds to the temporal distribution information of the target;
(2) for the first two mode matrices, computing the sample covariance matrix of each, and obtaining the mode subspaces of the target in the horizontal and vertical directions by eigendecomposition of the covariance matrices; for the third mode matrix, obtaining the mode subspace of the target along the time axis with a singular value decomposition algorithm;
(3) for a given target candidate region, computing its reconstruction error in the target appearance subspaces, and thereby defining the observation likelihood function of the target candidate region;
(4) according to the defined observation likelihood function, computing the posterior probability of the target state with a particle filter, and adopting a maximum a posteriori estimation strategy to select, from a series of candidate observations, the state with the largest posterior probability as the target state at the current time;
(5) using the tracking result, updating the target mode subspaces obtained in step (2), with a forgetting factor added to the update;
wherein the forgetting factor is a weight between 0 and 1, and a larger forgetting factor means the subspace model attends more to recent data.
2. The visual tracking method fusing target spatial distribution and temporal distribution feature subspaces according to claim 1, characterized in that step (1) specifically comprises the following substeps:
first, representing the training samples of target image observations as a third-order tensor;
second, unfolding the resulting tensor into three mode matrices along the three different directions of width, height and time.
3. The visual tracking method fusing target spatial distribution and temporal distribution feature subspaces according to claim 1, characterized in that step (2) specifically comprises the following substeps:
first, for the first two mode matrices, computing the sample covariance matrix of each, and obtaining the subspace of each mode by eigendecomposition of the covariance matrices;
second, for the third mode matrix, obtaining the mode subspace of the target along the time axis with a singular value decomposition algorithm.
4. The visual tracking method fusing target spatial distribution and temporal distribution feature subspaces according to claim 1, characterized in that step (3) specifically comprises the following substeps:
first, obtaining the reconstruction error;
second, defining the likelihood function from the reconstruction error.
5. The visual tracking method fusing target spatial distribution and temporal distribution feature subspaces according to claim 1, characterized in that step (4) specifically comprises the following substeps:
first, assuming that the motion of the target between two consecutive frames is affine, so that the state of the target is characterized by the affine motion parameters between the two frames, and assuming that the dynamic state-transition probability model of the target is Gaussian;
second, computing the posterior probability of the target state with the likelihood function;
finally, adopting a maximum a posteriori estimation strategy to select, from a series of candidate observations, the state with the largest posterior probability as the target state at the current time.
6. The visual tracking method fusing target spatial distribution and temporal distribution feature subspaces according to any one of claims 1 to 5, characterized in that step (5) specifically comprises the following substeps:
for the first two mode matrices:
first, updating the covariance matrices of the sample matrices obtained in step (2);
second, adding a forgetting factor to the update equation to obtain covariance matrices that better match reality;
for the third mode matrix:
first, performing a QR decomposition of the newly obtained data matrix;
second, performing a singular value decomposition of the data matrix merging the new and old data.
CN 201110425493 2011-12-16 2011-12-16 Visual tracking method of subspace fusing target space distribution and time sequence distribution characteristics Active CN102592135B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN 201110425493 CN102592135B (en) 2011-12-16 2011-12-16 Visual tracking method of subspace fusing target space distribution and time sequence distribution characteristics


Publications (2)

Publication Number Publication Date
CN102592135A true CN102592135A (en) 2012-07-18
CN102592135B CN102592135B (en) 2013-12-18

Family

ID=46480745

Family Applications (1)

Application Number Title Priority Date Filing Date
CN 201110425493 Active CN102592135B (en) 2011-12-16 2011-12-16 Visual tracking method of subspace fusing target space distribution and time sequence distribution characteristics

Country Status (1)

Country Link
CN (1) CN102592135B (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030161500A1 (en) * 2002-02-22 2003-08-28 Andrew Blake System and method for probabilistic exemplar-based pattern tracking
US20060093188A1 (en) * 2002-02-22 2006-05-04 Microsoft Corporation Probabilistic exemplar-based pattern tracking
WO2007042195A2 (en) * 2005-10-11 2007-04-19 Carl Zeiss Imaging Solutions Gmbh Method for segmentation in an n-dimensional characteristic space and method for classification on the basis of geometric characteristics of segmented objects in an n-dimensional data space
CN101221620A (en) * 2007-12-20 2008-07-16 北京中星微电子有限公司 Human face tracing method
CN102054170A (en) * 2011-01-19 2011-05-11 中国科学院自动化研究所 Visual tracking method based on minimized upper bound error
CN102184551A (en) * 2011-05-10 2011-09-14 东北大学 Automatic target tracking method and system by combining multi-characteristic matching and particle filtering

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103150572A (en) * 2012-12-11 2013-06-12 中国科学院深圳先进技术研究院 On-line type visual tracking method
CN103150572B (en) * 2012-12-11 2016-04-13 中国科学院深圳先进技术研究院 Online visual tracking method
CN103345735A (en) * 2013-07-16 2013-10-09 上海交通大学 Compressed space-time multi-sensor fusion tracking method based on Kalman filter
CN105654102A (en) * 2014-11-10 2016-06-08 富士通株式会社 Data processing device and data processing method
CN106228245A (en) * 2016-07-21 2016-12-14 电子科技大学 Infer based on variation and the knowledge base complementing method of tensor neutral net
CN106228245B (en) * 2016-07-21 2018-09-04 电子科技大学 Infer the knowledge base complementing method with tensor neural network based on variation
CN107122735A (en) * 2017-04-26 2017-09-01 中山大学 A kind of multi-object tracking method based on deep learning and condition random field
CN107122735B (en) * 2017-04-26 2020-07-14 中山大学 Multi-target tracking method based on deep learning and conditional random field
CN110084834A (en) * 2019-04-28 2019-08-02 东华大学 A kind of method for tracking target based on quick tensor singular value decomposition Feature Dimension Reduction
CN110084834B (en) * 2019-04-28 2021-04-06 东华大学 Target tracking method based on rapid tensor singular value decomposition feature dimension reduction
CN111292548A (en) * 2020-02-06 2020-06-16 温州大学 Safe driving method based on visual attention

Also Published As

Publication number Publication date
CN102592135B (en) 2013-12-18


Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
EE01 Entry into force of recordation of patent licensing contract

Application publication date: 20120718

Assignee: NEW DIMENSION SYSTEMS CO., LTD.

Assignor: Wenzhou University

Contract record no.: 2014330000384

Denomination of invention: Visual tracking method of subspace fusing target space distribution and time sequence distribution characteristics

Granted publication date: 20131218

License type: Common License

Record date: 20141010

Application publication date: 20120718

Assignee: Zhejiang Gela Weibao Glass Technology Co., Ltd.

Assignor: Wenzhou University

Contract record no.: 2014330000387

Denomination of invention: Visual tracking method of subspace fusing target space distribution and time sequence distribution characteristics

Granted publication date: 20131218

License type: Common License

Record date: 20141010

LICC Enforcement, change and cancellation of record of contracts on the licence for exploitation of a patent or utility model