CN104484890A - Video target tracking method based on compound sparse model - Google Patents


Info

Publication number
CN104484890A
Authority
CN
China
Prior art keywords
sparse
particle
model
video
target
Prior art date
Legal status
Granted
Application number
CN201410802562.8A
Other languages
Chinese (zh)
Other versions
CN104484890B (en)
Inventor
敬忠良
金博
王梦
潘汉
Current Assignee
Shanghai Jiaotong University
Original Assignee
Shanghai Jiaotong University
Priority date
Filing date
Publication date
Application filed by Shanghai Jiaotong University
Priority to CN201410802562.8A
Publication of CN104484890A
Application granted
Publication of CN104484890B
Legal status: Active


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10016Video; Image sequence

Abstract

The invention discloses a video target tracking method based on a compound sparse model, belonging to the field of computer vision. Under a particle filter framework, the method builds a compound sparse appearance model in which the joint sparse coefficient matrix of all particle observations is decomposed into a group-sparse part, an element-sparse part and an outlier-sparse part, representing respectively the features the particles share on the dictionary, the features individual to each particle, and additive sparse noise. The compound sparsity is enforced by L1,∞-norm and L1,1-norm regularization, and the resulting optimization problem is solved with the alternating direction method of multipliers (ADMM), which yields high computational efficiency. The invention also provides a dynamic dictionary update method, so that the tracker can adapt to changes in the target's appearance. Experiments show that the tracking performance and robustness of the algorithm are superior to several conventional video target tracking algorithms compared against. The method can be applied to fields such as human-computer interaction, intelligent surveillance, intelligent transportation, visual navigation and video retrieval.

Description

Video target tracking method based on a compound sparse model
Technical field
The present invention relates to a technology in the field of video processing, specifically a video target tracking method based on a compound sparse model, applicable to human-computer interaction, intelligent surveillance, intelligent transportation, visual navigation and video retrieval.
Background technology
Video tracking is a major problem in computer vision. Its task is to analyze the two-dimensional image sequence captured by a camera and continuously locate a target or region of interest. Video tracking technology has broad application prospects in both civilian and military domains. A modern intelligent surveillance system needs to automatically detect, track and identify targets in its field of view, capture abnormal situations and raise early warnings; the lack of reliable and efficient video target tracking algorithms is one of the main bottlenecks of the intelligent surveillance field. In human-computer interaction, people are no longer satisfied with simple interaction models based only on mouse and keyboard. With the development of virtual reality technology, vision-based human-computer interaction has become a research hotspot in both industry and academia. To realize this goal, a computer must first be able to perceive human motion, the prerequisite of which is to locate the human body in the camera's view and then, on that basis, understand the person's behavior; visual tracking can therefore be said to be the foundation of human-computer interaction. Intelligent transportation systems analyze video of monitored roads to derive information such as vehicle behavior, pedestrian behavior and traffic flow, and then control the flow of vehicles or guard against traffic accidents; a key problem in intelligent transportation is the segmentation and tracking of vehicles and pedestrians, which also belongs to the category of video tracking. In addition, video target tracking has important applications in fields such as visual navigation, video retrieval and video compression. However, owing to factors such as target appearance change, pose change, imaging environment change, background clutter and occlusion, designing an accurate, stable and fast video tracking algorithm remains a very challenging task.
In recent years, the application of sparse representation to video tracking has significantly improved the performance of tracking algorithms, especially their resistance to occlusion. Typically, a video tracking algorithm based on sparse representation is implemented under a particle filter framework, which uses a set of weighted particles to represent the probability density function of the current target state and then estimates the target position. The initial observation of each particle is the pixel region it covers in the current frame; through a sparse appearance model, each observation can be expressed as a group of sparse coefficients on the subspace spanned by the templates in a dictionary. The likelihood that a candidate belongs to the tracked target is defined by the reconstruction error between the original observation and the linear combination of the dictionary templates weighted by these sparse coefficients.
Video tracking algorithms based on sparse representation can be roughly divided into two classes: single-task algorithms and multi-task algorithms. A typical single-task algorithm is the L1 tracker [X. Mei and H. Ling, "Robust visual tracking using L1 minimization," in Computer Vision, 2009 IEEE 12th International Conference on, 1436-1443, IEEE (2009)], which performs a separate L1 optimization for each particle when solving the sparse coefficients. This practice has two shortcomings. First, the computational cost is large: the L1 tracker uses an interior-point method to solve each particle's L1 optimization problem, with a computational complexity on the order of O(m²n + mrn), where m, n and r are respectively the feature dimensionality, the number of particles and the number of templates in the dictionary. Second, the particles are assumed to be mutually independent, and the connections between them are ignored entirely. Yet because the particle filter samples particles from a Gaussian distribution, most particles are spatially close, and their appearances are therefore also similar. Exploiting this similarity can improve the joint reconstruction of all particles and hence the tracking accuracy. Multi-task learning can exploit the essential connections between multiple related learning tasks and has been shown, both theoretically and empirically, to clearly improve overall performance. Solving the sparse coefficients of all particles can be regarded as multiple related linear regression problems; the aim of multi-task sparse-representation video tracking algorithms is thus to exploit the advantages of multi-task learning to improve tracking performance. A representative algorithm is the multi-task tracker (MTT) [T. Zhang, B. Ghanem, S. Liu, and N. Ahuja, "Robust visual tracking via multi-task sparse learning," in Computer Vision and Pattern Recognition (CVPR), 2012 IEEE Conference on, 2042-2049, IEEE (2012)]. Although experiments show that both the tracking quality and the computational efficiency of the multi-task tracker are better than those of the L1 tracker, it has the following problem: the joint sparsity of the particle appearances is enforced by an L1,q-norm constraint, which over-emphasizes the connections between particles; in particular, when q > 2, the nonzero sparse coefficients of all particles take similar values. This contradicts the common-sense notion that a particle's characteristics should consist of both the commonality shared between particles and its own individuality.
A search of the prior art finds Chinese patent document CN103985143A, published 2014.8.13, which discloses a discriminative online target tracking method in video based on dictionary learning. The method divides candidate samples and templates into blocks, extracts sparse representation coefficients for each block, uses these coefficients as the classifier of the corresponding block to obtain the decision confidence of each block of a candidate sample, then randomly draws K of the per-block confidences and sums them; traversing all possibilities, the candidate sample selected the most times becomes the tracking result of the current frame. In this technique, however, the optimization problems for the sparse representation coefficients of the candidate samples are considered in isolation, and the relations between candidate samples are neglected. That is, it remains a single-task tracking algorithm, so the estimation accuracy of its sparse coefficients is difficult to bring up to industrial requirements.
Summary of the invention
Aiming at the problems of the prior art, namely that single-task sparse-representation tracking algorithms completely ignore the connections between particles while existing multi-task sparse-representation tracking algorithms over-emphasize the commonality between them, the present invention proposes a video target tracking method based on a compound sparse model.
The present invention is achieved through the following technical solution, comprising the following steps:
Step 1: in the initial frame of the video under test, i.e. the first frame, select the tracked target by hand to obtain its initial position; generate the initial particle set around the initial position, generate the initial dictionary D by dense sampling near the initial position, and select the identity matrix I_m as the set of trivial templates.
Step 2: in each frame of the video under test, predict the particle states using the motion model of the target.
The motion model of the target adopts a random walk model, i.e. each new state is sampled from a Gaussian distribution centred on the state at the previous moment.
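A minimal sketch of this random-walk prediction, assuming the six-parameter affine state used later in the embodiment; the per-parameter noise scales here are illustrative guesses, not the patent's tuned values:

```python
import numpy as np

def predict_particles(states, sigma, rng=None):
    """Random-walk motion model: sample each new particle state from a
    Gaussian centred on its previous state (one row per particle)."""
    rng = np.random.default_rng() if rng is None else rng
    return states + rng.normal(0.0, sigma, size=states.shape)

# Example: 5 particles with a six-parameter affine state.
states = np.zeros((5, 6))
sigma = np.array([4.0, 4.0, 0.02, 0.01, 0.005, 0.001])  # illustrative scales
new_states = predict_particles(states, sigma, np.random.default_rng(0))
```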
Step 3: extract the pixel values of the region of each predicted particle and compress them into the observation matrix Y.
The region is a parallelogram defined by the state of the particle, whose parameters comprise the centre, the length and width, and the skew angle.
The compression refers to: down-sampling the obtained pixel values, merging the sampled pixel values of each region into a vector serving as the observation, and finally assembling the vectors into the observation matrix Y.
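The compression step above can be sketched as follows; the output size and the index-striding down-sampling are illustrative assumptions standing in for whatever resampling an implementation uses:

```python
import numpy as np

def build_observation_matrix(patches, out_size=(16, 16)):
    """Compress particle observations into the matrix Y.

    patches: list of 2-D grayscale arrays, one per particle (each the
             image region defined by a particle's state).
    Each patch is down-sampled to out_size, flattened into a column
    vector, and the columns are stacked into Y.
    """
    cols = []
    for p in patches:
        ri = np.linspace(0, p.shape[0] - 1, out_size[0]).astype(int)
        ci = np.linspace(0, p.shape[1] - 1, out_size[1]).astype(int)
        small = p[np.ix_(ri, ci)].astype(float)  # strided down-sampling
        cols.append(small.ravel())               # patch -> column vector
    return np.stack(cols, axis=1)                # (features, particles)

patches = [np.random.rand(48, 32) for _ in range(10)]
Y = build_observation_matrix(patches)  # 256 features x 10 particles
```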
Step 4: using the observation matrix Y, the dictionary D and the trivial templates I_m, solve the L1,∞- and L1,1-norm optimization problem and compute the sparse coefficient matrices B, S and T of the compound sparse appearance model.
The compound sparse appearance model is a joint feature extraction model of all particles: the sparse representation coefficients of all particle observations on the dictionary templates are computed in a unified manner, and the joint sparse coefficient matrix of all particles is divided into three parts, group sparsity, element sparsity and outlier sparsity, corresponding respectively to the shared features, the individual features and the abnormal features of the particles. Specifically, at time t the predicted states of the n particles in the particle filter are x_t^1, …, x_t^n, and their original observations are o_t^1, …, o_t^n, where o_t^i is the image region cut from the current frame according to state x_t^i. These observations are down-sampled and stacked into vectors, giving the final observations y_t^1, …, y_t^n. The dictionary at time t is D_t = {d_1, …, d_r}, where r is the number of templates; the initial dictionary is obtained in the initial frame by dense sampling near the initial state of the target. To cope with noise and occlusion, the trivial templates I_m, i.e. an identity matrix, are introduced: the noise or occlusion on a particle observation is expressed as an additive sparse noise, which can take arbitrarily large values on its support. In the compound sparse appearance model, the observation matrix Y can be expressed as Y = [D, I_m][B + S; T] = D(B + S) + I_m·T, where the joint sparse coefficient matrix B represents the shared features of the particles on the dictionary D, the element-sparse matrix S represents their individual features, and the element-sparse matrix T represents the sparse coefficients of the particles on the trivial templates, i.e. the additive noise.
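The decomposition above can be checked numerically. The matrices below are random placeholders used only to illustrate the shapes and the equivalence of the stacked and expanded forms, not a trained tracker:

```python
import numpy as np

# m: feature dimension, r: number of templates, n: number of particles.
m, r, n = 64, 10, 20
rng = np.random.default_rng(0)
D = rng.normal(size=(m, r))           # dictionary of target templates
B = np.zeros((r, n)); B[2, :] = 1.0   # group-sparse: one row shared by all particles
S = np.zeros((r, n)); S[5, 3] = 0.5   # element-sparse: one particle's individual feature
T = np.zeros((m, n)); T[10, 7] = 9.0  # outlier-sparse additive noise (arbitrarily large)

Y = D @ (B + S) + T                   # I_m @ T == T, since I_m is the identity

# Equivalent stacked form Y = [D, I_m] [B + S; T]:
stacked = np.hstack([D, np.eye(m)]) @ np.vstack([B + S, T])
```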
The L1,∞- and L1,1-norm optimization problem to be solved is:
min_{E,B,S,T} ‖E‖_F² + λ_b‖B‖_{1,∞} + λ_s‖S‖_{1,1} + λ_t‖T‖_{1,1}
s.t. Y = D(B + S) + I_m·T + E
This model is solved by the alternating direction method of multipliers (ADMM): exploiting the linear separability of the problem, the complex optimization above is decomposed into several simpler subproblems that are solved one by one. This method converges quickly; compared with other optimization methods, it reaches higher accuracy in fewer iterations. Concretely, the basic ADMM solving framework of the above optimization problem is:
B_{k+1} = argmin_B λ_b‖B‖_{1,∞} + (ρ/2)‖D(B + S_k) + I_m·T_k + E_k − Y + U_k‖_F²
S_{k+1} = argmin_S λ_s‖S‖_{1,1} + (ρ/2)‖D(B_{k+1} + S) + I_m·T_k + E_k − Y + U_k‖_F²
T_{k+1} = argmin_T λ_t‖T‖_{1,1} + (ρ/2)‖D(B_{k+1} + S_{k+1}) + I_m·T + E_k − Y + U_k‖_F²
E_{k+1} = argmin_E ‖E‖_F² + (ρ/2)‖D(B_{k+1} + S_{k+1}) + I_m·T_{k+1} + E − Y + U_k‖_F²
U_{k+1} = U_k + D(B_{k+1} + S_{k+1}) + I_m·T_{k+1} + E_{k+1} − Y
As can be seen, the original optimization problem is decomposed into four subproblems, over B, S, T and E, plus an update of the iteration variable U. Applying ADMM again transforms the subproblems into standard L1,∞-norm and L1,1-norm regularized problems, which yields the final solving method.
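The standard L1,1 and L1,∞ proximal steps that the subproblems reduce to can be sketched as follows. This is a minimal NumPy sketch: the L1,1 prox is elementwise soft-thresholding, and the row-wise L1,∞ prox is obtained through the Moreau identity by projecting each row onto an ℓ1-ball:

```python
import numpy as np

def soft_threshold(X, tau):
    """Prox of tau*||X||_{1,1}: elementwise soft-thresholding."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def prox_linf_row(z, lam):
    """Prox of lam*||z||_inf for a single row vector z, computed as
    z minus the Euclidean projection of z onto the l1-ball of radius
    lam. When ||z||_1 <= lam the row collapses to zero."""
    a = np.abs(z)
    if a.sum() <= lam:
        return np.zeros_like(z)
    u = np.sort(a)[::-1]                 # |z| sorted in descending order
    css = np.cumsum(u)
    j = np.arange(1, z.size + 1)
    jhat = np.max(np.nonzero(u - (css - lam) / j > 0)[0])
    mu = (css[jhat] - lam) / (jhat + 1)  # clipping level
    return np.sign(z) * np.minimum(a, mu)

def prox_l1inf(B, lam):
    """Row-wise prox of lam*||B||_{1,inf} (sum over rows of row maxima)."""
    return np.vstack([prox_linf_row(row, lam) for row in B])
```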
Step 5: compute the reconstruction error E and update the weight of each particle.
Step 6: select the particle with the largest updated weight as the target of the current frame; the state of this target is the corresponding particle state, and its observation is the corresponding observation.
Step 7: use the observation to update the dictionary templates and the sparse coefficients in an adaptive manner.
Step 8: output the state estimates of all frames, i.e. the tracking result of the target in this video.
The adaptive manner specifically comprises the following steps:
1) First take the norm of each template in the dictionary and use it as the weight of that template.
2) Then, after tracking finishes in each frame, consider the new appearance of the target. If its similarity to all templates in the dictionary is very low, it is regarded as a new appearance pattern and replaces the template with the smallest weight in the dictionary. At the same time, consider the rows of the group-sparse matrix B in which more than 80% of the elements are nonzero, i.e. the templates with common features shared by most particles: they are regarded as patterns with a high probability of appearing in the current target appearance, so when the weights of these rows are small, the corresponding weights are raised. The weights of the newly added and shared templates are set to the median weight, so that these templates can play a significant role in subsequent frames while being prevented from becoming overly dominant.
3) Finally, the weights of the dictionary, i.e. the norms of the templates, are normalized so that they sum to 1.
Brief description of the drawings
Fig. 1 is a schematic diagram of the compound sparse appearance model;
Fig. 2 is the system flowchart;
Fig. 3 shows the tracking results of the embodiment.
Embodiment
An embodiment of the present invention is elaborated below. The present embodiment is implemented on the premise of the technical solution of the present invention and gives a detailed implementation and concrete operating process, but the protection scope of the present invention is not limited to the following embodiment.
Embodiment 1
In the present embodiment, the target state adopts a six-variable affine model, i.e. x_t = {a_t, b_t, θ_t, s_t, α_t, φ_t}, where the six parameters respectively represent the two position coordinates, the rotation angle, the scale, the aspect ratio and the skew angle. The motion model of the target adopts a random walk model: each new state is sampled from a Gaussian distribution centred on the state at the previous moment. The original observation of the target is the pixel values of the region defined by its state, which are down-sampled and compressed into a vector to become the actual observation used in the compound sparse model. The likelihood function p(o_t | x_t) is defined by the similarity between the particle appearance and the dictionary templates, i.e. the reconstruction error of the particle under the compound sparse appearance model follows a Gaussian distribution. At time t, given the current dictionary D_{t−1} and the n particles x_{t−1}^1, …, x_{t−1}^n, the present embodiment estimates the new target state x_t, the new target appearance o_t, the n new particles x_t^1, …, x_t^n and the new dictionary D_t. The concrete steps are as follows:
Step 1: resample the particles and denote the result x̃_{t−1}^1, …, x̃_{t−1}^n.
Step 2: predict the new particle states x_t^i according to the motion model.
Step 3: generate the observation matrix Y from the current frame according to the particle states.
Step 4: using the observations Y, the dictionary D_{t−1} and the trivial templates I_m, compute the sparse coefficient matrices B, S and T of the compound sparse appearance model:
4.1) Initialization: B_0 = 0, S_0 = 0, T_0 = 0, E_0 = Y, U_0 = 0.
4.2) Main loop: for k = 0, 1, …, until convergence:
4.2.1) B subloop: G = Y − D_{t−1}·S_k − I_m·T_k − E_k − U_k, B^(0) = B_k, P^(0) = B_k; for j = 0, 1, …, until convergence:
a) B^(j+1) = (ρ·D_{t−1}^T·D_{t−1} + τ·I_r)^{−1}[ρ·D_{t−1}^T·G + τ(P^(j) − U_B^(j))]
b) For each row i = 1, …, r: P_{i·}^(j+1) = max(0, sgn(1 − λ_b/(τ‖z‖_1)))·a, where z = B_{i·}^(j+1) + U_{B,i·}^(j+1), A_{i·} denotes the i-th row of a matrix A, and sgn(·) is the sign function. Here a is a row vector with elements a_l = sgn(z_l)·min(|z_l|, μ), l = 1, …, n, where a_l and z_l are the l-th elements of a and z, u is |z| sorted in descending order, Ĵ = max{ j : Σ_{r=1}^{j}(u_r − u_j) < λ_b/τ } and μ = (Σ_{r=1}^{Ĵ} u_r − λ_b/τ)/Ĵ.
c) Update the subloop iteration variable: U_B^(j+1) = U_B^(j) + B^(j+1) − P^(j+1).
4.2.2) S subloop: H = Y − D_{t−1}·B_{k+1} − I_m·T_k − E_k − U_k, S^(0) = S_k, A^(0) = S_k; for j = 0, 1, …, until convergence:
a) S^(j+1) = (ρ·D_{t−1}^T·D_{t−1} + τ·I_r)^{−1}[ρ·D_{t−1}^T·H + τ(A^(j) − U_S^(j))]
b) A^(j+1) = sgn(S^(j+1) + U_S^(j)) ⊙ max(|S^(j+1) + U_S^(j)| − λ_s/τ, 0), i.e. elementwise soft-thresholding.
c) Update the subloop iteration variable: U_S^(j+1) = U_S^(j) + S^(j+1) − A^(j+1).
4.2.3) T subloop: since I_m is the identity matrix, the T subproblem has the closed-form elementwise soft-thresholding solution T_{k+1} = sgn(V) ⊙ max(|V| − λ_t/ρ, 0), where V = Y − D_{t−1}(B_{k+1} + S_{k+1}) − E_k − U_k.
4.2.4) E subloop: E_{k+1} = ρ/(ρ + 2)·(Y − D_{t−1}(B_{k+1} + S_{k+1}) − I_m·T_{k+1} − U_k).
4.2.5) Iteration variable update: U_{k+1} = U_k + D_{t−1}(B_{k+1} + S_{k+1}) + I_m·T_{k+1} + E_{k+1} − Y.
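The whole of step 4 can be sketched as a small NumPy routine. This is a minimal sketch: the inner B/S subproblems run for a fixed number of splitting iterations rather than to convergence, and the penalty parameters (ρ, τ), regularization weights (λ_b, λ_s, λ_t) and iteration counts are illustrative assumptions, not the patent's tuned values:

```python
import numpy as np

def soft(X, t):
    """Elementwise soft-thresholding: prox of t*||X||_{1,1}."""
    return np.sign(X) * np.maximum(np.abs(X) - t, 0.0)

def prox_row_inf(z, lam):
    """Prox of lam*||z||_inf for one row, via projection onto the l1-ball."""
    a = np.abs(z)
    if a.sum() <= lam:
        return np.zeros_like(z)
    u = np.sort(a)[::-1]
    css = np.cumsum(u)
    idx = np.arange(1, z.size + 1)
    jhat = np.max(np.nonzero(u - (css - lam) / idx > 0)[0])
    mu = (css[jhat] - lam) / (jhat + 1)
    return np.sign(z) * np.minimum(a, mu)

def track_admm(Y, D, lam_b=0.1, lam_s=0.1, lam_t=1.0, rho=1.0, tau=1.0,
               outer=20, inner=5):
    m, r = D.shape
    n = Y.shape[1]
    B = np.zeros((r, n)); S = np.zeros((r, n))
    T = np.zeros((m, n)); E = Y.copy(); U = np.zeros((m, n))
    M = np.linalg.inv(rho * D.T @ D + tau * np.eye(r))  # shared by B and S
    for _ in range(outer):
        # 4.2.1  B subloop: quadratic step + row-wise L1,inf prox
        G = Y - D @ S - T - E - U
        P = B.copy(); UB = np.zeros_like(B)
        for _ in range(inner):
            B = M @ (rho * D.T @ G + tau * (P - UB))
            P = np.vstack([prox_row_inf(row, lam_b / tau) for row in B + UB])
            UB += B - P
        B = P
        # 4.2.2  S subloop: quadratic step + elementwise soft-threshold
        H = Y - D @ B - T - E - U
        A = S.copy(); US = np.zeros_like(S)
        for _ in range(inner):
            S = M @ (rho * D.T @ H + tau * (A - US))
            A = soft(S + US, lam_s / tau)
            US += S - A
        S = A
        # 4.2.3  T subloop: closed-form soft-threshold (I_m is the identity)
        T = soft(Y - D @ (B + S) - E - U, lam_t / rho)
        # 4.2.4  E subloop: closed form of ||E||_F^2 + (rho/2)||.||_F^2
        E = rho / (rho + 2.0) * (Y - D @ (B + S) - T - U)
        # 4.2.5  iteration variable (dual) update
        U += D @ (B + S) + T + E - Y
    return B, S, T, E

rng = np.random.default_rng(1)
D = rng.normal(size=(32, 8)); D /= np.linalg.norm(D, axis=0)
Y = 0.5 * D @ rng.normal(size=(8, 12))   # synthetic observations
B, S, T, E = track_admm(Y, D)
```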
Step 5: compute the reconstruction error of the particles, E = Y − D_{t−1}(B + S) − I_m·T.
Step 6: update the particle weights, w_t^i = w̃_{t−1}^i·exp(−‖E_{·i}‖_2²), where E_{·i} is the i-th column of E.
Step 7: select the particle with the largest weight as the estimate x_t of the current target state; its observation is o_t.
Step 8: using o_t, D_{t−1}, B, S and T, dynamically update the dictionary to obtain the new dictionary D_t:
8.1) Compute the current weights from the norms of the templates in the dictionary D_{t−1}: π_i = ‖d_i‖.
8.2) Let C = B + S, so that D_{t−1}·C reconstructs the observations; the reconstruction coefficients c of the new observation o_t on D_{t−1} are the corresponding column of C.
8.3) Update the weights π using the reconstruction coefficients c.
8.4) Let θ = median(π) denote the median of the weights.
8.5) If the similarity between the new observation o_t and the template d_i with the largest reconstruction coefficient, i = argmax_{1≤i≤r} |c_i|, is low, then replace the template with the smallest weight, j = argmin_{1≤i≤r} π_i, with o_t, and set π_j = θ.
8.6) Denote by Q the index set of the rows of B in which more than 80% of the elements are nonzero; for any k ∈ Q with π_k < θ, set π_k = θ.
8.7) Normalize the weights π so that Σ_i π_i = 1.
8.8) Normalize the norms of the dictionary templates so that ‖d_i‖ = π_i. The result is the new dictionary D_t.
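Steps 8.1-8.8 can be sketched as follows. The exact weight-update formula of 8.3 and the similarity measure and threshold of 8.5 are not specified above, so the choices here (a multiplicative exponential update and cosine similarity with a 0.5 threshold) are assumptions:

```python
import numpy as np

def update_dictionary(D, B, S, o_t, idx, nz_thresh=0.8, sim_thresh=0.5):
    """Dynamic dictionary update.
    D: (m, r) dictionary; B, S: (r, n) coefficient matrices;
    o_t: (m,) new observation; idx: column index of the tracked particle."""
    pi = np.linalg.norm(D, axis=0)                  # 8.1 weights from norms
    c = (B + S)[:, idx]                             # 8.2 reconstruction coeffs
    pi = pi * np.exp(np.abs(c))                     # 8.3 (assumed update form)
    theta = np.median(pi)                           # 8.4 median weight
    i = int(np.argmax(np.abs(c)))                   # template with largest coeff
    sim = o_t @ D[:, i] / (np.linalg.norm(o_t) * np.linalg.norm(D[:, i]) + 1e-12)
    if sim < sim_thresh:                            # 8.5 replace weakest template
        j = int(np.argmin(pi))
        D = D.copy()
        D[:, j] = o_t / (np.linalg.norm(o_t) + 1e-12)
        pi[j] = theta
    shared = np.mean(B != 0, axis=1) > nz_thresh    # 8.6 rows >80% nonzero
    pi[shared & (pi < theta)] = theta
    pi = pi / pi.sum()                              # 8.7 normalize weights
    D = D / (np.linalg.norm(D, axis=0) + 1e-12) * pi  # 8.8 set norms to weights
    return D, pi

rng = np.random.default_rng(0)
D0 = rng.normal(size=(16, 5))
B0 = rng.normal(size=(5, 8)); S0 = np.zeros((5, 8))
D1, pi = update_dictionary(D0, B0, S0, rng.normal(size=16), idx=0)
```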
Step 9: the final tracking result is the sequence of target state estimates over all frames.
The present embodiment is implemented with mixed Matlab and C++ (Mex) programming: Matlab is responsible for the main flow of the algorithm, such as the particle filter, observation sampling, weight calculation, target state estimation and dictionary updating, while the computation of the joint coefficient matrices of the particles in the compound sparse appearance model is implemented in C++, which improves the real-time performance of the system. The flowchart is shown in Fig. 2.
The present embodiment tests the proposed video target tracking algorithm based on the compound sparse model on several real video sequences. The video sequences used in the tests all come from the following address:
https://sites.google.com/site/trackerbenchmark/benchmarks/v10
As shown in Fig. 3, the sequences are deer, davidindoor, sylvester, faceocc, football and car11. To verify the performance of the algorithm of the present embodiment, the tests compare the proposed algorithm with several conventional video tracking algorithms, namely the L1 tracker, Frag, IVT, MS, TLD and CT. The tracking results of all algorithms on the test data are shown in Fig. 3. The difficulty of the deer video lies in the fast movement of the deer, motion blur and interference from similar targets; of the davidindoor video, in illumination change, scale change, facial expression change and partial occlusion; of the sylvester video, in long-term tracking, pose change and rotation; of the faceocc video, in large-area, long-duration occlusion; of the football video, in fast movement, numerous similar distracting targets and occlusion; and of the car11 video, in strong illumination change and image blur caused by rain. Comparing the relative positions of the solid boxes and the other boxes in Fig. 3, it can be seen that the proposed video target tracking algorithm based on the compound sparse model obtains results better than those of the conventional tracking algorithms under all of the above difficulties, and thus has better tracking performance and robustness.

Claims (7)

1. A video target tracking method based on a compound sparse model, characterized in that it comprises the following steps:
Step 1: in the initial frame of the video under test, i.e. the first frame, select the tracked target by hand to obtain its initial position, generate the initial particle set around the initial position, and generate the initial dictionary D by dense sampling;
Step 2: in each frame of the video under test, predict the particle states using the motion model of the target;
Step 3: extract the pixel values of the region of each predicted particle and compress them into the observation matrix Y;
Step 4: in the compound sparse appearance model Y = [D, I_m][B + S; T] = D(B + S) + I_m·T, compute by L1,∞- and L1,1-norm optimization the joint sparse coefficient matrix B, which represents the shared features of the particles on the dictionary D, the element-sparse matrix S, which represents their individual features, and the element-sparse matrix T, which represents the sparse coefficients of the particles on the trivial templates;
Step 5: compute the reconstruction error E and update the weight of each particle;
Step 6: take the particle with the largest updated weight as the target of the current frame, the state of this target being the corresponding particle state and its observation the corresponding observation;
Step 7: use the observation to update the dictionary templates and the sparse coefficients in an adaptive manner;
Step 8: output the state estimates of all frames, i.e. the tracking result of the target in this video.
2. The method according to claim 1, characterized in that the motion model of the target adopts a random walk model, i.e. each new state is sampled from a Gaussian distribution centred on the state at the previous moment.
3. The method according to claim 1, characterized in that the region described in step 3 is a parallelogram defined by the state of the particle, namely by the centre, the length and width, and the skew angle.
4. The method according to claim 1, characterized in that the compression refers to: down-sampling the obtained pixel values, merging the sampled pixel values into a vector serving as the observation, and finally assembling the vectors into the observation matrix Y.
5. The method according to claim 1, characterized in that the compound sparse appearance model is a joint sparse feature extraction model of all particles, i.e. the sparse representation coefficients of the observations of all particles are computed in a unified manner, and the joint sparse coefficient matrix of all particles is divided into three parts, the group-sparse coefficient matrix B, the element-sparse coefficient matrix S and the outlier-sparse coefficient matrix T, corresponding respectively to the shared features, the individual features and the abnormal features of the particles.
6. The method according to claim 1, characterized in that the optimization problem of the compound sparse appearance model is:
min_{E,B,S,T} ‖E‖_F² + λ_b‖B‖_{1,∞} + λ_s‖S‖_{1,1} + λ_t‖T‖_{1,1}
s.t. Y = D(B + S) + I_m·T + E
This model is solved by the alternating direction method of multipliers, i.e. exploiting linear separability, the complex optimization problem above is decomposed into several simpler subproblems that are solved one by one.
7. The method according to claim 6, characterized in that the alternating direction method of multipliers decomposes the original optimization problem into four subproblems, over B, S, T and E, plus an update of an iteration variable, and then applies the alternating direction method of multipliers again to transform the subproblems into standard L1,∞-norm and L1,1-norm regularized problems, thus obtaining the final solving method; the basic solving framework of the optimization problem is:
B_{k+1} = argmin_B λ_b‖B‖_{1,∞} + (ρ/2)‖D(B + S_k) + I_m·T_k + E_k − Y + U_k‖_F²
S_{k+1} = argmin_S λ_s‖S‖_{1,1} + (ρ/2)‖D(B_{k+1} + S) + I_m·T_k + E_k − Y + U_k‖_F²
T_{k+1} = argmin_T λ_t‖T‖_{1,1} + (ρ/2)‖D(B_{k+1} + S_{k+1}) + I_m·T + E_k − Y + U_k‖_F²
E_{k+1} = argmin_E ‖E‖_F² + (ρ/2)‖D(B_{k+1} + S_{k+1}) + I_m·T_{k+1} + E − Y + U_k‖_F²
U_{k+1} = U_k + D(B_{k+1} + S_{k+1}) + I_m·T_{k+1} + E_{k+1} − Y
CN201410802562.8A 2014-12-18 2014-12-18 Video target tracking method based on compound sparse model Active CN104484890B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410802562.8A CN104484890B (en) 2014-12-18 2014-12-18 Video target tracking method based on compound sparse model

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201410802562.8A CN104484890B (en) 2014-12-18 2014-12-18 Video target tracking method based on compound sparse model

Publications (2)

Publication Number Publication Date
CN104484890A true CN104484890A (en) 2015-04-01
CN104484890B CN104484890B (en) 2017-02-22

Family

ID=52759430

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410802562.8A Active CN104484890B (en) 2014-12-18 2014-12-18 Video target tracking method based on compound sparse model

Country Status (1)

Country Link
CN (1) CN104484890B (en)

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104751493A (en) * 2015-04-21 2015-07-01 南京信息工程大学 Sparse tracking method on basis of gradient texture features
CN104751484A (en) * 2015-03-20 2015-07-01 西安理工大学 Moving target detection method and detection system for achieving same
CN105654069A (en) * 2016-02-03 2016-06-08 江南大学 Increment subspace target tracking method based on Lp norm regularization
CN106204647A (en) * 2016-07-01 2016-12-07 国家新闻出版广电总局广播科学研究院 Based on the visual target tracking method that multiple features and group are sparse
CN106529526A (en) * 2016-07-06 2017-03-22 安徽大学 Object tracking algorithm based on combination between sparse expression and prior probability
CN107330912A (en) * 2017-05-10 2017-11-07 南京邮电大学 A kind of target tracking method of rarefaction representation based on multi-feature fusion
CN107392938A (en) * 2017-07-20 2017-11-24 华北电力大学(保定) A kind of sparse tracking of structure based on importance weighting
CN109685830A (en) * 2018-12-20 2019-04-26 浙江大华技术股份有限公司 Method for tracking target, device and equipment and computer storage medium
CN110120066A (en) * 2019-04-11 2019-08-13 上海交通大学 Robust multiple targets tracking and tracking system towards monitoring system
CN110175597A (en) * 2019-06-04 2019-08-27 北方工业大学 Video target detection method integrating feature propagation and aggregation
CN114783022A (en) * 2022-04-08 2022-07-22 马上消费金融股份有限公司 Information processing method and device, computer equipment and storage medium

Citations (5)

Publication number Priority date Publication date Assignee Title
WO2007050707A2 (en) * 2005-10-27 2007-05-03 Nec Laboratories America, Inc. Video foreground segmentation method
CN102254328A (en) * 2011-05-17 2011-11-23 西安电子科技大学 Video motion characteristic extracting method based on local sparse constraint non-negative matrix factorization
CN102800108A (en) * 2012-07-11 2012-11-28 上海交通大学 Vision target tracking method based on least square estimation with local restriction
CN103246874A (en) * 2013-05-03 2013-08-14 北京工业大学 Face identification method based on JSM (joint sparsity model) and sparsity preserving projection
US20140270484A1 (en) * 2013-03-14 2014-09-18 Nec Laboratories America, Inc. Moving Object Localization in 3D Using a Single Camera

Patent Citations (5)

Publication number Priority date Publication date Assignee Title
WO2007050707A2 (en) * 2005-10-27 2007-05-03 Nec Laboratories America, Inc. Video foreground segmentation method
CN102254328A (en) * 2011-05-17 2011-11-23 西安电子科技大学 Video motion characteristic extracting method based on local sparse constraint non-negative matrix factorization
CN102800108A (en) * 2012-07-11 2012-11-28 上海交通大学 Vision target tracking method based on least square estimation with local restriction
US20140270484A1 (en) * 2013-03-14 2014-09-18 Nec Laboratories America, Inc. Moving Object Localization in 3D Using a Single Camera
CN103246874A (en) * 2013-05-03 2013-08-14 北京工业大学 Face identification method based on JSM (joint sparsity model) and sparsity preserving projection

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Jiang Jialin: "Target Recognition Method Based on Structural Sparsity", China Master's Theses Full-text Database, Information Science and Technology Series *
Wang Meng et al.: "Video Target Tracking Algorithm Based on Compound Constraints", Computer Simulation *

Cited By (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104751484A (en) * 2015-03-20 2015-07-01 西安理工大学 Moving target detection method and detection system for achieving same
CN104751484B (en) * 2015-03-20 2017-08-25 西安理工大学 Moving target detection method and detection system for realizing the same
CN104751493A (en) * 2015-04-21 2015-07-01 南京信息工程大学 Sparse tracking method on basis of gradient texture features
CN105654069A (en) * 2016-02-03 2016-06-08 江南大学 Incremental subspace target tracking method based on Lp-norm regularization
CN105654069B (en) * 2016-02-03 2019-05-10 江南大学 Incremental subspace target tracking method based on Lp-norm regularization
CN106204647A (en) * 2016-07-01 2016-12-07 国家新闻出版广电总局广播科学研究院 Visual target tracking method based on multiple features and group sparsity
CN106204647B (en) * 2016-07-01 2019-05-10 国家新闻出版广电总局广播科学研究院 Visual target tracking method based on multiple features and group sparsity
CN106529526B (en) * 2016-07-06 2019-12-17 安徽大学 Target tracking method based on a combination of sparse representation and prior probability
CN106529526A (en) * 2016-07-06 2017-03-22 安徽大学 Target tracking method based on a combination of sparse representation and prior probability
CN107330912A (en) * 2017-05-10 2017-11-07 南京邮电大学 Target tracking method based on sparse representation with multi-feature fusion
CN107330912B (en) * 2017-05-10 2021-06-11 南京邮电大学 Target tracking method based on sparse representation of multi-feature fusion
CN107392938A (en) * 2017-07-20 2017-11-24 华北电力大学(保定) Structured sparse tracking method based on importance weighting
CN109685830A (en) * 2018-12-20 2019-04-26 浙江大华技术股份有限公司 Target tracking method, device, equipment, and computer storage medium
CN109685830B (en) * 2018-12-20 2021-06-15 浙江大华技术股份有限公司 Target tracking method, device and equipment and computer storage medium
CN110120066A (en) * 2019-04-11 2019-08-13 上海交通大学 Robust multi-target tracking method and tracking system for surveillance systems
CN110175597A (en) * 2019-06-04 2019-08-27 北方工业大学 Video target detection method integrating feature propagation and aggregation
CN114783022A (en) * 2022-04-08 2022-07-22 马上消费金融股份有限公司 Information processing method and device, computer equipment and storage medium
CN114783022B (en) * 2022-04-08 2023-07-21 马上消费金融股份有限公司 Information processing method, device, computer equipment and storage medium

Also Published As

Publication number Publication date
CN104484890B (en) 2017-02-22

Similar Documents

Publication Publication Date Title
CN104484890A (en) Video target tracking method based on compound sparse model
Zhang et al. Pedestrian detection method based on Faster R-CNN
CN108665481B (en) Adaptive anti-occlusion infrared target tracking method based on multi-layer deep feature fusion
CN104574445B (en) Target tracking method
CN108062525B (en) Deep learning hand detection method based on hand region prediction
CN112184752A (en) Video target tracking method based on pyramid convolution
CN111914664A (en) Vehicle multi-target detection and track tracking method based on re-identification
CN107481264A (en) Adaptive-scale video target tracking method
CN106295564B (en) Action recognition method based on fusion of neighborhood Gaussian structure and video features
CN107918772B (en) Target tracking method based on compressed sensing theory and gcForest
CN105975932A (en) Gait recognition and classification method based on time sequence shapelet
CN106570490A (en) Pedestrian real-time tracking method based on fast clustering
CN111402303A (en) Target tracking architecture based on KFSTRCF
CN103413154A (en) Human motion recognition method based on a normalized Google-like metric matrix
Li et al. SSD object detection model based on multi-frequency feature theory
Liang et al. Automatic defect detection of texture surface with an efficient texture removal network
Liu et al. Self-correction ship tracking and counting with variable time window based on YOLOv3
Tu et al. A biologically inspired vision-based approach for detecting multiple moving objects in complex outdoor scenes
CN108257148B (en) Target proposal window generation method for specific objects and its application in target tracking
Wang et al. Forest smoke detection based on deep learning and background modeling
CN113850221A (en) Attitude tracking method based on key point screening
Ikram et al. Real time hand gesture recognition using leap motion controller based on CNN-SVM architechture
Tao Detecting smoky vehicles from traffic surveillance videos based on dynamic features
Li et al. Research on YOLOv3 pedestrian detection algorithm based on channel attention mechanism
Geng et al. A novel color image segmentation algorithm based on JSEG and Normalized Cuts

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant