CN103778641B - Method for tracking target based on Wavelet Descriptor - Google Patents

Info

Publication number: CN103778641B (application CN201210414785.8A)
Authority: CN (China)
Prior art keywords: target, wavelet, template, color histogram, frame
Legal status: Expired - Fee Related (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis)
Other languages: Chinese (zh)
Other versions: CN103778641A (en)
Inventors: 田小林, 焦李成, 刘朵, 张小华, 缑水平, 朱虎明, 钟桦, 马文萍
Current and original assignee: Xidian University (the listed assignees may be inaccurate)
Application filed by Xidian University
Priority to CN201210414785.8A
Publication of application CN103778641A; publication and grant of CN103778641B

Landscapes

  • Image Analysis (AREA)

Abstract

The present invention discloses a target tracking method based on wavelet descriptors, which mainly solves the prior-art problem of tracking failure when the target is occluded or changes rapidly. The implementation steps are: (1) input the first frame of a video and manually mark the target to be tracked; (2) build the target template; (3) compute the color histogram of the target; (4) extract wavelet features in the search region of a new video frame; (5) find the local minimum of the distance to the target template; (6) compute the color histogram of the candidate target; (7) judge whether the target is occluded: under partial occlusion, track extracted corner features, and under full occlusion, track the target by motion estimation; (8) repeat steps (4) to (7) until the video ends. Compared with the prior art, the present invention improves the robustness of target tracking when the target is occluded or changes rapidly.

Description

Method for tracking target based on Wavelet Descriptor
Technical field
The invention belongs to the technical field of image processing and relates to a video target tracking method, applicable to intelligent surveillance, target tracking and human-machine interfaces.
Background art
Target tracking in image sequences is an important component of image processing technology: it analyzes an input video sequence to determine the position of the target in each frame and obtain the relevant parameters. Target tracking is one of the key technologies in computer vision; it merges fields such as image processing, pattern recognition and artificial intelligence, and is widely used in robot visual guidance, safety monitoring, traffic control, video compression, meteorological analysis and many other areas. In the military domain, it has been successfully applied to the imaging guidance of weapons, military reconnaissance and surveillance. In the civilian domain, visual monitoring has been widely used in many aspects of social life: target tracking can be applied to the security monitoring of communities and critical facilities, and in intelligent transportation systems the real-time tracking of vehicles yields many valuable traffic-flow parameters such as traffic volume, vehicle speed and vehicle density, while emergencies such as accidents or faults can also be detected.
The patent application "A visual target recognition and tracking method" (application number 201010537843.7, publication number CN101986348A) filed by Shanghai Dian Ji University discloses a visual target recognition and tracking method. This tracking method sets the 0th-frame search window to the size of the image by default, obtains the bounding box of the target from the first frame, then predicts the search window, computing the bounding box and the feature points within it by image processing; on this basis it proposes a predictable search-window method that performs motion prediction and tracking of the marked target. Although this method helps improve real-time performance, the predictive search window cannot achieve accurate tracking when the moving target is occluded or changes rapidly.
The patent application "Gray-level target tracking algorithm based on edge information and mean shift" (application number CN201010238378.7, publication number CN101916446A) filed by Beijing University of Aeronautics and Astronautics discloses a gray-level target tracking algorithm based on edge information and mean shift. The method preprocesses the first video frame and extracts the feature template of the target, predicts the initial position of the target in the current frame with a Kalman filter, then searches for the optimal target position in the current frame with the mean-shift tracking algorithm starting from that position, and updates the target template at fixed intervals in combination with Canny filtering. Although the method can track the target when the target shape, gray-level distribution and background change, it uses the Kalman filter for linear prediction; therefore, when a target in nonlinear motion is occluded, the tracker is prone to drift and tracking fails.
Summary of the invention
The object of the present invention is to address the above deficiencies of the prior art by proposing a target tracking method based on wavelet descriptors that can still track the target correctly when the target is occluded or moves rapidly, thereby improving the robustness of target tracking.
The idea of the present invention is as follows: extract the wavelet feature descriptor of the target in the first frame as the target template; extract the wavelet feature descriptor of the search region in the current frame; find the local minimum of the distance between the two to determine the position of the target; at the same time, introduce the idea of occlusion detection to improve the accuracy of target template updating; and, when the target is occluded, use corner features and motion prediction to complete correct tracking. The specific implementation steps include the following:
(1) Input the first frame of a video and manually mark the target to be tracked;
(2) Build the target template:
2a) apply a 3-level wavelet decomposition to the tracking target marked in step (1), and take the detail coefficients of the decomposition as the wavelet feature descriptor;
2b) compute the threshold Thr1 from the wavelet coefficients obtained in step 2a):
Thr1 = Const × (1/(M·N) · Σ_{i,j} |coef1(i,j)|) / 0.6745,
where Const is a modulation factor, Const ∈ [3.5, 4.5]; coef1(i,j) is the wavelet coefficient at point (i,j); M is the number of rows and N the number of columns of the wavelet coefficient matrix;
2c) compare the wavelet feature descriptor obtained in step 2a) with the threshold obtained in step 2b): if a descriptor value is greater than Thr1, set it to 1; otherwise keep its initial value, i.e. retain the more stable wavelet feature points, and take the thresholded result as the target template;
(3) Compute the color histogram of the target template of step 2c):
the color model of the target template is the red-green-blue (RGB) model; uniformly quantize the red (R) channel into 16 bins, the green (G) channel into 16 bins and the blue (B) channel into 16 bins, and obtain the color histogram hist1 by counting the number of pixels of the target template that fall into each bin;
(4) Extract wavelet features in the search region of a new video frame:
4a) input the next frame of the video, apply a 3-level wavelet decomposition to the search region of the target, and take the detail coefficients of the decomposition as the wavelet feature descriptor;
4b) compute the threshold Thr2 from the wavelet coefficients obtained in step 4a):
Thr2 = Const × (1/(P·Q) · Σ_{x,y} |coef2(x,y)|) / 0.6745,
where Const is a modulation factor, Const ∈ [3.5, 4.5]; coef2(x,y) is the wavelet coefficient at point (x,y); P is the number of rows and Q the number of columns of the wavelet coefficient matrix;
4c) compare the wavelet feature descriptor obtained in step 4a) with the threshold obtained in step 4b): if a descriptor value is greater than Thr2, set it to 1; otherwise keep its initial value, i.e. retain the more stable wavelet feature points;
(5) From the wavelet features of the search region obtained in step (4) and of the target template obtained in step (2), find the local minimum point Smin of their distance;
(6) Estimate the target position in the current frame from the position Smin obtained in step (5), and compute the color histogram hist2 of the estimated target as the histogram of the candidate target;
(7) Judge whether the target is occluded:
7a) compute the occlusion coefficient Occ from the color histograms obtained in steps (3) and (6);
7b) compare the occlusion coefficient Occ with the thresholds: if Occ is less than T1 = 0.6, the target is not occluded; take the candidate target obtained in step (6) as the tracking result of the current frame and as the updated target template, then output the tracking result; if Occ is greater than T2 = 0.9, the target is fully occluded; go to step (9); if Occ is between T1 and T2, the target is partially occluded; go to step (8);
(8) Randomly extract n corner features of the target template, n = N, track these corner points in the current frame to obtain the target position of this frame, and output this position as the tracking result of the current frame;
(9) Using the correlation between the current and previous video frames, estimate the target position in the current frame from the speed and direction of the target motion and the tracking result of the previous frame, and output this position as the tracking result of the current frame;
(10) Repeat steps (4) to (9) until the last frame of the video.
The present invention has the following advantages over the prior art:
First, the invention introduces a wavelet feature descriptor that contains the spatial information of the image and can provide rich image features; it overcomes the prior-art shortcoming of inaccurate feature description caused by the loss of spatial information and improves the performance of the image feature descriptor.
Second, the invention introduces the idea of occlusion detection: when the target is occluded, the target template is not updated, which avoids the accumulation of erroneous templates; and partial and full occlusion are handled by different tracking modes. This overcomes the prior-art shortcoming of tracking failure when the target is occluded and improves the robustness of target tracking.
Brief description of the drawings
Fig. 1 is the flow chart of the present invention;
Fig. 2 shows the first video frame input to the present invention, with the target to be tracked marked manually;
Fig. 3 shows the tracking result of a new video frame input to the present invention;
Fig. 4 shows the simulation result of the present invention under occlusion.
Detailed description of embodiments
With reference to Fig. 1, the specific implementation steps of the present invention are as follows:
Step 1. Input the first frame of a video and manually mark the target to be tracked. In the present example, the input video sequence is shown in Fig. 2; it is the first frame of a video of a running vehicle, and the region enclosed by the rectangle in Fig. 2 is the target to be tracked.
Step 2. Build the target template:
2a) Apply a 3-level wavelet decomposition to the tracking target marked in step 1: at the first level, the tracking target is decomposed into 1 approximation signal and 3 detail signals; at the second level, the approximation signal from the first level is further decomposed into 1 approximation signal and 3 detail signals; at the third level, the approximation signal from the second level is again decomposed into 1 approximation signal and 3 detail signals; finally, the 9 detail signals are extracted to form a 9-layer wavelet feature descriptor;
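The 3-level decomposition described above can be sketched as follows. The patent does not name the wavelet basis, so this illustration assumes a Haar wavelet; `haar_level` and `wavelet_descriptor` are hypothetical helper names. Each level splits the current approximation into one approximation band and three detail bands, and the nine detail bands collected over three levels form the 9-layer descriptor.

```python
# Minimal sketch of the 3-level wavelet decomposition of step 2a), assuming
# a Haar wavelet (the patent does not name the basis). Each level splits the
# approximation into LL (approximation) plus LH, HL, HH (detail) bands.

def haar_level(img):
    """One Haar level: img (2h x 2w list of lists) -> (LL, LH, HL, HH)."""
    h, w = len(img) // 2, len(img[0]) // 2
    ll = [[0.0] * w for _ in range(h)]
    lh = [[0.0] * w for _ in range(h)]
    hl = [[0.0] * w for _ in range(h)]
    hh = [[0.0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            a = img[2 * i][2 * j]
            b = img[2 * i][2 * j + 1]
            c = img[2 * i + 1][2 * j]
            d = img[2 * i + 1][2 * j + 1]
            ll[i][j] = (a + b + c + d) / 4.0   # approximation
            lh[i][j] = (a - b + c - d) / 4.0   # horizontal detail
            hl[i][j] = (a + b - c - d) / 4.0   # vertical detail
            hh[i][j] = (a - b - c + d) / 4.0   # diagonal detail
    return ll, lh, hl, hh

def wavelet_descriptor(img, levels=3):
    """Collect the 3 detail bands of each level -> list of 3*levels bands."""
    details = []
    approx = img
    for _ in range(levels):
        approx, lh, hl, hh = haar_level(approx)
        details.extend([lh, hl, hh])
    return details
```

For a target patch whose sides are divisible by 8, three levels produce detail bands of one quarter, one sixteenth and one sixty-fourth the original size, nine bands in total.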
2b) Compute the threshold Thr1 from the wavelet coefficients obtained in step 2a):
Thr1 = Const × (1/(M·N) · Σ_{i,j} |coef1(i,j)|) / 0.6745,
where Const is a modulation factor, Const ∈ [3.5, 4.5]; coef1(i,j) is the wavelet coefficient at point (i,j); M is the number of rows and N the number of columns of the wavelet coefficient matrix;
2c) Compare the wavelet feature descriptor obtained in step 2a) with the threshold obtained in step 2b): if a descriptor value is greater than Thr1, set it to 1; otherwise keep its initial value, i.e. retain the more stable wavelet feature points, and take the thresholded result as the target template.
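Steps 2b) and 2c) can be sketched as follows (the same computation applies to Thr2 in step 4b)). The threshold is Const times the mean absolute wavelet coefficient divided by 0.6745 (the factor relating the median absolute deviation to the standard deviation of a Gaussian); coefficients above it are set to 1 and the rest keep their initial values. `threshold_coeffs` is a hypothetical helper name, and the default Const = 4.0 is simply the midpoint of the stated range [3.5, 4.5].

```python
# Sketch of steps 2b)-2c): threshold on the mean absolute coefficient,
# then mark stable feature points with 1.

def threshold_coeffs(coeffs, const=4.0):
    """coeffs: M x N list of lists of wavelet coefficients."""
    m, n = len(coeffs), len(coeffs[0])
    mean_abs = sum(abs(c) for row in coeffs for c in row) / (m * n)
    thr = const * mean_abs / 0.6745
    # Values above the threshold are set to 1; others keep their initial value.
    return thr, [[1 if c > thr else c for c in row] for row in coeffs]
```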
Step 3. Compute the color histogram of the target template of step 2c):
The color model of the target template is the red-green-blue (RGB) model. Uniformly quantize the red (R) channel into 16 bins, the green (G) channel into 16 bins and the blue (B) channel into 16 bins, and obtain the color histogram hist1 by counting the number of pixels of the target template that fall into each bin.
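Step 3 can be sketched as follows, assuming 8-bit RGB pixels so that each channel quantizes into 16 bins of width 16. `color_histogram` is a hypothetical helper name, and the joint 16×16×16 binning is one plausible reading of the per-channel quantization described above.

```python
# Sketch of step 3: quantize each RGB channel into 16 bins and count the
# pixels of the template falling into each joint (r, g, b) bin -> hist1.

def color_histogram(pixels, bins_per_channel=16):
    """pixels: iterable of (r, g, b) tuples with 8-bit values in 0..255."""
    step = 256 // bins_per_channel          # 16 intensity values per bin
    hist = {}
    for r, g, b in pixels:
        key = (r // step, g // step, b // step)
        hist[key] = hist.get(key, 0) + 1
    return hist
```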
Step 4. Extract wavelet features in the search region of a new video frame:
4a) Input the next frame of the video and apply a 3-level wavelet decomposition to the search region of the target, taking the detail coefficients of the decomposition as the wavelet feature descriptor; the 3-level wavelet decomposition used here is identical to the one described in step 2a);
4b) Compute the threshold Thr2 from the wavelet coefficients obtained in step 4a):
Thr2 = Const × (1/(P·Q) · Σ_{x,y} |coef2(x,y)|) / 0.6745,
where Const is a modulation factor, Const ∈ [3.5, 4.5]; coef2(x,y) is the wavelet coefficient at point (x,y); P is the number of rows and Q the number of columns of the wavelet coefficient matrix;
4c) Compare the wavelet feature descriptor obtained in step 4a) with the threshold obtained in step 4b): if a descriptor value is greater than Thr2, set it to 1; otherwise keep its initial value, i.e. retain the more stable wavelet feature points.
Step 5. From the wavelet features of the search region obtained in step 4 and of the target template obtained in step 2, find the local minimum point Smin of their distance:
At the initial position S0 of the search region, place a search window of the same size as the target template and compute the distance d1 between the target template and the search window:
d1 = Σ_{i,j,k} |wd1(i,j,k) − wd2(i,j,k)|,
where wd1(i,j,k) is the wavelet feature descriptor of the target template at row i, column j, layer k, and wd2(i,j,k) is the wavelet feature descriptor of the search window at row i, column j, layer k. Centered at S0, compute the distances between the target template and the search windows at its four neighbours (up, down, left, right), compare them with d1, and take the position S1 of the minimum distance; then, centered at S1, compute the distances between the target template and the search windows at its four neighbours, and repeat until the local minimum point Smin is reached.
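The four-neighbour descent of step 5 can be sketched as follows. `descriptor_at` is a hypothetical helper that returns the stacked wavelet descriptor of the search window at a given position; the greedy loop moves to whichever neighbour lowers the L1 distance and stops at a local minimum Smin.

```python
# Sketch of step 5: greedy descent over the four-neighbourhood of the
# current position, minimizing the L1 distance d1 to the target template.

def l1_distance(wd1, wd2):
    """Sum of absolute differences over all layers, rows and columns."""
    return sum(abs(a - b)
               for layer1, layer2 in zip(wd1, wd2)
               for row1, row2 in zip(layer1, layer2)
               for a, b in zip(row1, row2))

def local_min_search(template_wd, descriptor_at, s0):
    """Return (Smin, distance) starting the descent from position s0."""
    pos = s0
    d = l1_distance(template_wd, descriptor_at(pos))
    while True:
        best_pos, best_d = pos, d
        for dx, dy in ((0, 1), (0, -1), (1, 0), (-1, 0)):
            cand = (pos[0] + dx, pos[1] + dy)
            cd = l1_distance(template_wd, descriptor_at(cand))
            if cd < best_d:
                best_pos, best_d = cand, cd
        if best_pos == pos:        # no neighbour improves: local minimum
            return pos, d
        pos, d = best_pos, best_d
```

Like any greedy descent, this finds a local (not necessarily global) minimum, which matches the patent's wording "local minimum point Smin".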
Step 6. Estimate the target position in the current frame from the position Smin obtained in step 5, and compute the color histogram hist2 of the estimated target as the histogram of the candidate target.
Step 7. Judge whether the target is occluded:
7a) Compute the occlusion coefficient Occ from the color histograms obtained in steps 3 and 6: in each quantization bin of the RGB channels, if the candidate histogram value hist2 is 0 but the template histogram value hist1 is nonzero, take hist1 as the occlusion parameter of that bin; if hist2 is nonzero and the ratio hist1/hist2 is greater than the threshold Th = 1.2, take the difference hist1 − hist2 as the occlusion parameter; otherwise the occlusion parameter is 0. Finally, the occlusion coefficient Occ is the sum of the occlusion parameters over all bins;
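Step 7a) can be sketched as follows, assuming normalized histograms so that Occ is comparable with the thresholds T1 = 0.6 and T2 = 0.9 used in step 7b). Note that the ratio test hist1/hist2 > Th is an assumption reconstructed from context, since the original formula image did not survive extraction.

```python
# Sketch of step 7a): per-bin occlusion parameter comparing the template
# histogram hist1 with the candidate histogram hist2 (both assumed
# normalized), summed into the occlusion coefficient Occ.

def occlusion_coefficient(hist1, hist2, th=1.2):
    """hist1, hist2: dicts mapping a bin key to its (normalized) count."""
    occ = 0.0
    for b in set(hist1) | set(hist2):
        h1, h2 = hist1.get(b, 0), hist2.get(b, 0)
        if h2 == 0 and h1 != 0:
            occ += h1                       # bin fully lost in the candidate
        elif h2 != 0 and h1 / h2 > th:
            occ += h1 - h2                  # bin partially lost
    return occ
```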
7b) Compare the occlusion coefficient Occ with the thresholds T1 and T2: if Occ is less than T1 = 0.6, the target is not occluded; take the candidate target obtained in step 6 as the tracking result of the current frame and as the updated target template, and output the tracking result, as shown in Fig. 3; if Occ is greater than T2 = 0.9, the target is fully occluded; go to step 9; if Occ is between T1 and T2, the target is partially occluded; go to step 8.
Step 8. Randomly extract n corner features of the target template, n = N, track these corner points in the current frame to obtain the target position of this frame, and output this position as the tracking result of the current frame, as shown in Fig. 4.
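Step 8 does not specify the corner detector or the matching method, so the sketch below assumes corner points are already available and re-locates each one by a small sum-of-squared-differences (SSD) patch search, moving the target by the median displacement. The patch size, search radius and SSD criterion are illustrative choices, not values from the patent.

```python
# Sketch of step 8 under partial occlusion: re-locate n randomly chosen
# corner points of the template in the current frame by SSD patch search,
# and report the median displacement of the tracked corners.

import random

def ssd(frame, x, y, patch):
    """Sum of squared differences between `patch` and frame centred at (x, y)."""
    r = len(patch) // 2
    return sum((frame[x + i - r][y + j - r] - patch[i][j]) ** 2
               for i in range(len(patch)) for j in range(len(patch)))

def track_corners(prev_frame, cur_frame, corners, n=4, radius=2, psize=3):
    random.seed(0)                       # deterministic choice for the sketch
    chosen = random.sample(corners, min(n, len(corners)))
    dxs, dys = [], []
    r = psize // 2
    for (x, y) in chosen:
        # Patch around the corner in the previous frame.
        patch = [[prev_frame[x + i - r][y + j - r] for j in range(psize)]
                 for i in range(psize)]
        # Best-matching offset within the search radius in the current frame.
        best = min(((ssd(cur_frame, x + dx, y + dy, patch), dx, dy)
                    for dx in range(-radius, radius + 1)
                    for dy in range(-radius, radius + 1)))
        dxs.append(best[1]); dys.append(best[2])
    dxs.sort(); dys.sort()
    return dxs[len(dxs) // 2], dys[len(dys) // 2]   # median displacement
```

Applying the median displacement to the previous target position gives the target location of the current frame even when some corners are occluded or mismatched.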
Step 9. Using the correlation between the current and previous video frames, estimate the target position in the current frame from the speed and direction of the target motion and the tracking result of the previous frame, and output this position as the tracking result of the current frame.
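Step 9 amounts to a constant-velocity prediction; a minimal sketch, assuming the "speed and direction of the target motion" is taken from the displacement between the two most recent tracking results:

```python
# Sketch of step 9 under full occlusion: carry the target forward with the
# velocity implied by the last two tracking results.

def predict_position(pos_prev2, pos_prev1):
    """pos_prev2, pos_prev1: (x, y) tracking results of frames t-2 and t-1."""
    vx = pos_prev1[0] - pos_prev2[0]
    vy = pos_prev1[1] - pos_prev2[1]
    return (pos_prev1[0] + vx, pos_prev1[1] + vy)
```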
Step 10. Repeat steps 4 to 9 until the last frame of the video.
The effect of the present invention is further illustrated by the following simulation:
Simulation content: first, the first frame of a video of a running vehicle is input, as shown in Fig. 2, and the region enclosed by the rectangle in Fig. 2 is the target to be tracked; the wavelet feature descriptor of the target is extracted and the target template is built. Next, the wavelet feature descriptor of the search region in the current frame is extracted, its local minimum distance to the target template is found, and the occlusion detector is started: when the target is not occluded, the position of the local minimum is output as the tracking result, as shown in Fig. 3; when the target is occluded, corner features and motion prediction complete the tracking, as shown in Fig. 4. Finally, the above tracking steps are repeated until the last frame of the video.
The results in Fig. 3 and Fig. 4 show that the present invention can effectively track a target that is occluded or moves rapidly.

Claims (4)

1. A target tracking method based on wavelet descriptors, comprising the following steps:
(1) input the first frame of a video and manually mark the target to be tracked;
(2) build the target template:
2a) apply a 3-level wavelet decomposition to the tracking target marked in step (1), and take the detail coefficients of the decomposition as the wavelet feature descriptor;
2b) compute the threshold Thr1 from the wavelet coefficients obtained in step 2a):
Thr1 = Const × (1/(M·N) · Σ_{i,j} |coef1(i,j)|) / 0.6745,
where Const is a modulation factor, Const ∈ [3.5, 4.5]; coef1(i,j) is the wavelet coefficient at point (i,j); M is the number of rows and N the number of columns of the wavelet coefficient matrix;
2c) compare the wavelet feature descriptor obtained in step 2a) with the threshold obtained in step 2b): if a descriptor value is greater than Thr1, set it to 1; otherwise keep its initial value, i.e. retain the more stable wavelet feature points, and take the thresholded result as the target template;
(3) compute the color histogram of the target template of step 2c):
the color model of the target template is the red-green-blue (RGB) model; uniformly quantize the red (R) channel into 16 bins, the green (G) channel into 16 bins and the blue (B) channel into 16 bins, and obtain the color histogram hist1 by counting the number of pixels of the target template that fall into each bin;
(4) extract wavelet features in the search region of a new video frame:
4a) input the next frame of the video, apply a 3-level wavelet decomposition to the search region of the target, and take the detail coefficients of the decomposition as the wavelet feature descriptor;
4b) compute the threshold Thr2 from the wavelet coefficients obtained in step 4a):
Thr2 = Const × (1/(P·Q) · Σ_{x,y} |coef2(x,y)|) / 0.6745,
where Const is a modulation factor, Const ∈ [3.5, 4.5]; coef2(x,y) is the wavelet coefficient at point (x,y); P is the number of rows and Q the number of columns of the wavelet coefficient matrix;
4c) compare the wavelet feature descriptor obtained in step 4a) with the threshold obtained in step 4b): if a descriptor value is greater than Thr2, set it to 1; otherwise keep its initial value, i.e. retain the more stable wavelet feature points;
(5) from the wavelet features of the search region obtained in step (4) and of the target template obtained in step (2), find the local minimum point Smin of their distance;
(6) estimate the target position in the current frame from the position Smin obtained in step (5), and compute the color histogram hist2 of the estimated target as the histogram of the candidate target;
(7) judge whether the target is occluded:
7a) compute the occlusion coefficient Occ from the color histograms obtained in steps (3) and (6);
7b) compare the occlusion coefficient Occ with the thresholds: if Occ is less than T1 = 0.6, the target is not occluded; take the candidate target obtained in step (6) as the tracking result of the current frame and as the updated target template, then output the tracking result; if Occ is greater than T2 = 0.9, the target is fully occluded; go to step (9); if Occ is between T1 and T2, the target is partially occluded; go to step (8);
(8) randomly extract n corner features of the target template, n = N, track these corner points in the current frame to obtain the target position of this frame, and output this position as the tracking result of the current frame;
(9) using the correlation between the current and previous video frames, estimate the target position in the current frame from the speed and direction of the target motion and the tracking result of the previous frame, and output this position as the tracking result of the current frame;
(10) repeat steps (4) to (9) until the last frame of the video.
2. The target tracking method based on wavelet descriptors according to claim 1, wherein the 3-level wavelet decomposition in steps 2a) and 4a) is an image decomposition realized by the wavelet transform: at the first level, the image is decomposed into 1 approximation signal and 3 detail signals; at the second level, the approximation signal from the first level is further decomposed into 1 approximation signal and 3 detail signals; at the third level, the approximation signal from the second level is again decomposed into 1 approximation signal and 3 detail signals; finally, the 9 detail signals are extracted to form a 9-layer wavelet feature descriptor.
3. The target tracking method based on wavelet descriptors according to claim 1, wherein step (5) is solved as follows: at the initial position S0 of the search region, place a search window of the same size as the target template and compute the distance d1 between the target template and the search window; centered at S0, compute the distances between the target template and the search windows at its four neighbours (up, down, left, right), compare them with d1, and take the position S1 of the minimum distance; then, centered at S1, compute the distances between the target template and the search windows at its four neighbours, and repeat until the local minimum point Smin is reached.
4. The target tracking method based on wavelet descriptors according to claim 1, wherein the occlusion coefficient Occ in step 7a) is obtained from the color histograms of the candidate target and the target template: in each quantization bin of the RGB channels, if the candidate histogram value hist2 is 0 but the template histogram value hist1 is nonzero, take hist1 as the occlusion parameter of that bin; if hist2 is nonzero and the ratio hist1/hist2 is greater than the threshold Th = 1.2, take the difference hist1 − hist2 as the occlusion parameter; otherwise the occlusion parameter is 0; finally, the occlusion coefficient Occ is the sum of the occlusion parameters over all bins.
CN201210414785.8A 2012-10-25 2012-10-25 Method for tracking target based on Wavelet Descriptor Expired - Fee Related CN103778641B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201210414785.8A CN103778641B (en) 2012-10-25 2012-10-25 Method for tracking target based on Wavelet Descriptor


Publications (2)

Publication Number Publication Date
CN103778641A CN103778641A (en) 2014-05-07
CN103778641B true CN103778641B (en) 2016-08-03

Family

ID=50570837

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201210414785.8A Expired - Fee Related CN103778641B (en) 2012-10-25 2012-10-25 Method for tracking target based on Wavelet Descriptor

Country Status (1)

Country Link
CN (1) CN103778641B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104574384B (en) * 2014-12-26 2018-04-27 北京航天控制仪器研究所 A kind of target based on MSER and SURF Feature Points Matchings loses catching method again
CN105761277B (en) * 2016-02-01 2018-09-14 西安理工大学 A kind of motion target tracking method based on light stream
CN108269269A (en) * 2016-12-30 2018-07-10 纳恩博(北京)科技有限公司 Method for tracking target and device

Citations (3)

Publication number Priority date Publication date Assignee Title
CN102156993A (en) * 2011-04-15 2011-08-17 北京航空航天大学 Continuous wavelet transform object tracking method based on space-time processing block
CN102629385A (en) * 2012-02-28 2012-08-08 中山大学 Object matching and tracking system based on multiple camera information fusion and method thereof
CN102750708A (en) * 2012-05-11 2012-10-24 天津大学 Affine motion target tracing algorithm based on fast robust feature matching

Family Cites Families (3)

Publication number Priority date Publication date Assignee Title
US7720295B2 (en) * 2004-06-29 2010-05-18 Sanyo Electric Co., Ltd. Method and apparatus for coding images with different image qualities for each region thereof, and method and apparatus capable of decoding the images by adjusting the image quality
TR200804969A2 (en) * 2008-07-04 2010-01-21 Aselsan Elektron�K Sanay� Ve T�Caret Anon�M ��Rket� Complex wavelet tracer
US20100026897A1 (en) * 2008-07-30 2010-02-04 Cinnafilm, Inc. Method, Apparatus, and Computer Software for Modifying Moving Images Via Motion Compensation Vectors, Degrain/Denoise, and Superresolution


Also Published As

Publication number Publication date
CN103778641A (en) 2014-05-07


Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee
Granted publication date: 20160803
Termination date: 20211025