CN103778641A - Target tracking method based on wavelet descriptor - Google Patents


Publication number
CN103778641A
Authority
CN
China
Prior art date
Legal status: Granted
Application number
CN201210414785.8A
Other languages
Chinese (zh)
Other versions
CN103778641B (en)
Inventor
田小林
焦李成
刘朵
张小华
缑水平
朱虎明
钟桦
马文萍
Current Assignee
Xidian University
Original Assignee
Xidian University
Priority date
Filing date
Publication date
Application filed by Xidian University filed Critical Xidian University
Priority to CN201210414785.8A priority Critical patent/CN103778641B/en
Publication of CN103778641A publication Critical patent/CN103778641A/en
Application granted granted Critical
Publication of CN103778641B publication Critical patent/CN103778641B/en
Status: Expired - Fee Related


Abstract

The invention discloses a target tracking method based on a wavelet descriptor, which mainly addresses the tracking failures that occur in the prior art when the target is occluded or changes rapidly. The method comprises the steps of: (1) inputting the first frame of a video and manually marking the target to be tracked; (2) establishing a target template; (3) computing the color histogram of the target template; (4) extracting wavelet features in the search area of a new video frame; (5) finding the local minimum of the distance to the target template; (6) computing the color histogram of the candidate target; (7) judging whether the target is occluded, tracking extracted corner features if the target is partially occluded, or completing the tracking with motion estimation if the target is fully occluded; and (8) repeating steps (4) to (7) until the video ends. Compared with prior-art methods, the invention improves tracking robustness when the target is partially occluded or changes rapidly.

Description

Target tracking method based on a wavelet descriptor
Technical field
The invention belongs to the technical field of image processing and relates to a video target tracking method applicable to intelligent monitoring, target tracking and human-machine interfaces.
Background art
Target tracking in image sequences is an important component of image-processing applications: an input video sequence is analyzed to determine the position of the target in each frame and to obtain the relevant parameters. Target tracking is one of the key technologies of computer vision, drawing on image processing, pattern recognition and artificial intelligence, and it is widely used in robot visual guidance, security monitoring, traffic control, video compression, meteorological analysis and many other areas. In the military domain it has been successfully applied to imaging guidance, military reconnaissance and weapon surveillance; in the civilian domain, visual surveillance is used throughout daily life. Target tracking can be applied to the monitoring of communities and critical facilities, and to real-time vehicle tracking in intelligent transportation systems, where it yields valuable traffic-flow parameters such as traffic volume, vehicle speed and vehicle density, and can also detect accidents, breakdowns and other emergencies.
The patent application "A visual target recognition and tracking method" (application number 201010537843.7, publication number CN101986348A), filed by Shanghai Dianji University, discloses a visual target recognition and tracking method. In that method the 0th-frame search window defaults to the size of the full image; the bounding box of the target is obtained from the first frame and then used to predict the search window, which is computed from the bounding box and the feature points within it. A predictive search-window method is thus built on top of the tracker, and the marked target undergoes motion prediction and tracking. Although this improves real-time performance to some extent, the predictive search window cannot track accurately when the moving target is occluded or changes rapidly.
The patent application "Gray-scale target tracking algorithm based on edge information and mean shift" (application number CN201010238378.7, publication number CN101916446A), filed by Beihang University (Beijing University of Aeronautics and Astronautics), discloses a gray-scale tracking algorithm based on edge information and mean shift. The method preprocesses the first video frame and extracts the target feature template, predicts a reference position for the target in the current frame with a Kalman filter, searches for the optimal target location around that position with the mean-shift algorithm, and updates the target template at fixed intervals in combination with Canny filtering. Although the method can track a target under changes of shape, gray-level distribution and background, the Kalman filter is a linear predictor, so when a target with nonlinear motion is occluded the tracker tends to drift and fail.
Summary of the invention
The object of the invention is to overcome the above deficiencies of the prior art by proposing a target tracking method based on a wavelet descriptor that keeps tracking the target correctly when it is occluded or moves rapidly, thereby improving the robustness of target tracking.
The idea of the invention is as follows: the wavelet feature descriptor of the target extracted in the first frame serves as the target template; the wavelet feature descriptor of the search area is extracted in the current frame, and the local minimum of the distance between the two determines the target position. An occlusion-judgment mechanism improves the accuracy of template updating, and when the target is occluded, corner features and motion prediction keep the tracking correct. The specific implementation steps are as follows:
(1) Input the first frame of a video and manually mark the target to be tracked;
(2) Establish the target template:
2a) Perform a 3-level wavelet decomposition of the target marked in step (1), and take the extracted detail components of the decomposition as the wavelet feature descriptor;
2b) Compute the threshold Thr1 from the wavelet coefficients obtained in step 2a):
Thr1 = Const * ( (1/(M*N)) * Σ_{i,j} |coef1(i,j)| ) / 0.6745,
where Const is a modulation factor, Const ∈ [3.5, 4.5], coef1(i,j) is the wavelet coefficient at point (i,j), M is the number of rows of the wavelet coefficient matrix and N is the number of columns;
2c) Compare the wavelet feature descriptor obtained in step 2a) with the threshold obtained in step 2b): descriptor values greater than Thr1 are set to 1 and all other values keep their initial values, so that the more stable wavelet feature points are retained; the thresholded result is taken as the target template;
(3) Compute the color histogram of the target template of step 2c):
The target template uses the red-green-blue (RGB) color model; each of the R, G and B channels is uniformly quantized into 16 intervals, and the number of template pixels whose color falls into each interval is counted to obtain the color histogram hist1;
(4) Extract wavelet features in the search area of a new video frame:
4a) Input the next frame of the video, perform a 3-level wavelet decomposition of the target search area, and take the detail components of the decomposition as the wavelet feature descriptor;
4b) Compute the threshold Thr2 from the wavelet coefficients obtained in step 4a):
Thr2 = Const * ( (1/(P*Q)) * Σ_{x,y} |coef2(x,y)| ) / 0.6745,
where Const is a modulation factor, Const ∈ [3.5, 4.5], coef2(x,y) is the wavelet coefficient at point (x,y), P is the number of rows of the wavelet coefficient matrix and Q is the number of columns;
4c) Compare the wavelet feature descriptor obtained in step 4a) with the threshold obtained in step 4b): descriptor values greater than Thr2 are set to 1 and all other values keep their initial values, so that the more stable wavelet feature points are retained;
(5) From the wavelet features of the search area obtained in step (4) and the wavelet features of the target template obtained in step (2), find the local minimum S_min of the distance between them;
(6) Estimate the target position in the current frame from the position S_min obtained in step (5), and compute the color histogram hist2 of the estimated target as the candidate-target histogram;
(7) Judge whether the target is occluded:
7a) Compute the occlusion coefficient Occ from the color histograms obtained in step (3) and step (6);
7b) Compare the occlusion coefficient Occ with two thresholds: if Occ is less than T1 = 0.6, the target is not occluded; the candidate target obtained in step (6) is taken as the tracking result of the current frame and as the updated target template, and the tracking result is output. If Occ is greater than T2 = 0.9, the target is fully occluded; go to step (9). If Occ lies between T1 and T2, the target is partially occluded; go to step (8);
(8) Randomly extract N corner features from the target template, track these corner points in the current frame to obtain the target position in this frame, and output that position as the tracking result of the current frame;
(9) Using the correlation between the current video frame and the previous one, estimate the target position in the current frame from the speed and direction of the target motion and the tracking result of the previous frame, and output that position as the tracking result of the current frame;
(10) Repeat steps (4) to (9) until the last frame of the video.
Compared with the prior art, the invention has the following advantages:
First, the invention introduces a wavelet feature descriptor whose spatial information provides rich image features, overcoming the inaccurate feature description caused by the loss of spatial information in the prior art and improving the performance of the image feature descriptor.
Second, the invention introduces occlusion judgment: the target template is not updated while the target is occluded, which avoids accumulating erroneous templates, and partial and full occlusion are handled with different tracking strategies. This overcomes the tracking failures of the prior art under occlusion and improves tracking robustness.
Brief description of the drawings
Fig. 1 is the flowchart of the invention;
Fig. 2 shows the first input video frame with the target to be tracked marked by hand;
Fig. 3 shows the tracking result for a newly input video frame;
Fig. 4 shows the simulation result of the invention under occlusion.
Detailed description of the embodiments
With reference to Fig. 1, the specific implementation steps of the invention are as follows:
Step 1. Input the first frame of a video and manually mark the target to be tracked. The example input sequence of the invention is shown in Fig. 2: it is the first frame of a video of a moving vehicle, and the region enclosed by the rectangle in Fig. 2 is the target to be tracked.
Step 2. Establish the target template:
2a) Decompose the target marked in Step 1 into 3 levels with the wavelet transform: the first level decomposes the target into 1 approximation signal and 3 detail signals; the second level decomposes the first-level approximation signal into 1 approximation signal and 3 detail signals; the third level again decomposes the second-level approximation signal into 1 approximation signal and 3 detail signals. Finally, the 9 detail signals are extracted to form a 9-subband wavelet feature descriptor;
2b) Compute the threshold Thr1 from the wavelet coefficients obtained in step 2a):
Thr1 = Const * ( (1/(M*N)) * Σ_{i,j} |coef1(i,j)| ) / 0.6745,
where Const is a modulation factor, Const ∈ [3.5, 4.5], coef1(i,j) is the wavelet coefficient at point (i,j), M is the number of rows of the wavelet coefficient matrix and N is the number of columns;
2c) Compare the wavelet feature descriptor obtained in step 2a) with the threshold obtained in step 2b): descriptor values greater than Thr1 are set to 1 and all other values keep their initial values, so that the more stable wavelet feature points are retained; the thresholded result is taken as the target template.
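Steps 2a) to 2c) can be sketched in code. The following is an illustrative reconstruction, not part of the original disclosure: it assumes a Haar wavelet (the patent does not name the wavelet family) and applies a single threshold computed over all detail coefficients (the patent's per-matrix formulation leaves this open); the function names are hypothetical.

```python
import numpy as np

def haar_level(a):
    """One level of a 2-D Haar decomposition: (approximation, (LH, HL, HH))."""
    a = a[: a.shape[0] // 2 * 2, : a.shape[1] // 2 * 2]  # trim to even size
    tl, tr = a[0::2, 0::2], a[0::2, 1::2]
    bl, br = a[1::2, 0::2], a[1::2, 1::2]
    ll = (tl + tr + bl + br) / 4.0   # approximation signal
    lh = (tl + tr - bl - br) / 4.0   # horizontal detail
    hl = (tl - tr + bl - br) / 4.0   # vertical detail
    hh = (tl - tr - bl + br) / 4.0   # diagonal detail
    return ll, (lh, hl, hh)

def wavelet_descriptor(patch, levels=3, const=4.0):
    """Steps 2a)-2c): collect the 9 detail subbands of a 3-level
    decomposition, then set values above Thr to 1 and keep the rest."""
    details = []
    approx = np.asarray(patch, dtype=float)
    for _ in range(levels):
        approx, (lh, hl, hh) = haar_level(approx)
        details += [lh, hl, hh]
    coefs = np.concatenate([np.abs(d).ravel() for d in details])
    thr = const * coefs.mean() / 0.6745      # Thr1 of step 2b)
    return [np.where(d > thr, 1.0, d) for d in details]
```

An 8x8 patch yields subbands of sizes 4x4, 2x2 and 1x1, three of each, nine in total.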
Step 3. Compute the color histogram of the target template of step 2c):
The target template uses the red-green-blue (RGB) color model; each of the R, G and B channels is uniformly quantized into 16 intervals, and the number of template pixels whose color falls into each interval is counted to obtain the color histogram hist1.
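Step 3 can be sketched as follows. This is an illustrative reconstruction: it assumes the three 16-interval channel histograms are concatenated into one 48-bin vector, which the patent does not state explicitly, and the function name is hypothetical.

```python
import numpy as np

def color_histogram(patch):
    """Step 3: quantize each RGB channel of a (H, W, 3) patch into 16
    uniform intervals and count the pixels falling into each interval."""
    hists = []
    for c in range(3):                       # R, G, B channels
        h, _ = np.histogram(patch[..., c], bins=16, range=(0, 256))
        hists.append(h)
    return np.concatenate(hists)             # 48 bins: R, then G, then B
```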
Step 4. Extract wavelet features in the search area of a new video frame:
4a) Input the next frame of the video and perform a 3-level wavelet decomposition of the target search area, taking the detail components of the decomposition as the wavelet feature descriptor; the 3-level decomposition used here is identical to that described in step 2a);
4b) Compute the threshold Thr2 from the wavelet coefficients obtained in step 4a):
Thr2 = Const * ( (1/(P*Q)) * Σ_{x,y} |coef2(x,y)| ) / 0.6745,
where Const is a modulation factor, Const ∈ [3.5, 4.5], coef2(x,y) is the wavelet coefficient at point (x,y), P is the number of rows of the wavelet coefficient matrix and Q is the number of columns;
4c) Compare the wavelet feature descriptor obtained in step 4a) with the threshold obtained in step 4b): descriptor values greater than Thr2 are set to 1 and all other values keep their initial values, so that the more stable wavelet feature points are retained.
Step 5. From the wavelet features of the search area obtained in Step 4 and the wavelet features of the target template obtained in Step 2, find the local minimum S_min of the distance between them.
At the initial position S0 of the search area, place a search window of the same size as the target template and compute the distance d1 between the target template and the search window:
d1 = Σ_{i,j,k} |wd1(i,j,k) - wd2(i,j,k)|,
where wd1(i,j,k) is the wavelet feature descriptor of the target template at row i, column j and subband k, and wd2(i,j,k) is that of the search window. Centered at S0, compute the distances between the target template and the search windows at the four neighbors (up, down, left and right), compare them with d1, and take the position S1 with the minimum distance; then repeat the same neighborhood search centered at S1, and so on, until the local minimum S_min is reached.
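The distance d1 and the neighborhood descent of Step 5 can be sketched as below. This is an illustrative reconstruction: `dist_at` is a hypothetical helper that evaluates d1 for the search window at a given position, and image-boundary handling is omitted.

```python
import numpy as np

def descriptor_distance(wd1, wd2):
    """d1 = sum over i, j, k of |wd1(i,j,k) - wd2(i,j,k)| (Step 5),
    with each descriptor given as a list of per-subband arrays."""
    return float(sum(np.abs(a - b).sum() for a, b in zip(wd1, wd2)))

def local_minimum_search(dist_at, s0, max_iter=100):
    """Greedy descent of Step 5: starting from s0, repeatedly move to
    whichever of the four neighbours (up/down/left/right) has a smaller
    window-to-template distance, until no neighbour improves (S_min)."""
    pos, best = s0, dist_at(s0)
    for _ in range(max_iter):
        x, y = pos
        nxt = min([(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)], key=dist_at)
        if dist_at(nxt) >= best:
            return pos                  # local minimum S_min reached
        pos, best = nxt, dist_at(nxt)
    return pos
```

Note that, like the patented procedure, this descent only guarantees a local minimum of the distance surface, not the global one.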
Step 6. Estimate the target position in the current frame from the position S_min obtained in Step 5, and compute the color histogram hist2 of the estimated target as the candidate-target histogram.
Step 7. Judge whether the target is occluded:
7a) Compute the occlusion coefficient Occ from the color histograms obtained in Steps 3 and 6. For each quantization interval of the R, G and B channels: if the candidate-target histogram value hist2 is 0 but the target-template histogram value hist1 is nonzero, the occlusion parameter of that interval is the template value hist1; if hist2 is nonzero and the ratio hist1/hist2 is greater than the threshold Th = 1.2, the occlusion parameter is the difference hist1 - hist2; otherwise the occlusion parameter is 0. Finally, the occlusion parameters of all intervals are summed to give the occlusion coefficient Occ;
7b) Compare the occlusion coefficient Occ with the thresholds: if Occ is less than T1 = 0.6, the target is not occluded; the candidate target obtained in Step 6 is taken as the tracking result of the current frame and as the updated target template, and the tracking result is output, as shown in Fig. 3. If Occ is greater than T2 = 0.9, the target is fully occluded; go to Step 9. If Occ lies between T1 and T2, the target is partially occluded; go to Step 8.
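Steps 7a) and 7b) can be sketched as follows. This is an illustrative reconstruction that assumes normalized histograms, and it reads the per-interval test of step 7a) (shown only as an unrendered formula image in the published text) as the ratio hist1/hist2; the function names are hypothetical.

```python
def occlusion_coefficient(hist1, hist2, th=1.2):
    """Step 7a): occlusion coefficient Occ from the template histogram
    hist1 and the candidate histogram hist2 (same binning, normalised)."""
    occ = 0.0
    for h1, h2 in zip(hist1, hist2):
        if h2 == 0 and h1 != 0:
            occ += h1                # colour bin vanished entirely
        elif h2 != 0 and h1 / h2 > th:
            occ += h1 - h2           # colour bin partially covered
    return occ

def occlusion_state(occ, t1=0.6, t2=0.9):
    """Step 7b): map Occ to no / partial / full occlusion."""
    if occ < t1:
        return "none"
    if occ > t2:
        return "full"
    return "partial"
```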
Step 8. Randomly extract N corner features from the target template, track these corner points in the current frame to obtain the target position in this frame, and output that position as the tracking result of the current frame, as shown in Fig. 4.
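Step 8 leaves the corner tracker itself unspecified; one minimal stand-in is to match a small patch around each corner point by sum of squared differences over a local search window, as sketched below. This is an illustrative reconstruction: `track_point`, the patch size and the search radius are all assumptions, not part of the disclosure.

```python
import numpy as np

def track_point(prev, cur, pt, patch=3, search=5):
    """Track one corner point from frame `prev` to frame `cur` by
    minimising the SSD of a (2*patch+1)-square template over a
    (2*search+1)-square search window around the old position."""
    y, x = pt
    tpl = prev[y - patch:y + patch + 1, x - patch:x + patch + 1].astype(float)
    best, best_pt = float("inf"), pt
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            yy, xx = y + dy, x + dx
            cand = cur[yy - patch:yy + patch + 1, xx - patch:xx + patch + 1]
            if cand.shape != tpl.shape:
                continue             # window falls outside the frame
            ssd = float(((cand.astype(float) - tpl) ** 2).sum())
            if ssd < best:
                best, best_pt = ssd, (yy, xx)
    return best_pt
```

The target position of Step 8 would then be aggregated (e.g. averaged) over the N tracked corner points.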
Step 9. Using the correlation between the current video frame and the previous one, estimate the target position in the current frame from the speed and direction of the target motion and the tracking result of the previous frame, and output that position as the tracking result of the current frame.
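Step 9 can be sketched with a constant-velocity extrapolation from the last two tracked positions. The patent speaks only of the speed and direction of the target motion, so the exact motion model here is an assumption and the function name is hypothetical.

```python
def predict_position(prev_pos, prev_prev_pos):
    """Step 9: extrapolate the fully occluded target with the velocity
    implied by the last two tracked positions (constant-velocity model)."""
    vx = prev_pos[0] - prev_prev_pos[0]   # per-frame displacement in x
    vy = prev_pos[1] - prev_prev_pos[1]   # per-frame displacement in y
    return (prev_pos[0] + vx, prev_pos[1] + vy)
```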
Step 10. Repeat Steps 4 to 9 until the last frame of the video.
The effect of the invention is further illustrated by the following simulation.
Simulation content: first, the first frame of a video of a moving vehicle is input, as in Fig. 2, where the region enclosed by the rectangle is the target to be tracked; the wavelet feature descriptor of the target is extracted and the target template is established. Next, the wavelet feature descriptor of the search area in the current frame is extracted, the local minimum of its distance to the target template is found, and the occlusion detector is invoked: when the target is not occluded, the local-minimum position is output as the tracking result, as shown in Fig. 3; when the target is occluded, corner features and motion prediction complete the tracking, as shown in Fig. 4. Finally, the tracking steps above are repeated until the last frame of the video.
The experimental results of Fig. 3 and Fig. 4 show that the invention can effectively track occluded and rapidly moving targets.

Claims (4)

1. A target tracking method based on a wavelet descriptor, comprising the following steps:
(1) Input the first frame of a video and manually mark the target to be tracked;
(2) Establish the target template:
2a) Perform a 3-level wavelet decomposition of the target marked in step (1), and take the extracted detail components of the decomposition as the wavelet feature descriptor;
2b) Compute the threshold Thr1 from the wavelet coefficients obtained in step 2a):
Thr1 = Const * ( (1/(M*N)) * Σ_{i,j} |coef1(i,j)| ) / 0.6745,
where Const is a modulation factor, Const ∈ [3.5, 4.5], coef1(i,j) is the wavelet coefficient at point (i,j), M is the number of rows of the wavelet coefficient matrix and N is the number of columns;
2c) Compare the wavelet feature descriptor obtained in step 2a) with the threshold obtained in step 2b): descriptor values greater than Thr1 are set to 1 and all other values keep their initial values, so that the more stable wavelet feature points are retained; the thresholded result is taken as the target template;
(3) Compute the color histogram of the target template of step 2c):
The target template uses the red-green-blue (RGB) color model; each of the R, G and B channels is uniformly quantized into 16 intervals, and the number of template pixels whose color falls into each interval is counted to obtain the color histogram hist1;
(4) Extract wavelet features in the search area of a new video frame:
4a) Input the next frame of the video, perform a 3-level wavelet decomposition of the target search area, and take the detail components of the decomposition as the wavelet feature descriptor;
4b) Compute the threshold Thr2 from the wavelet coefficients obtained in step 4a):
Thr2 = Const * ( (1/(P*Q)) * Σ_{x,y} |coef2(x,y)| ) / 0.6745,
where Const is a modulation factor, Const ∈ [3.5, 4.5], coef2(x,y) is the wavelet coefficient at point (x,y), P is the number of rows of the wavelet coefficient matrix and Q is the number of columns;
4c) Compare the wavelet feature descriptor obtained in step 4a) with the threshold obtained in step 4b): descriptor values greater than Thr2 are set to 1 and all other values keep their initial values, so that the more stable wavelet feature points are retained;
(5) From the wavelet features of the search area obtained in step (4) and the wavelet features of the target template obtained in step (2), find the local minimum S_min of the distance between them;
(6) Estimate the target position in the current frame from the position S_min obtained in step (5), and compute the color histogram hist2 of the estimated target as the candidate-target histogram;
(7) Judge whether the target is occluded:
7a) Compute the occlusion coefficient Occ from the color histograms obtained in step (3) and step (6);
7b) Compare the occlusion coefficient Occ with two thresholds: if Occ is less than T1 = 0.6, the target is not occluded; the candidate target obtained in step (6) is taken as the tracking result of the current frame and as the updated target template, and the tracking result is output. If Occ is greater than T2 = 0.9, the target is fully occluded; go to step (9). If Occ lies between T1 and T2, the target is partially occluded; go to step (8);
(8) Randomly extract N corner features from the target template, track these corner points in the current frame to obtain the target position in this frame, and output that position as the tracking result of the current frame;
(9) Using the correlation between the current video frame and the previous one, estimate the target position in the current frame from the speed and direction of the target motion and the tracking result of the previous frame, and output that position as the tracking result of the current frame;
(10) Repeat steps (4) to (9) until the last frame of the video.
2. The target tracking method based on a wavelet descriptor according to claim 1, wherein the 3-level wavelet decomposition of steps 2a) and 4a) decomposes the image by the wavelet transform: the first level decomposes the image into 1 approximation signal and 3 detail signals; the second level decomposes the first-level approximation signal into 1 approximation signal and 3 detail signals; the third level again decomposes the second-level approximation signal into 1 approximation signal and 3 detail signals; finally, the 9 detail signals are extracted to form a 9-subband wavelet feature descriptor.
3. The target tracking method based on a wavelet descriptor according to claim 1, wherein step (5) is solved as follows: at the initial position S0 of the search area, place a search window of the same size as the target template and compute the distance d1 between the target template and the search window; centered at S0, compute the distances between the target template and the search windows at the four neighbors (up, down, left and right), compare them with d1, and take the position S1 with the minimum distance; then repeat the same neighborhood search centered at S1, and so on, until the local minimum S_min is reached.
4. The target tracking method based on a wavelet descriptor according to claim 1, wherein the occlusion coefficient Occ of step 7a) is obtained from the color histograms of the candidate target and the target template. For each quantization interval of the R, G and B channels: if the candidate-target histogram value hist2 is 0 but the target-template histogram value hist1 is nonzero, the occlusion parameter of that interval is the template value hist1; if hist2 is nonzero and the ratio hist1/hist2 is greater than the threshold Th = 1.2, the occlusion parameter is the difference hist1 - hist2; otherwise the occlusion parameter is 0. Finally, the occlusion parameters of all intervals are summed to give the occlusion coefficient Occ.
CN201210414785.8A 2012-10-25 2012-10-25 Target tracking method based on wavelet descriptor Expired - Fee Related CN103778641B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201210414785.8A CN103778641B (en) 2012-10-25 2012-10-25 Target tracking method based on wavelet descriptor


Publications (2)

Publication Number Publication Date
CN103778641A true CN103778641A (en) 2014-05-07
CN103778641B CN103778641B (en) 2016-08-03

Family

ID=50570837

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201210414785.8A Expired - Fee Related CN103778641B (en) 2012-10-25 2012-10-25 Target tracking method based on wavelet descriptor

Country Status (1)

Country Link
CN (1) CN103778641B (en)


Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100135587A1 (en) * 2004-06-29 2010-06-03 Sanyo Electric Co., Ltd. Method and apparatus for coding images with different image qualities for each region thereof, and method and apparatus capable of decoding the images by adjusting the image quality
US20110103649A1 (en) * 2008-07-04 2011-05-05 Aselsan Elektronik Sanayi Ve Ticaret Anonim Sirket Complex Wavelet Tracker
US20100026897A1 (en) * 2008-07-30 2010-02-04 Cinnafilm, Inc. Method, Apparatus, and Computer Software for Modifying Moving Images Via Motion Compensation Vectors, Degrain/Denoise, and Superresolution
CN102156993A (en) * 2011-04-15 2011-08-17 北京航空航天大学 Continuous wavelet transform object tracking method based on space-time processing block
CN102629385A (en) * 2012-02-28 2012-08-08 中山大学 Object matching and tracking system based on multiple camera information fusion and method thereof
CN102750708A (en) * 2012-05-11 2012-10-24 天津大学 Affine motion target tracing algorithm based on fast robust feature matching

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104574384A (en) * 2014-12-26 2015-04-29 北京航天控制仪器研究所 Lost target recapturing method based on MSER and SURF feature point matching
CN104574384B (en) * 2014-12-26 2018-04-27 北京航天控制仪器研究所 A kind of target based on MSER and SURF Feature Points Matchings loses catching method again
CN105761277B (en) * 2016-02-01 2018-09-14 西安理工大学 A kind of motion target tracking method based on light stream
CN108269269A (en) * 2016-12-30 2018-07-10 纳恩博(北京)科技有限公司 Method for tracking target and device

Also Published As

Publication number Publication date
CN103778641B (en) 2016-08-03


Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20160803

Termination date: 20211025
