CN102831618B - Hough forest-based video target tracking method - Google Patents

Hough forest-based video target tracking method

Info

Publication number
CN102831618B
CN102831618B (application CN201210253267.2A; publication CN102831618A)
Authority
CN
China
Prior art keywords
target
image
image block
feature
video
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN201210253267.2A
Other languages
Chinese (zh)
Other versions
CN102831618A (en)
Inventor
田小林
焦李成
李敏敏
张小华
王桂婷
朱虎明
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xidian University
Original Assignee
Xidian University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xidian University
Priority to CN201210253267.2A
Publication of CN102831618A
Application granted
Publication of CN102831618B
Status: Expired - Fee Related

Landscapes

  • Image Analysis (AREA)

Abstract

The invention discloses a Hough forest-based video target tracking method, which mainly solves the prior-art problem that tracking easily fails when the target is partially occluded or undergoes non-rigid deformation. The method comprises the following steps: (1) inputting the first frame of a video and manually marking the target to be tracked; (2) extracting the features of the video image from the input first frame; (3) establishing and initializing a Hough forest detector; (4) detecting the target and performing Hough voting; (5) tracking the target with a Lucas-Kanade tracker to obtain the scale change s of the target-frame size; (6) determining the target position from the voting peak of the Hough forest detection and the tracking result of the Lucas-Kanade tracker; (7) adjusting the width and height of the target frame according to the scale change s and displaying them; (8) retraining the Hough forest detector; and (9) repeating steps (4)-(8) until the video ends. The method achieves accurate tracking of occluded targets and can be applied to the fields of human-machine interaction and traffic monitoring.

Description

Video target tracking method based on Hough forest
Technical field
The invention belongs to the technical field of image processing and relates to a video target tracking method, which is applicable to fields such as human-machine interaction and traffic monitoring.
Background art
Target tracking refers to analyzing a captured image sequence to compute the position of a target in every frame and then derive related parameters. Target tracking is an indispensable key technology in computer vision, with wide applications in robot visual navigation, security surveillance, traffic control, video compression, meteorological analysis and many other areas. In the military field it has been successfully applied to imaging guidance, military reconnaissance and the surveillance of weapons; in the civilian field, for example visual surveillance, it is widely used in many aspects of social life. Target tracking can also be applied to the security monitoring of communities and critical facilities, and to real-time vehicle tracking in intelligent transportation systems, yielding valuable traffic-flow parameters such as traffic volume, vehicle count, vehicle speed and vehicle density, while also detecting accidents, faults and other emergencies.
The patent application "A video target tracking method based on cumulative histogram particle filtering" filed by Zhejiang University of Technology (application No. 201110101737.9, publication No. CN102184548A) discloses a video target tracking method based on a cumulative-histogram particle filter. The method combines the color cumulative histogram with a particle-filter tracking algorithm: it first computes the color cumulative histogram of the target from the detected target region, then initializes the particle-filter tracker to obtain the range of the target in a new frame; taking the coordinate of each particle as a center, it computes a temporary cumulative histogram and the weight of each particle, normalizes the weights, thereby derives a new target cumulative histogram and updates the cumulative histogram, and finally resamples the particles with a replacement selection algorithm. Although the method can track correctly when the target and the background are similar in color, it describes the target with a histogram, which is not robust to non-rigid deformation of the target, so tracking easily fails.
The patent application "Video target tracking method based on mean shift" filed by Soochow University (application No. 201010110655.6, publication No. CN101924871A) discloses a video target tracking method based on Mean Shift. The method first extracts the SIFT features of the tracked target and then matches the SIFT features of the target with the Mean-Shift algorithm, thereby realizing tracking of the target. Although the method adopts SIFT features, which are invariant to rotation, scale zoom and brightness change, their high computational complexity greatly reduces the real-time performance of target tracking.
Summary of the invention
Aiming at the above deficiencies of the prior art, the present invention proposes a video target tracking method based on a Hough forest, so as to improve the robustness of tracking to target occlusion and non-rigid deformation and to improve the real-time performance of target tracking.
The technical idea for realizing the invention is as follows: the Hough transform is combined with a random-forest classifier to form a detector that detects the target, while a Lucas-Kanade tracker tracks the target at the same time. Combining the Hough transform with the random-forest classifier improves the performance of the classifier, making the tracking more robust to target occlusion and non-rigid deformation of the target; meanwhile, the scale of the target region is adjusted by the introduced Lucas-Kanade method, which further determines the position of the target and lets the tracking adapt well to scale changes of the target. The specific implementation steps are as follows:
(1) Input the first frame of a video, and manually mark the target to be tracked;
(2) Extract the features of the video image from the input first-frame video image, the features comprising: the Lab feature, the histogram of oriented gradients (HOG) feature, and the first- and second-derivative features of the image in the x and y directions;
(3) Establish and initialize a Hough forest detector:
3a) Set the number of decision trees in the Hough forest detector to 20; for each decision tree, randomly generate 8 pairs of in-block position offsets ((l, r), (p, q)) with values in the range 0~12, and randomly choose one feature for each pair of position offsets;
3b) Take the target region as positive samples and the region outside the target region as negative samples, and expand the target region outward by 20 pixels to form the sample update region. In the sample update region, take a 12×12 image block at every pixel; for each decision tree, determine the 8 feature-point pairs of the image block according to the 8 pairs of in-block position offsets ((l, r), (p, q)), extract the chosen feature i of these feature points to train the decision tree, and produce the image-block feature value. If the image block is a positive sample, compute the position offset d between the image-block center and the target center, update the positive and negative sample counts corresponding to the image-block feature value as well as the positive-to-negative sample ratio of the decision tree, and store the position offset d corresponding to the image-block feature value; if the image block is a negative sample, only update the positive and negative sample counts corresponding to the image-block feature value as well as the positive-to-negative sample ratio of the tree;
3c) From the 20 decision trees that have been built, select the 10 decision trees with the highest positive-to-negative sample ratio to form the Hough forest detector;
(4) Detect the target and carry out Hough voting:
4a) Load a new frame of the video, extract its features by the method of step (2), and take the target region of the previous frame, expanded to twice its size, as the search region. In the search region, take a 12×12 image block centered at each pixel, classify the image block with each decision tree of the Hough forest detector in turn, and compute the image-block feature value; when a decision tree judges that the image block belongs to the target, compute the target center from the position offset d corresponding to the image-block feature value and the center of the image block, and cast a vote;
4b) Take the position of the voting peak as the position of the target;
(5) Track the target with the Lucas-Kanade tracker to obtain the scale change s of the target-frame size;
(6) Determine the target position according to the voting peak of the Hough forest detection and the tracking result of the Lucas-Kanade tracker, by the following rules:
If the value of the maximum voting position of the Hough forest detection is greater than threshold 1, take the detected position as the position of the target in the current frame, and go to step (8);
If the value of the maximum voting position of the Hough forest detection is greater than threshold 2 and less than threshold 1: when the detection result and the Lucas-Kanade tracking result differ by less than 5 pixels in both the x and y directions, take the average of the detection result and the tracking result as the target position and go to step (8); otherwise, take the detection result as the target position and go to step (7);
If the value of the maximum voting position of the Hough forest detection is greater than threshold 3 and less than threshold 2, take the detection result as the target position and go to step (7);
If the value of the maximum voting position of the Hough forest detection is less than threshold 3, take the tracking result of the Lucas-Kanade tracker as the position of the target, and go to step (7);
(7) Adjust the width and height of the target frame according to the scale change s of the target-frame size, and display the frame;
(8) Retrain the Hough forest detector:
8a) Take the target region as positive samples and the region outside the target region as negative samples; expand the target region by 20 pixels and take the expanded region as the update region;
8b) In the update region, take a 12×12 image block centered at each pixel; for each decision tree, determine the 8 feature-point pairs of the image block and extract the chosen feature i of these feature points to train the decision tree, producing the image-block feature value. If the image block is a positive sample, compute the position offset d between the image-block center and the target center, update the positive and negative sample counts corresponding to the image-block feature value as well as the positive-to-negative sample ratio of the decision tree, and store the position offset d corresponding to the image-block feature value; if the image block is a negative sample, only update the positive and negative sample counts corresponding to the image-block feature value as well as the positive-to-negative sample ratio of the tree;
(9) Repeat steps (4)-(8) until the video ends.
Compared with the prior art, the present invention has the following advantages:
First, the present invention adopts the idea of Hough voting: it not only learns the features of the positive samples but also records, during learning, the position offsets of the positive samples relative to the target center, so that even when the target is partially occluded or undergoes non-rigid deformation, the target position can still be obtained by Hough voting.
Second, the present invention adopts Lucas-Kanade tracking to determine the scale change of the target, overcoming the shortcoming of the prior art that the target frame is of fixed size and does not change with the size of the target.
Brief description of the drawings
Fig. 1 is the flow chart of the present invention;
Fig. 2 is the input first-frame video image;
Fig. 3 is a schematic diagram of manually marking the target to be tracked in Fig. 2;
Fig. 4 is a schematic diagram of the update region in Fig. 2;
Fig. 5 is a newly input frame of the video;
Fig. 6 is a schematic diagram of the search region in Fig. 5;
Fig. 7 is a schematic diagram of the update region in Fig. 5.
Detailed description of the embodiments
The invention is described in further detail below in conjunction with the accompanying drawings.
With reference to Fig. 1, a specific embodiment of the present invention is given as follows:
Step 1. Input the first frame of a video, and manually mark the target to be tracked.
The video input in this example is shown in Fig. 2; it is the first frame of a video in which a human face becomes occluded. The face in Fig. 2 is the target to be tracked: the face region is boxed with the mouse in Fig. 2 as the target to be tracked, and the result is shown in Fig. 3.
Step 2. Extract the features of the video image from the input first-frame video image.
2.1) Convert Fig. 2 from the RGB color space to the Lab color space. In the RGB color space, R denotes the red channel, G the green channel and B the blue channel; in the Lab color space, L denotes the lightness channel, a the red-green channel and b the yellow-blue channel. Take the L, a and b channels as extracted features 1~3;
2.2) Convert Fig. 2 from an RGB image to a gray-level image and compute the gradient direction of each pixel in the gray-level image; taking 40° as one region, quantize the gradient direction of each pixel into 9 directions and count the number of pixels in each direction to obtain a 9-dimensional histogram of oriented gradients (HOG); take the 9-dimensional HOG vector as features 4~12;
2.3) Apply median filtering to the gray-level image to obtain the filtered image I; take the first derivative of I in the x direction as feature 13, the second derivative of I in the x direction as feature 14, the first derivative of I in the y direction as feature 15, and the second derivative of I in the y direction as feature 16.
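The following is a minimal sketch of this 16-channel feature extraction in Python with OpenCV and NumPy. The 40°/9-bin orientation quantization, the median filtering and the four derivative channels follow steps 2.1-2.3 above; the function name, the channel stacking order and the use of Sobel operators for the derivatives are illustrative assumptions rather than part of the patent.

import cv2
import numpy as np

def extract_features(bgr):
    """Return an H x W x 16 stack: Lab (features 1-3), 9-bin gradient
    orientation one-hot (features 4-12), and x/y first and second
    derivatives of the median-filtered gray image (features 13-16)."""
    lab = cv2.cvtColor(bgr, cv2.COLOR_BGR2Lab).astype(np.float32)
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)

    # Gradient orientation quantized into 9 bins of 40 degrees each.
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1)
    ang = np.degrees(np.arctan2(gy, gx)) % 360.0
    bins = np.clip((ang / 40.0).astype(np.int32), 0, 8)
    orient = (bins[..., None] == np.arange(9)).astype(np.float32)

    # Derivatives of the median-filtered image I (features 13-16).
    I = cv2.medianBlur(gray, 3).astype(np.float32)
    dx  = cv2.Sobel(I, cv2.CV_32F, 1, 0)
    dxx = cv2.Sobel(I, cv2.CV_32F, 2, 0)
    dy  = cv2.Sobel(I, cv2.CV_32F, 0, 1)
    dyy = cv2.Sobel(I, cv2.CV_32F, 0, 2)

    return np.dstack([lab, orient, dx, dxx, dy, dyy])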
Step 3. Establish and initialize the Hough forest detector.
3.1) Set the number of decision trees in the Hough forest detector to 20; for each decision tree, randomly generate 8 pairs of in-block position offsets ((l, r), (p, q)) with values in the range 0~12, and randomly choose one feature for each pair;
3.2) Take the region inside the solid box in Fig. 3 as positive samples and the region outside the solid box as negative samples. Expand the solid box in Fig. 3 outward by 20 pixels to obtain the dashed box shown in Fig. 4; the region inside the dashed box is the sample update region. In the sample update region, take a 12×12 image block at every pixel; for each decision tree, determine the 8 feature-point pairs of the image block according to the 8 pairs of in-block position offsets ((l, r), (p, q)), extract the chosen feature i of these feature points to train the decision tree, and produce the image-block feature value. If the image block is a positive sample, compute the position offset d between the image-block center and the target center, update the positive and negative sample counts corresponding to the image-block feature value as well as the positive-to-negative sample ratio of the decision tree, and store the position offset d corresponding to the image-block feature value; if the image block is a negative sample, only update the positive and negative sample counts corresponding to the image-block feature value as well as the positive-to-negative sample ratio of the tree;
3.3) From the 20 decision trees that have been built, select the 10 decision trees with the highest positive-to-negative sample ratio to form the Hough forest detector.
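As an illustration, a single decision tree of the detector can be sketched as follows, assuming the feature stack produced in step 2. The 8 random in-block offset pairs, the 8-bit image-block feature value, the per-leaf positive/negative counts and the stored offsets d follow steps 3.1-3.2; the class name, the field names and the top-left-corner patch addressing are assumptions made for the sketch.

import numpy as np

PATCH = 12

class HoughTree:
    def __init__(self, n_channels=16, n_tests=8, rng=None):
        rng = rng or np.random.default_rng()
        # 8 pairs of in-block offsets ((l, r), (p, q)) in [0, 12),
        # each pair bound to one randomly chosen feature channel.
        self.offsets = rng.integers(0, PATCH, size=(n_tests, 2, 2))
        self.channels = rng.integers(0, n_channels, size=n_tests)
        self.pos = np.zeros(2 ** n_tests)             # positive count per leaf
        self.neg = np.zeros(2 ** n_tests)             # negative count per leaf
        self.disp = [[] for _ in range(2 ** n_tests)] # stored offsets d per leaf

    def code(self, feats, x, y):
        """8-bit feature value of the 12x12 block with top-left corner (x, y)."""
        c = 0
        for j, ((l, r), (p, q)) in enumerate(self.offsets):
            ch = self.channels[j]
            bit = feats[y + r, x + l, ch] > feats[y + q, x + p, ch]
            c = (c << 1) | int(bit)
        return c

    def update(self, feats, x, y, is_positive, d=None):
        """Add one training block; d = block center minus target center."""
        c = self.code(feats, x, y)
        if is_positive:
            self.pos[c] += 1
            self.disp[c].append(d)
        else:
            self.neg[c] += 1

Step 3.3 then keeps the 10 of the 20 such trees with the highest positive-to-negative sample ratio.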
Step 4. Detect the target and carry out Hough voting.
4.1) Load the new video frame shown in Fig. 5 and extract its features by the method of step 2; expand the target region of Fig. 3 to twice its size to obtain the search region inside the dashed box shown in Fig. 6. In the search region, take a 12×12 image block centered at each pixel, classify the image block with each decision tree of the Hough forest detector in turn, and compute the image-block feature value; when a decision tree judges that the image block belongs to the target, compute the target center from the position offset d corresponding to the image-block feature value and the center of the image block, and cast a vote;
4.2) Take the position of the voting peak as the position of the target.
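A hedged sketch of this detection-and-voting loop, reusing the HoughTree sketch above: every 12×12 block in the search region is coded by each tree, and blocks judged to belong to the target cast votes at the implied target center; the accumulator peak gives the detected position. Judging a leaf as "target" by its positive fraction, and weighting each vote by the number of stored offsets, are illustrative assumptions, since the patent does not fix these details.

import numpy as np

def hough_vote(feats, trees, search, patch=12, min_pos_frac=0.5):
    y0, y1, x0, x1 = search                      # search-region bounds
    acc = np.zeros(feats.shape[:2], np.float32)  # vote accumulator
    for y in range(y0, y1 - patch):
        for x in range(x0, x1 - patch):
            cx, cy = x + patch // 2, y + patch // 2   # block center
            for t in trees:
                c = t.code(feats, x, y)
                n = t.pos[c] + t.neg[c]
                if n and t.pos[c] / n > min_pos_frac:
                    # each stored d points from target center to block center
                    for dx, dy in t.disp[c]:
                        vx, vy = int(cx - dx), int(cy - dy)
                        if 0 <= vy < acc.shape[0] and 0 <= vx < acc.shape[1]:
                            acc[vy, vx] += 1.0 / max(len(t.disp[c]), 1)
    peak = np.unravel_index(np.argmax(acc), acc.shape)
    return peak[::-1], acc[peak]                 # (x, y) of the peak, peak value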
Step 5. Track the target by the Lucas-Kanade method.
5.1) Evenly generate N tracking points p_1, p_2, ..., p_N in the target region of Fig. 3, and track these points with the Lucas-Kanade tracker to obtain their corresponding points q_1, q_2, ..., q_N in Fig. 5;
5.2) Track the points q_1, q_2, ..., q_N of Fig. 5 with the Lucas-Kanade tracker to obtain their corresponding points p'_1, p'_2, ..., p'_N in Fig. 3;
5.3) Compute the forward-backward error fb between the corresponding points p_1, p_2, ..., p_N and p'_1, p'_2, ..., p'_N, and compute the normalized cross-correlation coefficient ncc between p_1, p_2, ..., p_N and q_1, q_2, ..., q_N; take the points among q_1, q_2, ..., q_N that meet the following two requirements as credible tracking points: (1) the forward-backward error fb is less than the median forward-backward error fb_m; (2) the normalized cross-correlation coefficient ncc is greater than the median normalized cross-correlation coefficient ncc_m;
5.4) For the z credible tracking points, compute the distance a_i between every two points in Fig. 3 and the distance b_i between every two points in Fig. 5, and take the average of the ratios of the two classes of distances as the scale change s of the target-frame size, where i = 1, 2, ..., z!/(2!(z-2)!) and ! denotes factorial.
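A sketch of this forward-backward Lucas-Kanade scale estimate, using OpenCV's pyramidal LK tracker for both passes. The grid density, the NCC patch radius and the (x, y, w, h) box representation are illustrative assumptions; the forward-backward error, the median-based selection of credible points and the averaged pairwise-distance ratio follow steps 5.1-5.4.

import cv2
import numpy as np

def track_scale(prev_gray, gray, box, grid=10):
    """prev_gray/gray: uint8 gray frames; box: (x, y, w, h). Returns s."""
    x, y, w, h = box
    xs = np.linspace(x, x + w - 1, grid)
    ys = np.linspace(y, y + h - 1, grid)
    p = np.array([(u, v) for v in ys for u in xs], np.float32).reshape(-1, 1, 2)

    q, st1, _  = cv2.calcOpticalFlowPyrLK(prev_gray, gray, p, None)   # forward
    pb, st2, _ = cv2.calcOpticalFlowPyrLK(gray, prev_gray, q, None)   # backward

    fb = np.linalg.norm((p - pb).reshape(-1, 2), axis=1)              # FB error
    ncc = np.array([_ncc(prev_gray, gray, pt, qt)
                    for pt, qt in zip(p.reshape(-1, 2), q.reshape(-1, 2))])
    ok = ((st1.ravel() == 1) & (st2.ravel() == 1) &
          (fb < np.median(fb)) & (ncc > np.median(ncc)))              # credible points
    if ok.sum() < 2:
        return 1.0                                                    # no scale evidence

    a = p.reshape(-1, 2)[ok]                                          # points in frame t-1
    b = q.reshape(-1, 2)[ok]                                          # points in frame t
    da = np.linalg.norm(a[:, None] - a[None], axis=2)                 # pairwise distances a_i
    db = np.linalg.norm(b[:, None] - b[None], axis=2)                 # pairwise distances b_i
    iu = np.triu_indices(len(a), 1)
    return float(np.mean(db[iu] / np.maximum(da[iu], 1e-6)))          # scale change s

def _ncc(img1, img2, pt1, pt2, r=5):
    """Normalized cross-correlation of two (2r+1)^2 patches (assumed patch size)."""
    p1 = cv2.getRectSubPix(img1, (2 * r + 1, 2 * r + 1), (float(pt1[0]), float(pt1[1])))
    p2 = cv2.getRectSubPix(img2, (2 * r + 1, 2 * r + 1), (float(pt2[0]), float(pt2[1])))
    return float(cv2.matchTemplate(p1, p2, cv2.TM_CCOEFF_NORMED)[0, 0])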
Step 6. Determine the target position according to the voting peak of the Hough forest detection and the tracking result of the Lucas-Kanade tracker, by the following rules:
If the voting peak of the Hough forest detection is greater than threshold 1, take the detected position as the position of the target in the current frame, and go to step 8;
If the voting peak of the Hough forest detection is greater than threshold 2 and less than threshold 1: when the detection result and the Lucas-Kanade tracking result differ by less than 5 pixels in both the x and y directions, take the average of the detection result and the tracking result as the target position and go to step 8; otherwise, take the detection result as the target position and go to step 7;
If the voting peak of the Hough forest detection is greater than threshold 3 and less than threshold 2, take the detection result as the target position and go to step 7;
If the voting peak of the Hough forest detection is less than threshold 3, take the tracking result of the Lucas-Kanade tracker as the position of the target, and go to step 7;
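The step-6 fusion rule can be sketched as follows. The three thresholds are left symbolic, as in the patent; the returned flag indicates whether control passes to step 7 (adjust the target frame) or directly to step 8 (retrain), and the tuple-based positions are an illustrative assumption.

def fuse(peak_value, det_pos, lk_pos, t1, t2, t3, tol=5):
    """Return (target position, adjust_box): adjust_box=True means go to
    step 7 and rescale the target frame; False means go straight to step 8."""
    if peak_value > t1:                      # confident detection
        return det_pos, False
    if peak_value > t2:                      # detection cross-checked against LK
        if (abs(det_pos[0] - lk_pos[0]) < tol and
                abs(det_pos[1] - lk_pos[1]) < tol):
            return ((det_pos[0] + lk_pos[0]) / 2.0,
                    (det_pos[1] + lk_pos[1]) / 2.0), False
        return det_pos, True
    if peak_value > t3:                      # weak detection
        return det_pos, True
    return lk_pos, True                      # fall back to the LK tracker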
Step 7. According to the scale change s of the target-frame size, adjust the width and height of the target frame by the following formulas, and display the frame:
w_s = w × s
h_s = h × s
where w denotes the width of the target frame before adjustment, s denotes the scale change of the target-frame size, w_s denotes the width of the target frame after adjustment, h denotes the height of the target frame before adjustment, and h_s denotes the height of the target frame after adjustment.
Step 8. Retrain the Hough forest detector.
8.1) The result of tracking in Fig. 5 is shown by the solid box in Fig. 7. Take the region inside the solid box in Fig. 7 as positive samples and the region outside the solid box as negative samples; expand the solid box by 20 pixels to obtain the dashed box shown in Fig. 7, and take the region inside the dashed box as the update region;
8.2) In the update region, take a 12×12 image block centered at each pixel; for each decision tree, determine the 8 feature-point pairs of the image block and extract the chosen feature i of these feature points to train the decision tree, producing the image-block feature value. If the image block is a positive sample, compute the position offset d between the image-block center and the target center, update the positive and negative sample counts corresponding to the image-block feature value as well as the positive-to-negative sample ratio of the decision tree, and store the position offset d corresponding to the image-block feature value; if the image block is a negative sample, only update the positive and negative sample counts corresponding to the image-block feature value as well as the positive-to-negative sample ratio of the tree.
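A brief sketch of this online update, reusing the HoughTree sketch above: every 12×12 block in the update region refreshes the leaf statistics of each tree, with blocks whose centers fall inside the new target box taken as positives. The (x, y, w, h) box representation and the margin parameter are illustrative assumptions.

def retrain(trees, feats, box, margin=20, patch=12):
    """Update every tree from the target box expanded by `margin` pixels."""
    x, y, w, h = box
    H, W = feats.shape[:2]
    for py in range(max(y - margin, 0), min(y + h + margin, H) - patch):
        for px in range(max(x - margin, 0), min(x + w + margin, W) - patch):
            cx, cy = px + patch // 2, py + patch // 2
            positive = x <= cx < x + w and y <= cy < y + h
            # d = block center minus target center, stored only for positives
            d = (cx - (x + w // 2), cy - (y + h // 2)) if positive else None
            for t in trees:
                t.update(feats, px, py, positive, d)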
Step 9. Repeat step 4 through step 8 until the video ends.
The above is one example of the present invention and does not constitute any limitation of the invention. Simulation experiments show that the invention not only realizes correct tracking of a partially occluded target but also realizes effective tracking of a target with scale changes.

Claims (4)

1. A video target tracking method based on a Hough forest, comprising the following steps:
(1) inputting the first frame of a video, and manually marking the target to be tracked;
(2) extracting the features of the video image from the input first-frame video image, the features comprising: the Lab feature, the histogram of oriented gradients (HOG) feature, and the first- and second-derivative features of the image in the x and y directions;
(3) establishing and initializing a Hough forest detector:
3a) setting the number of decision trees in the Hough forest detector to 20, randomly generating for each decision tree 8 pairs of in-block position offsets ((l_j, r_j), (p_j, q_j)) with values in the range 0~12, where j = 1, 2, ..., 8, and randomly choosing one feature i for each pair of position offsets;
3b) taking the target region as positive samples and the region outside the target region as negative samples, and expanding the target region outward by 20 pixels to form the sample update region; in the sample update region, taking a 12×12 image block at every pixel; for each decision tree, determining the 8 feature-point pairs of the image block according to the 8 pairs of in-block position offsets ((l_j, r_j), (p_j, q_j)), extracting the feature i of these feature-point pairs to train the decision tree, and producing the image-block feature value; if the image block is a positive sample, computing the position offset d between the image-block center and the target center, updating the positive and negative sample counts corresponding to the image-block feature value as well as the positive-to-negative sample ratio of the decision tree, and storing the position offset d corresponding to the image-block feature value; if the image block is a negative sample, only updating the positive and negative sample counts corresponding to the image-block feature value as well as the positive-to-negative sample ratio of the tree;
wherein said determining the 8 feature-point pairs of the image block according to the 8 pairs of in-block position offsets ((l_j, r_j), (p_j, q_j)), extracting the feature i of these feature-point pairs to train the decision tree, and producing the image-block feature value are carried out through the following steps:
(3b1) assuming the center of the image block is (x, y) and each feature-point pair comprises feature point 1 and feature point 2, adding (x, y) to the j-th pair of in-block position offsets ((l_j, r_j), (p_j, q_j)) to obtain the coordinates (x+l_j, y+r_j) and (x+p_j, y+q_j), taking the pixel at (x+l_j, y+r_j) as feature point 1 of the j-th feature-point pair and the pixel at (x+p_j, y+q_j) as feature point 2 of the j-th feature-point pair;
(3b2) comparing the feature values of feature point 1 and feature point 2 of the j-th feature-point pair to obtain the flag m_j of the j-th feature-point pair: if the feature value of feature point 1 is greater than that of feature point 2, m_j = 1; if the feature value of feature point 1 is less than that of feature point 2, m_j = 0;
(3b3) arranging the 8 flags m_1 to m_8 of the feature-point pairs in order to obtain the image-block feature value m_1 m_2 m_3 m_4 m_5 m_6 m_7 m_8;
3c) from the 20 decision trees that have been built, selecting the 10 decision trees with the highest positive-to-negative sample ratio to form the Hough forest detector;
(4) detecting the target and carrying out Hough voting:
4a) loading a new frame of the video, extracting its features by the method of step (2), and taking the target region of the previous frame, expanded to twice its size, as the search region; in the search region, taking a 12×12 image block centered at each pixel, classifying the image block with each decision tree of the Hough forest detector in turn and computing the image-block feature value; when a decision tree judges that the image block belongs to the target, computing the target center from the position offset d corresponding to the image-block feature value and the center of the image block, and casting a vote;
4b) taking the position of the voting peak as the position of the target;
(5) tracking the target with the Lucas-Kanade tracker to obtain the scale change s of the target-frame size;
(6) determining the target position according to the voting peak of the Hough forest detection and the tracking result of the Lucas-Kanade tracker, by the following rules:
if the voting peak of the Hough forest detection is greater than threshold 1, taking the detected position as the position of the target in the current frame, and going to step (8);
if the voting peak of the Hough forest detection is greater than threshold 2 and less than threshold 1: when the detection result and the Lucas-Kanade tracking result differ by less than 5 pixels in both the x and y directions, taking the average of the detection result and the tracking result as the target position and going to step (8); otherwise, taking the detection result as the target position and going to step (7);
if the voting peak of the Hough forest detection is greater than threshold 3 and less than threshold 2, taking the detection result as the target position and going to step (7);
if the voting peak of the Hough forest detection is less than threshold 3, taking the tracking result of the Lucas-Kanade tracker as the position of the target, and going to step (7);
(7) adjusting the width and height of the target frame according to the scale change s of the target-frame size, and displaying the frame;
(8) retraining the Hough forest detector:
8a) taking the target region as positive samples and the region outside the target region as negative samples, expanding the target region by 20 pixels, and taking the expanded region as the update region;
8b) in the update region, taking a 12×12 image block centered at each pixel; for each decision tree, determining the 8 feature-point pairs of the image block and extracting the feature i of these feature-point pairs to train the decision tree, producing the image-block feature value; if the image block is a positive sample, computing the position offset d between the image-block center and the target center, updating the positive and negative sample counts corresponding to the image-block feature value as well as the positive-to-negative sample ratio of the decision tree, and storing the position offset d corresponding to the image-block feature value; if the image block is a negative sample, only updating the positive and negative sample counts corresponding to the image-block feature value as well as the positive-to-negative sample ratio of the tree;
(9) repeating steps (4)-(8) until the video ends.
2. The Hough forest-based video target tracking method according to claim 1, wherein said extracting the features of the video image from the input first-frame video image in step (2) is carried out as follows:
(2a) converting the video image from the RGB color space to the Lab color space, where in the RGB color space R denotes the red channel, G the green channel and B the blue channel, and in the Lab color space L denotes the lightness channel, a the red-green channel and b the yellow-blue channel, and taking the L, a and b channels as extracted features 1~3;
(2b) converting the video image from an RGB image to a gray-level image, computing the gradient direction of each pixel in the gray-level image, taking 40° as one region to quantize the gradient direction of each pixel into 9 directions, and counting the number of pixels in each direction to obtain a 9-dimensional histogram of oriented gradients (HOG), the 9-dimensional HOG vector being taken as features 4~12;
(2c) applying median filtering to the gray-level image to obtain the filtered image I, and taking the first derivative of I in the x direction as feature 13, the second derivative of I in the x direction as feature 14, the first derivative of I in the y direction as feature 15, and the second derivative of I in the y direction as feature 16.
3. The Hough forest-based video target tracking method according to claim 1, wherein said tracking the target with the Lucas-Kanade tracker to obtain the scale change s of the target-frame size in step (5) is carried out as follows:
(3a) evenly generating N tracking points p_1, p_2, ..., p_N in the target region of the video image I1 at time t-1, and tracking these points with the Lucas-Kanade tracker to obtain their corresponding points q_1, q_2, ..., q_N in the video image I2 at time t;
(3b) tracking the points q_1, q_2, ..., q_N of the video image I2 at time t with the Lucas-Kanade tracker to obtain their corresponding points p'_1, p'_2, ..., p'_N in the video image I1 at time t-1;
(3c) computing the forward-backward error fb between the tracking points p_1, p_2, ..., p_N of the video image I1 at time t-1 and their corresponding points p'_1, p'_2, ..., p'_N, and computing the normalized cross-correlation coefficient ncc between p_1, p_2, ..., p_N and q_1, q_2, ..., q_N; taking the points among q_1, q_2, ..., q_N that meet the following two requirements as credible tracking points: (1) the forward-backward error fb is less than the median forward-backward error fb_m; (2) the normalized cross-correlation coefficient ncc is greater than the median normalized cross-correlation coefficient ncc_m;
(3d) for the z credible tracking points, computing the distance a_i between every two points in the video image I1 at time t-1 and the distance b_i between every two points in the video image I2 at time t, and taking the average of the ratios of the two classes of distances as the scale change s of the target-frame size, where i = 1, 2, ..., z!/(2!(z-2)!) and ! denotes factorial.
4. The Hough forest-based video target tracking method according to claim 1, wherein said adjusting the width and height of the target frame according to the scale change s of the target-frame size in step (7) is realized by the following two formulas:
w_s = w × s
h_s = h × s
where w denotes the width of the target frame before adjustment, s denotes the scale change of the target-frame size, w_s denotes the width of the target frame after adjustment, h denotes the height of the target frame before adjustment, and h_s denotes the height of the target frame after adjustment.
CN201210253267.2A 2012-07-20 2012-07-20 Hough forest-based video target tracking method Expired - Fee Related CN102831618B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201210253267.2A CN102831618B (en) 2012-07-20 2012-07-20 Hough forest-based video target tracking method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201210253267.2A CN102831618B (en) 2012-07-20 2012-07-20 Hough forest-based video target tracking method

Publications (2)

Publication Number Publication Date
CN102831618A CN102831618A (en) 2012-12-19
CN102831618B (en) 2014-11-12

Family

ID=47334733

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201210253267.2A Expired - Fee Related CN102831618B (en) 2012-07-20 2012-07-20 Hough forest-based video target tracking method

Country Status (1)

Country Link
CN (1) CN102831618B (en)

Families Citing this family (27)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103136530A (en) * 2013-02-04 2013-06-05 国核自仪系统工程有限公司 Method for automatically recognizing target images in video images under complex industrial environment
CN104281852A (en) * 2013-07-11 2015-01-14 上海瀛联体感智能科技有限公司 Target tracking algorithm based on fusion 2D detection
TW201523459A (en) * 2013-12-06 2015-06-16 Utechzone Co Ltd Object tracking method and electronic apparatus
CN103699908B (en) * 2014-01-14 2016-10-05 上海交通大学 Video multi-target tracking based on associating reasoning
CN104299243B (en) * 2014-09-28 2017-02-08 南京邮电大学 Target tracking method based on Hough forests
CN104778470B (en) * 2015-03-12 2018-07-17 浙江大学 Text detection based on component tree and Hough forest and recognition methods
CN104778699B (en) * 2015-04-15 2017-06-16 西南交通大学 A kind of tracking of self adaptation characteristics of objects
CN104809455B (en) * 2015-05-19 2017-12-19 吉林大学 Action identification method based on the ballot of discriminability binary tree
CN105046382A (en) * 2015-09-16 2015-11-11 浪潮(北京)电子信息产业有限公司 Heterogeneous system parallel random forest optimization method and system
CN105404894B (en) * 2015-11-03 2018-10-23 湖南优象科技有限公司 Unmanned plane target tracking method and its device
CN105741316B (en) * 2016-01-20 2018-10-16 西北工业大学 Robust method for tracking target based on deep learning and multiple dimensioned correlation filtering
CN106169188B (en) * 2016-07-11 2019-01-15 西南交通大学 A kind of method for tracing object based on the search of Monte Carlo tree
CN108475424B (en) * 2016-07-12 2023-08-29 微软技术许可有限责任公司 Method, apparatus and system for 3D face tracking
CN106296742B (en) * 2016-08-19 2019-01-29 华侨大学 A kind of matched online method for tracking target of binding characteristic point
CN106846361B (en) * 2016-12-16 2019-12-20 深圳大学 Target tracking method and device based on intuitive fuzzy random forest
CN106952284A (en) * 2017-03-28 2017-07-14 歌尔科技有限公司 A kind of feature extracting method and its device based on compression track algorithm
CN107527356B (en) * 2017-07-21 2020-12-11 华南农业大学 Video tracking method based on lazy interaction mode
CN107508603A (en) * 2017-09-29 2017-12-22 南京大学 A kind of implementation method of forest condensing encoder
CN108171146A (en) * 2017-12-25 2018-06-15 河南工程学院 A kind of method for detecting human face based on Hough forest integrated study
CN108062861B (en) * 2017-12-29 2021-01-15 北京安自达科技有限公司 Intelligent traffic monitoring system
CN108196680B (en) * 2018-01-25 2021-10-08 盛视科技股份有限公司 Robot vision following method based on human body feature extraction and retrieval
CN108596188A (en) * 2018-04-04 2018-09-28 西安电子科技大学 Video object detection method based on HOG feature operators
CN108647698B (en) * 2018-05-21 2021-11-30 西安电子科技大学 Feature extraction and description method
CN108776822B (en) * 2018-06-22 2020-04-24 腾讯科技(深圳)有限公司 Target area detection method, device, terminal and storage medium
CN109544601A (en) * 2018-11-27 2019-03-29 天津工业大学 A kind of object detecting and tracking method based on on-line study
CN111368653B (en) * 2020-02-19 2023-09-08 杭州电子科技大学 Low-altitude small target detection method based on R-D graph and deep neural network
CN112633168B (en) * 2020-12-23 2023-10-31 长沙中联重科环境产业有限公司 Garbage truck and method and device for identifying garbage can overturning action of garbage truck

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1780672A1 (en) * 2005-10-25 2007-05-02 Bracco Imaging, S.P.A. Method of registering images, algorithm for carrying out the method of registering images, a program for registering images using the said algorithm and a method of treating biomedical images to reduce imaging artefacts caused by object movement
CN101908153A (en) * 2010-08-21 2010-12-08 上海交通大学 Method for estimating head postures in low-resolution image treatment
CN102496005A (en) * 2011-12-03 2012-06-13 辽宁科锐科技有限公司 Eye characteristic-based trial auxiliary study and judging analysis system

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Shyam Sunder Kumar, et al., "Mobile Object Detection through Client-Server based Vote Transfer," 2012 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 21 June 2012, pp. 3290-3297 *



Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20141112

Termination date: 20190720