CN101770568A - Target automatically recognizing and tracking method based on affine invariant point and optical flow calculation - Google Patents

Target automatically recognizing and tracking method based on affine invariant point and optical flow calculation

Info

Publication number
CN101770568A
Authority
CN
China
Prior art keywords
point
image
target
feature point
Prior art date
Legal status (assumed, not a legal conclusion; Google has not performed a legal analysis)
Pending
Application number
CN200810243204A
Other languages
Chinese (zh)
Inventor
戴跃伟
曹骝
项文波
茅耀斌
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing University of Science and Technology
Original Assignee
Nanjing University of Science and Technology
Application filed by Nanjing University of Science and Technology filed Critical Nanjing University of Science and Technology
Priority to CN200810243204A priority Critical patent/CN101770568A/en
Publication of CN101770568A publication Critical patent/CN101770568A/en
Legal status: Pending

Landscapes

  • Image Analysis (AREA)

Abstract

The invention discloses an automatic target recognition and tracking method based on affine-invariant points and optical flow calculation, which comprises the following steps: first, preprocess the target image and the video frames and extract affine-invariant feature points; then match the feature points and eliminate mismatched points; when the matched feature point pairs reach a certain number and an affine transformation matrix can be generated, target recognition is declared successful; next, use the affine-invariant points collected in the previous step for feature optical flow calculation to achieve real-time target tracking; if the target is lost during tracking, return immediately to the first step and perform target recognition again. The feature point operator used by the invention is an image local feature descriptor built on scale space that remains invariant to image scaling and rotation, and even to affine transformation. In addition, the adopted optical flow calculation method has a small computational load and high accuracy, enabling real-time tracking. The invention is widely applicable to fields such as video surveillance, image search, computer-aided driving systems, and robotics.

Description

Automatic target recognition and tracking based on affine-invariant points and optical flow computation
Technical field
The present invention relates to a method in the technical field of image processing, specifically an automatic target recognition and tracking method based on affine-invariant points and optical flow computation.
Background art
Target recognition and tracking technology is widely used in many systems, for example video surveillance systems, image search systems, medical imaging systems, computer-aided driving systems, robots, and intelligent rooms. Recognizing and tracking a moving target in real time and stably is a very difficult task: increasingly complex external environments introduce image noise, blur, and illumination changes that make target recognition hard, while the target's pose and scale may change at any time during tracking. All of these affect the stability of recognition and tracking algorithms.
The main task of target recognition is to extract the target of interest from image data and identify it. Current target recognition methods fall into two classes: bottom-up data-driven methods and top-down knowledge-driven methods. Top-down knowledge-driven methods are designed for a particular type of target in the image; their drawback is poor portability and compatibility — when the target to be recognized changes, the knowledge must change with it. Bottom-up data-driven methods do not consider the target type; they perform low-level processing on the image, are widely applicable, and have stronger portability. The present invention adopts a bottom-up target recognition method.
Target tracking achieves good results only on the basis of successful recognition. Tracking methods can be divided, according to whether inter-frame information is used, into methods based on motion analysis and methods based on image matching. Tracking based on motion analysis relies entirely on motion detection to follow moving objects; typical examples are the frame-difference method and the optical flow method. Optical flow tracking uses the velocity difference between target and background to detect moving targets and has good noise resistance. Optical flow methods can be divided into dense (continuous) optical flow methods and feature optical flow methods. The feature optical flow method obtains the optical flow at feature points by matching features across the image sequence and separates target from background by clustering the flow. Its advantages are that it can handle large inter-frame displacements, is less sensitive to noise, processes only a small number of feature points in the image, and has a small computational load. The present invention adopts a target tracking method based on the feature optical flow method.
Summary of the invention
The object of the present invention is to provide a method that recognizes a target quickly and accurately and performs real-time target tracking.
The technical solution that realizes the object of the invention is a target recognition and tracking method based on affine-invariant points and optical flow computation, whose steps are:
Step 1, target recognition: first preprocess the target image and the video frame and extract affine-invariant feature points; then match the feature points and weed out mismatched points; when the matched feature point pairs reach a certain number and an affine transformation matrix can be generated, target recognition is confirmed successful.
The feature points extracted in this step are invariant to image scale and rotation, and are robust to illumination changes, noise, and affine variation.
Step 2, target tracking: use the affine-invariant points collected in the previous step to perform feature optical flow calculation and achieve real-time target tracking; if the target is lost midway, return immediately to step 1 and perform target recognition again.
Compared with the prior art, the remarkable advantages of the present invention are: 1. the feature point operator used is an image local feature descriptor built on scale space that remains invariant to image scaling, rotation, and even affine transformation; its matching precision is high, and even when the image undergoes complicated distortion (including geometric deformation, resolution change, and illumination variation) it can still accurately match a large number of stable points; 2. the invention uses the feature optical flow method for target tracking, which can handle large inter-frame displacements, is less sensitive to noise, processes only a small number of feature points in the image, and has a small computational load.
Brief description of the drawings
Fig. 1 is the flow chart of the target recognition and tracking method based on affine-invariant points and optical flow computation of the present invention.
Fig. 2 is the target image used by the method of the present invention.
Fig. 3 is a video screenshot of successful target recognition and tracking by the method of the present invention.
Embodiments
The particular content of the present invention is described further below in conjunction with the accompanying drawings.
The invention discloses an automatic target recognition and tracking method based on affine-invariant points and optical flow computation, comprising the following steps:
Step 1, target recognition: first preprocess the target image and the first video frame and extract affine-invariant feature points; then match the feature points and weed out mismatched points; when the matched feature point pairs reach a certain number and an affine transformation matrix can be generated, target recognition is confirmed successful. If recognition fails, match the next video frame against the target image, and so on, until recognition succeeds;
Step 2, target tracking: use the affine-invariant points collected in this video frame to perform feature optical flow calculation, find the positions of the invariant points in the next frame, and achieve real-time target tracking; if the target is lost midway, return immediately to step 1 and perform target recognition again;
The idea of the algorithm is to first extract and match the affine-invariant points in the target image and the video frame, thereby recognizing the target and determining its position in the video frame, and then to use these feature points for optical flow computation to achieve real-time tracking.
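The recognize-then-track control flow just described can be sketched as a short loop. The `recognize` and `track` callables below are hypothetical stand-ins for the feature-matching and optical-flow steps detailed later; the loop structure (recognize until found, track until lost, then fall back to recognition) is what the sketch illustrates.

```python
def run_pipeline(frames, recognize, track):
    """Recognition/tracking loop: recognize until a target is found,
    then track frame-to-frame; on tracking loss, fall back to recognition."""
    state, events = None, []
    for frame in frames:
        if state is None:                 # recognition mode
            state = recognize(frame)      # matched points, or None on failure
            events.append("recognized" if state else "searching")
        else:                             # tracking mode
            state = track(state, frame)   # updated points, or None on loss
            events.append("tracked" if state else "lost")
    return events

# Toy stand-ins: recognition/tracking succeed only on frames containing "1".
events = run_pipeline([0, 1, 1, 0, 1],
                      recognize=lambda f: {"pts": [f]} if f == 1 else None,
                      track=lambda s, f: {"pts": [f]} if f == 1 else None)
# events: searching, recognized, tracked, lost, recognized
```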
In the automatic target recognition and tracking method of the present invention, the steps for extracting affine-invariant feature points during target recognition are as follows:
Step 1: smooth the target image and the video frame with a Gaussian convolution kernel, using the formulas
L(x, y, σ) = G(x, y, σ) * f(x, y)
G(x, y, σ) = (1 / (2πσ²)) · e^(−(x² + y²) / (2σ²))
where L(x, y, σ) is the smoothed scale-space image at scale σ, G(x, y, σ) is the Gaussian function, and f(x, y) is the original image;
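The smoothing formula above can be sketched directly in NumPy. This is a minimal illustration, not the patent's implementation: the kernel truncation radius (3σ) and zero padding at the borders are assumptions made for the sketch.

```python
import numpy as np

def gaussian_kernel(sigma, radius=None):
    """Sampled G(x, y, sigma) = exp(-(x^2 + y^2)/(2 sigma^2)) / (2 pi sigma^2)."""
    r = radius if radius is not None else int(3 * sigma)
    y, x = np.mgrid[-r:r + 1, -r:r + 1]
    g = np.exp(-(x**2 + y**2) / (2 * sigma**2)) / (2 * np.pi * sigma**2)
    return g / g.sum()        # renormalize: the discrete kernel sums to 1

def smooth(f, sigma):
    """L(x, y, sigma) = G * f by direct convolution (zero padding at edges)."""
    g = gaussian_kernel(sigma)
    r = g.shape[0] // 2
    padded = np.pad(f.astype(float), r)
    out = np.zeros(f.shape, dtype=float)
    for dy in range(-r, r + 1):          # accumulate shifted, weighted copies
        for dx in range(-r, r + 1):
            out += g[dy + r, dx + r] * padded[
                r + dy:r + dy + f.shape[0], r + dx:r + dx + f.shape[1]]
    return out
```

A constant image stays constant away from the borders, which is a quick sanity check that the kernel is normalized.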
Step 2: build the image scale-space pyramid. First enlarge the original image to twice its size and use it as the base image of the first octave of the Gaussian scale-space pyramid; the base image of every later octave is obtained by down-sampling the base image of the preceding octave to half its size. Taking the difference of adjacent Gaussian images within each octave of the Gaussian pyramid yields the difference-of-Gaussians (DoG) pyramid. A scale coordinate σ₀ is fixed for the first layer of the first octave. The total number of octaves of the pyramid is log₂(min(w, h)) − 2, where w is the original image width and h is the original image height;
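The octave count and the halve-per-octave structure can be sketched as follows. The per-layer sigma schedule and the number of layers per octave are illustrative assumptions (the patent does not fix them here), and `blur` is a stand-in for the Gaussian smoothing of the previous step.

```python
import numpy as np

def num_octaves(w, h):
    """Total octaves = log2(min(w, h)) - 2, as stated in the text."""
    return int(np.log2(min(w, h))) - 2

def build_pyramid(base, octaves, layers=5, blur=lambda img, sigma: img):
    """Each octave's base is the previous base down-sampled to half size;
    DoG layers are differences of adjacent Gaussian layers in an octave."""
    gauss, dog = [], []
    img = base
    for _ in range(octaves):
        g = [blur(img, 1.6 * 2 ** (i / 2)) for i in range(layers)]  # sigma schedule illustrative
        gauss.append(g)
        dog.append([g[i + 1] - g[i] for i in range(layers - 1)])
        img = img[::2, ::2]          # half-size down-sampling for the next octave
    return gauss, dog
```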
Step 3: compare each pixel of the image under detection with its 8 neighbors in the same layer and the 2 × 9 adjacent pixels in the layers above and below within the same octave (26 neighbors in total), and take the local extrema as candidate feature points;
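The 26-neighbor extremum test can be written compactly; a minimal sketch, assuming the three DoG layers are same-size NumPy arrays and (y, x) is an interior pixel:

```python
import numpy as np

def is_extremum(dog_prev, dog_cur, dog_next, y, x):
    """True if dog_cur[y, x] is strictly larger (or smaller) than all 26
    neighbors: 8 in its own layer plus 9 each in the layers above and below."""
    v = dog_cur[y, x]
    block = np.stack([lay[y - 1:y + 2, x - 1:x + 2]
                      for lay in (dog_prev, dog_cur, dog_next)])
    neighbors = np.delete(block.ravel(), 13)   # index 13 is the center pixel
    return bool(v > neighbors.max() or v < neighbors.min())
```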
Step 4: remove the feature points of low contrast and the unstable edge response points. A three-dimensional quadratic function is fitted to refine the position and DoG value of each feature point; with the pixel values of the DoG image normalized to [0, 1], all candidate extrema whose absolute DoG value falls below a threshold are judged to be low-contrast and filtered out. Edge responses are removed by exploiting the property that a feature point lying on an image edge has a large principal curvature across the edge but a small curvature in the perpendicular direction; such points at the peaks and edge intersections of the difference-of-Gaussian images are filtered out;
Step 5: determine the feature point orientation. For an image f(x, y), the gradient magnitude and orientation at pixel (x, y) are computed as:
m(x, y) = sqrt( (f(x+1, y) − f(x−1, y))² + (f(x, y+1) − f(x, y−1))² )
θ(x, y) = arctan( (f(x, y+1) − f(x, y−1)) / (f(x+1, y) − f(x−1, y)) )
Sample within a neighborhood window centered on the feature point (neighborhood window size = 3 × 1.5 × feature point scale) and accumulate the gradient orientations of the neighborhood pixels into a histogram. If the gradient orientation histogram contains another peak with at least 80% of the energy of the main peak, that orientation is treated as an auxiliary orientation of the feature point;
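The gradient formulas and the 80%-of-main-peak rule can be sketched as below. The 36-bin histogram resolution is an assumption of the sketch, not a value from the text.

```python
import numpy as np

def grad_mag_theta(f, x, y):
    """Central-difference gradient magnitude and orientation at pixel (x, y)."""
    dx = float(f[y, x + 1]) - float(f[y, x - 1])
    dy = float(f[y + 1, x]) - float(f[y - 1, x])
    return np.hypot(dx, dy), np.arctan2(dy, dx)

def dominant_orientations(mags, thetas, bins=36):
    """Magnitude-weighted orientation histogram; return the main peak plus any
    bin reaching at least 80% of the main peak (auxiliary orientations)."""
    hist, edges = np.histogram(thetas, bins=bins, range=(-np.pi, np.pi),
                               weights=mags)
    centers = (edges[:-1] + edges[1:]) / 2
    return centers[hist >= 0.8 * hist.max()]
```

On a horizontal intensity ramp the gradient points along +x with magnitude 2, matching the central-difference formulas above.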
Step 6: generate the feature point descriptor. First rotate the coordinate axes to the orientation of the feature point; then take a 16 × 16 window centered on the feature point and, on each 4 × 4 sub-block, compute an 8-direction gradient orientation histogram, recording the accumulated value of each gradient direction to form one seed point. In this way each feature point yields a 128-dimensional feature vector, which is then normalized to unit length;
The feature point operator obtained in this way is an image local feature descriptor built on scale space that remains invariant to image scaling, rotation, and even affine transformation, and that can match a large number of stable points even when the image undergoes complicated distortion.
In the automatic target recognition and tracking method of the present invention, the feature point matching steps during target recognition are as follows:
Step 1: suppose the target image has m feature points with descriptor array (A₁, A₂, …, A_m), and the image to be matched has n feature points with descriptor array (B₁, B₂, …, B_n). Build a k-d tree over (A₁, A₂, …, A_m) and use the BBF (Best-Bin-First) algorithm to search the k-d tree for the nearest-neighbor feature point A_i (i = 1, …, m) of each B_j (j = 1, …, n). The Euclidean distance between two feature point descriptors A = (a₀, a₁, …, a₁₂₇) and B = (b₀, b₁, …, b₁₂₇) is:
d = sqrt( Σ_{i=0}^{127} (a_i − b_i)² )
When the Euclidean distance d between the descriptors of two feature points from the two images is the smallest, the two feature points are nearest neighbors and are called a matched pair;
Step 2: use the RANSAC (RANdom SAmple Consensus) algorithm to eliminate mismatched feature points. The steps are:
(1) Randomly pick 4 feature matches (A₁, B₁), (A₂, B₂), (A₃, B₃), (A₄, B₄) and compute the corresponding homography matrix
H = [ h₁₁ h₁₂ h₁₃ ; h₂₁ h₂₂ h₂₃ ; h₃₁ h₃₂ h₃₃ ]
(2) Let C_i be the point obtained by mapping feature point A_i (i = 1, …, 4) through this matrix. If the Euclidean distance d between C_i and the matched point B_i is less than the error threshold, go to step (3); otherwise return to step (1);
(3) Test whether every possible feature match satisfies H, dividing all the data into "inliers" and "outliers";
(4) Repeat steps (1)–(3) N times;
(5) Among the matrices computed in the iterations, find the one satisfied by the most inliers; those feature matches are the correct ones, and the final homography matrix H is computed from them;
More than 4 correctly matched feature points are required to obtain the homography matrix H and declare target recognition successful; otherwise take the next frame in turn and perform target recognition against the target image.
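The sample-score-refit loop of RANSAC can be sketched with a deliberately simplified motion model: a 2-D translation (one match per sample) stands in for the 4-point homography, so the inlier-counting logic stays visible without the direct-linear-transform machinery. Threshold and iteration count are illustrative.

```python
import numpy as np

def ransac_translation(src, dst, n_iter=100, thresh=1.0, seed=0):
    """RANSAC sketch with a translation model in place of the full homography:
    sample a minimal set, score inliers by reprojection error, keep the model
    with the most inliers, then refit on the inliers."""
    rng = np.random.default_rng(seed)
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    best_inliers = np.zeros(len(src), bool)
    for _ in range(n_iter):
        k = rng.integers(len(src))
        t = dst[k] - src[k]                        # model from a minimal sample
        err = np.linalg.norm(src + t - dst, axis=1)
        inliers = err < thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    t = (dst[best_inliers] - src[best_inliers]).mean(axis=0)  # refit on inliers
    return t, best_inliers
```

With three consistent matches and one gross mismatch, the recovered translation comes from the consistent majority and the mismatch is flagged as an outlier.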
In the automatic target recognition and tracking method of the present invention, the target tracking steps are as follows:
Step 1: build windows W to be tracked around the affine-invariant feature points obtained by target recognition; compute the sum of squared intensity differences (SSD) of the windows between video frames, move the feature window, and repeat until the SSD s falls below a threshold, at which point the feature windows of the two frames are considered matched, i.e., the position of the corresponding feature point in the next frame has been found;
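The SSD window matching can be sketched as an exhaustive local search. The iterative, sub-pixel KLT update the method actually uses is replaced here by a discrete search over a small radius, and border handling is ignored — both are simplifications of the sketch.

```python
import numpy as np

def ssd(a, b):
    """Sum of squared intensity differences between two equal-size windows."""
    return float(np.sum((a.astype(float) - b.astype(float)) ** 2))

def track_window(prev, cur, y, x, half=2, radius=3):
    """Search `cur` around (y, x) for the window minimizing the SSD against
    the window from `prev`; a discrete stand-in for the iterative KLT update.
    Assumes the search stays away from the image borders."""
    ref = prev[y - half:y + half + 1, x - half:x + half + 1]
    best = (np.inf, y, x)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            yy, xx = y + dy, x + dx
            win = cur[yy - half:yy + half + 1, xx - half:xx + half + 1]
            if win.shape == ref.shape:
                s = ssd(ref, win)
                if s < best[0]:
                    best = (s, yy, xx)
    return best   # (minimum SSD, new y, new x)
```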
Step 2: use the RANSAC (RANdom SAmple Consensus) algorithm to compute the homography matrix of the next frame image
H = [ h₁₁ h₁₂ h₁₃ ; h₂₁ h₂₂ h₂₃ ; h₃₁ h₃₂ h₃₃ ]
If H can be obtained as above, target tracking is considered successful; eliminate the feature points that do not fit the homography matrix and proceed to tracking in the next frame. If H cannot be obtained, the target has been lost, and target recognition is performed again.
With reference to Fig. 1, the steps of the automatic target recognition and tracking method based on affine-invariant points and optical flow computation of the present invention are as follows:
Step 1, target recognition:
Given a target image f(x, y) of width w and height h, apply Gaussian smoothing using the formulas
L(x, y, σ) = G(x, y, σ) * f(x, y)
G(x, y, σ) = (1 / (2πσ²)) · e^(−(x² + y²) / (2σ²))
Then enlarge the original image to twice its size and use it as the base image of the first octave of the Gaussian scale-space pyramid; the base image of every later octave is obtained by down-sampling the base image of the preceding octave to half its size. The total number of octaves of the pyramid is log₂(min(w, h)) − 2. Compare each pixel with its 8 neighbors in the same layer and the 2 × 9 adjacent pixels in the layers above and below within the same octave, and take the local extrema as candidate feature points. Fit a three-dimensional quadratic function to refine the difference-of-Gaussian value of each feature point, remove the low-contrast points, and remove the unstable edge response points.
Using the formulas for the gradient magnitude and orientation at a point (x, y),
m(x, y) = sqrt( (f(x+1, y) − f(x−1, y))² + (f(x, y+1) − f(x, y−1))² )
θ(x, y) = arctan( (f(x, y+1) − f(x, y−1)) / (f(x+1, y) − f(x−1, y)) )
determine the orientation and magnitude at each feature point. Then rotate the coordinate axes to the orientation of the feature point, take a 16 × 16 window centered on the feature point, and from the accumulated values of the gradient directions produce a 128-dimensional feature vector, normalized to unit length.
Apply the same processing to the first video frame: extract the feature points and obtain the 128-dimensional feature vector of each one.
Build a k-d tree from the feature points of the target image and search it for the feature points of the video frame with the BBF (Best-Bin-First) algorithm; a pair of points whose feature vectors have the smallest Euclidean distance d forms a matched pair. After all matched pairs have been found, use the RANSAC (RANdom SAmple Consensus) algorithm to eliminate mismatched feature points. If more than 4 correctly matched pairs remain and the homography matrix H can be computed, the position of the target in the video frame has been recognized successfully; otherwise recognition has failed, and recognition is repeated on the next video frame.
Step 2, target tracking:
Build windows W to be tracked around the affine-invariant feature points obtained by target recognition in this frame. Using the KLT (Kanade-Lucas-Tomasi) algorithm, compute the sum of squared intensity differences (SSD) between this frame and the next, move the feature window in the next frame, and match it against the feature window of the previous frame to find the position of the corresponding feature point; feature points that cannot be found are rejected as lost.
Then apply the RANSAC (RANdom SAmple Consensus) algorithm to the feature points found in the next frame to reject mismatches and generate the homography matrix. If the number of remaining correctly matched points is greater than 4 and the homography matrix can be generated successfully, target tracking has succeeded; otherwise the target has been lost: return immediately to step 1, take the next frame image as the current frame, and perform target recognition again.
Fig. 2 shows the target image of the method of the present invention, and Fig. 3 is a video screenshot of successful target recognition and tracking.

Claims (4)

1. An automatic target recognition and tracking method based on affine-invariant points and optical flow computation, comprising the following steps:
Step 1, target recognition: first preprocess the target image and the first video frame and extract affine-invariant feature points; then match the feature points and weed out mismatched points; when the matched feature point pairs reach a certain number and an affine transformation matrix can be generated, target recognition is confirmed successful. If recognition fails, match the next video frame against the target image, and so on, until recognition succeeds;
Step 2, target tracking: use the affine-invariant points collected in this video frame to perform feature optical flow calculation, find the positions of the invariant points in the next frame, and achieve real-time target tracking; if the target is lost midway, return immediately to step 1 and perform target recognition again;
The idea of the algorithm is to first extract and match the affine-invariant points in the target image and the video frame, thereby recognizing the target and determining its position in the video frame, and then to use these feature points for optical flow computation to achieve real-time tracking.
2. The automatic target recognition and tracking method based on affine-invariant points and optical flow computation according to claim 1, characterized in that the steps for extracting affine-invariant feature points are as follows:
Step 1: smooth the target image and the video frame with a Gaussian convolution kernel, using the formulas
L(x, y, σ) = G(x, y, σ) * f(x, y)
G(x, y, σ) = (1 / (2πσ²)) · e^(−(x² + y²) / (2σ²))
where L(x, y, σ) is the smoothed scale-space image at scale σ, G(x, y, σ) is the Gaussian function, and f(x, y) is the original image;
Step 2: build the image scale-space pyramid. First enlarge the original image to twice its size and use it as the base image of the first octave of the Gaussian scale-space pyramid; the base image of every later octave is obtained by down-sampling the base image of the preceding octave to half its size. Taking the difference of adjacent Gaussian images within each octave of the Gaussian pyramid yields the difference-of-Gaussians (DoG) pyramid. A scale coordinate σ₀ is fixed for the first layer of the first octave. The total number of octaves of the pyramid is log₂(min(w, h)) − 2, where w is the original image width and h is the original image height;
Step 3: compare each pixel of the image under detection with its 8 neighbors in the same layer and the 2 × 9 adjacent pixels in the layers above and below within the same octave (26 neighbors in total), and take the local extrema as candidate feature points;
Step 4: remove the feature points of low contrast and the unstable edge response points. A three-dimensional quadratic function is fitted to refine the position and DoG value of each feature point; with the pixel values of the DoG image normalized to [0, 1], all candidate extrema whose absolute DoG value falls below a threshold are judged to be low-contrast and filtered out. Edge responses are removed by exploiting the property that a feature point lying on an image edge has a large principal curvature across the edge but a small curvature in the perpendicular direction; such points at the peaks and edge intersections of the difference-of-Gaussian images are filtered out;
Step 5: determine the feature point orientation. For an image f(x, y), the gradient magnitude and orientation at pixel (x, y) are computed as:
m(x, y) = sqrt( (f(x+1, y) − f(x−1, y))² + (f(x, y+1) − f(x, y−1))² )
θ(x, y) = arctan( (f(x, y+1) − f(x, y−1)) / (f(x+1, y) − f(x−1, y)) )
Sample within a neighborhood window centered on the feature point (neighborhood window size = 3 × 1.5 × feature point scale) and accumulate the gradient orientations of the neighborhood pixels into a histogram. If the gradient orientation histogram contains another peak with at least 80% of the energy of the main peak, that orientation is treated as an auxiliary orientation of the feature point;
Step 6: generate the feature point descriptor. First rotate the coordinate axes to the orientation of the feature point; then take a 16 × 16 window centered on the feature point and, on each 4 × 4 sub-block, compute an 8-direction gradient orientation histogram, recording the accumulated value of each gradient direction to form one seed point. In this way each feature point yields a 128-dimensional feature vector, which is then normalized to unit length;
The feature point operator obtained in this way is an image local feature descriptor built on scale space that remains invariant to image scaling, rotation, and even affine transformation, and that can match a large number of stable points even when the image undergoes complicated distortion.
3. The automatic target recognition and tracking method based on affine-invariant points and optical flow computation according to claim 1, characterized in that the feature point matching steps are as follows:
Step 1: suppose the target image has m feature points with descriptor array (A₁, A₂, …, A_m), and the image to be matched has n feature points with descriptor array (B₁, B₂, …, B_n). Build a k-d tree over (A₁, A₂, …, A_m) and use the BBF (Best-Bin-First) algorithm to search the k-d tree for the nearest-neighbor feature point A_i (i = 1, …, m) of each B_j (j = 1, …, n). The Euclidean distance between two feature point descriptors A = (a₀, a₁, …, a₁₂₇) and B = (b₀, b₁, …, b₁₂₇) is:
d = sqrt( Σ_{i=0}^{127} (a_i − b_i)² )
When the Euclidean distance d between the descriptors of two feature points from the two images is the smallest, the two feature points are nearest neighbors and are called a matched pair;
Step 2: use the RANSAC (RANdom SAmple Consensus) algorithm to eliminate mismatched feature points. The steps are:
(1) Randomly pick 4 feature matches (A₁, B₁), (A₂, B₂), (A₃, B₃), (A₄, B₄) and compute the corresponding homography matrix
H = [ h₁₁ h₁₂ h₁₃ ; h₂₁ h₂₂ h₂₃ ; h₃₁ h₃₂ h₃₃ ]
(2) Let C_i be the point obtained by mapping feature point A_i (i = 1, …, 4) through this matrix. If the Euclidean distance d between C_i and the matched point B_i is less than the error threshold, go to step (3); otherwise return to step (1);
(3) Test whether every possible feature match satisfies H, dividing all the data into "inliers" and "outliers";
(4) Repeat steps (1)–(3) N times;
(5) Among the matrices computed in the iterations, find the one satisfied by the most inliers; those feature matches are the correct ones, and the final homography matrix H is computed from them;
More than 4 correctly matched feature points are required to obtain the homography matrix H and declare target recognition successful; otherwise take the next frame in turn and perform target recognition against the target image.
4. The automatic target recognition and tracking method based on affine-invariant points and optical flow computation according to claim 1, characterized in that the target tracking steps are as follows:
Step 1: build windows W to be tracked around the affine-invariant feature points obtained by target recognition; compute the sum of squared intensity differences (SSD) of the windows between video frames, move the feature window, and repeat until the SSD s falls below a threshold, at which point the feature windows of the two frames are considered matched, i.e., the position of the corresponding feature point in the next frame has been found;
Step 2: use the RANSAC (RANdom SAmple Consensus) algorithm to compute the homography matrix of the next frame image
H = [ h₁₁ h₁₂ h₁₃ ; h₂₁ h₂₂ h₂₃ ; h₃₁ h₃₂ h₃₃ ]
If H can be obtained as above, target tracking is considered successful; eliminate the feature points that do not fit the homography matrix and proceed to tracking in the next frame. If H cannot be obtained, the target has been lost, and target recognition is performed again.
CN200810243204A 2008-12-31 2008-12-31 Target automatically recognizing and tracking method based on affine invariant point and optical flow calculation Pending CN101770568A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN200810243204A CN101770568A (en) 2008-12-31 2008-12-31 Target automatically recognizing and tracking method based on affine invariant point and optical flow calculation


Publications (1)

Publication Number Publication Date
CN101770568A true CN101770568A (en) 2010-07-07

Family

ID=42503420



Cited By (55)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101945257A (en) * 2010-08-27 2011-01-12 南京大学 Synthesis method for extracting chassis image of vehicle based on monitoring video content
CN101986348A (en) * 2010-11-09 2011-03-16 上海电机学院 Visual target identification and tracking method
CN102567999A (en) * 2010-12-29 2012-07-11 新奥特(北京)视频技术有限公司 Method for applying track data
CN102567999B (en) * 2010-12-29 2014-11-05 新奥特(北京)视频技术有限公司 Method for applying track data
CN102136133B (en) * 2011-01-21 2016-09-14 北京中星微电子有限公司 A kind of image processing method and image processing apparatus
CN102136133A (en) * 2011-01-21 2011-07-27 北京中星微电子有限公司 Image processing method and image processing device
US9429990B2 (en) 2011-05-11 2016-08-30 Google Inc. Point-of-view object selection
CN103649988A (en) * 2011-05-11 2014-03-19 谷歌公司 Point-of-view object selection
CN103649988B (en) * 2011-05-11 2016-11-09 谷歌公司 Eye point object selects
CN102436590A (en) * 2011-11-04 2012-05-02 康佳集团股份有限公司 Real-time tracking method based on on-line learning and tracking system thereof
CN102496000B (en) * 2011-11-14 2013-05-08 电子科技大学 Urban traffic accident detection method
CN102496000A (en) * 2011-11-14 2012-06-13 电子科技大学 Urban traffic accident detection method
CN110730296A (en) * 2013-04-30 2020-01-24 索尼公司 Image processing apparatus, image processing method, and computer readable medium
CN110730296B (en) * 2013-04-30 2022-02-11 索尼公司 Image processing apparatus, image processing method, and computer readable medium
CN103247058B (en) * 2013-05-13 2015-08-19 北京工业大学 A kind of quick the Computation of Optical Flow based on error Distributed-tier grid
CN103247058A (en) * 2013-05-13 2013-08-14 北京工业大学 Fast optical flow field calculation method based on error-distributed multilayer grid
CN103324932A (en) * 2013-06-07 2013-09-25 东软集团股份有限公司 Video-based vehicle detecting and tracking method and system
WO2015014111A1 (en) * 2013-08-01 2015-02-05 华为技术有限公司 Optical flow tracking method and apparatus
US9536147B2 (en) 2013-08-01 2017-01-03 Huawei Technologies Co., Ltd. Optical flow tracking method and apparatus
CN103985136A (en) * 2014-03-21 2014-08-13 南京大学 Target tracking method based on local feature point feature flow pattern
US9898677B1 (en) 2015-10-13 2018-02-20 MotionDSP, Inc. Object-level grouping and identification for tracking objects in a video
US10580135B2 (en) 2016-07-14 2020-03-03 Shanghai United Imaging Healthcare Co., Ltd. System and method for splicing images
US11416993B2 (en) 2016-07-14 2022-08-16 Shanghai United Imaging Healthcare Co., Ltd. System and method for splicing images
US11893738B2 (en) 2016-07-14 2024-02-06 Shanghai United Imaging Healthcare Co., Ltd. System and method for splicing images
CN106295710B (en) * 2016-08-18 2019-06-14 晶赞广告(上海)有限公司 Image local feature matching process, device and terminal based on non-geometric constraint
CN106295710A (en) * 2016-08-18 2017-01-04 晶赞广告(上海)有限公司 Image local feature matching process, device and terminal of based on non-geometric constraint
CN106454229A (en) * 2016-09-27 2017-02-22 成都理想境界科技有限公司 Monitoring method, camera device, image processing device and monitoring system
CN106504237A (en) * 2016-09-30 2017-03-15 上海联影医疗科技有限公司 Determine method and the image acquiring method of matching double points
CN106778528A (en) * 2016-11-24 2017-05-31 四川大学 A kind of method for detecting fatigue driving based on gaussian pyramid feature
CN106803880A (en) * 2017-02-14 2017-06-06 扬州奚仲科技有限公司 Orbit camera device people's is autonomous with clapping traveling control method
CN107301658B (en) * 2017-05-19 2020-06-23 东南大学 Method for fast matching and positioning unmanned aerial vehicle image and large-scale old time phase image
CN107301658A (en) * 2017-05-19 2017-10-27 东南大学 A kind of method that unmanned plane image is positioned with extensive old times phase image Rapid matching
CN107833249A (en) * 2017-09-29 2018-03-23 南京航空航天大学 A kind of carrier-borne aircraft landing mission attitude prediction method of view-based access control model guiding
CN107833249B (en) * 2017-09-29 2020-07-07 南京航空航天大学 Method for estimating attitude of shipboard aircraft in landing process based on visual guidance
CN107507231A (en) * 2017-09-29 2017-12-22 智造未来(北京)机器人系统技术有限公司 Trinocular vision identifies follow-up mechanism and method
CN109919971B (en) * 2017-12-13 2021-07-20 北京金山云网络技术有限公司 Image processing method, image processing device, electronic equipment and computer readable storage medium
CN109919971A (en) * 2017-12-13 2019-06-21 北京金山云网络技术有限公司 Image processing method, device, electronic equipment and computer readable storage medium
CN108154098A (en) * 2017-12-20 2018-06-12 歌尔股份有限公司 A kind of target identification method of robot, device and robot
CN108346152A (en) * 2018-03-19 2018-07-31 北京大学口腔医院 Method based on root of the tooth periapical film automatic Evaluation dental clinic treatment effect
CN109461174A (en) * 2018-10-25 2019-03-12 北京陌上花科技有限公司 Video object area tracking method and video plane advertisement method for implantation and system
CN109461174B (en) * 2018-10-25 2021-01-29 北京陌上花科技有限公司 Video target area tracking method and video plane advertisement implanting method and system
US11010613B2 (en) 2018-11-29 2021-05-18 Beijing Didi Infinity Technology And Development Co., Ltd. Systems and methods for target identification in video
WO2020107327A1 (en) * 2018-11-29 2020-06-04 Beijing Didi Infinity Technology And Development Co., Ltd. Systems and methods for target identification in video
CN109462748A (en) * 2018-12-21 2019-03-12 福州大学 A kind of three-dimensional video-frequency color correction algorithm based on homography matrix
CN109871813A (en) * 2019-02-25 2019-06-11 沈阳上博智像科技有限公司 A kind of realtime graphic tracking and system
CN110060276A (en) * 2019-04-18 2019-07-26 腾讯科技(深圳)有限公司 Object tracking method, tracking process method, corresponding device, electronic equipment
CN110060276B (en) * 2019-04-18 2023-05-16 腾讯科技(深圳)有限公司 Object tracking method, tracking processing method, corresponding device and electronic equipment
CN110288050B (en) * 2019-07-02 2021-09-17 广东工业大学 Hyperspectral and LiDar image automatic registration method based on clustering and optical flow method
CN110288050A (en) * 2019-07-02 2019-09-27 广东工业大学 A kind of EO-1 hyperion and LiDar image automation method for registering based on cluster and optical flow method
CN110378936A (en) * 2019-07-30 2019-10-25 北京字节跳动网络技术有限公司 Optical flow computation method, apparatus and electronic equipment
CN110378936B (en) * 2019-07-30 2021-11-05 北京字节跳动网络技术有限公司 Optical flow calculation method and device and electronic equipment
CN110415276B (en) * 2019-07-30 2022-04-05 北京字节跳动网络技术有限公司 Motion information calculation method and device and electronic equipment
CN110415276A (en) * 2019-07-30 2019-11-05 北京字节跳动网络技术有限公司 Motion information calculation method, device and electronic equipment
CN110827324A (en) * 2019-11-08 2020-02-21 江苏科技大学 Video target tracking method
CN113673321A (en) * 2021-07-12 2021-11-19 浙江大华技术股份有限公司 Target re-recognition method, target re-recognition apparatus, and computer-readable storage medium

Similar Documents

Publication Publication Date Title
CN101770568A (en) Target automatically recognizing and tracking method based on affine invariant point and optical flow calculation
Timofte et al. Multi-view traffic sign detection, recognition, and 3D localisation
Li et al. Deep attention network for joint hand gesture localization and recognition using static RGB-D images
Enzweiler et al. Monocular pedestrian detection: Survey and experiments
CN104200495B (en) A kind of multi-object tracking method in video monitoring
CN113313736B (en) Online multi-target tracking method for unified target motion perception and re-identification network
Tsintotas et al. Modest-vocabulary loop-closure detection with incremental bag of tracked words
CN104036523A (en) Improved mean shift target tracking method based on surf features
CN103839265A (en) SAR image registration method based on SIFT and normalized mutual information
CN103262121A (en) Detection and tracking of moving objects
CN101383899A (en) Video image stabilizing method for space based platform hovering
CN107590821B (en) Target tracking method and system based on track optimization
CN115240130A (en) Pedestrian multi-target tracking method and device and computer readable storage medium
CN103955680A (en) Action recognition method and device based on shape context
Weng et al. On-line human action recognition by combining joint tracking and key pose recognition
CN106845375A (en) A kind of action identification method based on hierarchical feature learning
CN103955951A (en) Fast target tracking method based on regularization templates and reconstruction error decomposition
CN111862147B (en) Tracking method for multiple vehicles and multiple lines of human targets in video
Florinabel Real-time image processing method to implement object detection and classification for remote sensing images
Yoon et al. Thermal-infrared based drivable region detection
Sri Jamiya An efficient algorithm for real-time vehicle detection using deep neural networks
Han et al. Accurate and robust vanishing point detection method in unstructured road scenes
Yang et al. The research of video tracking based on improved SIFT algorithm
CN106558065A (en) The real-time vision tracking to target is realized based on color of image and texture analysiss
Amini et al. New approach to road detection in challenging outdoor environment for autonomous vehicle

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C02 Deemed withdrawal of patent application after publication (patent law 2001)
WD01 Invention patent application deemed withdrawn after publication

Open date: 20100707