CN106570888A - Target tracking method based on FAST (Features from Accelerated Segment Test) corner point and pyramid KLT (Kanade-Lucas-Tomasi) - Google Patents

Target tracking method based on FAST (Features from Accelerated Segment Test) corner point and pyramid KLT (Kanade-Lucas-Tomasi) Download PDF

Info

Publication number
CN106570888A
CN106570888A (application CN201610991481.6A)
Authority
CN
China
Prior art keywords
corner point
pixel
point
pyramid
KLT
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201610991481.6A
Other languages
Chinese (zh)
Inventor
王敏
关健
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hohai University HHU
Original Assignee
Hohai University HHU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hohai University HHU filed Critical Hohai University HHU
Priority to CN201610991481.6A priority Critical patent/CN106570888A/en
Publication of CN106570888A publication Critical patent/CN106570888A/en
Pending legal-status Critical Current

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 — Indexing scheme for image analysis or image enhancement
    • G06T2207/10 — Image acquisition modality
    • G06T2207/10016 — Video; Image sequence
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 — Indexing scheme for image analysis or image enhancement
    • G06T2207/30 — Subject of image; Context of image processing
    • G06T2207/30232 — Surveillance

Abstract

The invention discloses a target tracking method based on FAST (Features from Accelerated Segment Test) corner points and the pyramid KLT (Kanade-Lucas-Tomasi) algorithm. The method combines the FAST algorithm with the pyramid KLT algorithm: first, the corner points to be tracked are obtained from the first frame of the video by the FAST algorithm; the pyramid KLT method then tracks these corner points, the best corner points are found by layer-by-layer screening, and finally the optical flow information is used to update the target position. The method solves the problem of target tracking in video and makes tracking more accurate. Suitable feature information can be selected efficiently for tracking; the method has high accuracy and strong robustness, is simple, and has a short running time.

Description

A target tracking method based on FAST corner points and pyramid KLT
Technical field
The invention belongs to the field of video analysis, and more particularly relates to a target tracking method based on FAST (Features from Accelerated Segment Test) corner points and the pyramid KLT (Kanade-Lucas-Tomasi, named after its three authors) algorithm.
Background technology
With the rapid development of video surveillance, it has become an objective fact that the massive amount of information in surveillance footage exceeds what human operators can process effectively. Intelligent video analysis technology is an effective means of filtering out this bulk redundancy, and it is currently the image processing technology receiving the most attention in the Chinese security industry. In brief, the technology finds moving objects in video, tracks and analyzes them, detects abnormal behavior in time, and triggers alarms or other intervention measures.
Among current motion-analysis tracking methods, the frame-difference method is the simplest and fastest and is easy to implement in hardware; however, its accuracy is low for dynamic backgrounds and its robustness is poor. The traditional optical-flow segmentation method offers some resistance to interference, but it cannot effectively distinguish the occlusion and uncovering of background caused by target motion, suffers from the aperture problem, is computationally expensive, and requires dedicated hardware support. Its accuracy also degrades when the illumination intensity or the light-source direction changes.
Region matching over image sequences achieves high localization precision, but its heavy computational load makes real-time requirements difficult to meet. Model matching has high tracking accuracy, adapts to the various motion changes of maneuvering targets, and has strong anti-interference capability; however, its analysis is complex, its computation is slow, updating the model is cumbersome, and its real-time performance is poor. Accurately establishing the motion model is the key to successful model matching.
Image sequences contain a large amount of feature information usable for target tracking, such as the target's motion, color, edges and texture. However, a target's features usually vary over time, so choosing suitable feature information that guarantees the validity of tracking is quite difficult.
Summary of the invention
Object of the invention: In view of the problems of the prior art, the invention provides a target tracking method for video that can efficiently select suitable feature information for tracking, with high accuracy and strong robustness.
In order to solve the above technical problem, the technical solution adopted by the present invention is as follows:
A target tracking method based on FAST corner points and pyramid KLT, comprising the following steps:
Step 10: Obtain all pixels from the first frame of the video, obtain the corner points to be tracked by the FAST algorithm, track these corner points with the pyramid KLT method and store them in the set lastSET, then pre-generate from lastSET the set newSET of corner points trackable in the current frame;
Step 20: Judge whether the number of corner points in the trackable set newSET of the current frame is greater than 0; if so, go to step 30, otherwise go to step 50;
Step 30: Using the pyramid KLT method, predict the positions in the current frame of the corner points in newSET, generating the set curSET;
Step 40: Reject irregular corner points from curSET, i.e. corner points inside foreground blobs no longer detected in the video, and corner points that move more than 50 pixels per frame;
Step 50: Judge whether corner detection is needed;
If a new merge event occurs, or the number of corner points in an old merge event falls below 3, corner detection is needed and the method proceeds to step 60; otherwise it proceeds directly to step 80;
Step 60: Perform FAST corner detection on the current frame to obtain newly generated corner points;
Step 70: Update the newly generated corner points from step 60 into curSET;
Step 80: Update the corner points in curSET into lastSET;
Step 90: Use the optical flow information of the corner points in lastSET to update the target position in the video.
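Steps 10 to 90 can be sketched as the following loop. The set names (lastSET, newSET, curSET) follow the text; detect_fast_corners and track_pyramid_klt are hypothetical callables standing in for the FAST and pyramid KLT routines described below, and the merge-event bookkeeping is simplified to a corner-count check.

```python
# Sketch of the tracking loop in steps 10-90; the helper callables are
# placeholders, not the patented implementations.

MAX_JUMP = 50      # step 40: reject corners that move more than 50 pixels/frame
MIN_CORNERS = 3    # step 50: re-detect when fewer than 3 corners remain

def track_video(frames, detect_fast_corners, track_pyramid_klt):
    """Track corners across `frames` and return per-frame target positions
    (the centroid of the tracked corners, as in step 90)."""
    last_set = detect_fast_corners(frames[0])          # step 10
    positions = []
    for frame in frames[1:]:
        new_set = list(last_set)                       # trackable set newSET
        cur_set = []
        if new_set:                                    # step 20
            for corner in new_set:                     # step 30: KLT prediction
                moved = track_pyramid_klt(frame, corner)
                if moved is None:
                    continue                           # step 40: lost corner
                if max(abs(moved[0] - corner[0]),
                       abs(moved[1] - corner[1])) <= MAX_JUMP:
                    cur_set.append(moved)              # step 40: jump filter
        if len(cur_set) < MIN_CORNERS:                 # steps 50-70: re-detect
            cur_set.extend(detect_fast_corners(frame))
        last_set = cur_set                             # step 80
        if last_set:                                   # step 90: centroid update
            cx = sum(p[0] for p in last_set) / len(last_set)
            cy = sum(p[1] for p in last_set) / len(last_set)
            positions.append((cx, cy))
    return positions
```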
In step 60, performing FAST corner detection on the current frame comprises the following steps:
Step 601: Choose a pixel P from the current frame image and denote its brightness value by I_P;
Step 602: Set a minimum threshold γ for excluding pseudo corner points; γ takes the value 10, 11 or 12;
Step 603: Take the pixel P chosen in step 601 as the center of a circle of radius 3 pixels; the boundary of the discretized Bresenham circle contains 16 pixels, which are numbered in sequence;
Step 604: Examine the pixel values at numbers 1 and 9; if both are greater than I_P + γ or both are less than I_P − γ, also examine the pixel values at numbers 15 and 7. If no fewer than 3 of the pixels numbered 1, 7, 9 and 15 have values all greater than I_P + γ or all less than I_P − γ, pixel P is a candidate corner point; if fewer than 3 of the four pixels satisfy the condition, pixel P is not a corner point;
Step 605: After all pixels in the image have been examined according to steps 601 to 604, the 16 pixels on the corresponding Bresenham circle of each qualifying pixel are tested: if no fewer than 9 contiguous pixels on the circle all have values greater than I_P + γ or all less than I_P − γ, this pixel is confirmed as a corner point; if fewer than 9 contiguous pixels satisfy this condition, the pixel is not a corner point.
Further, step 60 also includes selecting among the detected corner points using a decision-tree algorithm.
Further, step 603 also includes a method for deleting adjacent pixels on the circumference of the Bresenham circle: for each detected pixel, its response V is computed, where V is the sum of the absolute deviations between point p and the 16 pixels around it; the responses V of two adjacent feature points are compared and the feature point with the smaller V is deleted.
Beneficial effects: The invention discloses a target tracking method based on FAST corner points and pyramid KLT. The method combines the FAST algorithm and the pyramid KLT algorithm: the corner points to be tracked are first obtained from the first frame of the video by the FAST algorithm, they are tracked with the pyramid KLT method, the best corner points are then found by layer-by-layer screening, and the optical flow information is finally used to update the target position. This solves the target tracking problem in video and makes tracking more accurate. The invention can efficiently select suitable feature information for tracking, with high accuracy and strong robustness; at the same time, the method provided by the invention is simple and its running time is short.
Description of the drawings
Fig. 1 is the workflow diagram of the present invention.
Specific embodiment
The present invention is described in detail below with reference to the accompanying drawings.
As shown in Fig. 1, the target tracking method of the present invention based on FAST corner points and pyramid KLT proceeds as follows:
Step 10: Obtain all pixels from the first frame of the video, obtain the corner points to be tracked by the FAST algorithm, track these corner points with the pyramid KLT method and store them in the set lastSET, then pre-generate from lastSET the set newSET of corner points trackable in the current frame;
The set lastSET stores the historical corner points predicted by pyramid KLT tracking before the current frame; deleting the corner points in regions where target boxes overlap yields the set newSET of corner points that pyramid KLT can track in the current frame. Each foreground blob corresponds to one target box; corner points in overlapping target-box regions are difficult to classify and error-prone, so deleting them effectively reduces the error rate.
When the pyramid KLT algorithm tracks a target in the image, it does not compute offsets for every point inside the target box and tracking box; instead, it selects corner points with stable texture characteristics as tracking points.
Step 20: Judge whether the number of corner points in the trackable set newSET of the current frame is greater than 0; if so, go to step 30; otherwise, go to step 50;
Step 30: Using the pyramid KLT tracking algorithm, predict the positions in the current frame of the corner points in the trackable set newSET, generating the current-frame corner set curSET;
The method for generating the current-frame corner set curSET comprises the following steps:
Step 301: Let the feature window at the current time t + τ be B(X) = B(x, y, t + τ), where X = (x, y) are the viewport coordinates, and let the feature window of the previous frame at time t be A(X − d) = A(x − Δx, y − Δy, t). Then B(X) = A(X − d) + n(X), where n(X) is the noise produced by changes in illumination over the interval τ, d denotes the offset of the feature window during τ, and Δx and Δy denote the offsets of the feature window in the x and y directions, respectively.
Step 302: Squaring n(X) and integrating over the whole feature window gives the SSD (Sum of Squared intensity Differences, hereinafter SSD) of the feature window image:
ε = ∬_V n(X)² ω(X) dX = ∬_V [A(X − d) − B(X)]² ω(X) dX   (1)
where the coordinate vector is X = [x, y]^T, the offset vector is d = [Δx, Δy]^T, ε is the residual, V is the feature-matching window, and ω(X) is a weighting function, which can usually be taken as 1; if the central texture is to be emphasized, ω(X) can take a Gaussian distribution function.
Step 303: When the offset d is small, A(X − d) can be expanded in a Taylor series, discarding the higher-order terms:
A(X − d) ≈ A(X) − g·d   (2)
where g is the gradient vector of A(X), and A(X) is the feature window A(X − d) mapped onto the same plane as the reference image. Substituting formula (2) into formula (1), differentiating both sides of (1) with respect to d and setting the derivative to 0 gives:
∂ε/∂d = 2 ∬_V [A(X) − g·d − B(X)] g ω(X) dX = 0   (3)
at which point ε attains its minimum. Formula (3) can be transformed into:
d · ∬_V g g^T ω(X) dX = ∬_V [A(X) − B(X)] g ω(X) dX   (4)
If we let
Z = ∬_V g g^T ω(X) dX   (5)
e = ∬_V [A(X) − B(X)] g ω(X) dX   (6)
where gx and gy are the first partial derivatives of the window function A(X) in the x and y directions, Gxx = ∬_V gx² ω(X) dX, Gxy = ∬_V gx gy ω(X) dX and Gyy = ∬_V gy² ω(X) dX, then formula (4) can be expressed as:
Z d = e   (7)
where Z is the 2 × 2 matrix [[Gxx, Gxy], [Gxy, Gyy]] and e represents the computed residual vector.
Step 304: For each pair of frames, solving equation (7) yields the displacement d = (Δx, Δy) of the feature window.
Expanding formula (6) gives:
e = [Ex, Ey]^T,  where Ex = ∬_V [A(X) − B(X)] gx ω(X) dX and Ey = ∬_V [A(X) − B(X)] gy ω(X) dX   (8)
Substituting (8) into formula (7) gives:
[Gxx Gxy; Gxy Gyy] [Δx; Δy] = [Ex; Ey]   (9)
Solving equation (9) gives:
Δx = (Gyy Ex − Gxy Ey) / (Gxx Gyy − Gxy²),  Δy = (Gxx Ey − Gxy Ex) / (Gxx Gyy − Gxy²)   (10)
This is the displacement, from which the position in the current frame of each corner point in the trackable set newSET can be obtained.
In the pyramid KLT tracking algorithm, not every feature window containing texture information is suitable for tracking. For a given feature window, when the eigenvalues λ1 and λ2 of the matrix Z satisfy the condition λ2 ≥ λ1 > λmax, the window tracks well; the threshold λmax is determined experimentally according to the shooting conditions.
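Equations (1) through (10) reduce to building the 2 × 2 system Z d = e from image gradients and solving it. The following numpy sketch does one such solve over a whole window, with weighting ω(X) = 1 and a minimum-eigenvalue texture test standing in for the λmax condition; it is illustrative, not the patented implementation.

```python
import numpy as np

def klt_step(A, B, lam_min=1e-3):
    """One KLT displacement solve over a whole window (equations (1)-(10)):
    build Z = [[Gxx, Gxy], [Gxy, Gyy]] and e = [Ex, Ey] from the gradients
    of A and the difference A - B, then solve Z d = e for d = (dx, dy).
    Returns None when the smaller eigenvalue of Z is below lam_min
    (an untrackable flat or edge-only window)."""
    A = A.astype(float)
    B = B.astype(float)
    gy, gx = np.gradient(A)                        # g = (gx, gy), w(X) = 1
    diff = A - B                                   # A(X) - B(X)
    Z = np.array([[np.sum(gx * gx), np.sum(gx * gy)],
                  [np.sum(gx * gy), np.sum(gy * gy)]])
    e = np.array([np.sum(diff * gx), np.sum(diff * gy)])
    if np.linalg.eigvalsh(Z)[0] < lam_min:         # texture (eigenvalue) test
        return None
    return np.linalg.solve(Z, e)                   # displacement d
```

A synthetic second frame built as B = A − 0.2·gx is consistent with a pure x-shift of 0.2 pixels, which the solver recovers.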
Step 40: Reject irregular corner points from curSET; for example, corner points not inside a detected foreground blob, or with an excessive motion offset, are all rejected.
Step 50: Judge whether corner detection is needed. If a new merge event occurs, or the number of corner points in an old merge event becomes too small (fewer than 3 corner points for a single target), corner detection is needed and the method proceeds to step 60; otherwise it proceeds directly to step 80.
Step 60: Perform FAST corner detection on the current frame.
A FAST corner point is defined as follows: if a pixel lies in a different region from a sufficient number of pixels in its surrounding neighborhood, the pixel may be a corner point. In other words, some of its attributes are unusual; considering a gray-scale image, if the gray value of the point is sufficiently larger or smaller than the gray values of enough pixels in its surrounding neighborhood, the point may be a corner point. FAST corner detection comprises the following steps:
Step 601: Choose a pixel P from the image and denote its brightness value by I_P.
Step 602: Set a minimum threshold γ that can quickly exclude pseudo corner points; γ usually takes 10, 11 or 12.
Step 603: Take the pixel chosen in step 601 as the center of a circle of radius 3 pixels; the boundary of the discretized Bresenham circle contains 16 pixels, which are numbered in sequence.
Step 604: Examine the pixels numbered 1 and 9; if both their values are greater than I_P + γ or both are less than I_P − γ, also examine the pixels numbered 15 and 7. If no fewer than 3 of the four pixels have values all greater than I_P + γ or all less than I_P − γ, pixel P is a candidate corner point; if fewer than 3 of the four pixels satisfy the condition, pixel P is not a corner point.
Step 605: After all pixels have been examined according to steps 601 to 604, the 16 pixels on the corresponding Bresenham circle of each qualifying pixel are tested: if no fewer than 9 contiguous pixels on the circle all have values greater than I_P + γ or all less than I_P − γ, this pixel is confirmed as a corner point; if fewer than 9 contiguous pixels satisfy this condition, the pixel is not a corner point.
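A minimal pure-Python sketch of the segment test in steps 601–605 follows. The 16 Bresenham circle offsets are the standard ones (the patent does not list them), and the quick-rejection indices follow the text's numbering 1, 9, 15, 7; this is illustrative, not the patented implementation.

```python
# Standard 16-pixel Bresenham circle of radius 3 around (0, 0), numbered 1..16
# clockwise from the top (index 0 here corresponds to number 1 in the text).
CIRCLE = [(0, 3), (1, 3), (2, 2), (3, 1), (3, 0), (3, -1), (2, -2), (1, -3),
          (0, -3), (-1, -3), (-2, -2), (-3, -1), (-3, 0), (-3, 1), (-2, 2), (-1, 3)]

def is_fast_corner(img, x, y, gamma=10, n_contig=9):
    """Segment test of steps 601-605: P is a corner if at least n_contig
    contiguous circle pixels are all brighter than I_P + gamma or all
    darker than I_P - gamma."""
    h, w = len(img), len(img[0])
    if not (3 <= x < w - 3 and 3 <= y < h - 3):
        return False                       # circle would leave the image
    ip = img[y][x]
    # classify the 16 circle pixels: +1 brighter, -1 darker, 0 similar
    states = [1 if img[y + dy][x + dx] > ip + gamma
              else (-1 if img[y + dy][x + dx] < ip - gamma else 0)
              for dx, dy in CIRCLE]
    # quick rejection (step 604): numbers 1, 9, 15, 7 (0-indexed 0, 8, 14, 6);
    # at least 3 of the 4 must agree on brighter or darker
    compass = [states[0], states[8], states[14], states[6]]
    if max(compass.count(1), compass.count(-1)) < 3:
        return False
    # full test (step 605): longest wrap-around run of identical states
    doubled = states + states
    for s in (1, -1):
        run = 0
        for v in doubled:
            run = run + 1 if v == s else 0
            if run >= n_contig:
                return True
    return False
```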
The present invention can also select the best corner points with a corner classifier; the main approach uses a decision-tree algorithm to select the best of the detected corner points. The concrete method is as follows:
(1) Choose several frames from the target application scenario to form the picture set to be learned;
(2) Obtain all corner features on the training set with the FAST corner detection algorithm;
(3) For each feature point, store the 16 pixels around it in a vector; do the same for all frames to obtain the vectors of all feature points;
(4) Each pixel (denote it x) of these 16 pixels can be in one of the following three states relative to P: darker (I_x ≤ I_P − γ), similar (I_P − γ < I_x < I_P + γ) or brighter (I_x ≥ I_P + γ);
(5) By these states, the feature vector x partitions the points into 3 subsets Pd, Ps and Pb;
(6) Define a new Boolean variable K_P: if P is a corner point, K_P is true; otherwise K_P is false;
(7) Query each subset using the ID3 algorithm (a decision-tree classifier);
(8) Recursively compute on all subsets until their entropy is 0;
(9) The decision tree thus created is then used for FAST detection on other pictures.
In step 603, pixels on the boundary of the discretized Bresenham circle may be crowded together; this can be solved by non-maximal suppression (Non-Maximal Suppression). The concrete method is: for each detected pixel, compute its response magnitude V, where V is the sum of the absolute deviations between point p and the 16 pixels around it; compare the response magnitudes V of two adjacent feature points and delete the feature point with the smaller V.
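The response V and the suppression step can be sketched as follows; the 3-pixel adjacency radius is an assumption (the text only says "adjacent"), and ties keep both corners.

```python
# Response V and non-maximal suppression for crowded FAST corners.
CIRCLE = [(0, 3), (1, 3), (2, 2), (3, 1), (3, 0), (3, -1), (2, -2), (1, -3),
          (0, -3), (-1, -3), (-2, -2), (-3, -1), (-3, 0), (-3, 1), (-2, 2), (-1, 3)]

def fast_score(img, x, y):
    """Response V: sum of absolute deviations between P and its 16 circle pixels."""
    ip = img[y][x]
    return sum(abs(img[y + dy][x + dx] - ip) for dx, dy in CIRCLE)

def non_max_suppress(corners, scores, radius=3):
    """Of two adjacent corners (within `radius` pixels), keep only the one
    with the larger response V."""
    kept = []
    for i, (x, y) in enumerate(corners):
        if all(scores[i] >= scores[j]
               for j, (x2, y2) in enumerate(corners)
               if j != i and abs(x2 - x) <= radius and abs(y2 - y) <= radius):
            kept.append((x, y))
    return kept
```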
Step 70: Update the newly generated corner points into curSET;
For a new merge event, the detected corner points are added directly to curSET; for an old merge event, besides adding the new corner points, old corner points identical to new ones must also be removed.
Step 80: Update the corner points in curSET into lastSET;
Step 90: Use the optical flow information of the corner points in lastSET to update the target position.
Optical flow is the apparent motion of the image brightness pattern, i.e. the 2D instantaneous motion of points in the image; given any image brightness pattern, parallax can be measured by matching feature points over time. Taking the corner points in the image as feature points, corner points are first detected and tracked in the image sequence and their positions in the sequence are recorded; the optical flow field can then be computed from the displacement field of the corner points between adjacent frames. Optical flow estimation is based on the following assumption: changes in the image brightness distribution are caused by the motion of the target or the background; that is, the gray values of the target and the background do not change over time.
Using the optical flow information of the corner points in lastSET to update the target position specifically comprises the following steps:
Step 901: Process a continuous sequence of video frames;
Step 902: For each video sequence, detect foreground targets that may appear, using FAST corner detection and the pyramid KLT algorithm;
Step 903: If a foreground target appears in some frame, find its representative corner features;
Step 904: For any two adjacent video frames after the frame in step 903, find the optimal positions in the current frame of the key feature points present in the previous frame, thereby obtaining the position coordinates of the foreground target in the current frame;
Step 905: Repeat steps 901 to 904 continuously to achieve tracking of the target.
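The "pyramid" in pyramid KLT refers to a coarse-to-fine image pyramid: the displacement estimated at a coarse level is doubled and used as the starting guess one level down, so large motions remain tractable. The patent does not specify the downsampling filter; this sketch assumes plain 2 × 2 averaging.

```python
import numpy as np

def build_pyramid(img, levels=3):
    """Coarse-to-fine image pyramid for pyramid KLT: each level halves the
    resolution by 2x2 averaging."""
    pyr = [img.astype(float)]
    for _ in range(levels - 1):
        a = pyr[-1]
        h, w = (a.shape[0] // 2) * 2, (a.shape[1] // 2) * 2  # drop odd edge
        pyr.append((a[:h:2, :w:2] + a[1:h:2, :w:2] +
                    a[:h:2, 1:w:2] + a[1:h:2, 1:w:2]) / 4.0)
    return pyr
```

Tracking then runs from pyr[-1] (coarsest) down to pyr[0], scaling the running displacement by 2 between levels.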
For the optical flow information of the corner points: after FAST corner detection and the pyramid KLT algorithm have extracted the target's corner points, a target region is also selected to eliminate background corner points. The optical flow method is then used in the next frame to match the target corner points: for each subsequent frame to be matched, point matching with the optical flow method finds the corresponding position of each corner point in the new frame. Finally, the areal concentration of the matched points is obtained, and the centroid algorithm is used to compute the miss distance of the target.
In the optical-flow matching result, a few points may deviate from the target position, so the target region cannot be defined by any single corner point; instead, all the results are weighed together. Although the target may rotate during motion or become blurred by imaging effects, most matched points remain near the target and only a few points fall outside it. In view of this, the center of gravity of all matched points is computed with the centroid formula, and the miss distance of the target is determined from it, ensuring that the target can be accurately registered and tracked.
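The center-of-gravity computation above can be sketched as follows; matches and target_center are hypothetical inputs (matched corner positions and the current target center).

```python
def centroid_miss_distance(matches, target_center):
    """Center of gravity of all matched corner positions, and its offset
    (the miss distance) from the current target center."""
    n = len(matches)
    cx = sum(x for x, _ in matches) / n
    cy = sum(y for _, y in matches) / n
    return (cx, cy), (cx - target_center[0], cy - target_center[1])
```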

Claims (4)

1. A target tracking method based on FAST corner points and pyramid KLT, characterized by comprising the following steps:
Step 10: Obtain a video image sequence and obtain all pixels from the first frame of the video; obtain the corner points to be tracked by the FAST algorithm; track these corner points with the pyramid KLT method and store them in the set lastSET; then pre-generate from lastSET the set newSET of corner points trackable in the current frame;
Step 20: Judge whether the number of corner points in the trackable set newSET of the current frame is greater than 0; if so, go to step 30; otherwise, go to step 50;
Step 30: Using the pyramid KLT method, predict the positions in the current frame of the corner points in newSET, generating the set curSET;
Step 40: Reject irregular corner points from curSET, the irregular corner points being corner points inside foreground blobs no longer detected in the video, or corner points that move more than 50 pixels per frame;
Step 50: Judge whether corner detection is needed;
If a new merge event occurs, or the number of corner points in an old merge event falls below 3, corner detection is needed and the method proceeds to step 60; otherwise it proceeds directly to step 80;
Step 60: Perform FAST corner detection on the current frame to obtain newly generated corner points;
Step 70: Update the newly generated corner points from step 60 into curSET;
Step 80: Update the corner points in curSET into lastSET;
Step 90: Use the optical flow information of the corner points in lastSET to update the target position in the video.
2. The target tracking method based on FAST corner points and pyramid KLT according to claim 1, characterized in that, in step 60, performing FAST corner detection on the current frame comprises the following steps:
Step 601: Choose a pixel P from the current frame image and denote its brightness value by I_P;
Step 602: Set a minimum threshold γ for excluding pseudo corner points; γ takes the value 10, 11 or 12;
Step 603: Take the pixel P chosen in step 601 as the center of a circle of radius 3 pixels; the boundary of the discretized Bresenham circle contains 16 pixels, which are numbered in sequence;
Step 604: Examine the pixel values at numbers 1 and 9; if both are greater than I_P + γ or both are less than I_P − γ, also examine the pixel values at numbers 15 and 7; if no fewer than 3 of the pixels numbered 1, 7, 9 and 15 have values all greater than I_P + γ or all less than I_P − γ, pixel P is a candidate corner point; if fewer than 3 of the four pixels satisfy the condition, pixel P is not a corner point;
Step 605: After all pixels in the image have been examined according to steps 601 to 604, the 16 pixels on the corresponding Bresenham circle of each qualifying pixel are tested: if no fewer than 9 contiguous pixels on the circle all have values greater than I_P + γ or all less than I_P − γ, this pixel is confirmed as a corner point; if fewer than 9 contiguous pixels satisfy this condition, the pixel is not a corner point.
3. The target tracking method based on FAST corner points and pyramid KLT according to claim 1, characterized in that step 60 further includes selecting among the detected corner points using a decision-tree algorithm.
4. The target tracking method based on FAST corner points and pyramid KLT according to claim 2, characterized in that step 603 further includes a method for deleting adjacent pixels on the circumference of the Bresenham circle: for each detected pixel, compute its response V, where V is the sum of the absolute deviations between point p and the 16 pixels around it; compare the responses V of two adjacent feature points and delete the feature point with the smaller V.
CN201610991481.6A 2016-11-10 2016-11-10 Target tracking method based on FAST (Features from Accelerated Segment Test) corner point and pyramid KLT (Kanade-Lucas-Tomasi) Pending CN106570888A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610991481.6A CN106570888A (en) 2016-11-10 2016-11-10 Target tracking method based on FAST (Features from Accelerated Segment Test) corner point and pyramid KLT (Kanade-Lucas-Tomasi)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610991481.6A CN106570888A (en) 2016-11-10 2016-11-10 Target tracking method based on FAST (Features from Accelerated Segment Test) corner point and pyramid KLT (Kanade-Lucas-Tomasi)

Publications (1)

Publication Number Publication Date
CN106570888A (en) 2017-04-19

Family

ID=58541277

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610991481.6A Pending CN106570888A (en) 2016-11-10 2016-11-10 Target tracking method based on FAST (Features from Accelerated Segment Test) corner point and pyramid KLT (Kanade-Lucas-Tomasi)

Country Status (1)

Country Link
CN (1) CN106570888A (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107390704A (en) * 2017-07-28 2017-11-24 西安因诺航空科技有限公司 Multi-rotor UAV optical-flow hovering method based on IMU pose compensation
CN108280430A (en) * 2018-01-24 2018-07-13 陕西科技大学 Flow image recognition method
CN110428397A (en) * 2019-06-24 2019-11-08 武汉大学 Corner point detection method based on event frames
CN112884817A (en) * 2019-11-29 2021-06-01 中移物联网有限公司 Dense optical flow calculation method, dense optical flow calculation device, electronic device, and storage medium
CN113781525A (en) * 2021-09-13 2021-12-10 陕西铁路工程职业技术学院 Three-dimensional target tracking algorithm research based on original CAD model
CN114264297A (en) * 2021-12-01 2022-04-01 清华大学 Positioning and mapping method and system for UWB and visual SLAM fusion algorithm

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102799883A (en) * 2012-06-29 2012-11-28 广州中国科学院先进技术研究所 Method and device for extracting movement target from video image
US20150229841A1 (en) * 2012-09-18 2015-08-13 Hangzhou Hikvision Digital Technology Co., Ltd. Target tracking method and system for intelligent tracking high speed dome camera
CN105469427A (en) * 2015-11-26 2016-04-06 河海大学 Target tracking method applied to videos

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102799883A (en) * 2012-06-29 2012-11-28 广州中国科学院先进技术研究所 Method and device for extracting movement target from video image
US20150229841A1 (en) * 2012-09-18 2015-08-13 Hangzhou Hikvision Digital Technology Co., Ltd. Target tracking method and system for intelligent tracking high speed dome camera
CN105469427A (en) * 2015-11-26 2016-04-06 河海大学 Target tracking method applied to videos

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107390704A (en) * 2017-07-28 2017-11-24 西安因诺航空科技有限公司 Multi-rotor UAV optical-flow hovering method based on IMU pose compensation
CN108280430A (en) * 2018-01-24 2018-07-13 陕西科技大学 Flow image recognition method
CN108280430B (en) * 2018-01-24 2021-07-06 陕西科技大学 Flow image identification method
CN110428397A (en) * 2019-06-24 2019-11-08 武汉大学 Corner point detection method based on event frames
CN112884817A (en) * 2019-11-29 2021-06-01 中移物联网有限公司 Dense optical flow calculation method, dense optical flow calculation device, electronic device, and storage medium
CN112884817B (en) * 2019-11-29 2022-08-02 中移物联网有限公司 Dense optical flow calculation method, dense optical flow calculation device, electronic device, and storage medium
CN113781525A (en) * 2021-09-13 2021-12-10 陕西铁路工程职业技术学院 Three-dimensional target tracking algorithm research based on original CAD model
CN113781525B (en) * 2021-09-13 2024-01-23 陕西铁路工程职业技术学院 Three-dimensional target tracking method based on original CAD model
CN114264297A (en) * 2021-12-01 2022-04-01 清华大学 Positioning and mapping method and system for UWB and visual SLAM fusion algorithm

Similar Documents

Publication Publication Date Title
CN106570888A (en) Target tracking method based on FAST (Features from Accelerated Segment Test) corner point and pyramid KLT (Kanade-Lucas-Tomasi)
CN105469427B Target tracking method for video
CN105404847B Real-time detection method for leftover objects
Park et al. Comparative study of vision tracking methods for tracking of construction site resources
CN106910204B Method and system for automatic tracking and recognition of ships at sea
CN104766071B Fast traffic light detection algorithm applied to driverless vehicles
CN109918971B (en) Method and device for detecting number of people in monitoring video
CN105279772B Trackability discrimination method for infrared sequence images
CN108198201A Multi-object tracking method, terminal device and storage medium
CN107248174A Target tracking method based on the TLD algorithm
Chen et al. An improved edge detection algorithm for depth map inpainting
CN102760230B (en) Flame detection method based on multi-dimensional time domain characteristics
Jiang et al. Multiple pedestrian tracking using colour and motion models
CN108764338B (en) Pedestrian tracking method applied to video analysis
CN109166137A Moving object detection algorithm for jittering video sequences
CN105740751A (en) Object detection and identification method and system
Stone et al. Forward looking anomaly detection via fusion of infrared and color imagery
CN112364865A (en) Method for detecting small moving target in complex scene
TWI729587B (en) Object localization system and method thereof
CN108491857A Multi-camera target matching method with overlapping fields of view
CN106127813B Surveillance video motion segment partitioning method based on visual energy perception
CN104282013B Image processing method and device for foreground target detection
Roy et al. A comprehensive survey on computer vision based approaches for moving object detection
CN114241372A (en) Target identification method applied to sector-scan splicing
CN104349125B Area monitoring method and equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20170419