CN103761523A - Automatic identification and tracking method for airborne remote sensing video in specific man-made area - Google Patents


Info

Publication number
CN103761523A
CN103761523A (application CN201410001565.1A)
Authority
CN
China
Prior art keywords
key point
point
image
template
time
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201410001565.1A
Other languages
Chinese (zh)
Inventor
毕福昆
章菲菲
陈禾
谢宜壮
陈亮
龙腾
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Institute of Technology BIT
Original Assignee
Beijing Institute of Technology BIT
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Institute of Technology (BIT)
Priority to CN201410001565.1A
Publication of CN103761523A
Legal status: Pending

Abstract

The invention provides a method for automatically identifying and tracking a specific man-made area in airborne remote sensing video. The method comprises an offline stage and an online stage. The offline stage comprises detecting extrema in scale space, locating key points, assigning key-point orientations, and generating key-point descriptors. The online stage comprises extracting SIFT features, precisely matching key points, determining the extent of the target area to be tracked, and adaptively determining, through trajectory prediction, the local area that contains the target to be tracked.

Description

A method for automatic recognition and tracking of a specific man-made area in airborne remote sensing video
Technical field
The invention belongs to the field of remote sensing image processing, and specifically relates to a method for automatically recognizing and tracking a specific man-made area in airborne remote sensing video.
Background art
Most tracking algorithms for moving targets in airborne remote sensing video sequences first locate the target roughly by differencing consecutive frames, then apply a series of false-alarm rejection and prediction algorithms to achieve tracking. In practice, however, video sequences are often degraded by noise, shooting conditions and poor image quality, and the image background may be relatively complex, so such methods struggle to extract the target cleanly; the tracking result then drifts or even fails. Moreover, frame differencing cannot identify or track targets that are relatively static in the video sequence, which is a fundamental limitation.
To address these problems, improved techniques that fuse the SIFT operator with other algorithms have been proposed to track static targets. SIFT is a scale-space-based local image feature descriptor that is invariant to image scaling and rotation and robust even under affine transformation, but its computation is complex and expensive, so it generally cannot meet real-time requirements. Even when static targets can be tracked, most improved algorithms still require human intervention at program initialization to delineate the target to be tracked, and therefore do not achieve truly automatic identification and tracking.
Summary of the invention
To solve the above technical problems, the invention provides a method for automatically recognizing and tracking a specific man-made area in airborne remote sensing video data.
In the offline stage, the method for automatically recognizing and tracking a specific man-made area in airborne remote sensing video data comprises the following steps:
(1) Scale-space extremum detection: for a given two-dimensional image, repeatedly apply Gaussian convolution, 2x down-sampling and image subtraction to build the scale space of the image, producing a Gaussian pyramid and a difference-of-Gaussians (DoG) pyramid; perform extremum detection on every layer of the difference pyramid except the first and the last, obtaining the position coordinates of the scale-space extreme points;
(2) Key-point localization: fit a three-dimensional quadratic function to the scale-space DoG function, set a threshold to reject unstable extreme points with low contrast, and use the second-order Hessian matrix to remove extreme points with strong edge responses, obtaining accurately positioned key-point coordinates;
(3) Orientation assignment: compute the gradient magnitude and angle of the pixels in each key point's neighbourhood, accumulate the magnitudes into a histogram over gradient angle, fit a quadratic to the histogram, and determine the dominant orientation and auxiliary orientations of the key point;
(4) Key-point description: rotate each key point's neighbourhood to its dominant orientation, partition the image region around the key point into blocks, compute a gradient histogram inside each block, and generate a unique 128-dimensional vector; clip the gradient magnitudes of this vector and normalize it, yielding a descriptor that characterizes the image region around the key point.
In the extremum detection of step (1) of the offline stage, each candidate point is compared with its 8 neighbours at the same scale and the 9 x 2 corresponding points at the scales above and below, 26 points in total, to obtain the position coordinates of the scale-space extreme points.
In the online stage, the method for automatically recognizing and tracking a specific man-made area in airborne remote sensing video data comprises the following steps:
(1) SIFT feature extraction: binarize the first image of each octave of the original difference-of-Gaussians pyramid with an adaptive threshold, extract the edge region, dilate it by a fixed radius, perform extreme-point detection only inside this determined region, and finally generate the descriptors;
(2) Precise key-point matching, comprising:
A. Preliminary matching: measure the similarity between the 128-dimensional descriptors of the key points in the template and in the live image by computing their Euclidean distance;
B. Removal of many-to-one matches: sort the coordinates of the matched pairs according to the matching results in the live image and reject redundant matches, obtaining one-to-one matches;
C. Rejection of erroneous match pairs;
(3) Determination of the extent of the target area to be tracked: after the correct matches are obtained, randomly choose 3 pairs; let the template key points be $(x_1, y_1)$, $(x_2, y_2)$, $(x_3, y_3)$ and the corresponding match points in the live image be $(x_1', y_1')$, $(x_2', y_2')$, $(x_3', y_3')$, and substitute them into the following affine transformation model:

$$\begin{bmatrix} x_1 & y_1 & 0 & 0 & 1 & 0 \\ 0 & 0 & x_1 & y_1 & 0 & 1 \\ x_2 & y_2 & 0 & 0 & 1 & 0 \\ 0 & 0 & x_2 & y_2 & 0 & 1 \\ x_3 & y_3 & 0 & 0 & 1 & 0 \\ 0 & 0 & x_3 & y_3 & 0 & 1 \end{bmatrix} \begin{bmatrix} m_1 \\ m_2 \\ m_3 \\ m_4 \\ t_x \\ t_y \end{bmatrix} = \begin{bmatrix} x_1' \\ y_1' \\ x_2' \\ y_2' \\ x_3' \\ y_3' \end{bmatrix}$$

Writing this as $Ax = b$ gives $x = [A^T A]^{-1} A^T b$, i.e. the affine transformation vector $[m_1, m_2, m_3, m_4, t_x, t_y]^T$. Substituting the boundary of the given template image into the transformation model yields the boundary of the target's position in the live image, i.e. the extent of the local tracking area; adding a boundary redundancy of a few pixels determines the extent of the target area to be tracked in the next image. Likewise, substituting the centroid of the template match points into the transformation model gives the template centroid in the live image;
(4) Adaptive determination of the local area containing the target through trajectory prediction: predict the trajectory by parabolic extrapolation; using the extent of the obtained local tracking area, crop from the next live frame a small sub-image certain to contain the target; perform SIFT feature extraction on it, finding the accurately located scale-space extreme points, assigning dominant orientations and generating local descriptors; then precisely match against the key points of the template image to confirm whether the target lies inside the selected local area; if the target is confirmed, continue tracking within the local area, otherwise perform feature extraction and matching on the full live image again.
The precise key-point matching uses exhaustive matching:
The descriptor of a key point in the template is denoted $R_i = (r_{i1}, r_{i2}, \ldots, r_{i128})$, and the descriptor of a key point in the live image is denoted $S_j = (s_{j1}, s_{j2}, \ldots, s_{j128})$.
The Euclidean distance between any two descriptors is:

$$d(R_i, S_j) = \sqrt{\sum_{k=1}^{128} (r_{ik} - s_{jk})^2}$$
The key-point descriptor $S_j$ in the live image that matches the template descriptor $R_i$ must satisfy:

$$\frac{d(R_i, S_j)}{d(R_i, S_k)} < thr$$

where $d(R_i, S_j)$ is the smallest of all Euclidean distances to $R_i$, $d(R_i, S_k)$ is the second-smallest, and $thr$ is set to 0.8.
The erroneous match pairs are rejected as follows:
The coordinate of a key point in the template is $r_i = (x_i, y_i)$ and the template centroid is $c_1$; the distance between a key point and the centroid is the Euclidean distance. Likewise, the coordinate of a key point in the live image is $s_j = (x_j, y_j)$ with centroid $c_2$. For a matched pair $(r_i, s_i)$, define the ratio:

$$R_i = \frac{d(r_i, c_1)}{d(s_i, c_2)}$$
Sort the ratios of all matched pairs and reject the abruptly larger ones; the remaining matches then mostly concentrate within a certain range, which completes the first rejection pass.
After the first rejection pass, a second pass is performed as follows: a new centroid coordinate is computed to replace the original centroid, and the rejection is applied again.
Beneficial effects of the invention:
(1) For a specific man-made construction area with salient edge features, the edge region is extracted and dilated by a fixed radius, and feature-point detection is performed only within the processed region; this both meets the real-time requirement and still extracts enough feature points for the subsequent matching.
(2) A layered matching scheme of preliminary matching, many-to-one removal and erroneous-match rejection prevents mismatches.
(3) The target position is determined by computation and prediction, and through adaptive windowing only a small region containing the target is cropped for tracking and recognition, which improves tracking speed.
Brief description of the drawings
Fig. 1 is a flow chart of the method of the invention for automatically recognizing and tracking a specific man-made area in airborne remote sensing video data.
Embodiment
The invention is described below with reference to the accompanying drawing and an embodiment.
As shown in Fig. 1, in the offline stage SIFT features are first extracted from the chosen remote sensing template image. The main steps are as follows:
(1) Scale-space extremum detection: for a given two-dimensional image, repeatedly apply Gaussian convolution, 2x down-sampling and image subtraction to build the scale space of the image, producing a Gaussian pyramid and a difference-of-Gaussians (DoG) pyramid. Perform extremum detection on every layer of the difference pyramid except the first and the last: each candidate point is compared with its 8 neighbours at the same scale and the 9 x 2 corresponding points at the scales above and below, 26 points in total, to obtain the position coordinates of the scale-space extreme points;
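Step (1) can be sketched in Python with numpy. This is a minimal illustration, not the patent's implementation: the function names, octave count, scale count and base sigma are illustrative choices, and a hand-rolled separable blur stands in for a library Gaussian filter.

```python
import numpy as np

def gaussian_blur(img, sigma):
    """Separable Gaussian blur with edge padding (stand-in for a library call)."""
    r = max(1, int(3 * sigma))
    x = np.arange(-r, r + 1)
    k = np.exp(-x ** 2 / (2.0 * sigma ** 2))
    k /= k.sum()
    conv = lambda m: np.convolve(np.pad(m, r, mode='edge'), k, mode='valid')
    return np.apply_along_axis(conv, 1, np.apply_along_axis(conv, 0, img))

def dog_pyramid(img, octaves=2, scales=4, sigma0=1.6):
    """Gaussian convolution, subtraction and 2x down-sampling per octave."""
    dogs = []
    base = img.astype(np.float64)
    k = 2.0 ** (1.0 / max(1, scales - 3))
    for _ in range(octaves):
        gauss = [gaussian_blur(base, sigma0 * k ** i) for i in range(scales)]
        dogs.append([gauss[i + 1] - gauss[i] for i in range(scales - 1)])
        base = base[::2, ::2]                  # 2x down-sampling for the next octave
    return dogs

def is_extremum(dog_octave, s, y, x):
    """26-neighbour test: 8 same-scale neighbours plus 9 above and 9 below."""
    v = dog_octave[s][y, x]
    cube = np.stack([dog_octave[s + d][y - 1:y + 2, x - 1:x + 2]
                     for d in (-1, 0, 1)]).ravel().tolist()
    cube.pop(13)                               # remove the candidate point itself
    return v > max(cube) or v < min(cube)
```

Candidate key points are the (s, y, x) positions on the middle DoG layers for which `is_extremum` returns True.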
(2) Key-point localization: fit a three-dimensional quadratic function to the scale-space DoG function and set a threshold to reject unstable extreme points with low contrast. Then use the second-order Hessian matrix to remove extreme points with strong edge responses, obtaining accurately positioned key-point coordinates;
(3) Orientation assignment: compute the gradient magnitude and angle of the pixels in each key point's neighbourhood, accumulate the magnitudes into a histogram over gradient angle, fit a quadratic to the histogram, and determine the dominant orientation and auxiliary orientations of the key point;
(4) Key-point description: rotate each key point's neighbourhood to its dominant orientation, partition the image region around the key point into blocks, compute a gradient histogram inside each block, and generate a unique 128-dimensional vector; clip the gradient magnitudes of this vector and normalize it, yielding a descriptor that characterizes the image region around the key point.
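The gradient-magnitude clipping and normalization at the end of step (4) can be sketched as follows. The 0.2 clipping threshold is the value conventionally used for SIFT descriptors, not a value stated in the patent, and the function name is illustrative.

```python
import numpy as np

def normalize_descriptor(vec, clip=0.2):
    """Normalize a raw 128-D histogram vector, clip large components to limit
    the influence of strong gradients, then renormalize to unit length."""
    v = np.asarray(vec, dtype=np.float64)
    v = v / (np.linalg.norm(v) + 1e-12)   # first normalization
    v = np.minimum(v, clip)               # gradient amplitude restriction
    return v / (np.linalg.norm(v) + 1e-12)  # renormalize
```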
In the online stage, fast feature matching is performed on the edge-salient scene:
(1) SIFT feature extraction: binarize the first image of each octave of the original difference-of-Gaussians pyramid with an adaptive threshold, extract the edge region, then dilate it by a fixed radius. Perform extreme-point detection only inside this determined region, and finally generate the descriptors;
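One possible reading of this step is sketched below; it is an assumption-laden illustration, since the patent does not give the exact thresholding rule. Here the adaptive threshold is taken as mean plus one standard deviation of the gradient magnitude, and the fixed-radius dilation is done by shifting the mask (np.roll wraps at the border, which is acceptable for a sketch).

```python
import numpy as np

def edge_region_mask(img, radius=2):
    """Restrict feature detection to strong-edge areas: adaptively threshold
    the gradient magnitude, then dilate the binary mask by a fixed radius."""
    gy, gx = np.gradient(img.astype(np.float64))
    mag = np.hypot(gx, gy)
    mask = mag > mag.mean() + mag.std()        # illustrative adaptive threshold
    out = np.zeros_like(mask)
    for dy in range(-radius, radius + 1):      # fixed-radius dilation
        for dx in range(-radius, radius + 1):
            out |= np.roll(np.roll(mask, dy, axis=0), dx, axis=1)
    return out
```

Extreme-point detection would then be run only where the returned mask is True.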
(2) Precise key-point matching. 1. Preliminary matching: measure the similarity between the 128-dimensional descriptors of the key points in the template and in the live image by computing their Euclidean distance. Key points are matched exhaustively:
The descriptor of a key point in the template is denoted $R_i = (r_{i1}, r_{i2}, \ldots, r_{i128})$, and the descriptor of a key point in the live image is denoted $S_j = (s_{j1}, s_{j2}, \ldots, s_{j128})$.
The Euclidean distance between any two descriptors is:

$$d(R_i, S_j) = \sqrt{\sum_{k=1}^{128} (r_{ik} - s_{jk})^2}$$
The key-point descriptor $S_j$ in the live image that matches the template descriptor $R_i$ must satisfy:

$$\frac{d(R_i, S_j)}{d(R_i, S_k)} < thr$$

where $d(R_i, S_j)$ is the smallest of all Euclidean distances to $R_i$ and $d(R_i, S_k)$ is the second-smallest; following Lowe's suggestion, $thr$ is set to 0.8.
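The exhaustive nearest/second-nearest ratio test can be sketched as follows; the function name and the list-of-pairs return format are illustrative.

```python
import numpy as np

def ratio_match(tmpl, live, thr=0.8):
    """Exhaustive matching: for each template descriptor find its nearest and
    second-nearest live descriptors by Euclidean distance, and accept the pair
    when nearest < thr * second-nearest (thr = 0.8 as in the patent)."""
    matches = []
    for i, r in enumerate(tmpl):
        d = np.linalg.norm(live - r, axis=1)   # distances to all live descriptors
        order = np.argsort(d)
        if len(d) > 1 and d[order[0]] < thr * d[order[1]]:
            matches.append((i, int(order[0])))
    return matches
```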
2. Remove many-to-one matches: sort the coordinates of the matched pairs according to the matching results in the live image and reject redundant matches, obtaining one-to-one matches.
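One way to obtain one-to-one matches is to keep, for each live-image keypoint, only the closest template keypoint. The triple representation (template index, live index, distance) is an assumed format, not one specified by the patent.

```python
def one_to_one(matches):
    """Remove many-to-one matches: when several template keypoints hit the
    same live-image keypoint, keep only the closest pair."""
    best = {}
    for i, j, d in matches:                 # (template idx, live idx, distance)
        if j not in best or d < best[j][1]:
            best[j] = (i, d)
    return sorted((i, j) for j, (i, _) in best.items())
```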
3. Reject erroneous match pairs. The rejection algorithm is as follows: the coordinate of a key point in the template is $r_i = (x_i, y_i)$ and the template centroid is $c_1$; the distance between a key point and the centroid is the Euclidean distance. Likewise, the coordinate of a key point in the live image is $s_j = (x_j, y_j)$ with centroid $c_2$. For a matched pair $(r_i, s_i)$, define the ratio:

$$R_i = \frac{d(r_i, c_1)}{d(s_i, c_2)}$$
Sort the ratios of all matched pairs and reject the abruptly larger ones; the remaining matches then mostly concentrate within a certain range.
To remove the influence of isolated points, a second rejection pass is used: the first pass rejects generously enough to eliminate isolated points; a new centroid is then computed to replace the original one and the algorithm above is applied again, so that only correct matches remain.
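The two-pass centroid-ratio rejection can be sketched as below. The fraction of pairs kept per pass and the use of the median ratio as the reference are illustrative choices, since the patent only says that the abruptly larger ratios are rejected; the function name and parameters are assumptions.

```python
import numpy as np

def reject_by_centroid_ratio(tpts, spts, keep=0.8):
    """Two-pass rejection: compare each template point's distance to the
    template centroid with its match's distance to the live-image centroid;
    a correct match set keeps this ratio nearly constant, so the pairs whose
    ratio deviates most from the median are dropped. The second pass
    recomputes the centroids from the survivors and repeats."""
    tpts, spts = np.asarray(tpts, float), np.asarray(spts, float)
    idx = np.arange(len(tpts))
    for _ in range(2):                      # first and second rejection pass
        c1 = tpts[idx].mean(axis=0)
        c2 = spts[idx].mean(axis=0)
        r = (np.linalg.norm(tpts[idx] - c1, axis=1) /
             (np.linalg.norm(spts[idx] - c2, axis=1) + 1e-12))
        dev = np.abs(r - np.median(r))
        n_keep = max(3, int(keep * len(idx)))
        idx = idx[np.argsort(dev)[:n_keep]]  # drop the most deviant ratios
    return np.sort(idx)
```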
(3) Determination of the extent of the target area to be tracked: after the correct matches are obtained, randomly choose 3 pairs; let the template key points be $(x_1, y_1)$, $(x_2, y_2)$, $(x_3, y_3)$ and the corresponding match points in the live image be $(x_1', y_1')$, $(x_2', y_2')$, $(x_3', y_3')$, and substitute them into the following affine transformation model:

$$\begin{bmatrix} x_1 & y_1 & 0 & 0 & 1 & 0 \\ 0 & 0 & x_1 & y_1 & 0 & 1 \\ x_2 & y_2 & 0 & 0 & 1 & 0 \\ 0 & 0 & x_2 & y_2 & 0 & 1 \\ x_3 & y_3 & 0 & 0 & 1 & 0 \\ 0 & 0 & x_3 & y_3 & 0 & 1 \end{bmatrix} \begin{bmatrix} m_1 \\ m_2 \\ m_3 \\ m_4 \\ t_x \\ t_y \end{bmatrix} = \begin{bmatrix} x_1' \\ y_1' \\ x_2' \\ y_2' \\ x_3' \\ y_3' \end{bmatrix}$$
For convenience, this is written as $Ax = b$, which gives:

$$x = [A^T A]^{-1} A^T b$$
From this the affine transformation vector $x = [m_1, m_2, m_3, m_4, t_x, t_y]^T$ is obtained. Substituting the boundary of the given template image into the transformation model yields the boundary of the target's position in the live image, i.e. the extent of the local tracking area; adding a boundary redundancy of a few pixels determines the extent of the target area to be tracked in the next image.
Likewise, substituting the centroid of the template match points into the transformation model gives the template centroid in the live image.
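Solving $x = [A^T A]^{-1} A^T b$ from three point pairs, as in step (3), can be sketched as follows; the function names are illustrative.

```python
import numpy as np

def solve_affine(tpl, live):
    """Recover [m1, m2, m3, m4, tx, ty] from three point pairs by solving
    Ax = b in the least-squares form x = (A^T A)^{-1} A^T b."""
    rows, b = [], []
    for (x, y), (xp, yp) in zip(tpl, live):
        rows.append([x, y, 0, 0, 1, 0])       # maps to x'
        rows.append([0, 0, x, y, 0, 1])       # maps to y'
        b += [xp, yp]
    A = np.array(rows, float)
    b = np.array(b, float)
    return np.linalg.solve(A.T @ A, A.T @ b)  # [m1, m2, m3, m4, tx, ty]

def apply_affine(p, m):
    """Map a point (e.g. a template boundary corner or centroid) into the live image."""
    x, y = p
    return (m[0] * x + m[1] * y + m[4], m[2] * x + m[3] * y + m[5])
```

Applying `apply_affine` to the template's boundary corners gives the target's extent in the live image, as the step above describes.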
Next, the local area containing the target is determined adaptively by trajectory prediction. The prediction uses parabolic extrapolation (linear extrapolation can introduce a larger error); the centroid coordinate at time 4 is computed from the three preceding coordinates:

$$\hat{x}_4 = \frac{5x_3 - 4x_2 + x_1}{2}$$
The extent of the local tracking area is likewise obtained through the affine transformation model. Using this information, a small sub-image certain to contain the target is cropped from the next live frame, and SIFT features are extracted from it: the accurately located scale-space extreme points are found, dominant orientations assigned, and local descriptors generated. Precise matching against the key points of the template image then confirms whether the target lies inside the selected local area. If the target is confirmed, tracking continues within the local area; otherwise feature extraction and matching are performed on the full live image again.
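The extrapolation formula above, applied per coordinate to the last three centroid positions, can be sketched as:

```python
def predict_next(p1, p2, p3):
    """Predict the centroid at time 4 from the three previous positions using
    the patent's formula x4 = (5*x3 - 4*x2 + x1) / 2, per coordinate."""
    return tuple((5 * c3 - 4 * c2 + c1) / 2 for c1, c2, c3 in zip(p1, p2, p3))
```

For uniform linear motion the formula reproduces the next position exactly; for accelerating motion it damps the quadratic term compared with plain parabolic extrapolation.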
In summary, the above is only a preferred embodiment of the invention and is not intended to limit its scope of protection. Any modification, equivalent replacement or improvement made within the spirit and principles of the invention shall fall within its scope of protection.

Claims (6)

1. A method for automatically recognizing and tracking a specific man-made area in airborne remote sensing video data, tracking in an offline stage, characterized in that the method comprises the following steps:
(1) Scale-space extremum detection: for a given two-dimensional image, repeatedly apply Gaussian convolution, 2x down-sampling and image subtraction to build the scale space of the image, producing a Gaussian pyramid and a difference-of-Gaussians (DoG) pyramid; perform extremum detection on every layer of the difference pyramid except the first and the last, obtaining the position coordinates of the scale-space extreme points;
(2) Key-point localization: fit a three-dimensional quadratic function to the scale-space DoG function, set a threshold to reject unstable extreme points with low contrast, and use the second-order Hessian matrix to remove extreme points with strong edge responses, obtaining accurately positioned key-point coordinates;
(3) Orientation assignment: compute the gradient magnitude and angle of the pixels in each key point's neighbourhood, accumulate the magnitudes into a histogram over gradient angle, fit a quadratic to the histogram, and determine the dominant orientation and auxiliary orientations of the key point;
(4) Key-point description: rotate each key point's neighbourhood to its dominant orientation, partition the image region around the key point into blocks, compute a gradient histogram inside each block, and generate a unique 128-dimensional vector; clip the gradient magnitudes of this vector and normalize it, yielding a descriptor that characterizes the image region around the key point.
2. The method for automatically recognizing and tracking a specific man-made area in airborne remote sensing video data of claim 1, characterized in that, in the extremum detection of step (1) of the offline stage, each candidate point is compared with its 8 neighbours at the same scale and the 9 x 2 corresponding points at the scales above and below, 26 points in total, to obtain the position coordinates of the scale-space extreme points.
3. A method for automatically recognizing and tracking a specific man-made area in airborne remote sensing video data, tracking in an online stage, characterized in that the method comprises the following steps:
(1) SIFT feature extraction: binarize the first image of each octave of the original difference-of-Gaussians pyramid with an adaptive threshold, extract the edge region, dilate it by a fixed radius, perform extreme-point detection only inside this determined region, and finally generate the descriptors;
(2) Precise key-point matching, comprising:
A. Preliminary matching: measure the similarity between the 128-dimensional descriptors of the key points in the template and in the live image by computing their Euclidean distance;
B. Removal of many-to-one matches: sort the coordinates of the matched pairs according to the matching results in the live image and reject redundant matches, obtaining one-to-one matches;
C. Rejection of erroneous match pairs;
(3) Determination of the extent of the target area to be tracked: after the correct matches are obtained, randomly choose 3 pairs; let the template key points be $(x_1, y_1)$, $(x_2, y_2)$, $(x_3, y_3)$ and the corresponding match points in the live image be $(x_1', y_1')$, $(x_2', y_2')$, $(x_3', y_3')$, and substitute them into the following affine transformation model:

$$\begin{bmatrix} x_1 & y_1 & 0 & 0 & 1 & 0 \\ 0 & 0 & x_1 & y_1 & 0 & 1 \\ x_2 & y_2 & 0 & 0 & 1 & 0 \\ 0 & 0 & x_2 & y_2 & 0 & 1 \\ x_3 & y_3 & 0 & 0 & 1 & 0 \\ 0 & 0 & x_3 & y_3 & 0 & 1 \end{bmatrix} \begin{bmatrix} m_1 \\ m_2 \\ m_3 \\ m_4 \\ t_x \\ t_y \end{bmatrix} = \begin{bmatrix} x_1' \\ y_1' \\ x_2' \\ y_2' \\ x_3' \\ y_3' \end{bmatrix}$$

Writing this as $Ax = b$ gives $x = [A^T A]^{-1} A^T b$, i.e. the affine transformation vector $[m_1, m_2, m_3, m_4, t_x, t_y]^T$. Substituting the boundary of the given template image into the transformation model yields the boundary of the target's position in the live image, i.e. the extent of the local tracking area; adding a boundary redundancy of a few pixels determines the extent of the target area to be tracked in the next image. Likewise, substituting the centroid of the template match points into the transformation model gives the template centroid in the live image;
(4) Adaptive determination of the local area containing the target through trajectory prediction: predict the trajectory by parabolic extrapolation; using the extent of the obtained local tracking area, crop from the next live frame a small sub-image certain to contain the target; perform SIFT feature extraction on it, finding the accurately located scale-space extreme points, assigning dominant orientations and generating local descriptors; then precisely match against the key points of the template image to confirm whether the target lies inside the selected local area; if the target is confirmed, continue tracking within the local area, otherwise perform feature extraction and matching on the full live image again.
4. The method for automatically recognizing and tracking a specific man-made area in airborne remote sensing video data of claim 3, characterized in that the precise key-point matching uses exhaustive matching:
The descriptor of a key point in the template is denoted $R_i = (r_{i1}, r_{i2}, \ldots, r_{i128})$, and the descriptor of a key point in the live image is denoted $S_j = (s_{j1}, s_{j2}, \ldots, s_{j128})$.
The Euclidean distance between any two descriptors is:

$$d(R_i, S_j) = \sqrt{\sum_{k=1}^{128} (r_{ik} - s_{jk})^2}$$

The key-point descriptor $S_j$ in the live image that matches the template descriptor $R_i$ must satisfy:

$$\frac{d(R_i, S_j)}{d(R_i, S_k)} < thr$$

where $d(R_i, S_j)$ is the smallest of all Euclidean distances to $R_i$, $d(R_i, S_k)$ is the second-smallest, and $thr$ is set to 0.8.
5. The method for automatically recognizing and tracking a specific man-made area in airborne remote sensing video data of claim 3 or 4, characterized in that the erroneous match pairs are rejected as follows:
The coordinate of a key point in the template is $r_i = (x_i, y_i)$ and the template centroid is $c_1$; the distance between a key point and the centroid is the Euclidean distance. Likewise, the coordinate of a key point in the live image is $s_j = (x_j, y_j)$ with centroid $c_2$. For a matched pair $(r_i, s_j)$, define the ratio:

$$R_i = \frac{d(r_i, c_1)}{d(s_i, c_2)}$$

Sort the ratios of all matched pairs and reject the abruptly larger ones; the remaining matches then mostly concentrate within a certain range, which completes the first rejection pass.
6. The method for automatically recognizing and tracking a specific man-made area in airborne remote sensing video data of claim 5, characterized in that after the first rejection pass a second pass is performed as follows: a new centroid coordinate is computed to replace the original centroid, and the rejection is applied again.
CN201410001565.1A (priority date 2014-01-02, filing date 2014-01-02): Automatic identification and tracking method for airborne remote sensing video in specific man-made area. Status: Pending. Publication: CN103761523A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410001565.1A CN103761523A (en) 2014-01-02 2014-01-02 Automatic identification and tracking method for airborne remote sensing video in specific man-made area


Publications (1)

Publication Number Publication Date
CN103761523A true CN103761523A (en) 2014-04-30

Family

ID=50528759

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410001565.1A Pending CN103761523A (en) 2014-01-02 2014-01-02 Automatic identification and tracking method for airborne remote sensing video in specific man-made area

Country Status (1)

Country Link
CN (1) CN103761523A (en)


Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101009021A (en) * 2007-01-25 2007-08-01 复旦大学 Video stabilizing method based on matching and tracking of characteristic
CN101355692A (en) * 2008-07-30 2009-01-28 浙江大学 Intelligent monitoring apparatus for real time tracking motion target area
US20110216939A1 (en) * 2010-03-03 2011-09-08 Gwangju Institute Of Science And Technology Apparatus and method for tracking target
US20120294477A1 (en) * 2011-05-18 2012-11-22 Microsoft Corporation Searching for Images by Video
CN103077532A (en) * 2012-12-24 2013-05-01 天津市亚安科技股份有限公司 Real-time video object quick tracking method


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
管学伟 et al., "A target matching method based on the SIFT algorithm", Proceedings of the 14th National Conference on Image and Graphics *
赵峰, "Design and implementation of a scene matching system based on UAV imagery", China Master's Theses Full-text Database, Information Science and Technology *

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105095923A (en) * 2014-05-21 2015-11-25 华为技术有限公司 Image processing method and device
CN105095923B (en) * 2014-05-21 2018-09-11 华为技术有限公司 A kind of image processing method and device
CN104200486A (en) * 2014-07-11 2014-12-10 澳门极视角有限公司 Foreground identification method
CN104200486B (en) * 2014-07-11 2017-04-19 澳门极视角有限公司 Foreground identification method
CN104658011A (en) * 2015-01-31 2015-05-27 北京理工大学 Intelligent transportation moving object detection tracking method
CN104658011B (en) * 2015-01-31 2017-09-29 北京理工大学 A kind of intelligent transportation moving object detection tracking
CN106951889A (en) * 2017-05-23 2017-07-14 煤炭科学技术研究院有限公司 Underground high risk zone moving target monitoring and management system
CN107622239A (en) * 2017-09-15 2018-01-23 北方工业大学 Detection method for remote sensing image specified building area constrained by hierarchical local structure
CN107622239B (en) * 2017-09-15 2019-11-26 北方工业大学 Detection method for remote sensing image specified building area constrained by hierarchical local structure
CN107748877A (en) * 2017-11-10 2018-03-02 杭州晟元数据安全技术股份有限公司 A kind of Fingerprint recognition method based on minutiae point and textural characteristics
CN107748877B (en) * 2017-11-10 2020-06-16 杭州晟元数据安全技术股份有限公司 Fingerprint image identification method based on minutiae and textural features
CN113286163A (en) * 2021-05-21 2021-08-20 成都威爱新经济技术研究院有限公司 Timestamp error calibration method and system for virtual shooting live broadcast


Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20140430)