CN102853835A - Scale invariant feature transform-based unmanned aerial vehicle scene matching positioning method - Google Patents

Scale invariant feature transform-based unmanned aerial vehicle scene matching positioning method

Info

Publication number
CN102853835A
CN102853835A
Authority
CN
China
Prior art keywords
unmanned vehicle
delta
theta
matching
aerial vehicle
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN2012102890653A
Other languages
Chinese (zh)
Other versions
CN102853835B (en)
Inventor
韩军伟
吉祥
郭雷
梁楠
赵天云
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Northwestern Polytechnical University
Original Assignee
Northwestern Polytechnical University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Northwestern Polytechnical University filed Critical Northwestern Polytechnical University
Priority to CN201210289065.3A priority Critical patent/CN102853835B/en
Publication of CN102853835A publication Critical patent/CN102853835A/en
Application granted granted Critical
Publication of CN102853835B publication Critical patent/CN102853835B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Image Analysis (AREA)

Abstract

The invention relates to a scale invariant feature transform-based unmanned aerial vehicle scene matching positioning method, characterized by the following steps: 1, extracting the feature description vectors of the matching target image and of the front-lower view of the unmanned aerial vehicle with the scale invariant feature transform algorithm; 2, determining whether the front-lower view of the current frame matches the matching target image; and 3, if they match, recording the coordinates of the matching point in the matching target image and in the satellite map containing the matching target, calculating the current position coordinates of the unmanned aerial vehicle in the satellite map from the coordinates of the matching point, and positioning the unmanned aerial vehicle; if they do not match, reading the front-lower view of the next frame and matching again. The method accurately matches the front-lower view of the unmanned aerial vehicle with the matching target in the satellite map, determines the current position coordinates of the unmanned aerial vehicle from the constructed front-lower view model, and thereby positions the unmanned aerial vehicle.

Description

Scale invariant feature transform-based unmanned aerial vehicle scene matching positioning method
Technical field
The present invention relates to a scale invariant feature transform-based unmanned aerial vehicle scene matching positioning method, applied to the scene matching positioning of unmanned aerial vehicles.
Background art
With the rapid development of aviation technology, unmanned aerial vehicles and their related technologies have become a research focus in many countries. Unmanned aerial vehicles offer strong maneuverability, low weight, good aerodynamic performance, low cost, and good adaptability to the environment. At present, apart from the unmanned aerial vehicles already in service, European and American countries are developing new unmanned aerial vehicle technologies, and China is also engaged in research on related technologies.
Among the many technologies involved in unmanned aerial vehicles, positioning is a key one: it is mainly used to locate the vehicle accurately and to navigate it autonomously, and it is a guarantee that the vehicle can complete its mission. Existing unmanned aerial vehicle positioning relies mainly on GPS. However, the estimation accuracy of GPS depends on the number of satellites participating in the fix and on the quality of the signal receiving equipment, and the signal transmission is easily subject to radio interference, which enlarges the signal error. In addition, complex environments require the combined use of several positioning technologies, and unmanned aerial vehicle scene matching positioning was born out of these requirements. Scene matching positioning means matching the real-time images collected by the camera on the unmanned aerial vehicle against satellite images or images stored in advance on board, so as to obtain position information. This technique can be used when GPS fails, as an effective alternative positioning mode, in particular when GPS is under external control; developing a low-cost, high-precision, interference-resistant positioning technology that can even replace GPS is therefore extremely important.
Scholars at home and abroad have carried out research on several aspects of scene matching positioning. One class of methods matches multiple consecutive frames to reduce matching errors; it improves matching performance, but because several frames are matched, the matching time increases. Another class of methods matches a single frame; it takes little time but is prone to matching errors, in which case finding accurate image feature extraction and similarity measures becomes the key to single-frame matching. Studying a scene matching method that is both accurate and fast is therefore of great significance to unmanned aerial vehicle scene matching positioning.
Summary of the invention
Technical problem to be solved
To overcome the deficiencies of the prior art, the present invention proposes a scale invariant feature transform-based unmanned aerial vehicle scene matching positioning method that can replace or assist a GPS system in positioning an unmanned aerial vehicle, reducing the vehicle's dependence on GPS navigation.
Technical solution
A scale invariant feature transform-based unmanned aerial vehicle scene matching positioning method, characterized in that its steps are as follows:
Step 1, extract the feature description vectors F_i of the matching target image: use the scale invariant feature transform method to extract the feature description vectors F_i, i = 1, 2, ..., m, of the matching target image I_1; where F_i denotes the i-th feature description vector of image I_1, and m is the number of feature description vectors in I_1, m ∈ (0, 10000);
Step 2, extract the feature description vectors F_j of the front-lower view of the unmanned aerial vehicle: use the scale invariant feature transform method to extract the feature description vectors F_j, j = 1, 2, ..., n, of the front-lower view I_2 captured by the unmanned aerial vehicle; where F_j denotes the j-th feature description vector of image I_2, and n is the number of feature description vectors in I_2, n ∈ (0, 10000);
Step 3, search for matching feature points: among the F_i, find the feature point nearest to F_j in Euclidean distance; if the distance between the two is less than a threshold Q, that point is a match for F_j; when the number of matched feature points reaches a threshold W, the front-lower view of this frame and the matching target image match successfully; then record the coordinates of the matching feature point, (fx, fy) in the matching target image and (rex, rey) in the satellite image containing the matching target image; if the matching fails, repeat step 2;
Step 4, calculate the current position of the unmanned aerial vehicle: use the position coordinates (fx, fy) on the matching target image and (rex, rey) on the satellite image to calculate the current position coordinates (lxresult, lyresult) of the unmanned aerial vehicle; the specific steps are as follows:
Step a: calculate the difference Δx between the ordinate of the match point coordinate (fx, fy) and the ordinate of the unmanned aerial vehicle position using

$$\Delta x = \begin{cases} h \tan(c+b+e), & Lw - fy > 0 \\ h \tan(c+b-e), & Lw - fy < 0 \\ h \tan(c+b), & Lw - fy = 0 \end{cases}$$

where h is the altitude of the unmanned aerial vehicle; c = π/2 − a − 2b; a is the depression angle of the unmanned aerial vehicle; b is half of the field-of-view angle of the unmanned aerial vehicle; e = arctan(|Lw − fy| / Lm), with | · | denoting the absolute value; Lw = [h/tan(a) − h·tan(c)] · sin(a) / sin(π/2 + b); Lm = Lw / 2 / tan(a);
Step b: calculate the difference Δy between the abscissa of the match point coordinate (fx, fy) and the abscissa of the unmanned aerial vehicle position using Δy = |fx − LL| × Lc / Lcw, where LL is half of the width of the matching target image, LL = L/2;

$$Lc = \begin{cases} h / \cos(b+c+e), & Lw - fy > 0 \\ h / \cos(b+c-e), & Lw - fy < 0 \\ h / \cos(b+c), & Lw - fy = 0 \end{cases}$$

and Lcw = Lm / cos(e);
Step c: calculate the current position coordinates (lxresult, lyresult) of the unmanned aerial vehicle with the following formulas:

$$lxresult = \begin{cases} \Delta x \sin\theta + \Delta y \cos\theta + rex, & fx - LL < 0 \\ \Delta x \sin\theta - \Delta y \cos\theta + rex, & fx - LL > 0 \\ \Delta x \sin\theta + rex, & fx - LL = 0 \end{cases}$$

$$lyresult = \begin{cases} \Delta x \cos\theta - \Delta y \sin\theta + rex, & fx - LL < 0 \\ \Delta x \cos\theta + \Delta y \sin\theta + rex, & fx - LL > 0 \\ \Delta x \cos\theta + rex, & fx - LL = 0 \end{cases}$$

where lxresult is the abscissa of the unmanned aerial vehicle on the satellite map; lyresult is the ordinate of the unmanned aerial vehicle on the satellite map; and θ is the angle between the heading of the unmanned aerial vehicle and due north.
The threshold Q ∈ (0, 1).
The threshold W ∈ (1, 10000).
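For illustration only, the following Python sketch implements the Δx, Δy and (lxresult, lyresult) formulas of steps a–c. The function name and parameter layout are hypothetical; angles are assumed to be in radians, and the expressions are written exactly as they appear above (including the use of rex in both coordinate formulas).

```python
import math

def uav_position_from_match(fx, fy, rex, rey, h, a, b, theta, L):
    """Hypothetical sketch of steps a-c: compute the UAV position on the
    satellite map from a matched point. h is the UAV altitude, a the
    depression angle, b half of the field-of-view angle, theta the angle
    between the heading and due north (all in radians), L the width of
    the matching target image."""
    LL = L / 2.0                                   # half of the image width
    c = math.pi / 2 - a - 2 * b
    Lw = (h / math.tan(a) - h * math.tan(c)) * math.sin(a) / math.sin(math.pi / 2 + b)
    Lm = Lw / 2 / math.tan(a)
    e = math.atan(abs(Lw - fy) / Lm)

    # Step a: ordinate offset dx, and the auxiliary length Lc used in step b
    if Lw - fy > 0:
        dx, Lc = h * math.tan(c + b + e), h / math.cos(b + c + e)
    elif Lw - fy < 0:
        dx, Lc = h * math.tan(c + b - e), h / math.cos(b + c - e)
    else:
        dx, Lc = h * math.tan(c + b), h / math.cos(b + c)

    # Step b: abscissa offset dy
    Lcw = Lm / math.cos(e)
    dy = abs(fx - LL) * Lc / Lcw

    # Step c: rotate the offsets by the heading angle and add the match
    # point's satellite-map coordinate (rex in both formulas, as written
    # above; rey is kept in the signature only for completeness)
    if fx - LL < 0:
        lxresult = dx * math.sin(theta) + dy * math.cos(theta) + rex
        lyresult = dx * math.cos(theta) - dy * math.sin(theta) + rex
    elif fx - LL > 0:
        lxresult = dx * math.sin(theta) - dy * math.cos(theta) + rex
        lyresult = dx * math.cos(theta) + dy * math.sin(theta) + rex
    else:
        lxresult = dx * math.sin(theta) + rex
        lyresult = dx * math.cos(theta) + rex
    return lxresult, lyresult
```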
Beneficial effects
The scale invariant feature transform-based unmanned aerial vehicle scene matching positioning method proposed by the invention extracts the feature description vectors of the front-lower view of the unmanned aerial vehicle with the scale invariant feature transform algorithm and matches them against the feature description vectors of the matching target in the satellite image; from the matching result, the position coordinates of the unmanned aerial vehicle are calculated in the front-lower view model and compared with the pre-set flight coordinates of the unmanned aerial vehicle to judge whether the flight path is correct.
The proposed method positions the unmanned aerial vehicle using the matching target in the satellite map and the front-lower view of the unmanned aerial vehicle; it can serve as a substitute for and an aid to GPS, with high positioning accuracy and high speed.
Description of drawings
Fig. 1: flow chart of the method of the invention;
Fig. 2: the matching target image;
Fig. 3: the front-lower view model of the unmanned aerial vehicle;
Fig. 4: top view of the unmanned aerial vehicle;
Fig. 5: side view of the unmanned aerial vehicle;
Fig. 6: interface of the unmanned aerial vehicle matching and positioning system.
Embodiment
The invention is further described below with reference to the embodiments and the accompanying drawings:
The hardware environment used for implementation is an AMD Athlon 64 X2 5000+ computer with 2 GB of memory and a 256 MB graphics card; the software environment is Visual Studio 2008 running on Windows 7. The positioning system proposed by the invention was implemented with Visual Studio 2008.
The flow of the invention is shown in Fig. 1, and the implementation is as follows:
1. Extract the feature description vectors F_i of the matching target:
The feature description vectors of the matching target image are extracted with the scale invariant feature transform method; the matching target image is shown in Fig. 2. The specific steps are as follows:
First, Gaussian smoothing with σ_n = 0.5 is applied to the matching target image I_1 to obtain a smoothed image. The smoothed image is then convolved with Gaussian kernels of different scales σ = σ_0 · 2^(o + s/S) to form an image pyramid GSS_σ, where s = 0, ..., S−1, o = 0, ..., O−1, S = 3, O = min(log_2 row, log_2 col), σ_0 = 1.5, row is the number of pixels in the vertical direction of the image, and col is the number of pixels in the horizontal direction. Differences of adjacent levels of GSS_σ are then taken to obtain DOG_σ. Each pixel of DOG_σ is compared with the corresponding pixel and its eight neighbours at the scale above, with its eight neighbours at the current scale, and with the corresponding pixel and its eight neighbours at the scale below; if the pixel is a local minimum or maximum, it is an image salient point, and the region of radius σ around it is its salient region. This yields the coordinates X of a series of image salient points, with the corresponding σ as the corresponding scale λ. For each image salient point, the gradient image is convolved with a Gaussian kernel of σ_g = 1.5σ, and within the salient region of the salient point X the orientation histogram of this smoothed gradient image is computed, the value of each histogram bin being the accumulated gradient magnitude over that orientation range; the number of bins is L = 36. The orientation ranges whose amplitude exceeds 80% of the maximum are chosen from the histogram as the principal orientation γ of the feature region; if there are several such ranges, the feature region has several principal orientations γ. Finally, the salient region of the salient point X is divided into 16 sub-regions along the principal orientation and its perpendicular; in each sub-region an orientation histogram with L = 8 bins is computed, the value of each bin being the accumulated gradient magnitude over that orientation range, and the amplitudes of the histograms are quantized to [0, 255], giving a 128-dimensional description vector F_i, i = 1, 2, ..., m. Here F_i is the i-th feature description vector of image I_1, and m is the number of feature description vectors in I_1, m ∈ (0, 10000).
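As an illustrative stand-in only, the following Python sketch uses OpenCV's built-in SIFT to extract 128-dimensional descriptors from the matching target image. OpenCV's parameter defaults differ from the values given above (σ_0 = 1.5, σ_n = 0.5, O = min(log_2 row, log_2 col)), so this is not the patented implementation; the file name is hypothetical.

```python
import cv2

# Read the matching target image I1 as grayscale (file name is hypothetical)
I1 = cv2.imread("matching_target.png", cv2.IMREAD_GRAYSCALE)

# OpenCV's SIFT follows the same overall pipeline described above: a Gaussian
# pyramid, differences of Gaussians, extrema detection, a 36-bin orientation
# histogram for the principal orientation, and a 4x4 grid of 8-bin histograms
# quantized to [0, 255], i.e. a 128-dimensional description vector.
sift = cv2.SIFT_create(nOctaveLayers=3)            # S = 3 scales per octave
keypoints, F = sift.detectAndCompute(I1, None)     # row F[i] corresponds to F_i
print("m =", len(keypoints), "descriptors of dimension", F.shape[1])
```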
2. Extract the feature description vectors F_j of the front-lower view of the unmanned aerial vehicle:
As with the matching target image in step 1, after the camera of the unmanned aerial vehicle captures the front-lower view, feature description vectors are extracted from this image so that they can be matched against the feature description vectors of the matching target image. The specific steps are as follows:
Gaussian smoothing with σ_n = 0.5 is applied to the front-lower view I_2 captured by the unmanned aerial vehicle to obtain a smoothed image. The smoothed image is then convolved with Gaussian kernels of different scales σ = σ_0 · 2^(o + s/S) to form an image pyramid GSS_σ, where s = 0, ..., S−1, o = 0, ..., O−1, S = 3, O = min(log_2 row, log_2 col), σ_0 = 1.5, row is the number of pixels in the vertical direction of the image, and col is the number of pixels in the horizontal direction. Differences of adjacent levels of GSS_σ are taken to obtain DOG_σ. Each pixel of DOG_σ is compared with the corresponding pixel and its eight neighbours at the scale above, with its eight neighbours at the current scale, and with the corresponding pixel and its eight neighbours at the scale below; if the pixel is a local minimum or maximum, it is an image salient point, and the region of radius σ around it is its salient region. This yields the coordinates X of a series of image salient points, with the corresponding σ as the corresponding scale λ. For each image salient point X, the gradient image is convolved with a Gaussian kernel of σ_g = 1.5σ, and within the salient region of the salient point X the orientation histogram of this smoothed gradient image is computed, the value of each histogram bin being the accumulated gradient magnitude over that orientation range; the number of bins is L = 36. The orientation ranges whose amplitude exceeds 80% of the maximum are chosen from the histogram as the principal orientation γ of the feature region; if there are several such ranges, the feature region has several principal orientations γ. Finally, the salient region of the salient point X is divided into 16 sub-regions along the principal orientation and its perpendicular; in each sub-region an orientation histogram with L = 8 bins is computed, the value of each bin being the accumulated gradient magnitude over that orientation range, and the amplitudes of the histograms are quantized to [0, 255], giving a 128-dimensional description vector F_j, j = 1, 2, ..., n. Here F_j is the j-th feature description vector of image I_2, and n is the number of feature description vectors in I_2, n ∈ (0, 10000).
3. Search for matching feature points:
Among the F_i, the feature point nearest to F_j in Euclidean distance is found; if the distance between the two is less than the threshold Q, Q ∈ (0, 1), the point is a match for F_j. When the number of matched feature points reaches the threshold W, W ∈ (1, 10000), the front-lower view of this frame and the matching target image match successfully, and the coordinates of the matching feature point are recorded: (fx, fy) in the matching target image and (rex, rey) in the satellite image containing the matching target image. If the matching fails, step 2 is repeated.
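A minimal sketch of this nearest-neighbour search, assuming the descriptors of the matching target image (F1, shape m×128) and of the front-lower view (F2, shape n×128) are NumPy arrays scaled so that a Euclidean-distance threshold Q ∈ (0, 1) is meaningful; the helper name and the default values of Q and W are illustrative only.

```python
import numpy as np

def match_feature_points(F1, F2, Q=0.6, W=10):
    """Hypothetical helper for step 3: for each descriptor F_j of the
    front-lower view, find the nearest descriptor F_i of the matching
    target by Euclidean distance; pairs closer than Q count as matches.
    Returns the matched index pairs and whether at least W were found."""
    matches = []
    for j, fj in enumerate(F2):
        d = np.linalg.norm(F1 - fj, axis=1)    # distances to every F_i
        i = int(np.argmin(d))                  # nearest feature point in F1
        if d[i] < Q:
            matches.append((i, j))
    return matches, len(matches) >= W
```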
4. Calculate the current position coordinates of the unmanned aerial vehicle:
The front-lower view model of the unmanned aerial vehicle is built from the field-of-view angle b of the camera, the field-of-view width L, the depression angle a, and the altitude h of the unmanned aerial vehicle, as shown in Fig. 3. With this model, the position coordinates (fx, fy) of the match point on the matching target image and (rex, rey) on the satellite map are used to calculate the current position coordinates (lxresult, lyresult) of the unmanned aerial vehicle.
The formulas for lxresult and lyresult are as follows:

$$lxresult = \begin{cases} \Delta x \sin\theta + \Delta y \cos\theta + rex, & fx - LL < 0 \\ \Delta x \sin\theta - \Delta y \cos\theta + rex, & fx - LL > 0 \\ \Delta x \sin\theta + rex, & fx - LL = 0 \end{cases}$$

$$lyresult = \begin{cases} \Delta x \cos\theta - \Delta y \sin\theta + rex, & fx - LL < 0 \\ \Delta x \cos\theta + \Delta y \sin\theta + rex, & fx - LL > 0 \\ \Delta x \cos\theta + rex, & fx - LL = 0 \end{cases}$$

where lxresult is the abscissa of the unmanned aerial vehicle on the satellite map; lyresult is the ordinate of the unmanned aerial vehicle on the satellite map; Δy is the difference between the abscissa of the match point and that of the unmanned aerial vehicle position, as shown in Fig. 4; Δx is the difference between the ordinate of the match point and that of the unmanned aerial vehicle position, as shown in Fig. 4; θ is the angle between the heading of the unmanned aerial vehicle and due north, as shown in Fig. 4; rex is the abscissa of the match point in the satellite map; rey is the ordinate of the match point in the satellite map; fx is the abscissa of the match point in the matching target image, as shown in Fig. 2; fy is the ordinate of the match point in the matching target image, as shown in Fig. 2; and LL is half of the width of the matching target image, LL = L/2, as shown in Fig. 2.
Δx is calculated with

$$\Delta x = \begin{cases} h \tan(c+b+e), & Lw - fy > 0 \\ h \tan(c+b-e), & Lw - fy < 0 \\ h \tan(c+b), & Lw - fy = 0 \end{cases}$$

where h is the altitude of the unmanned aerial vehicle; c = π/2 − a − 2b; a is the depression angle, as shown in Fig. 5; b is half of the field-of-view angle, as shown in Fig. 5; e = arctan(|Lw − fy| / Lm), as shown in Fig. 5; Lw = [h/tan(a) − h·tan(c)] · sin(a) / sin(π/2 + b); and Lm = Lw / 2 / tan(a), as shown in Fig. 5.
Δy is calculated with Δy = |fx − LL| × Lc / Lcw, where

$$Lc = \begin{cases} h / \cos(b+c+e), & Lw - fy > 0 \\ h / \cos(b+c-e), & Lw - fy < 0 \\ h / \cos(b+c), & Lw - fy = 0 \end{cases}$$

and Lcw = Lm / cos(e).
The Euclidean distance between this position coordinate and the pre-planned flight coordinate is calculated to judge whether the flight path is correct, and the current position and track of the unmanned aerial vehicle are displayed in the matching system for intuitive judgment, as shown in Fig. 6.
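The flight-path check just described amounts to a Euclidean distance between the computed position and the pre-planned coordinate; a one-function sketch follows (the tolerance parameter is an assumption, not a value given in the patent).

```python
import math

def on_planned_path(lxresult, lyresult, plan_x, plan_y, tolerance):
    """Return True if the computed satellite-map position is within
    `tolerance` (in pixels) of the pre-planned flight coordinate."""
    return math.hypot(lxresult - plan_x, lyresult - plan_y) <= tolerance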
With the unmanned aerial vehicle scene matching positioning system built here, the front-lower view of the unmanned aerial vehicle can be matched accurately against the matching target in the satellite map, and the current position and flight path of the unmanned aerial vehicle can be judged from the constructed front-lower view model. The results show that the system can accurately determine the position coordinates of the unmanned aerial vehicle in the satellite map, with a deviation from the preset track of no more than 3 pixels.

Claims (3)

1. A scale invariant feature transform-based unmanned aerial vehicle scene matching positioning method, characterized in that its steps are as follows:
Step 1, extract the feature description vectors F_i of the matching target image: use the scale invariant feature transform method to extract the feature description vectors F_i, i = 1, 2, ..., m, of the matching target image I_1; where F_i denotes the i-th feature description vector of image I_1, and m is the number of feature description vectors in I_1, m ∈ (0, 10000);
Step 2, extract the feature description vectors F_j of the front-lower view of the unmanned aerial vehicle: use the scale invariant feature transform method to extract the feature description vectors F_j, j = 1, 2, ..., n, of the front-lower view I_2 captured by the unmanned aerial vehicle; where F_j denotes the j-th feature description vector of image I_2, and n is the number of feature description vectors in I_2, n ∈ (0, 10000);
Step 3, search for matching feature points: among the F_i, find the feature point nearest to F_j in Euclidean distance; if the distance between the two is less than a threshold Q, that point is a match for F_j; when the number of matched feature points reaches a threshold W, the front-lower view of this frame and the matching target image match successfully; then record the coordinates of the matching feature point, (fx, fy) in the matching target image and (rex, rey) in the satellite image containing the matching target image; if the matching fails, repeat step 2;
Step 4, calculate the current position of the unmanned aerial vehicle: use the position coordinates (fx, fy) on the matching target image and (rex, rey) on the satellite image to calculate the current position coordinates (lxresult, lyresult) of the unmanned aerial vehicle; the specific steps are as follows:
Step a: calculate the difference Δx between the ordinate of the match point coordinate (fx, fy) and the ordinate of the unmanned aerial vehicle position using

$$\Delta x = \begin{cases} h \tan(c+b+e), & Lw - fy > 0 \\ h \tan(c+b-e), & Lw - fy < 0 \\ h \tan(c+b), & Lw - fy = 0 \end{cases}$$

where h is the altitude of the unmanned aerial vehicle; c = π/2 − a − 2b; a is the depression angle of the unmanned aerial vehicle; b is half of the field-of-view angle of the unmanned aerial vehicle; e = arctan(|Lw − fy| / Lm), with | · | denoting the absolute value; Lw = [h/tan(a) − h·tan(c)] · sin(a) / sin(π/2 + b); Lm = Lw / 2 / tan(a);
Step b: calculate the difference Δy between the abscissa of the match point coordinate (fx, fy) and the abscissa of the unmanned aerial vehicle position using Δy = |fx − LL| × Lc / Lcw, where LL is half of the width of the matching target image, LL = L/2;

$$Lc = \begin{cases} h / \cos(b+c+e), & Lw - fy > 0 \\ h / \cos(b+c-e), & Lw - fy < 0 \\ h / \cos(b+c), & Lw - fy = 0 \end{cases}$$

and Lcw = Lm / cos(e);
Step c: calculate the current position coordinates (lxresult, lyresult) of the unmanned aerial vehicle with the following formulas:

$$lxresult = \begin{cases} \Delta x \sin\theta + \Delta y \cos\theta + rex, & fx - LL < 0 \\ \Delta x \sin\theta - \Delta y \cos\theta + rex, & fx - LL > 0 \\ \Delta x \sin\theta + rex, & fx - LL = 0 \end{cases}$$

$$lyresult = \begin{cases} \Delta x \cos\theta - \Delta y \sin\theta + rex, & fx - LL < 0 \\ \Delta x \cos\theta + \Delta y \sin\theta + rex, & fx - LL > 0 \\ \Delta x \cos\theta + rex, & fx - LL = 0 \end{cases}$$

where lxresult is the abscissa of the unmanned aerial vehicle on the satellite map; lyresult is the ordinate of the unmanned aerial vehicle on the satellite map; and θ is the angle between the heading of the unmanned aerial vehicle and due north.
2. The scale invariant feature transform-based unmanned aerial vehicle scene matching positioning method according to claim 1, characterized in that: the threshold Q ∈ (0, 1).
3. The scale invariant feature transform-based unmanned aerial vehicle scene matching positioning method according to claim 1, characterized in that: the threshold W ∈ (1, 10000).
CN201210289065.3A 2012-08-15 2012-08-15 Scale invariant feature transform-based unmanned aerial vehicle scene matching positioning method Active CN102853835B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201210289065.3A CN102853835B (en) 2012-08-15 2012-08-15 Scale invariant feature transform-based unmanned aerial vehicle scene matching positioning method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201210289065.3A CN102853835B (en) 2012-08-15 2012-08-15 Scale invariant feature transform-based unmanned aerial vehicle scene matching positioning method

Publications (2)

Publication Number Publication Date
CN102853835A true CN102853835A (en) 2013-01-02
CN102853835B CN102853835B (en) 2014-12-31

Family

ID=47400636

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201210289065.3A Active CN102853835B (en) 2012-08-15 2012-08-15 Scale invariant feature transform-based unmanned aerial vehicle scene matching positioning method

Country Status (1)

Country Link
CN (1) CN102853835B (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2004333243A (en) * 2003-05-06 2004-11-25 Pasuko:Kk Method for matching image
US20070127101A1 (en) * 2004-04-02 2007-06-07 Oldroyd Lawrence A Method for automatic stereo measurement of a point of interest in a scene
CN101598556A (en) * 2009-07-15 2009-12-09 北京航空航天大学 Unmanned plane vision/inertia integrated navigation method under a kind of circumstances not known
CN101629827A (en) * 2009-08-14 2010-01-20 华中科技大学 Front view terminal guidance navigation positioning method of aircraft
CN101839722A (en) * 2010-05-06 2010-09-22 南京航空航天大学 Method for automatically recognizing target at medium and low altitudes and positioning carrier with high accuracy
CN102426019A (en) * 2011-08-25 2012-04-25 航天恒星科技有限公司 Unmanned aerial vehicle scene matching auxiliary navigation method and system

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103822635A (en) * 2014-03-05 2014-05-28 北京航空航天大学 Visual information based real-time calculation method of spatial position of flying unmanned aircraft
CN105046686A (en) * 2015-06-19 2015-11-11 奇瑞汽车股份有限公司 Positioning method and apparatus
WO2018027451A1 (en) * 2016-08-08 2018-02-15 深圳市道通智能航空技术有限公司 Flight positioning method and device
CN107885231A (en) * 2016-09-30 2018-04-06 成都紫瑞青云航空宇航技术有限公司 A kind of unmanned plane capturing method and system based on visible images identification
CN108073184A (en) * 2017-11-27 2018-05-25 天脉聚源(北京)传媒科技有限公司 UAV Flight Control method and device
CN108073184B (en) * 2017-11-27 2024-02-20 北京拉近众博科技有限公司 Unmanned aerial vehicle flight control method and device
CN111902851A (en) * 2018-03-15 2020-11-06 日本音响工程株式会社 Learning data generation method, learning data generation device, and learning data generation program
CN111902851B (en) * 2018-03-15 2023-01-17 日本音响工程株式会社 Learning data generation method, learning data generation device, and learning data generation program
CN109782012A (en) * 2018-12-29 2019-05-21 中国电子科技集团公司第二十研究所 A kind of speed-measuring method based on photoelectric image feature association
CN112461204A (en) * 2019-08-19 2021-03-09 中国科学院长春光学精密机械与物理研究所 Method for satellite to dynamic flying target multi-view imaging combined calculation of navigation height
CN112461204B (en) * 2019-08-19 2022-08-16 中国科学院长春光学精密机械与物理研究所 Method for satellite to dynamic flying target multi-view imaging combined calculation of navigation height

Also Published As

Publication number Publication date
CN102853835B (en) 2014-12-31

Similar Documents

Publication Publication Date Title
CN102853835B (en) Scale invariant feature transform-based unmanned aerial vehicle scene matching positioning method
US10867189B2 (en) Systems and methods for lane-marker detection
CN103093459B (en) Utilize the method that airborne LiDAR point cloud data assisted image mates
CN107677274B (en) Unmanned plane independent landing navigation information real-time resolving method based on binocular vision
CN103593832A (en) Method for image mosaic based on feature detection operator of second order difference of Gaussian
CN105469405A (en) Visual ranging-based simultaneous localization and map construction method
CN104700414A (en) Rapid distance-measuring method for pedestrian on road ahead on the basis of on-board binocular camera
CN102426019A (en) Unmanned aerial vehicle scene matching auxiliary navigation method and system
CN103020945A (en) Remote sensing image registration method of multi-source sensor
CN103822616A (en) Remote-sensing image matching method with combination of characteristic segmentation with topographic inequality constraint
CN101957203B (en) High-accuracy star tracking method of star sensor
CN103994765B (en) Positioning method of inertial sensor
CN103065135A (en) License number matching algorithm based on digital image processing
CN101839722A (en) Method for automatically recognizing target at medium and low altitudes and positioning carrier with high accuracy
CN109614859B (en) Visual positioning feature extraction and matching method and device
CN102865859A (en) Aviation sequence image position estimating method based on SURF (Speeded Up Robust Features)
CN107798691B (en) A kind of unmanned plane independent landing terrestrial reference real-time detection tracking of view-based access control model
CN112833892B (en) Semantic mapping method based on track alignment
CN105354841A (en) Fast matching method and system for remote sensing images
Qu et al. Evaluation of SIFT and SURF for vision based localization
US20220164603A1 (en) Data processing method, data processing apparatus, electronic device and storage medium
CN109376208A (en) A kind of localization method based on intelligent terminal, system, storage medium and equipment
Zhang et al. A LiDAR-intensity SLAM and loop closure detection method using an intensity cylindrical-projection shape context descriptor
Wang et al. High accuracy and low complexity LiDAR place recognition using unitary invariant frobenius norm
CN113295171A (en) Monocular vision-based attitude estimation method for rotating rigid body spacecraft

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant