CN109871024A - A kind of UAV position and orientation estimation method based on lightweight visual odometry - Google Patents

A kind of UAV position and orientation estimation method based on lightweight visual odometry

Info

Publication number
CN109871024A
CN109871024A (application CN201910008794.9A)
Authority
CN
China
Prior art keywords
feature point
unmanned aerial vehicle
point
binarization
feature
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910008794.9A
Other languages
Chinese (zh)
Inventor
郑恩辉
王谈谈
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China Jiliang University
Original Assignee
China Jiliang University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China Jiliang University
Priority to CN201910008794.9A
Publication of CN109871024A
Legal status: Pending

Landscapes

  • Image Analysis (AREA)

Abstract

The invention discloses a UAV pose estimation method based on a lightweight visual odometer. A depth camera mounted on the UAV collects information about the environment in the UAV's forward field of view, yielding two-dimensional image information and three-dimensional coordinates of the spatial points in that environment. An improved SIFT feature matching algorithm processes the images to obtain preliminarily matched point pairs; a random sample consensus (RANSAC) algorithm then rejects the mismatched points among the candidate feature points, leaving accurately matched point pairs. The UAV motion parameters are solved from these accurately matched pairs, from which the change in UAV position and the attitude angles are obtained. The method reduces the equipment the UAV must carry, and its cost is comparatively low. At the same time, the visual signal is stable, no error accumulates, and the accuracy and robustness of autonomous UAV pose estimation are improved.

Description

A kind of UAV position and orientation estimation method based on lightweight visual odometry
Technical field
The present invention relates to a UAV pose estimation method, and in particular to a UAV pose estimation method based on a lightweight visual odometer.
Background technique
In recent years, UAVs have been widely used in fields such as target tracking and positioning, aerial survey and exploration, and fire-fighting rescue, owing to advantages such as simple operation, structural stability, unmanned operation, and maneuverability. In applications such as target tracking and positioning, autonomous landing control, and UAV pose control, determining the UAV's pose is an indispensable link and has always been a technical problem that cannot be avoided.
At present, in traditional UAV navigation the vehicle acquires its own pose information through GPS and an IMU. However, GPS signals are easily lost in many environments, for example indoors, where the UAV then cannot determine its own position. Meanwhile, the IMU obtains pose information by integrating the attitude changes measured by its gyroscope and accelerometer, which in practical applications accumulates errors and degrades the precision of the pose estimate.
Summary of the invention
To overcome the shortcomings and deficiencies of the prior art, the present invention provides a UAV pose estimation method based on a lightweight visual odometer.
As shown in Figure 1, the object of the invention is achieved by the technical solution of the following steps:
1) A depth camera mounted on the UAV collects information about the environment in the UAV's forward field of view, yielding two-dimensional image information and three-dimensional coordinates of the spatial points in that environment;
2) The two-dimensional image information from step 1) is processed with an improved SIFT feature matching algorithm to obtain preliminarily matched point pairs;
3) From the preliminarily matched feature point pairs of step 2), a random sample consensus (RANSAC) algorithm rejects the mismatched points among the candidate feature points, yielding accurately matched point pairs;
4) The UAV motion parameters are solved from the accurately matched point pairs, from which the change in UAV position and the attitude angles are obtained.
Step 2) specifically comprises: first, feature point detection; second, computing a binarized feature point descriptor for each detected feature point; then coarsely matching the feature points of two adjacent images using the binarized descriptors to obtain the preliminarily matched point pairs.
The improved SIFT feature matching algorithm of step 2) specifically includes the following steps:
2.1) Feature point detection: a two-dimensional image space is built from the input two-dimensional image; the image is then down-sampled and convolved with a Gaussian kernel, producing sampled images of different sizes that form the Gaussian scale space. Taking the difference of every two adjacent layers of the Gaussian scale space yields the DoG (difference-of-Gaussians) scale images, which form the DoG scale space; extreme points (blobs) are detected in the DoG scale space and taken as feature points. When testing whether a pixel of the DoG scale space is an extreme point, the pixel is compared against 26 pixels in total: its 8 neighbours in the DoG image of the same scale and the 9 pixels each in the DoG images of the adjacent scales above and below. This ensures that extreme points are detected in both scale space and the two-dimensional image space;
2.2) A descriptor is computed for each detected feature point, giving its gradient feature vector, which is then binarized to obtain the binarized gradient feature vector according to:

b_i = 1 if f_i ≥ a, and b_i = 0 if f_i < a, for i = 1, 2, …, 128

where a is the binarization threshold, f = [f_1, f_2, …, f_128] is the gradient feature vector of the feature point, f_i is the i-th gradient component of that vector, and b_i is the i-th gradient component after binarization;
2.3) After each image has been processed through steps 2.1)–2.2) to obtain its feature points and binarized descriptors, the feature points of two adjacent frames are coarsely matched to obtain candidate feature points:

For two adjacent frames, the Euclidean distance between the binarized gradient feature vectors of two feature points serves as the decision metric for feature point similarity. The pair of feature points with the smallest Euclidean distance between their binarized gradient feature vectors and the pair with the second-smallest distance are taken; if the smallest Euclidean distance divided by the second-smallest Euclidean distance is less than a preset ratio threshold, the closest pair is judged similar and kept as a matched point pair, and the second-closest pair is discarded;
2.4) The matching process of step 2.3) is repeated until all matched point pairs satisfying the condition in the two frames have been obtained.
Processing with the above improved SIFT feature matching algorithm greatly reduces the computation spent on the data representation while guaranteeing the accuracy of the obtained matches and shortening the matching computation time.
In step 4), the UAV motion parameters are solved as follows:
4.1) The following UAV motion model is first established:

P_j^q = R·P_j^p + T

Then the residual sum-of-squares objective is constructed and minimized by least squares to obtain the rotation matrix R and the translation vector T:

min_{R,T} Σ_j ‖ P_j^q − (R·P_j^p + T) ‖²

where P_j^p and P_j^q are the three-dimensional coordinates (x, y, z) of the two feature points of the j-th matched pair in two adjacent frames of the image sequence, obtained by combining the accurately matched feature points from step 3) with the three-dimensional coordinate information from step 1); the superscript p denotes the earlier frame, the superscript q the later frame, j is the index of an accurately matched point pair, and ‖·‖ denotes the vector norm;
4.2) The change in UAV position and the attitude angles are solved with the following formulas, writing r_mn for the element in row m, column n of the rotation matrix R and decomposing R in the ZYX (yaw-pitch-roll) order:

φ = atan2(r_32, r_33)
θ = atan2(−r_31, √(r_32² + r_33²))
ψ = atan2(r_21, r_11)

where φ, θ, ψ are the roll, pitch, and yaw angles respectively, r_11–r_33 are the elements of the rotation matrix R, and the translation vector T is the change of the UAV's spatial position.
The depth camera is an Intel RealSense D435 depth camera.
The present invention uses only a RealSense D435 depth camera for information collection, reducing the equipment carried during UAV pose estimation, while the depth camera directly acquires the three-dimensional coordinates of spatial points.
Unlike a traditional visual odometer, in which the existing SIFT algorithm consumes a large amount of time computing feature point descriptors and therefore has poor real-time performance, the present invention performs feature point matching with the improved SIFT feature matching algorithm, binarizing the SIFT feature vectors and thereby significantly improving the efficiency and accuracy of UAV pose estimation.
Compared with the prior art, the present invention has the following advantages and beneficial effects:
1. The proposed method reduces the equipment the UAV must carry, and its cost is comparatively low.
2. The present invention obtains the two-dimensional image information and three-dimensional coordinates of spatial points with a RealSense D435 depth camera, avoiding the multiple coordinate-system transformations and heavy computation that a monocular odometer needs to recover the three-dimensional coordinates of spatial points, thereby improving the speed and precision of the computation.
3. The present invention binarizes the SIFT feature vectors, which guarantees detection accuracy while solving the problem that a traditional visual odometer using the SIFT algorithm spends a large amount of time on feature point description and therefore has poor real-time performance.
Detailed description of the invention
Fig. 1 is the logical flow chart of the method for the present invention.
Specific embodiment
The invention is further described below with reference to the accompanying drawing and a specific embodiment.
As shown in Figure 1, a specific embodiment of the present invention is as follows:
1) An Intel RealSense D435 depth camera mounted on the UAV collects information about the environment in the UAV's forward field of view, yielding two-dimensional image information and three-dimensional coordinates of the spatial points in that environment, as sketched below;
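A minimal capture sketch for this step, assuming the pyrealsense2 Python SDK (the patent names only the RealSense D435 hardware, not an API); the stream settings and the example pixel (u, v) are illustrative:

```python
import numpy as np
import pyrealsense2 as rs

# Start depth + color streams on the D435 (illustrative resolutions and frame rate).
pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
config.enable_stream(rs.stream.color, 640, 480, rs.format.bgr8, 30)
pipeline.start(config)
align = rs.align(rs.stream.color)  # align depth pixels to the color image

frames = align.process(pipeline.wait_for_frames())
depth_frame = frames.get_depth_frame()
color = np.asanyarray(frames.get_color_frame().get_data())  # two-dimensional image information

# Three-dimensional coordinate of an example pixel (u, v) from depth + intrinsics.
u, v = 320, 240
intrin = depth_frame.profile.as_video_stream_profile().intrinsics
point_3d = rs.rs2_deproject_pixel_to_point(intrin, [u, v], depth_frame.get_distance(u, v))

pipeline.stop()
```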
2) The two-dimensional image information from step 1) is processed with the improved SIFT feature matching algorithm to obtain the preliminarily matched point pairs:
2.1) Feature point detection: a two-dimensional image space is built from the input two-dimensional image; the image is then down-sampled and convolved with a Gaussian kernel, producing sampled images of different sizes that form the Gaussian scale space. Taking the difference of every two adjacent layers of the Gaussian scale space yields the DoG (difference-of-Gaussians) scale images, which form the DoG scale space; extreme points (blobs) are detected in the DoG scale space and taken as feature points. When testing whether a pixel of the DoG scale space is an extreme point, the pixel is compared against 26 pixels in total: its 8 neighbours in the DoG image of the same scale and the 9 pixels each in the DoG images of the adjacent scales above and below. This ensures that extreme points are detected in both scale space and the two-dimensional image space;
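Step 2.1) describes the standard SIFT difference-of-Gaussians detector; as a sketch, OpenCV's SIFT implementation performs this same scale-space extremum detection (the use of OpenCV is an assumption — the patent names no library; frame_prev and frame_next stand for two adjacent frames from step 1):

```python
import cv2

sift = cv2.SIFT_create()
gray_prev = cv2.cvtColor(frame_prev, cv2.COLOR_BGR2GRAY)
gray_next = cv2.cvtColor(frame_next, cv2.COLOR_BGR2GRAY)
kp1, desc1 = sift.detectAndCompute(gray_prev, None)  # keypoints + N x 128 float32 descriptors
kp2, desc2 = sift.detectAndCompute(gray_next, None)
```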
2.2) A descriptor is computed for each detected feature point, giving its gradient feature vector, which is then binarized to obtain the binarized gradient feature vector according to:

b_i = 1 if f_i ≥ a, and b_i = 0 if f_i < a, for i = 1, 2, …, 128

where a is the binarization threshold, taken as the median of f; f = [f_1, f_2, …, f_128] is the gradient feature vector of the feature point, f_i is the i-th gradient component of that vector, and b_i is the i-th gradient component after binarization;
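A sketch of the binarization of step 2.2), with the threshold a taken as the per-descriptor median as stated above (desc1 and desc2 come from the detection sketch):

```python
import numpy as np

def binarize_descriptors(desc):
    """Binarize 128-D SIFT gradient feature vectors: b_i = 1 if f_i >= a, else 0,
    with the threshold a taken as the median of each descriptor f."""
    a = np.median(desc, axis=1, keepdims=True)
    return (desc >= a).astype(np.uint8)

bdesc1 = binarize_descriptors(desc1)
bdesc2 = binarize_descriptors(desc2)
```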
2.3) After each image has been processed through steps 2.1)–2.2) to obtain its feature points and binarized descriptors, the feature points of two adjacent frames are coarsely matched to obtain candidate feature points:

For two adjacent frames, the Euclidean distance between the binarized gradient feature vectors of two feature points serves as the decision metric for feature point similarity. The pair of feature points with the smallest Euclidean distance between their binarized gradient feature vectors and the pair with the second-smallest distance are taken; if the smallest Euclidean distance divided by the second-smallest Euclidean distance is less than a preset ratio threshold, the closest pair is judged similar and kept as a matched point pair, and the second-closest pair is discarded;
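A sketch of this coarse matching; the ratio threshold of 0.8 is an illustrative value, since the patent only says "preset ratio threshold":

```python
import numpy as np

def coarse_match(bdesc1, bdesc2, ratio=0.8):
    """Nearest / second-nearest Euclidean distance ratio test on binarized descriptors.
    Returns index pairs (i, j) of preliminarily matched feature points."""
    matches = []
    d2 = bdesc2.astype(np.int32)
    for i, b in enumerate(bdesc1.astype(np.int32)):
        dists = np.sqrt(((d2 - b) ** 2).sum(axis=1))  # Euclidean distance to every descriptor
        j_best, j_second = np.argsort(dists)[:2]
        if dists[j_best] < ratio * dists[j_second]:   # keep the closest pair, discard the next
            matches.append((i, j_best))
    return matches

matches = coarse_match(bdesc1, bdesc2)
```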
2.4) The matching process of step 2.3) is repeated until all matched point pairs satisfying the condition in the two frames have been obtained.
3) From the preliminarily matched feature point pairs of step 2), the random sample consensus (RANSAC) algorithm rejects the mismatched points among the candidate feature points, yielding the accurately matched point pairs;
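A sketch of the RANSAC rejection; the patent does not state which geometric model RANSAC verifies, so fitting a fundamental matrix between the two frames — a common choice for adjacent views — is an assumption here (kp1, kp2, and matches come from the sketches above):

```python
import cv2
import numpy as np

pts1 = np.float32([kp1[i].pt for i, j in matches])
pts2 = np.float32([kp2[j].pt for i, j in matches])
F, mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, 1.0, 0.99)
inlier_matches = [m for m, keep in zip(matches, mask.ravel()) if keep]  # accurately matched pairs
```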
4) The UAV motion parameters are solved from the accurately matched point pairs, from which the change in UAV position and the attitude angles are obtained:
4.1) The following equation system is first established:

P_j^q = R·P_j^p + T

Then the residual sum-of-squares objective is constructed and minimized by least squares to obtain the rotation matrix R and the translation vector T:

min_{R,T} Σ_j ‖ P_j^q − (R·P_j^p + T) ‖²

where P_j^p and P_j^q are the three-dimensional coordinates (x, y, z) of the two feature points of the j-th matched pair in two adjacent frames of the image sequence, obtained by combining the accurately matched feature points from step 3) with the three-dimensional coordinate information from step 1); the superscript p denotes the earlier frame, the superscript q the later frame, and j is the index of an accurately matched point pair;
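The patent specifies least squares but not a solver; the closed-form SVD solution (Kabsch/Umeyama) is the standard way to minimize this objective, so the sketch below assumes it. P_p and P_q are (N, 3) arrays holding the three-dimensional coordinates of the inlier matched points in the earlier and later frame:

```python
import numpy as np

def solve_motion(P_p, P_q):
    """Closed-form solution of min_{R,T} sum_j ||P_q[j] - (R @ P_p[j] + T)||^2 via SVD."""
    c_p, c_q = P_p.mean(axis=0), P_q.mean(axis=0)  # centroids of the two point sets
    H = (P_p - c_p).T @ (P_q - c_q)                # 3x3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                       # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    T = c_q - R @ c_p
    return R, T
```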
4.2) The change in UAV position and the attitude angles are solved with the following formulas, writing r_mn for the element in row m, column n of the rotation matrix R (for example, r_11 is the element in row 1, column 1) and decomposing R in the ZYX (yaw-pitch-roll) order:

φ = atan2(r_32, r_33)
θ = atan2(−r_31, √(r_32² + r_33²))
ψ = atan2(r_21, r_11)

where φ, θ, ψ are the roll, pitch, and yaw angles respectively, and the translation vector T is the change of the UAV's spatial position.
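A sketch of this attitude extraction under the same ZYX (yaw-pitch-roll) convention assumed above; the convention itself is an assumption, as the patent does not state its rotation order:

```python
import numpy as np

def rotation_to_attitude(R):
    """Roll (phi), pitch (theta), yaw (psi) from a rotation matrix R, ZYX convention."""
    roll  = np.arctan2(R[2, 1], R[2, 2])                      # phi   = atan2(r32, r33)
    pitch = np.arctan2(-R[2, 0], np.hypot(R[2, 1], R[2, 2]))  # theta = atan2(-r31, sqrt(r32^2 + r33^2))
    yaw   = np.arctan2(R[1, 0], R[0, 0])                      # psi   = atan2(r21, r11)
    return roll, pitch, yaw

R, T = solve_motion(P_p, P_q)            # from the least-squares sketch above
roll, pitch, yaw = rotation_to_attitude(R)
# T is the change of the UAV's spatial position between the two frames
```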
Using two adjacent frames of the sequence acquired by the UAV, this embodiment first runs three groups of comparative experiments on feature extraction and feature matching; the matching times required by the original SIFT feature matching algorithm and by the improved algorithm are compared in Table 1.
Table 1
As Table 1 shows, the improved SIFT algorithm substantially reduces the time consumed by matching while guaranteeing detection accuracy, solving the problem that a traditional visual odometer using the SIFT matching algorithm consumes a large amount of time and therefore has poor real-time performance.
Next, this embodiment uses the standard EuRoC dataset to run a qualitative comparison of UAV pose estimation against a complex visual SLAM system, the LIBVISO2 method; the results are shown in Table 2.
Table 2
In Table 2, the estimation method of the invention and the complex LIBVISO2 estimation method each perform UAV pose estimation on four sequences of the standard EuRoC dataset: MH_01_easy, MH_02_easy, MH_03_medium, and MH_01_difficult. From the estimation results, the absolute trajectory error ATE (m) and the relative pose error RPE (m/s) are computed.
As can be seen from Table 2, the method proposed by the invention is close to the complex LIBVISO2 method in both absolute trajectory error and relative pose error; comparison against the ground-truth poses provided by the standard dataset shows that the UAV poses estimated by the method of the invention differ little from the true poses.
The experiments further demonstrate that, while improving the accuracy of UAV pose estimation, the method of this embodiment, unlike the complex LIBVISO2 estimation method, retains low overhead and achieves a lightweight implementation.
As the above embodiment shows, the method of the invention reduces the equipment the UAV must carry, and its cost is comparatively low. At the same time, the visual signal is stable, no error accumulates, and the accuracy and robustness of autonomous UAV pose estimation are improved.
The above is merely a preferred embodiment of the present invention, but the scope of protection of the present invention is not limited thereto. Any equivalent substitution or change made by a person familiar with the art within the technical scope disclosed by the present invention, according to the technical solution of the present invention and its inventive concept, shall be covered by the scope of protection of the present invention.

Claims (5)

1. A UAV position and orientation estimation method based on lightweight visual odometry, characterized by comprising the following steps:
1) a depth camera mounted on the UAV collects information about the environment in the UAV's forward field of view, yielding two-dimensional image information and three-dimensional coordinates of the spatial points in that environment;
2) the two-dimensional image information from step 1) is processed with an improved SIFT feature matching algorithm to obtain preliminarily matched point pairs;
3) from the preliminarily matched feature point pairs of step 2), a random sample consensus (RANSAC) algorithm rejects the mismatched points among the candidate feature points, yielding accurately matched point pairs;
4) the UAV motion parameters are solved from the accurately matched point pairs, from which the change in UAV position and the attitude angles are obtained.
2. The UAV position and orientation estimation method based on lightweight visual odometry according to claim 1, characterized in that step 2) specifically comprises: first, feature point detection; second, computing a binarized feature point descriptor for each detected feature point; then coarsely matching the feature points of two adjacent images using the binarized descriptors to obtain the preliminarily matched point pairs.
3. The UAV position and orientation estimation method based on lightweight visual odometry according to claim 1 or 2, characterized in that the improved SIFT feature matching algorithm of step 2) specifically includes the following steps:
2.1) feature point detection: a two-dimensional image space is built from the input two-dimensional image; the image is then down-sampled and convolved with a Gaussian kernel, producing sampled images of different sizes that form the Gaussian scale space; taking the difference of every two adjacent layers of the Gaussian scale space yields the DoG scale images, which form the DoG scale space, in which extreme points (blobs) are detected and taken as feature points; when testing whether a pixel of the DoG scale space is an extreme point, the pixel is compared against 26 pixels in total: its 8 neighbours in the DoG image of the same scale and the 9 pixels each in the DoG images of the adjacent scales above and below;
2.2) a descriptor is computed for each detected feature point, giving its gradient feature vector, which is then binarized to obtain the binarized gradient feature vector according to:

b_i = 1 if f_i ≥ a, and b_i = 0 if f_i < a, for i = 1, 2, …, 128

where a is the binarization threshold, f = [f_1, f_2, …, f_128] is the gradient feature vector of the feature point, f_i is the i-th gradient component of that vector, and b_i is the i-th gradient component after binarization;
2.3) after each image has been processed through steps 2.1)–2.2) to obtain its feature points and binarized descriptors, the feature points of two adjacent frames are coarsely matched to obtain candidate feature points: for two adjacent frames, the Euclidean distance between the binarized gradient feature vectors of two feature points serves as the decision metric for feature point similarity; the pair of feature points with the smallest Euclidean distance between their binarized gradient feature vectors and the pair with the second-smallest distance are taken; if the smallest Euclidean distance divided by the second-smallest Euclidean distance is less than a preset ratio threshold, the closest pair is judged similar and kept as a matched point pair, and the second-closest pair is discarded;
2.4) the matching process of step 2.3) is repeated until all matched point pairs satisfying the condition in the two frames have been obtained.
4. The UAV position and orientation estimation method based on lightweight visual odometry according to claim 1, characterized in that in step 4) the UAV motion parameters are solved as follows:
4.1) the following UAV motion model is first established:

P_j^q = R·P_j^p + T

then the residual sum-of-squares objective is constructed and minimized by least squares to obtain the rotation matrix R and the translation vector T:

min_{R,T} Σ_j ‖ P_j^q − (R·P_j^p + T) ‖²

where P_j^p and P_j^q are the three-dimensional coordinates (x, y, z) of the two feature points of the j-th matched pair in two adjacent frames of the image sequence, obtained by combining the accurately matched feature points from step 3) with the three-dimensional coordinate information from step 1); the superscript p denotes the earlier frame, the superscript q the later frame, j is the index of an accurately matched point pair, and ‖·‖ denotes the vector norm;
4.2) the change in UAV position and the attitude angles are solved with the following formulas, writing r_mn for the element in row m, column n of the rotation matrix R and decomposing R in the ZYX (yaw-pitch-roll) order:

φ = atan2(r_32, r_33)
θ = atan2(−r_31, √(r_32² + r_33²))
ψ = atan2(r_21, r_11)

where φ, θ, ψ are the roll, pitch, and yaw angles respectively, r_11–r_33 are the elements of the rotation matrix R, and the translation vector T is the change of the UAV's spatial position.
5. The UAV position and orientation estimation method based on lightweight visual odometry according to claim 1, characterized in that the depth camera is an Intel RealSense D435 depth camera.
CN201910008794.9A 2019-01-04 2019-01-04 A kind of UAV position and orientation estimation method based on lightweight visual odometry Pending CN109871024A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910008794.9A CN109871024A (en) 2019-01-04 2019-01-04 A kind of UAV position and orientation estimation method based on lightweight visual odometry

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910008794.9A CN109871024A (en) 2019-01-04 2019-01-04 A kind of UAV position and orientation estimation method based on lightweight visual odometry

Publications (1)

Publication Number Publication Date
CN109871024A (en) 2019-06-11

Family

ID=66917570

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910008794.9A Pending CN109871024A (en) 2019-01-04 2019-01-04 A kind of UAV position and orientation estimation method based on lightweight visual odometry

Country Status (1)

Country Link
CN (1) CN109871024A (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160203373A1 (en) * 2014-02-08 2016-07-14 Honda Motor Co., Ltd. System and method for mapping, localization, and pose correction of a vehicle based on images
CN104121902A (en) * 2014-06-28 2014-10-29 福州大学 Implementation method of indoor robot visual odometer based on Xtion camera
CN108764080A (en) * 2018-05-17 2018-11-06 中国电子科技集团公司第五十四研究所 A kind of unmanned plane vision barrier-avoiding method based on cloud space binaryzation
CN108833928A (en) * 2018-07-03 2018-11-16 中国科学技术大学 Traffic Surveillance Video coding method

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Chen Benqing et al., "Automatic registration of quadrotor UAV images based on SIFT and TPS algorithms", Remote Sensing Technology and Application *

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112102646A (en) * 2019-06-17 2020-12-18 北京初速度科技有限公司 Parking lot entrance positioning method and device in parking positioning and vehicle-mounted terminal
CN112102646B (en) * 2019-06-17 2021-12-31 北京初速度科技有限公司 Parking lot entrance positioning method and device in parking positioning and vehicle-mounted terminal
CN110751123A (en) * 2019-06-25 2020-02-04 北京机械设备研究所 Monocular vision inertial odometer system and method
CN111210463A (en) * 2020-01-15 2020-05-29 上海交通大学 Virtual wide-view visual odometer method and system based on feature point auxiliary matching
CN111210463B (en) * 2020-01-15 2022-07-15 上海交通大学 Virtual wide-view visual odometer method and system based on feature point auxiliary matching
CN111461998A (en) * 2020-03-11 2020-07-28 中国科学院深圳先进技术研究院 Environment reconstruction method and device
WO2021179745A1 (en) * 2020-03-11 2021-09-16 中国科学院深圳先进技术研究院 Environment reconstruction method and device

Similar Documents

Publication Publication Date Title
CN109307508B (en) Panoramic inertial navigation SLAM method based on multiple key frames
CN111156984B (en) Monocular vision inertia SLAM method oriented to dynamic scene
CN110223348B (en) Robot scene self-adaptive pose estimation method based on RGB-D camera
CN112634451B (en) Outdoor large-scene three-dimensional mapping method integrating multiple sensors
CN105856230B (en) A kind of ORB key frames closed loop detection SLAM methods for improving robot pose uniformity
CN109166149A (en) A kind of positioning and three-dimensional wire-frame method for reconstructing and system of fusion binocular camera and IMU
CN109871024A (en) A kind of UAV position and orientation estimation method based on lightweight visual odometry
CN111024066A (en) Unmanned aerial vehicle vision-inertia fusion indoor positioning method
CN111739063A (en) Electric power inspection robot positioning method based on multi-sensor fusion
CN109029433A (en) Join outside the calibration of view-based access control model and inertial navigation fusion SLAM on a kind of mobile platform and the method for timing
CN110044354A (en) A kind of binocular vision indoor positioning and build drawing method and device
CN111862201B (en) Deep learning-based spatial non-cooperative target relative pose estimation method
CN109544636A (en) A kind of quick monocular vision odometer navigation locating method of fusion feature point method and direct method
CN108682027A (en) VSLAM realization method and systems based on point, line Fusion Features
CN109579825B (en) Robot positioning system and method based on binocular vision and convolutional neural network
CN110726406A (en) Improved nonlinear optimization monocular inertial navigation SLAM method
Yu et al. Robust robot pose estimation for challenging scenes with an RGB-D camera
CN110675453B (en) Self-positioning method for moving target in known scene
CN107677274A (en) Unmanned plane independent landing navigation information real-time resolving method based on binocular vision
Fleer et al. Comparing holistic and feature-based visual methods for estimating the relative pose of mobile robots
CN111998862A (en) Dense binocular SLAM method based on BNN
CN112179373A (en) Measuring method of visual odometer and visual odometer
CN115471748A (en) Monocular vision SLAM method oriented to dynamic environment
CN117367427A (en) Multi-mode slam method applicable to vision-assisted laser fusion IMU in indoor environment
CN109764864B (en) Color identification-based indoor unmanned aerial vehicle pose acquisition method and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication (application publication date: 20190611)