CN112577493B - Unmanned aerial vehicle autonomous positioning method and system based on remote sensing map assistance - Google Patents

Unmanned aerial vehicle autonomous positioning method and system based on remote sensing map assistance

Info

Publication number
CN112577493B
Authority
CN
China
Prior art keywords
unmanned aerial vehicle
remote sensing
feature
matching
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110222597.4A
Other languages
Chinese (zh)
Other versions
CN112577493A (en)
Inventor
毛军
屈豪
胡小平
何晓峰
潘献飞
范晨
毛登科
谢嘉
吴雪松
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
National University of Defense Technology
Original Assignee
National University of Defense Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by National University of Defense Technology
Priority to CN202110222597.4A
Publication of CN112577493A
Application granted
Publication of CN112577493B
Legal status: Active

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/20 Instruments for performing navigational calculations
    • G01C21/005 Navigation with correlation of navigation data from several sources, e.g. map or contour matching
    • G01C21/10 Navigation by using measurements of speed or acceleration
    • G01C21/12 Navigation by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning
    • G01C21/16 Dead reckoning by integrating acceleration or speed, i.e. inertial navigation
    • G01C21/165 Inertial navigation combined with non-inertial navigation instruments

Abstract

The invention discloses an unmanned aerial vehicle autonomous positioning method and system based on remote sensing map assistance. The method adopts an inertial/visual combined odometer to continuously estimate the position and attitude of the unmanned aerial vehicle, and adds a matching positioning algorithm on top of the odometer: the visual features in the camera image are matched and located against the visual features of the remote sensing map, which eliminates the accumulated position and attitude errors of the inertial/visual combined odometer. In the matching positioning algorithm, correct matching points are screened using the three-dimensional feature coordinates estimated by the inertial/visual combined odometer, which improves the accuracy of the matching positioning algorithm. In addition, a pose graph optimization objective function is adopted to fuse the navigation result estimated by the inertial/visual combined odometer with the navigation result calculated by the matching positioning algorithm, improving the overall accuracy of the navigation system. The method provides continuous output of navigation parameters and navigation errors that do not accumulate over time.

Description

Unmanned aerial vehicle autonomous positioning method and system based on remote sensing map assistance
Technical Field
The invention relates to the technical field of autonomous navigation, in particular to an unmanned aerial vehicle autonomous positioning method and system based on remote sensing map assistance.
Background
The positioning system is an important guarantee for an unmanned aerial vehicle to complete its tasks. At present, the positioning method widely used on unmanned aerial vehicles is "inertial + satellite (radio)" integrated navigation. However, satellite and radio navigation signals are extremely susceptible to interference, so heavy reliance on satellite or radio navigation poses a great potential risk to the navigation system of the unmanned aerial vehicle. Therefore, achieving autonomous positioning using only the sensors carried on board is of great significance and has broad application prospects for unmanned aerial vehicles.
The inertial/visual combined odometer is a widely used autonomous positioning method for unmanned aerial vehicles. It estimates the navigation parameters of the unmanned aerial vehicle by combining the advantages of inertial sensors, such as sensitivity to metric scale, high output frequency and good dynamic performance, with the high measurement accuracy of visual sensors. The inertial/visual combined odometer can estimate the navigation parameters of the unmanned aerial vehicle and can also three-dimensionally reconstruct the observed image features. However, the positioning error of the inertial/visual combined odometer accumulates as the unmanned aerial vehicle flies, which severely limits its accuracy.
Disclosure of Invention
The invention provides an unmanned aerial vehicle autonomous positioning method and system based on remote sensing map assistance, in order to overcome defects of the prior art such as the accumulation of position error over time in inertial/visual combined navigation methods.
In order to achieve the purpose, the invention provides an unmanned aerial vehicle autonomous positioning method based on remote sensing map assistance, which comprises the following steps:
loading a remote sensing map of an unmanned aerial vehicle flight area according to the unmanned aerial vehicle flight mission, and performing off-line preprocessing on the remote sensing map to obtain a feature description vector of the remote sensing map;
the method comprises the steps of constructing an inertia/vision combined odometer by utilizing a camera image output by an airborne camera and the output of an inertia measurement unit, solving the position and the posture of the unmanned aerial vehicle by utilizing the inertia/vision combined odometer, estimating three-dimensional coordinates of visual features in the camera image in a navigation coordinate system, and establishing feature description vectors of the visual features of the camera image according to the three-dimensional coordinates;
carrying out feature matching between the feature description vectors of the visual features of the camera image and the feature description vectors of the remote sensing map by using a matching positioning algorithm, and solving the position and attitude of the unmanned aerial vehicle according to the matching relation;
and constructing an unmanned aerial vehicle pose graph optimization objective function by utilizing the position and the attitude of the unmanned aerial vehicle solved by the inertia/vision combined odometer and the position and the attitude of the unmanned aerial vehicle solved according to the matching relation, thereby optimizing the position and the attitude of the unmanned aerial vehicle.
In order to achieve the above object, the present invention further provides an unmanned aerial vehicle autonomous positioning system based on remote sensing map assistance, including:
the remote sensing map module is used for loading a remote sensing map of an unmanned aerial vehicle flight area according to the unmanned aerial vehicle flight mission and carrying out off-line preprocessing on the remote sensing map to obtain a feature description vector of the remote sensing map;
the inertial/visual combined odometer module is used for constructing an inertial/visual combined odometer by utilizing a camera image output by an airborne camera and the output of an inertial measurement unit, solving the position and the posture of the unmanned aerial vehicle by utilizing the inertial/visual combined odometer, estimating three-dimensional coordinates of visual features in the camera image in a navigation coordinate system, and establishing feature description vectors of the visual features of the camera image according to the three-dimensional coordinates;
the feature matching module is used for carrying out feature matching between the feature description vectors of the visual features of the camera image and the feature description vectors of the remote sensing map by using a matching positioning algorithm, and calculating the position and attitude of the unmanned aerial vehicle according to the matching relation;
and the optimization module is used for constructing an unmanned aerial vehicle pose graph optimization objective function by utilizing the position and the attitude of the unmanned aerial vehicle solved by the inertia/vision combined odometer and the position and the attitude of the unmanned aerial vehicle solved according to the matching relation, so that the position and the attitude of the unmanned aerial vehicle are optimized.
To achieve the above object, the present invention further provides a computer device, which includes a memory and a processor, wherein the memory stores a computer program, and the processor implements the steps of the method when executing the computer program.
To achieve the above object, the present invention further proposes a computer-readable storage medium having a computer program stored thereon, which, when being executed by a processor, implements the steps of the above method.
Compared with the prior art, the invention has the beneficial effects that:
the unmanned aerial vehicle autonomous positioning method based on remote sensing map assistance provided by the invention adopts an inertia/vision combined odometer to continuously estimate the position and the posture of the unmanned aerial vehicle; the inertia/vision combined odometer has better precision and robustness, and can estimate the navigation parameters of the unmanned aerial vehicle and the three-dimensional coordinates of the visual features of the camera image; on the basis of the inertia/vision combined odometer, a matching positioning algorithm is added, and accumulated errors of the position and the posture in the inertia/vision combined odometer are eliminated by matching and positioning the visual features in the airborne camera image and the visual features in the remote sensing map; according to the invention, the accurate matching points are screened by using the characteristic three-dimensional coordinate information estimated by the inertia/vision combination odometer in the matching positioning algorithm, so that the precision of the matching positioning algorithm is effectively improved; in addition, the invention adopts a pose graph to optimize the objective function, fuses the navigation result estimated by the inertia/vision combination odometer and the navigation result calculated by the matching positioning algorithm, and improves the overall precision of the navigation system. Compared with the existing unmanned aerial vehicle autonomous navigation method, the method has the advantages of continuous output of navigation parameters, no accumulation of navigation errors along with time and the like.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, it is obvious that the drawings in the following description are only some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to the structures shown in the drawings without creative efforts.
Fig. 1 is a flowchart of an unmanned aerial vehicle autonomous positioning method based on remote sensing map assistance provided by the invention;
FIG. 2 is a schematic diagram of feature matching between feature description vectors of visual features of a camera image and feature description vectors of a remote sensing map according to an embodiment of the present invention.
The implementation, functional features and advantages of the objects of the present invention will be further explained with reference to the accompanying drawings.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
In addition, the technical solutions in the embodiments of the present invention may be combined with each other, but it must be based on the realization of those skilled in the art, and when the technical solutions are contradictory or cannot be realized, such a combination of technical solutions should not be considered to exist, and is not within the protection scope of the present invention.
The invention provides an unmanned aerial vehicle autonomous positioning method based on remote sensing map assistance, which comprises the following steps of:
101: loading a remote sensing map of an unmanned aerial vehicle flight area according to the unmanned aerial vehicle flight mission, and performing off-line preprocessing on the remote sensing map to obtain a feature description vector of the remote sensing map;
the remote sensing map is a visual map. Each pixel point in the remote sensing map has an accurate geographical position label, so that reliable position reference can be provided for autonomous positioning of the unmanned aerial vehicle.
102: the method comprises the steps of constructing an inertia/vision combined odometer by utilizing a camera image output by an airborne camera and the output of an inertia measurement unit, solving the position and the posture of the unmanned aerial vehicle by utilizing the inertia/vision combined odometer, estimating three-dimensional coordinates of visual features in the camera image in a navigation coordinate system, and establishing feature description vectors of the visual features of the camera image according to the three-dimensional coordinates;
103: performing feature matching on the feature description vector of the visual feature of the camera map and the feature description vector of the remote sensing map by using a matching positioning algorithm, as shown in FIG. 2, and calculating the position and the posture of the unmanned aerial vehicle according to a matching relation;
104: and constructing an unmanned aerial vehicle pose graph optimization objective function by utilizing the position and the attitude of the unmanned aerial vehicle solved by the inertia/vision combined odometer and the position and the attitude of the unmanned aerial vehicle solved according to the matching relation, thereby optimizing the position and the attitude of the unmanned aerial vehicle.
The unmanned aerial vehicle autonomous positioning method based on remote sensing map assistance provided by the invention adopts an inertia/vision combined odometer to continuously estimate the position, the speed and the posture of the unmanned aerial vehicle; the inertia/vision combined odometer has better precision and robustness, and can estimate the navigation parameters of the unmanned aerial vehicle and the three-dimensional coordinates of the visual features of the camera image; on the basis of the inertia/vision combined odometer, a matching positioning algorithm is added, and accumulated errors of the position and the posture in the inertia/vision combined odometer are eliminated by matching and positioning the visual features in the airborne camera image and the visual features in the remote sensing map; according to the invention, the accurate matching points are screened by using the characteristic three-dimensional coordinate information estimated by the inertia/vision combination odometer in the matching positioning algorithm, so that the precision of the matching positioning algorithm is effectively improved; in addition, the invention adopts a pose graph to optimize the objective function, fuses the navigation result estimated by the inertia/vision combination odometer and the navigation result calculated by the matching positioning algorithm, and improves the overall precision of the navigation system. Compared with the existing unmanned aerial vehicle autonomous navigation method, the method has the advantages of continuous output of navigation parameters, no accumulation of navigation errors along with time and the like.
In one embodiment, for step 101, according to the flight mission of the unmanned aerial vehicle, loading a remote sensing map of a flight area of the unmanned aerial vehicle and performing offline preprocessing on the remote sensing map, including:
001: loading a remote sensing map along the unmanned aerial vehicle task track according to the unmanned aerial vehicle flight task, wherein the remote sensing map is a whole range area of the unmanned aerial vehicle flight task or a local area of the unmanned aerial vehicle flight task;
002: detecting the visual features in the remote sensing map by adopting a feature detection algorithm to obtain the visual features of the remote sensing map, and recording the positions of the visual features in the remote sensing map;
the feature detection algorithm is a feature extraction algorithm based on SITF features, a feature extraction algorithm based on FAST corners or a feature extraction algorithm based on HARRIS corners, and the like.
The positions of the visual features in the remote sensing map are recorded as $\{p_i^{m}\}_{i=1}^{N_m}$, where $p_i^{m} = (u_i^{m}, v_i^{m})$ denotes the pixel coordinates of the $i$-th visual feature in the remote sensing map, $i = 1, \ldots, N_m$, and $N_m$ denotes the total number of visual features.
003: and establishing a feature description vector of the visual features of the remote sensing map according to the positions of the visual features in the remote sensing map.
The feature description vectors of the visual features of the remote sensing map are recorded as $\{d_i^{m}\}_{i=1}^{N_m}$. The feature description vectors are established using SIFT, SURF, or ORB descriptors.
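As a concrete illustration of this offline preprocessing step, the following sketch uses OpenCV to detect visual features in a remote sensing map image and to compute their descriptors and pixel positions. The file name, the feature count, and the choice of SIFT (the text also allows SURF or ORB) are assumptions for illustration, not prescribed by the method.

```python
import cv2
import numpy as np

# Offline preprocessing sketch (assumed parameters): detect visual features in the
# remote sensing map and build their descriptor vectors and pixel positions.
map_img = cv2.imread("remote_sensing_map.png", cv2.IMREAD_GRAYSCALE)  # hypothetical file

detector = cv2.SIFT_create(nfeatures=5000)       # SIFT chosen here; SURF/ORB are also allowed
keypoints, map_descriptors = detector.detectAndCompute(map_img, None)

# Pixel positions (u, v) of the map features, matching the notation above
map_positions = np.array([kp.pt for kp in keypoints], dtype=np.float32)
print(f"extracted {len(keypoints)} map features")
```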
In a next embodiment, for step 102, constructing an inertial/visual combined odometer using the output of the onboard camera and the output of the inertial measurement unit, solving the position, speed and attitude of the drone using the inertial/visual combined odometer, estimating three-dimensional coordinates of the visual features in the camera image in the navigation coordinate system, and establishing feature description vectors of the visual features of the camera image according to the three-dimensional coordinates, including:
201: defining an origin and a direction of a navigation coordinate system;
the initial position of the drone is generally defined as the origin of a navigation coordinate system, with XYZ coordinate axes pointing to the east, north and sky, respectively.
202: tracking visual features in camera pictures output by an onboard camera to obtain the positions of the same visual feature in different camera pictures;
203: calculating the inertia pre-integral quantity in the acquisition time interval of the two frames of camera images by using the output of the inertia measurement unit;
204: and constructing an inertia/vision combined odometer according to the position of the same visual feature in different camera images and the inertia pre-integration value, solving the position, the speed and the posture of the unmanned aerial vehicle by using the inertia/vision combined odometer, estimating the three-dimensional coordinates of the visual feature in the camera images in a navigation coordinate system, and establishing a feature description vector of the visual feature of the camera images according to the three-dimensional coordinates.
In one embodiment, for step 202, tracking the visual features in the camera images output by the onboard cameras to obtain the positions of the same visual feature in different camera images includes:
2021: detecting visual features in a camera image output by the airborne camera by using a feature detection algorithm;
the feature detection algorithm is a feature extraction algorithm based on SITF features, a feature extraction algorithm based on FAST corners or a feature extraction algorithm based on HARRIS corners, and the like.
2022: and performing feature tracking on the visual features in the camera images in a feature tracking mode to obtain the positions of the same visual feature appearing in different camera images.
The feature tracking mode adopts the existing mode, such as KLT sparse optical flow tracking method, dense optical flow tracking method, feature matching method and the like.
The position at which the visual feature numbered $l$ appears in the $k$-th camera image is recorded as $z_l^{k}$.
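A minimal sketch of the feature-tracking step using the KLT sparse optical flow method mentioned above; the frame file names and the tracker parameters are illustrative assumptions.

```python
import cv2

# Track features from the previous camera image into the current one (KLT sparse optical flow).
prev_img = cv2.imread("frame_k.png", cv2.IMREAD_GRAYSCALE)    # hypothetical consecutive frames
curr_img = cv2.imread("frame_k1.png", cv2.IMREAD_GRAYSCALE)

prev_pts = cv2.goodFeaturesToTrack(prev_img, maxCorners=300, qualityLevel=0.01, minDistance=10)
curr_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_img, curr_img, prev_pts, None,
                                               winSize=(21, 21), maxLevel=3)

# Keep only successfully tracked features: their positions in both images
ok = status.ravel() == 1
tracked_prev = prev_pts[ok].reshape(-1, 2)
tracked_curr = curr_pts[ok].reshape(-1, 2)
print(f"tracked {ok.sum()} features between the two camera images")
```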
In another embodiment, for step 203, calculating an inertial pre-integration quantity over the two-frame camera image acquisition time interval using the output of the inertial measurement unit comprises:
2031: acquiring the specific force and angular rate of the unmanned aerial vehicle by using an inertia measurement unit;
the measurement model of the inertia measurement unit is as follows:
$$\hat{a}_t = a_t + b_{a_t} + R_w^{b_t}\, g^w + n_a, \qquad \hat{\omega}_t = \omega_t + b_{\omega_t} + n_\omega \quad (1)$$
where $\hat{a}_t$ is the specific force of the unmanned aerial vehicle at time $t$ measured by the accelerometer in the inertial measurement unit; $\hat{\omega}_t$ is the angular rate at time $t$ measured by the gyroscope in the inertial measurement unit; $a_t$ is the actual specific force of the unmanned aerial vehicle at time $t$; $\omega_t$ is the actual angular rate of the unmanned aerial vehicle at time $t$; $b_{a_t}$ and $b_{\omega_t}$ are the zero-bias values of the accelerometer and the gyroscope estimated at time $t$; $R_w^{b_t}$ is the transformation matrix from the world coordinate system to the body coordinate system at time $t$, $w$ denoting the world coordinate system; $g^w$ is the gravity in the world coordinate system; the measurement noise of the gyroscope $n_\omega$ and the measurement noise of the accelerometer $n_a$ obey Gaussian distributions, $n_a \sim N(0, \sigma_a^2)$ and $n_\omega \sim N(0, \sigma_\omega^2)$, where $\sigma_a^2$ and $\sigma_\omega^2$ are the measurement noise variances of the accelerometer and the gyroscope, respectively.
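To make the measurement model of equation (1) concrete, the sketch below generates accelerometer and gyroscope readings from assumed true motion, biases, and noise levels. All numerical values are illustrative assumptions.

```python
import numpy as np

def imu_measurement(a_true, w_true, R_wb, g_w, b_a, b_w, sigma_a, sigma_w, rng):
    """Measurement model of equation (1): measured specific force and angular rate."""
    a_meas = a_true + b_a + R_wb @ g_w + rng.normal(0.0, sigma_a, 3)
    w_meas = w_true + b_w + rng.normal(0.0, sigma_w, 3)
    return a_meas, w_meas

rng = np.random.default_rng(0)
a_meas, w_meas = imu_measurement(
    a_true=np.array([0.1, 0.0, 0.0]), w_true=np.array([0.0, 0.0, 0.02]),
    R_wb=np.eye(3), g_w=np.array([0.0, 0.0, -9.81]),
    b_a=np.array([0.02, -0.01, 0.03]), b_w=np.array([1e-3, -2e-3, 5e-4]),
    sigma_a=0.05, sigma_w=0.002, rng=rng)
```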
2032: calculating sampling time of two frames of camera images according to specific force and angular rate of the unmanned aerial vehicle
Figure 841354DEST_PATH_IMAGE027
And
Figure 969847DEST_PATH_IMAGE028
the amount of inertial pre-integration in between.
The pre-integration expression is:
$$\alpha_{b_{k+1}}^{b_k} = \iint_{t \in [t_k, t_{k+1}]} R_t^{b_k} (\hat{a}_t - b_{a_t})\, \mathrm{d}t^2, \quad \beta_{b_{k+1}}^{b_k} = \int_{t \in [t_k, t_{k+1}]} R_t^{b_k} (\hat{a}_t - b_{a_t})\, \mathrm{d}t, \quad \gamma_{b_{k+1}}^{b_k} = \int_{t \in [t_k, t_{k+1}]} \tfrac{1}{2}\, \Omega(\hat{\omega}_t - b_{\omega_t})\, \gamma_t^{b_k}\, \mathrm{d}t \quad (2)$$
where $R_t^{b_k}$ is the attitude matrix from the body coordinate system at time $t$ to the body coordinate system at time $t_k$; $R_{b_t}^{w}$ is the attitude matrix from the body coordinate system at time $t$ to the world coordinate system; $\Omega(\cdot)$ denotes the quaternion right-multiplication matrix; $\gamma_{b_{k+1}}^{b_k}$ is the attitude pre-integration quantity; $\beta_{b_{k+1}}^{b_k}$ is the velocity pre-integration quantity; and $\alpha_{b_{k+1}}^{b_k}$ is the position pre-integration quantity.
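The sketch below accumulates the three pre-integration quantities of equation (2) over the IMU samples between two image times using a simple Euler scheme; the integration scheme and sample layout are assumptions for illustration rather than the patented implementation.

```python
import numpy as np
from scipy.spatial.transform import Rotation

def preintegrate(acc, gyr, dt, b_a, b_w):
    """Accumulate position/velocity/attitude pre-integration (alpha, beta, gamma)
    between two camera frames from raw IMU samples (Euler integration sketch)."""
    alpha = np.zeros(3)            # position pre-integration quantity
    beta = np.zeros(3)             # velocity pre-integration quantity
    gamma = Rotation.identity()    # attitude pre-integration: body_t expressed in body_k
    for a_m, w_m in zip(acc, gyr):
        R = gamma.as_matrix()                          # rotation from current body frame to frame k
        alpha += beta * dt + 0.5 * R @ (a_m - b_a) * dt**2
        beta += R @ (a_m - b_a) * dt
        gamma = gamma * Rotation.from_rotvec((w_m - b_w) * dt)
    return alpha, beta, gamma
```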
The error differential equation of the inertia pre-integral quantity is as follows:
$$\begin{bmatrix} \delta\dot{\alpha}_t \\ \delta\dot{\beta}_t \\ \delta\dot{\theta}_t \\ \delta\dot{b}_{a_t} \\ \delta\dot{b}_{\omega_t} \end{bmatrix} = F_t \begin{bmatrix} \delta\alpha_t \\ \delta\beta_t \\ \delta\theta_t \\ \delta b_{a_t} \\ \delta b_{\omega_t} \end{bmatrix} + G_t \begin{bmatrix} n_a \\ n_\omega \\ n_{b_a} \\ n_{b_\omega} \end{bmatrix} \quad (3)$$
where $\delta\alpha_t$ is the estimation error of the pre-integrated position component; $\delta\beta_t$ is the estimation error of the pre-integrated velocity component; $\delta\theta_t$ is the estimation error of the pre-integrated attitude component; $\delta b_{a_t}$ is the accelerometer zero-bias estimation error; $\delta b_{\omega_t}$ is the gyroscope zero-bias estimation error; $n_a$ is the accelerometer measurement noise; $n_\omega$ is the gyroscope measurement noise; $n_{b_a}$ is the accelerometer zero-bias estimation noise; $n_{b_\omega}$ is the gyroscope zero-bias estimation noise; $F_t$ is the state error transfer coefficient matrix; the stacked error vector is the state error matrix; $G_t$ is the noise transfer coefficient matrix; and the stacked noise vector is the noise matrix. The quantities represented by the symbols in equation (3) are all in continuous space.
The covariance matrix $P_{b_{k+1}}^{b_k}$ of the inertial pre-integration quantities can be solved iteratively by a first-order equation in discrete time. The covariance matrix is first initialized to zero, and then solved iteratively by
$$P_{t+\delta t}^{b_k} = (I + F_t\,\delta t)\, P_t^{b_k}\, (I + F_t\,\delta t)^{T} + (G_t\,\delta t)\, Q\, (G_t\,\delta t)^{T} \quad (4)$$
where $\delta t$ is the sampling time interval corresponding to the inertial pre-integration; $Q = \mathrm{diag}(\sigma_a^2, \sigma_\omega^2, \sigma_{b_a}^2, \sigma_{b_\omega}^2)$ is the diagonal matrix constructed from the noise terms; $I$ is the identity matrix; $F_t$ is the error transfer coefficient matrix; and $G_t$ is the noise transfer coefficient matrix.
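A brief sketch of the discrete covariance recursion in equation (4); the error-state dimension (15), noise dimension (12), and the placeholder values of F, G and Q are assumptions for illustration.

```python
import numpy as np

def propagate_covariance(P, F, G, Q, dt):
    """One step of equation (4): P <- (I + F dt) P (I + F dt)^T + (G dt) Q (G dt)^T."""
    A = np.eye(P.shape[0]) + F * dt
    B = G * dt
    return A @ P @ A.T + B @ Q @ B.T

P = np.zeros((15, 15))                 # covariance initialised to zero, as in the text
F = np.zeros((15, 15))                 # state error transfer matrix (placeholder values)
G = np.zeros((15, 12))                 # noise transfer matrix (placeholder values)
Q = np.diag([0.05**2] * 3 + [0.002**2] * 3 + [1e-4] * 6)   # assumed noise variances
P = propagate_covariance(P, F, G, Q, dt=0.005)
```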
The Jacobian matrix of the inertial pre-integration quantity with respect to the error quantity can be iteratively solved by:
$$J_{t+\delta t} = (I + F_t\,\delta t)\, J_t \quad (5)$$
where the Jacobian matrix $J_t$ is given the identity matrix as its initial value, $J_{t_k} = I$. With the covariance matrix $P_{b_{k+1}}^{b_k}$ calculated by equation (4) and the Jacobian matrix $J_{b_{k+1}}$ calculated by equation (5), a first-order approximation of the pre-integration quantities $\alpha_{b_{k+1}}^{b_k}$, $\beta_{b_{k+1}}^{b_k}$ and $\gamma_{b_{k+1}}^{b_k}$ with respect to the zero biases of the inertial measurement unit can be written as
$$\alpha_{b_{k+1}}^{b_k} \approx \hat{\alpha}_{b_{k+1}}^{b_k} + J_{b_a}^{\alpha}\,\delta b_{a_k} + J_{b_\omega}^{\alpha}\,\delta b_{\omega_k}, \quad \beta_{b_{k+1}}^{b_k} \approx \hat{\beta}_{b_{k+1}}^{b_k} + J_{b_a}^{\beta}\,\delta b_{a_k} + J_{b_\omega}^{\beta}\,\delta b_{\omega_k}, \quad \gamma_{b_{k+1}}^{b_k} \approx \hat{\gamma}_{b_{k+1}}^{b_k} \otimes \begin{bmatrix} 1 \\ \tfrac{1}{2} J_{b_\omega}^{\gamma}\,\delta b_{\omega_k} \end{bmatrix} \quad (6)$$
where $J_{b_a}^{\alpha}$ and $J_{b_\omega}^{\alpha}$ are the Jacobian matrices of the pre-integrated position component with respect to the accelerometer zero bias and the gyroscope zero bias, respectively; $J_{b_a}^{\beta}$ and $J_{b_\omega}^{\beta}$ are the Jacobian matrices of the pre-integrated velocity component with respect to the accelerometer zero bias and the gyroscope zero bias, respectively; $J_{b_\omega}^{\gamma}$ is the Jacobian matrix of the pre-integrated attitude component with respect to the gyroscope zero bias; and $\delta b_{a_k}$ and $\delta b_{\omega_k}$ are the estimation errors of the accelerometer and gyroscope zero biases, respectively. The pre-integration measurements $\hat{\alpha}_{b_{k+1}}^{b_k}$, $\hat{\beta}_{b_{k+1}}^{b_k}$ and $\hat{\gamma}_{b_{k+1}}^{b_k}$ are computed according to equation (7), in which $R(q)$ denotes the conversion of an attitude quaternion into an attitude rotation matrix, $\otimes$ denotes quaternion multiplication, and $b_{a_k}$ and $b_{\omega_k}$ are the zero-bias values of the accelerometer and the gyroscope estimated at time $t_k$.
In the next embodiment, for step 204, constructing an inertia/vision combined odometer according to the position of the same visual feature in different camera images and the inertia pre-integration value, solving the position and the attitude of the unmanned aerial vehicle by using the inertia/vision combined odometer, estimating three-dimensional coordinates of the visual feature in the camera images in a navigation coordinate system, and establishing a feature description vector of the visual feature of the camera images according to the three-dimensional coordinates, including:
Suppose the navigation states at $n+1$ time instants need to be estimated. The inertial/visual combined navigation state vector over these instants, $\mathcal{X}$, is expressed as
$$\mathcal{X} = \left[ x_0, x_1, \ldots, x_n, \lambda_0, \lambda_1, \ldots, \lambda_m \right], \qquad x_k = \left[ p_{b_k}^w, v_{b_k}^w, q_{b_k}^w, b_a, b_\omega \right] \quad (8)$$
where $x_k$ denotes the navigation state vector to be estimated at the $k$-th time instant, comprising the position vector $p_{b_k}^w$ of the unmanned aerial vehicle, the velocity vector $v_{b_k}^w$, the attitude quaternion $q_{b_k}^w$, and the accelerometer zero-bias estimate $b_a$ and gyroscope zero-bias estimate $b_\omega$ within the sliding window. In addition, the navigation state vector $\mathcal{X}$ also contains the inverse depth information $\lambda_0, \lambda_1, \ldots, \lambda_m$ of the visual features in the camera images over this period, where $\lambda_l$ is the inverse depth of the $l$-th visual feature in the camera images and $m+1$ is the total number of feature points. The three-dimensional coordinates $P_l^w$ of a feature in the navigation coordinate system can be obtained from its inverse depth as
$$P_l^w = R_{b_i}^{w} \left( R_c^{b}\, \frac{1}{\lambda_l}\, \pi^{-1}\!\left( z_l^{c_i} \right) + p_c^{b} \right) + p_{b_i}^{w} \quad (9)$$
where $z_l^{c_i}$ is the position of the $l$-th visual feature in the camera image $i$ in which it was first observed; $R_{b_i}^{w}$ is the attitude matrix from the $i$-th frame body coordinate system to the world coordinate system; $R_c^{b}$ is the attitude matrix between the onboard camera and the inertial measurement unit body coordinate system; $p_c^{b}$ is the position (translation) between the onboard camera and the inertial measurement unit body coordinate system; $\pi^{-1}(\cdot)$ denotes the inverse mapping from image coordinates to three-dimensional coordinates of unit depth; and $p_{b_i}^{w}$ is the corresponding body position in the world coordinate system.
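The sketch below recovers a feature's three-dimensional coordinates in the navigation frame from its inverse depth, following the structure of equation (9). The pinhole back-projection, identity extrinsics and all numerical values are illustrative assumptions.

```python
import numpy as np

def feature_world_point(uv, inv_depth, K, R_bc, p_bc, R_wb, p_wb):
    """Equation (9) sketch: image observation + inverse depth -> 3D point in the navigation frame."""
    x_unit = np.linalg.inv(K) @ np.array([uv[0], uv[1], 1.0])   # pi^{-1}: pixel -> unit-depth ray
    p_cam = x_unit / inv_depth                                  # point in the camera frame
    p_body = R_bc @ p_cam + p_bc                                # camera -> IMU body frame (extrinsics)
    return R_wb @ p_body + p_wb                                 # body -> navigation/world frame

K = np.array([[400.0, 0, 320], [0, 400.0, 240], [0, 0, 1]])     # assumed camera intrinsics
P_w = feature_world_point(uv=(350.0, 200.0), inv_depth=0.02, K=K,
                          R_bc=np.eye(3), p_bc=np.zeros(3),
                          R_wb=np.eye(3), p_wb=np.array([10.0, 5.0, 100.0]))
```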
The optimization objective function for the combined inertial/visual navigation can be expressed as:
$$\min_{\mathcal{X}} \left\{ \left\| r_{m} - H_{m}\,\mathcal{X} \right\|^2 + \sum_{k} \left\| r_B\!\left( \hat{z}_{b_{k+1}}^{b_k}, \mathcal{X} \right) \right\|_{P_{b_{k+1}}^{b_k}}^2 + \sum_{(l,j)} \left\| r_C\!\left( \hat{z}_l^{c_j}, \mathcal{X} \right) \right\|_{P_l^{c_j}}^2 \right\} \quad (10)$$
where $r_B$ is the residual of the inertial pre-integration quantities, with the specific expression
$$r_B = \begin{bmatrix} R_w^{b_k} \left( p_{b_{k+1}}^w - p_{b_k}^w - v_{b_k}^w\,\Delta t_k + \tfrac{1}{2} g^w\,\Delta t_k^2 \right) - \hat{\alpha}_{b_{k+1}}^{b_k} \\ R_w^{b_k} \left( v_{b_{k+1}}^w - v_{b_k}^w + g^w\,\Delta t_k \right) - \hat{\beta}_{b_{k+1}}^{b_k} \\ 2 \left[ \left( \hat{\gamma}_{b_{k+1}}^{b_k} \right)^{-1} \otimes \left( q_{b_k}^w \right)^{-1} \otimes q_{b_{k+1}}^w \right]_{xyz} \\ b_{a_{k+1}} - b_{a_k} \\ b_{\omega_{k+1}} - b_{\omega_k} \end{bmatrix} \quad (11)$$
whose rows correspond to the estimation errors of the pre-integrated position, velocity and attitude components and of the accelerometer and gyroscope zero biases. Here $R_w^{b_k}$ is the transformation matrix from the world coordinate system to the body coordinate system at time $k$ (the inverse of the body attitude matrix); $p_{b_k}^w$ and $p_{b_{k+1}}^w$ are the position vectors of the unmanned aerial vehicle body; $g^w$ is the gravity in the world coordinate system; $\Delta t_k$ is the time window length; $v_{b_k}^w$ and $v_{b_{k+1}}^w$ are the velocity vectors of the unmanned aerial vehicle; $\hat{\alpha}_{b_{k+1}}^{b_k}$, $\hat{\beta}_{b_{k+1}}^{b_k}$ and $\hat{\gamma}_{b_{k+1}}^{b_k}$ are the measured values of the three pre-integration components; and $b_{a_k}$, $b_{\omega_k}$ are the zero biases of the accelerometer and the gyroscope at time $k$. The quantities represented by the symbols in equation (11) are all in discrete space.
$r_C$ is the visual observation residual, given by equations (12) and (13), where $p_{b_i}^w$ and $p_{b_j}^w$ are the position vectors at times $i$ and $j$ in the world coordinate system. $r_{m}$ is the marginalization residual, representing the influence of the navigation state vectors before the current window; $r_{m}$ and $H_{m}$ are the prior error and Hessian matrix obtained from marginalization.
Equation (10) is solved with a nonlinear optimization algorithm to obtain the position vector $p_{b_k}^w$, velocity vector $v_{b_k}^w$, attitude quaternion $q_{b_k}^w$ and the inverse depths $\lambda_l$ of the visual features estimated by the inertial/visual combined odometer. The three-dimensional coordinates of the visual features in the navigation coordinate system are then recovered from the inverse depths and the solved position and attitude information, the specific calculation being given by equation (9).
In another embodiment, for step 204, establishing the feature description vectors of the visual features of the camera image from the three-dimensional coordinates comprises:
selecting the visual features of the camera image in the inertial/visual combined odometer for matching and positioning on the basis of the reprojection errors of their estimated three-dimensional coordinates. The reprojection error can be calculated according to equation (12); only the features whose reprojection error is smaller than a set threshold are retained for matching and positioning, and the screened feature set is recorded as $\mathcal{F}$. The features in the set $\mathcal{F}$ are described to obtain the feature description vectors $\{d_l^{c}\}$ of the visual features of the camera image. The feature description vectors are established using SIFT, SURF, or ORB descriptors, and the feature description vectors of the visual features of the camera image and of the remote sensing map must be of the same descriptor type.
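As an illustration of this screening step, the sketch below keeps only the features whose reprojection error is below a threshold and then computes SIFT descriptors for them. The per-feature error values, the threshold, the file name and the use of SIFT are assumptions.

```python
import cv2
import numpy as np

# Hypothetical per-feature reprojection errors (pixels) from the odometer's 3D estimates
reproj_err = np.array([0.4, 2.8, 0.9, 0.3, 5.1])
keypoints = [cv2.KeyPoint(x, y, 10) for x, y in [(100, 50), (220, 80), (300, 180), (40, 300), (500, 410)]]

threshold = 1.0                                              # assumed screening threshold
selected = [kp for kp, e in zip(keypoints, reproj_err) if e < threshold]

sift = cv2.SIFT_create()                                     # same descriptor type as used for the map
cam_img = cv2.imread("frame_k.png", cv2.IMREAD_GRAYSCALE)    # hypothetical camera image
selected, cam_descriptors = sift.compute(cam_img, selected)
```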
In the next embodiment, for step 103, performing feature matching on the feature description vector of the visual feature of the camera image and the feature description vector of the remote sensing map by using a matching positioning algorithm, and solving the position and the posture of the unmanned aerial vehicle according to the matching relationship, includes:
301: feature description vectors for camera map visual features
Figure 357683DEST_PATH_IMAGE153
Each element in the remote sensing map, and a feature description vector of the remote sensing map by using a matching positioning algorithm
Figure 830253DEST_PATH_IMAGE154
Searching for the most similar features and establishing feature matching pairs
Figure 396232DEST_PATH_IMAGE155
Figure 790304DEST_PATH_IMAGE156
Showing the first selected from the visual features of the camera view
Figure 280191DEST_PATH_IMAGE157
The first of the individual features and the remote sensing map visual features
Figure 810530DEST_PATH_IMAGE158
A matching pair is formed between the elements.
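A minimal sketch of step 301 using brute-force nearest-neighbour matching between the camera image descriptors and the remote sensing map descriptors; the variable names continue the earlier sketches and are assumptions.

```python
import cv2

# cam_descriptors and map_descriptors are assumed to come from the earlier sketches (SIFT)
matcher = cv2.BFMatcher(cv2.NORM_L2)                # L2 norm suits SIFT/SURF descriptors
matches = matcher.match(cam_descriptors, map_descriptors)

# Each match pairs the l-th camera feature with the most similar map feature i
match_pairs = [(m.queryIdx, m.trainIdx) for m in matches]
```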
302: and screening the feature matching pairs by using the three-dimensional coordinates of the visual features in the camera image in the navigation coordinate system to obtain the position and the posture of the unmanned aerial vehicle.
In a certain embodiment, for step 302, screening the feature matching pairs by using three-dimensional coordinates of the visual features in the camera image in the navigation coordinate system to obtain the position and the posture of the drone includes:
3021: each feature matching pair
Figure 360460DEST_PATH_IMAGE159
And calculating a position translation amount, wherein the calculation method comprises the following steps:
Figure 976249DEST_PATH_IMAGE160
(14)
wherein the content of the first and second substances,
Figure 456778DEST_PATH_IMAGE161
Figure 107202DEST_PATH_IMAGE162
three-dimensional coordinate points in a navigation coordinate system for visual features in a camera view
Figure 828033DEST_PATH_IMAGE163
Is/are as follows
Figure 196698DEST_PATH_IMAGE164
Figure 966071DEST_PATH_IMAGE165
An axis coordinate value;
Figure 720269DEST_PATH_IMAGE166
coordinates of a point at the upper left corner of the remote sensing map under a navigation coordinate system;
Figure 612002DEST_PATH_IMAGE167
Figure 654913DEST_PATH_IMAGE168
coordinates of the visual features of the remote sensing map in the remote sensing map are obtained;
Figure 493556DEST_PATH_IMAGE169
the map resolution is in pixels/meter.
3022: for each matching pair
Figure 367840DEST_PATH_IMAGE170
Initializing an empty set
Figure 430474DEST_PATH_IMAGE171
. Searching for and matching pairs in a matching set (all matching pairs constitute a matching set)
Figure 242572DEST_PATH_IMAGE170
The method comprises the following steps that a matching pair which is close to translation amount is provided, the search method judges the consistency of the translation amount of the two matching pairs, and the consistency judgment parameter is calculated as follows:
Figure 416064DEST_PATH_IMAGE172
(15)
when the consistency parameter is less than the set threshold value, corresponding translation amount is calculated
Figure 895587DEST_PATH_IMAGE173
Join to a collection
Figure 66805DEST_PATH_IMAGE174
Performing the following steps;
3023: screening the corresponding set in all the matching pairs
Figure 881047DEST_PATH_IMAGE175
Set of most elements, denoted as
Figure 858230DEST_PATH_IMAGE176
. When in use
Figure 192259DEST_PATH_IMAGE177
When the number of the elements in the positioning table exceeds a set threshold value, the matching positioning is successful. If the matching is successful, matching positioning calculation is carried out; if the matching is not successful, the process returns to step 301 to match the next camera image.
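The following sketch mirrors steps 3021 to 3023: it computes a translation amount for every match and keeps the largest group of mutually consistent translations. The coordinate sign conventions, the resolution handling, and the thresholds are assumptions for illustration.

```python
import numpy as np

def screen_matches(cam_xy, map_uv, map_origin, resolution, consist_thresh=5.0, min_votes=10):
    """Translation-consistency screening, sketching equations (14)-(15)."""
    # Translation amount per matching pair: map-derived position minus odometer-derived position
    map_xy = map_origin + np.column_stack([map_uv[:, 0], -map_uv[:, 1]]) / resolution
    deltas = map_xy - cam_xy
    best = []
    for d in deltas:
        group = deltas[np.linalg.norm(deltas - d, axis=1) < consist_thresh]
        if len(group) > len(best):
            best = group
    if len(best) < min_votes:
        return None                                  # matching positioning failed
    return best.mean(axis=0)                         # mean translation used in equation (16)
```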
3024: if the matching is successful, the set is calculated
Figure 596696DEST_PATH_IMAGE178
Mean of all elements, note
Figure 117807DEST_PATH_IMAGE179
. Matching position of unmanned aerial vehicle
Figure 898681DEST_PATH_IMAGE180
Can be expressed by the following formula:
Figure 87217DEST_PATH_IMAGE181
(16)
wherein the content of the first and second substances,
Figure 662555DEST_PATH_IMAGE182
a drone position estimated for a combined inertial/visual odometer.
In a next embodiment, for step 104, constructing the unmanned aerial vehicle pose graph optimization objective function from the position and attitude of the unmanned aerial vehicle solved by the inertial/visual combined odometer and the position and attitude of the unmanned aerial vehicle solved according to the matching relation, thereby optimizing the position and attitude of the unmanned aerial vehicle, includes:
401: the position and the attitude of the unmanned aerial vehicle at each camera image sampling moment are described as a node and are represented as
Figure 185809DEST_PATH_IMAGE183
Establishing a connecting edge residual error between two nodes by utilizing the position and the attitude of the unmanned aerial vehicle solved by the inertia/vision combined odometer; node point
Figure 442478DEST_PATH_IMAGE184
And
Figure 547837DEST_PATH_IMAGE185
can be expressed as
Figure 231759DEST_PATH_IMAGE186
The specific expression is as follows:
Figure 258621DEST_PATH_IMAGE187
(17)
wherein the content of the first and second substances,
Figure 115719DEST_PATH_IMAGE188
Figure 528114DEST_PATH_IMAGE189
in order to be a node, the node is,
Figure 445255DEST_PATH_IMAGE190
representing to convert the attitude quaternion into an attitude rotation matrix; residual terms of the connected edge
Figure 428254DEST_PATH_IMAGE191
I.e. the residual term between the measured value of the inertial/visual odometer, the expression is:
Figure 89043DEST_PATH_IMAGE192
(18)
wherein the content of the first and second substances,
Figure 372257DEST_PATH_IMAGE193
representing the combined inertial/visual odometer estimation,
Figure 397982DEST_PATH_IMAGE194
and
Figure 930594DEST_PATH_IMAGE195
and (4) respectively representing an estimated value of the position vector and an estimated value of the attitude vector, and the specific expression is the same as (17).
402: utilizing the matching relation to solve the position and the posture of the unmanned aerial vehicle, and constructing a matching positioning residual error
Figure 582024DEST_PATH_IMAGE196
(ii) a The specific expression is shown as:
Figure 516482DEST_PATH_IMAGE197
(19)
wherein the content of the first and second substances,
Figure 713108DEST_PATH_IMAGE198
representing the position of the drone estimated by the combined inertial/visual odometer,
Figure 733017DEST_PATH_IMAGE199
indicating the matching location of the drone.
If it is first
Figure 938870DEST_PATH_IMAGE200
If each node has no corresponding matching positioning result, no corresponding residual error item exists.
403: according to the residual error of the connected edge
Figure 931097DEST_PATH_IMAGE201
And matching positioning residuals
Figure 360941DEST_PATH_IMAGE202
And constructing a pose graph optimization objective function
Figure 55097DEST_PATH_IMAGE203
The specific expression is as follows:
Figure 861379DEST_PATH_IMAGE204
(20)
wherein the content of the first and second substances,
Figure 708112DEST_PATH_IMAGE205
represents a collection of all nodes;
Figure 512120DEST_PATH_IMAGE206
indicating only the nodes successfully matched and positioned;
Figure 241041DEST_PATH_IMAGE207
the error covariance matrix for representing the measurement of the inertial/visual odometer can be set according to experience and algorithm precision;
Figure 788697DEST_PATH_IMAGE208
the covariance matrix of the positioning error can be set based on experience and algorithm accuracy.
404: and solving the pose graph optimization objective function by adopting a nonlinear optimization algorithm to obtain a node state vector which minimizes the pose graph optimization objective function, wherein the node state vector is the position and the posture of the unmanned aerial vehicle.
The invention also provides an unmanned aerial vehicle autonomous positioning system based on remote sensing map assistance, which comprises:
the remote sensing map module is used for loading a remote sensing map of an unmanned aerial vehicle flight area according to the unmanned aerial vehicle flight mission and carrying out off-line preprocessing on the remote sensing map to obtain a feature description vector of the remote sensing map;
the inertial/visual combined odometer module is used for constructing an inertial/visual combined odometer by utilizing a camera image output by an airborne camera and the output of an inertial measurement unit, solving the position and the posture of the unmanned aerial vehicle by utilizing the inertial/visual combined odometer, estimating three-dimensional coordinates of visual features in the camera image in a navigation coordinate system, and establishing feature description vectors of the visual features of the camera image according to the three-dimensional coordinates;
the feature matching module is used for performing feature matching between the feature description vectors of the visual features of the camera image and the feature description vectors of the remote sensing map, and calculating the position and attitude of the unmanned aerial vehicle according to the matching relation;
and the optimization module is used for solving the position and the attitude of the unmanned aerial vehicle by using the inertia/vision combined odometer, solving the position and the attitude of the unmanned aerial vehicle according to the matching relation, and constructing an unmanned aerial vehicle pose graph optimization objective function so as to optimize the position and the attitude of the unmanned aerial vehicle.
In one embodiment, the remote sensing map module further comprises:
001: loading a remote sensing map along the unmanned aerial vehicle task track according to the unmanned aerial vehicle flight task, wherein the remote sensing map is a whole range area of the unmanned aerial vehicle flight task or a local area of the unmanned aerial vehicle flight task;
002: detecting the visual features in the remote sensing map by adopting a feature detection algorithm to obtain the visual features of the remote sensing map, and recording the positions of the visual features in the remote sensing map;
The feature detection algorithm may be a feature extraction algorithm based on SIFT features, a feature extraction algorithm based on FAST corners, a feature extraction algorithm based on HARRIS corners, or the like.
The positions of the visual features in the remote sensing map are recorded as $\{p_i^{m}\}_{i=1}^{N_m}$, where $p_i^{m} = (u_i^{m}, v_i^{m})$ denotes the pixel coordinates of the $i$-th visual feature in the remote sensing map, $i = 1, \ldots, N_m$, and $N_m$ denotes the total number of visual features.
003: and establishing a feature description vector of the visual features of the remote sensing map according to the positions of the visual features in the remote sensing map.
The feature description vectors of the visual features of the remote sensing map are recorded as $\{d_i^{m}\}_{i=1}^{N_m}$. The feature description vectors are established using SIFT, SURF, or ORB descriptors.
In a further embodiment, the combined inertial/visual odometer module further comprises:
201: defining an origin and a direction of a navigation coordinate system;
the initial position of the drone is generally defined as the origin of a navigation coordinate system, with XYZ coordinate axes pointing to the east, north and sky, respectively.
202: tracking visual features in camera pictures output by an onboard camera to obtain the positions of the same visual feature in different camera pictures;
203: calculating the inertia pre-integral quantity in the acquisition time interval of the two frames of camera images by using the output of the inertia measurement unit;
204: and constructing an inertia/vision combined odometer according to the position of the same visual feature in different camera images and the inertia pre-integration value, solving the position, the speed and the posture of the unmanned aerial vehicle by using the inertia/vision combined odometer, estimating the three-dimensional coordinates of the visual feature in the camera images in a navigation coordinate system, and establishing a feature description vector of the visual feature of the camera images according to the three-dimensional coordinates.
In a certain embodiment, the combined inertial/visual odometer module further comprises:
2021: detecting visual features in a camera image output by the airborne camera by using a feature detection algorithm;
The feature detection algorithm may be a feature extraction algorithm based on SIFT features, a feature extraction algorithm based on FAST corners, a feature extraction algorithm based on HARRIS corners, or the like.
2022: and performing feature tracking on the visual features in the camera images in a feature tracking mode to obtain the positions of the same visual feature appearing in different camera images.
The feature tracking mode adopts the existing mode, such as KLT sparse optical flow tracking method, dense optical flow tracking method, feature matching method and the like.
The position at which the visual feature numbered $l$ appears in the $k$-th camera image is recorded as $z_l^{k}$.
In another embodiment, the combined inertial/visual odometer module further comprises:
2031: acquiring the specific force and angular rate of the unmanned aerial vehicle by using an inertia measurement unit;
the measurement model of the inertia measurement unit is as follows:
$$\hat{a}_t = a_t + b_{a_t} + R_w^{b_t}\, g^w + n_a, \qquad \hat{\omega}_t = \omega_t + b_{\omega_t} + n_\omega \quad (1)$$
where, as defined for equation (1) above, $\hat{a}_t$ and $\hat{\omega}_t$ are the specific force and angular rate of the unmanned aerial vehicle at time $t$ measured by the accelerometer and gyroscope of the inertial measurement unit; $a_t$ and $\omega_t$ are the actual specific force and angular rate; $b_{a_t}$ and $b_{\omega_t}$ are the zero-bias values estimated at time $t$; $R_w^{b_t}$ is the transformation matrix from the world coordinate system $w$ to the body coordinate system at time $t$; $g^w$ is the gravity in the world coordinate system; and the measurement noises $n_a$ and $n_\omega$ obey Gaussian distributions whose variances $\sigma_a^2$ and $\sigma_\omega^2$ are the accelerometer and gyroscope measurement noise variances, respectively.
2032: calculating sampling time of two frames of camera images according to specific force and angular rate of the unmanned aerial vehicle
Figure 791846DEST_PATH_IMAGE235
And
Figure 278495DEST_PATH_IMAGE236
the amount of inertial pre-integration in between.
The pre-integration expression is:
$$\alpha_{b_{k+1}}^{b_k} = \iint_{t \in [t_k, t_{k+1}]} R_t^{b_k} (\hat{a}_t - b_{a_t})\, \mathrm{d}t^2, \quad \beta_{b_{k+1}}^{b_k} = \int_{t \in [t_k, t_{k+1}]} R_t^{b_k} (\hat{a}_t - b_{a_t})\, \mathrm{d}t, \quad \gamma_{b_{k+1}}^{b_k} = \int_{t \in [t_k, t_{k+1}]} \tfrac{1}{2}\, \Omega(\hat{\omega}_t - b_{\omega_t})\, \gamma_t^{b_k}\, \mathrm{d}t \quad (2)$$
where $\alpha_{b_{k+1}}^{b_k}$, $\beta_{b_{k+1}}^{b_k}$ and $\gamma_{b_{k+1}}^{b_k}$ are the position, velocity and attitude pre-integration quantities, and the remaining symbols are as defined for equation (2) above.
The error differential equation of the inertia pre-integral quantity is as follows:
$$\begin{bmatrix} \delta\dot{\alpha}_t \\ \delta\dot{\beta}_t \\ \delta\dot{\theta}_t \\ \delta\dot{b}_{a_t} \\ \delta\dot{b}_{\omega_t} \end{bmatrix} = F_t \begin{bmatrix} \delta\alpha_t \\ \delta\beta_t \\ \delta\theta_t \\ \delta b_{a_t} \\ \delta b_{\omega_t} \end{bmatrix} + G_t \begin{bmatrix} n_a \\ n_\omega \\ n_{b_a} \\ n_{b_\omega} \end{bmatrix} \quad (3)$$
where the pre-integration error components, the zero-bias estimation errors, the measurement and bias estimation noises, the state error transfer coefficient matrix $F_t$ and the noise transfer coefficient matrix $G_t$ are as defined for equation (3) above; the quantities represented by the symbols in equation (3) are all in continuous space.
The covariance matrix $P_{b_{k+1}}^{b_k}$ of the inertial pre-integration quantities can be solved iteratively by a first-order equation in discrete time. The covariance matrix is first initialized to zero and then solved iteratively by
$$P_{t+\delta t}^{b_k} = (I + F_t\,\delta t)\, P_t^{b_k}\, (I + F_t\,\delta t)^{T} + (G_t\,\delta t)\, Q\, (G_t\,\delta t)^{T} \quad (4)$$
where $\delta t$ is the sampling time interval corresponding to the inertial pre-integration, $Q$ is the diagonal matrix constructed from the noise terms, $I$ is the identity matrix, and $F_t$ and $G_t$ are the error and noise transfer coefficient matrices.
The Jacobian matrix of the inertial pre-integration quantities with respect to the error quantities can be solved iteratively by:
[formula (5)]
In formula (5), the Jacobian matrix is given an initial value equal to the identity matrix. Using the covariance matrix calculated by formula (4) and the Jacobian matrix calculated by formula (5), the first-order approximation of the pre-integration quantities with respect to the zero biases of the inertial measurement unit can be written as:
[formula (6)]
In formula (6): the first two Jacobian matrices are those of the pre-integrated position component with respect to the accelerometer zero bias and the gyroscope zero bias, respectively; the next two are the Jacobian matrices of the pre-integrated velocity component with respect to the accelerometer and gyroscope zero biases; the following one is the Jacobian matrix of the pre-integrated attitude component with respect to the gyroscope zero bias; the two error quantities are the estimation errors of the accelerometer and gyroscope zero biases, respectively; and the three pre-integration quantities appearing in formula (6) are calculated as:
[formula (7)]
In formula (7): one operator converts the attitude quaternion into an attitude rotation matrix and another denotes quaternion multiplication; the two zero-bias values are those of the accelerometer and gyroscope estimated at time t_k.
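A sketch of the first-order correction described by formulas (6) and (7) follows; the Jacobian block names and the small-angle quaternion update are illustrative assumptions, and quat_mul is the Hamilton product defined in the pre-integration sketch above.

    import numpy as np

    def correct_preintegration(alpha, beta, gamma, J, d_ba, d_bg):
        # first-order update of the pre-integration quantities for small changes
        # (d_ba, d_bg) in the accelerometer / gyroscope zero-bias estimates;
        # J is a dict of Jacobian blocks obtained from the iteration of formula (5)
        alpha_c = alpha + J["p_ba"] @ d_ba + J["p_bg"] @ d_bg
        beta_c = beta + J["v_ba"] @ d_ba + J["v_bg"] @ d_bg
        half = 0.5 * (J["q_bg"] @ d_bg)              # small attitude correction
        dq = np.array([1.0, half[0], half[1], half[2]])
        gamma_c = quat_mul(gamma, dq)                # Hamilton product, as defined above
        return alpha_c, beta_c, gamma_c / np.linalg.norm(gamma_c)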
In a next embodiment, the combined inertial/visual odometer module further comprises:
Suppose the navigation states at n moments need to be estimated; the inertial/visual combined navigation state vector at time t_k is then expressed as:
[formula (8)]
In formula (8), each per-moment navigation state vector to be estimated includes the position vector, velocity vector and attitude quaternion of the unmanned aerial vehicle, together with the accelerometer zero-bias estimate and the gyroscope zero-bias estimate within the sliding window. Besides these per-moment states, the navigation state vector also includes the inverse depth information of the visual features observed in the camera images over the n moments, where the l-th entry is the inverse depth of the l-th visual feature, l = 1, ..., m, and m is the total number of feature points. The three-dimensional coordinates of a feature in the navigation coordinate system can be obtained from its inverse depth as follows:
[formula (9)]
In formula (9), the quantities are, in order: the position of the l-th visual feature in the camera image in which it was first observed; the attitude matrix from that frame's coordinate system to the world coordinate system; the attitude matrix and the position offset between the onboard camera and the body coordinate system of the inertial measurement unit; and the operator representing the inverse mapping from image coordinates to three-dimensional coordinates at unit depth.
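The recovery of a feature's three-dimensional coordinates from its inverse depth, in the spirit of formula (9), can be sketched as follows; a pinhole camera with intrinsic matrix K is assumed, and the variable names are illustrative rather than the patent's.

    import numpy as np

    def feature_world_point(u, inv_depth, K, R_wb, p_wb, R_bc, p_bc):
        # back-project the pixel to a unit-depth point in the camera frame,
        # scale by the depth (reciprocal of the inverse depth), then transform
        # through the camera-to-body extrinsics and the body-to-world pose
        pc_unit = np.linalg.inv(K) @ np.array([u[0], u[1], 1.0])
        pc = pc_unit / inv_depth
        pb = R_bc @ pc + p_bc            # camera frame -> IMU body frame
        return R_wb @ pb + p_wb          # body frame -> world / navigation frame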
The optimization objective function of the combined inertial/visual navigation can be expressed as:
[formula (10)]
In formula (10), the first residual is that of the inertial pre-integration values, with the specific expression:
[formula (11)]
In formula (11), the quantities are, in order: the estimation errors of the pre-integrated position, velocity and attitude components; the estimation errors of the accelerometer and gyroscope zero biases; the transformation matrix from the world coordinate system to the body coordinate system at the corresponding moment; the position vector of the unmanned aerial vehicle body at that moment; the gravity vector in the world coordinate system; the length of the time window; the velocity vector of the unmanned aerial vehicle at that moment; the measured values of the three pre-integration components; the inverse of the attitude matrix of the unmanned aerial vehicle body at that moment; and the accelerometer and gyroscope zero biases at that moment. All quantities represented by the symbols in formula (11) are in the discrete-time domain.
The second residual in formula (10) is the visual observation residual, with the specific expressions:
[formula (12)]
[formula (13)]
In formulas (12) and (13), the two position quantities are the position vectors, in the world coordinate system, at the two observation moments. The third residual is the marginalization residual, which represents the influence of the navigation state vectors prior to the current moment and is constructed from the prior error and the Hessian matrix.
Formula (10) is solved with a nonlinear optimization algorithm to obtain the position vector, velocity vector and attitude quaternion of the unmanned aerial vehicle estimated by the inertial/visual combined odometer, together with the inverse depths of the visual features. The three-dimensional coordinates of the visual features in the navigation coordinate system are then recovered from the solved inverse depths and the position and attitude information, using the calculation given in formula (9).
In another embodiment, the combined inertial/visual odometer module further comprises:
Visual features in the camera images of the combined inertial/visual odometer are selected for matching and positioning according to the reprojection error of their estimated three-dimensional coordinates in the camera image. The reprojection error can be calculated according to formula (12); only features whose reprojection error is smaller than a set threshold are retained for matching and positioning, and the screened feature set is recorded. Feature description is then performed on this set to obtain the feature description vectors of the camera-image visual features. The feature description vectors are established using SIFT, SURF or ORB features, and the same type of feature description vector is used for the camera-image visual features and for the remote sensing map.
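A sketch of computing descriptors at the screened feature locations with OpenCV is shown below; ORB is used by default with SIFT as the alternative (SURF is omitted because it requires the contrib build), and the keypoint size of 31 pixels is an assumed value.

    import cv2

    def describe_features(image_gray, pixel_locations, kind="ORB"):
        # compute descriptors at already-screened feature locations so that camera-image
        # features and remote sensing map features share the same description type
        extractor = cv2.ORB_create() if kind == "ORB" else cv2.SIFT_create()
        keypoints = [cv2.KeyPoint(float(x), float(y), 31) for (x, y) in pixel_locations]
        keypoints, descriptors = extractor.compute(image_gray, keypoints)
        return keypoints, descriptors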
In a next embodiment, the feature matching module further comprises:
301: feature description vectors for camera map visual features
Figure 299366DEST_PATH_IMAGE360
Each element in the remote sensing map, and a feature description vector of the remote sensing map by using a matching positioning algorithm
Figure 173781DEST_PATH_IMAGE361
Searching for the most similar features and establishing feature matching pairs
Figure 550536DEST_PATH_IMAGE362
Figure 826797DEST_PATH_IMAGE363
Showing the first selected from the visual features of the camera view
Figure 512862DEST_PATH_IMAGE364
The first of the individual features and the remote sensing map visual features
Figure 812256DEST_PATH_IMAGE365
A matching pair is formed between the elements.
302: and screening the feature matching pairs by using the three-dimensional coordinates of the visual features in the camera image in the navigation coordinate system to obtain the position and the posture of the unmanned aerial vehicle.
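The nearest-neighbour descriptor search of step 301 can be sketched as below; the ratio test used for screening is an assumption, since the description only requires finding the most similar remote sensing map feature for each camera-image feature.

    import cv2

    def match_descriptors(desc_cam, desc_map, ratio=0.8):
        # for each camera-image descriptor, find the most similar remote sensing map
        # descriptor; keep the pair only if it clearly beats the second-best candidate
        matcher = cv2.BFMatcher(cv2.NORM_HAMMING)   # use cv2.NORM_L2 for SIFT descriptors
        pairs = []
        for i, knn in enumerate(matcher.knnMatch(desc_cam, desc_map, k=2)):
            if len(knn) == 2 and knn[0].distance < ratio * knn[1].distance:
                pairs.append((i, knn[0].trainIdx))  # (camera feature index, map feature index)
        return pairs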
In a certain embodiment, the feature matching module further comprises:
3021: For each feature matching pair, calculate a position translation amount as follows:
[formula (14)]
In formula (14), the quantities are, in order: the x- and y-axis coordinate values of the three-dimensional coordinate point, in the navigation coordinate system, of the matched visual feature in the camera image; the coordinates of the upper-left corner point of the remote sensing map in the navigation coordinate system; the coordinates of the matched remote sensing map visual feature within the remote sensing map; and the map resolution, in pixels per meter.
3022: For each matching pair, initialize an empty set. Search the matching set (the set formed by all matching pairs) for matching pairs whose translation amounts are close to that of the current pair. The search judges the consistency of the translation amounts of the two matching pairs, and the consistency parameter is calculated as:
[formula (15)]
When the consistency parameter is smaller than a set threshold, the corresponding translation amount is added to the set of the current matching pair.
3023: Among the sets corresponding to all matching pairs, select the set that contains the most elements. When the number of elements in this set exceeds a set threshold, the matching positioning is successful. If the matching is successful, the matching positioning calculation is carried out; otherwise, the process returns to step 301 to match the next camera image.
3024: If the matching is successful, calculate the mean of all elements in the selected set. The matching position of the unmanned aerial vehicle can then be expressed as:
[formula (16)]
In formula (16), the matching position is obtained from the mean translation amount and the position of the unmanned aerial vehicle estimated by the combined inertial/visual odometer.
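Steps 3021 to 3024 can be sketched as below; the Euclidean consistency test, the threshold values, and the way the mean translation is applied to the odometer position are illustrative assumptions rather than the exact forms of formulas (14) to (16), and image-axis orientation conventions are ignored for brevity.

    import numpy as np

    def match_locate(cam_xy, map_uv, pairs, map_origin_xy, resolution, p_vio_xy,
                     consist_thresh=5.0, min_inliers=10):
        # 3021: position translation amount implied by each matching pair
        translations = []
        for i, j in pairs:
            map_xy = map_origin_xy + np.asarray(map_uv[j]) / resolution  # map feature in metres
            translations.append(map_xy - np.asarray(cam_xy[i]))
        translations = np.asarray(translations)

        # 3022-3023: group pairs whose translation amounts agree; keep the largest group
        best = np.empty((0, 2))
        for t in translations:
            group = translations[np.linalg.norm(translations - t, axis=1) < consist_thresh]
            if len(group) > len(best):
                best = group
        if len(best) < min_inliers:
            return None                     # matching positioning failed for this image

        # 3024: matched position from the mean translation and the odometer estimate
        return np.asarray(p_vio_xy) + best.mean(axis=0)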
In a further embodiment, the optimization module further comprises:
401: The position and attitude of the unmanned aerial vehicle at each camera-image sampling moment are described as a node, and the connected-edge residual between two nodes is established using the position and attitude of the unmanned aerial vehicle solved by the inertial/visual combined odometer. The relative pose between two nodes can be expressed as:
[formula (17)]
In formula (17), the position and attitude quantities are those of the nodes, and the operator converts an attitude quaternion into an attitude rotation matrix. The connected-edge residual term, i.e. the residual with respect to the inertial/visual odometer measurement, is expressed as:
[formula (18)]
In formula (18), the marked quantities denote the estimates given by the combined inertial/visual odometer, namely the estimated position vector and the estimated attitude vector; their expression takes the same form as formula (17).
402: Using the position and attitude of the unmanned aerial vehicle solved from the matching relation, construct a matching positioning residual, whose specific expression is:
[formula (19)]
In formula (19), the two quantities are the position of the unmanned aerial vehicle estimated by the combined inertial/visual odometer and the matched position of the unmanned aerial vehicle, respectively. If a node has no corresponding matching positioning result, it has no corresponding residual term.
403: From the connected-edge residuals and the matching positioning residuals, construct the pose graph optimization objective function, whose specific expression is:
[formula (20)]
In formula (20): the first set contains all nodes, while the second contains only the nodes for which matching positioning succeeded; the two covariance matrices are the error covariance matrix of the inertial/visual odometer measurements and the covariance matrix of the matching positioning error, both of which can be set according to experience and algorithm accuracy.
404: and solving the pose graph optimization objective function by adopting a nonlinear optimization algorithm to obtain a node state vector which minimizes the pose graph optimization objective function, wherein the node state vector is the position and the posture of the unmanned aerial vehicle.
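A simplified, position-only sketch of steps 401 to 404 follows; the attitude terms and full covariance weighting of formulas (17) to (20) are reduced to scalar weights, and SciPy's least_squares stands in for the nonlinear optimization algorithm.

    import numpy as np
    from scipy.optimize import least_squares

    def pose_graph_residuals(flat_positions, odom_deltas, match_positions,
                             sigma_o=1.0, sigma_m=3.0):
        # connected-edge residuals (odometer-measured displacement between consecutive
        # nodes) plus matching positioning residuals for nodes with a successful match
        p = flat_positions.reshape(-1, 3)
        res = []
        for k, delta in enumerate(odom_deltas):
            res.append(((p[k + 1] - p[k]) - delta) / sigma_o)
        for k, p_match in match_positions.items():
            res.append((p[k] - p_match) / sigma_m)
        return np.concatenate(res)

    # step 404: solve for the node positions that minimise the objective
    # sol = least_squares(pose_graph_residuals, p_init.ravel(),
    #                     args=(odom_deltas, match_positions))
    # p_opt = sol.x.reshape(-1, 3)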
The invention further provides a computer device, which includes a memory and a processor, wherein the memory stores a computer program, and the processor implements the steps of the method when executing the computer program.
The invention also proposes a computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the steps of the method described above.
The above description is only a preferred embodiment of the present invention, and is not intended to limit the scope of the present invention, and all modifications and equivalents of the present invention, which are made by the contents of the present specification and the accompanying drawings, or directly/indirectly applied to other related technical fields, are included in the scope of the present invention.

Claims (9)

1. An unmanned aerial vehicle autonomous positioning method based on remote sensing map assistance is characterized by comprising the following steps:
loading a remote sensing map of an unmanned aerial vehicle flight area according to the unmanned aerial vehicle flight mission, and performing off-line preprocessing on the remote sensing map to obtain a feature description vector of the remote sensing map;
the method comprises the steps of constructing an inertia/vision combined odometer by utilizing a camera image output by an airborne camera and the output of an inertia measurement unit, solving the position and the posture of the unmanned aerial vehicle by utilizing the inertia/vision combined odometer, estimating three-dimensional coordinates of visual features in the camera image in a navigation coordinate system, and establishing feature description vectors of the visual features of the camera image according to the three-dimensional coordinates;
carrying out feature matching on the feature description vector of the visual feature of the camera map and the feature description vector of the remote sensing map by using a matching positioning algorithm, and solving the position and the posture of the unmanned aerial vehicle according to a matching relation;
the position and the attitude of the unmanned aerial vehicle solved by the inertia/vision combined odometer and the position and the attitude of the unmanned aerial vehicle solved according to the matching relation are utilized to construct an unmanned aerial vehicle pose graph optimization objective function, so that the position and the attitude of the unmanned aerial vehicle are optimized, and the method comprises the following steps:
describing the position and the posture of the unmanned aerial vehicle at each camera image sampling moment as a node, and establishing a communication edge residual error between the two nodes by utilizing the position and the posture of the unmanned aerial vehicle solved by the inertia/vision combined odometer;
constructing a matching positioning residual error by utilizing the position and the posture of the unmanned aerial vehicle calculated according to the matching relation;
constructing a pose graph optimization objective function according to the connected edge residual error and the matched positioning residual error;
and solving the pose graph optimization objective function by adopting a nonlinear optimization algorithm to obtain a node state vector which minimizes the pose graph optimization objective function, wherein the node state vector is the position and the posture of the unmanned aerial vehicle.
2. The unmanned aerial vehicle autonomous positioning method based on remote sensing map assistance of claim 1, wherein loading a remote sensing map of an unmanned aerial vehicle flight area and performing offline preprocessing on the remote sensing map according to an unmanned aerial vehicle flight mission comprises:
loading a remote sensing map along the unmanned aerial vehicle task track according to the unmanned aerial vehicle flight task, wherein the remote sensing map is a whole range area of the unmanned aerial vehicle flight task or a local area of the unmanned aerial vehicle flight task;
detecting the visual features in the remote sensing map by adopting a feature detection algorithm to obtain the visual features of the remote sensing map, and recording the positions of the visual features in the remote sensing map;
and establishing a feature description vector of the visual features of the remote sensing map according to the positions of the visual features in the remote sensing map.
3. The unmanned aerial vehicle autonomous positioning method based on remote sensing map assistance as claimed in claim 1, wherein the method comprises the steps of constructing an inertia/vision combined odometer by using a camera image output by an onboard camera and an output of an inertia measurement unit, solving the position and the attitude of the unmanned aerial vehicle by using the inertia/vision combined odometer, estimating three-dimensional coordinates of visual features in the camera image in a navigation coordinate system, and establishing feature description vectors of the visual features of the camera image according to the three-dimensional coordinates, and comprises the following steps:
defining an origin and a direction of a navigation coordinate system;
tracking visual features in camera pictures output by an onboard camera to obtain the positions of the same visual feature in different camera pictures;
calculating the inertia pre-integral quantity in the acquisition time interval of the two frames of camera images by using the output of the inertia measurement unit;
and constructing an inertia/vision combined odometer according to the positions of the same visual feature in different camera images and the inertia pre-integration value, solving the position, the speed and the posture of the unmanned aerial vehicle by using the inertia/vision combined odometer, estimating the three-dimensional coordinates of the visual feature in the camera images in a navigation coordinate system, and establishing a feature description vector of the visual feature of the camera images according to the three-dimensional coordinates.
4. The unmanned aerial vehicle autonomous positioning method based on remote sensing map assistance of claim 3, wherein tracking visual features in camera images output by an onboard camera to obtain positions of the same visual feature in different camera images comprises:
detecting visual features in a camera image output by the airborne camera by using a feature detection algorithm;
and performing feature tracking on the visual features in the camera images in a feature tracking mode to obtain the positions of the same visual feature appearing in different camera images.
5. The unmanned aerial vehicle autonomous positioning method based on remote sensing map assistance of claim 3, wherein calculating the inertial pre-integration value within the two-frame camera image acquisition time interval using the output of the inertial measurement unit comprises:
acquiring the specific force and angular rate of the unmanned aerial vehicle by using an inertia measurement unit;
calculating, according to the specific force and angular rate of the unmanned aerial vehicle, the inertial pre-integration quantities between the sampling instants t_k and t_{k+1} of the two frames of camera images.
6. The unmanned aerial vehicle autonomous positioning method based on remote sensing map assistance of claim 1, wherein the matching positioning algorithm is used for matching the feature description vector of the camera image visual feature with the feature description vector of the remote sensing map, and the position and the posture of the unmanned aerial vehicle are solved according to the matching relation, and the method comprises the following steps:
for each element in the feature description vector of the visual features of the camera image, searching features most similar to the element in the feature description vector of the remote sensing map by using a matching positioning algorithm, and establishing a feature matching pair;
and screening the feature matching pairs by using the three-dimensional coordinates of the visual features in the camera image in a navigation coordinate system to obtain the position and the posture of the unmanned aerial vehicle.
7. The utility model provides an unmanned aerial vehicle autonomous positioning system based on remote sensing map is supplementary which characterized in that includes:
the remote sensing map module is used for loading a remote sensing map of an unmanned aerial vehicle flight area according to the unmanned aerial vehicle flight mission and carrying out off-line preprocessing on the remote sensing map to obtain a feature description vector of the remote sensing map;
the inertial/visual combined odometer module is used for constructing an inertial/visual combined odometer by utilizing a camera image output by an airborne camera and the output of an inertial measurement unit, solving the position and the posture of the unmanned aerial vehicle by utilizing the inertial/visual combined odometer, estimating three-dimensional coordinates of visual features in the camera image in a navigation coordinate system, and establishing feature description vectors of the visual features of the camera image according to the three-dimensional coordinates;
the characteristic matching module is used for carrying out characteristic matching on the characteristic description vector of the visual characteristic of the camera map and the characteristic description vector of the remote sensing map by using a matching positioning algorithm and calculating the position and the posture of the unmanned aerial vehicle according to a matching relation;
the optimization module is used for constructing an unmanned aerial vehicle pose graph optimization objective function by utilizing the position and the attitude of the unmanned aerial vehicle solved by the inertia/vision combination odometer and the position and the attitude of the unmanned aerial vehicle solved according to the matching relation, so that the position and the attitude of the unmanned aerial vehicle are optimized, and the optimization module comprises:
describing the position and the posture of the unmanned aerial vehicle at each camera image sampling moment as a node, and establishing a communication edge residual error between the two nodes by utilizing the position and the posture of the unmanned aerial vehicle solved by the inertia/vision combined odometer;
constructing a matching positioning residual error by utilizing the position and the posture of the unmanned aerial vehicle calculated according to the matching relation;
constructing a pose graph optimization objective function according to the connected edge residual error and the matched positioning residual error;
and solving the pose graph optimization objective function by adopting a nonlinear optimization algorithm to obtain a node state vector which minimizes the pose graph optimization objective function, wherein the node state vector is the position and the posture of the unmanned aerial vehicle.
8. A computer device comprising a memory and a processor, the memory storing a computer program, wherein the processor when executing the computer program implements the steps of the method of any of claims 1 to 6.
9. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 6.
CN202110222597.4A 2021-03-01 2021-03-01 Unmanned aerial vehicle autonomous positioning method and system based on remote sensing map assistance Active CN112577493B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110222597.4A CN112577493B (en) 2021-03-01 2021-03-01 Unmanned aerial vehicle autonomous positioning method and system based on remote sensing map assistance

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110222597.4A CN112577493B (en) 2021-03-01 2021-03-01 Unmanned aerial vehicle autonomous positioning method and system based on remote sensing map assistance

Publications (2)

Publication Number Publication Date
CN112577493A CN112577493A (en) 2021-03-30
CN112577493B true CN112577493B (en) 2021-05-04

Family

ID=75114091

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110222597.4A Active CN112577493B (en) 2021-03-01 2021-03-01 Unmanned aerial vehicle autonomous positioning method and system based on remote sensing map assistance

Country Status (1)

Country Link
CN (1) CN112577493B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113610134B (en) * 2021-07-29 2024-02-23 Oppo广东移动通信有限公司 Image feature point matching method, device, chip, terminal and storage medium
CN113625774B (en) * 2021-09-10 2023-07-21 天津大学 Local map matching and end-to-end ranging multi-unmanned aerial vehicle co-location system and method
CN113705734B (en) * 2021-09-30 2022-12-09 中国电子科技集团公司第五十四研究所 Remote sensing image characteristic point elevation obtaining method based on multiple sensors and geocentric
CN114509070B (en) * 2022-02-16 2024-03-15 中国电子科技集团公司第五十四研究所 Unmanned aerial vehicle navigation positioning method
CN115388902B (en) * 2022-10-28 2023-03-24 苏州工业园区测绘地理信息有限公司 Indoor positioning method and system, AR indoor positioning navigation method and system

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103954283B (en) * 2014-04-01 2016-08-31 西北工业大学 Inertia integrated navigation method based on scene matching aided navigation/vision mileage
US10502840B2 (en) * 2016-02-03 2019-12-10 Qualcomm Incorporated Outlier detection for satellite positioning system using visual inertial odometry
CN108489482B (en) * 2018-02-13 2019-02-26 视辰信息科技(上海)有限公司 The realization method and system of vision inertia odometer
CN108731670B (en) * 2018-05-18 2021-06-22 南京航空航天大学 Inertial/visual odometer integrated navigation positioning method based on measurement model optimization
CN109974693B (en) * 2019-01-31 2020-12-11 中国科学院深圳先进技术研究院 Unmanned aerial vehicle positioning method and device, computer equipment and storage medium
CN110160522A (en) * 2019-04-16 2019-08-23 浙江大学 A kind of position and orientation estimation method of the vision inertial navigation odometer based on sparse features method
CN110375738B (en) * 2019-06-21 2023-03-14 西安电子科技大学 Monocular synchronous positioning and mapping attitude calculation method fused with inertial measurement unit
CN111024066B (en) * 2019-12-10 2023-08-01 中国航空无线电电子研究所 Unmanned aerial vehicle vision-inertia fusion indoor positioning method
CN111707261A (en) * 2020-04-10 2020-09-25 南京非空航空科技有限公司 High-speed sensing and positioning method for micro unmanned aerial vehicle
CN112268564B (en) * 2020-12-25 2021-03-02 中国人民解放军国防科技大学 Unmanned aerial vehicle landing space position and attitude end-to-end estimation method

Also Published As

Publication number Publication date
CN112577493A (en) 2021-03-30

Similar Documents

Publication Publication Date Title
CN112577493B (en) Unmanned aerial vehicle autonomous positioning method and system based on remote sensing map assistance
CN111811506B (en) Visual/inertial odometer combined navigation method, electronic equipment and storage medium
US10295365B2 (en) State estimation for aerial vehicles using multi-sensor fusion
CN110243358B (en) Multi-source fusion unmanned vehicle indoor and outdoor positioning method and system
CN109993113B (en) Pose estimation method based on RGB-D and IMU information fusion
CN111024066B (en) Unmanned aerial vehicle vision-inertia fusion indoor positioning method
CN108731670B (en) Inertial/visual odometer integrated navigation positioning method based on measurement model optimization
CN107869989B (en) Positioning method and system based on visual inertial navigation information fusion
CN110044354A (en) A kind of binocular vision indoor positioning and build drawing method and device
Panahandeh et al. Vision-aided inertial navigation based on ground plane feature detection
US9071829B2 (en) Method and system for fusing data arising from image sensors and from motion or position sensors
CN111462231B (en) Positioning method based on RGBD sensor and IMU sensor
CN110726406A (en) Improved nonlinear optimization monocular inertial navigation SLAM method
US20180075614A1 (en) Method of Depth Estimation Using a Camera and Inertial Sensor
CN111504312A (en) Unmanned aerial vehicle pose estimation method based on visual inertial polarized light fusion
CN111932674A (en) Optimization method of line laser vision inertial system
CN108613675B (en) Low-cost unmanned aerial vehicle movement measurement method and system
CN111862316A (en) IMU tight coupling dense direct RGBD three-dimensional reconstruction method based on optimization
CN108827287B (en) Robust visual SLAM system in complex environment
CN113763548B (en) Vision-laser radar coupling-based lean texture tunnel modeling method and system
Xian et al. Fusing stereo camera and low-cost inertial measurement unit for autonomous navigation in a tightly-coupled approach
CN116989772B (en) Air-ground multi-mode multi-agent cooperative positioning and mapping method
CN112284381B (en) Visual inertia real-time initialization alignment method and system
CN113465596A (en) Four-rotor unmanned aerial vehicle positioning method based on multi-sensor fusion
CN114459474B (en) Inertial/polarization/radar/optical-fluidic combined navigation method based on factor graph

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant