CN112577493B - Unmanned aerial vehicle autonomous positioning method and system based on remote sensing map assistance - Google Patents
- Publication number
- CN112577493B (application CN202110222597.4A)
- Authority
- CN
- China
- Prior art keywords
- aerial vehicle
- unmanned aerial
- remote sensing
- feature
- matching
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/20—Instruments for performing navigational calculations
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/005—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 with correlation of navigation data from several sources, e.g. map or contour matching
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/10—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration
- G01C21/12—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning
- G01C21/16—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation
- G01C21/165—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation combined with non-inertial navigation instruments
Abstract
The invention discloses an unmanned aerial vehicle autonomous positioning method and system based on remote sensing map assistance. The method adopts an inertial/visual combined odometer to continuously estimate the position and attitude of the unmanned aerial vehicle, and adds a matching positioning algorithm on top of the odometer: the visual features in the camera image are matched against the visual features of the remote sensing map, eliminating the accumulated position and attitude errors of the inertial/visual combined odometer. In the matching positioning algorithm, correct matching points are screened using the three-dimensional feature coordinates estimated by the inertial/visual combined odometer, which improves the precision of the matching positioning algorithm. In addition, a pose-graph optimization objective function fuses the navigation result estimated by the inertial/visual combined odometer with the navigation result calculated by the matching positioning algorithm, improving the overall precision of the navigation system. The method outputs navigation parameters continuously, and its navigation errors do not accumulate over time.
Description
Technical Field
The invention relates to the technical field of autonomous navigation, in particular to an unmanned aerial vehicle autonomous positioning method and system based on remote sensing map assistance.
Background
The positioning system is an important guarantee for an unmanned aerial vehicle to complete its tasks. Currently, the positioning method widely used in drones is combined "inertial + satellite (radio)" navigation. However, satellite and radio navigation signals are extremely susceptible to interference, so a drone navigation system that relies heavily on satellite or radio navigation carries a great potential risk. Achieving autonomous positioning using only the sensors the vehicle itself carries is therefore of great significance and has broad application prospects for unmanned aerial vehicles.
The inertial/visual combined odometer is a widely used autonomous positioning method for unmanned aerial vehicles. It estimates the navigation parameters of the drone by combining the advantages of the inertial sensor (metric scale, high output frequency, good dynamic response) with the high measurement precision of the visual sensor. Besides estimating navigation parameters, the inertial/visual combined odometer can also perform three-dimensional reconstruction of the observed image features. However, the positioning error of the inertial/visual combined odometer accumulates as the drone flies, which severely limits its accuracy.
Disclosure of Invention
The invention provides an unmanned aerial vehicle autonomous positioning method and system based on remote sensing map assistance, aimed at overcoming defects of prior-art inertial/visual combined navigation methods such as the accumulation of position error over time.
In order to achieve the purpose, the invention provides an unmanned aerial vehicle autonomous positioning method based on remote sensing map assistance, which comprises the following steps:
loading a remote sensing map of an unmanned aerial vehicle flight area according to the unmanned aerial vehicle flight mission, and performing off-line preprocessing on the remote sensing map to obtain a feature description vector of the remote sensing map;
the method comprises the steps of constructing an inertia/vision combined odometer by utilizing a camera image output by an airborne camera and the output of an inertia measurement unit, solving the position and the posture of the unmanned aerial vehicle by utilizing the inertia/vision combined odometer, estimating three-dimensional coordinates of visual features in the camera image in a navigation coordinate system, and establishing feature description vectors of the visual features of the camera image according to the three-dimensional coordinates;
carrying out feature matching between the feature description vectors of the visual features of the camera image and the feature description vectors of the remote sensing map by using a matching positioning algorithm, and solving the position and the attitude of the unmanned aerial vehicle according to the matching relation;
and constructing an unmanned aerial vehicle pose graph optimization objective function by utilizing the position and the attitude of the unmanned aerial vehicle solved by the inertia/vision combined odometer and the position and the attitude of the unmanned aerial vehicle solved according to the matching relation, thereby optimizing the position and the attitude of the unmanned aerial vehicle.
In order to achieve the above object, the present invention further provides an unmanned aerial vehicle autonomous positioning system based on remote sensing map assistance, including:
the remote sensing map module is used for loading a remote sensing map of an unmanned aerial vehicle flight area according to the unmanned aerial vehicle flight mission and carrying out off-line preprocessing on the remote sensing map to obtain a feature description vector of the remote sensing map;
the inertial/visual combined odometer module is used for constructing an inertial/visual combined odometer by utilizing a camera image output by an airborne camera and the output of an inertial measurement unit, solving the position and the posture of the unmanned aerial vehicle by utilizing the inertial/visual combined odometer, estimating three-dimensional coordinates of visual features in the camera image in a navigation coordinate system, and establishing feature description vectors of the visual features of the camera image according to the three-dimensional coordinates;
the feature matching module is used for carrying out feature matching between the feature description vectors of the visual features of the camera image and the feature description vectors of the remote sensing map by using a matching positioning algorithm, and calculating the position and the attitude of the unmanned aerial vehicle according to the matching relation;
and the optimization module is used for constructing an unmanned aerial vehicle pose graph optimization objective function by utilizing the position and the attitude of the unmanned aerial vehicle solved by the inertia/vision combined odometer and the position and the attitude of the unmanned aerial vehicle solved according to the matching relation, so that the position and the attitude of the unmanned aerial vehicle are optimized.
To achieve the above object, the present invention further provides a computer device, which includes a memory and a processor, wherein the memory stores a computer program, and the processor implements the steps of the method when executing the computer program.
To achieve the above object, the present invention further proposes a computer-readable storage medium having a computer program stored thereon, which, when being executed by a processor, implements the steps of the above method.
Compared with the prior art, the invention has the beneficial effects that:
the unmanned aerial vehicle autonomous positioning method based on remote sensing map assistance provided by the invention adopts an inertia/vision combined odometer to continuously estimate the position and the posture of the unmanned aerial vehicle; the inertia/vision combined odometer has better precision and robustness, and can estimate the navigation parameters of the unmanned aerial vehicle and the three-dimensional coordinates of the visual features of the camera image; on the basis of the inertia/vision combined odometer, a matching positioning algorithm is added, and accumulated errors of the position and the posture in the inertia/vision combined odometer are eliminated by matching and positioning the visual features in the airborne camera image and the visual features in the remote sensing map; according to the invention, the accurate matching points are screened by using the characteristic three-dimensional coordinate information estimated by the inertia/vision combination odometer in the matching positioning algorithm, so that the precision of the matching positioning algorithm is effectively improved; in addition, the invention adopts a pose graph to optimize the objective function, fuses the navigation result estimated by the inertia/vision combination odometer and the navigation result calculated by the matching positioning algorithm, and improves the overall precision of the navigation system. Compared with the existing unmanned aerial vehicle autonomous navigation method, the method has the advantages of continuous output of navigation parameters, no accumulation of navigation errors along with time and the like.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, it is obvious that the drawings in the following description are only some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to the structures shown in the drawings without creative efforts.
Fig. 1 is a flowchart of an unmanned aerial vehicle autonomous positioning method based on remote sensing map assistance provided by the invention;
FIG. 2 is a schematic diagram of feature matching between feature description vectors of visual features of a camera image and feature description vectors of a remote sensing map according to an embodiment of the present invention.
The implementation, functional features and advantages of the objects of the present invention will be further explained with reference to the accompanying drawings.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
In addition, the technical solutions in the embodiments of the present invention may be combined with each other, but only where such a combination can actually be realized by those skilled in the art; when technical solutions are contradictory or cannot be realized together, their combination should be considered not to exist and falls outside the protection scope of the present invention.
The invention provides an unmanned aerial vehicle autonomous positioning method based on remote sensing map assistance, which comprises the following steps of:
101: loading a remote sensing map of an unmanned aerial vehicle flight area according to the unmanned aerial vehicle flight mission, and performing off-line preprocessing on the remote sensing map to obtain a feature description vector of the remote sensing map;
the remote sensing map is a visual map. Each pixel point in the remote sensing map has an accurate geographical position label, so that reliable position reference can be provided for autonomous positioning of the unmanned aerial vehicle.
102: the method comprises the steps of constructing an inertia/vision combined odometer by utilizing a camera image output by an airborne camera and the output of an inertia measurement unit, solving the position and the posture of the unmanned aerial vehicle by utilizing the inertia/vision combined odometer, estimating three-dimensional coordinates of visual features in the camera image in a navigation coordinate system, and establishing feature description vectors of the visual features of the camera image according to the three-dimensional coordinates;
103: performing feature matching between the feature description vectors of the visual features of the camera image and the feature description vectors of the remote sensing map by using a matching positioning algorithm, as shown in FIG. 2, and calculating the position and the attitude of the unmanned aerial vehicle according to the matching relation;
104: and constructing an unmanned aerial vehicle pose graph optimization objective function by utilizing the position and the attitude of the unmanned aerial vehicle solved by the inertia/vision combined odometer and the position and the attitude of the unmanned aerial vehicle solved according to the matching relation, thereby optimizing the position and the attitude of the unmanned aerial vehicle.
The unmanned aerial vehicle autonomous positioning method based on remote sensing map assistance provided by the invention adopts an inertial/visual combined odometer to continuously estimate the position, velocity, and attitude of the unmanned aerial vehicle; the odometer has good precision and robustness, and estimates both the navigation parameters of the drone and the three-dimensional coordinates of the visual features of the camera image. On this basis, a matching positioning algorithm is added: the visual features in the onboard camera image are matched against the visual features in the remote sensing map, eliminating the accumulated position and attitude errors of the inertial/visual combined odometer. In the matching positioning algorithm, correct matching points are screened using the three-dimensional feature coordinates estimated by the inertial/visual combined odometer, which effectively improves the precision of the matching positioning algorithm. In addition, the invention adopts a pose-graph optimization objective function that fuses the navigation result estimated by the inertial/visual combined odometer with the navigation result calculated by the matching positioning algorithm, improving the overall precision of the navigation system. Compared with existing autonomous navigation methods for unmanned aerial vehicles, the method outputs navigation parameters continuously and its navigation errors do not accumulate over time.
In one embodiment, for step 101, according to the flight mission of the unmanned aerial vehicle, loading a remote sensing map of a flight area of the unmanned aerial vehicle and performing offline preprocessing on the remote sensing map, including:
001: loading a remote sensing map along the unmanned aerial vehicle task track according to the unmanned aerial vehicle flight task, wherein the remote sensing map is a whole range area of the unmanned aerial vehicle flight task or a local area of the unmanned aerial vehicle flight task;
002: detecting the visual features in the remote sensing map by adopting a feature detection algorithm to obtain the visual features of the remote sensing map, and recording the positions of the visual features in the remote sensing map;
the feature detection algorithm is a feature extraction algorithm based on SITF features, a feature extraction algorithm based on FAST corners or a feature extraction algorithm based on HARRIS corners, and the like.
Record the positions of the visual features in the remote sensing map as $\{(u_i, v_i)\}_{i=1}^{M}$, where $(u_i, v_i)$ are the pixel coordinates on the remote sensing map of the $i$-th visual feature and $M$ is the total number of visual features.
003: and establishing a feature description vector of the visual features of the remote sensing map according to the positions of the visual features in the remote sensing map.
The feature description vectors of the visual features of the remote sensing map are recorded as $D^{map} = \{d^{map}_1, \dots, d^{map}_M\}$.
The feature description vectors may be SIFT, SURF, or ORB descriptors.
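The offline preprocessing steps above (detect map features, record their positions, attach description vectors) can be sketched as follows. This is a minimal illustration with a toy normalized-patch descriptor standing in for the SIFT/SURF/ORB descriptors named in the text; the function name, keypoints, and random map image are assumptions, not part of the patent.

```python
import numpy as np

def describe_features(image, keypoints, patch=4):
    """Toy descriptor: the normalized pixel patch around each keypoint.

    Illustrative stand-in for SIFT/SURF/ORB; shows the pattern of storing
    one description vector per detected map feature alongside its position.
    """
    descs = []
    for (u, v) in keypoints:
        p = image[v - patch:v + patch, u - patch:u + patch].astype(float).ravel()
        n = np.linalg.norm(p)
        descs.append(p / n if n > 0 else p)
    return np.array(descs)

# Offline map preprocessing: detect features (here, fixed toy positions),
# then attach a descriptor and the pixel position of each feature.
rng = np.random.default_rng(0)
map_img = rng.integers(0, 255, size=(64, 64))
keypoints = [(16, 16), (40, 24), (30, 48)]
map_descs = describe_features(map_img, keypoints)
```

In a real pipeline the `(u, v)` positions and descriptors would be written to disk together, since each map pixel carries a geographic position label.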
In a next embodiment, for step 102, constructing an inertial/visual combined odometer using the output of the onboard camera and the output of the inertial measurement unit, solving the position, speed and attitude of the drone using the inertial/visual combined odometer, estimating three-dimensional coordinates of the visual features in the camera image in the navigation coordinate system, and establishing feature description vectors of the visual features of the camera image according to the three-dimensional coordinates, including:
201: defining an origin and a direction of a navigation coordinate system;
the initial position of the drone is generally defined as the origin of a navigation coordinate system, with XYZ coordinate axes pointing to the east, north and sky, respectively.
202: tracking visual features in camera pictures output by an onboard camera to obtain the positions of the same visual feature in different camera pictures;
203: calculating the inertia pre-integral quantity in the acquisition time interval of the two frames of camera images by using the output of the inertia measurement unit;
204: and constructing an inertia/vision combined odometer according to the position of the same visual feature in different camera images and the inertia pre-integration value, solving the position, the speed and the posture of the unmanned aerial vehicle by using the inertia/vision combined odometer, estimating the three-dimensional coordinates of the visual feature in the camera images in a navigation coordinate system, and establishing a feature description vector of the visual feature of the camera images according to the three-dimensional coordinates.
In one embodiment, for step 202, tracking the visual features in the camera images output by the onboard cameras to obtain the positions of the same visual feature in different camera images includes:
2021: detecting visual features in a camera image output by the airborne camera by using a feature detection algorithm;
the feature detection algorithm is a feature extraction algorithm based on SITF features, a feature extraction algorithm based on FAST corners or a feature extraction algorithm based on HARRIS corners, and the like.
2022: and performing feature tracking on the visual features in the camera images in a feature tracking mode to obtain the positions of the same visual feature appearing in different camera images.
The feature tracking mode adopts the existing mode, such as KLT sparse optical flow tracking method, dense optical flow tracking method, feature matching method and the like.
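As a rough illustration of the optical-flow idea behind KLT tracking, the following sketch estimates a pure translation between two frames with a single Lucas-Kanade step (single scale, whole-image window; a real tracker iterates per feature over an image pyramid). All names and the synthetic images are illustrative assumptions.

```python
import numpy as np

def lk_translation(I, J):
    """One Lucas-Kanade step estimating the translation from image I to J.

    Solves the 2x2 normal equations built from image gradients and the
    temporal difference; valid only for small displacements.
    """
    Iy, Ix = np.gradient(I)            # spatial gradients (rows = y, cols = x)
    It = J - I                         # temporal difference
    A = np.array([[np.sum(Ix * Ix), np.sum(Ix * Iy)],
                  [np.sum(Ix * Iy), np.sum(Iy * Iy)]])
    b = -np.array([np.sum(Ix * It), np.sum(Iy * It)])
    return np.linalg.solve(A, b)       # (dx, dy)

# Smooth blob shifted right by one pixel; LK recovers roughly (1, 0).
x = np.arange(41)
X, Y = np.meshgrid(x, x)
I = np.exp(-((X - 20.0) ** 2 + (Y - 20.0) ** 2) / 18.0)
J = np.roll(I, 1, axis=1)
dx, dy = lk_translation(I, J)
```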
The position at which the visual feature numbered $i$ appears in the $k$-th camera image is recorded as $(u_{i,k}, v_{i,k})$.
In another embodiment, for step 203, calculating an inertial pre-integration quantity over the two-frame camera image acquisition time interval using the output of the inertial measurement unit comprises:
2031: acquiring the specific force and angular rate of the unmanned aerial vehicle by using an inertia measurement unit;
the measurement model of the inertia measurement unit is as follows:
in the formula (I), the compound is shown in the specification,for the unmanned aerial vehicle measured by an accelerometer in an inertial measurement unittSpecific force at the moment;for gyroscopic measurements in inertial measurement unitstAngular rate of the time of day;for unmanned aerial vehicle attActual specific force at the moment;for unmanned aerial vehicle attActual angular rate of the moment;andis composed ofEstimating zero offset values of the accelerometer and the gyroscope at the moment;as a world coordinate system totThe transformation matrix of the time body coordinate system,is a world coordinate system;is the gravity under the world coordinate system; measurement noise of a gyroscopeAnd measurement noise of accelerometerSubject to a gaussian distribution,,andthe variance of the noise is measured for the accelerometer and gyroscope, respectively.
2032: calculating, from the specific force and angular rate of the unmanned aerial vehicle, the inertial pre-integration between the sampling times $t_k$ and $t_{k+1}$ of two camera image frames.
The pre-integration expression is:

$$\alpha^{b_k}_{b_{k+1}} = \iint_{t\in[t_k,t_{k+1}]} R^{b_k}_t \left(\hat{f}_t - b_{a_t}\right) dt^2,\qquad \beta^{b_k}_{b_{k+1}} = \int_{t\in[t_k,t_{k+1}]} R^{b_k}_t \left(\hat{f}_t - b_{a_t}\right) dt,\qquad \gamma^{b_k}_{b_{k+1}} = \int_{t\in[t_k,t_{k+1}]} \frac{1}{2}\, \gamma^{b_k}_t \otimes \begin{bmatrix} 0 \\ \hat{\omega}_t - b_{g_t} \end{bmatrix} dt \tag{2}$$

where $R^{b_k}_t$ is the attitude matrix from the body coordinate system at time $t$ to the body coordinate system at time $t_k$, obtained from the attitude matrices from the body coordinate system to the world coordinate system at each time $t$; $\otimes$ denotes quaternion right multiplication; $\gamma^{b_k}_{b_{k+1}}$ is the attitude pre-integration; $\beta^{b_k}_{b_{k+1}}$ is the velocity pre-integration; $\alpha^{b_k}_{b_{k+1}}$ is the position pre-integration.
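A discrete sketch of the pre-integration above, assuming simple Euler integration (real systems typically use midpoint or RK4) and w-first quaternions; all helper names are illustrative.

```python
import numpy as np

def quat_mul(q, r):
    """Hamilton product of two w-first quaternions."""
    w1, x1, y1, z1 = q
    w2, x2, y2, z2 = r
    return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                     w1*x2 + x1*w2 + y1*z2 - z1*y2,
                     w1*y2 - x1*z2 + y1*w2 + z1*x2,
                     w1*z2 + x1*y2 - y1*x2 + z1*w2])

def quat_to_rot(q):
    """Rotation matrix of a unit w-first quaternion."""
    w, x, y, z = q
    return np.array([[1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
                     [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
                     [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)]])

def preintegrate(f_hat, w_hat, ba, bg, dt):
    """Accumulate alpha (position), beta (velocity), gamma (attitude)
    pre-integrations in the body frame of the first camera image."""
    alpha, beta = np.zeros(3), np.zeros(3)
    gamma = np.array([1.0, 0.0, 0.0, 0.0])       # identity quaternion
    for f, w in zip(f_hat, w_hat):
        R = quat_to_rot(gamma)                   # body(t) -> body(t_k)
        a = R @ (f - ba)                         # bias-corrected specific force
        alpha = alpha + beta * dt + 0.5 * a * dt**2
        beta = beta + a * dt
        half = 0.5 * (w - bg) * dt               # first-order attitude update
        gamma = quat_mul(gamma, np.array([1.0, *half]))
        gamma = gamma / np.linalg.norm(gamma)
    return alpha, beta, gamma

# Constant 1 m/s^2 body-z specific force, no rotation, over 1 s:
N, dt = 100, 0.01
f = np.tile([0.0, 0.0, 1.0], (N, 1))
w = np.zeros((N, 3))
alpha, beta, gamma = preintegrate(f, w, np.zeros(3), np.zeros(3), dt)
```

With constant acceleration and no rotation the result matches the closed form: velocity pre-integration $aT$ and position pre-integration $\tfrac{1}{2}aT^2$.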
The error differential equation of the inertial pre-integration is:

$$\delta \dot{z}_t = F_t\,\delta z_t + G_t\, n_t, \qquad \delta z_t = \begin{bmatrix}\delta\alpha_t \\ \delta\beta_t \\ \delta\theta_t \\ \delta b_{a_t} \\ \delta b_{g_t}\end{bmatrix}, \qquad n_t = \begin{bmatrix} n_a \\ n_g \\ n_{b_a} \\ n_{b_g} \end{bmatrix} \tag{3}$$

where $\delta\alpha_t$ is the estimation error of the pre-integrated position component; $\delta\beta_t$ is the estimation error of the pre-integrated velocity component; $\delta\theta_t$ is the estimation error of the pre-integrated attitude component; $\delta b_{a_t}$ is the accelerometer zero-bias estimation error; $\delta b_{g_t}$ is the gyroscope zero-bias estimation error; $n_a$ is the accelerometer measurement noise; $n_g$ is the gyroscope measurement noise; $n_{b_a}$ is the accelerometer zero-bias estimation noise; $n_{b_g}$ is the gyroscope zero-bias estimation noise; $F_t$ is the state-error transfer coefficient matrix; $\delta z_t$ is the state error matrix; $G_t$ is the noise transfer coefficient matrix; $n_t$ is the noise matrix. All quantities represented by the symbols in formula (3) are in continuous time.
The covariance matrix $P^{b_k}_t$ of the inertial pre-integration components can be solved iteratively by a first-order equation in discrete time. First assign the covariance matrix $P^{b_k}_{b_k}$ an initial value of 0, then iterate:

$$P^{b_k}_{t+\delta t} = \left(I + F_t\,\delta t\right) P^{b_k}_t \left(I + F_t\,\delta t\right)^{\mathsf T} + \left(G_t\,\delta t\right) Q \left(G_t\,\delta t\right)^{\mathsf T} \tag{4}$$

where $\delta t$ is the sampling interval corresponding to the inertial pre-integration; $Q$ is the diagonal matrix constructed from the noise variances; $I$ is the identity matrix; $F_t$ is the error transfer coefficient matrix; $G_t$ is the noise transfer coefficient matrix.
The Jacobian matrix of the inertial pre-integration with respect to the error quantities can be solved iteratively by:

$$J_{t+\delta t} = \left(I + F_t\,\delta t\right) J_t \tag{5}$$

where $J_{b_k}$ is given the identity matrix as its initial value. With the covariance matrix $P^{b_k}_{b_{k+1}}$ calculated by equation (4) and the Jacobian matrix $J_{b_{k+1}}$ calculated by equation (5), the first-order approximation of the pre-integration with respect to the zero biases of the inertial measurement unit can be written as:

$$\alpha^{b_k}_{b_{k+1}} \approx \hat{\alpha}^{b_k}_{b_{k+1}} + J^{\alpha}_{b_a}\,\delta b_a + J^{\alpha}_{b_g}\,\delta b_g,\qquad \beta^{b_k}_{b_{k+1}} \approx \hat{\beta}^{b_k}_{b_{k+1}} + J^{\beta}_{b_a}\,\delta b_a + J^{\beta}_{b_g}\,\delta b_g,\qquad \gamma^{b_k}_{b_{k+1}} \approx \hat{\gamma}^{b_k}_{b_{k+1}} \otimes \begin{bmatrix} 1 \\ \tfrac{1}{2} J^{\gamma}_{b_g}\,\delta b_g \end{bmatrix} \tag{6}$$
where $J^{\alpha}_{b_a}$, $J^{\alpha}_{b_g}$ are the Jacobian matrices of the pre-integrated position component with respect to the accelerometer zero bias and the gyroscope zero bias respectively; $J^{\beta}_{b_a}$, $J^{\beta}_{b_g}$ are the Jacobian matrices of the pre-integrated velocity component with respect to the accelerometer zero bias and the gyroscope zero bias; $J^{\gamma}_{b_g}$ is the Jacobian matrix of the pre-integrated attitude component with respect to the gyroscope zero bias; $\delta b_a$ and $\delta b_g$ are the estimation errors of the accelerometer and gyroscope zero biases, given by the change in the bias estimates:

$$\delta b_a = b_{a_{k+1}} - b_{a_k}, \qquad \delta b_g = b_{g_{k+1}} - b_{g_k} \tag{7}$$

where $R(\gamma)$ denotes converting an attitude quaternion into an attitude rotation matrix; $\otimes$ denotes quaternion multiplication; $b_{a_k}$ and $b_{g_k}$ are the zero-bias estimates of the accelerometer and gyroscope at time $t_k$.
In the next embodiment, for step 204, constructing an inertial/visual combined odometer according to the positions of the same visual feature in different camera images and the inertial pre-integration values, solving the position and attitude of the unmanned aerial vehicle by using the inertial/visual combined odometer, estimating the three-dimensional coordinates of the visual features in the camera images in the navigation coordinate system, and establishing feature description vectors of the visual features of the camera images according to the three-dimensional coordinates, includes:
Suppose the navigation states at $n+1$ moments need to be estimated. The inertial/visual combined navigation state vector $\mathcal{X}$ over the sliding window is then expressed as:

$$\mathcal{X} = \left[x_0,\, x_1,\, \dots,\, x_n,\, \lambda_1,\, \lambda_2,\, \dots,\, \lambda_m\right], \qquad x_k = \left[p^w_{b_k},\, v^w_{b_k},\, q^w_{b_k},\, b_a,\, b_g\right] \tag{8}$$

where $x_k$ is the navigation state vector to be estimated at the $k$-th moment, comprising the position vector $p^w_{b_k}$ of the unmanned aerial vehicle, the velocity vector $v^w_{b_k}$, the attitude quaternion $q^w_{b_k}$, and the accelerometer zero-bias estimate $b_a$ and gyroscope zero-bias estimate $b_g$ in the sliding window. Besides the navigation state vectors $x_k$, the state vector $\mathcal{X}$ also contains the inverse-depth information of the visual features in the camera images over this period, where $\lambda_l$ is the inverse depth of the visual feature in the $l$-th camera observation, $l = 1, \dots, m$, and $m$ is the total number of feature points. The three-dimensional coordinates $P^w_l$ of a feature in the navigation coordinate system can be obtained from the inverse depth, with the calculation method as follows:
$$P^w_l = R^w_{b_k}\left(R^b_c\,\frac{1}{\lambda_l}\,\pi^{-1}\!\left(u_l, v_l\right) + p^b_c\right) + p^w_{b_k} \tag{9}$$

where $(u_l, v_l)$ is the position of the $l$-th visual feature in the first camera image in which it was observed; $R^w_{b_k}$ is the attitude matrix from the body coordinate system of the $k$-th frame to the world coordinate system; $R^b_c$ is the attitude matrix between the onboard camera and the body coordinate system of the inertial measurement unit; $p^b_c$ is the translation between the onboard camera and the body coordinate system of the inertial measurement unit; $\pi^{-1}$ denotes the inverse mapping from image coordinates to three-dimensional coordinates at unit depth.
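The back-projection from inverse depth to navigation-frame coordinates described above can be written as a small function. The names and the normalized-image-coordinate convention are assumptions of this sketch.

```python
import numpy as np

def feature_world_point(uv_norm, inv_depth, R_wb, p_wb, R_bc, p_bc):
    """Recover a feature's 3-D navigation-frame coordinates from inverse depth.

    Lifts the normalized image point to depth 1/lambda in the camera frame,
    then transforms through the camera-to-IMU and IMU-to-world extrinsics.
    """
    p_cam = np.array([uv_norm[0], uv_norm[1], 1.0]) / inv_depth  # camera frame
    p_body = R_bc @ p_cam + p_bc                                 # IMU/body frame
    return R_wb @ p_body + p_wb                                  # world frame

# Identity extrinsics and pose: a point at normalized (0.1, -0.2) with
# inverse depth 0.5 sits at depth 2 straight ahead of the camera.
P = feature_world_point((0.1, -0.2), 0.5, np.eye(3), np.zeros(3),
                        np.eye(3), np.zeros(3))
```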
The optimization objective function of the combined inertial/visual navigation can be expressed as:

$$\min_{\mathcal{X}} \left\{ \sum_{k} \left\| r_B\!\left(\hat{z}^{b_k}_{b_{k+1}}, \mathcal{X}\right) \right\|^2 + \sum_{(l,j)} \left\| r_C\!\left(\hat{z}^{c_j}_l, \mathcal{X}\right) \right\|^2 \right\} \tag{10}$$

where $r_B$ is the residual of the inertial pre-integration, with the specific expression:

$$r_B = \begin{bmatrix} \delta\alpha^{b_k}_{b_{k+1}} \\ \delta\beta^{b_k}_{b_{k+1}} \\ \delta\theta^{b_k}_{b_{k+1}} \\ \delta b_a \\ \delta b_g \end{bmatrix} = \begin{bmatrix} R^{b_k}_w\left(p^w_{b_{k+1}} - p^w_{b_k} - v^w_{b_k}\,\Delta t + \tfrac{1}{2}\, g^w\,\Delta t^2\right) - \hat{\alpha}^{b_k}_{b_{k+1}} \\ R^{b_k}_w\left(v^w_{b_{k+1}} - v^w_{b_k} + g^w\,\Delta t\right) - \hat{\beta}^{b_k}_{b_{k+1}} \\ 2\left[\left(q^w_{b_k}\right)^{-1} \otimes q^w_{b_{k+1}} \otimes \left(\hat{\gamma}^{b_k}_{b_{k+1}}\right)^{-1}\right]_{xyz} \\ b_{a_{k+1}} - b_{a_k} \\ b_{g_{k+1}} - b_{g_k} \end{bmatrix} \tag{11}$$

where $\delta\alpha^{b_k}_{b_{k+1}}$ is the estimation error of the pre-integrated position component; $\delta\beta^{b_k}_{b_{k+1}}$ is the estimation error of the pre-integrated velocity component; $\delta\theta^{b_k}_{b_{k+1}}$ is the estimation error of the pre-integrated attitude component; $\delta b_a$ is the accelerometer zero-bias estimation error; $\delta b_g$ is the gyroscope zero-bias estimation error; $R^{b_k}_w$ is the transformation matrix from the world coordinate system to the body coordinate system at time $t_k$; $p^w_{b_k}$ is the position vector of the unmanned aerial vehicle at time $t_k$; $g^w$ is gravity in the world coordinate system; $\Delta t$ is the length of the time window; $v^w_{b_k}$ is the velocity vector of the unmanned aerial vehicle at time $t_k$; $\hat{\alpha}^{b_k}_{b_{k+1}}$, $\hat{\beta}^{b_k}_{b_{k+1}}$, $\hat{\gamma}^{b_k}_{b_{k+1}}$ are the measured values of the three pre-integration components; $\left(q^w_{b_k}\right)^{-1}$ is the inverse of the body attitude at time $t_k$; $b_{a_k}$, $b_{g_k}$ are the accelerometer and gyroscope zero biases at time $t_k$. All quantities represented by the symbols in equation (11) are in discrete time.
where $p^w_{b_k}$ and $p^w_{b_{k+1}}$ are the position vectors at times $t_k$ and $t_{k+1}$ in the world coordinate system.
Solving equation (10) with a nonlinear optimization algorithm yields the position vector, velocity vector, attitude quaternion, and the inverse depths of the visual features estimated by the inertial/visual combined odometer. The three-dimensional coordinates of the visual features in the navigation coordinate system are then recovered from the inverse depths and the solved position and attitude information; the specific calculation method is shown in equation (9).
In another embodiment, for step 204, establishing feature description vectors for the visual features of the camera image from the three-dimensional coordinates comprises:
The visual features in the camera image of the inertial/visual combined odometer are selected for matching and positioning based on the reprojection errors of their estimated three-dimensional coordinates. The reprojection error can be calculated according to equation (12); only the features whose reprojection error is smaller than a set threshold are screened for matching and positioning, and the screened feature set is recorded as $F$.
Each feature in the set $F$ is described to obtain the feature description vectors $D^{cam}$ of the visual features of the camera image.
The feature description vectors may be SIFT, SURF, or ORB descriptors; the feature description vectors of the visual features of the camera image and those of the remote sensing map must be of the same descriptor type.
In the next embodiment, for step 103, performing feature matching on the feature description vector of the visual feature of the camera image and the feature description vector of the remote sensing map by using a matching positioning algorithm, and solving the position and the posture of the unmanned aerial vehicle according to the matching relationship, includes:
301: feature description vectors for camera map visual featuresEach element in the remote sensing map, and a feature description vector of the remote sensing map by using a matching positioning algorithmSearching for the most similar features and establishing feature matching pairs;Showing the first selected from the visual features of the camera viewThe first of the individual features and the remote sensing map visual featuresA matching pair is formed between the elements.
302: and screening the feature matching pairs by using the three-dimensional coordinates of the visual features in the camera image in the navigation coordinate system to obtain the position and the posture of the unmanned aerial vehicle.
In a certain embodiment, for step 302, screening the feature matching pairs by using three-dimensional coordinates of the visual features in the camera image in the navigation coordinate system to obtain the position and the posture of the drone includes:
3021: each feature matching pairAnd calculating a position translation amount, wherein the calculation method comprises the following steps:
wherein the content of the first and second substances,、three-dimensional coordinate points in a navigation coordinate system for visual features in a camera viewIs/are as follows、An axis coordinate value;coordinates of a point at the upper left corner of the remote sensing map under a navigation coordinate system;、coordinates of the visual features of the remote sensing map in the remote sensing map are obtained;the map resolution is in pixels/meter.
3022: for each matching pairInitializing an empty set. Searching for and matching pairs in a matching set (all matching pairs constitute a matching set)The method comprises the following steps that a matching pair which is close to translation amount is provided, the search method judges the consistency of the translation amount of the two matching pairs, and the consistency judgment parameter is calculated as follows:
when the consistency parameter is less than the set threshold value, corresponding translation amount is calculatedJoin to a collectionPerforming the following steps;
3023: screening the corresponding set in all the matching pairsSet of most elements, denoted as. When in useWhen the number of the elements in the positioning table exceeds a set threshold value, the matching positioning is successful. If the matching is successful, matching positioning calculation is carried out; if the matching is not successful, the process returns to step 301 to match the next camera image.
3024: if the matching is successful, the set is calculatedMean of all elements, note. Matching position of unmanned aerial vehicleCan be expressed by the following formula:
wherein the content of the first and second substances,a drone position estimated for a combined inertial/visual odometer.
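Steps 3021-3024 can be sketched as below. The sign convention (translation = camera-feature navigation coordinates minus map-feature navigation coordinates, with map coordinates recovered as upper-left origin + pixel/resolution) and the threshold values are assumptions made for illustration, as are all names:

```python
import numpy as np

def match_position(cam_xy, map_px, map_origin, res, p_odo,
                   consist_thresh=1.0, min_votes=3):
    """Translation-consistency screening of feature matching pairs."""
    # 3021: one candidate translation per matching pair
    trans = [xy - (map_origin + px / res) for xy, px in zip(cam_xy, map_px)]
    # 3022: for each pair, gather the translations consistent with it
    sets = []
    for t in trans:
        votes = [u for u in trans if np.linalg.norm(u - t) < consist_thresh]
        sets.append(votes)
    # 3023: keep the largest consistent set; fail if it is too small
    best = max(sets, key=len)
    if len(best) < min_votes:
        return None                      # matching failed: try the next image
    # 3024: matched position = odometer position corrected by the mean translation
    return p_odo + np.mean(best, axis=0)
```

Outlier matches vote into small sets and are discarded, so a single wrong descriptor match does not corrupt the position fix.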
In a further embodiment, for step 104, constructing an unmanned aerial vehicle pose graph optimization objective function from the position and attitude solved by the inertial/visual combined odometer and the position and attitude solved from the matching relationship, so as to optimize the position and attitude of the unmanned aerial vehicle, includes:
401: The position and attitude of the unmanned aerial vehicle at each camera image sampling moment are described as a node. Connecting-edge residuals between two nodes are established using the position and attitude solved by the inertial/visual combined odometer; the relative pose between two nodes can be expressed as follows:
where the two quantities are nodes, and R(·) denotes converting the attitude quaternion into an attitude rotation matrix; the connecting-edge residual term, i.e., the residual with respect to the inertial/visual odometer measurement, is expressed as:
where the hat denotes the inertial/visual combined odometer estimate, i.e., the estimated position vector and estimated attitude, with the same specific expression as (17).
402: utilizing the matching relation to solve the position and the posture of the unmanned aerial vehicle, and constructing a matching positioning residual error(ii) a The specific expression is shown as:
wherein the content of the first and second substances,representing the position of the drone estimated by the combined inertial/visual odometer,indicating the matching location of the drone.
If it is firstIf each node has no corresponding matching positioning result, no corresponding residual error item exists.
403: according to the residual error of the connected edgeAnd matching positioning residualsAnd constructing a pose graph optimization objective functionThe specific expression is as follows:
wherein the content of the first and second substances,represents a collection of all nodes;indicating only the nodes successfully matched and positioned;the error covariance matrix for representing the measurement of the inertial/visual odometer can be set according to experience and algorithm precision;the covariance matrix of the positioning error can be set based on experience and algorithm accuracy.
404: and solving the pose graph optimization objective function by adopting a nonlinear optimization algorithm to obtain a node state vector which minimizes the pose graph optimization objective function, wherein the node state vector is the position and the posture of the unmanned aerial vehicle.
The invention also provides an unmanned aerial vehicle autonomous positioning system based on remote sensing map assistance, which comprises:
the remote sensing map module is used for loading a remote sensing map of an unmanned aerial vehicle flight area according to the unmanned aerial vehicle flight mission and carrying out off-line preprocessing on the remote sensing map to obtain a feature description vector of the remote sensing map;
the inertial/visual combined odometer module is used for constructing an inertial/visual combined odometer by utilizing a camera image output by an airborne camera and the output of an inertial measurement unit, solving the position and the posture of the unmanned aerial vehicle by utilizing the inertial/visual combined odometer, estimating three-dimensional coordinates of visual features in the camera image in a navigation coordinate system, and establishing feature description vectors of the visual features of the camera image according to the three-dimensional coordinates;
the feature matching module is used for performing feature matching on the feature description vector of the visual feature of the camera map and the feature description vector of the remote sensing map and calculating the position and the posture of the unmanned aerial vehicle according to the matching relation;
and the optimization module is used for solving the position and the attitude of the unmanned aerial vehicle by using the inertia/vision combined odometer, solving the position and the attitude of the unmanned aerial vehicle according to the matching relation, and constructing an unmanned aerial vehicle pose graph optimization objective function so as to optimize the position and the attitude of the unmanned aerial vehicle.
In one embodiment, the remote sensing map module further comprises:
001: loading a remote sensing map along the unmanned aerial vehicle task track according to the unmanned aerial vehicle flight task, wherein the remote sensing map is a whole range area of the unmanned aerial vehicle flight task or a local area of the unmanned aerial vehicle flight task;
002: detecting the visual features in the remote sensing map by adopting a feature detection algorithm to obtain the visual features of the remote sensing map, and recording the positions of the visual features in the remote sensing map;
the feature detection algorithm is a feature extraction algorithm based on SITF features, a feature extraction algorithm based on FAST corners or a feature extraction algorithm based on HARRIS corners, and the like.
Recording the position of the visual feature in the remote sensing mapWhereinRepresented on a remote sensing mapPixel coordinates of,Is shown asThe visual characteristics of the human body are shown,indicating the total number of visual features.
003: and establishing a feature description vector of the visual features of the remote sensing map according to the positions of the visual features in the remote sensing map.
The feature description vectors of the remote sensing map visual features are recorded accordingly.
The feature description vectors are established using SIFT, SURF, or ORB descriptors.
In a further embodiment, the combined inertial/visual odometer module further comprises:
201: defining an origin and a direction of a navigation coordinate system;
the initial position of the unmanned aerial vehicle is generally defined as the origin of the navigation coordinate system, with the X, Y, and Z coordinate axes pointing east, north, and up, respectively.
202: tracking visual features in camera pictures output by an onboard camera to obtain the positions of the same visual feature in different camera pictures;
203: calculating the inertia pre-integral quantity in the acquisition time interval of the two frames of camera images by using the output of the inertia measurement unit;
204: and constructing an inertia/vision combined odometer according to the position of the same visual feature in different camera images and the inertia pre-integration value, solving the position, the speed and the posture of the unmanned aerial vehicle by using the inertia/vision combined odometer, estimating the three-dimensional coordinates of the visual feature in the camera images in a navigation coordinate system, and establishing a feature description vector of the visual feature of the camera images according to the three-dimensional coordinates.
In a certain embodiment, the combined inertial/visual odometer module further comprises:
2021: detecting visual features in a camera image output by the airborne camera by using a feature detection algorithm;
the feature detection algorithm is a feature extraction algorithm based on SIFT features, FAST corners, or HARRIS corners, among others.
2022: and performing feature tracking on the visual features in the camera images in a feature tracking mode to obtain the positions of the same visual feature appearing in different camera images.
Feature tracking uses an existing method, such as the KLT sparse optical flow method, a dense optical flow method, or a feature matching method.
The positions at which each numbered visual feature appears in the camera images are recorded.
In another embodiment, the combined inertial/visual odometer module further comprises:
2031: acquiring the specific force and angular rate of the unmanned aerial vehicle by using an inertia measurement unit;
the measurement model of the inertial measurement unit is:

$$\hat{a}_t = a_t + b_{a_t} + \mathbf{R}_w^t\, g^w + n_a,\qquad \hat{\omega}_t = \omega_t + b_{\omega_t} + n_\omega$$

where $\hat{a}_t$ is the specific force of the unmanned aerial vehicle at time $t$ measured by the accelerometer in the inertial measurement unit; $\hat{\omega}_t$ is the angular rate at time $t$ measured by the gyroscope; $a_t$ and $\omega_t$ are the actual specific force and actual angular rate of the unmanned aerial vehicle at time $t$; $b_{a_t}$ and $b_{\omega_t}$ are the zero-bias values of the accelerometer and gyroscope estimated at time $t$; $\mathbf{R}_w^t$ is the transformation matrix from the world coordinate system $w$ to the body coordinate system at time $t$; $g^w$ is gravity in the world coordinate system; the gyroscope measurement noise $n_\omega$ and accelerometer measurement noise $n_a$ obey Gaussian distributions whose variances are the gyroscope and accelerometer measurement noise variances, respectively.
2032: calculating sampling time of two frames of camera images according to specific force and angular rate of the unmanned aerial vehicleAndthe amount of inertial pre-integration in between.
The pre-integration expressions are:

$$\alpha_{b_{k+1}}^{b_k} = \iint_{t\in[t_k,t_{k+1}]} \mathbf{R}_t^{b_k}\left(\hat{a}_t - b_{a_t}\right)dt^2,\quad \beta_{b_{k+1}}^{b_k} = \int_{t\in[t_k,t_{k+1}]} \mathbf{R}_t^{b_k}\left(\hat{a}_t - b_{a_t}\right)dt,\quad \gamma_{b_{k+1}}^{b_k} = \int_{t\in[t_k,t_{k+1}]} \tfrac{1}{2}\,\gamma_t^{b_k} \otimes \begin{bmatrix}0\\ \hat{\omega}_t - b_{\omega_t}\end{bmatrix} dt$$

where $\mathbf{R}_t^{b_k}$ is the attitude matrix from the body coordinate system at time $t$ to the body coordinate system at time $t_k$; $\mathbf{R}_t^w$ is the attitude matrix from the body coordinate system to the world coordinate system at time $t$; $\otimes$ denotes right quaternion multiplication; $\gamma$ is the attitude pre-integration; $\beta$ is the velocity pre-integration; and $\alpha$ is the position pre-integration.
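A simple discrete (Euler-style) accumulation of the pre-integration quantities can be sketched as follows. The attitude is kept as a rotation matrix via the Rodrigues formula rather than a quaternion, purely for brevity; the IMU samples are assumed already bias-corrected, and all names are illustrative:

```python
import numpy as np

def preintegrate(acc, gyro, dt):
    """Accumulate position (alpha), velocity (beta) and attitude (R)
    pre-integration quantities from bias-corrected IMU samples."""
    alpha = np.zeros(3); beta = np.zeros(3); R = np.eye(3)
    for a, w in zip(acc, gyro):
        alpha = alpha + beta * dt + 0.5 * (R @ a) * dt**2
        beta = beta + (R @ a) * dt
        # incremental rotation from the angular rate (Rodrigues formula)
        ang = np.linalg.norm(w) * dt
        if ang > 1e-12:
            k = w / np.linalg.norm(w)
            Kx = np.array([[0, -k[2], k[1]], [k[2], 0, -k[0]], [-k[1], k[0], 0]])
            dR = np.eye(3) + np.sin(ang) * Kx + (1 - np.cos(ang)) * (Kx @ Kx)
            R = R @ dR
    return alpha, beta, R
```

These quantities depend only on the IMU samples between two camera frames, which is what makes them reusable measurements in the sliding-window optimization.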
The error differential equation of the inertial pre-integration is:

$$\delta\dot{z}_t = \mathbf{F}_t\,\delta z_t + \mathbf{G}_t\, n_t$$

where the state error $\delta z$ collects the pre-integration position component estimation error $\delta\alpha$, velocity component estimation error $\delta\beta$, attitude component estimation error $\delta\theta$, accelerometer zero-bias estimation error $\delta b_a$, and gyroscope zero-bias estimation error $\delta b_\omega$; the noise vector $n$ collects the accelerometer measurement noise, gyroscope measurement noise, accelerometer zero-bias estimation noise, and gyroscope zero-bias estimation noise; $\mathbf{F}_t$ is the state-error transfer coefficient matrix and $\mathbf{G}_t$ is the noise transfer coefficient matrix. All quantities represented by the symbols in formula (3) are in continuous time.
The covariance matrix $\mathbf{P}$ of the inertial pre-integration can be solved iteratively with a first-order recursion in discrete time. The covariance matrix is first initialized to zero and then updated by:

$$\mathbf{P}_{t+\delta t} = \left(\mathbf{I} + \mathbf{F}_t\,\delta t\right)\mathbf{P}_t\left(\mathbf{I} + \mathbf{F}_t\,\delta t\right)^{T} + \left(\mathbf{G}_t\,\delta t\right)\mathbf{Q}\left(\mathbf{G}_t\,\delta t\right)^{T}$$

where $\delta t$ is the sampling interval of the pre-integration; $\mathbf{Q}$ is the diagonal matrix constructed from the noise variances; $\mathbf{I}$ is the identity matrix; $\mathbf{F}_t$ is the error transfer coefficient matrix; and $\mathbf{G}_t$ is the noise transfer coefficient matrix.
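The discrete-time covariance recursion above can be sketched directly in numpy; F_list holds the per-sample error transfer matrices and Q the diagonal noise matrix (shapes and names are illustrative):

```python
import numpy as np

def propagate_covariance(F_list, G, Q, dt):
    """Iterate P <- (I + F dt) P (I + F dt)^T + (G dt) Q (G dt)^T,
    starting from P = 0, per the first-order discrete-time recursion."""
    n = F_list[0].shape[0]
    P = np.zeros((n, n))                  # initial covariance is zero
    I = np.eye(n)
    for F in F_list:
        Phi = I + F * dt                  # first-order state transition
        Gd = G * dt
        P = Phi @ P @ Phi.T + Gd @ Q @ Gd.T
    return P
```

The resulting P weights the pre-integration residual in the sliding-window objective.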
The Jacobian matrix $\mathbf{J}$ of the inertial pre-integration with respect to the error state can be solved iteratively by:

$$\mathbf{J}_{t+\delta t} = \left(\mathbf{I} + \mathbf{F}_t\,\delta t\right)\mathbf{J}_t$$

where $\mathbf{J}$ is given the identity matrix as its initial value. With the covariance matrix calculated by equation (4) and the Jacobian matrix calculated by equation (5), a first-order approximation of the pre-integration with respect to the inertial measurement unit zero biases can be written as:

$$\alpha \approx \hat{\alpha} + \mathbf{J}^{\alpha}_{b_a}\,\delta b_a + \mathbf{J}^{\alpha}_{b_\omega}\,\delta b_\omega,\qquad \beta \approx \hat{\beta} + \mathbf{J}^{\beta}_{b_a}\,\delta b_a + \mathbf{J}^{\beta}_{b_\omega}\,\delta b_\omega,\qquad \gamma \approx \hat{\gamma} \otimes \begin{bmatrix}1\\ \tfrac{1}{2}\mathbf{J}^{\gamma}_{b_\omega}\,\delta b_\omega\end{bmatrix}$$
where $\mathbf{J}^{\alpha}_{b_a}$ and $\mathbf{J}^{\alpha}_{b_\omega}$ are the Jacobian matrices of the pre-integration position component with respect to the accelerometer and gyroscope zero biases, respectively; $\mathbf{J}^{\beta}_{b_a}$ and $\mathbf{J}^{\beta}_{b_\omega}$ are the Jacobian matrices of the pre-integration velocity component with respect to the accelerometer and gyroscope zero biases; $\mathbf{J}^{\gamma}_{b_\omega}$ is the Jacobian matrix of the pre-integration attitude component with respect to the gyroscope zero bias; $\delta b_a$ and $\delta b_\omega$ are the estimation errors of the accelerometer and gyroscope zero biases; $R(\cdot)$ converts an attitude quaternion into an attitude rotation matrix; $\otimes$ denotes quaternion multiplication; and $b_a$, $b_\omega$ are the accelerometer and gyroscope zero-bias values estimated at the corresponding time.
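The first-order zero-bias correction above can be sketched for the position and velocity components as follows; the attitude (quaternion) correction is omitted for brevity, and all names are illustrative:

```python
import numpy as np

def correct_preintegration(alpha, beta, J_a_ba, J_a_bw, J_b_ba, J_b_bw, dba, dbw):
    """First-order correction of the position/velocity pre-integration for
    small changes (dba, dbw) in the accelerometer/gyroscope zero-bias
    estimates, avoiding re-integration of the raw IMU samples."""
    alpha_new = alpha + J_a_ba @ dba + J_a_bw @ dbw
    beta_new = beta + J_b_ba @ dba + J_b_bw @ dbw
    return alpha_new, beta_new
```

This is why pre-integration is efficient: when the optimizer updates the bias estimates, the measurements are adjusted linearly instead of being recomputed from raw IMU data.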
In a next embodiment, the combined inertial/visual odometer module further comprises:
Suppose the navigation states at a number of recent moments need to be estimated; the inertial/visual combined navigation state vector over these moments is expressed as:
where each navigation state vector to be estimated comprises the position vector, velocity vector, and attitude quaternion of the unmanned aerial vehicle, together with the accelerometer zero-bias estimate and gyroscope zero-bias estimate in the sliding window. Besides the navigation state vectors, the state also includes the inverse depth information of the visual features in the camera images over the window, where $\lambda_l$ is the inverse depth of the visual feature in the $l$-th camera observation, $l = 1, \ldots, m$, with $m$ the total number of feature points. The three-dimensional coordinates of a feature in the navigation coordinate system can be obtained from its inverse depth as follows:

$$p_l^w = \mathbf{R}_{b}^{w}\left(\mathbf{R}_{c}^{b}\,\frac{1}{\lambda_l}\,\pi^{-1}(u_l) + p_{c}^{b}\right) + p_{b}^{w}$$
where $u_l$ is the pixel location of the $l$-th visual feature in the camera image in which it was first observed; $\mathbf{R}_b^w$ is the attitude matrix from that frame's body coordinate system to the world coordinate system; $\mathbf{R}_c^b$ is the attitude matrix between the onboard camera and the inertial measurement unit body coordinate system; $p_c^b$ is the position vector between the onboard camera and the inertial measurement unit body coordinate system; and $\pi^{-1}(\cdot)$ denotes the inverse mapping from image coordinates to three-dimensional coordinates at unit depth.
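The inverse-depth back-projection of formula (9) can be sketched as follows, assuming a pinhole intrinsic matrix K stands in for the unit-depth inverse mapping and the camera-to-body extrinsics (R_bc, p_bc) are known; names are illustrative:

```python
import numpy as np

def feature_world_point(u, inv_depth, K, R_wb, p_wb, R_bc, p_bc):
    """Recover a feature's 3-D point in the navigation/world frame from its
    inverse depth: back-project the pixel to unit depth, scale by 1/lambda,
    then transform camera -> body -> world."""
    ray = np.linalg.inv(K) @ np.array([u[0], u[1], 1.0])   # unit-depth back-projection
    Pc = ray / inv_depth                                   # point in camera frame
    Pb = R_bc @ Pc + p_bc                                  # camera -> body
    return R_wb @ Pb + p_wb                                # body -> world
```

The recovered world points are exactly the three-dimensional coordinates used later to screen map matching pairs.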
The optimization objective function for the combined inertial/visual navigation can be expressed as:
where the residual of the inertial pre-integration has the specific expression:
where $\delta\alpha$, $\delta\beta$, and $\delta\theta$ are the estimation errors of the pre-integration position, velocity, and attitude components; $\delta b_a$ and $\delta b_\omega$ are the accelerometer and gyroscope zero-bias estimation errors; $\mathbf{R}^{b_k}_w$ is the transformation matrix from the world coordinate system to the body coordinate system at time $k$; $p^w_{b_k}$ is the position vector of the unmanned aerial vehicle body at time $k$; $g^w$ is gravity in the world coordinate system; $\Delta t_k$ is the time window length; $v^w_{b_k}$ is the velocity vector of the unmanned aerial vehicle at time $k$; $\hat{\alpha}$, $\hat{\beta}$, $\hat{\gamma}$ are the measured values of the three pre-integration components; the inverse of the body attitude at time $k$ also enters the residual; and $b_a$, $b_\omega$ are the accelerometer and gyroscope zero biases at time $k$. All quantities represented by the symbols in formula (11) are in discrete space.
where the two quantities are the position vectors at the two corresponding moments in the world coordinate system.
Formula (10) is solved with a nonlinear optimization algorithm to obtain the position vector, velocity vector, and attitude quaternion of the unmanned aerial vehicle estimated by the inertial/visual combined odometer, together with the inverse depths of the visual features. The three-dimensional coordinates of the visual features in the navigation coordinate system are then recovered from the inverse depths and the solved position and attitude information; the specific calculation is given in formula (9).
In another embodiment, the combined inertial/visual odometer module further comprises:
Visual features in the camera image are selected for matching and positioning in the inertial/visual combined odometer according to the reprojection errors of their estimated three-dimensional coordinates in the camera image. The reprojection error can be calculated according to formula (12); only features whose reprojection error is smaller than a set threshold are retained for matching and positioning, and the retained features form the screened feature set.
Feature description is performed on the screened feature set to obtain the feature description vectors of the camera-image visual features.
The feature description vectors are established using SIFT, SURF, or ORB descriptors. The camera-image features and the remote sensing map features must use the same type of feature description vector so that the two can be matched.
In a next embodiment, the feature matching module further comprises:
301: feature description vectors for camera map visual featuresEach element in the remote sensing map, and a feature description vector of the remote sensing map by using a matching positioning algorithmSearching for the most similar features and establishing feature matching pairs;Showing the first selected from the visual features of the camera viewThe first of the individual features and the remote sensing map visual featuresA matching pair is formed between the elements.
302: and screening the feature matching pairs by using the three-dimensional coordinates of the visual features in the camera image in the navigation coordinate system to obtain the position and the posture of the unmanned aerial vehicle.
In a certain embodiment, the feature matching module further comprises:
3021: each feature matching pairAnd calculating a position translation amount, wherein the calculation method comprises the following steps:
wherein the content of the first and second substances,、three-dimensional coordinate points in a navigation coordinate system for visual features in a camera viewIs/are as follows、An axis coordinate value;coordinates of a point at the upper left corner of the remote sensing map under a navigation coordinate system;、coordinates of the visual features of the remote sensing map in the remote sensing map are obtained;the map resolution is in pixels/meter.
3022: for each matching pairInitializing an empty set. Searching for and matching pairs in a matching set (all matching pairs constitute a matching set)The method comprises the following steps that a matching pair which is close to translation amount is provided, the search method judges the consistency of the translation amount of the two matching pairs, and the consistency judgment parameter is calculated as follows:
when the consistency parameter is less than the set threshold value, corresponding translation amount is calculatedJoin to a collectionPerforming the following steps;
3023: screening the corresponding set in all the matching pairsSet of most elements, denoted as. When in useWhen the number of the elements in the positioning table exceeds a set threshold value, the matching positioning is successful. If the matching is successful, matching positioning calculation is carried out; if the matching is not successful, the process returns to step 301 to match the next camera image.
3024: if the matching is successful, the set is calculatedMean of all elements, note. Matching position of unmanned aerial vehicleCan be expressed by the following formula:
wherein the content of the first and second substances,a drone position estimated for a combined inertial/visual odometer.
In a further embodiment, the optimization module further comprises:
401: the position and the attitude of the unmanned aerial vehicle at each camera image sampling moment are described as a node and are represented asEstablishing a connecting edge residual error between two nodes by utilizing the position and the attitude of the unmanned aerial vehicle solved by the inertia/vision combined odometer; node pointAndcan be expressed asThe specific expression is as follows:
wherein the content of the first and second substances,、in order to be a node, the node is,representing to convert the attitude quaternion into an attitude rotation matrix; residual terms of the connected edgeI.e. the residual term between the measured value of the inertial/visual odometer, the expression is:
wherein the content of the first and second substances,representing the combined inertial/visual odometer estimation,andand (4) respectively representing an estimated value of the position vector and an estimated value of the attitude vector, and the specific expression is the same as (17).
402: utilizing the matching relation to solve the position and the posture of the unmanned aerial vehicle, and constructing a matching positioning residual error(ii) a The specific expression is shown as:
wherein the content of the first and second substances,representing the position of the drone estimated by the combined inertial/visual odometer,indicating the matching location of the drone.
If it is firstIf each node has no corresponding matching positioning result, no corresponding residual error item exists.
403: according to the residual error of the connected edgeAnd matching positioning residualsAnd constructing a pose graph optimization objective functionThe specific expression is as follows:
wherein the content of the first and second substances,represents a collection of all nodes;indicating only the nodes successfully matched and positioned;the error covariance matrix for representing the measurement of the inertial/visual odometer can be set according to experience and algorithm precision;the covariance matrix of the positioning error can be set based on experience and algorithm accuracy.
404: and solving the pose graph optimization objective function by adopting a nonlinear optimization algorithm to obtain a node state vector which minimizes the pose graph optimization objective function, wherein the node state vector is the position and the posture of the unmanned aerial vehicle.
The invention further provides a computer device, which includes a memory and a processor, wherein the memory stores a computer program, and the processor implements the steps of the method when executing the computer program.
The invention also proposes a computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the steps of the method described above.
The above description is only a preferred embodiment of the present invention, and is not intended to limit the scope of the present invention, and all modifications and equivalents of the present invention, which are made by the contents of the present specification and the accompanying drawings, or directly/indirectly applied to other related technical fields, are included in the scope of the present invention.
Claims (9)
1. An unmanned aerial vehicle autonomous positioning method based on remote sensing map assistance is characterized by comprising the following steps:
loading a remote sensing map of an unmanned aerial vehicle flight area according to the unmanned aerial vehicle flight mission, and performing off-line preprocessing on the remote sensing map to obtain a feature description vector of the remote sensing map;
the method comprises the steps of constructing an inertia/vision combined odometer by utilizing a camera image output by an airborne camera and the output of an inertia measurement unit, solving the position and the posture of the unmanned aerial vehicle by utilizing the inertia/vision combined odometer, estimating three-dimensional coordinates of visual features in the camera image in a navigation coordinate system, and establishing feature description vectors of the visual features of the camera image according to the three-dimensional coordinates;
carrying out feature matching on the feature description vector of the visual feature of the camera map and the feature description vector of the remote sensing map by using a matching positioning algorithm, and solving the position and the posture of the unmanned aerial vehicle according to a matching relation;
the position and the attitude of the unmanned aerial vehicle solved by the inertia/vision combined odometer and the position and the attitude of the unmanned aerial vehicle solved according to the matching relation are utilized to construct an unmanned aerial vehicle pose graph optimization objective function, so that the position and the attitude of the unmanned aerial vehicle are optimized, and the method comprises the following steps:
describing the position and the posture of the unmanned aerial vehicle at each camera image sampling moment as a node, and establishing a communication edge residual error between the two nodes by utilizing the position and the posture of the unmanned aerial vehicle solved by the inertia/vision combined odometer;
constructing a matching positioning residual error by utilizing the position and the posture of the unmanned aerial vehicle calculated according to the matching relation;
constructing a pose graph optimization objective function according to the connected edge residual error and the matched positioning residual error;
and solving the pose graph optimization objective function by adopting a nonlinear optimization algorithm to obtain a node state vector which minimizes the pose graph optimization objective function, wherein the node state vector is the position and the posture of the unmanned aerial vehicle.
2. The unmanned aerial vehicle autonomous positioning method based on remote sensing map assistance of claim 1, wherein loading a remote sensing map of an unmanned aerial vehicle flight area and performing offline preprocessing on the remote sensing map according to an unmanned aerial vehicle flight mission comprises:
loading a remote sensing map along the unmanned aerial vehicle task track according to the unmanned aerial vehicle flight task, wherein the remote sensing map is a whole range area of the unmanned aerial vehicle flight task or a local area of the unmanned aerial vehicle flight task;
detecting the visual features in the remote sensing map by adopting a feature detection algorithm to obtain the visual features of the remote sensing map, and recording the positions of the visual features in the remote sensing map;
and establishing a feature description vector of the visual features of the remote sensing map according to the positions of the visual features in the remote sensing map.
3. The unmanned aerial vehicle autonomous positioning method based on remote sensing map assistance as claimed in claim 1, wherein the method comprises the steps of constructing an inertia/vision combined odometer by using a camera image output by an onboard camera and an output of an inertia measurement unit, solving the position and the attitude of the unmanned aerial vehicle by using the inertia/vision combined odometer, estimating three-dimensional coordinates of visual features in the camera image in a navigation coordinate system, and establishing feature description vectors of the visual features of the camera image according to the three-dimensional coordinates, and comprises the following steps:
defining an origin and a direction of a navigation coordinate system;
tracking visual features in camera pictures output by an onboard camera to obtain the positions of the same visual feature in different camera pictures;
calculating the inertia pre-integral quantity in the acquisition time interval of the two frames of camera images by using the output of the inertia measurement unit;
and constructing an inertia/vision combined odometer according to the positions of the same visual feature in different camera images and the inertia pre-integration value, solving the position, the speed and the posture of the unmanned aerial vehicle by using the inertia/vision combined odometer, estimating the three-dimensional coordinates of the visual feature in the camera images in a navigation coordinate system, and establishing a feature description vector of the visual feature of the camera images according to the three-dimensional coordinates.
4. The unmanned aerial vehicle autonomous positioning method based on remote sensing map assistance of claim 3, wherein tracking visual features in camera images output by an onboard camera to obtain positions of the same visual feature in different camera images comprises:
detecting visual features in a camera image output by the airborne camera by using a feature detection algorithm;
and performing feature tracking on the visual features in the camera images in a feature tracking mode to obtain the positions of the same visual feature appearing in different camera images.
5. The unmanned aerial vehicle autonomous positioning method based on remote sensing map assistance of claim 3, wherein calculating the inertial pre-integration value over the acquisition time interval between two camera frames by using the output of the inertial measurement unit comprises:
acquiring the specific force and angular rate of the unmanned aerial vehicle by using the inertial measurement unit;
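A common realization of the recited pre-integration accumulates rotation, velocity and position increments from the specific-force and angular-rate samples between two camera frames, following the standard IMU pre-integration formulation. This NumPy sketch ignores biases, noise terms and gravity handling for brevity and is illustrative only:

```python
import numpy as np

def skew(w):
    """Skew-symmetric (cross-product) matrix of a 3-vector."""
    return np.array([[0, -w[2], w[1]],
                     [w[2], 0, -w[0]],
                     [-w[1], w[0], 0]])

def so3_exp(phi):
    """Rodrigues' formula: axis-angle vector -> rotation matrix."""
    th = np.linalg.norm(phi)
    if th < 1e-12:
        return np.eye(3)
    K = skew(phi / th)
    return np.eye(3) + np.sin(th) * K + (1 - np.cos(th)) * (K @ K)

def preintegrate(accel, gyro, dt):
    """Accumulate the pre-integration terms (dR, dv, dp) over one
    inter-frame interval from specific force and angular rate samples
    (biases and noise ignored in this sketch)."""
    dR, dv, dp = np.eye(3), np.zeros(3), np.zeros(3)
    for a, w in zip(accel, gyro):
        dp = dp + dv * dt + 0.5 * (dR @ a) * dt**2
        dv = dv + (dR @ a) * dt
        dR = dR @ so3_exp(w * dt)
    return dR, dv, dp

# toy IMU stream: constant 1 m/s^2 forward specific force, no rotation,
# 100 samples at 100 Hz (a 1 s camera frame interval)
accel = [np.array([1.0, 0.0, 0.0])] * 100
gyro = [np.zeros(3)] * 100
dR, dv, dp = preintegrate(accel, gyro, 0.01)
print(dv, dp)
```

The resulting (dR, dv, dp) triple is the inter-frame measurement the combined odometer consumes, independent of the global state at the first frame.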
6. The unmanned aerial vehicle autonomous positioning method based on remote sensing map assistance of claim 1, wherein matching the feature description vectors of the camera-image visual features with the feature description vectors of the remote sensing map by using the matching positioning algorithm, and solving the position and attitude of the unmanned aerial vehicle according to the matching relation, comprises the following steps:
for each element in the feature description vectors of the camera-image visual features, searching the feature description vectors of the remote sensing map for the most similar feature by using the matching positioning algorithm, and establishing a feature matching pair;
and screening the feature matching pairs by using the three-dimensional coordinates of the visual features in the camera images in the navigation coordinate system to obtain the position and attitude of the unmanned aerial vehicle.
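The "matching positioning algorithm" is left unspecified by the claim; a typical stand-in for the similarity search is nearest-neighbor descriptor matching with a Lowe-style ratio test, after which the accepted pairs would be screened geometrically against the features' three-dimensional coordinates (e.g. RANSAC over a PnP solution, not shown here). A hedged NumPy sketch of the matching step, with made-up descriptors:

```python
import numpy as np

def match_descriptors(desc_img, desc_map, ratio=0.8):
    """For each camera-image descriptor, find its nearest remote-sensing
    map descriptor and accept the pair only if it is clearly better than
    the second-nearest candidate (Lowe-style ratio test)."""
    pairs = []
    for i, d in enumerate(desc_img):
        dist = np.linalg.norm(desc_map - d, axis=1)
        j1, j2 = np.argsort(dist)[:2]
        if dist[j1] < ratio * dist[j2]:
            pairs.append((i, j1))  # feature matching pair (image, map)
    return pairs

# synthetic data: 50 map descriptors; 3 image descriptors that are
# slightly perturbed copies of map entries 3, 17 and 41
rng = np.random.default_rng(0)
desc_map = rng.normal(size=(50, 32))
desc_img = desc_map[[3, 17, 41]] + 0.01 * rng.normal(size=(3, 32))
print(match_descriptors(desc_img, desc_map))
```

The surviving pairs, together with the features' 3-D coordinates, feed the pose solution recited in the claim.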
7. An unmanned aerial vehicle autonomous positioning system based on remote sensing map assistance, characterized by comprising:
the remote sensing map module is used for loading a remote sensing map of the unmanned aerial vehicle flight area according to the unmanned aerial vehicle flight mission and performing off-line preprocessing on the remote sensing map to obtain the feature description vectors of the remote sensing map;
the inertial/visual combined odometer module is used for constructing an inertial/visual combined odometer from the camera images output by the onboard camera and the output of the inertial measurement unit, solving the position and attitude of the unmanned aerial vehicle with the inertial/visual combined odometer, estimating the three-dimensional coordinates of the visual features in the camera images in a navigation coordinate system, and establishing feature description vectors of the camera-image visual features from the three-dimensional coordinates;
the feature matching module is used for matching the feature description vectors of the camera-image visual features with the feature description vectors of the remote sensing map by using a matching positioning algorithm and solving the position and attitude of the unmanned aerial vehicle according to the matching relation;
the optimization module is used for constructing an unmanned aerial vehicle pose graph optimization objective function from the position and attitude of the unmanned aerial vehicle solved by the inertial/visual combined odometer and the position and attitude of the unmanned aerial vehicle solved according to the matching relation, so as to optimize the position and attitude of the unmanned aerial vehicle, wherein the optimization module is configured for:
describing the position and attitude of the unmanned aerial vehicle at each camera image sampling moment as a node, and establishing a connected-edge residual between two nodes by using the position and attitude of the unmanned aerial vehicle solved by the inertial/visual combined odometer;
constructing a matching positioning residual by using the position and attitude of the unmanned aerial vehicle solved according to the matching relation;
constructing a pose graph optimization objective function from the connected-edge residual and the matching positioning residual;
and solving the pose graph optimization objective function with a nonlinear optimization algorithm to obtain the node state vector that minimizes it, the node state vector being the position and attitude of the unmanned aerial vehicle.
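The pose graph described above can be illustrated with a deliberately simplified, position-only one-dimensional example: odometer increments supply the connected-edge residuals, map matches supply the matching positioning residuals, and because every residual is linear in this toy case the objective is minimized by ordinary least squares rather than the nonlinear optimizer an SE(3) implementation would need. All numbers and weights below are made up:

```python
import numpy as np

# 1-D pose graph with 4 nodes (positions at camera sampling moments).
# Connected-edge residuals: x[k+1] - x[k] - odom[k]
# Matching positioning residuals: x[k] - matches[k] where available.
odom = [1.0, 1.0, 1.0]          # odometer-measured inter-frame increments
matches = {0: 0.0, 3: 3.3}      # map-matched absolute positions
w_odom, w_match = 1.0, 0.5      # residual weights (illustrative)

# Assemble the weighted linear least-squares system rows @ x = rhs,
# whose squared norm is the pose graph optimization objective function.
n = len(odom) + 1
rows, rhs = [], []
for k, d in enumerate(odom):
    row = np.zeros(n); row[k + 1], row[k] = 1.0, -1.0
    rows.append(w_odom * row); rhs.append(w_odom * d)
for k, m in matches.items():
    row = np.zeros(n); row[k] = 1.0
    rows.append(w_match * row); rhs.append(w_match * m)

x, *_ = np.linalg.lstsq(np.vstack(rows), np.array(rhs), rcond=None)
print(x)  # optimized node positions
```

The solver spreads the 0.3 m discrepancy between the odometry chain and the absolute matches across all edges, which is exactly the fusion behavior the pose graph objective is built to achieve.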
8. A computer device comprising a memory and a processor, the memory storing a computer program, wherein the processor, when executing the computer program, implements the steps of the method of any one of claims 1 to 6.
9. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 6.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110222597.4A CN112577493B (en) | 2021-03-01 | 2021-03-01 | Unmanned aerial vehicle autonomous positioning method and system based on remote sensing map assistance |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110222597.4A CN112577493B (en) | 2021-03-01 | 2021-03-01 | Unmanned aerial vehicle autonomous positioning method and system based on remote sensing map assistance |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112577493A CN112577493A (en) | 2021-03-30 |
CN112577493B true CN112577493B (en) | 2021-05-04 |
Family
ID=75114091
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110222597.4A Active CN112577493B (en) | 2021-03-01 | 2021-03-01 | Unmanned aerial vehicle autonomous positioning method and system based on remote sensing map assistance |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112577493B (en) |
Families Citing this family (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113610134B (en) * | 2021-07-29 | 2024-02-23 | Oppo广东移动通信有限公司 | Image feature point matching method, device, chip, terminal and storage medium |
CN113625774B (en) * | 2021-09-10 | 2023-07-21 | 天津大学 | Local map matching and end-to-end ranging multi-unmanned aerial vehicle co-location system and method |
CN113705734B (en) * | 2021-09-30 | 2022-12-09 | 中国电子科技集团公司第五十四研究所 | Remote sensing image characteristic point elevation obtaining method based on multiple sensors and geocentric |
CN114509070B (en) * | 2022-02-16 | 2024-03-15 | 中国电子科技集团公司第五十四研究所 | Unmanned aerial vehicle navigation positioning method |
CN115388902B (en) * | 2022-10-28 | 2023-03-24 | 苏州工业园区测绘地理信息有限公司 | Indoor positioning method and system, AR indoor positioning navigation method and system |
Family Cites Families (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103954283B (en) * | 2014-04-01 | 2016-08-31 | 西北工业大学 | Inertia integrated navigation method based on scene matching aided navigation/vision mileage |
US10502840B2 (en) * | 2016-02-03 | 2019-12-10 | Qualcomm Incorporated | Outlier detection for satellite positioning system using visual inertial odometry |
CN108489482B (en) * | 2018-02-13 | 2019-02-26 | 视辰信息科技(上海)有限公司 | The realization method and system of vision inertia odometer |
CN108731670B (en) * | 2018-05-18 | 2021-06-22 | 南京航空航天大学 | Inertial/visual odometer integrated navigation positioning method based on measurement model optimization |
CN109974693B (en) * | 2019-01-31 | 2020-12-11 | 中国科学院深圳先进技术研究院 | Unmanned aerial vehicle positioning method and device, computer equipment and storage medium |
CN110160522A (en) * | 2019-04-16 | 2019-08-23 | 浙江大学 | A kind of position and orientation estimation method of the vision inertial navigation odometer based on sparse features method |
CN110375738B (en) * | 2019-06-21 | 2023-03-14 | 西安电子科技大学 | Monocular synchronous positioning and mapping attitude calculation method fused with inertial measurement unit |
CN111024066B (en) * | 2019-12-10 | 2023-08-01 | 中国航空无线电电子研究所 | Unmanned aerial vehicle vision-inertia fusion indoor positioning method |
CN111707261A (en) * | 2020-04-10 | 2020-09-25 | 南京非空航空科技有限公司 | High-speed sensing and positioning method for micro unmanned aerial vehicle |
CN112268564B (en) * | 2020-12-25 | 2021-03-02 | 中国人民解放军国防科技大学 | Unmanned aerial vehicle landing space position and attitude end-to-end estimation method |
2021-03-01 — CN application CN202110222597.4A, patent CN112577493B, status: Active
Also Published As
Publication number | Publication date |
---|---|
CN112577493A (en) | 2021-03-30 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN112577493B (en) | Unmanned aerial vehicle autonomous positioning method and system based on remote sensing map assistance | |
CN111811506B (en) | Visual/inertial odometer combined navigation method, electronic equipment and storage medium | |
US10295365B2 (en) | State estimation for aerial vehicles using multi-sensor fusion | |
CN110243358B (en) | Multi-source fusion unmanned vehicle indoor and outdoor positioning method and system | |
CN109993113B (en) | Pose estimation method based on RGB-D and IMU information fusion | |
CN111024066B (en) | Unmanned aerial vehicle vision-inertia fusion indoor positioning method | |
CN108731670B (en) | Inertial/visual odometer integrated navigation positioning method based on measurement model optimization | |
CN107869989B (en) | Positioning method and system based on visual inertial navigation information fusion | |
CN110044354A (en) | A kind of binocular vision indoor positioning and build drawing method and device | |
Panahandeh et al. | Vision-aided inertial navigation based on ground plane feature detection | |
US9071829B2 (en) | Method and system for fusing data arising from image sensors and from motion or position sensors | |
CN111462231B (en) | Positioning method based on RGBD sensor and IMU sensor | |
CN110726406A (en) | Improved nonlinear optimization monocular inertial navigation SLAM method | |
US20180075614A1 (en) | Method of Depth Estimation Using a Camera and Inertial Sensor | |
CN111504312A (en) | Unmanned aerial vehicle pose estimation method based on visual inertial polarized light fusion | |
CN111932674A (en) | Optimization method of line laser vision inertial system | |
CN108613675B (en) | Low-cost unmanned aerial vehicle movement measurement method and system | |
CN111862316A (en) | IMU tight coupling dense direct RGBD three-dimensional reconstruction method based on optimization | |
CN108827287B (en) | Robust visual SLAM system in complex environment | |
CN113763548B (en) | Vision-laser radar coupling-based lean texture tunnel modeling method and system | |
Xian et al. | Fusing stereo camera and low-cost inertial measurement unit for autonomous navigation in a tightly-coupled approach | |
CN116989772B (en) | Air-ground multi-mode multi-agent cooperative positioning and mapping method | |
CN112284381B (en) | Visual inertia real-time initialization alignment method and system | |
CN113465596A (en) | Four-rotor unmanned aerial vehicle positioning method based on multi-sensor fusion | |
CN114459474B (en) | Inertial/polarization/radar/optical-fluidic combined navigation method based on factor graph |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||