CN111521176A - Visual auxiliary inertial navigation method fusing laser - Google Patents
- Publication number
- CN111521176A (application CN202010346111.3A)
- Authority
- CN
- China
- Prior art keywords
- laser
- detector
- imu
- inertial navigation
- range finder
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Links
- 238000000034 method Methods 0.000 title claims abstract description 32
- 230000000007 visual effect Effects 0.000 title claims description 12
- 230000003287 optical effect Effects 0.000 claims description 24
- 230000001133 acceleration Effects 0.000 claims description 16
- 238000005259 measurement Methods 0.000 claims description 15
- 238000001914 filtration Methods 0.000 claims description 13
- 239000011159 matrix material Substances 0.000 claims description 6
- 239000000523 sample Substances 0.000 claims description 6
- 230000010354 integration Effects 0.000 claims description 4
- 230000005484 gravity Effects 0.000 claims description 3
- 230000001360 synchronised effect Effects 0.000 claims description 3
- 238000004364 calculation method Methods 0.000 claims 1
- 238000006243 chemical reaction Methods 0.000 claims 1
- 238000004088 simulation Methods 0.000 abstract description 5
- 238000011160 research Methods 0.000 abstract description 4
- 230000004927 fusion Effects 0.000 description 3
- 238000001514 detection method Methods 0.000 description 2
- 238000010586 diagram Methods 0.000 description 2
- 230000009466 transformation Effects 0.000 description 2
- 238000004891 communication Methods 0.000 description 1
- 230000000694 effects Effects 0.000 description 1
- 238000005516 engineering process Methods 0.000 description 1
- 238000002474 experimental method Methods 0.000 description 1
Images
Classifications
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/10—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration
- G01C21/12—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning
- G01C21/16—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation
- G01C21/165—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation combined with non-inertial navigation instruments
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C11/00—Photogrammetry or videogrammetry, e.g. stereogrammetry; Photographic surveying
- G01C11/04—Interpretation of pictures
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/005—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 with correlation of navigation data from several sources, e.g. map or contour matching
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C3/00—Measuring distances in line of sight; Optical rangefinders
Landscapes
- Engineering & Computer Science (AREA)
- Radar, Positioning & Navigation (AREA)
- Remote Sensing (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Automation & Control Theory (AREA)
- Multimedia (AREA)
- Electromagnetism (AREA)
- Navigation (AREA)
- Gyroscopes (AREA)
Abstract
The invention discloses a laser-fused vision-aided inertial navigation method for a planetary detector in the descent and landing stage. Single-point laser ranging provides distance information to the visual inertial navigation so that an accurate scale is obtained and feature point positions are computed more precisely; vision and inertial information are fused by an EKF-SLAM filtering algorithm, and the feature point positions are used in the filter update stage to correct the detector position. A planetary detector landing navigation image simulation system generates the descent image sequence and the laser ranging data for a simulation experiment, and the experimental results show that the method meets the landing accuracy requirement of the detector.
Description
Technical Field
The invention belongs to the field of laser-fused vision-aided inertial navigation. It relates in particular to a method that uses single-point laser ranging to provide distance information to visual inertial navigation, so that an accurate scale is obtained and more accurate feature point positions are computed. Vision and inertial information are fused by an EKF-SLAM filtering algorithm, and the feature point positions are used in the filter update stage to correct the detector position. The method is applied to the pose estimation problem of the landing descent segment of a planetary detector.
Background
Safe and accurate landing of a detector on the surface of celestial bodies such as the moon, asteroids and comets is one of the research hotspots in the field of deep space exploration. Because the large distance from the Earth brings long detection times, long communication delays and an unknown deep space environment, the detector must have a certain autonomous navigation capability to achieve safe and accurate landing.
Early autonomous navigation mainly relied on inertial navigation, in which the position and attitude of the detector are obtained by integrating the acceleration and angular velocity. However, owing to initial navigation errors, inertial element measurement errors and gravitational field model errors, the inertial navigation error accumulates over time, so measurement information from other external sensors must be combined to improve the navigation accuracy.
In recent years, vision-aided inertial navigation has become a hot topic in deep space exploration navigation research. By combining the advantages of visual SLAM and an inertial measurement unit (IMU), inertial information is fused into the visual SLAM system to realize combined visual-inertial positioning: the IMU provides continuous recursive information for the system, while the images eliminate the accumulated IMU error. The back end of the visual SLAM fuses the visual and inertial information through Kalman filtering to obtain the final pose estimate. However, monocular vision lacks scale information and cannot accurately estimate the distance of features in the image, which degrades the pose estimation accuracy of the detector.
To address this limitation, the invention fuses the depth information of a single-point laser into the vision-aided inertial navigation method to provide accurate distance information for the features in the image, thereby reducing the scale error and improving the pose estimation accuracy of the detector.
Disclosure of Invention
The invention provides a laser-fused vision-aided inertial navigation method. The method can be applied to pose estimation in the landing descent segment of a planetary detector, improves the positioning accuracy and guarantees a safe landing of the detector. Laser ranging information is introduced on top of vision-aided inertial navigation to provide an accurate scale for the images, making the position estimates of the feature points more accurate; the Kalman filtering algorithm then updates the state with these accurate feature point positions, reducing the state estimation error. The laser-fused vision-aided inertial navigation method thus solves the problem that monocular vision cannot provide a scale for feature positions in images, and improves the state estimation accuracy of the detector.
The invention is realized by the following technical scheme:
referring to fig. 1, the laser-fused vision-aided inertial navigation system includes an IMU (1), an optical camera (2) and a laser range finder (3). The IMU (1) acquires the acceleration and angular velocity of the detector; the optical camera (2) and the laser range finder (3) always face the planetary surface, with synchronous acquisition of image and distance information achieved through triggering or suitable interpolation; and the alignment parameters among the IMU (1), the optical camera (2) and the laser range finder (3) are known from sensor calibration.
Referring to fig. 2, the algorithm flow of the laser-fused vision-aided inertial navigation method is as follows: the algorithm takes the measurement information of the IMU, the optical camera and the laser range finder as input, and applies the EKF-SLAM filtering algorithm to fuse it and output the detector pose information. The algorithm comprises the following steps:
in the first step, the IMU acquires the acceleration and angular velocity of the detector, which are used to predict the state at the next moment, i.e. the inertial measurement model.
In the second step, the optical camera and the laser range finder acquire image and distance information, the feature points in the image are fused with the depth, and the feature point coordinates in the camera coordinate system are calculated.
In the third step, the results of the first and second steps are fused by the EKF-SLAM algorithm, and the accurate detector pose is finally output.
Specifically:
in the first step:
the IMU comprises a gyroscope for measuring the rotation angular velocity of the detector and an accelerometer for measuring the acceleration, and the rotation angular velocity actually measured is set as omeganAcceleration of non-gravitational force of anThe true and noiseless angular velocity of rotation is omega, the non-gravitational acceleration is a, and the correlation between the actual measured value and the true valueThe method comprises the following steps:
in the formula, CqRepresenting a rotation matrix between the coordinate system of the probe and the planetary coordinate system, gGRepresenting the acceleration of gravity of the planet, bgAnd baIndicating drift of gyroscopes and accelerometers, respectively, ngAnd naZero mean measurement white gaussian noise for the gyroscope and accelerometer respectively. The state of the detector at the next moment can be represented by omeganAnd anIs obtained by time integration of (a).
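For illustration only, the time integration of the bias-corrected measurements ω_n and a_n can be sketched in Python. This is a minimal sketch under assumed conventions (Euler integration, a first-order attitude update, and R mapping the body frame to the planet frame); the function names are not part of the patent:

```python
import numpy as np

def skew(w):
    """Skew-symmetric matrix so that skew(w) @ v == np.cross(w, v)."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def propagate(R, v, p, omega_n, a_n, b_g, b_a, g_G, dt):
    """One strapdown integration step from IMU measurements.

    R: rotation matrix body->planet; v, p: velocity/position in planet frame.
    omega_n, a_n: measured angular rate and specific force; b_g, b_a: biases.
    """
    w = omega_n - b_g                      # bias-corrected body angular rate
    a_body = a_n - b_a                     # bias-corrected specific force
    R_new = R @ (np.eye(3) + skew(w) * dt) # first-order attitude update
    a_world = R @ a_body + g_G             # restore planetary gravity
    v_new = v + a_world * dt
    p_new = p + v * dt + 0.5 * a_world * dt**2
    return R_new, v_new, p_new
```

In a real system the attitude would typically be integrated with quaternions or a higher-order scheme; the sketch only shows how the state at the next moment follows from ω_n and a_n.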
In the second step:
the optical camera and laser range finder acquire image and distance information, if valid, by a single laser distance zlAnd the observation point position P of the relative sensor position is calculated by the directionl=[0 0 zl]TConverted into a position P under the camera coordinate systemc=[xpcypczpc]T:
Pc=Rcl(Pl-tcl)
Wherein [ R ] iscl,tcl]Is a transformation matrix between the optical camera and the laser rangefinder. x is the number ofpc、ypc、zpcRespectively representing the coordinates of the x, y and z axes of the camera coordinate system.
The projection principle of the optical camera is shown in fig. 3: o_c-x_c y_c z_c denotes the camera coordinate system, f is the focal length of the camera, and (u, v) are the pixel coordinates of a ground point p on the image plane. Referring to fig. 4, the coordinates p_i = (u_i, v_i) of any pixel in the image are expressed as the coordinates q_i in the camera coordinate system by back-projecting with the laser-derived depth:

q_i = (z_pc / f) [u_i  v_i  f]^T

where i denotes the sequence number of the pixel point.
The covariance of pixel point p_i is Σ_pi and the uncertainty of the laser ranging is σ_zl; the uncertainty of q_i is then expressed by first-order propagation as:

Σ_qi = J_p Σ_pi J_p^T + σ_zl^2 J_z J_z^T

where J_p = ∂q_i/∂p_i and J_z = ∂q_i/∂z_pc are the Jacobians of the back-projection with respect to the pixel coordinates and the laser depth.
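The conversion of the laser point into the camera frame and the back-projection of a pixel with the laser-derived depth can be sketched as follows. This is an illustrative Python sketch; the function names are assumptions, and pixel coordinates are assumed to be measured from the principal point:

```python
import numpy as np

def laser_to_camera(z_l, R_cl, t_cl):
    """Convert a single-point laser range into the camera frame:
    P_c = R_cl (P_l - t_cl), with P_l = [0, 0, z_l]^T along the beam."""
    P_l = np.array([0.0, 0.0, z_l])
    return R_cl @ (P_l - t_cl)

def backproject(u, v, f, depth):
    """Back-project pixel (u, v) to a 3-D camera-frame point
    q = (depth / f) * [u, v, f]^T using the laser-derived depth."""
    return (depth / f) * np.array([u, v, f])
```

With an identity extrinsic transform, a 5 m laser return maps to [0, 0, 5] in the camera frame, and the back-projection simply scales the normalized image ray by the measured depth.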
in the third step:
and fusing the results of the first step and the second step by using a Kalman filtering algorithm to obtain accurate pose estimation, wherein the Kalman filtering process comprises the following steps:
1. state prediction
The equation of motion is set to:
x_k = f(x_{k-1}, u_k, w_k)

where x is the state vector and k is the time index; the state vector at time k-1 satisfies the distribution x_{k-1} ~ N(x̂_{k-1}, P̂_{k-1}), where N denotes the Gaussian distribution and the hat symbol denotes an a posteriori estimate; w_k ~ N(0, Q) is the noise vector (e.g. the various noises of the accelerometer and gyroscope), with 0 and Q the mean and covariance of the noise; and u_k is the motion data (the measurement data of the IMU).
Since the equation of motion is nonlinear and the propagated distribution is non-Gaussian, following the EKF idea a first-order linear expansion about the a posteriori estimate x̂_{k-1} and about w_k = 0 gives:

x_k ≈ f(x̂_{k-1}, u_k, 0) + F_{k-1}(x_{k-1} - x̂_{k-1}) + G_k w_k

where F_{k-1} = ∂f/∂x evaluated at x̂_{k-1} and G_k = ∂f/∂w evaluated at w_k = 0.
then the prediction equations for the state vector and its covariance at time k are:

x̄_k = f(x̂_{k-1}, u_k, 0)
P̄_k = F_{k-1} P̂_{k-1} F_{k-1}^T + G_k Q G_k^T

where the bar denotes an a priori (predicted) estimate.
2. status update
The observation equation z_k = h(x_k, v_k), with measurement noise v_k ~ N(0, R_k), is linearly expanded to first order as:

z_k ≈ h(x̄_k, 0) + H_k(x_k - x̄_k) + v_k

where H_k = ∂h/∂x evaluated at x̄_k.
the update equations for the state vector and its covariance are:

K_k = P̄_k H_k^T (H_k P̄_k H_k^T + R_k)^{-1}
x̂_k = x̄_k + K_k (z_k - h(x̄_k, 0))
P̂_k = (I - K_k H_k) P̄_k

where K_k is the Kalman gain.
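The generic predict/update cycle described above can be sketched as follows. This is a textbook-style Python sketch with precomputed Jacobians F and H, not the patent's full EKF-SLAM filter; the function names and the simplified signature f(x) are assumptions:

```python
import numpy as np

def ekf_predict(x, P, f, F, Q):
    """EKF prediction: propagate the mean through the (possibly nonlinear)
    motion model f and the covariance through its Jacobian F."""
    x_pred = f(x)
    P_pred = F @ P @ F.T + Q
    return x_pred, P_pred

def ekf_update(x_pred, P_pred, z, h, H, R):
    """EKF update: correct the prediction with measurement z and
    observation model h with Jacobian H."""
    S = H @ P_pred @ H.T + R                 # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)      # Kalman gain
    x = x_pred + K @ (z - h(x_pred))
    P = (np.eye(len(x)) - K @ H) @ P_pred
    return x, P
```

In the patented method, the prediction step uses the IMU-driven motion model of the first step, while the update step uses the laser-scaled feature point positions of the second step as observations.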
the invention adds a laser range finder on the basis of a vision-assisted inertial navigation method, the distance information of characteristic points in an image is given by the laser distance, and the three-dimensional coordinates of the characteristic points can be calculated and used for updating the state of a detector. Due to the addition of the laser ranging information, the depth error of the characteristic point is reduced, and therefore the pose estimation precision of the detector is improved.
Drawings
FIG. 1 is a schematic diagram of a system structure of a laser-fused vision-assisted inertial navigation method;
FIG. 2 is a flow chart of an algorithm of a visual aided inertial navigation method of fusion laser;
FIG. 3 is a schematic projection diagram of an optical camera;
FIG. 4 is a graph of a fusion relationship between pixel points and laser data;
FIG. 5 is a simulated image of the detector descent segment;
FIG. 6 shows the estimation error plots, where plot (a) gives the nominal and estimated trajectories, plot (b) the position estimation error curves, and plot (c) the velocity estimation error curves;
in the figure: 1. an IMU; 2. a monocular camera; 3. a laser range finder;
Detailed Description
The following embodiment is given to illustrate the invention; the detailed implementation and specific operating procedures are provided on the premise of the technical solution of the present invention.
Example (b):
the method is described in detail below with reference to the accompanying drawings and examples.
The laser-fused vision-aided inertial navigation system shown in fig. 1 comprises an IMU (1), an optical camera (2) and a laser range finder (3). The IMU (1) acquires the acceleration and angular velocity of the detector; the optical camera (2) and the laser range finder (3) always face the planetary surface, with synchronous acquisition of image and distance information achieved through triggering or suitable interpolation; and the alignment parameters among the IMU (1), the optical camera (2) and the laser range finder (3) are known from sensor calibration.
Referring to fig. 2, the algorithm flow of the laser-fused vision-aided inertial navigation method is as follows: the algorithm takes the measurement information of the IMU, the optical camera and the laser range finder as input, and applies the EKF-SLAM filtering algorithm to fuse it and output the detector pose information.
In the first step, the IMU acquires the acceleration and angular velocity of the detector, which are used to predict the state at the next moment, i.e. the inertial measurement model.
The IMU comprises a gyroscope for measuring the rotational angular velocity of the detector and an accelerometer for measuring its acceleration. Let the actually measured angular velocity be ω_n and the measured non-gravitational acceleration be a_n, while the true, noise-free angular velocity is ω and the true non-gravitational acceleration is a. The measured values are related to the true values by:

ω_n = ω + b_g + n_g
a_n = C_q (a - g_G) + b_a + n_a

where C_q denotes the rotation matrix between the detector coordinate system and the planetary coordinate system, g_G is the gravitational acceleration of the planet, b_g and b_a are the drifts of the gyroscope and accelerometer, and n_g and n_a are the zero-mean Gaussian white measurement noises of the gyroscope and accelerometer, respectively. The state of the detector at the next moment is obtained by integrating ω_n and a_n over time.
In the second step, the optical camera and the laser range finder acquire image and distance information. If the laser measurement is valid, the single laser distance z_l and the beam direction give the observed point position relative to the range finder, P_l = [0 0 z_l]^T, which is converted into the position P_c = [x_pc y_pc z_pc]^T in the camera coordinate system:

P_c = R_cl (P_l - t_cl)

where [R_cl, t_cl] is the transformation matrix between the optical camera and the laser range finder.
The projection principle of the optical camera is shown in FIG. 3, in which o_c-x_c y_c z_c denotes the camera coordinate system, f is the focal length of the camera, and (u, v) are the pixel coordinates of a ground point p on the image plane. As shown in FIG. 4, the coordinates p_i = (u_i, v_i) of any pixel in the image can be expressed as the coordinates q_i in the camera coordinate system by back-projecting with the laser-derived depth:

q_i = (z_pc / f) [u_i  v_i  f]^T
The covariance of pixel point p_i is Σ_pi and the uncertainty of the laser ranging is σ_zl; the uncertainty model of q_i can then be expressed by first-order propagation as:

Σ_qi = J_p Σ_pi J_p^T + σ_zl^2 J_z J_z^T

where J_p = ∂q_i/∂p_i and J_z = ∂q_i/∂z_pc are the Jacobians of the back-projection with respect to the pixel coordinates and the laser depth.
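The first-order propagation of the pixel covariance and the laser-range variance through the back-projection q = (z/f)[u v f]^T can be sketched as follows. This is an illustrative Python sketch under the stated assumptions (pixel coordinates measured from the principal point); the function name is hypothetical:

```python
import numpy as np

def backprojection_covariance(u, v, f, z, sigma_p, sigma_z):
    """Propagate pixel covariance sigma_p (2x2) and laser-range std. dev.
    sigma_z through q = (z / f) * [u, v, f]^T to a 3x3 point covariance."""
    # Jacobian of q with respect to the pixel coordinates (u, v)
    J_p = np.array([[z / f, 0.0],
                    [0.0, z / f],
                    [0.0, 0.0]])
    # Jacobian of q with respect to the depth z
    J_z = np.array([u / f, v / f, 1.0]).reshape(3, 1)
    return J_p @ sigma_p @ J_p.T + (sigma_z ** 2) * (J_z @ J_z.T)
```

At the principal point (u = v = 0) the depth uncertainty affects only the z component, while the pixel uncertainty is scaled by z/f in the image plane directions.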
thirdly, fusing the results of the first step and the second step by applying a Kalman filtering algorithm to obtain accurate pose estimation, wherein the Kalman filtering process comprises the following steps:
1. state prediction
The equation of motion is set to:
x_k = f(x_{k-1}, u_k, w_k)
where x is the state vector and k is the time index; the state vector at time k-1 satisfies the distribution x_{k-1} ~ N(x̂_{k-1}, P̂_{k-1}), where N denotes the Gaussian distribution and the hat symbol denotes an a posteriori estimate; w_k ~ N(0, Q) is the noise vector (e.g. the various noises of the accelerometer and gyroscope), with 0 and Q the mean and covariance of the noise; and u_k is the motion data (the measurement data of the IMU). Since the equation of motion is nonlinear and the propagated distribution is non-Gaussian, following the EKF idea a first-order linear expansion about the a posteriori estimate x̂_{k-1} and about w_k = 0 gives:

x_k ≈ f(x̂_{k-1}, u_k, 0) + F_{k-1}(x_{k-1} - x̂_{k-1}) + G_k w_k

where F_{k-1} = ∂f/∂x evaluated at x̂_{k-1} and G_k = ∂f/∂w evaluated at w_k = 0.
then the prediction equations for the state vector and its covariance at time k are:

x̄_k = f(x̂_{k-1}, u_k, 0)
P̄_k = F_{k-1} P̂_{k-1} F_{k-1}^T + G_k Q G_k^T
2. status update
The observation equation z_k = h(x_k, v_k), with measurement noise v_k ~ N(0, R_k), is linearly expanded to first order as:

z_k ≈ h(x̄_k, 0) + H_k(x_k - x̄_k) + v_k
the update equations for the state vector and its covariance are:

K_k = P̄_k H_k^T (H_k P̄_k H_k^T + R_k)^{-1}
x̂_k = x̄_k + K_k (z_k - h(x̄_k, 0))
P̂_k = (I - K_k H_k) P̄_k
effects of the implementation
A simulation experiment was designed according to the above steps. The IMU data in the experiment are generated from the designed nominal trajectory; the camera images are shown in fig. 5, and the descent image sequence and the laser ranging data are generated by a planetary detector landing navigation image simulation system. The experimental results are shown in fig. 6: plot (a) shows the detector trajectory estimated by the navigation method of the invention together with the designed nominal trajectory, and the estimated and nominal trajectories almost completely coincide. Plots (b) and (c) show the position and velocity estimation errors along the X, Y and Z axes; the position estimation error stays within 100 m and the velocity estimation error within 0.2 m/s, both meeting the landing accuracy requirement of the detector.
Claims (5)
1. A laser-fused vision-aided inertial navigation system, characterized in that: the system comprises an IMU (1), an optical camera (2) and a laser range finder (3); the IMU (1) acquires the acceleration and angular velocity of the detector; the optical camera (2) and the laser range finder (3) always face the planetary surface, with synchronous acquisition of image and distance information achieved through triggering or interpolation; the alignment parameters among the IMU (1), the optical camera (2) and the laser range finder (3) are known from sensor calibration; and the laser-fused vision-aided inertial navigation method takes the measurement information of the IMU, the optical camera and the laser range finder as input and applies the EKF-SLAM filtering algorithm to fuse it and output the detector pose information.
2. A laser-fused vision-aided inertial navigation method, characterized by comprising the following steps:
in the first step, the IMU acquires the acceleration and angular velocity of the detector, which are used to predict the state at the next moment, i.e. the inertial measurement model;
in the second step, the optical camera and the laser range finder acquire image and distance information, the feature points in the image are fused with the depth, and the feature point coordinates in the camera coordinate system are calculated;
in the third step, the results of the first and second steps are fused by the EKF-SLAM algorithm, and the accurate detector pose is finally output.
3. The laser-fused vision-aided inertial navigation method according to claim 2, wherein in the first step:
the IMU comprises a gyroscope for measuring the rotation angular velocity of the detector and an accelerometer for measuring the acceleration, and the rotation angular velocity actually measured is set as omeganAcceleration of non-gravitational force of anThe true and noiseless rotational angular velocity is ω, the non-gravitational acceleration is a, and the relationship between the actual measured value and the true value is as follows:
in the formula, CqRepresenting between the coordinate system of the probe and the planetary coordinate systemRotation matrix, gGRepresenting the acceleration of gravity of the planet, bgAnd baIndicating drift of gyroscopes and accelerometers, respectively, ngAnd naZero mean measurement white gaussian noise representing a gyroscope and an accelerometer respectively; the state of the detector at the next moment can be represented by omeganAnd anIs obtained by time integration of (a).
4. The laser-fused visual-aided inertial navigation method according to claim 2, wherein in the second step:
the optical camera and laser range finder acquire image and distance information, if valid, by a single laser distance zlAnd the observation point position P of the relative sensor position is calculated by the directionl=[0 0 zl]TConverted into a position P under the camera coordinate systemc=[xpcypczpc]T:
Pc=Rcl(Pl-tcl)
Wherein [ R ] iscl,tcl]Is a conversion matrix between the optical camera and the laser range finder;
the projection principle of the optical camera is shown in fig. 3: o_c-x_c y_c z_c denotes the camera coordinate system, f is the focal length of the camera, and (u, v) are the pixel coordinates of a ground point p on the image plane; the coordinates p_i = (u_i, v_i) of any pixel in the image can be expressed as the coordinates q_i in the camera coordinate system by back-projecting with the laser-derived depth:

q_i = (z_pc / f) [u_i  v_i  f]^T
the covariance of pixel point p_i is Σ_pi and the uncertainty of the laser ranging is σ_zl; the uncertainty model of q_i can then be expressed by first-order propagation as:

Σ_qi = J_p Σ_pi J_p^T + σ_zl^2 J_z J_z^T

where J_p = ∂q_i/∂p_i and J_z = ∂q_i/∂z_pc.
5. the laser-fused vision-aided inertial navigation method according to claim 2, wherein in the third step:
and fusing the results of the first step and the second step by using a Kalman filtering algorithm to obtain accurate pose estimation, wherein the Kalman filtering process comprises the following steps:
(1) prediction of state
The equation of motion is set to:
x_k = f(x_{k-1}, u_k, w_k)
wherein x is the state vector, and the state vector at time k-1 satisfies the distribution x_{k-1} ~ N(x̂_{k-1}, P̂_{k-1}); w_k ~ N(0, Q) is the noise vector (e.g. the various noises of the accelerometer and gyroscope), and u_k is the motion data (the measurement data of the IMU);
since the equation of motion is nonlinear and the propagated distribution is non-Gaussian, following the EKF idea a first-order linear expansion about the a posteriori estimate x̂_{k-1} and about w_k = 0 gives:

x_k ≈ f(x̂_{k-1}, u_k, 0) + F_{k-1}(x_{k-1} - x̂_{k-1}) + G_k w_k
then the prediction equations for the state vector and its covariance at time k are:

x̄_k = f(x̂_{k-1}, u_k, 0)
P̄_k = F_{k-1} P̂_{k-1} F_{k-1}^T + G_k Q G_k^T
(2) status update
The observation equation z_k = h(x_k, v_k), with measurement noise v_k ~ N(0, R_k), is linearly expanded to first order as:

z_k ≈ h(x̄_k, 0) + H_k(x_k - x̄_k) + v_k
the update equations for the state vector and its covariance are:

K_k = P̄_k H_k^T (H_k P̄_k H_k^T + R_k)^{-1}
x̂_k = x̄_k + K_k (z_k - h(x̄_k, 0))
P̂_k = (I - K_k H_k) P̄_k
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010346111.3A CN111521176A (en) | 2020-04-27 | 2020-04-27 | Visual auxiliary inertial navigation method fusing laser |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010346111.3A CN111521176A (en) | 2020-04-27 | 2020-04-27 | Visual auxiliary inertial navigation method fusing laser |
Publications (1)
Publication Number | Publication Date |
---|---|
CN111521176A true CN111521176A (en) | 2020-08-11 |
Family
ID=71902914
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010346111.3A Pending CN111521176A (en) | 2020-04-27 | 2020-04-27 | Visual auxiliary inertial navigation method fusing laser |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111521176A (en) |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112712107A (en) * | 2020-12-10 | 2021-04-27 | 浙江大学 | Optimization-based vision and laser SLAM fusion positioning method |
CN114485620A (en) * | 2022-01-29 | 2022-05-13 | 中国科学院国家空间科学中心 | Orbital dynamics fused asteroid detector autonomous visual positioning system and method |
CN115628740A (en) * | 2022-08-11 | 2023-01-20 | 中国电子科技集团公司第十一研究所 | Pointing control method of moving-base single-detector type composite-axis laser pointing system |
CN115931009A (en) * | 2023-03-13 | 2023-04-07 | 北京航空航天大学 | Inertial device centrifugal measurement method based on gyroscope and laser ranging |
CN117284499A (en) * | 2023-11-24 | 2023-12-26 | 北京航空航天大学 | Monocular vision-laser-based pose measurement method for spatial unfolding mechanism |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20140316698A1 (en) * | 2013-02-21 | 2014-10-23 | Regents Of The University Of Minnesota | Observability-constrained vision-aided inertial navigation |
CN105371840A (en) * | 2015-10-30 | 2016-03-02 | 北京自动化控制设备研究所 | Method for combined navigation of inertia/visual odometer/laser radar |
CN107688184A (en) * | 2017-07-24 | 2018-02-13 | 宗晖(上海)机器人有限公司 | A kind of localization method and system |
CN108362266A (en) * | 2018-02-22 | 2018-08-03 | 北京航空航天大学 | One kind is based on EKF laser rangings auxiliary monocular vision measurement method and system |
CN110849362A (en) * | 2019-11-28 | 2020-02-28 | 湖南率为控制科技有限公司 | Laser radar and vision combined navigation algorithm based on vehicle-mounted inertia |
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20140316698A1 (en) * | 2013-02-21 | 2014-10-23 | Regents Of The University Of Minnesota | Observability-constrained vision-aided inertial navigation |
CN105371840A (en) * | 2015-10-30 | 2016-03-02 | 北京自动化控制设备研究所 | Method for combined navigation of inertia/visual odometer/laser radar |
CN107688184A (en) * | 2017-07-24 | 2018-02-13 | 宗晖(上海)机器人有限公司 | A kind of localization method and system |
CN108362266A (en) * | 2018-02-22 | 2018-08-03 | 北京航空航天大学 | One kind is based on EKF laser rangings auxiliary monocular vision measurement method and system |
CN110849362A (en) * | 2019-11-28 | 2020-02-28 | 湖南率为控制科技有限公司 | Laser radar and vision combined navigation algorithm based on vehicle-mounted inertia |
Non-Patent Citations (1)
Title |
---|
FRANZ ANDERT et al.: "Lidar-Aided Camera Feature Tracking and Visual SLAM for Spacecraft Low-Orbit Navigation and Planetary Landing", Advances in Aerospace Guidance, Navigation and Control * |
Cited By (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112712107A (en) * | 2020-12-10 | 2021-04-27 | 浙江大学 | Optimization-based vision and laser SLAM fusion positioning method |
CN112712107B (en) * | 2020-12-10 | 2022-06-28 | 浙江大学 | Optimization-based vision and laser SLAM fusion positioning method |
CN114485620A (en) * | 2022-01-29 | 2022-05-13 | 中国科学院国家空间科学中心 | Orbital dynamics fused asteroid detector autonomous visual positioning system and method |
CN114485620B (en) * | 2022-01-29 | 2023-07-28 | 中国科学院国家空间科学中心 | Autonomous visual positioning system and method for asteroid detector fused with orbit dynamics |
CN115628740A (en) * | 2022-08-11 | 2023-01-20 | 中国电子科技集团公司第十一研究所 | Pointing control method of moving-base single-detector type composite-axis laser pointing system |
CN115931009A (en) * | 2023-03-13 | 2023-04-07 | 北京航空航天大学 | Inertial device centrifugal measurement method based on gyroscope and laser ranging |
CN115931009B (en) * | 2023-03-13 | 2023-04-28 | 北京航空航天大学 | Inertial device centrifugal measurement method based on gyroscope and laser ranging |
CN117284499A (en) * | 2023-11-24 | 2023-12-26 | 北京航空航天大学 | Monocular vision-laser-based pose measurement method for spatial unfolding mechanism |
CN117284499B (en) * | 2023-11-24 | 2024-01-19 | 北京航空航天大学 | Monocular vision-laser-based pose measurement method for spatial unfolding mechanism |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111521176A (en) | Visual auxiliary inertial navigation method fusing laser | |
Ludwig et al. | Comparison of Euler estimate using extended Kalman filter, Madgwick and Mahony on quadcopter flight data | |
US10295365B2 (en) | State estimation for aerial vehicles using multi-sensor fusion | |
CN111947652B (en) | Inertia/vision/astronomy/laser ranging combined navigation method suitable for lunar lander | |
CN114018274B (en) | Vehicle positioning method and device and electronic equipment | |
CN109269511B (en) | Curve matching visual navigation method for planet landing in unknown environment | |
CN112629538A (en) | Ship horizontal attitude measurement method based on fusion complementary filtering and Kalman filtering | |
CN112562077B (en) | Pedestrian indoor positioning method integrating PDR and priori map | |
CN107144278B (en) | Lander visual navigation method based on multi-source characteristics | |
CN112857398B (en) | Rapid initial alignment method and device for ship under mooring state | |
CN111895988A (en) | Unmanned aerial vehicle navigation information updating method and device | |
CN103438890A (en) | Planetary power descending branch navigation method based on TDS (total descending sensor) and image measurement | |
CN113218389B (en) | Vehicle positioning method, device, storage medium and computer program product | |
CN114690229A (en) | GPS-fused mobile robot visual inertial navigation method | |
CN108871319B (en) | Attitude calculation method based on earth gravity field and earth magnetic field sequential correction | |
CN112284388B (en) | Unmanned aerial vehicle multisource information fusion navigation method | |
CN111539982B (en) | Visual inertial navigation initialization method based on nonlinear optimization in mobile platform | |
CN116047567B (en) | Deep learning assistance-based guard and inertial navigation combined positioning method and navigation method | |
CN113008229A (en) | Distributed autonomous integrated navigation method based on low-cost vehicle-mounted sensor | |
Ilyas et al. | Drift reduction in IMU-only pedestrian navigation system in unstructured environment | |
CN112729283A (en) | Navigation method based on depth camera/MEMS inertial navigation/odometer combination | |
Jaekel et al. | An inertial navigation system with acoustic obstacle detection for pedestrian applications | |
CN107830856B (en) | Formation-flight-oriented solar TDOA (time difference of arrival) measurement method and integrated navigation method | |
CN114459474B (en) | Inertial/polarization/radar/optical-fluidic combined navigation method based on factor graph | |
CN116222551A (en) | Underwater navigation method and device integrating multiple data |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
WD01 | Invention patent application deemed withdrawn after publication |
Application publication date: 20200811 |