CN112179338A - Low-altitude unmanned aerial vehicle self-positioning method based on vision and inertial navigation fusion - Google Patents


Info

Publication number
CN112179338A
Authority
CN
China
Prior art keywords
positioning
imu
pose
aerial vehicle
unmanned aerial
Prior art date
Legal status
Pending
Application number
CN202010930651.6A
Other languages
Chinese (zh)
Inventor
张通
符文星
陈康
常晓飞
张晓峰
许涛
付斌
Current Assignee
Xi'an Innno Aviation Technology Co ltd
Northwestern Polytechnical University
Original Assignee
Xi'an Innno Aviation Technology Co ltd
Northwestern Polytechnical University
Priority date
Filing date
Publication date
Application filed by Xi'an Innno Aviation Technology Co ltd and Northwestern Polytechnical University
Priority to CN202010930651.6A
Publication of CN112179338A

Classifications

    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01C: MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00: Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/10: Navigation by using measurements of speed or acceleration
    • G01C21/12: Navigation by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning
    • G01C21/16: Dead reckoning by integrating acceleration or speed, i.e. inertial navigation
    • G01C21/165: Inertial navigation combined with non-inertial navigation instruments
    • G01C21/20: Instruments for performing navigational calculations
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/20: Analysis of motion
    • G06T7/269: Analysis of motion using gradient-based methods
    • G06T7/70: Determining position or orientation of objects or cameras

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Theoretical Computer Science (AREA)
  • Automation & Control Theory (AREA)
  • Multimedia (AREA)
  • Navigation (AREA)

Abstract

The invention relates to a low-altitude unmanned aerial vehicle self-positioning method based on vision and inertial navigation fusion, which exploits the respective characteristics of a monocular camera and an IMU module to make the sensor data complementary. Positioning with a monocular camera alone can adapt to most scenes, but it cannot resolve the metric scale, cannot establish matching relations in weak-texture or fast-motion scenes, and is prone to tracking loss; a pure IMU reflects dynamic changes well over short intervals (milliseconds), but its accumulated error keeps growing over longer intervals (seconds). Therefore, by fusing visual and inertial navigation information in the initialization, local optimization and global optimization stages of the positioning process, the advantages of the different sensors complement one another, the applicability and accuracy of the positioning system are enhanced, and reliable technical support is provided for positioning the unmanned aerial vehicle in a low-altitude battlefield environment.

Description

Low-altitude unmanned aerial vehicle self-positioning method based on vision and inertial navigation fusion
Technical Field
The invention belongs to the field of computer vision applications and relates to a low-altitude unmanned aerial vehicle self-positioning method based on vision and inertial navigation fusion.
Background
Unmanned aerial vehicles play an increasingly important role thanks to their high portability, strong maneuverability and similar characteristics, and high-precision positioning and navigation capability is both a key index of unmanned aerial vehicle system performance and an important link in realizing the value of unmanned aerial vehicles in many fields. Under normal conditions, an unmanned aerial vehicle achieves self-positioning through integrated navigation based on GPS, BeiDou or similar systems; however, because such navigation signals have poor anti-jamming capability and weak strength in occluded environments, solving the self-positioning problem of the unmanned aerial vehicle in the absence of satellite navigation signals is of great importance.
At present, positioning in environments without satellite navigation signals is achieved with sensors carried on the unmanned aerial vehicle or with external auxiliary sensors, and this approach has attracted extensive attention. Because of limits on payload capacity and computing power, the sensors an unmanned aerial vehicle can carry mainly include cameras, lidar, IMUs, ultrasonic sensors and barometers. According to the type of sensor adopted, unmanned aerial vehicle self-positioning methods mainly comprise visual positioning, laser positioning and multi-sensor fusion positioning. Owing to its low cost and small size, and especially after the emergence of SLAM techniques relying on the feature-point method and the direct method, visual positioning has developed greatly in the field of unmanned aerial vehicle positioning, with strong algorithmic robustness, environmental adaptability and computation speed; its inherent defect is a high requirement on the brightness and texture of the input image data. Laser positioning works in an active mode, which effectively avoids the vision sensor's high requirement on ambient illumination and can generate accurate three-dimensional point clouds of the scene, so it is widely applied to three-dimensional reconstruction of enclosed spaces, but its hardware cost is high and its concealment is poor. Multi-sensor fusion positioning better overcomes the limitation that a single sensor suits only a single specific scene: it exploits the advantages of each sensor and forms complementary data, thereby greatly enhancing the applicability and accuracy of the positioning system, and it is the main research direction at the present stage.
Disclosure of Invention
Technical problem to be solved
To avoid the defects of the prior art, the invention provides a low-altitude unmanned aerial vehicle self-positioning method based on vision and inertial navigation fusion. Aimed at multi-rotor unmanned aerial vehicles operating without satellite navigation signals in a land battlefield environment, it adopts a multi-sensor fusion positioning scheme and comprehensively weighs the usability, reliability, concealment and accuracy of each sensor, so as to obtain accurate unmanned aerial vehicle positioning results in real time.
Technical scheme
Step S1: a monocular camera with a global shutter and an IMU module rigidly attached to the camera are mounted on the unmanned aerial vehicle; image data at a frame rate of 30 fps and IMU data at a frequency of 500 Hz are collected during low-altitude flight and serve as the input of the whole positioning algorithm;
Step S2, initializing the input data: FAST key points are extracted from the images and adjacent images are matched with an optical-flow method, in preparation for subsequently solving the camera pose; meanwhile, the IMU data are pre-integrated, accumulating position, velocity and rotation-angle increments from the high-frequency IMU data acquired between adjacent image frames (an illustrative pre-integration sketch follows this list of steps). The relative pose of adjacent image frames is then solved by SVD (singular value decomposition) from the matching result and aligned with the IMU pre-integration result to serve as the initialization parameters;
Step S3, performing local nonlinear optimization on the image and inertial navigation data: the keyframes and IMU data within a set local window are optimized nonlinearly; the visual constraints and IMU constraints are solved jointly by least squares, and bundle adjustment computes the poses that minimize the overall error of the local window, yielding a relatively accurate local positioning result;
Step S4: the accumulated error of pose estimation is corrected through loop detection; if a loop exists, the common feature points produced by the loop are used as fixed values and the current pose is optimized nonlinearly, further improving the positioning accuracy;
Step S5: on the basis of the camera, IMU and loop-detection constraints, global nonlinear optimization is performed on all the data and a more accurate pose is output;
Step S6: on the basis of the global optimization, 6-degree-of-freedom pose information containing translation and rotation is finally output.
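For illustration of the pre-integration mentioned in step S2, a minimal Python sketch is given below. It assumes simple Euler integration at a fixed IMU sample period, ignores bias and noise terms, and uses assumed names (preintegrate, imu_samples); it is not the exact formulation used by the patent.

    # Minimal IMU pre-integration sketch between two image frames (assumed
    # simplified form: Euler integration, bias and noise terms omitted).
    # It accumulates the rotation, velocity and position increments expressed
    # in the body frame of the earlier frame, so they can later be aligned
    # with the visually estimated relative pose.
    import numpy as np
    from scipy.spatial.transform import Rotation

    def preintegrate(imu_samples, dt):
        """imu_samples: iterable of (gyro[3], acc[3]) pairs at fixed period dt [s]."""
        dR = np.eye(3)          # accumulated rotation increment
        dv = np.zeros(3)        # accumulated velocity increment
        dp = np.zeros(3)        # accumulated position increment
        for gyro, acc in imu_samples:
            acc = np.asarray(acc, dtype=float)
            dp += dv * dt + 0.5 * (dR @ acc) * dt * dt
            dv += (dR @ acc) * dt
            dR = dR @ Rotation.from_rotvec(np.asarray(gyro, dtype=float) * dt).as_matrix()
        return dR, dv, dp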
Advantageous effects
The invention provides a low-altitude unmanned aerial vehicle self-positioning method based on vision and inertial navigation fusion, which solves the problem that existing positioning methods cannot simultaneously provide reliability, concealment, usability and accuracy for unmanned aerial vehicle self-positioning in a low-altitude battlefield environment. For the battlefield environment, the system hardware mainly comprises a multi-rotor unmanned aerial vehicle, an onboard processing board, and a monocular camera and IMU kept rigidly attached to each other. The intrinsic parameters of the monocular camera and the extrinsic parameters between the rigidly attached camera and IMU must be calibrated in advance; the whole algorithm runs on the onboard processing board and processes the data collected by the unmanned aerial vehicle at low altitude in real time.
The invention provides a low-altitude unmanned aerial vehicle self-positioning method based on vision and inertial navigation fusion, which exploits the respective characteristics of a monocular camera and an IMU module to make the sensor data complementary. Positioning with a monocular camera alone can adapt to most scenes, but it cannot resolve the metric scale, cannot establish matching relations in weak-texture or fast-motion scenes, and is prone to tracking loss; a pure IMU reflects dynamic changes well over short intervals (milliseconds), but its accumulated error keeps growing over longer intervals (seconds). Therefore, by fusing visual and inertial navigation information in the initialization, local optimization and global optimization stages of the positioning process, the advantages of the different sensors complement one another, the applicability and accuracy of the positioning system are enhanced, and reliable technical support is provided for positioning the unmanned aerial vehicle in a low-altitude battlefield environment.
Drawings
FIG. 1 is a basic flow chart of a method for positioning an unmanned aerial vehicle based on fusion of vision and inertial navigation;
FIG. 2 shows the relationship among image frames, keyframes, IMU data and IMU pre-integration data;
FIG. 3 is a schematic diagram of the structure of visual odometry (left) and visual-inertial odometry (right);
FIG. 4 shows results of the method measured around a garden, including the trajectory, point cloud and pose;
FIG. 5 shows the original scene and the results, containing a loop, measured by the method around a sculpture.
Detailed Description
The invention will now be further described with reference to the following examples and drawings:
As shown in FIG. 1, the invention provides a battlefield low-altitude unmanned aerial vehicle self-positioning method based on vision and inertial navigation fusion. First, the image frames acquired by the monocular camera and the high-frequency data acquired by the IMU (inertial measurement unit) module are taken as input; key points are extracted from adjacent images, the adjacent images are matched with an optical-flow method, and the IMU data are pre-integrated, so that the initialization parameters of the fused image-IMU system are estimated. The data within a local window are then optimized nonlinearly using the image and inertial navigation information to obtain preliminary pose information. The current scene is judged through loop detection; if a loop exists, a bag-of-words model establishes the relation between the current frame and historical frames and the pose accuracy is corrected. Finally, the global image and inertial navigation information are optimized as a whole, the pose accuracy is further corrected, and the final 6-degree-of-freedom pose is output as the result.
The method comprises the following specific implementation steps:
Step S1: the image frames acquired by the monocular camera and the inertial navigation data acquired by the IMU module are taken as input. The monocular camera should use a global shutter rather than a rolling shutter, which effectively avoids the distortion produced when the camera or the target moves quickly; meanwhile, the IMU module must be rigidly attached to the camera rather than flexibly connected, and the extrinsic parameters between the IMU module and the camera must be calibrated in advance, otherwise the IMU data cannot truly reflect the motion state of the camera. In the specific implementation, the unmanned aerial vehicle collects image data at a frame rate of 30 fps and IMU data at a frequency of 500 Hz during low-altitude flight, and these serve as the input of the whole positioning algorithm;
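Purely to illustrate this input stage, the Python sketch below groups the 500 Hz IMU samples by the intervals between consecutive 30 fps image timestamps so that each gap can later be pre-integrated; ImuSample and group_imu_between_frames are assumed names, not part of the patent.

    # Illustrative grouping of high-rate IMU samples by image-frame intervals
    # (assumed data layout; roughly 500/30 ≈ 16-17 samples fall between frames).
    from dataclasses import dataclass
    from typing import List, Tuple

    @dataclass
    class ImuSample:
        t: float                            # timestamp [s]
        acc: Tuple[float, float, float]     # accelerometer reading [m/s^2]
        gyro: Tuple[float, float, float]    # gyroscope reading [rad/s]

    def group_imu_between_frames(frame_times: List[float],
                                 imu: List[ImuSample]) -> List[List[ImuSample]]:
        """Return, for each pair of consecutive frames, the IMU samples in between."""
        groups, k = [], 0
        for t0, t1 in zip(frame_times[:-1], frame_times[1:]):
            bucket = []
            while k < len(imu) and imu[k].t < t1:
                if imu[k].t >= t0:
                    bucket.append(imu[k])
                k += 1
            groups.append(bucket)
        return groups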
Step S2: the input images are initialized. FAST key points are first extracted from the images acquired by the camera, a matching relation between adjacent images is established with a pyramid optical-flow method on the basis of the extracted corner points, and the relative pose is solved from the matching result. High-frequency IMU data acquired between adjacent keyframes are then pre-integrated to obtain the pose, velocity and rotation angle at the current moment. Finally, the camera's relative pose computed from the visual solution is aligned with the IMU pre-integration result, roughly recovering the initialization parameters, including scale, gravity, velocity and even bias. The relationship between the image data and the IMU data is shown in FIG. 2; through re-parameterization, pre-integration turns the IMU measurements between keyframes into a constraint on relative motion, avoiding repeated integration whenever the initial conditions change;
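One plausible realization of the FAST key-point extraction and pyramid optical-flow matching described in this step is sketched below in Python using OpenCV (an assumed toolchain; the threshold, window size and pyramid depth are arbitrary example values, not requirements of the patent).

    # Illustrative FAST corner extraction plus pyramidal Lucas-Kanade tracking
    # between two adjacent grayscale frames; returns matched point pairs that a
    # later stage could feed to relative-pose estimation.
    import cv2
    import numpy as np

    fast = cv2.FastFeatureDetector_create(threshold=20, nonmaxSuppression=True)

    def match_adjacent_frames(prev_gray, curr_gray):
        keypoints = fast.detect(prev_gray, None)
        prev_pts = np.float32([kp.pt for kp in keypoints]).reshape(-1, 1, 2)
        # Track the FAST corners into the next frame with pyramid optical flow.
        curr_pts, status, _err = cv2.calcOpticalFlowPyrLK(
            prev_gray, curr_gray, prev_pts, None, winSize=(21, 21), maxLevel=3)
        good = status.reshape(-1) == 1
        return prev_pts[good].reshape(-1, 2), curr_pts[good].reshape(-1, 2)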
Step S3: the image and inertial navigation data are optimized locally. The keyframes and IMU data within a local window are optimized nonlinearly; the visual constraints and IMU constraints are placed in one objective function and solved by least squares, and bundle adjustment computes the poses that minimize the overall error of the local window, yielding a relatively accurate local positioning result. Tightly coupled vision-inertial optimization estimates the state quantities using the image and IMU information together; as shown in FIG. 3, the left side is the structure of pure visual odometry and the right side is the structure of fused visual-inertial odometry. The IMU measurements contain a random-walk bias, and combining each measurement with this bias yields the structure shown on the right; for this new structure, a unified loss function containing the IMU residuals and the reprojection errors must be established for joint optimization.
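To make the joint least-squares idea of this step concrete, the simplified Python sketch below stacks reprojection residuals and a reduced IMU constraint (relative translation only) over the keyframes of one window and hands them to a generic nonlinear solver. Landmarks are held fixed and velocity, gravity and bias states are omitted, so this is an assumption-laden illustration of bundle-adjustment-style optimization, not the patent's exact formulation.

    # Simplified tightly coupled window optimization: each keyframe state is
    # [tx, ty, tz, rx, ry, rz] (translation + axis-angle rotation); visual and
    # IMU residuals are stacked into one vector and minimized jointly.
    import numpy as np
    from scipy.optimize import least_squares
    from scipy.spatial.transform import Rotation

    def reproj_residual(state, landmark, uv_meas, K):
        R = Rotation.from_rotvec(state[3:6]).as_matrix()   # world-to-camera rotation
        p_cam = R @ landmark + state[:3]
        uv = K @ p_cam
        return uv[:2] / uv[2] - uv_meas

    def window_residuals(x, observations, imu_deltas, K):
        states = x.reshape(-1, 6)
        res = []
        for i, landmark, uv in observations:        # visual (reprojection) constraints
            res.extend(reproj_residual(states[i], landmark, uv, K))
        for i, dp in imu_deltas:                    # reduced IMU constraint: relative translation
            res.extend((states[i + 1][:3] - states[i][:3]) - dp)
        return np.asarray(res)

    # sol = least_squares(window_residuals, x0.ravel(), args=(observations, imu_deltas, K))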
Step S4: the accumulated error of pose estimation is corrected through loop detection. The key to loop detection is reliably recognizing that the camera has passed the same place. A bag-of-words model is used here: the scene captured in an image is described by a combination of features, so the relation between the current frame and historical frames is established according to feature similarity. If a loop exists, the current pose can be corrected with the matched feature points, further improving the positioning accuracy; loop detection is therefore essential for positioning in large-scale battlefield environments;
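A minimal bag-of-words loop check, assuming each keyframe has already been quantized into visual-word IDs against some vocabulary, might look like the Python sketch below; production systems typically use TF-IDF weighted vocabularies (e.g. DBoW2), and the similarity threshold here is an arbitrary example value.

    # Each keyframe is summarized as a normalized histogram over visual-word IDs;
    # high cosine similarity with an older keyframe flags a loop-closure candidate.
    import numpy as np

    def bow_vector(word_ids, vocab_size):
        v = np.bincount(word_ids, minlength=vocab_size).astype(float)
        n = np.linalg.norm(v)
        return v / n if n > 0 else v

    def detect_loop(current_words, history_vectors, vocab_size, threshold=0.75):
        cur = bow_vector(current_words, vocab_size)
        if not history_vectors:
            return None, 0.0
        scores = [float(cur @ h) for h in history_vectors]
        best = int(np.argmax(scores))
        return (best, scores[best]) if scores[best] > threshold else (None, 0.0)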
Step S5: when the scene closes a loop or data collection is complete, the camera constraints, IMU constraints and loop-detection constraints are used to perform global optimization on all the data on the basis of the local optimization, and a more accurate pose is output. In particular, local optimization has lower accuracy and poorer global consistency but is fast and uses the IMU heavily, whereas global optimization has higher accuracy and better global consistency but is slow and uses the IMU less; the two are therefore combined organically: while data keep arriving, only the keyframes in the local window are optimized so that a lower-accuracy pose is estimated quickly, the data within the loop range are corrected whenever a loop closes, and global optimization finally completes the overall refinement of the data, further improving the positioning accuracy;
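The effect of the global optimization with a loop-closure constraint can be illustrated by the toy pose-graph sketch below (Python, translations only, with made-up edge values): sequential odometry edges accumulate drift, and adding one loop edge and re-solving redistributes that drift over the whole trajectory.

    # Toy pose graph: positions p0..p3 are linked by odometry edges, plus one
    # loop-closure edge back to the start; solving all constraints jointly
    # spreads the accumulated drift over the whole trajectory.
    import numpy as np
    from scipy.optimize import least_squares

    def graph_residuals(x, edges):
        p = x.reshape(-1, 3)
        res = [(p[j] - p[i]) - dp for i, j, dp in edges]
        res.append(p[0])                              # anchor the first pose at the origin
        return np.concatenate(res)

    edges = [(0, 1, np.array([1.0, 0.0, 0.0])),       # odometry edges
             (1, 2, np.array([1.0, 0.0, 0.0])),
             (2, 3, np.array([1.0, 0.0, 0.0])),
             (3, 0, np.array([-2.9, 0.1, 0.0]))]      # loop-closure edge back to the start
    sol = least_squares(graph_residuals, np.zeros(12), args=(edges,))
    print(sol.x.reshape(-1, 3))                       # drift-corrected positions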
Step S6: on the basis of the global optimization, 6-degree-of-freedom pose information containing translation and rotation is finally output. In addition, whether to output the whole trajectory, the scene point cloud and other information can be decided according to actual needs.
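Finally, as a small illustration of the 6-degree-of-freedom output, the sketch below converts a rotation matrix and a translation vector into translation plus roll/pitch/yaw angles; the dictionary layout is an assumed output format, not something specified by the patent.

    # Package a pose (rotation matrix + translation) as a 6-DOF result.
    import numpy as np
    from scipy.spatial.transform import Rotation

    def pose_to_6dof(R_wb, t_wb):
        roll, pitch, yaw = Rotation.from_matrix(R_wb).as_euler("xyz")   # radians
        return {"x": t_wb[0], "y": t_wb[1], "z": t_wb[2],
                "roll": roll, "pitch": pitch, "yaw": yaw}

    # Example: identity attitude at position (1, 2, 0.5)
    print(pose_to_6dof(np.eye(3), np.array([1.0, 2.0, 0.5])))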
In short, through the above six steps, fusing the visual and inertial navigation information achieves complementary advantages among the different sensors, finally realizes accurate positioning of the unmanned aerial vehicle without satellite navigation signals, and provides reliable technical support for positioning the unmanned aerial vehicle in a low-altitude battlefield environment.
In actual tests, the method continuously corrects the IMU bias through the joint optimization and adapts well to weak-texture and fast-motion conditions; as shown in FIGS. 4 and 5, the measured pose is quite accurate in scenes containing loops. In addition, the output of the system also includes the scene point cloud and the flight trajectory, so the scheme can be applied to fields such as environment map generation and path planning.

Claims (1)

1. A low-altitude unmanned aerial vehicle self-positioning method based on vision and inertial navigation fusion, characterized by comprising the following steps:
Step S1: a monocular camera with a global shutter and an IMU module rigidly attached to the camera are mounted on the unmanned aerial vehicle; image data at a frame rate of 30 fps and IMU data at a frequency of 500 Hz are collected during low-altitude flight and serve as the input of the whole positioning algorithm;
Step S2, initializing the input data: FAST key points are extracted from the images and adjacent images are matched with an optical-flow method, in preparation for subsequently solving the camera pose; meanwhile, the IMU data are pre-integrated, accumulating position, velocity and rotation-angle increments from the high-frequency IMU data acquired between adjacent image frames; the relative pose of adjacent image frames is then solved by SVD (singular value decomposition) from the matching result and aligned with the IMU pre-integration result to serve as the initialization parameters;
Step S3, performing local nonlinear optimization on the image and inertial navigation data: the keyframes and IMU data within a set local window are optimized nonlinearly; the visual constraints and IMU constraints are solved jointly by least squares, and bundle adjustment computes the poses that minimize the overall error of the local window, yielding a relatively accurate local positioning result;
Step S4: the accumulated error of pose estimation is corrected through loop detection; if a loop exists, the common feature points produced by the loop are used as fixed values and the current pose is optimized nonlinearly, further improving the positioning accuracy;
Step S5: on the basis of the camera, IMU and loop-detection constraints, global nonlinear optimization is performed on all the data and a more accurate pose is output;
Step S6: on the basis of the global optimization, 6-degree-of-freedom pose information containing translation and rotation is finally output.
CN202010930651.6A 2020-09-07 2020-09-07 Low-altitude unmanned aerial vehicle self-positioning method based on vision and inertial navigation fusion Pending CN112179338A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010930651.6A CN112179338A (en) 2020-09-07 2020-09-07 Low-altitude unmanned aerial vehicle self-positioning method based on vision and inertial navigation fusion

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010930651.6A CN112179338A (en) 2020-09-07 2020-09-07 Low-altitude unmanned aerial vehicle self-positioning method based on vision and inertial navigation fusion

Publications (1)

Publication Number Publication Date
CN112179338A true CN112179338A (en) 2021-01-05

Family

ID=73925626

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010930651.6A Pending CN112179338A (en) 2020-09-07 2020-09-07 Low-altitude unmanned aerial vehicle self-positioning method based on vision and inertial navigation fusion

Country Status (1)

Country Link
CN (1) CN112179338A (en)

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080195316A1 (en) * 2007-02-12 2008-08-14 Honeywell International Inc. System and method for motion estimation using vision sensors
CN107869989A (en) * 2017-11-06 2018-04-03 东北大学 A kind of localization method and system of the fusion of view-based access control model inertial navigation information
CN109307508A (en) * 2018-08-29 2019-02-05 中国科学院合肥物质科学研究院 A kind of panorama inertial navigation SLAM method based on more key frames
CN109520497A (en) * 2018-10-19 2019-03-26 天津大学 The unmanned plane autonomic positioning method of view-based access control model and imu
CN110726406A (en) * 2019-06-03 2020-01-24 北京建筑大学 Improved nonlinear optimization monocular inertial navigation SLAM method
CN110702107A (en) * 2019-10-22 2020-01-17 北京维盛泰科科技有限公司 Monocular vision inertial combination positioning navigation method
CN111288989A (en) * 2020-02-25 2020-06-16 浙江大学 Visual positioning method for small unmanned aerial vehicle

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Qiu, Xiaochen; Zhang, Hai; Fu, Wenxing; et al.: "Monocular Visual-Inertial Odometry with an Unbiased Linear System Model and Robust Feature Tracking Front-End", Sensors *
Wang, Chenxi: "Research on pose estimation methods based on the fusion of IMU and monocular vision", China Masters' Theses Full-text Database (Master), Information Science and Technology series *

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112991515A (en) * 2021-02-26 2021-06-18 山东英信计算机技术有限公司 Three-dimensional reconstruction method, device and related equipment
CN113570667A (en) * 2021-09-27 2021-10-29 北京信息科技大学 Visual inertial navigation compensation method and device and storage medium
CN115311353A (en) * 2022-08-29 2022-11-08 上海鱼微阿科技有限公司 Multi-sensor multi-handle controller graph optimization tight coupling tracking method and system
CN115311353B (en) * 2022-08-29 2023-10-10 玩出梦想(上海)科技有限公司 Multi-sensor multi-handle controller graph optimization tight coupling tracking method and system
CN115523920A (en) * 2022-11-30 2022-12-27 西北工业大学 Seamless positioning method based on visual inertial GNSS tight coupling
CN115523920B (en) * 2022-11-30 2023-03-10 西北工业大学 Seamless positioning method based on visual inertial GNSS tight coupling

Similar Documents

Publication Publication Date Title
CN110033489B (en) Method, device and equipment for evaluating vehicle positioning accuracy
CN110070615B (en) Multi-camera cooperation-based panoramic vision SLAM method
CN109709801B (en) Indoor unmanned aerial vehicle positioning system and method based on laser radar
CN112179338A (en) Low-altitude unmanned aerial vehicle self-positioning method based on vision and inertial navigation fusion
Rosinol et al. Incremental visual-inertial 3d mesh generation with structural regularities
CN112230242B (en) Pose estimation system and method
CN107167826B (en) Vehicle longitudinal positioning system and method based on variable grid image feature detection in automatic driving
CN110726406A (en) Improved nonlinear optimization monocular inertial navigation SLAM method
CN112461210B (en) Air-ground cooperative building surveying and mapping robot system and surveying and mapping method thereof
CN113763548B (en) Vision-laser radar coupling-based lean texture tunnel modeling method and system
CN112556719B (en) Visual inertial odometer implementation method based on CNN-EKF
CN111077907A (en) Autonomous positioning method of outdoor unmanned aerial vehicle
CN114019552A (en) Bayesian multi-sensor error constraint-based location reliability optimization method
CN112669354A (en) Multi-camera motion state estimation method based on vehicle incomplete constraint
CN110986888A (en) Aerial photography integrated method
CN114088087A (en) High-reliability high-precision navigation positioning method and system under unmanned aerial vehicle GPS-DENIED
CN114234967B (en) Six-foot robot positioning method based on multi-sensor fusion
CN116007609A (en) Positioning method and computing system for fusion of multispectral image and inertial navigation
CN114966789A (en) Mapping method and system fusing GNSS and multi-view vision
CN112577499B (en) VSLAM feature map scale recovery method and system
CN116989772B (en) Air-ground multi-mode multi-agent cooperative positioning and mapping method
CN117115271A (en) Binocular camera external parameter self-calibration method and system in unmanned aerial vehicle flight process
CN116907469A (en) Synchronous positioning and mapping method and system for multi-mode data combined optimization
CN115930948A (en) Orchard robot fusion positioning method
CN116027351A (en) Hand-held/knapsack type SLAM device and positioning method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20210105