CN114543786B - Wall climbing robot positioning method based on visual inertial odometer - Google Patents

Wall climbing robot positioning method based on visual inertial odometer

Info

Publication number
CN114543786B
CN114543786B (application CN202210337210.4A)
Authority
CN
China
Prior art keywords
climbing robot
wall
visual
model
robot
Prior art date
Legal status
Active
Application number
CN202210337210.4A
Other languages
Chinese (zh)
Other versions
CN114543786A (en)
Inventor
陶波
顾振峰
龚泽宇
谭科
王健
Current Assignee
Huazhong University of Science and Technology
Original Assignee
Huazhong University of Science and Technology
Priority date
Filing date
Publication date
Application filed by Huazhong University of Science and Technology filed Critical Huazhong University of Science and Technology
Priority to CN202210337210.4A
Publication of CN114543786A
Application granted
Publication of CN114543786B
Legal status: Active


Classifications

    • G01C21/005 Navigation; navigational instruments not provided for in groups G01C1/00-G01C19/00, with correlation of navigation data from several sources, e.g. map or contour matching
    • G01C21/1656 Navigation by dead reckoning, integrating acceleration or speed, i.e. inertial navigation, combined with non-inertial navigation instruments with passive imaging devices, e.g. cameras
    • G01C21/20 Instruments for performing navigational calculations
    • Y02T10/40 Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Automation & Control Theory (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Manipulator (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

The invention belongs to the field of wall climbing robot positioning, and particularly discloses a wall climbing robot positioning method based on a visual inertial odometer, which comprises the following steps: acquiring measurement data through a visual inertial odometer on the wall climbing robot; according to the acquired measurement data, solving state variables by using a conventional bundle adjustment model in a sliding window mode; according to the coordinate system transformation relation between the robot and the component, projecting the position of the robot in the sliding window onto the component, and obtaining the position of the projection point and the surface normal at the projection point, namely the adsorption information; constructing an adsorption constraint term according to the adsorption information, and adding it into the conventional bundle adjustment model to form an improved bundle adjustment model; and solving the state variables according to the improved bundle adjustment model to obtain the optimal pose of the robot and realize the positioning of the wall-climbing robot. The invention can reduce the accumulated error of the odometer, improve the accuracy and robustness of the positioning algorithm, and realize large-scale high-accuracy positioning of the wall-climbing robot.

Description

Wall climbing robot positioning method based on visual inertial odometer
Technical Field
The invention belongs to the field of wall climbing robot positioning, and particularly relates to a wall climbing robot positioning method based on a visual inertial odometer.
Background
Autonomous positioning is a key technology for mobile robots and an important precondition for autonomous movement of wall climbing robots. Visual inertial odometry (VIO) combines the complementary advantages of a camera and an inertial measurement unit (IMU) and is widely used for autonomous positioning of mobile robots. However, for some wall climbing robots the on-board camera can only observe the adsorption surface, so the field of view is small, and the vacuum fan vibrates strongly, which degrades system precision and robustness. Conventional VIO has four unobservable degrees of freedom (global translation and rotation about the gravity direction, i.e., the yaw angle), so prior information is required to determine the initial pose and yaw. Accumulated errors arise during movement, and yaw angle errors in particular have a large influence on the positioning result; when the VIO system degrades or shakes strongly, it easily diverges and positioning fails.
Therefore, the insufficient robustness of conventional visual inertial odometry in wall climbing robot positioning is a problem to be solved in the field.
Disclosure of Invention
Aiming at the above defects and improvement needs of the prior art, the invention provides a wall climbing robot positioning method based on a visual inertial odometer, which aims to improve the robustness and precision of the visual inertial odometer model when positioning the wall climbing robot and to realize large-scale high-precision positioning of the wall climbing robot.
In order to achieve the above purpose, the invention provides a wall climbing robot positioning method based on a visual inertial odometer, which comprises the following steps:
s1, acquiring measurement data through a visual inertial odometer on the wall climbing robot in the process that the wall climbing robot is adsorbed on the surface of a component to move;
s2, according to the acquired measurement data, solving a state variable by using a bundle set model of a conventional visual inertial odometer, and processing the measurement data and the state variable of the robot in a sliding window mode at present and before a period of time during solving; the state variables include robot position and pose;
s3, according to the coordinate system transformation relation between the robot and the component, the robot position in the sliding window is projected onto the component, and the projection point position and the surface normal of the projection point, namely adsorption information, are obtained;
s4, constructing an adsorption constraint item according to the adsorption information, and adding the adsorption constraint item into a conventional bundling model to form an improved bundling model;
and S5, solving the state variables according to the improved bundling model to obtain the optimal pose of the robot and realize the positioning of the wall-climbing robot.
As a further preference, the adsorption constraint term $r_{pr}(\hat{z}_k^{pr}, \chi)$ is constructed as follows:
$$r_{pr}\left(\hat{z}_k^{pr}, \chi\right)=\begin{bmatrix} p_k-\hat{p}_k \\ 1-m^{\top} n \end{bmatrix}$$
wherein $p_k$ is the position of the robot at time k, $\hat{p}_k$ is the position of the projection point at time k, m is the direction vector of the central axis of the robot, and n is the direction vector of the surface normal at the projection point.
As a further preference, the acquiring of measurement data specifically comprises: reading image data acquired by a camera and extracting image features; and simultaneously reading inertial measurement unit data and performing pre-integration to obtain the IMU pre-integration between consecutive image frames.
As a further preference, the conventional bundle adjustment model includes a prior constraint term, an inertial constraint term and a visual constraint term, wherein the prior constraint term is generated by the marginalization operation of the sliding window, the inertial constraint term is constructed from the IMU pre-integration, and the visual constraint term is constructed from the image data and image features.
As a further preference, the improved bundle adjustment model is specifically:
$$\min_{\chi}\left\{\left\|r_p(\chi)\right\|^2+\sum_{k \in B}\left\|r_B\left(\hat{z}_{b_{k+1}}^{b_k}, \chi\right)\right\|^2+\sum_{(l, j) \in C} \rho\left(\left\|r_C\left(\hat{z}_{c_l}^{c_j}, \chi\right)\right\|^2\right)+\sum_{k \in PR} \rho\left(\left\|r_{pr}\left(\hat{z}_k^{pr}, \chi\right)\right\|^2\right)\right\}$$
wherein χ is the state variable in the sliding window; $r_p(\chi)$ is the prior constraint term; $r_B(\hat{z}_{b_{k+1}}^{b_k}, \chi)$ is the inertial constraint term, B is the set of IMU pre-integrations in the sliding window, and $\hat{z}_{b_{k+1}}^{b_k}$ is the IMU pre-integration from time k to time k+1; $r_C(\hat{z}_{c_l}^{c_j}, \chi)$ is the visual constraint term, C is the set of visual measurements in the sliding window, (l, j) is a combination of any two different images in the set, $\hat{z}_{c_l}^{c_j}$ is the visual measurement from $c_l$ to $c_j$, where $c_l$ and $c_j$ are observations of one image feature in images l and j respectively; ρ(·) denotes a robust kernel function; PR is the set of adsorption information in the sliding window, and $\hat{z}_k^{pr}$ is the adsorption information at time k.
As a further preference, the robust kernel function ρ(·) is a Cauchy robust kernel function.
As a further preference, the state variables are solved according to the improved bundle adjustment model using a nonlinear optimization method.
As a further preference, the nonlinear optimization method is the Levenberg-Marquardt algorithm.
As a further preference, the initial coordinate system transformation relation between the robot and the adsorbed member is obtained in advance before the robot moves, specifically by measurement with an external sensor or by setting in advance a specific placement position for the robot on the member.
As a further preference, a nearest-neighbor search method is used when projecting the robot position in the sliding window onto the component.
In general, compared with the prior art, the above technical solution conceived by the present invention mainly has the following technical advantages:
1. The invention combines the surface data of the object to which the wall climbing robot is adsorbed and adds an adsorption-surface constraint to the traditional visual inertial odometer model. Under conditions of strong vibration or a limited camera field of view, this reduces the accumulated error of the odometer and improves the accuracy and robustness of the visual inertial odometer, thereby improving the positioning accuracy of the wall climbing robot. Without introducing any new equipment or sensors, the invention can greatly improve the estimation precision of the relative pose of the wall climbing robot, and provides a new and effective method for large-scale high-precision positioning of wall climbing robots.
2. The invention processes the data in a sliding window mode, so that each time the robot pose is solved, a batch of data of fixed scale is processed, comprising the sensor measurements and robot state variables of the most recent period; when a new sensor measurement is acquired, the oldest data are marginalized so that the solving scale of the optimization problem remains unchanged. Relating the current measurement to the measurements and state variables of the preceding period yields better accuracy, while the relatively fixed scale of the optimization problem keeps the computational cost from growing without bound.
3. The invention selects a Cauchy robust kernel in the adsorption constraint term to suppress outliers, which reduces the influence of erroneous data and improves the robustness of the algorithm.
4. The nonlinear optimization method for solving the bundle adjustment model preferably adopts the Levenberg-Marquardt algorithm, which has the advantage of fast convergence.
Drawings
FIG. 1 is a schematic diagram of state variables of a sliding window participation optimization according to an embodiment of the present invention;
fig. 2 is a flowchart of a positioning method of a wall climbing robot based on a visual inertial odometer according to an embodiment of the invention.
Detailed Description
The present invention will be described in further detail below with reference to the drawings and embodiments, in order to make the objects, technical solutions and advantages of the present invention more apparent. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention. In addition, the technical features of the embodiments of the present invention described below may be combined with each other as long as they do not conflict with each other.
The embodiment of the invention provides a wall climbing robot positioning method based on a visual inertial odometer, which is shown in fig. 2 and comprises the following steps:
before starting positioning, a visual inertial odometer is installed on the wall climbing robot, and the visual inertial odometer is initialized; the visual inertial odometer comprises a camera and an Inertial Measurement Unit (IMU), and the visual inertial odometer initialization method comprises, but is not limited to, calibrating internal and external parameters of the camera and the IMU in advance, and performing dynamic motion alignment on visual and IMU tracks. The large complex components absorbed by the wall climbing robot can acquire the normal direction on any position of the surface of the large complex components, including but not limited to scanning point clouds, CAD models and the like.
In particular, the camera is a sensor that can acquire external image data in real time, including but not limited to a monocular camera, a depth camera, an RGBD camera; preferably, a depth camera is adopted, which can acquire image depth information and gray information, extract angular point features in an image by using OpenCV, track the features by using an optical flow method, and output feature positions and depths to the rear end of the visual inertial odometer. The inertial measurement unit comprises an accelerometer and a gyroscope, and is a sensor capable of measuring self acceleration and angular velocity.
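For illustration only, a minimal sketch of such a front end is given below, assuming OpenCV is used as described; the parameter values (corner count, quality level, minimum distance) are illustrative assumptions, not values prescribed by the invention.
```python
# Minimal front-end sketch: Shi-Tomasi corners + pyramidal Lucas-Kanade
# optical flow tracking, as described above. Parameters are illustrative.
import cv2
import numpy as np

def track_features(prev_gray: np.ndarray, gray: np.ndarray, prev_pts=None):
    """Extract corners on the first call, then track them frame to frame."""
    if prev_pts is None or len(prev_pts) < 50:
        # Re-detect corners when too few features survive tracking
        prev_pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=150,
                                           qualityLevel=0.01, minDistance=20)
    next_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray,
                                                   prev_pts, None)
    good = status.ravel() == 1
    return prev_pts[good], next_pts[good]   # matched feature positions
```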
S1, while the wall climbing robot moves adsorbed on the surface of a component, measurement data are acquired through the visual inertial odometer on the wall climbing robot.
Specifically, the acquiring of measurement data includes: reading the image data acquired by the camera, enhancing and correcting it, and extracting image features; and simultaneously reading the inertial measurement unit data and performing pre-integration to obtain the IMU pre-integration between consecutive image frames.
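A minimal sketch of the pre-integration between two image frames is given below; it assumes bias-free IMU samples and simple Euler integration, whereas a full implementation (e.g., VINS-Mono) would also propagate biases and covariance.
```python
# Pre-integration sketch: raw accelerometer/gyroscope samples between two
# image frames are folded into relative position/velocity/rotation increments
# expressed in the body frame of the earlier frame. Biases are ignored here.
import numpy as np

def rotvec_to_matrix(theta: np.ndarray) -> np.ndarray:
    """Rotation matrix from a rotation vector (Rodrigues' formula)."""
    angle = np.linalg.norm(theta)
    if angle < 1e-12:
        return np.eye(3)
    k = theta / angle
    K = np.array([[0, -k[2], k[1]], [k[2], 0, -k[0]], [-k[1], k[0], 0]])
    return np.eye(3) + np.sin(angle) * K + (1 - np.cos(angle)) * (K @ K)

def preintegrate(accels, gyros, dts):
    """Integrate IMU samples (lists of 3-vectors and time steps)."""
    dp, dv, dR = np.zeros(3), np.zeros(3), np.eye(3)
    for a, w, dt in zip(accels, gyros, dts):
        dp = dp + dv * dt + 0.5 * (dR @ a) * dt ** 2
        dv = dv + (dR @ a) * dt
        dR = dR @ rotvec_to_matrix(w * dt)   # attitude update
    return dp, dv, dR   # pre-integrated measurement between the two frames
```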
S2, the measurement data and the state variables of the robot over the current and a recent period of time are processed in a sliding window mode. Specifically, as shown in fig. 1, the sliding window means that each time the robot pose is solved, a batch of data of fixed scale is processed, comprising the sensor measurements and robot state variables of the most recent period; when a new sensor measurement is acquired, the earliest data are marginalized to keep the solving scale of the optimization problem unchanged.
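A minimal sketch of this sliding-window bookkeeping is shown below; the window size is an illustrative assumption and the marginalization step is left as a placeholder.
```python
# Sliding-window sketch: a fixed-size buffer of (state, measurements) frames.
# When full, the oldest frame is removed and, in a full system, marginalized
# into the prior term so the optimization problem keeps a fixed scale.
from collections import deque

class SlidingWindow:
    def __init__(self, size: int = 10):   # window size is illustrative
        self.size = size
        self.frames = deque()

    def push(self, state, measurements):
        if len(self.frames) == self.size:
            oldest = self.frames.popleft()
            self.marginalize(oldest)       # keep the solving scale unchanged
        self.frames.append((state, measurements))

    def marginalize(self, frame):
        # Placeholder: a real implementation applies the Schur complement
        # here, folding the removed frame's information into the prior r_p.
        pass
```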
According to the acquired measurement data, the state variables are solved using the conventional bundle adjustment model; the state variables at each moment contain the robot pose, i.e. the position and attitude of the robot.
Specifically, the conventional bundle adjustment model is as follows:
$$\min_{\chi}\left\{\left\|r_p(\chi)\right\|^2+\sum_{k \in B}\left\|r_B\left(\hat{z}_{b_{k+1}}^{b_k}, \chi\right)\right\|^2+\sum_{(l, j) \in C} \rho\left(\left\|r_C\left(\hat{z}_{c_l}^{c_j}, \chi\right)\right\|^2\right)\right\}$$
wherein χ is the state variable in the sliding window; $r_p(\chi)$ is the prior constraint term generated by the marginalization operation of the sliding window; $r_B(\hat{z}_{b_{k+1}}^{b_k}, \chi)$ is the inertial constraint term, B is the set of IMU pre-integrations in the sliding window, and $\hat{z}_{b_{k+1}}^{b_k}$ is the IMU pre-integration from time k to time k+1; $r_C(\hat{z}_{c_l}^{c_j}, \chi)$ is the visual constraint term, C is the set of visual measurements in the sliding window, (l, j) is a combination of any two different images in the set, $\hat{z}_{c_l}^{c_j}$ is the visual measurement from $c_l$ to $c_j$, where $c_l$ and $c_j$ are observations of one image feature in images l and j respectively; ρ(·) denotes a robust kernel function.
The conventional bundle adjustment model is the sliding-window model commonly used in conventional visual inertial odometry; see Qin, T., P. Li and S. Shen, "VINS-Mono: A Robust and Versatile Monocular Visual-Inertial State Estimator," IEEE Transactions on Robotics, 2018, 34(4): p. 1004-1020.
S3, according to the coordinate system transformation relation between the robot and the component, the robot position in the sliding window is projected onto the surface of the adsorbed component, and the projection point position and the surface normal at the projection point, namely the adsorption information, are obtained.
Specifically, before the wall climbing robot works for the first time, the coordinate system transformation relation between the robot and the adsorption member is calibrated in advance, namely, the initial global position of the robot relative to the adsorption member is obtained, wherein the initial global position refers to the position on the member when the robot is initialized, and the position is not required to be updated in the whole positioning process; acquisition methods include, but are not limited to, measurement by external sensors (e.g., laser trackers), pre-positioning specific locations on the component where the robot is placed, etc.
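For illustration, applying the calibrated transformation before projection might look like the sketch below, where T_cw, a 4x4 homogeneous transform from the odometer frame to the component frame, is assumed to come from the initial calibration just described.
```python
# Sketch of mapping an odometer-frame robot position into the component frame
# using the pre-calibrated transform T_cw (an assumed 4x4 homogeneous matrix).
import numpy as np

def to_component_frame(T_cw: np.ndarray, p_w: np.ndarray) -> np.ndarray:
    """Transform a 3D point from the odometer frame into the component frame."""
    p_h = np.append(p_w, 1.0)      # homogeneous coordinates
    return (T_cw @ p_h)[:3]
```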
Preferably, a nearest-neighbor search is used when projecting the robot position onto the component: the point of the component model closest to the trajectory point is taken as the projection point, and its position and surface normal are obtained.
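A minimal sketch of this projection is given below, assuming the component model is available as a scanned point cloud with per-point normals and using a k-d tree for the nearest-neighbor query.
```python
# Projection sketch: the model point nearest to the estimated robot position
# is taken as the projection point; its stored normal is returned with it.
import numpy as np
from scipy.spatial import cKDTree

class SurfaceModel:
    def __init__(self, points: np.ndarray, normals: np.ndarray):
        self.points = points        # (N, 3) surface point cloud
        self.normals = normals      # (N, 3) unit normals at each point
        self.tree = cKDTree(points)

    def project(self, robot_pos: np.ndarray):
        """Return (projection point, surface normal) nearest to robot_pos."""
        _, idx = self.tree.query(robot_pos)
        return self.points[idx], self.normals[idx]
```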
S4, an adsorption constraint term is constructed within the sliding window according to the adsorption information and added into the conventional bundle adjustment model to form the improved bundle adjustment model.
Specifically, the improved bundle adjustment model is:
$$\min_{\chi}\left\{\left\|r_p(\chi)\right\|^2+\sum_{k \in B}\left\|r_B\left(\hat{z}_{b_{k+1}}^{b_k}, \chi\right)\right\|^2+\sum_{(l, j) \in C} \rho\left(\left\|r_C\left(\hat{z}_{c_l}^{c_j}, \chi\right)\right\|^2\right)+\sum_{k \in PR} \rho\left(\left\|r_{pr}\left(\hat{z}_k^{pr}, \chi\right)\right\|^2\right)\right\}$$
wherein $r_{pr}(\hat{z}_k^{pr}, \chi)$ is the adsorption constraint term; ρ(·) denotes a robust kernel function, preferably a Cauchy robust kernel; PR is the set of adsorption information in the sliding window, and $\hat{z}_k^{pr}$ is the adsorption information at time k.
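For illustration, the assembly of this cost can be sketched as follows; the residual functions are assumed to be supplied by the rest of the system, and the Cauchy kernel scale is an illustrative assumption.
```python
# Improved-cost sketch: prior + inertial + visual + adsorption terms, with
# the visual and adsorption residuals passed through a Cauchy robust kernel.
import numpy as np

def cauchy(s: float, c: float = 1.0) -> float:
    """Cauchy robust kernel applied to a squared residual s (scale c assumed)."""
    return c ** 2 * np.log(1.0 + s / c ** 2)

def improved_cost(chi, prior_res, imu_res, vis_res, adsorb_res):
    """Each *_res argument is a list of functions chi -> residual vector."""
    cost = np.sum(prior_res(chi) ** 2)                            # r_p term
    cost += sum(np.sum(r(chi) ** 2) for r in imu_res)             # r_B terms
    cost += sum(cauchy(np.sum(r(chi) ** 2)) for r in vis_res)     # r_C terms
    cost += sum(cauchy(np.sum(r(chi) ** 2)) for r in adsorb_res)  # r_pr terms
    return cost
```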
More specifically, the adsorption constraint term is constructed as follows:
$$r_{pr}\left(\hat{z}_k^{pr}, \chi\right)=\begin{bmatrix} p_k-\hat{p}_k \\ 1-m^{\top} n \end{bmatrix}$$
wherein $p_k$ is the position of the robot at time k, $\hat{p}_k$ is the position of the projection point at time k, m is the direction vector of the central axis of the robot, and n is the direction vector of the surface normal at the projection point. The first component penalizes deviation of the robot from the adsorption surface, and the second penalizes misalignment between the central axis and the surface normal.
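A minimal sketch of this residual under the reconstruction above, with m and n normalized before use:
```python
# Adsorption residual sketch: position error between the robot and its
# projection point, stacked with the axis-normal misalignment 1 - m.n.
import numpy as np

def adsorption_residual(p_k, p_hat_k, m, n):
    m = m / np.linalg.norm(m)          # robot central-axis direction
    n = n / np.linalg.norm(n)          # surface normal at projection point
    pos_err = p_k - p_hat_k            # robot should lie on the surface
    align_err = 1.0 - m @ n            # axis should align with the normal
    return np.concatenate([pos_err, [align_err]])
```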
And S5, the state variables are solved according to the improved bundle adjustment model by a nonlinear optimization method to obtain the optimal pose of the robot and realize the positioning of the wall climbing robot.
Preferably, the nonlinear optimization method is the Levenberg-Marquardt algorithm.
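For illustration, the window states could be solved with SciPy's Levenberg-Marquardt implementation, assuming the state χ is flattened into a vector and all residual blocks are stacked into a single vector-valued function.
```python
# Solver sketch: stacked residuals minimized by Levenberg-Marquardt.
import numpy as np
from scipy.optimize import least_squares

def solve_window(chi0: np.ndarray, stacked_residuals):
    """stacked_residuals(chi) -> concatenated residual vector of the window."""
    result = least_squares(stacked_residuals, chi0, method='lm')
    return result.x   # optimal pose/state estimate within the window
```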
And S6, receiving new sensor information, updating the sliding window, repeating the steps S2-S5, and updating the pose of the robot.
It will be readily appreciated by those skilled in the art that the foregoing description is merely a preferred embodiment of the invention and is not intended to limit the invention, but any modifications, equivalents, improvements or alternatives falling within the spirit and principles of the invention are intended to be included within the scope of the invention.

Claims (6)

1. A wall climbing robot positioning method based on a visual inertial odometer is characterized by comprising the following steps:
s1, acquiring measurement data through a visual inertial odometer on the wall climbing robot in the process that the wall climbing robot is adsorbed on the surface of a component to move;
s2, according to the acquired measurement data, solving a state variable by using a bundle set model of a conventional visual inertial odometer, and processing the measurement data and the state variable of the wall-climbing robot in a sliding window mode at present and before a period of time during solving; the state variables comprise the position and the gesture of the wall climbing robot;
s3, projecting the position of the wall climbing robot in the sliding window onto the component according to the coordinate system transformation relation between the wall climbing robot and the component, and acquiring the position of the projection point and the normal direction of the surface of the projection point, namely adsorption information;
s4, constructing an adsorption constraint item according to the adsorption information, and adding the adsorption constraint item into a conventional bundling model to form an improved bundling model;
s5, solving the state variables according to the improved bundling model to obtain the optimal pose of the wall-climbing robot, and positioning the wall-climbing robot;
the obtaining measurement data specifically includes: reading image data acquired by a camera and extracting image features; simultaneously reading the data of the inertial measurement unit, and performing pre-integration processing to obtain IMU data integration among image data; the conventional bundling model comprises a priori constraint item, an inertial constraint item and a visual constraint item, wherein the priori constraint item is generated by the marginalization operation of a sliding window, the inertial constraint item is constructed according to IMU data integration, and the visual constraint item is constructed according to image data and image characteristics;
adsorption constraint terms in the improved bundle modelThe construction method of (2) is as follows:
wherein,for the position of the wall climbing robot at time k,/>the position of the projection point at the moment k is m, the direction vector of the central axis of the wall climbing robot is m, and the direction vector of the surface normal of the projection point is n;
the improved bundle model is specifically as follows:
wherein χ is a state variable in the sliding window; r is (r) p (χ) is an a priori constraint term;is an inertial constraint term, B is an IMU data integral set in a sliding window, and +.>Integrating IMU data from k to k+1; />Is a visual constraint item, C is a visual measurement set in a sliding window, (l, j) is a combination of any two different image data in the visual measurement set, +.>C is l To c j C) l 、c j One image feature in the images l and j respectively; ρ (·) represents a robust kernel function; PR is the set of adsorption information in the sliding window, < >>The adsorption information at time k.
2. The visual inertial odometer-based wall climbing robot positioning method of claim 1, wherein the robust kernel function ρ(·) is a Cauchy robust kernel function.
3. The visual inertial odometer-based wall climbing robot positioning method of claim 1, wherein the state variables are solved according to the improved bundle adjustment model using a nonlinear optimization method.
4. The wall-climbing robot positioning method based on a visual inertial odometer according to claim 3, wherein the nonlinear optimization method is the Levenberg-Marquardt algorithm.
5. The method for positioning the wall-climbing robot based on the visual inertial odometer according to claim 1, wherein the initial coordinate system transformation relation between the wall-climbing robot and the component is obtained in advance before the wall-climbing robot moves, by measurement with an external sensor or by setting in advance a specific placement position for the wall-climbing robot on the component.
6. A method of positioning a wall climbing robot based on a visual inertial odometer according to any of claims 1-5, wherein the wall climbing robot position in the sliding window is projected onto the member using a nearest-neighbor search method.
CN202210337210.4A 2022-03-31 2022-03-31 Wall climbing robot positioning method based on visual inertial odometer Active CN114543786B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210337210.4A CN114543786B (en) 2022-03-31 2022-03-31 Wall climbing robot positioning method based on visual inertial odometer

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210337210.4A CN114543786B (en) 2022-03-31 2022-03-31 Wall climbing robot positioning method based on visual inertial odometer

Publications (2)

Publication Number Publication Date
CN114543786A CN114543786A (en) 2022-05-27
CN114543786B 2024-02-02

Family

ID=81665675

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210337210.4A Active CN114543786B (en) 2022-03-31 2022-03-31 Wall climbing robot positioning method based on visual inertial odometer

Country Status (1)

Country Link
CN (1) CN114543786B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116242366B (en) * 2023-03-23 2023-09-12 广东省特种设备检测研究院东莞检测院 Spherical tank inner wall climbing robot walking space tracking and navigation method


Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR100812724B1 (en) * 2006-09-29 2008-03-12 삼성중공업 주식회사 Multi function robot for moving on wall using indoor global positioning system
US11687086B2 (en) * 2020-07-09 2023-06-27 Brookhurst Garage, Inc. Autonomous robotic navigation in storage site

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108489482A (en) * 2018-02-13 2018-09-04 视辰信息科技(上海)有限公司 The realization method and system of vision inertia odometer
WO2019157925A1 (en) * 2018-02-13 2019-08-22 视辰信息科技(上海)有限公司 Visual-inertial odometry implementation method and system
CN110345944A (en) * 2019-05-27 2019-10-18 浙江工业大学 Merge the robot localization method of visual signature and IMU information
CN110375738A (en) * 2019-06-21 2019-10-25 西安电子科技大学 A kind of monocular merging Inertial Measurement Unit is synchronous to be positioned and builds figure pose calculation method
CN110717927A (en) * 2019-10-10 2020-01-21 桂林电子科技大学 Indoor robot motion estimation method based on deep learning and visual inertial fusion
CN113358117A (en) * 2021-03-09 2021-09-07 北京工业大学 Visual inertial indoor positioning method using map
CN113091738A (en) * 2021-04-09 2021-07-09 安徽工程大学 Mobile robot map construction method based on visual inertial navigation fusion and related equipment
CN113432593A (en) * 2021-06-25 2021-09-24 北京华捷艾米科技有限公司 Centralized synchronous positioning and map construction method, device and system

Also Published As

Publication number Publication date
CN114543786A (en) 2022-05-27

Similar Documents

Publication Publication Date Title
CN109993113B (en) Pose estimation method based on RGB-D and IMU information fusion
CN112347840B (en) Vision sensor laser radar integrated unmanned aerial vehicle positioning and image building device and method
CN111207774B (en) Method and system for laser-IMU external reference calibration
CN111156998B (en) Mobile robot positioning method based on RGB-D camera and IMU information fusion
CN111024066B (en) Unmanned aerial vehicle vision-inertia fusion indoor positioning method
JP4876204B2 (en) Small attitude sensor
CN107909614B (en) Positioning method of inspection robot in GPS failure environment
Xiong et al. G-VIDO: A vehicle dynamics and intermittent GNSS-aided visual-inertial state estimator for autonomous driving
CN111380514A (en) Robot position and posture estimation method and device, terminal and computer storage medium
CN110726406A (en) Improved nonlinear optimization monocular inertial navigation SLAM method
JP2012173190A (en) Positioning system and positioning method
CN110207693B (en) Robust stereoscopic vision inertial pre-integration SLAM method
CN114526745A (en) Drawing establishing method and system for tightly-coupled laser radar and inertial odometer
CN111156997A (en) Vision/inertia combined navigation method based on camera internal parameter online calibration
CN112254729A (en) Mobile robot positioning method based on multi-sensor fusion
CN110702113B (en) Method for preprocessing data and calculating attitude of strapdown inertial navigation system based on MEMS sensor
CN114693754B (en) Unmanned aerial vehicle autonomous positioning method and system based on monocular vision inertial navigation fusion
CN111609868A (en) Visual inertial odometer method based on improved optical flow method
CN115479598A (en) Positioning and mapping method based on multi-sensor fusion and tight coupling system
CN114543786B (en) Wall climbing robot positioning method based on visual inertial odometer
JP2014240266A (en) Sensor drift amount estimation device and program
CN112179373A (en) Measuring method of visual odometer and visual odometer
CN114690229A (en) GPS-fused mobile robot visual inertial navigation method
CN115015956A (en) Laser and vision SLAM system of indoor unmanned vehicle
Tang et al. Exploring the accuracy potential of IMU preintegration in factor graph optimization

Legal Events

PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant