CN113639743B - Visual inertial SLAM positioning method based on pedestrian step information assistance - Google Patents

Visual inertial SLAM positioning method based on pedestrian step information assistance

Info

Publication number
CN113639743B
Application number
CN202110723640.5A
Authority
CN (China)
Legal status
Active
Other languages
Chinese (zh)
Other versions
CN113639743A
Inventors
董艺彤
施闯
李团
闫大禹
Current and original assignee
Beihang University
Application filed by Beihang University
Priority to CN202110723640.5A
Publication of CN113639743A
Application granted
Publication of CN113639743B

Classifications

    • G01C21/1656: Navigation by integrating acceleration or speed, i.e. inertial navigation, combined with non-inertial navigation instruments, with passive imaging devices, e.g. cameras
    • G01C21/3841: Creation or updating of electronic map data from data obtained from two or more sources, e.g. probe vehicles


Abstract

The invention relates to the technical field of INS/visual integrated navigation and positioning on smartphones, and provides a visual inertial SLAM positioning method based on pedestrian step information assistance, comprising the following steps. Step S100: perform initialization using visual and inertial information simultaneously. Step S200: perform front-end image optical-flow tracking and IMU pre-integration. Step S300: construct a cost function in the visual inertial SLAM from visual information, IMU information and prior information, add to the cost function a residual term constructed from pedestrian step-length and velocity observations, and optimize it. Step S400: design strong constraints for extreme scenes where visual tracking fails. In an indoor pedestrian positioning scenario, the method fuses pedestrian step-length information through nonlinear optimization; when the system diverges noticeably, the pose derived from the step-length information is used to fix the corresponding poses in the sliding window, and optimization then proceeds on that basis, avoiding the rapid short-term divergence of the IMU positioning result.

Description

Visual inertial SLAM positioning method based on pedestrian step information assistance
Technical Field
The invention belongs to the technical field of INS/visual integrated navigation and positioning on smartphones, and in particular relates to a visual inertial SLAM positioning method based on pedestrian step information assistance.
Background
The deployment of Global Navigation Satellite Systems (GNSS) provides people with accurate and reliable location services in all weather conditions, and a very mature and complete outdoor positioning service system has been formed. With the continuous development of society, the development of urban and underground spaces keeps accelerating, and people now spend more than 90% of their time living and working indoors or underground. In such environments people need to know where they are and understand their surroundings in order to reach a destination, so indoor positioning and navigation plays an increasingly important role in daily life.
Because indoor environments are shielded by walls, the widely used satellite signals are severely attenuated and affected by non-line-of-sight and multipath effects, so satellite navigation cannot be applied effectively indoors. SLAM (Simultaneous Localization and Mapping), the technology widely used in indoor positioning today, refers to building a model of the environment during motion while simultaneously estimating one's own motion, without any prior information about the environment. Visual inertial SLAM combines visual and inertial information, and achieves good results in ideal environments with rich texture and gentle motion. However, indoor scenes such as shopping malls and corridors contain extreme conditions such as the disappearance of image texture and obvious camera shake; in these cases the visual information fails rapidly, and the IMU (Inertial Measurement Unit) positioning result diverges rapidly within a short time.
Disclosure of Invention
The invention provides a visual inertial SLAM positioning method based on pedestrian step information assistance, aiming to solve the problem that in indoor scenes with extreme conditions such as disappearing image texture and obvious camera shake, the visual information fails rapidly and the IMU positioning result diverges rapidly within a short time.
The invention is realized as follows: the visual inertial SLAM positioning method based on pedestrian step-length information assistance comprises the following steps:
step S100: performing initialization operation simultaneously by utilizing visual and inertial information;
step S200: front-end image optical flow tracking and IMU pre-integration;
step S300: simultaneously constructing a cost function from visual information, IMU information and prior information in the visual inertial SLAM, adding to the cost function a residual term constructed from pedestrian step-length and velocity observations, and optimizing it;
step S400: when the visual inertial SLAM system diverges rapidly, directly using the pose obtained from the pedestrian step-length information to fix the state quantities corresponding to the PDR epochs in the sliding window, and iterating the subsequent optimization on the fixed state quantities.
Preferably, in the step S100, the rotation amount is calculated by the gyroscope measurement value and the vision measurement value when the initialization operation is performed.
Preferably, the gyroscope measurement value mainly comprises measurement noise and zero offset error of the gyroscope, and the vision measurement value mainly comprises measurement noise.
Preferably, in the step S100, the gyroscope zero bias, the visual scale and the velocity state quantities, together with an initial global map with scale information, are estimated online from the visual inertial information.
Preferably, in the step S200, for the image information, each frame of image is first tracked by optical flow to obtain corresponding feature points between images, and a corresponding key frame is selected by time interval or image parallax.
Preferably, in the step S200, for inertial navigation information, pre-integration processing is performed by using IMU information between images, so as to obtain a relative pose change obtained by IMU between images.
Preferably, in a pedestrian walking scenario, the solution is computed using a pedestrian dead reckoning (PDR) technique, which includes gait detection, step-length estimation and dead reckoning.
Preferably, the pedestrian step length, gait and heading angle are calculated using inertial sensors (gyroscopes, accelerometers or magnetometers).
Preferably, a nonlinear model is used to estimate the pedestrian step length.
Compared with the prior art, the invention has the following beneficial effects: in an indoor pedestrian positioning scenario, the visual inertial SLAM positioning method based on pedestrian step information assistance fuses pedestrian step-length information through nonlinear optimization; when the system diverges noticeably, the pose obtained from the step-length information directly fixes the corresponding poses in the sliding window, and optimization then proceeds on that basis, avoiding the rapid short-term divergence of the IMU positioning result.
Drawings
FIG. 1 is a schematic diagram of the method steps of the present invention;
FIG. 2 is an overall framework diagram of the pedestrian step-length information assisted visual inertial SLAM according to the present invention;
fig. 3 is a schematic view of pedestrian dead reckoning in the present invention.
Detailed Description
The present invention will be described in further detail with reference to the drawings and examples, in order to make the objects, technical solutions and advantages of the present invention more apparent. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention.
Referring to fig. 1-3, the present invention provides a technical solution: a visual inertia SLAM positioning method based on pedestrian step information assistance comprises the following steps:
step S100: initialization is performed using visual and inertial information simultaneously; state quantities such as the gyroscope zero bias, the visual scale and the velocity are estimated online from the visual inertial information, and an initial global map with scale information is estimated at the same time, in preparation for subsequent positioning;
step S200: for the image information, optical-flow tracking is first performed on each image frame to obtain corresponding feature points between images, and key frames are selected by time interval or image parallax; for the inertial information, to avoid re-integrating after every optimization, which would greatly increase the computation, pre-integration is performed on the IMU data between images to obtain the relative pose change measured by the IMU between images;
step S300: a cost function is constructed in the visual inertial SLAM from visual information, IMU information and prior information, and a residual term constructed from pedestrian step-length and velocity observations is added to the cost function and optimized. Because pedestrian walking information must be introduced, the velocity and position obtained by pedestrian dead reckoning are combined with the state quantities in the current sliding window to compute the corresponding residual, which is added to the cost function and optimized jointly. Meanwhile, since the PDR is essentially an empirical model, its error must undergo an error-characteristic analysis to determine the corresponding noise covariance matrix;
step S400: strong constraints are designed for extreme scenes where visual tracking fails. In pedestrian positioning scenarios, extreme conditions such as the disappearance of image texture features and rapid camera rotation occur easily, causing the image information to fail completely and the trajectory to diverge rapidly. In such cases, adding only the residual constraint is not enough to keep the system operating normally, so the step-information assistance mode is selected by judging the current system state. When the system diverges noticeably, the pose obtained from the step-length information directly fixes the corresponding poses in the sliding window, and optimization then proceeds on that basis.
In the present embodiment, the visual inertial SLAM mainly uses image information and inertial information, and the front end performs preliminary processing on this information. For the image information, Harris corner points are first extracted as feature points, and the extracted feature points are tracked in real time by KLT optical flow to obtain the corresponding feature points. For a point $P = [X, Y, Z]^T$ in space, the relationship between its three-dimensional coordinates and its pixel coordinates in the two images is

$$s_1 p_1 = KP,\qquad s_2 p_2 = K(RP + t) \tag{1}$$

where $K$ is the camera intrinsic matrix, $R$ and $t$ are the relative rotation and translation between the two frames, and $s_1$, $s_2$ are the depth scales.
according to the gray invariance assumption, the gray value of the pixel of the same space point in the two images is kept unchanged:
I(x+dx,y+dy,t+dt)=I(x,y,t) (2)
the bias derivative can be obtained by:
in order to solve the correspondence of feature points between images, it is assumed that the pixel points in a window have the same motion:
and solving to obtain corresponding optical flow, so as to obtain corresponding values of the characteristic points in the two images, and removing the characteristic points which are erroneously tracked by using a random detection consistency (Random Sample Consensus, RANSAC) algorithm.
For the inertial navigation information, to reduce the amount of computation, a pre-integration algorithm is used to solve the pose variation between key frames. In the body frame $b_k$ of the previous key frame, the pre-integrated position, velocity and rotation increments are

$$\alpha_{b_{k+1}}^{b_k}=\iint_{t\in[t_k,t_{k+1}]} R_t^{b_k}(\hat a_t-b_{a_t})\,dt^2,\quad \beta_{b_{k+1}}^{b_k}=\int_{t\in[t_k,t_{k+1}]} R_t^{b_k}(\hat a_t-b_{a_t})\,dt,\quad \gamma_{b_{k+1}}^{b_k}=\int_{t\in[t_k,t_{k+1}]}\tfrac12\,\Omega(\hat\omega_t-b_{\omega_t})\,\gamma_t^{b_k}\,dt$$

which depend only on the IMU measurements and biases, not on the initial state of frame $b_k$, so they need not be re-integrated after each optimization.
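The pre-integration between two key frames can be sketched numerically: the raw accelerometer and gyroscope samples are integrated into position, velocity and rotation increments expressed in the first key frame's body frame. A simple Euler-integration sketch, with biases and noise assumed zero and all sample values synthetic:

```python
import numpy as np

def preintegrate(acc, gyro, dt):
    """Euler integration of IMU samples between two key frames.
    Returns the position/velocity/rotation increments (alpha, beta, R)
    expressed in the first key frame's body frame; biases assumed zero."""
    alpha = np.zeros(3)   # position increment
    beta = np.zeros(3)    # velocity increment
    R = np.eye(3)         # rotation from current body frame to b_k
    for a, w in zip(acc, gyro):
        alpha = alpha + beta * dt + 0.5 * (R @ a) * dt**2
        beta = beta + (R @ a) * dt
        # incremental rotation from the gyro sample (Rodrigues formula)
        theta = w * dt
        angle = np.linalg.norm(theta)
        if angle > 1e-12:
            k = theta / angle
            K = np.array([[0, -k[2], k[1]],
                          [k[2], 0, -k[0]],
                          [-k[1], k[0], 0]])
            dR = np.eye(3) + np.sin(angle) * K + (1 - np.cos(angle)) * (K @ K)
        else:
            dR = np.eye(3)
        R = R @ dR
    return alpha, beta, R

# Constant 2 m/s^2 acceleration along x for 1 s, no rotation:
# expect alpha = 0.5*a*t^2 = 1.0 m and beta = a*t = 2.0 m/s
n, dt = 100, 0.01
acc = [np.array([2.0, 0.0, 0.0])] * n
gyro = [np.zeros(3)] * n
alpha, beta, R = preintegrate(acc, gyro, dt)
```

In the real estimator these increments also carry their Jacobians with respect to the biases, so that bias updates can correct the pre-integrated values without re-integration.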
after the data are processed, the system is firstly initialized, and meanwhile, the rotation quantity is solved through the measured value of the gyroscope and the visual measured value. The measured value of the gyroscope mainly comprises measurement noise and zero offset error of the gyroscope, and the visual measured value mainly comprises measurement noise, wherein the measurement noise is negligible. Therefore, the zero offset of the gyroscope can be obtained by solving the difference between the visual observation in the sliding window and the gyroscope observation:
wherein the method comprises the steps ofAnd after zero offset of the gyroscope is obtained, the value of the IMU predictive score can be updated.
After the gyroscope zero bias is obtained, the velocity, gravitational acceleration and scale are solved. First, the optimization variables are designed as

$$\mathcal X_I=\left[v_{b_0}^{b_0},\,v_{b_1}^{b_1},\,\dots,\,v_{b_n}^{b_n},\,g^{c_0},\,s\right]$$

where $v_{b_k}^{b_k}$ is the velocity of each key frame in its body frame, $g^{c_0}$ the gravity vector in the initial camera frame, and $s$ the visual scale. For two consecutive frames $b_k$ and $b_{k+1}$ within the window, the pre-integration relations yield a linear measurement model

$$\hat z_{b_{k+1}}^{b_k}=\begin{bmatrix}\hat\alpha_{b_{k+1}}^{b_k}\\[2pt] \hat\beta_{b_{k+1}}^{b_k}\end{bmatrix}=H_{b_{k+1}}^{b_k}\,\mathcal X_I+n_{b_{k+1}}^{b_k}$$

so that the least-squares problem can be solved:

$$\min_{\mathcal X_I}\sum_{k\in\mathcal B}\left\|\hat z_{b_{k+1}}^{b_k}-H_{b_{k+1}}^{b_k}\,\mathcal X_I\right\|^2$$
in order to reduce the calculation amount, the visual inertia SLAM only maintains the key frame pose and the corresponding characteristic points in the sliding window. Wherein the state quantity can be mainly expressed as:
χ=[x n ,x n+1 ,…,x n+Nmm+1 ,…,λ m+M ]
(15)
x i represents the key frame state quantity lambda in the sliding window i Representing the state quantity corresponding to the feature point.
Wherein the method comprises the steps ofRepresenting the position offset of the body coordinate system corresponding to the key frame of the ith frame from the world coordinate system,/>Representing a posture transfer matrix from a body coordinate system corresponding to an ith frame key frame to a world coordinate system,/a>Representing the representation of the speed in body coordinate system in world coordinate system, +.>Representing accelerometer zero bias->Representing the zero bias of the gyroscope.
At the SLAM back end, a cost function is constructed from the IMU pre-integration residual, the visual reprojection residual and the prior residual, and solved by nonlinear optimization. The cost function is

$$\min_\chi\left\{\left\|r_p-H_p\,\chi\right\|^2+\sum_{k\in\mathcal B}\left\|r_B\!\left(\hat z_{b_{k+1}}^{b_k},\chi\right)\right\|^2_{P_{b_{k+1}}^{b_k}}+\sum_{(l,j)\in\mathcal C}\left\|r_C\!\left(\hat z_l^{c_j},\chi\right)\right\|^2_{P_l^{c_j}}\right\}$$

where $r_p$ is the prior residual, $r_B$ the IMU pre-integration residual, and $r_C$ the visual reprojection residual.
After the cost function is constructed, the Levenberg-Marquardt (LM) algorithm is used to solve it by iterative nonlinear optimization.
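The LM iteration can be illustrated on a tiny least-squares problem. The sketch below implements the basic damped normal-equation update $(J^TJ+\lambda I)\,\Delta x=-J^Tr$ with a simple accept/reject damping rule; the real system stacks the prior, IMU and visual residual blocks, while here a synthetic curve-fit residual stands in:

```python
import numpy as np

def levenberg_marquardt(residual, jacobian, x0, iters=50, lam=1e-3):
    """Minimize 0.5*||r(x)||^2 with Levenberg-Marquardt damping:
    accepted steps shrink the damping (toward Gauss-Newton),
    rejected steps grow it (toward gradient descent)."""
    x = np.asarray(x0, dtype=float)
    cost = 0.5 * np.sum(residual(x) ** 2)
    for _ in range(iters):
        r, J = residual(x), jacobian(x)
        dx = np.linalg.solve(J.T @ J + lam * np.eye(x.size), -J.T @ r)
        new_cost = 0.5 * np.sum(residual(x + dx) ** 2)
        if new_cost < cost:              # accept the step
            x, cost, lam = x + dx, new_cost, lam * 0.5
        else:                            # reject and damp harder
            lam *= 2.0
    return x

# Fit y = exp(a*t) + b to noise-free data generated with a=0.3, b=1.0
t = np.linspace(0.0, 2.0, 20)
y = np.exp(0.3 * t) + 1.0
res = lambda p: np.exp(p[0] * t) + p[1] - y
jac = lambda p: np.stack([t * np.exp(p[0] * t), np.ones_like(t)], axis=1)
p = levenberg_marquardt(res, jac, [0.0, 0.0])
```

Production back ends (e.g. the solvers used in visual-inertial systems) add robust loss functions and sparse linear algebra on top of this same damped update.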
For pedestrian walking scenes, the solution is computed using the pedestrian dead reckoning (PDR) technique, since the step information exhibits certain regular characteristics during pedestrian motion. Pedestrian dead reckoning comprises three core steps: gait detection, step-length estimation and dead reckoning. The pedestrian's specific position is estimated by estimating the step length, gait and heading angle with inertial sensors (gyroscopes, accelerometers, magnetometers, etc.).
As shown in fig. 3, the PDR achieves continuous tracking and positioning of the pedestrian by measuring the distance and direction of movement from a known starting location. Step detection and step-length measurement are performed with the acceleration sensor, the pedestrian's heading is determined with the orientation sensor and the gyroscope, and finally all the information is combined to track and position the pedestrian continuously.
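The dead-reckoning step in fig. 3 amounts to accumulating each detected step's length along the current heading from the known start: $x_{k+1}=x_k+L_k\cos\theta_k$, $y_{k+1}=y_k+L_k\sin\theta_k$. A minimal sketch, with hypothetical step lengths and headings:

```python
import math

def dead_reckon(start, steps):
    """Accumulate (step_length, heading) pairs from a known start
    position; heading is in radians from the x-axis."""
    x, y = start
    track = [(x, y)]
    for length, heading in steps:
        x += length * math.cos(heading)
        y += length * math.sin(heading)
        track.append((x, y))
    return track

# Four 0.7 m steps: two heading east, then two heading north
track = dead_reckon((0.0, 0.0), [(0.7, 0.0), (0.7, 0.0),
                                 (0.7, math.pi / 2), (0.7, math.pi / 2)])
final = track[-1]
```

Errors in step length and heading accumulate along this chain, which is why the method treats PDR as an empirical model and analyzes its noise characteristics before fusing it.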
There are three main models for estimating pedestrian step length: the constant model, the linear model and the nonlinear model. The constant model divides a measured walking distance by the counted number of steps to obtain an average step length, i.e. the step length is treated as a constant. The linear model collects walking data from pedestrians of different heights and assumes a linear relationship between step length and step frequency. Here the nonlinear model is adopted, in which the length of each pedestrian step is

$$L_k = K\cdot\sqrt[4]{a_{max}-a_{min}}$$

where $a_{max}$ and $a_{min}$ are the maximum and minimum acceleration values detected within one step, and $K$ represents the scale constraint of the step length. The average speed over the step, expressed in the pedestrian coordinate system, is

$$\bar v_k=\frac{L_k}{\Delta t_k}$$

where $\Delta t_k$ is the duration of the step.
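The nonlinear model just described can be sketched directly, assuming the common Weinberg-style fourth-root form $L=K(a_{max}-a_{min})^{1/4}$; the value of $K$ and the acceleration extrema below are hypothetical:

```python
def step_length(a_max, a_min, K=0.5):
    """Weinberg-style nonlinear step model: L = K * (a_max - a_min)^(1/4).
    K is a per-user scale constant, a_max/a_min the acceleration extrema
    detected within one step."""
    return K * (a_max - a_min) ** 0.25

def average_speed(length, dt):
    """Average speed over one step in the pedestrian frame."""
    return length / dt

# One detected step: acceleration swing of 6.25 m/s^2, duration 0.5 s
L = step_length(a_max=11.0, a_min=4.75, K=0.5)
v = average_speed(L, dt=0.5)
```

In practice $K$ is calibrated per user over a known walking distance, which is how the model absorbs individual gait differences.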
the pose and the speed are estimated values, noise exists, and the pose and the speed are developed into:
for the situation that the visual inertia SLAM system does not diverge rapidly, a residual error item is constructed in the nonlinear optimization process by using the step length and speed information obtained by the PDR to assist the system:
wherein, the liquid crystal display device comprises a liquid crystal display device,representing the constraint residual of PDR speed to the system,and representing the constraint residual error of the PDR step increment to the system, wherein a covariance matrix and an information matrix are obtained by noise error characteristic analysis.
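PDR residual terms of this kind (the PDR velocity and step-increment observations minus the corresponding sliding-window states) can be sketched with a simple diagonal whitening; the exact weighting in the method comes from the PDR error-characteristic analysis, and all numbers below are hypothetical:

```python
import numpy as np

def pdr_residuals(v_pdr, v_state, dp_pdr, p_i, p_j, sigma_v, sigma_p):
    """Velocity and step-increment residuals, whitened by the PDR noise
    standard deviations so they can be stacked into the cost function."""
    r_vel = (np.asarray(v_pdr) - np.asarray(v_state)) / sigma_v
    r_step = (np.asarray(dp_pdr) - (np.asarray(p_j) - np.asarray(p_i))) / sigma_p
    return np.concatenate([r_vel, r_step])

# One PDR epoch: observed speed ~1 m/s east, one 0.7 m step east,
# compared against the current window states
r = pdr_residuals(v_pdr=[1.0, 0.0, 0.0], v_state=[0.9, 0.05, 0.0],
                  dp_pdr=[0.7, 0.0, 0.0], p_i=[0.0, 0.0, 0.0],
                  p_j=[0.65, 0.0, 0.0], sigma_v=0.1, sigma_p=0.05)
```

Stacked alongside the prior, IMU and visual residuals, these whitened terms pull the window states toward the PDR observations in proportion to the assumed noise levels.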
For the case where the visual inertial SLAM system diverges rapidly, constraining it through residuals alone no longer has an obvious effect, so the state quantities in the sliding window corresponding to the PDR epochs are fixed directly:

$$p_{b_i}^w=\hat p_{pdr},\qquad q_{b_i}^w=\hat q_{pdr}$$

where $\hat p_{pdr}$ and $\hat q_{pdr}$ are the position and attitude obtained by pedestrian dead reckoning; the subsequent nonlinear optimization is then iterated on the fixed state quantities.
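Fixing a state during optimization can be sketched by excluding the fixed parameters from the update, i.e. dropping their Jacobian columns so only the free states move. A toy 1-D Gauss-Newton sketch with the first pose pinned to a PDR-derived value (all measurements hypothetical):

```python
import numpy as np

def optimize_with_fixed(p, fixed_mask, residual, jacobian, iters=10):
    """Gauss-Newton iteration in which entries marked fixed never move:
    their Jacobian columns are dropped and only free entries update."""
    p = np.asarray(p, dtype=float)
    free = ~np.asarray(fixed_mask)
    for _ in range(iters):
        r, J = residual(p), jacobian(p)
        dx = np.linalg.lstsq(J[:, free], -r, rcond=None)[0]
        p[free] += dx
    return p

# Three relative "odometry" measurements of 0.7 m between four 1-D poses;
# the first pose is fixed to the PDR-derived value 0.0 and anchors the rest.
meas = np.array([0.7, 0.7, 0.7])
residual = lambda p: (p[1:] - p[:-1]) - meas
def jacobian(p):
    J = np.zeros((3, 4))
    for i in range(3):
        J[i, i], J[i, i + 1] = -1.0, 1.0
    return J
p = optimize_with_fixed([0.0, 0.0, 0.0, 0.0],
                        [True, False, False, False],
                        residual, jacobian)
```

Pinning the PDR-anchored pose removes the unobservable drift direction, which mirrors how fixing window states prevents the optimizer from following a diverging IMU solution.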
The foregoing description of the preferred embodiments of the invention is not intended to be limiting, but rather is intended to cover all modifications, equivalents, and alternatives falling within the spirit and principles of the invention.

Claims (9)

1. A visual inertial SLAM positioning method based on pedestrian step information assistance, characterized in that the method comprises the following steps:
step S100: performing initialization operation simultaneously by utilizing visual and inertial information;
step S200: front-end image optical flow tracking and IMU pre-integration;
step S300: simultaneously constructing a cost function from visual information, IMU information and prior information in the visual inertial SLAM, adding to the cost function a residual term constructed from pedestrian step-length and velocity observations, and optimizing it;
step S400: when the visual inertial SLAM system diverges rapidly, directly using the pose obtained from the pedestrian step-length information to fix the state quantities corresponding to the PDR epochs in the sliding window, and iterating the subsequent optimization on the fixed state quantities.
2. The visual inertial SLAM positioning method based on pedestrian step information assistance as claimed in claim 1, wherein: in the step S100, the rotation amount is calculated from the gyroscope measurement value and the vision measurement value at the time of the initialization operation.
3. The visual inertial SLAM positioning method based on pedestrian step information assistance as claimed in claim 2, wherein: the gyroscope measurement value comprises measurement noise and zero offset error of the gyroscope, and the vision measurement value mainly comprises measurement noise.
4. The visual inertial SLAM positioning method based on pedestrian step information assistance as claimed in claim 1, wherein: in the step S100, the gyroscope zero bias, the visual scale and the velocity state quantities, together with an initial global map with scale information, are estimated online from the visual inertial information.
5. The visual inertial SLAM positioning method based on pedestrian step information assistance as claimed in claim 1, wherein: in the step S200, for the image information, each frame of image is first tracked by optical flow to obtain corresponding feature points between images, and corresponding key frames are selected by time intervals or image disparities.
6. The visual inertial SLAM positioning method based on pedestrian step information assistance as claimed in claim 1, wherein: in step S200, for inertial navigation information, pre-integration processing is performed by using IMU information between images, so as to obtain a relative pose change obtained by IMU between images.
7. The visual inertial SLAM positioning method based on pedestrian step information assistance as claimed in claim 1, wherein: in a pedestrian walking scenario, the solution is computed using a pedestrian dead reckoning technique, wherein the pedestrian dead reckoning technique includes gait detection, step-length estimation and dead reckoning.
8. The visual inertial SLAM positioning method based on pedestrian step size information assistance as claimed in claim 7, wherein: and calculating the step length, gait and course angle of the pedestrian by using the inertial sensor.
9. The visual inertial SLAM positioning method based on pedestrian step size information assistance as claimed in claim 7, wherein: and estimating the step length of the pedestrian by adopting a nonlinear model.
CN202110723640.5A 2021-06-29 Visual inertial SLAM positioning method based on pedestrian step information assistance Active CN113639743B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110723640.5A CN113639743B (en) 2021-06-29 2021-06-29 Visual inertia SLAM positioning method based on pedestrian step information assistance


Publications (2)

Publication Number Publication Date
CN113639743A 2021-11-12
CN113639743B 2023-10-17

Family

ID=78416287

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110723640.5A Active CN113639743B (en) 2021-06-29 2021-06-29 Visual inertia SLAM positioning method based on pedestrian step information assistance

Country Status (1)

Country Link
CN (1) CN113639743B (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108106613A (en) * 2017-11-06 2018-06-01 上海交通大学 The localization method and system of view-based access control model auxiliary
CN109631889A (en) * 2019-01-07 2019-04-16 重庆邮电大学 Mems accelerometer error compensating method based on LMS adaptive-filtering and gradient decline
CN111982103A (en) * 2020-08-14 2020-11-24 北京航空航天大学 Point-line comprehensive visual inertial odometer method with optimized weight
CN112304307A (en) * 2020-09-15 2021-02-02 浙江大华技术股份有限公司 Positioning method and device based on multi-sensor fusion and storage medium
CN112749665A (en) * 2021-01-15 2021-05-04 东南大学 Visual inertia SLAM method based on image edge characteristics
CN112964257A (en) * 2021-02-05 2021-06-15 南京航空航天大学 Pedestrian inertia SLAM method based on virtual landmarks
CN112985450A (en) * 2021-02-09 2021-06-18 东南大学 Binocular vision inertial odometer method with synchronous time error estimation

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9658070B2 (en) * 2014-07-11 2017-05-23 Regents Of The University Of Minnesota Inverse sliding-window filters for vision-aided inertial navigation systems


Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
Antigny, Nicolas et al., "Solving Monocular Visual Odometry Scale Factor with Adaptive Step Length Estimates for Pedestrians Using Handheld Devices," Sensors, vol. 19, no. 4, pp. 1-18 *
闫大禹 et al., "A survey of the development status of indoor positioning technology in China," Journal of Navigation and Positioning, vol. 7, no. 4, pp. 5-12 *
叶俊华, "Research on multi-sensor fusion pedestrian navigation and positioning algorithms based on smart terminals," China Doctoral Dissertations Full-text Database (Information Science and Technology), I136-75 *
龚赵慧; 张霄力; 彭侠夫; 李鑫, "Semi-direct monocular visual odometry based on visual-inertial fusion," Robot, no. 5, pp. 85-95 *



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant