CN113639743A - Pedestrian step length information-assisted visual inertia SLAM positioning method - Google Patents

Pedestrian step length information-assisted visual inertia SLAM positioning method

Info

Publication number
CN113639743A
CN113639743A (application CN202110723640.5A)
Authority
CN
China
Prior art keywords
information
visual
pedestrian
step length
imu
Prior art date
Legal status
Granted
Application number
CN202110723640.5A
Other languages
Chinese (zh)
Other versions
CN113639743B (en)
Inventor
董艺彤
施闯
李团
闫大禹
Current Assignee
Beihang University
Original Assignee
Beihang University
Priority date
Filing date
Publication date
Application filed by Beihang University
Priority to CN202110723640.5A
Publication of CN113639743A
Application granted
Publication of CN113639743B
Legal status: Active
Anticipated expiration

Classifications

    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01C: MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C 21/00: Navigation; Navigational instruments not provided for in groups G01C 1/00 - G01C 19/00
    • G01C 21/10: Navigation by using measurements of speed or acceleration
    • G01C 21/12: Navigation by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning
    • G01C 21/16: Dead reckoning by integrating acceleration or speed, i.e. inertial navigation
    • G01C 21/165: Inertial navigation combined with non-inertial navigation instruments
    • G01C 21/1656: Inertial navigation combined with passive imaging devices, e.g. cameras
    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01C: MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C 21/00: Navigation; Navigational instruments not provided for in groups G01C 1/00 - G01C 19/00
    • G01C 21/38: Electronic maps specially adapted for navigation; Updating thereof
    • G01C 21/3804: Creation or updating of map data
    • G01C 21/3833: Creation or updating of map data characterised by the source of data
    • G01C 21/3841: Data obtained from two or more sources, e.g. probe vehicles

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Automation & Control Theory (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)
  • Navigation (AREA)

Abstract

The invention is applicable to the technical field of smartphone-based INS/visual integrated navigation and positioning, and provides a pedestrian step length information-assisted visual inertial SLAM positioning method, which comprises the following steps: step S100: performing an initialization operation using visual and inertial information; step S200: performing front-end image optical flow tracking and IMU pre-integration; step S300: constructing a cost function from visual information, IMU information and prior information in the visual inertial SLAM, adding residual terms constructed from pedestrian step length and velocity observations to the cost function, and optimizing; step S400: designing strong constraints for extreme scenarios in which visual tracking fails. In an indoor pedestrian dead-reckoning scenario, the method fuses pedestrian step length information through nonlinear optimization; when the system diverges noticeably, the poses obtained from the step length information are used to fix the corresponding poses in the sliding window, and optimization then proceeds on that basis, thereby solving the problem that the IMU positioning result diverges rapidly within a short time.

Description

Pedestrian step length information-assisted visual inertia SLAM positioning method
Technical Field
The invention belongs to the technical field of smartphone INS/visual integrated navigation and positioning, and particularly relates to a pedestrian step length information-assisted visual inertial SLAM positioning method.
Background
The deployment of Global Navigation Satellite Systems (GNSS) provides accurate and reliable position services around the clock and has formed a mature and complete outdoor positioning service system. With the continuous development of society, urbanization and the development of underground space are accelerating, and people spend more than 90% of their time indoors or underground. In these environments, people need to know where they are and understand their surroundings in order to reach a destination, so indoor positioning and navigation plays an increasingly important role in daily life.
Because the indoor environment is shielded by walls, the satellite signals in wide use today are severely attenuated and affected by non-line-of-sight and multipath effects, so satellite navigation cannot be applied effectively indoors. SLAM (Simultaneous Localization and Mapping), a technology now widely used in the urban positioning field, refers to the process of building a model of the environment and estimating one's own motion during movement, without any prior information about the environment. Visual-inertial SLAM uses both visual and inertial information and achieves good results in ideal environments with rich texture and smooth motion. However, indoor scenes such as shopping malls and corridors present extreme conditions such as vanishing image texture and significant camera shake; in such cases the visual information fails rapidly, and the positioning result of the IMU (Inertial Measurement Unit) diverges within a short time.
Disclosure of Invention
The invention provides a pedestrian step length information-assisted visual inertial SLAM positioning method, aiming to solve the problem that indoor scenes present extreme conditions such as vanishing image texture and significant camera shake, under which the visual information fails rapidly and the IMU positioning result diverges within a short time.
The invention is realized as follows: a pedestrian step length information-assisted visual inertial SLAM positioning method comprises the following steps:
step S100: performing an initialization operation using visual and inertial information;
step S200: performing front-end image optical flow tracking and IMU pre-integration;
step S300: constructing a cost function from visual information, IMU information and prior information in the visual inertial SLAM, adding residual terms constructed from pedestrian step length and velocity observations to the cost function, and optimizing;
step S400: designing strong constraints for extreme scenarios in which visual tracking fails.
Preferably, in step S100, the amount of rotation is calculated from the gyroscope measurements and the visual measurements during the initialization operation.
Preferably, the gyroscope measurement mainly contains measurement noise and gyroscope zero-offset error, and the visual measurement mainly contains measurement noise.
Preferably, in step S100, the gyroscope zero offset, the visual scale, the velocity state quantities, and an initial global map with scale information are estimated online from the visual and inertial information.
Preferably, in step S200, for the image information, corresponding feature points between images are obtained by optical flow tracking for each received image frame, and key frames are selected according to time interval or image parallax.
Preferably, in step S200, for the inertial navigation information, pre-integration is performed on the IMU measurements between images to obtain the relative pose change measured by the IMU between images.
Preferably, in the pedestrian walking scene, analysis and solution are performed using pedestrian dead reckoning (PDR), which comprises gait detection, step length estimation and heading estimation.
Preferably, the pedestrian step length, gait and heading angle are calculated using inertial sensors such as a gyroscope, an accelerometer or a magnetometer.
Preferably, a nonlinear model is used to estimate the pedestrian step length.
Compared with the prior art, the invention has the following beneficial effects: in the pedestrian step length information-assisted visual inertial SLAM positioning method, under an indoor pedestrian dead-reckoning scenario, pedestrian step length information is fused through nonlinear optimization; when the system diverges noticeably, the poses obtained from the step length information are used directly to fix the corresponding poses in the sliding window, and optimization then proceeds on that basis, thereby solving the problem that the IMU positioning result diverges rapidly within a short time.
Drawings
FIG. 1 is a schematic diagram of the process steps of the present invention;
FIG. 2 is an overall frame diagram of the pedestrian step information aided visual inertia SLAM in the present invention;
FIG. 3 is a schematic diagram of the dead reckoning of a pedestrian according to the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
Referring to FIGS. 1-3, the present invention provides a technical solution: a pedestrian step length information-assisted visual inertial SLAM positioning method comprising the following steps:
step S100: initializing using visual and inertial information; the gyroscope zero offset, the visual scale, the velocity and other state quantities are estimated online from the visual and inertial information, together with an initial global map with scale information, in preparation for subsequent positioning;
step S200: front-end image optical flow tracking and IMU pre-integration; for the image information, whenever an image frame is received, corresponding feature points between images are first obtained by optical flow tracking, and key frames are selected according to time interval or image parallax; for the inertial navigation information, to avoid re-integrating after every optimization and thus greatly increasing the computational load, pre-integration is performed on the IMU measurements between images to obtain the relative pose change measured by the IMU between the images;
step S300: a cost function is constructed from visual information, IMU information and prior information in the visual-inertial SLAM, and residual terms constructed from pedestrian step length and velocity observations are added to the cost function and optimized. Since pedestrian walking information must be introduced, the velocity and position information obtained by pedestrian dead reckoning are combined with the state quantities in the current sliding window to form the corresponding residual quantities, and the corresponding residual terms are added to the cost function for optimization. Meanwhile, because PDR is essentially an empirical model, its error characteristics must be analyzed in order to determine the corresponding noise covariance matrix;
step S400: strong constraints are designed for extreme scenarios in which visual tracking fails. In pedestrian positioning scenarios, extreme conditions such as the disappearance of image texture features and rapid camera rotation readily occur, causing the image information to fail completely and the trajectory to diverge rapidly. In such cases, merely adding residual information as a constraint is not enough to keep the system operating normally, so the step length information assistance mode is selected by judging the current state of the system. When the system diverges noticeably, the poses obtained from the step length information are used directly to fix the corresponding poses in the sliding window, and optimization then proceeds on that basis.
In the present embodiment, the visual-inertial SLAM mainly uses image information and inertial information, and the front end performs preliminary processing of this information. For the image information, Harris corner points are first extracted as feature points, and the extracted feature points are tracked in real time with KLT optical flow to obtain feature correspondences. For a spatial point $P = [X, Y, Z]^T$, the relationship between its three-dimensional coordinates and its pixel coordinates $p_1$, $p_2$ in the two images is:

$$s_1 p_1 = K P, \qquad s_2 p_2 = K (R P + t) \tag{1}$$

where $K$ is the camera intrinsic matrix and $(R, t)$ is the relative motion between the two views.
According to the grayscale-invariance assumption, the grayscale value of the pixel corresponding to the same spatial point is unchanged between the two images:

$$I(x + dx,\, y + dy,\, t + dt) = I(x, y, t) \tag{2}$$
Taking a first-order Taylor expansion and computing the partial derivatives gives:

$$\frac{\partial I}{\partial x}\frac{dx}{dt} + \frac{\partial I}{\partial y}\frac{dy}{dt} + \frac{\partial I}{\partial t} = 0 \tag{3}$$

which, writing $u = dx/dt$, $v = dy/dt$, $I_x = \partial I/\partial x$, $I_y = \partial I/\partial y$ and $I_t = \partial I/\partial t$, can be rewritten as:

$$\begin{bmatrix} I_x & I_y \end{bmatrix} \begin{bmatrix} u \\ v \end{bmatrix} = -I_t \tag{4}$$
To solve for the correspondence of the feature points between the images, the pixels within a window are assumed to share the same motion, which stacks equation (4) over all $k$ pixels in the window:

$$\begin{bmatrix} I_x & I_y \end{bmatrix}_k \begin{bmatrix} u \\ v \end{bmatrix} = -I_{t,k}, \qquad k = 1, \ldots, w^2 \tag{5}$$
Solving this over-determined system yields the optical flow and hence the corresponding positions of the feature points in the two images; feature points that were tracked incorrectly are removed with the Random Sample Consensus (RANSAC) algorithm.
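The front-end tracking described above maps naturally onto standard computer-vision primitives. The following minimal sketch (an illustration, not the patent's implementation) uses OpenCV's Harris corner extraction, pyramidal KLT optical flow and fundamental-matrix RANSAC; every parameter value is an assumption chosen for demonstration:

```python
# Illustrative front-end sketch: Harris corners + KLT tracking + RANSAC.
import cv2
import numpy as np

def detect_corners(gray, max_corners=150):
    # Harris corner detection for new feature points.
    pts = cv2.goodFeaturesToTrack(gray, max_corners, qualityLevel=0.01,
                                  minDistance=30, useHarrisDetector=True)
    if pts is None:
        return np.empty((0, 1, 2), np.float32)
    return pts.astype(np.float32)

def track_features(prev_gray, cur_gray, prev_pts):
    # Pyramidal Lucas-Kanade optical flow tracks prev_pts into the new frame.
    cur_pts, status, _err = cv2.calcOpticalFlowPyrLK(
        prev_gray, cur_gray, prev_pts, None, winSize=(21, 21), maxLevel=3)
    ok = status.ravel() == 1
    good_prev, good_cur = prev_pts[ok], cur_pts[ok]
    # Remove wrongly tracked points with RANSAC on the fundamental matrix.
    if len(good_prev) >= 8:
        _F, inliers = cv2.findFundamentalMat(
            good_prev, good_cur, cv2.FM_RANSAC, 1.0, 0.99)
        if inliers is not None:
            keep = inliers.ravel() == 1
            good_prev, good_cur = good_prev[keep], good_cur[keep]
    return good_prev, good_cur
```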
For the inertial navigation information, in order to reduce the computational load, a pre-integration algorithm is used to solve for the pose change between key frames:
$$\alpha_{b_{k+1}}^{b_k} = \iint_{t \in [t_k, t_{k+1}]} R_t^{b_k} \left( \hat{a}_t - b_{a_t} \right) dt^2 \tag{6}$$

$$\beta_{b_{k+1}}^{b_k} = \int_{t \in [t_k, t_{k+1}]} R_t^{b_k} \left( \hat{a}_t - b_{a_t} \right) dt \tag{7}$$

$$\gamma_{b_{k+1}}^{b_k} = \int_{t \in [t_k, t_{k+1}]} \frac{1}{2} \Omega\!\left( \hat{\omega}_t - b_{w_t} \right) \gamma_t^{b_k} \, dt \tag{8}$$

where $\alpha_{b_{k+1}}^{b_k}$, $\beta_{b_{k+1}}^{b_k}$ and $\gamma_{b_{k+1}}^{b_k}$ are the position, velocity and rotation pre-integration terms between frames $b_k$ and $b_{k+1}$, $\hat{a}_t$ and $\hat{\omega}_t$ are the raw accelerometer and gyroscope measurements, and $b_{a_t}$, $b_{w_t}$ are the accelerometer and gyroscope zero offsets.
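A discrete-time sketch of equations (6)-(8) is given below: the increments are accumulated sample by sample with simple Euler integration, and the rotation is propagated with a first-order exponential-map approximation. Production systems use midpoint integration and quaternions; this simplified form, and all names in it, are assumptions for illustration:

```python
# Illustrative discrete IMU pre-integration between key frames b_k and b_{k+1}.
import numpy as np

def skew(w):
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def preintegrate(accel, gyro, dt, ba, bg):
    """accel, gyro: (N, 3) raw IMU samples; ba, bg: zero offsets."""
    alpha = np.zeros(3)   # position increment, eq. (6)
    beta = np.zeros(3)    # velocity increment, eq. (7)
    R = np.eye(3)         # rotation increment (gamma), eq. (8)
    for a_hat, w_hat in zip(accel, gyro):
        a = R @ (a_hat - ba)               # bias-corrected acceleration in b_k
        alpha += beta * dt + 0.5 * a * dt * dt
        beta += a * dt
        R = R @ (np.eye(3) + skew((w_hat - bg) * dt))  # first-order update
    return alpha, beta, R
```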
After the data is processed, the system must be initialized, and the amount of rotation is solved from the gyroscope measurements and the visual measurements. The gyroscope measurement mainly contains measurement noise and gyroscope zero-offset error, while the visual measurement mainly contains measurement noise, which can be neglected. Therefore, the gyroscope zero offset can be obtained by differencing the visual observations and the gyroscope observations within the sliding window:
$$\min_{\delta b_w} \sum_{k \in \mathcal{B}} \left\| {q_{b_{k+1}}^{c_0}}^{-1} \otimes q_{b_k}^{c_0} \otimes \gamma_{b_{k+1}}^{b_k} \right\|^2 \tag{9}$$

where

$$\gamma_{b_{k+1}}^{b_k} \approx \hat{\gamma}_{b_{k+1}}^{b_k} \otimes \begin{bmatrix} 1 \\ \frac{1}{2} J_{b_w}^{\gamma} \delta b_w \end{bmatrix}$$
After the gyroscope zero offset is obtained, the IMU pre-integration values can be updated.
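The minimization of equation (9) reduces to a small linear least-squares problem once the rotation mismatch is linearized around the current bias. The sketch below is one possible illustration of this step; the quaternion convention ([w, x, y, z]) and all function names are assumptions:

```python
# Illustrative gyroscope zero-offset estimation from equation (9).
import numpy as np

def q_mul(a, b):
    # Hamilton product of quaternions [w, x, y, z].
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                     w1*x2 + x1*w2 + y1*z2 - z1*y2,
                     w1*y2 - x1*z2 + y1*w2 + z1*x2,
                     w1*z2 + x1*y2 - y1*x2 + z1*w2])

def q_conj(q):
    return np.array([q[0], -q[1], -q[2], -q[3]])

def estimate_gyro_bias(q_vis_pairs, gamma_hats, J_gammas):
    """q_vis_pairs: visual rotations (q_bk_c0, q_bk1_c0) per window pair;
    gamma_hats: IMU pre-integrated rotations; J_gammas: (3,3) bias Jacobians."""
    A, b = np.zeros((3, 3)), np.zeros(3)
    for (q_k, q_k1), gamma, J in zip(q_vis_pairs, gamma_hats, J_gammas):
        # Quaternion mismatch between visual and pre-integrated rotation.
        q_err = q_mul(q_conj(gamma), q_mul(q_conj(q_k), q_k1))
        A += J.T @ J
        b += J.T @ (2.0 * q_err[1:])       # vector part drives the correction
    return np.linalg.solve(A, b)           # delta b_w
```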
After the gyroscope zero offset is obtained, the velocity, the gravitational acceleration and the scale are solved. First, the optimization variable is designed:
$$\mathcal{X}_I = \left[ v_{b_0}^{b_0},\; v_{b_1}^{b_1},\; \ldots,\; v_{b_n}^{b_n},\; g^{c_0},\; s \right] \tag{10}$$

where $v_{b_k}^{b_k}$ is the velocity of frame $b_k$ in its own body frame, $g^{c_0}$ is the gravity vector in the first camera frame, and $s$ is the visual scale.
For two consecutive frames $b_k$ and $b_{k+1}$ within the window, it is possible to obtain:

$$\alpha_{b_{k+1}}^{b_k} = R_{c_0}^{b_k} \left( s \left( \bar{p}_{b_{k+1}}^{c_0} - \bar{p}_{b_k}^{c_0} \right) + \frac{1}{2} g^{c_0} \Delta t_k^2 - R_{b_k}^{c_0} v_{b_k}^{b_k} \Delta t_k \right) \tag{11}$$

$$\beta_{b_{k+1}}^{b_k} = R_{c_0}^{b_k} \left( g^{c_0} \Delta t_k + R_{b_{k+1}}^{c_0} v_{b_{k+1}}^{b_{k+1}} - R_{b_k}^{c_0} v_{b_k}^{b_k} \right) \tag{12}$$
A linear measurement model is thus obtained:

$$\hat{z}_{b_{k+1}}^{b_k} = \begin{bmatrix} \hat{\alpha}_{b_{k+1}}^{b_k} \\ \hat{\beta}_{b_{k+1}}^{b_k} \end{bmatrix} = H_{b_{k+1}}^{b_k} \mathcal{X}_I + n_{b_{k+1}}^{b_k} \tag{13}$$

where $H_{b_{k+1}}^{b_k}$ is the corresponding measurement matrix and $n_{b_{k+1}}^{b_k}$ is the measurement noise.
The least-squares problem can thus be solved:

$$\min_{\mathcal{X}_I} \sum_{k \in \mathcal{B}} \left\| \hat{z}_{b_{k+1}}^{b_k} - H_{b_{k+1}}^{b_k} \mathcal{X}_I \right\|^2 \tag{14}$$
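Because equation (13) is linear in $\mathcal{X}_I$, the initialization reduces to stacking all per-frame blocks and solving one linear least-squares problem (14). A minimal numpy sketch, with block shapes assumed for illustration:

```python
# Illustrative solver for the linear initialization problem (14).
import numpy as np

def solve_initialization(H_blocks, z_blocks):
    """H_blocks: list of (6, state_dim) matrices H_{b_k+1}^{b_k};
    z_blocks: list of (6,) measurement vectors z_{b_k+1}^{b_k}."""
    H = np.vstack(H_blocks)
    z = np.concatenate(z_blocks)
    # min_x ||z - H x||^2; x stacks [v_b0, ..., v_bn, g_c0, s].
    x, *_ = np.linalg.lstsq(H, z, rcond=None)
    return x
```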
To reduce the computational load, the visual-inertial SLAM maintains only the key-frame poses within the sliding window and the corresponding feature points. The state quantities can be expressed as:

$$\mathcal{X} = \left[ x_n,\; x_{n+1},\; \ldots,\; x_{n+N},\; \lambda_m,\; \lambda_{m+1},\; \ldots,\; \lambda_{m+M} \right] \tag{15}$$

where $x_i$ denotes a key-frame state quantity within the sliding window and $\lambda_i$ denotes the state quantity corresponding to a feature point.
$$x_i = \left[ p_{b_i}^w,\; q_{b_i}^w,\; v_{b_i}^w,\; b_a,\; b_g \right] \tag{16}$$

where $p_{b_i}^w$ represents the position offset from the body coordinate system of the $i$-th key frame to the world coordinate system, $q_{b_i}^w$ represents the attitude transfer from the body coordinate system of the $i$-th key frame to the world coordinate system, $v_{b_i}^w$ represents the velocity of the body coordinate system expressed in the world coordinate system, $b_a$ represents the accelerometer zero offset, and $b_g$ represents the gyroscope zero offset.
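As a data structure, the sliding-window state of equations (15)-(16) can be sketched as follows; the field names are illustrative assumptions, not the patent's notation:

```python
# Illustrative sliding-window state container for equations (15)-(16).
from dataclasses import dataclass, field
from typing import List
import numpy as np

@dataclass
class KeyFrameState:                      # one x_i of equation (16)
    p_wb: np.ndarray = field(default_factory=lambda: np.zeros(3))   # p_bi^w
    q_wb: np.ndarray = field(
        default_factory=lambda: np.array([1.0, 0.0, 0.0, 0.0]))     # q_bi^w
    v_wb: np.ndarray = field(default_factory=lambda: np.zeros(3))   # v_bi^w
    b_a: np.ndarray = field(default_factory=lambda: np.zeros(3))    # accel zero offset
    b_g: np.ndarray = field(default_factory=lambda: np.zeros(3))    # gyro zero offset

@dataclass
class SlidingWindowState:                 # the full X of equation (15)
    keyframes: List[KeyFrameState]        # x_n ... x_{n+N}
    landmarks: np.ndarray                 # lambda_m ... lambda_{m+M}
```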
At the back end of the SLAM, the IMU pre-integration residual, the visual re-projection residual and the prior residual are constructed, and a cost function is built for nonlinear optimization. The cost function is:
$$\min_{\mathcal{X}} \left\{ \left\| r_p - H_p \mathcal{X} \right\|^2 + \sum_{k \in \mathcal{B}} \left\| r_{\mathcal{B}}\!\left( \hat{z}_{b_{k+1}}^{b_k}, \mathcal{X} \right) \right\|_{P_{b_{k+1}}^{b_k}}^2 + \sum_{(l,j) \in \mathcal{C}} \left\| r_{\mathcal{C}}\!\left( \hat{z}_l^{c_j}, \mathcal{X} \right) \right\|_{P_l^{c_j}}^2 \right\} \tag{17}$$

where $\left\| r_p - H_p \mathcal{X} \right\|^2$ is the prior residual, $r_{\mathcal{B}}$ is the IMU pre-integration residual, and $r_{\mathcal{C}}$ is the visual re-projection residual.
After the cost function is constructed, the Levenberg-Marquardt (LM) algorithm is used for iterative nonlinear optimization.
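In spirit, the back end stacks all whitened residual blocks into one vector and hands it to an LM solver. The sketch below uses scipy's least_squares as a stand-in for the Ceres-style solvers typical in visual-inertial SLAM; the residual callables are placeholders and the whole arrangement is an assumption for illustration:

```python
# Illustrative LM solution of a stacked cost of the form (17).
import numpy as np
from scipy.optimize import least_squares

def total_residual(x, prior_fn, imu_fns, visual_fns):
    # Each block is assumed pre-whitened by the square root of its
    # information matrix, so the plain 2-norm realizes the P-weighted norms.
    blocks = [prior_fn(x)]
    blocks += [f(x) for f in imu_fns]     # one per consecutive key-frame pair
    blocks += [f(x) for f in visual_fns]  # one per feature observation
    return np.concatenate(blocks)

# x0 stacks the window states of equation (15); the call would then be:
# result = least_squares(total_residual, x0, method="lm",
#                        args=(prior_fn, imu_fns, visual_fns))
```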
For the pedestrian walking scene, pedestrian dead reckoning is used for analysis and solution, under the assumption that the step length information has certain regular characteristics during pedestrian movement. Pedestrian dead reckoning comprises three core steps: gait detection, step length estimation and heading estimation. The pedestrian step length, gait and heading angle are estimated with inertial sensors (gyroscope, accelerometer, magnetometer, etc.), from which the specific position of the pedestrian is reckoned.
As shown in FIG. 3, PDR continuously tracks and positions the pedestrian by measuring the distance and direction of movement from a known starting position. Steps are detected and the step length is estimated from the acceleration sensor, the direction of the pedestrian is determined with the orientation sensor and the gyroscope, and finally all of the information is integrated to achieve continuous tracking and positioning of the pedestrian.
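The dead-reckoning update of FIG. 3 amounts to advancing the position by one step length along the estimated heading per detected step. A minimal sketch, assuming a 2-D east-north frame with heading measured clockwise from north:

```python
# Illustrative PDR position propagation for one detected step.
import math

def pdr_update(x_east, y_north, step_length, heading_rad):
    x_east += step_length * math.sin(heading_rad)   # east component
    y_north += step_length * math.cos(heading_rad)  # north component
    return x_east, y_north
```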
There are three main models for estimating the pedestrian step length: the constant model, the linear model and the nonlinear model. The constant model divides a measured walking distance by the counted number of steps to obtain an average step length, i.e. the step length is treated as a constant. The linear model collects walking data from pedestrians of different heights and assumes a linear relationship between step length and step frequency. Here a nonlinear model is used, and the step length of each step of the pedestrian is:
$$L = K \cdot \sqrt[4]{a_{max} - a_{min}} \tag{18}$$

where $a_{max}$ and $a_{min}$ are the maximum and minimum acceleration detected within one step, and $K$ represents the scale constraint of the step length. The average velocity in the pedestrian coordinate system can be expressed as:

$$\bar{v} = \frac{L}{\Delta t} \tag{19}$$

where $\Delta t$ is the duration of the step.
The pose and velocity obtained in this way are estimated values and contain noise; they are expanded as follows:

$$\hat{p}^{pdr} = p + n_p \tag{20}$$

$$\hat{v}^{pdr} = v + n_v \tag{21}$$

where $n_p$ and $n_v$ denote the position noise and velocity noise of the PDR solution.
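Equations (18)-(19) can be computed directly from the accelerometer samples of one detected step. In the sketch below, the calibration constant K is a per-pedestrian assumption that must be fitted against walks of known length:

```python
# Illustrative nonlinear step length model (18) and average velocity (19).
import numpy as np

def step_length_nonlinear(accel_norms, K=0.48):
    """accel_norms: acceleration magnitudes (m/s^2) within one detected step."""
    a_max, a_min = np.max(accel_norms), np.min(accel_norms)
    return K * (a_max - a_min) ** 0.25        # L = K * (a_max - a_min)^(1/4)

def average_step_velocity(step_length, step_duration):
    return step_length / step_duration        # eq. (19)
```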
For the case in which the visual-inertial SLAM system is not diverging rapidly, the step length and velocity information obtained by PDR are used to construct a residual term that assists the system during nonlinear optimization:
$$r_{pdr}\!\left( \hat{z}_{pdr}, \mathcal{X} \right) = \begin{bmatrix} r_v \\ r_{\Delta p} \end{bmatrix} \tag{22}$$

where $r_v$ represents the constraint residual of the PDR velocity on the system and $r_{\Delta p}$ represents the constraint residual of the PDR step length increment on the system; the corresponding covariance matrix and information matrix are obtained from the analysis of the noise error characteristics.
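One way to realize the residual block of equation (22) is sketched below: the velocity residual compares the PDR average velocity with the window velocity state, the step-increment residual compares the PDR step length with the distance between consecutive key-frame positions, and whitening by an inverse square-root covariance reflects the noise characteristic analysis. Shapes and names are illustrative assumptions:

```python
# Illustrative PDR residual block of equation (22).
import numpy as np

def pdr_residual(v_state, p_k, p_k1, v_pdr, step_len_pdr, sqrt_info):
    r_v = v_state - v_pdr                              # velocity residual r_v
    r_dp = np.linalg.norm(p_k1 - p_k) - step_len_pdr   # step-increment residual
    r = np.concatenate([r_v, [r_dp]])
    return sqrt_info @ r                               # whitened (4,) residual
```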
For the situation in which the visual-inertial SLAM system diverges rapidly, constraining the system through the residual alone no longer achieves an obvious constraining effect, so the state quantities corresponding to the PDR epochs within the sliding window are fixed directly:
$$x_k = \hat{x}_k^{pdr} \tag{23}$$

where $\hat{x}_k^{pdr}$ is obtained by pedestrian dead reckoning, and the subsequent optimization is an iterative nonlinear optimization performed on the basis of this fixed state quantity.
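The mode switch of step S400 then needs only a divergence test and a mechanism for holding the fixed states constant. The sketch below judges divergence by an implausible window velocity, a threshold that is purely an assumption since the patent does not specify the criterion:

```python
# Illustrative divergence check and pose fixing for step S400.
import numpy as np

VEL_DIVERGENCE_THRESHOLD = 5.0  # m/s; implausible for walking (assumed value)

def diverged(window_velocities):
    """window_velocities: (N, 3) velocity states in the sliding window."""
    speeds = np.linalg.norm(window_velocities, axis=1)
    return np.max(speeds) > VEL_DIVERGENCE_THRESHOLD

def fix_window_states(window_states, pdr_states, fixed_mask):
    # Overwrite the states at PDR epochs with the PDR solution and mark them
    # constant; the subsequent iterative optimization adjusts only the rest.
    for i, fixed in enumerate(fixed_mask):
        if fixed:
            window_states[i] = pdr_states[i]
    return window_states, fixed_mask
```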
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents and improvements made within the spirit and principle of the present invention are intended to be included within the scope of the present invention.

Claims (9)

1. A visual inertial SLAM positioning method based on pedestrian step length information assistance, characterized by comprising the following steps:
step S100: performing an initialization operation using visual and inertial information;
step S200: performing front-end image optical flow tracking and IMU pre-integration;
step S300: constructing a cost function from visual information, IMU information and prior information in the visual inertial SLAM, adding residual terms constructed from pedestrian step length and velocity observations to the cost function, and optimizing;
step S400: designing strong constraints for extreme scenarios in which visual tracking fails.
2. The visual inertial SLAM positioning method based on pedestrian step length information assistance according to claim 1, characterized in that: in step S100, the amount of rotation is calculated from the gyroscope measurements and the visual measurements during the initialization operation.
3. The visual inertial SLAM positioning method based on pedestrian step length information assistance according to claim 2, characterized in that: the gyroscope measurement mainly contains measurement noise and gyroscope zero-offset error, and the visual measurement mainly contains measurement noise.
4. The visual inertial SLAM positioning method based on pedestrian step length information assistance according to claim 1, characterized in that: in step S100, the gyroscope zero offset, the visual scale, the velocity state quantities, and an initial global map with scale information are estimated online from the visual and inertial information.
5. The visual inertial SLAM positioning method based on pedestrian step length information assistance according to claim 1, characterized in that: in step S200, for the image information, corresponding feature points between images are obtained by optical flow tracking for each received image frame, and key frames are selected according to time interval or image parallax.
6. The visual inertial SLAM positioning method based on pedestrian step length information assistance according to claim 1, characterized in that: in step S200, for the inertial navigation information, pre-integration is performed on the IMU measurements between images to obtain the relative pose change measured by the IMU between images.
7. The visual inertial SLAM positioning method based on pedestrian step length information assistance according to claim 1, characterized in that: in the pedestrian walking scene, analysis and solution are performed using pedestrian dead reckoning, which comprises gait detection, step length estimation and heading estimation.
8. The visual inertial SLAM positioning method based on pedestrian step length information assistance according to claim 7, characterized in that: the pedestrian step length, gait and heading angle are calculated using inertial sensors such as a gyroscope, an accelerometer or a magnetometer.
9. The visual inertial SLAM positioning method based on pedestrian step length information assistance according to claim 7, characterized in that: a nonlinear model is used to estimate the pedestrian step length.
CN202110723640.5A 2021-06-29 2021-06-29 Visual inertia SLAM positioning method based on pedestrian step information assistance Active CN113639743B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110723640.5A CN113639743B (en) 2021-06-29 2021-06-29 Visual inertia SLAM positioning method based on pedestrian step information assistance

Publications (2)

Publication Number Publication Date
CN113639743A 2021-11-12
CN113639743B CN113639743B (en) 2023-10-17

Family

ID=78416287

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110723640.5A Active CN113639743B (en) 2021-06-29 2021-06-29 Visual inertia SLAM positioning method based on pedestrian step information assistance

Country Status (1)

Country Link
CN (1) CN113639743B (en)

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160327395A1 (en) * 2014-07-11 2016-11-10 Regents Of The University Of Minnesota Inverse sliding-window filters for vision-aided inertial navigation systems
CN108106613A (en) * 2017-11-06 2018-06-01 上海交通大学 The localization method and system of view-based access control model auxiliary
CN109631889A (en) * 2019-01-07 2019-04-16 重庆邮电大学 Mems accelerometer error compensating method based on LMS adaptive-filtering and gradient decline
CN111982103A (en) * 2020-08-14 2020-11-24 北京航空航天大学 Point-line comprehensive visual inertial odometer method with optimized weight
CN112304307A (en) * 2020-09-15 2021-02-02 浙江大华技术股份有限公司 Positioning method and device based on multi-sensor fusion and storage medium
CN112749665A (en) * 2021-01-15 2021-05-04 东南大学 Visual inertia SLAM method based on image edge characteristics
CN112964257A (en) * 2021-02-05 2021-06-15 南京航空航天大学 Pedestrian inertia SLAM method based on virtual landmarks
CN112985450A (en) * 2021-02-09 2021-06-18 东南大学 Binocular vision inertial odometer method with synchronous time error estimation

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
ANTIGNY, NICOLAS et al.: "Solving Monocular Visual Odometry Scale Factor with Adaptive Step Length Estimates for Pedestrians Using Handheld Devices", Sensors, vol. 19, no. 4, pages 1-18
叶俊华: "Research on Multi-Sensor Fusion Pedestrian Navigation and Positioning Algorithms Based on Smart Terminals", China Doctoral Dissertations Full-text Database (Information Science and Technology), pages 136-75
闫大禹 et al.: "A Survey of the Development Status of Indoor Positioning Technology in China", Journal of Navigation and Positioning, vol. 7, no. 4, pages 5-12
龚赵慧; 张霄力; 彭侠夫; 李鑫: "Semi-direct Monocular Visual Odometry Based on Visual-Inertial Fusion", Robot, no. 05, pages 85-95

Also Published As

Publication number Publication date
CN113639743B (en) 2023-10-17

Similar Documents

Publication Publication Date Title
Cortés et al. ADVIO: An authentic dataset for visual-inertial odometry
CN109993113B (en) Pose estimation method based on RGB-D and IMU information fusion
CN110044354A (en) A kind of binocular vision indoor positioning and build drawing method and device
JP2022106924A (en) Device and method for autonomous self-position estimation
CN108406731A (en) A kind of positioning device, method and robot based on deep vision
CN111024066A (en) Unmanned aerial vehicle vision-inertia fusion indoor positioning method
CN109991636A (en) Map constructing method and system based on GPS, IMU and binocular vision
Zhao et al. Reconstructing urban 3D model using vehicle-borne laser range scanners
CN110553648A (en) method and system for indoor navigation
CN111161337B (en) Accompanying robot synchronous positioning and composition method in dynamic environment
WO2022193508A1 (en) Method and apparatus for posture optimization, electronic device, computer-readable storage medium, computer program, and program product
CN111288989B (en) Visual positioning method for small unmanned aerial vehicle
CN208323361U (en) A kind of positioning device and robot based on deep vision
KR20210026795A (en) System for Positioning Hybrid Indoor Localization Using Inertia Measurement Unit Sensor and Camera
CN114529576A (en) RGBD and IMU hybrid tracking registration method based on sliding window optimization
WO2022062480A1 (en) Positioning method and positioning apparatus of mobile device
KR20230008000A (en) Positioning method and apparatus based on lane line and feature point, electronic device, storage medium, computer program and autonomous vehicle
CN112731503B (en) Pose estimation method and system based on front end tight coupling
EP3227634A1 (en) Method and system for estimating relative angle between headings
Li et al. RD-VIO: Robust visual-inertial odometry for mobile augmented reality in dynamic environments
CN112907633A (en) Dynamic characteristic point identification method and application thereof
CN113639743B (en) Visual inertia SLAM positioning method based on pedestrian step information assistance
Irmisch et al. Robust visual-inertial odometry in dynamic environments using semantic segmentation for feature selection
CN111553342A (en) Visual positioning method and device, computer equipment and storage medium
CN113327270A (en) Visual inertial navigation method, device, equipment and computer readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant