CN110118554B - SLAM method, apparatus, storage medium and device based on visual inertia - Google Patents

SLAM method, apparatus, storage medium and device based on visual inertia

Info

Publication number
CN110118554B
CN110118554B (application CN201910411404.2A)
Authority
CN
China
Prior art keywords
pose
initial
local map
optimized
image information
Prior art date
Legal status
Active
Application number
CN201910411404.2A
Other languages
Chinese (zh)
Other versions
CN110118554A (en)
Inventor
高军强
林义闽
廉士国
Current Assignee
Cloudminds Shanghai Robotics Co Ltd
Original Assignee
Cloudminds Robotics Co Ltd
Priority date
Filing date
Publication date
Application filed by Cloudminds Robotics Co Ltd filed Critical Cloudminds Robotics Co Ltd
Priority to CN201910411404.2A
Publication of CN110118554A
Application granted
Publication of CN110118554B
Legal status: Active

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/10 Navigation by using measurements of speed or acceleration
    • G01C21/12 Navigation by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning
    • G01C21/16 Dead reckoning by integrating acceleration or speed, i.e. inertial navigation
    • G01C21/165 Inertial navigation combined with non-inertial navigation instruments
    • G01C21/20 Instruments for performing navigational calculations

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Automation & Control Theory (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Processing Or Creating Images (AREA)
  • Image Analysis (AREA)

Abstract

The present disclosure relates to a SLAM method, apparatus, storage medium and device based on visual inertia, in the technical field of wireless positioning. The method comprises: fusing image information acquired by an image acquisition unit at a first time and motion information acquired by an inertial measurement unit (IMU) according to an extended Kalman filter algorithm to obtain an initial pose; determining an initial local map according to the image information and the initial pose; optimizing the initial pose and the initial local map according to a preset nonlinear optimization algorithm to obtain an optimized pose and an optimized local map; and updating a target initial pose and a target initial local map according to the optimized pose and the optimized local map. The method can improve both the computational efficiency and the positioning accuracy of SLAM.

Description

SLAM method, apparatus, storage medium and device based on visual inertia
Technical Field
The present disclosure relates to the field of wireless positioning technologies, and in particular, to a SLAM method, apparatus, storage medium, and device based on visual inertia.
Background
With the continuous development of terminal technology, robots have entered many technical fields to help people complete various tasks. A robot that moves continuously during its work needs accurate positioning in order to complete its tasks well. The robot uses various information collection devices to gather external information about the physical environment in which it is located, and thereby performs SLAM (Simultaneous Localization and Mapping). The external information typically includes both visual information (e.g., images) and motion information (acceleration, orientation, etc.), which need to be combined to implement SLAM. In the prior art, there are generally two methods for combining visual information and motion information: the Kalman filter algorithm and the nonlinear optimization algorithm. The Kalman filter algorithm is simple to implement and computationally efficient, but its positioning accuracy is low, making it difficult to adapt to application scenarios that require accurate positioning; the nonlinear optimization algorithm achieves higher positioning accuracy but iterates slowly and is computationally expensive.
Disclosure of Invention
The disclosure aims to provide a SLAM method, apparatus, storage medium and device based on visual inertia, to solve the prior-art problem that it is difficult to achieve both computational efficiency and positioning accuracy in SLAM.
In order to achieve the above object, according to a first aspect of embodiments of the present disclosure, there is provided a visual inertia based SLAM method, the method including:
fusing image information acquired by the first time image acquisition unit and motion information acquired by the inertial measurement unit IMU according to an extended Kalman filtering algorithm to acquire an initial pose;
determining an initial local map according to the image information and the initial pose;
optimizing the initial pose and the initial local map according to a preset nonlinear optimization algorithm to obtain an optimized pose and an optimized local map;
and updating a target initial pose and a target initial local map according to the optimized pose and the optimized local map, wherein the target initial pose is an initial pose obtained by fusing image information acquired by the image acquisition unit and motion information acquired by the IMU at a second moment according to the extended Kalman filtering algorithm, the target initial local map is an initial local map determined according to the image information acquired by the image acquisition unit and the target initial pose at the second moment, and the second moment is a moment of acquiring the optimized pose and the optimized local map.
Optionally, the fusing the image information acquired by the first time image acquisition unit and the motion information acquired by the inertial measurement unit IMU according to an extended kalman filter algorithm to acquire an initial pose includes:
taking the motion information and historical pose as input of an IMU motion model to obtain a predicted pose output by the IMU motion model, wherein the historical pose is an initial pose obtained by fusing image information acquired by the image acquisition unit at a third moment and the motion information acquired by the IMU according to the extended Kalman filtering algorithm, and the third moment is a moment before the first moment;
processing the image information according to a preset image processing algorithm to obtain a visual constraint relation included in the image information;
updating the predicted pose according to the visual constraint relationship to obtain the initial pose, wherein the visual constraint relationship comprises: the reprojection error is minimal, or the photometric error is minimal.
Optionally, the determining an initial local map according to the image information and the initial pose includes:
determining pixel points which meet preset conditions in the image information as feature points;
and performing three-dimensional reconstruction according to the feature points and the initial pose to acquire the initial local map.
Optionally, the optimizing the initial pose and the initial local map according to a preset nonlinear optimization algorithm to obtain an optimized pose and an optimized local map includes:
when the image information is a key frame, establishing an objective function according to the initial pose, the initial local map and a preset number of key pose information before the first time, wherein the objective function comprises a measurement residual of the IMU and a measurement residual of the image acquisition unit, and the key pose information is pose information included in historical image information when the historical image information acquired by the image acquisition unit before the first time is the key frame;
and iterating the objective function according to a preset algorithm to obtain the corresponding optimized pose and the optimized local map under the condition that the objective function is minimum.
Optionally, the updating the target initial pose and the target initial local map according to the optimized pose and the optimized local map includes:
updating the target initial local map to the optimized local map;
when the visual constraint relationship is that the reprojection error is minimum, fusing the optimized pose and a preset number of pieces of key pose information before a second time according to a Kalman filter algorithm, so as to update the target initial pose to the pose output by the Kalman filter algorithm, wherein the key pose information is the pose information included in historical image information acquired by the image acquisition unit before the second time when that historical image information is a key frame;
when the visual constraint relationship is that the photometric error is minimum, acquiring an offset between the initial pose and the optimized pose;
and updating the target initial pose according to the offset.
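One common way to realize the offset update in the last two steps is to represent poses as homogeneous transforms and take the offset as the relative transform between the initial and optimized poses; the patent does not fix a representation, so this is only an illustrative sketch with made-up pose values (a planar rotation embedded in a 4x4 matrix for brevity).

```python
import numpy as np

def se3(theta, tx, ty):
    """A 2-D rigid transform embedded in a 4x4 homogeneous matrix."""
    c, s = np.cos(theta), np.sin(theta)
    T = np.eye(4)
    T[0, 0], T[0, 1], T[1, 0], T[1, 1] = c, -s, s, c
    T[0, 3], T[1, 3] = tx, ty
    return T

T_init = se3(0.00, 1.0, 0.0)            # initial pose at the first time
T_opt = se3(0.05, 1.1, 0.1)             # optimized pose from the back end
offset = T_opt @ np.linalg.inv(T_init)  # offset between initial and optimized

T_target = se3(0.00, 2.0, 0.0)          # target initial pose at the second time
T_target_new = offset @ T_target        # updated target initial pose
```

Applying the offset on the left transfers the correction computed at the first time onto the pose estimated at the second time without discarding the motion accumulated in between.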
Optionally, the updating the target initial pose and the target initial local map according to the optimized pose and the optimized local map further includes:
and storing the updated target initial local map into a global map.
According to a second aspect of embodiments of the present disclosure, there is provided a visual inertia based SLAM device, the device comprising:
the fusion module is used for fusing the image information acquired by the first time image acquisition unit and the motion information acquired by the inertial measurement unit IMU according to an extended Kalman filtering algorithm so as to acquire an initial pose;
the determining module is used for determining an initial local map according to the image information and the initial pose;
the optimization module is used for optimizing the initial pose and the initial local map according to a preset nonlinear optimization algorithm so as to obtain an optimized pose and an optimized local map;
and the updating module is used for updating a target initial pose and a target initial local map according to the optimized pose and the optimized local map, the target initial pose is an initial pose obtained by fusing image information acquired by the image acquisition unit and motion information acquired by the IMU at a second moment according to the extended Kalman filtering algorithm, the target initial local map is an initial local map determined according to the image information acquired by the image acquisition unit and the target initial pose at the second moment, and the second moment is the moment of acquiring the optimized pose and the optimized local map.
Optionally, the fusion module comprises:
the prediction submodule is used for taking the motion information and a historical pose as the input of an IMU motion model so as to acquire the predicted pose output by the IMU motion model, wherein the historical pose is an initial pose acquired by fusing image information acquired by the image acquisition unit at a third moment and the motion information acquired by the IMU according to the extended Kalman filtering algorithm, and the third moment is a moment before the first moment;
the image processing submodule is used for processing the image information according to a preset image processing algorithm so as to obtain a visual constraint relation included in the image information;
an updating submodule, configured to update the predicted pose according to the visual constraint relationship to obtain the initial pose, where the visual constraint relationship includes: the reprojection error is minimal, or the photometric error is minimal.
Optionally, the determining module includes:
the determining submodule is used for determining pixel points which meet preset conditions in the image information as feature points;
and the mapping submodule is used for carrying out three-dimensional reconstruction according to the characteristic points and the initial pose so as to acquire the initial local map.
Optionally, the optimization module comprises:
the establishing submodule is used for establishing an objective function according to the initial poses, the initial local map and a preset number of pieces of key pose information before the first moment when the image information is a key frame, wherein the objective function comprises measurement residual errors of the IMU and the measurement residual errors of the image acquisition unit, and the key pose information is pose information included in historical image information when the historical image information acquired by the image acquisition unit before the first moment is the key frame;
and the iteration submodule is used for iterating the objective function according to a preset algorithm so as to obtain the corresponding optimized pose and the optimized local map under the condition that the objective function is minimum.
Optionally, the update module includes:
the map updating submodule is used for updating the target initial local map into the optimized local map;
a pose updating sub-module, configured to fuse, when the visual constraint relationship is that a reprojection error is minimum, the optimized pose and a preset number of pieces of key pose information before a second time according to a kalman filter algorithm, so as to update the target initial pose to a pose output by the kalman filter algorithm, where the key pose information is pose information included in history image information acquired by the image acquisition unit before the second time when the history image information is a key frame;
and the pose updating sub-module is further used for acquiring an offset between the initial pose and the optimized pose when the visual constraint relationship is that the photometric error is minimum, and updating the target initial pose according to the offset.
Optionally, the map updating sub-module is further configured to store the updated target initial local map in a global map.
According to a third aspect of embodiments of the present disclosure, there is provided a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the visual inertia based SLAM method provided by the first aspect.
According to a fourth aspect of the embodiments of the present disclosure, there is provided an electronic apparatus including:
a memory having a computer program stored thereon;
a processor for executing the computer program in the memory to implement the steps of the visual inertia based SLAM method provided by the first aspect.
According to the technical scheme, the image information and the motion information acquired by the image acquisition unit and the IMU at the first moment are fused according to an extended Kalman filtering algorithm to acquire an initial pose, an initial local map is determined according to the image information and the initial pose, the initial pose and the initial local map are optimized according to a preset nonlinear optimization algorithm to acquire an optimized pose and an optimized local map, and finally, the target initial pose and the target initial local map corresponding to the second moment are updated according to the optimized pose and the optimized local map. The output of the extended Kalman filtering algorithm is used as the input of the nonlinear optimization algorithm, and the output of the nonlinear optimization algorithm is used for correcting the output of the extended Kalman filtering algorithm, so that the calculation efficiency and the positioning precision of the SLAM are improved.
Additional features and advantages of the disclosure will be set forth in the detailed description which follows.
Drawings
The accompanying drawings, which are included to provide a further understanding of the disclosure and are incorporated in and constitute a part of this specification, illustrate embodiments of the disclosure and together with the description serve to explain the disclosure without limiting the disclosure. In the drawings:
FIG. 1 is a flow diagram illustrating a visual inertia based SLAM method in accordance with one exemplary embodiment;
FIG. 2 is a flow diagram illustrating another visual inertia based SLAM method in accordance with one illustrative embodiment;
FIG. 3 is a flow diagram illustrating another visual inertia based SLAM method in accordance with one illustrative embodiment;
FIG. 4 is a flow diagram illustrating another visual inertia based SLAM method in accordance with one illustrative embodiment;
FIG. 5 is a flow diagram illustrating another visual inertia based SLAM method in accordance with one illustrative embodiment;
FIG. 6 is a flow diagram illustrating another visual inertia based SLAM method in accordance with one illustrative embodiment;
FIG. 7 is a block diagram illustrating a visual inertia based SLAM apparatus in accordance with an exemplary embodiment;
FIG. 8 is a block diagram illustrating another visual inertia based SLAM apparatus in accordance with an exemplary embodiment;
FIG. 9 is a block diagram illustrating another visual inertia based SLAM apparatus in accordance with an exemplary embodiment;
FIG. 10 is a block diagram illustrating another visual inertia based SLAM apparatus in accordance with an exemplary embodiment;
FIG. 11 is a block diagram illustrating another visual inertia based SLAM apparatus in accordance with an exemplary embodiment;
FIG. 12 is a block diagram illustrating an electronic device in accordance with an example embodiment.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the exemplary embodiments below are not intended to represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present disclosure, as detailed in the appended claims.
Before introducing the visual inertia based SLAM method, apparatus, storage medium, and device provided by the present disclosure, an application scenario involved in the various embodiments of the present disclosure is first introduced. The application scenario can be any terminal (for example, a robot) requiring SLAM, and the terminal can be provided with two information acquisition devices: an image acquisition unit and an IMU (Inertial Measurement Unit). The image acquisition unit may be, for example, a camera or a video camera, and the IMU may include sensors such as an accelerometer and a gyroscope. The image acquisition unit and the IMU can be fixed together (i.e., the image acquisition unit is consistent with the position of the IMU) to ensure that the image information acquired by the image acquisition unit is consistent with the motion information acquired by the IMU.
Fig. 1 is a flow diagram illustrating a visual inertia based SLAM method according to an exemplary embodiment, as shown in fig. 1, including the steps of:
and 101, fusing image information acquired by the first time image acquisition unit and motion information acquired by the inertial measurement unit IMU according to an extended Kalman filtering algorithm to acquire an initial pose.
And 102, determining an initial local map according to the image information and the initial pose.
For example, first, the image information of the physical environment where the terminal is located, acquired by the image acquisition unit at the first time, and the motion information of the terminal, acquired by the IMU at the first time, are obtained; the motion information may include the orientation, position, velocity, offset, etc. of the terminal. The pose of the terminal can be predicted from the inertial measurements contained in the motion information, and the predicted pose is then updated according to a visual constraint relationship (which can be understood as an observation equation) contained in the image information to determine the pose of the terminal. The image information and the motion information are fused according to the Extended Kalman Filter (EKF) algorithm to obtain the initial pose and the initial local map of the terminal at the first time. The extended Kalman filter algorithm computes quickly and has low CPU and memory occupancy, so its computational efficiency is high. First, the motion information can be used as the input of the motion equation of the extended Kalman filter algorithm to roughly predict the pose of the terminal and obtain the predicted pose output by the motion equation. Because the sensors contained in the IMU are usually consumer-grade sensors with large errors that drift easily, the predicted pose can be further updated according to the visual constraint relationship contained in the image information, used as the observation equation of the extended Kalman filter algorithm, so as to obtain the initial pose.
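The predict-then-update loop described above can be sketched in a few lines. The sketch below is a minimal stand-in, not the patent's filter: it uses a hypothetical 1-D position/velocity state instead of the full pose, a constant-acceleration motion model in place of the IMU motion model, and illustrative noise values.

```python
import numpy as np

def ekf_predict(x, P, accel, dt, q=1e-3):
    """Propagate the state with the motion equation (IMU acceleration input)."""
    F = np.array([[1.0, dt], [0.0, 1.0]])            # state transition
    u = np.array([0.5 * accel * dt**2, accel * dt])  # control from IMU accel
    x = F @ x + u
    P = F @ P @ F.T + q * np.eye(2)                  # covariance grows (drift)
    return x, P

def ekf_update(x, P, z, r=1e-2):
    """Correct the predicted pose with a visual position observation z."""
    H = np.array([[1.0, 0.0]])                       # observe position only
    S = H @ P @ H.T + r                              # innovation covariance
    K = P @ H.T / S                                  # Kalman gain
    x = x + (K * (z - H @ x)).ravel()
    P = (np.eye(2) - K @ H) @ P                      # covariance shrinks
    return x, P

x, P = np.zeros(2), np.eye(2)
x, P = ekf_predict(x, P, accel=1.0, dt=0.1)          # rough pose prediction
x, P = ekf_update(x, P, z=0.006)                     # visual correction
```

The structure mirrors the text: the motion information drives the prediction, and the visual observation pulls the predicted pose back toward what the image actually shows.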
Furthermore, the image information can be processed according to a preset image processing method to extract the texture information contained in it, and, combined with the initial pose, an initial local map reflecting the local environment of the terminal at the first time is obtained. It should be noted that the initial local map is a local map formed by the landmark points observable in a preset number of keyframes acquired by the image acquisition unit before the first time; it can be understood through a preset sliding time window that stores the N-1 keyframes before the first time together with the image information corresponding to the first time. The initial local map corresponds to the sliding time window: when the keyframes in the sliding time window are updated, the initial local map is updated accordingly.
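The sliding-time-window bookkeeping can be illustrated as follows; the class name, the frame/landmark identifiers, and the window size are all hypothetical, and the "map" is reduced to a set of landmark IDs rather than reconstructed 3-D points.

```python
from collections import deque

class SlidingWindowMap:
    """Local map defined by the landmarks visible in the N newest keyframes."""

    def __init__(self, n_keyframes):
        self.window = deque(maxlen=n_keyframes)  # oldest keyframe drops out

    def add_keyframe(self, frame_id, landmark_ids):
        self.window.append((frame_id, set(landmark_ids)))

    def local_map(self):
        """Union of landmarks observable from keyframes still in the window."""
        ids = set()
        for _, lms in self.window:
            ids |= lms
        return ids

win = SlidingWindowMap(n_keyframes=3)
for i, lms in enumerate([{1, 2}, {2, 3}, {3, 4}, {4, 5}]):
    win.add_keyframe(i, lms)                     # 4th keyframe evicts the 1st
```

When the fourth keyframe arrives, the first one (and any landmark seen only by it) leaves the local map automatically, matching the correspondence between window and map described above.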
And 103, optimizing the initial pose and the initial local map according to a preset nonlinear optimization algorithm to obtain an optimized pose and an optimized local map.
In this example, because the EKF algorithm has lower positioning accuracy, after the initial pose and the initial local map are obtained, a preset nonlinear optimization algorithm can be selected to perform BA (Bundle Adjustment) optimization on the initial pose and the initial local map, so as to obtain an optimized pose and an optimized local map with higher positioning accuracy. It can be understood that using the initial pose and the initial local map as the initial values of the nonlinear optimization algorithm allows the algorithm to converge faster (i.e., with higher computational efficiency) while obtaining more accurate positioning.
For example, an objective function may be established from the measurement residual of the IMU and the measurement residual of the image acquisition unit, the initial pose and the initial local map are substituted into the objective function, and the objective function is then iterated with a nonlinear optimization method until it reaches the optimal state (i.e., the objective function is minimized); the pose and local map determined at that point are the optimized pose and the optimized local map. The nonlinear optimization algorithm may include the Gauss-Newton iterative method, the Levenberg-Marquardt (LM) algorithm, etc.; the present disclosure does not specifically limit the nonlinear optimization algorithm.
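A toy Gauss-Newton iteration of this kind is sketched below. The objective stacks camera-style measurement residuals (nonlinear range terms to known anchor points) with an IMU-style residual (a weak prior toward the EKF estimate); all anchor positions, range values, and weights are invented for the example, and the EKF output seeds the optimizer exactly as the text describes.

```python
import numpy as np

anchors = np.array([[0.0, 0.0], [4.0, 0.0], [0.0, 3.0]])
ranges = np.array([2.5, 2.5, 2.5])      # simulated camera-derived ranges
x_imu = np.array([1.4, 1.6])            # initial value from the EKF front end

def residuals(x):
    r_cam = np.linalg.norm(anchors - x, axis=1) - ranges  # camera residuals
    r_imu = 0.1 * (x - x_imu)                             # IMU prior residual
    return np.concatenate([r_cam, r_imu])

def jacobian(x):
    d = x - anchors
    J_cam = d / np.linalg.norm(d, axis=1, keepdims=True)  # unit directions
    J_imu = 0.1 * np.eye(2)
    return np.vstack([J_cam, J_imu])

x = x_imu.copy()                        # EKF output seeds the optimizer
for _ in range(10):                     # Gauss-Newton iterations
    J, r = jacobian(x), residuals(x)
    x = x - np.linalg.solve(J.T @ J, J.T @ r)
```

Starting close to the optimum (thanks to the EKF seed) is exactly what makes the iteration converge in a handful of steps; a cold start far from the solution would need many more iterations or LM-style damping.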
And 104, updating a target initial pose and a target initial local map according to the optimized pose and the optimized local map, wherein the target initial pose is obtained by fusing the image information acquired by the image acquisition unit at the second moment and the motion information acquired by the IMU according to an extended Kalman filtering algorithm, the target initial local map is an initial local map determined according to the image information acquired by the image acquisition unit at the second moment and the target initial pose, and the second moment is the moment of acquiring the optimized pose and the optimized local map.
For example, when an optimized pose and an optimized local map with higher positioning accuracy are obtained, they may be fed back to steps 101 and 102 to optimize the target initial pose determined by the extended Kalman filter algorithm in step 101 and the target initial local map determined in step 102, thereby improving the positioning accuracy of the SLAM. It should be noted that, because the extended Kalman filter algorithm and the nonlinear optimization algorithm differ in computational efficiency, steps 101-102 and step 103 may be executed simultaneously; that is, they correspond to two threads that run in parallel: steps 101 and 102 form a front-end thread and step 103 a back-end thread, and the front-end thread does not need to wait for the back-end thread and can execute continuously. After determining the initial pose and the initial local map corresponding to the first time, the front-end thread sends them to the back-end thread; the back-end thread determines the optimized pose and the optimized local map and feeds them back to the front-end thread at the second time, so as to optimize the target initial pose and the target initial local map that the front-end thread determines at the second time.
After the initial pose and the initial local map corresponding to the first time are determined in step 101, the image information and motion information acquired by the image acquisition unit and the IMU at the next time after the first time (which can be understood as the first time + T) can continue to be fused to obtain the initial pose and the initial local map corresponding to that next time. Step 103 optimizes the initial pose and the initial local map corresponding to the first time to obtain the optimized pose and the optimized local map; because the iterative process of the nonlinear optimization algorithm takes a long time, the moment at which the optimized pose and the optimized local map are obtained is the second time, and several periods T (usually tens of milliseconds) may lie between the two, that is, the second time is the first time + x*T. Step 101 executes continuously, so at the second time the image information and motion information acquired by the image acquisition unit and the IMU at the second time are fused to obtain the target initial pose and the target initial local map corresponding to the second time. At this point, the initial pose and initial local map corresponding to the first time are outdated and of no practical significance, so the target initial pose and the target initial local map corresponding to the second time can be updated according to the optimized pose and the optimized local map, improving the positioning accuracy.
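The front-end/back-end timing described above can be sketched with two threads and queues. Everything concrete here is a stand-in: the sleep models the x*T optimization delay, the "+0.1" models the correction the optimizer finds, and re-applying the motion accumulated since the first time is one plausible way to merge the late result, not the patent's prescribed scheme.

```python
import queue
import threading
import time

to_backend = queue.Queue()
from_backend = queue.Queue()

def backend():
    """Back-end thread: slow nonlinear optimization (stubbed by a sleep)."""
    t1, pose = to_backend.get()
    time.sleep(0.01)                      # iteration spans several periods T
    from_backend.put((t1, pose + 0.1))    # pretend optimization shifts pose

worker = threading.Thread(target=backend)
worker.start()

pose_at = {}
pose = 0.0
for t in range(5):                        # front-end thread never waits
    pose += 1.0                           # EKF predict + update at time t
    pose_at[t] = pose
    if t == 0:                            # hand the first-time pose to back end
        to_backend.put((t, pose))

t1, optimized = from_backend.get()        # arrives at the second time (t = 4)
pose = optimized + (pose_at[4] - pose_at[t1])  # correct the current pose
worker.join()
```

The front end has already moved on by the time the optimized pose arrives, which is exactly why the correction is applied to the second-time pose rather than to the (now stale) first-time pose.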
In summary, the present disclosure first fuses image information and motion information acquired by an image acquisition unit and an IMU at a first time according to an extended kalman filter algorithm to obtain an initial pose, then determines an initial local map according to the image information and the initial pose, then optimizes the initial pose and the initial local map according to a preset nonlinear optimization algorithm to obtain an optimized pose and an optimized local map, and finally updates a target initial pose and a target initial local map corresponding to a second time according to the optimized pose and the optimized local map. The output of the extended Kalman filtering algorithm is used as the input of the nonlinear optimization algorithm, and the output of the nonlinear optimization algorithm is used for correcting the output of the extended Kalman filtering algorithm, so that the calculation efficiency and the positioning precision of the SLAM are improved.
Fig. 2 is a flow chart illustrating another visual inertia based SLAM method according to an exemplary embodiment, as shown in fig. 2, step 101 may be implemented by:
step 1011, taking the motion information and the historical pose as the input of the IMU motion model to obtain the predicted pose output by the IMU motion model, wherein the historical pose is the initial pose obtained by fusing the image information acquired by the image acquisition unit at the third moment and the motion information acquired by the IMU according to the extended Kalman filtering algorithm, and the third moment is the moment before the first moment.
Step 1012, processing the image information according to a preset image processing algorithm to obtain a visual constraint relation included in the image information.
1013, updating the predicted pose according to a visual constraint relation to obtain an initial pose, wherein the visual constraint relation comprises: the reprojection error is minimal, or the photometric error is minimal.
The extended Kalman filter algorithm in the present disclosure may be of two types: the MSCKF (Multi-State Constraint Kalman Filter) algorithm and the ROVIO (Robust Visual-Inertial Odometry) algorithm. Both algorithms use an IMU motion model as the motion equation of the extended Kalman filter algorithm to obtain the predicted pose; the visual constraint relationship in the MSCKF algorithm, minimal reprojection error, serves as the observation equation used to update the predicted pose, while the visual constraint relationship in the ROVIO algorithm, minimal photometric error, serves as the observation equation used to update the predicted pose.
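The two visual constraints can be illustrated side by side. The pinhole intrinsics, the 3-D point, the observed pixel, and the patch intensities below are all made-up example values; real MSCKF and ROVIO implementations build full residual stacks from these quantities rather than single scalars.

```python
import numpy as np

def reprojection_error(landmark_cam, observed_px, fx=500.0, fy=500.0,
                       cx=320.0, cy=240.0):
    """Pixel distance between a projected 3-D point and its detection (MSCKF-style)."""
    X, Y, Z = landmark_cam
    projected = np.array([fx * X / Z + cx, fy * Y / Z + cy])  # pinhole model
    return np.linalg.norm(projected - observed_px)

def photometric_error(patch_ref, patch_cur):
    """Sum of squared intensity differences between image patches (ROVIO-style)."""
    return float(np.sum((patch_ref - patch_cur) ** 2))

e_rep = reprojection_error(np.array([0.1, -0.05, 2.0]),
                           observed_px=np.array([346.0, 228.0]))
e_pho = photometric_error(np.full((3, 3), 10.0), np.full((3, 3), 12.0))
```

The difference in what is compared is the key point: the first error lives in pixel coordinates of detected features, the second directly in image intensities, which is why the two algorithms need different observation equations.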
Fig. 3 is a flow chart illustrating another visual inertia based SLAM method according to an example embodiment, as shown in fig. 3, step 102 may include the steps of:
Step 1021, determining pixel points meeting a preset condition in the image information as feature points.
Step 1022, performing three-dimensional reconstruction according to the feature points and the initial pose to acquire an initial local map.
For example, the initial local map may be obtained by first selecting pixel points satisfying a preset condition from the image information as feature points. There may be multiple feature points (e.g., 100), and the preset condition may be, for example, that the brightness difference between a point and its adjacent pixel points is greater than a preset brightness threshold. Then, three-dimensional reconstruction is performed using the feature points matched across multiple moments and the initial pose determined in step 101 to obtain a three-dimensional initial local map reflecting the local environment where the terminal is located at the first moment.
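A sketch of such brightness-difference feature selection (the 4-neighbourhood, threshold value, and function name are illustrative assumptions; practical systems usually use detectors such as FAST or Shi-Tomasi):

```python
import numpy as np

def select_feature_points(img, thresh=25, max_points=100):
    """Mark a pixel as a feature point when its brightness differs from
    at least one 4-neighbour by more than `thresh`."""
    h, w = img.shape
    pts = []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            c = int(img[y, x])
            diffs = (abs(c - int(img[y - 1, x])), abs(c - int(img[y + 1, x])),
                     abs(c - int(img[y, x - 1])), abs(c - int(img[y, x + 1])))
            if max(diffs) > thresh:
                pts.append((x, y))           # feature point (column, row)
                if len(pts) >= max_points:
                    return pts
    return pts
```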
The following is a detailed description of the implementation of the two algorithms:
In the MSCKF algorithm, that is, when the visual constraint relation is that the reprojection error is minimum, the pose information obtained in step 1011, the motion information, and a preset number of pieces of key pose information before the first moment (for example, the latest key pose information before the first moment) are used as the inputs of the IMU motion model to obtain the predicted pose output by the IMU motion model.
For example, the state vector to be estimated by the MSCKF algorithm may be

χ = (x_IMU, x_C1, …, x_C(N-1))

where x_IMU = (q, b_g, v, b_a, p) is the IMU state estimated from the motion information measured by the IMU: q is the orientation of the IMU in the global coordinate system, b_g is the offset of the gyroscope, v is the velocity of the IMU in the global coordinate system, b_a is the offset of the accelerometer, and p is the position of the IMU in the global coordinate system, together constituting the pose information acquired in step 1011. x_Ci = (q_Ci, p_Ci) is the i-th of the N-1 pieces of key pose information before the first moment, where q_Ci is the orientation and p_Ci the position of the image acquisition unit in the global coordinate system in that key pose information. The predicted pose is then updated under the visual constraint relation that the reprojection error is minimum, and the pose so obtained is the initial pose.
It should be noted that the IMU motion model describes a constraint relationship between pose information and motion information. Since the image acquisition unit and the IMU are fixed together, the image information and the motion information are consistent; for example, double-integrating the acceleration measured by the accelerometer over time yields the position in the pose information, and integrating the angular velocity measured by the gyroscope yields the orientation in the pose information.
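The double-integration consistency can be checked numerically; this sketch (not from the patent) integrates accelerometer samples at a fixed interval dt, which for a rigidly mounted camera/IMU pair should agree with the position change in the pose information:

```python
def integrate_position(p0, v0, accels, dt):
    """Double-integrate acceleration samples over time: each step adds
    the displacement v*dt + 0.5*a*dt^2 and updates the velocity."""
    p, v = p0, v0
    for a in accels:
        p += v * dt + 0.5 * a * dt * dt   # second integral: position
        v += a * dt                       # first integral: velocity
    return p, v
```

With a constant acceleration of 2 m/s² over 1 s (ten 0.1 s samples), the integrated position is 1.0 m and the velocity 2.0 m/s, matching p = ½at².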
Key pose information can be understood as the pose information contained in the image information at a moment when the image information acquired by the image acquisition unit is a key frame. The preset number of pieces of key pose information before the first moment can be understood as a preset sliding time window storing N pieces of pose information: the N-1 pieces of key pose information before the first moment and the pose information corresponding to the first moment. A rule for determining whether image information is a key frame may be to compare its pose with that of the image information acquired at the moment of the previous key frame: when the change in position exceeds a preset distance threshold and/or the change in orientation exceeds a preset angle threshold, the image information is determined to be a key frame.
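A sketch of the key-frame rule just described (the threshold values, the single-angle orientation, and the dictionary pose format are illustrative assumptions):

```python
import math

def is_key_frame(pose, last_kf_pose, dist_thresh=0.3, angle_thresh=math.radians(15)):
    """Declare a key frame when the position change exceeds a preset
    distance threshold and/or the orientation change exceeds a preset
    angle threshold, relative to the previous key frame's pose."""
    dx = [a - b for a, b in zip(pose["position"], last_kf_pose["position"])]
    dist = math.sqrt(sum(d * d for d in dx))          # position change
    dyaw = abs(pose["yaw"] - last_kf_pose["yaw"])     # orientation change (1-DoF)
    return dist > dist_thresh or dyaw > angle_thresh
```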
In the ROVIO algorithm, that is, when the visual constraint relation is that the photometric error (the photometric error of the feature points) is minimum, the pose information obtained in step 1011, the motion information, and the landmark points are used as the input of the IMU motion model to obtain the predicted pose output by the IMU motion model, where the landmark points are the coordinates of the feature points in the local map.
For example, the state vector to be estimated by the ROVIO algorithm may be

χ = (r, v, q, b_f, b_w, c, z, μ_0, …, μ_M, ρ_0, …, ρ_M)

where r is the position of the IMU in the global coordinate system, v is the velocity of the IMU in the global coordinate system, q is the orientation of the IMU in the global coordinate system, b_f is the offset of the accelerometer, b_w is the offset of the gyroscope, c is the translation component of the extrinsic parameters between the image acquisition unit and the IMU, z is the rotation component of those extrinsic parameters, μ_0, …, μ_M are the bearing vectors of the M feature points in the coordinate system of the image acquisition unit, and ρ_0, …, ρ_M are the inverse-depth parameters of the M feature points in that coordinate system, the inverse depth being the reciprocal of the distance. The predicted pose is then updated under the visual constraint relation that the photometric error is minimum, and the pose so obtained is the initial pose.
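The bearing/inverse-depth parameterization means a landmark is recovered in the camera frame as the bearing vector divided by the inverse depth; a small sketch (function name assumed):

```python
import numpy as np

def landmark_from_bearing(mu, rho):
    """Recover a 3-D landmark in the image-acquisition-unit frame from a
    bearing vector `mu` and inverse-depth parameter `rho` (depth = 1/rho)."""
    mu = np.asarray(mu, dtype=float)
    mu = mu / np.linalg.norm(mu)   # normalize to a unit bearing vector
    return mu / rho                # scale by depth = 1 / rho
```

A feature seen along the optical axis with ρ = 0.5 lies at depth 2 in front of the camera.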
Fig. 4 is a flow chart illustrating another visual inertia based SLAM method according to an example embodiment, as shown in fig. 4, step 103 comprising:
Step 1031, when the image information is a key frame, establishing an objective function according to the initial pose, the initial local map, and a preset number of pieces of key pose information before the first moment, wherein the objective function comprises the measurement residual of the IMU and the measurement residual of the image acquisition unit, and the key pose information is the pose information included in historical image information when the historical image information acquired by the image acquisition unit before the first moment is a key frame; the key pose information may also comprise information such as the velocity and offsets obtained by IMU measurement.
Step 1032, iterating the objective function according to a preset algorithm to obtain the optimized pose and the optimized local map corresponding to the minimum of the objective function.
For step 103, optimizing the initial pose and the initial local map requires establishing an objective function and performing loop detection to obtain the optimized pose and the optimized local map. Because the nonlinear optimization algorithm is computationally slow and occupies much CPU and memory, the initial pose and the initial local map acquired in step 101 can be screened so that they are optimized only when the image information is a key frame: the front-end thread sends the initial pose and the initial local map to the back-end thread when it determines the image information to be a key frame, and the back-end thread determines the optimized pose and the optimized local map. When the image information is not a key frame, it can be understood that the change between the image information at the first moment and that at the previous moment is small, so the initial pose and the initial local map need not be optimized. Since the time interval between two consecutive key frames acquired in step 101 is generally relatively long (e.g., 100 ms), the amount of computation in step 103 is reduced, further improving the calculation efficiency and positioning accuracy of the SLAM.
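The front-end/back-end split can be sketched with two threads and a thread-safe queue carrying only key frames; the frame format and sentinel convention below are illustrative assumptions:

```python
import queue
import threading

def front_end(kf_queue, frames):
    """Front-end thread: hands only key frames to the back end, keeping
    the expensive nonlinear optimization off the per-frame fast path."""
    for frame in frames:
        if frame["is_key"]:
            kf_queue.put(frame)
    kf_queue.put(None)                      # sentinel: no more frames

def back_end(kf_queue, optimized_ids):
    """Back-end thread: would optimize each key frame's pose and local
    map; here it just records which frames it received."""
    while True:
        frame = kf_queue.get()
        if frame is None:
            break
        optimized_ids.append(frame["id"])   # stand-in for the optimization
```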
When the image information is a key frame, an objective function is established according to the initial pose, the initial local map, and a preset number of pieces of key pose information before the first moment. The objective function may comprise the measurement residual of the IMU and the measurement residual of the image acquisition unit, for example IMU pre-integration residuals and visual photometric errors. After the objective function is established, it is iterated according to a preset algorithm (for example, the Gauss-Newton iteration method) until it reaches the optimal state (i.e., the objective function is minimum); the pose and the local map so determined are the optimized pose and the optimized local map.
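A generic Gauss-Newton loop of the kind mentioned, with the stacked residuals (IMU measurement residuals plus visual residuals in the patent) abstracted into a single residual function; all names are assumptions:

```python
import numpy as np

def gauss_newton(x0, residual_fn, jac_fn, iters=10, tol=1e-10):
    """Minimize 0.5 * ||r(x)||^2 by Gauss-Newton: repeatedly solve the
    normal equations (J^T J) dx = -J^T r and apply the step."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        r = residual_fn(x)                        # stacked residual vector
        J = jac_fn(x)                             # Jacobian of the residuals
        dx = np.linalg.solve(J.T @ J, -J.T @ r)   # Gauss-Newton step
        x = x + dx
        if np.linalg.norm(dx) < tol:              # converged
            break
    return x
```

For example, minimizing the scalar residual r(x) = x² − 4 from x = 1 converges to x = 2 in a few iterations.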
Fig. 5 is a flow chart illustrating another visual inertia based SLAM method according to an example embodiment, as shown in fig. 5, step 104 comprising:
Step 1041, updating the target initial local map to the optimized local map.
For example, step 104 may be divided into two parts: local map updating and pose updating. The optimized local map may directly replace the target initial local map determined at the second moment in step 101, while updating the target initial pose determined at the second moment in step 101 follows one of two processing modes, depending on the visual constraint relation used in the extended Kalman filter algorithm of step 101.
Step 1042, when the visual constraint relation is that the reprojection error is minimum, fusing the optimized pose and a preset number of pieces of key pose information before the second moment according to a Kalman filtering algorithm, so as to update the target initial pose to the pose output by the Kalman filtering algorithm, wherein the key pose information is the pose information included in historical image information when the historical image information acquired by the image acquisition unit before the second moment is a key frame.
For example, when the Kalman filtering algorithm is used to fuse the optimized pose with the preset number of pieces of key pose information before the second moment, the optimized state quantity may be fused with the key state quantities, where the optimized state quantity includes the optimized pose (and may also include the velocity and IMU offsets obtained through the optimization in step 103), and each key state quantity includes key pose information and the corresponding velocity and IMU offsets. The target initial state quantity is then updated to the state quantity output by the Kalman filtering algorithm, the target initial state quantity comprising the target initial pose and the corresponding velocity and IMU offsets.
Further, updating the target initial state quantity at the second moment may include: establishing visual reprojection residual and IMU pre-integration residual terms using the optimized local map, the key frame corresponding to the second moment, and the state quantities of the (N-1)th key frame, and performing nonlinear iterative optimization (e.g., the Gauss-Newton iteration method) to obtain a high-precision state quantity at the second moment. The difference from the back-end optimization in step 103 is that only the state quantity at the second moment is optimized; the local map and the state quantities of the (N-1)th frame need not be optimized, because these two quantities are fed back by the back end in step 103 and are already highly accurate.
Step 1043, when the visual constraint relation is that the photometric error is minimum, acquiring the offset between the initial pose and the optimized pose.
Step 1044, updating the target initial pose according to the offset.
Specifically, in the MSCKF algorithm, i.e., when the visual constraint relation is that the reprojection error is minimum, the optimized pose and the preset number of pieces of key pose information before the second moment are used as the input of a Kalman filtering (KF) algorithm; the pose output by the Kalman filtering algorithm is obtained using the state covariance matrix and the process covariance matrix, and the target initial pose and its covariance matrix are updated to the pose and covariance matrix output by the Kalman filtering algorithm. The preset number of pieces of key pose information before the second moment can be understood as a preset sliding time window storing the N-1 pieces of key pose information before the second moment.
In the ROVIO algorithm, that is, when the visual constraint relation is that the photometric error is minimum, the offset between the initial pose corresponding to the first moment and the optimized pose can be obtained first, and the target initial pose is then corrected using this offset. Taking the first moment as t and the second moment as t + M as an example, the offset is

X_diff = X_bac(t) · X_fro(t)^(-1)

where X_fro(t) is the initial pose corresponding to the first moment and X_bac(t) is the optimized pose. The updated target initial pose is then

X(t+M) = X_diff · X_fro(t+M)

where X_fro(t+M) is the target initial pose before updating corresponding to the second moment.
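Representing poses as homogeneous 4×4 transforms, the offset correction can be sketched as follows; the left-multiplicative composition and the function name are assumptions:

```python
import numpy as np

def correct_front_end_pose(X_fro_t, X_bac_t, X_fro_t_plus_M):
    """Compute the back-end/front-end offset at time t and apply it to
    the front-end pose at time t+M:
    X_diff = X_bac(t) @ inv(X_fro(t)); corrected = X_diff @ X_fro(t+M)."""
    X_diff = X_bac_t @ np.linalg.inv(X_fro_t)   # offset between the two poses
    return X_diff @ X_fro_t_plus_M              # corrected target initial pose
```

If the back end shifted the pose at time t by 1 m along x while the front end advanced 1 m along y by t+M, the corrected pose carries both offsets.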
Fig. 6 is a flowchart illustrating another visual inertia based SLAM method according to an example embodiment, as shown in fig. 6, step 104 may further include:
and 1045, storing the updated target initial local map into the global map.
Further, after the target initial local map is updated, the updated target initial local map may be stored in the global map; that is, the global map stored on the terminal is updated according to the target initial local map. For example, the map of the area indicated by the position information included in the target initial local map may be replaced in the global map with the target initial local map.
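A toy sketch of this position-indexed replacement (the region-keyed dictionary representation of the global map is an illustrative assumption; the patent only specifies replacing the area indicated by the local map's position information):

```python
def update_global_map(global_map, local_map):
    """Replace the region of the global map indicated by the position
    information carried with the updated target initial local map."""
    region = local_map["region_id"]          # position info of the local map
    global_map[region] = local_map["points"]
    return global_map
```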
Fig. 7 is a block diagram illustrating a visual inertia based SLAM apparatus according to an exemplary embodiment, as shown in fig. 7, the apparatus 200 including:
the fusion module 201 is configured to fuse the image information acquired by the first time image acquisition unit and the motion information acquired by the inertial measurement unit IMU according to an extended kalman filter algorithm to obtain an initial pose.
And the determining module 202 is configured to determine an initial local map according to the image information and the initial pose.
And the optimizing module 203 is configured to optimize the initial pose and the initial local map according to a preset nonlinear optimization algorithm to obtain an optimized pose and an optimized local map.
And the updating module 204 is configured to update an initial pose of the target and an initial local map of the target according to the optimized pose and the optimized local map, where the initial pose of the target is obtained by fusing the image information acquired by the image acquisition unit at the second time and the motion information acquired by the IMU according to an extended kalman filter algorithm, the initial local map of the target is an initial local map determined according to the image information acquired by the image acquisition unit at the second time and the initial pose of the target, and the second time is the time of acquiring the optimized pose and the optimized local map.
Fig. 8 is a block diagram illustrating another visual inertia based SLAM device according to an exemplary embodiment, and as shown in fig. 8, a fusion module 201 includes:
the prediction sub-module 2011 is configured to take the motion information and the historical pose as inputs of the IMU motion model to obtain a predicted pose output by the IMU motion model, where the historical pose is an initial pose obtained by fusing the image information acquired by the image acquisition unit at the third time and the motion information acquired by the IMU according to the extended kalman filter algorithm, and the third time is a time before the first time.
The image processing sub-module 2012 is configured to process the image information according to a preset image processing algorithm to obtain a visual constraint relationship included in the image information.
An updating submodule 2013, configured to update the predicted pose according to a visual constraint relationship to obtain an initial pose, where the visual constraint relationship includes: the reprojection error is minimal, or the photometric error is minimal.
Fig. 9 is a block diagram illustrating another visual inertia based SLAM apparatus according to an example embodiment, as shown in fig. 9, the determination module 202 includes:
the determining submodule 2021 is configured to determine, as the feature point, a pixel point in the image information that meets a preset condition.
And the mapping submodule 2022 is configured to perform three-dimensional reconstruction according to the feature points and the initial pose to obtain an initial local map.
Fig. 10 is a block diagram illustrating another visual inertia based SLAM device according to an example embodiment, as shown in fig. 10, the optimization module 203 includes:
the establishing sub-module 2031 is configured to, when the image information is a key frame, establish an objective function according to the initial pose, the initial local map, and a preset number of pieces of key pose information before the first time, where the objective function includes a measurement residual of the IMU and a measurement residual of the image acquisition unit, and the key pose information is pose information included in the history image information when the history image information acquired by the image acquisition unit before the first time is a key frame.
The iteration submodule 2032 is configured to iterate the objective function according to a preset algorithm to obtain an optimized pose corresponding to the minimum objective function and an optimized local map.
Fig. 11 is a block diagram illustrating another visual inertia based SLAM device according to an example embodiment, as shown in fig. 11, the update module 204 includes:
the map updating sub-module 2041 is configured to update the target initial local map into the optimized local map.
And the pose updating submodule 2042 is configured to fuse the optimized poses and a preset number of pieces of key pose information before the second time according to a kalman filtering algorithm when the visual constraint relationship is that the reprojection error is minimum, so as to update the initial pose of the target to the pose output by the kalman filtering algorithm, where the key pose information is pose information included in the historical image information when the historical image information acquired by the image acquisition unit before the second time is a key frame.
The pose updating sub-module 2042 is further configured to, when the visual constraint relationship is that the photometric error is minimum, obtain an offset between the initial pose and the optimized pose, and update the target initial pose according to the offset.
Further, the map update sub-module 2041 may also be configured to store the updated target initial local map in the global map.
With regard to the apparatus in the above-described embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated here.
Fig. 12 is a block diagram illustrating an electronic device 300 in accordance with an example embodiment. As shown in fig. 12, the electronic device 300 may include: a processor 301 and a memory 302. The electronic device 300 may also include one or more of a multimedia component 303, an input/output (I/O) interface 304, and a communication component 305.
The processor 301 is configured to control the overall operation of the electronic device 300 so as to complete all or part of the steps of the above visual inertia based SLAM method. The memory 302 is used to store various types of data to support operation on the electronic device 300, such as instructions for any application or method operating on the electronic device 300 and application-related data such as contact data, transmitted and received messages, pictures, audio, and video. The memory 302 may be implemented by any type of volatile or non-volatile memory device or a combination thereof, such as Static Random Access Memory (SRAM), Electrically Erasable Programmable Read-Only Memory (EEPROM), Erasable Programmable Read-Only Memory (EPROM), Programmable Read-Only Memory (PROM), Read-Only Memory (ROM), magnetic memory, flash memory, magnetic disk, or optical disk. The multimedia components 303 may include a screen and an audio component, wherein the screen may be, for example, a touch screen, and the audio component is used for outputting and/or inputting audio signals. For example, the audio component may include a microphone for receiving external audio signals; a received audio signal may further be stored in the memory 302 or transmitted through the communication component 305. The audio component also includes at least one speaker for outputting audio signals. The I/O interface 304 provides an interface between the processor 301 and other interface modules, such as a keyboard, a mouse, or buttons, where the buttons may be virtual buttons or physical buttons. The communication component 305 is used for wired or wireless communication between the electronic device 300 and other devices. The wireless communication may be, for example, Wi-Fi, Bluetooth, Near Field Communication (NFC), 2G, 3G, or 4G, or a combination of one or more of them, so the corresponding communication component 305 may include a Wi-Fi module, a Bluetooth module, and an NFC module.
In an exemplary embodiment, the electronic Device 300 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic components for performing the visual inertia based SLAM method described above.
In another exemplary embodiment, a computer readable storage medium comprising program instructions which, when executed by a processor, implement the steps of the visual inertia based SLAM method described above is also provided. For example, the computer readable storage medium may be the memory 302 described above including program instructions executable by the processor 301 of the electronic device 300 to perform the visual inertia based SLAM method described above.
The preferred embodiments of the present disclosure are described in detail with reference to the accompanying drawings, however, the present disclosure is not limited to the specific details of the above embodiments, and various simple modifications may be made to the technical solution of the present disclosure within the technical idea of the present disclosure, and these simple modifications all belong to the protection scope of the present disclosure.
It should be noted that, in the foregoing embodiments, various features described in the above embodiments may be combined in any suitable manner, and in order to avoid unnecessary repetition, various combinations that are possible in the present disclosure are not described again.
In addition, any combination of various embodiments of the present disclosure may be made, and the same should be considered as the disclosure of the present disclosure, as long as it does not depart from the spirit of the present disclosure.

Claims (10)

1. A visual inertia based SLAM method, the method comprising:
fusing, through a front-end thread and according to an extended Kalman filtering algorithm, image information acquired by the image acquisition unit at a first moment and motion information acquired by an inertial measurement unit (IMU) to acquire an initial pose;
determining an initial local map according to the image information and the initial pose through the front-end thread;
optimizing the initial pose and the initial local map through a back-end thread according to a preset nonlinear optimization algorithm to obtain an optimized pose and an optimized local map, wherein the front-end thread and the back-end thread are threads executed in parallel;
updating a target initial pose and a target initial local map according to the optimized pose and the optimized local map, wherein the target initial pose is an initial pose obtained by fusing image information acquired by the image acquisition unit and motion information acquired by the IMU at a second moment according to the extended Kalman filtering algorithm, the target initial local map is an initial local map determined according to the image information acquired by the image acquisition unit and the target initial pose at the second moment, and the second moment is a moment of acquiring the optimized pose and the optimized local map;
determining an initial local map according to the image information and the initial pose through the front-end thread, including:
determining pixel points which meet preset conditions in the image information as feature points;
performing three-dimensional reconstruction according to the feature points and the initial pose to acquire the initial local map;
the optimizing the initial pose and the initial local map according to a preset nonlinear optimization algorithm to obtain an optimized pose and an optimized local map includes:
when the image information is a key frame, establishing an objective function according to the initial pose, the initial local map and a preset number of pieces of key pose information before the first time, wherein the objective function comprises a measurement residual error of the IMU and a measurement residual error of the image acquisition unit, the key pose information is pose information included in historical image information when historical image information acquired by the image acquisition unit before the first time is the key frame, and the key pose information further comprises speed and offset acquired by measurement of the IMU when the historical image information acquired by the image acquisition unit before the first time is the key frame;
and iterating the objective function according to a preset algorithm to obtain the corresponding optimized pose and the optimized local map under the condition that the objective function is minimum.
2. The method according to claim 1, wherein the fusing the image information acquired by the image acquisition unit at the first time and the motion information acquired by the inertial measurement unit IMU according to an extended kalman filter algorithm to obtain the initial pose comprises:
taking the motion information and historical pose as input of an IMU motion model to obtain a predicted pose output by the IMU motion model, wherein the historical pose is an initial pose obtained by fusing image information acquired by the image acquisition unit at a third moment and the motion information acquired by the IMU according to the extended Kalman filtering algorithm, and the third moment is a moment before the first moment;
processing the image information according to a preset image processing algorithm to obtain a visual constraint relation included in the image information;
updating the predicted pose according to the visual constraint relationship to obtain the initial pose, wherein the visual constraint relationship comprises: the reprojection error is minimal, or the photometric error is minimal.
3. The method of claim 2, wherein updating the target initial pose and the target initial local map according to the optimized pose and the optimized local map comprises:
updating the target initial local map to the optimized local map;
when the visual constraint relation is that the reprojection error is minimum, fusing the optimized pose and a preset number of key pose information before a second moment according to a Kalman filtering algorithm to update the initial pose of the target to the pose output by the Kalman filtering algorithm, wherein the key pose information is pose information included in historical image information when the historical image information acquired by the image acquisition unit before the second moment is a key frame;
when the visual constraint relationship is that the photometric error is minimal, acquiring an offset between the initial pose and the optimized pose;
and updating the target initial pose according to the offset.
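The photometric branch of claim 3 applies the offset between the initial and the optimized pose to the newer target initial pose. Sketched, for illustration only, on 4x4 homogeneous transform matrices (an assumed pose representation; the claim does not fix one):

```python
import numpy as np

def update_by_offset(T_init, T_opt, T_target_init):
    """Offset-based update: compute the correction that maps the
    (stale) initial pose onto the optimized pose, then re-apply the
    same correction to the target initial pose."""
    T_offset = T_opt @ np.linalg.inv(T_init)   # offset between poses
    return T_offset @ T_target_init            # corrected target pose
```

For example, if the optimizer shifted the initial pose by one unit along x, the same one-unit shift is applied to the target initial pose.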
4. The method of claim 3, wherein updating the target initial pose and the target initial local map based on the optimized pose and the optimized local map further comprises:
and storing the updated target initial local map into a global map.
5. A visual inertia based SLAM apparatus, the apparatus comprising:
the fusion module is used for fusing, through a front-end thread and according to an extended Kalman filter algorithm, the image information acquired by the image acquisition unit at a first moment and the motion information acquired by the inertial measurement unit IMU, so as to obtain an initial pose;
the determining module is used for determining an initial local map according to the image information and the initial pose through the front-end thread;
the optimization module is used for optimizing the initial pose and the initial local map through a back-end thread according to a preset nonlinear optimization algorithm so as to obtain an optimized pose and an optimized local map, and the front-end thread and the back-end thread are threads executed in parallel;
the updating module is used for updating a target initial pose and a target initial local map according to the optimized pose and the optimized local map, wherein the target initial pose is an initial pose obtained by fusing, according to the extended Kalman filter algorithm, the image information acquired by the image acquisition unit at a second moment and the motion information acquired by the IMU, the target initial local map is an initial local map determined according to the image information acquired by the image acquisition unit at the second moment and the target initial pose, and the second moment is the moment at which the optimized pose and the optimized local map are obtained;
the determining module comprises:
the determining submodule is used for determining pixel points which meet preset conditions in the image information as feature points;
the map building sub-module is used for carrying out three-dimensional reconstruction according to the feature points and the initial pose so as to acquire the initial local map;
the optimization module comprises:
the establishing submodule is used for establishing an objective function according to the initial pose, the initial local map, and a preset number of pieces of key pose information from before the first moment when the image information is a key frame, wherein the objective function comprises a measurement residual of the IMU and a measurement residual of the image acquisition unit, the key pose information is pose information included in historical image information that was acquired by the image acquisition unit before the first moment and is a key frame, and the key pose information further comprises the velocity and bias measured by the IMU when that historical image information was acquired;
and the iteration submodule is used for iterating the objective function according to a preset algorithm so as to obtain the corresponding optimized pose and the optimized local map under the condition that the objective function is minimum.
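The establishing and iteration submodules build an objective from an IMU measurement residual plus camera measurement residuals and iterate it to a minimum. A minimal sketch under heavy assumptions: a one-dimensional pose, a linear stand-in measurement model `h_i(x) = l_i - x`, and Gauss-Newton as the "preset algorithm" (the claim names none of these specifics):

```python
import numpy as np

def optimize(init_pose, imu_prior, landmarks, meas,
             w_imu=1.0, w_cam=1.0, iters=10):
    """Minimize J(x) = w_imu*(x - imu_prior)^2
                     + w_cam * sum_i (h_i(x) - z_i)^2
    by Gauss-Newton iteration, mirroring the IMU-residual-plus-
    camera-residual structure of the claimed objective function."""
    x = float(init_pose)
    for _ in range(iters):
        # Stack weighted residuals: one IMU prior term, one term
        # per landmark measurement.
        r = np.concatenate(([np.sqrt(w_imu) * (x - imu_prior)],
                            np.sqrt(w_cam) * ((landmarks - x) - meas)))
        # Jacobian of each residual with respect to the pose x.
        J = np.concatenate(([np.sqrt(w_imu)],
                            -np.sqrt(w_cam) * np.ones(len(landmarks))))
        # Gauss-Newton step: dx = -(J^T J)^-1 J^T r.
        x -= J @ r / (J @ J)
    return x
```

Because the toy model is linear, one iteration already reaches the minimum, where the IMU prior and the camera measurements are balanced by their weights; the full problem over SE(3) poses and a local map requires repeated relinearization.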
6. The apparatus of claim 5, wherein the fusion module comprises:
the prediction submodule is used for taking the motion information and a historical pose as inputs to an IMU motion model, so as to obtain the predicted pose output by the IMU motion model, wherein the historical pose is an initial pose obtained by fusing, according to the extended Kalman filter algorithm, the image information acquired by the image acquisition unit at a third moment with the motion information acquired by the IMU, and the third moment is a moment before the first moment;
the image processing submodule is used for processing the image information according to a preset image processing algorithm, so as to obtain a visual constraint relationship included in the image information;
an updating submodule, configured to update the predicted pose according to the visual constraint relationship to obtain the initial pose, wherein the visual constraint relationship comprises: minimizing a reprojection error, or minimizing a photometric error.
7. The apparatus of claim 6, wherein the update module comprises:
the map updating submodule is used for updating the target initial local map into the optimized local map;
a pose updating sub-module, configured to fuse, when the visual constraint relationship is that the reprojection error is minimal, the optimized pose with a preset number of pieces of key pose information from before a second moment according to a Kalman filter algorithm, so as to update the target initial pose to the pose output by the Kalman filter algorithm, wherein the key pose information is pose information included in historical image information that was acquired by the image acquisition unit before the second moment and is a key frame;
and the pose updating sub-module is further configured to, when the visual constraint relationship is that the photometric error is minimal, acquire an offset between the initial pose and the optimized pose and update the target initial pose according to the offset.
8. The apparatus of claim 7, wherein the map update sub-module is further configured to store the updated target initial local map in a global map.
9. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 4.
10. An electronic device, comprising:
a memory having a computer program stored thereon;
a processor for executing the computer program in the memory to carry out the steps of the method of any one of claims 1 to 4.
CN201910411404.2A 2019-05-16 2019-05-16 SLAM method, apparatus, storage medium and device based on visual inertia Active CN110118554B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910411404.2A CN110118554B (en) 2019-05-16 2019-05-16 SLAM method, apparatus, storage medium and device based on visual inertia


Publications (2)

Publication Number Publication Date
CN110118554A CN110118554A (en) 2019-08-13
CN110118554B true CN110118554B (en) 2021-07-16

Family

ID=67522698

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910411404.2A Active CN110118554B (en) 2019-05-16 2019-05-16 SLAM method, apparatus, storage medium and device based on visual inertia

Country Status (1)

Country Link
CN (1) CN110118554B (en)

Families Citing this family (34)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112394720A (en) * 2019-08-15 2021-02-23 纳恩博(北京)科技有限公司 Robot control method and apparatus, storage medium, and electronic apparatus
WO2021035471A1 (en) * 2019-08-26 2021-03-04 Beijing Voyager Technology Co., Ltd. Systems and methods for positioning a target subject
CN110647609B (en) * 2019-09-17 2023-07-18 上海图趣信息科技有限公司 Visual map positioning method and system
CN110704562B (en) * 2019-09-27 2022-07-19 Oppo广东移动通信有限公司 Map fusion method and device, equipment and storage medium
CN110967018B (en) * 2019-11-25 2024-04-12 斑马网络技术有限公司 Parking lot positioning method and device, electronic equipment and computer readable medium
CN110986930B (en) * 2019-11-29 2022-07-15 北京三快在线科技有限公司 Equipment positioning method and device, electronic equipment and storage medium
CN111060113B (en) * 2019-12-31 2022-04-08 歌尔股份有限公司 Map updating method and device
CN113190564A (en) * 2020-01-14 2021-07-30 阿里巴巴集团控股有限公司 Map updating system, method and device
CN111275769B (en) * 2020-01-17 2023-10-24 联想(北京)有限公司 Monocular vision parameter correction method and device
CN111238450B (en) * 2020-02-27 2021-11-30 北京三快在线科技有限公司 Visual positioning method and device
CN111683203B (en) 2020-06-12 2021-11-09 达闼机器人有限公司 Grid map generation method and device and computer readable storage medium
CN111879306B (en) * 2020-06-17 2022-09-27 杭州易现先进科技有限公司 Visual inertial positioning method, device and system and computer equipment
CN111623773B (en) * 2020-07-17 2022-03-04 国汽(北京)智能网联汽车研究院有限公司 Target positioning method and device based on fisheye vision and inertial measurement
CN114814872A (en) * 2020-08-17 2022-07-29 浙江商汤科技开发有限公司 Pose determination method and device, electronic equipment and storage medium
CN112347840B (en) * 2020-08-25 2022-12-02 天津大学 Vision sensor laser radar integrated unmanned aerial vehicle positioning and image building device and method
CN112150550B (en) * 2020-09-23 2021-07-27 华人运通(上海)自动驾驶科技有限公司 Fusion positioning method and device
CN112067007B (en) * 2020-11-12 2021-01-29 湖北亿咖通科技有限公司 Map generation method, computer storage medium, and electronic device
CN112577494B (en) * 2020-11-13 2022-11-18 上海航天控制技术研究所 SLAM method, electronic device and storage medium suitable for lunar vehicle
WO2022116154A1 (en) * 2020-12-04 2022-06-09 深圳市优必选科技股份有限公司 Map library establishment method, computer device, and storage medium
CN112712018B (en) * 2020-12-29 2024-06-28 东软睿驰汽车技术(沈阳)有限公司 Identification map building method, visual positioning method and device
CN112923919B (en) * 2021-01-21 2023-05-16 湖南云箭格纳微信息科技有限公司 Pedestrian positioning method and system based on graph optimization
CN112936269B (en) * 2021-02-04 2022-07-08 珠海一微半导体股份有限公司 Robot control method based on intelligent terminal
CN113362387B (en) * 2021-04-12 2023-06-09 深圳大学 Environment map construction method of dynamic environment mobile robot
CN112948411B (en) * 2021-04-15 2022-10-18 深圳市慧鲤科技有限公司 Pose data processing method, interface, device, system, equipment and medium
CN113514058A (en) * 2021-04-23 2021-10-19 北京华捷艾米科技有限公司 Visual SLAM positioning method and device integrating MSCKF and graph optimization
CN113190568A (en) * 2021-05-12 2021-07-30 上海快仓自动化科技有限公司 Map updating method, device and related components
CN113390422B (en) * 2021-06-10 2022-06-10 奇瑞汽车股份有限公司 Automobile positioning method and device and computer storage medium
CN114353781A (en) * 2021-12-31 2022-04-15 广州小鹏自动驾驶科技有限公司 Map updating method, map updating device, electronic device and storage medium
CN114827727B (en) * 2022-04-25 2024-05-07 深圳创维-Rgb电子有限公司 Television control method, television control device, television and computer readable storage medium
CN115342806B (en) * 2022-07-14 2024-09-10 歌尔股份有限公司 Positioning method and device of head-mounted display device, head-mounted display device and medium
CN118525296A (en) * 2022-10-26 2024-08-20 北京小米移动软件有限公司 Image positioning method, device, electronic equipment and storage medium
CN115859212B (en) * 2022-11-17 2023-07-18 广东智能无人系统研究院(南沙) Autonomous deployment and recovery method and system for marine equipment
CN117687772B (en) * 2023-07-31 2024-09-20 荣耀终端有限公司 Algorithm scheduling method and electronic equipment
CN117705125B (en) * 2024-02-05 2024-04-30 安徽蔚来智驾科技有限公司 Positioning method, readable storage medium and intelligent device

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3451288A1 (en) * 2017-09-04 2019-03-06 Universität Zürich Visual-inertial odometry with an event camera
WO2019127445A1 (en) * 2017-12-29 2019-07-04 深圳前海达闼云端智能科技有限公司 Three-dimensional mapping method, apparatus and system, cloud platform, electronic device, and computer program product
CN108665540A (en) * 2018-03-16 2018-10-16 浙江工业大学 Robot localization based on binocular vision feature and IMU information and map structuring system
CN109307508B (en) * 2018-08-29 2022-04-08 中国科学院合肥物质科学研究院 Panoramic inertial navigation SLAM method based on multiple key frames
CN109211241B (en) * 2018-09-08 2022-04-29 天津大学 Unmanned aerial vehicle autonomous positioning method based on visual SLAM
CN109465832A (en) * 2018-12-18 2019-03-15 哈尔滨工业大学(深圳) High-precision vision and the tight fusion and positioning method of IMU and system


Similar Documents

Publication Publication Date Title
CN110118554B (en) SLAM method, apparatus, storage medium and device based on visual inertia
CN110246182B (en) Vision-based global map positioning method and device, storage medium and equipment
CN110084832B (en) Method, device, system, equipment and storage medium for correcting camera pose
CN107748569B (en) Motion control method and device for unmanned aerial vehicle and unmanned aerial vehicle system
Qin et al. A general optimization-based framework for global pose estimation with multiple sensors
CN111442722B (en) Positioning method, positioning device, storage medium and electronic equipment
CN111052183B (en) Vision inertial odometer using event camera
CN111337947A (en) Instant mapping and positioning method, device, system and storage medium
CN111666891B (en) Method and device for estimating movement state of obstacle
CN110806215B (en) Vehicle positioning method, device, equipment and storage medium
CN110660098B (en) Positioning method and device based on monocular vision
CN110310304B (en) Monocular vision mapping and positioning method and device, storage medium and mobile equipment
Seiskari et al. HybVIO: Pushing the limits of real-time visual-inertial odometry
CN114111776A (en) Positioning method and related device
CN112556699B (en) Navigation positioning method and device, electronic equipment and readable storage medium
CN111783611B (en) Unmanned vehicle positioning method and device, unmanned vehicle and storage medium
CN111798489B (en) Feature point tracking method, device, medium and unmanned equipment
JP2020107336A (en) Method, device, and robot apparatus of improving robust property of visual inertial navigation system
KR20210051002A (en) Method and apparatus for estimating pose, computer-readable storage medium and computer program for controlling the holder device
CN114299192B (en) Method, device, equipment and medium for positioning and mapping
CN117029802A (en) Multi-mode SLAM method based on deep learning
CN114088093B (en) Point cloud map construction method, device and system and storage medium
Huai Collaborative slam with crowdsourced data
CN113034538B (en) Pose tracking method and device of visual inertial navigation equipment and visual inertial navigation equipment
CN113628284A (en) Pose calibration data set generation method, device and system, electronic equipment and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20210301

Address after: 201111 2nd floor, building 2, no.1508, Kunyang Road, Minhang District, Shanghai

Applicant after: Dalu Robot Co.,Ltd.

Address before: 518000 Room 201, building A, No. 1, Qian Wan Road, Qianhai Shenzhen Hong Kong cooperation zone, Shenzhen, Guangdong (Shenzhen Qianhai business secretary Co., Ltd.)

Applicant before: Shenzhen Qianhaida Yunyun Intelligent Technology Co.,Ltd.

GR01 Patent grant
CP03 Change of name, title or address

Address after: 201111 Building 8, No. 207, Zhongqing Road, Minhang District, Shanghai

Patentee after: Dayu robot Co.,Ltd.

Address before: 201111 2nd floor, building 2, no.1508, Kunyang Road, Minhang District, Shanghai

Patentee before: Dalu Robot Co.,Ltd.
