CN112509006A - Sub-map recovery fusion method and device


Info

Publication number
CN112509006A
Authority
CN
China
Prior art keywords
map
image
feature
characteristic point
current
Legal status
Pending
Application number
CN202011457290.4A
Other languages
Chinese (zh)
Inventor
马浩凯
李骊
Current Assignee
Beijing HJIMI Technology Co Ltd
Original Assignee
Beijing HJIMI Technology Co Ltd
Application filed by Beijing HJIMI Technology Co Ltd
Priority to CN202011457290.4A
Publication of CN112509006A

Classifications

    • G06T 7/248: Image analysis; analysis of motion using feature-based methods, e.g. the tracking of corners or segments, involving reference images or patches
    • G01C 21/16: Navigation by using measurements of speed or acceleration executed aboard the object being navigated; dead reckoning by integrating acceleration or speed, i.e. inertial navigation
    • G06F 16/29: Information retrieval; geographical information databases
    • G06T 2207/10016: Image acquisition modality; video; image sequence
    • G06T 2207/30244: Subject of image; camera pose
    • G06T 2207/30252: Subject of image; vehicle exterior; vicinity of vehicle

Landscapes

  • Engineering & Computer Science (AREA)
  • Remote Sensing (AREA)
  • Theoretical Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Databases & Information Systems (AREA)
  • Automation & Control Theory (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a sub-map recovery and fusion method and device. With the invention, tracking and mapping can continue even when the relocalization requirement cannot be met after the system's tracking fails. The method depends little on the scene, and the map information built before relocalization is preserved as long as initialization succeeds.

Description

Sub-map recovery fusion method and device
Technical Field
The invention relates to the technical field of visual simultaneous localization and mapping, and in particular to a sub-map recovery and fusion method and device.
Background
SLAM (simultaneous localization and mapping) refers to a mobile robot building a model of an unknown environment with its sensors while simultaneously estimating its own position within that environment. Visual-inertial SLAM algorithms have become a research hotspot.
The mainstream visual-inertial fusion frameworks today are tightly coupled SLAM algorithms based on nonlinear optimization or filtering. Although they improve robustness, tracking can still fail in practice because of the complexity of the environment and of the motion.
How to resume tracking and mapping after a SLAM system's tracking fails is therefore a problem that needs to be solved in this field.
Disclosure of Invention
In view of the above, and to solve the above problems, the present invention provides a sub-map recovery and fusion method and apparatus. The technical solution is as follows:
a method of sub-map restoration fusion, the method comprising:
if a system tracking failure is detected, establishing a current sub-map, wherein the camera pose of the earliest frame of the current sub-map is calculated through an inertial measurement unit (IMU);
acquiring a previous sub-map whose establishment time is closest to that of the current sub-map, and performing feature matching on the first N frames of first images with the earliest acquisition times corresponding to the current sub-map and the last M frames of second images with the latest acquisition times corresponding to the previous sub-map;
if the feature matching of the first images and the second images is successful, solving a first camera pose transformation between the current sub-map and the previous sub-map based on at least one first feature point pair obtained by the feature matching;
fusing the current sub-map and the previous sub-map based on the first camera pose transformation.
Preferably, the performing feature matching on the first N frames of first images with the earliest acquisition times corresponding to the current sub-map and the last M frames of second images with the latest acquisition times corresponding to the previous sub-map includes:
for a first feature point in the first image, determining, by searching an image dictionary database, first candidate feature points having the same bag-of-words vector as the first feature point from among the second feature points of the second image;
and selecting, from the first candidate feature points, a first target feature point with the highest descriptor similarity, wherein the first target feature point and the first feature point form a first feature point pair.
Preferably, the method further comprises:
if the feature matching between the first images and the second images fails, performing feature matching on the one frame of third image with the latest acquisition time corresponding to the current sub-map and each frame of fourth image corresponding to the previous sub-map;
if the feature matching of the third image and the fourth images is successful, performing a Sim(3) transformation on at least one second feature point pair obtained by the feature matching to obtain a second camera pose transformation between the current sub-map and the previous sub-map;
and fusing the current sub-map and the previous sub-map based on the second camera pose transformation.
Preferably, the performing feature matching on the one frame of third image with the latest acquisition time corresponding to the current sub-map and the frames of fourth images corresponding to the previous sub-map includes:
for a third feature point in the third image, determining, by searching an image dictionary database, second candidate feature points having the same bag-of-words vector as the third feature point from among the fourth feature points of the fourth images;
screening, from the fourth images that contain second candidate feature points, a plurality of frames of candidate images with the largest numbers of second candidate feature points;
for the third feature point in the third image, determining, by searching the image dictionary database, third candidate feature points having the same bag-of-words vector as the third feature point from among the fourth feature points of the candidate images;
and selecting, from the third candidate feature points, a second target feature point with the highest descriptor similarity, wherein the second target feature point and the third feature point form a second feature point pair.
Preferably, the obtaining a second camera pose transformation between the current sub-map and the previous sub-map by performing a Sim(3) transformation on at least one second feature point pair obtained by the feature matching includes:
dividing the second feature point pairs into corresponding feature point pair groups according to the candidate image in which the second target feature point of each second feature point pair is located;
determining a current feature point pair group to be processed, randomly selecting three second feature point pairs from the current feature point pair group, and performing a Sim(3) computation on the three selected second feature point pairs to obtain a candidate camera pose transformation between the third image and the candidate image corresponding to the current feature point pair group;
performing a reprojection operation on the second feature point pairs in the current feature point pair group based on the candidate camera pose transformation, so as to determine the second feature point pairs that are inliers in the current feature point pair group;
if the number of inlier second feature point pairs in the current feature point pair group is greater than or equal to a preset threshold, taking the candidate camera pose transformation as the second camera pose transformation between the current sub-map and the previous sub-map;
and if the number of inlier second feature point pairs in the current feature point pair group is smaller than the preset threshold, returning to the step of determining the current feature point pair group to be processed, until all the feature point pair groups have been traversed.
Preferably, the method further comprises:
if the number of inlier second feature point pairs in the current feature point pair group is greater than or equal to the preset threshold, determining, based on the result of the reprojection operation, the second feature point pairs that are outliers in the current feature point pair group, and deleting those second feature point pairs;
and optimizing, based on the second camera pose transformation, the inlier second feature point pairs remaining in the current feature point pair group after the outliers are deleted, and adjusting the second camera pose transformation according to the optimization result.
A sub-map recovery fusion apparatus, the apparatus comprising:
the map establishing module is used for establishing a current sub-map if a system tracking failure is detected, wherein the camera pose of the earliest frame of the current sub-map is calculated through an inertial measurement unit (IMU);
the first feature matching module is used for acquiring a previous sub-map whose establishment time is closest to that of the current sub-map, and performing feature matching on the first N frames of first images with the earliest acquisition times corresponding to the current sub-map and the last M frames of second images with the latest acquisition times corresponding to the previous sub-map;
the first pose calculation module is used for solving a first camera pose transformation between the current sub-map and the previous sub-map based on at least one first feature point pair obtained by the feature matching, if the feature matching of the first images and the second images is successful;
and the first map fusion module is used for fusing the current sub-map and the previous sub-map based on the first camera pose transformation.
Preferably, the apparatus further comprises:
the second feature matching module is used for performing, if the feature matching between the first images and the second images fails, feature matching on the one frame of third image with the latest acquisition time corresponding to the current sub-map and each frame of fourth image corresponding to the previous sub-map;
the second pose calculation module is configured to, if the feature matching of the third image and the fourth images is successful, perform a Sim(3) transformation on at least one second feature point pair obtained by the feature matching to obtain a second camera pose transformation between the current sub-map and the previous sub-map;
and the second map fusion module is used for fusing the current sub-map and the previous sub-map based on the second camera pose transformation.
Preferably, the second feature matching module is specifically configured to:
for a third feature point in the third image, determining, by searching an image dictionary database, second candidate feature points having the same bag-of-words vector as the third feature point from among the fourth feature points of the fourth images; screening, from the fourth images that contain second candidate feature points, a plurality of frames of candidate images with the largest numbers of second candidate feature points; for the third feature point in the third image, determining, by searching the image dictionary database, third candidate feature points having the same bag-of-words vector as the third feature point from among the fourth feature points of the candidate images; and selecting, from the third candidate feature points, a second target feature point with the highest descriptor similarity, wherein the second target feature point and the third feature point form a second feature point pair.
Preferably, the second pose calculation module is specifically configured to:
dividing the second feature point pairs into corresponding feature point pair groups according to the candidate image in which the second target feature point of each second feature point pair is located; determining a current feature point pair group to be processed, randomly selecting three second feature point pairs from the current feature point pair group, and performing a Sim(3) computation on the three selected second feature point pairs to obtain a candidate camera pose transformation between the third image and the candidate image corresponding to the current feature point pair group; performing a reprojection operation on the second feature point pairs in the current feature point pair group based on the candidate camera pose transformation, so as to determine the second feature point pairs that are inliers in the current feature point pair group; if the number of inlier second feature point pairs in the current feature point pair group is greater than or equal to a preset threshold, taking the candidate camera pose transformation as the second camera pose transformation between the current sub-map and the previous sub-map; and if the number of inlier second feature point pairs in the current feature point pair group is smaller than the preset threshold, returning to the step of determining the current feature point pair group to be processed, until all the feature point pair groups have been traversed.
The invention provides a sub-map recovery and fusion method and device. With the invention, tracking and mapping can continue even when the relocalization requirement cannot be met after the system's tracking fails. The method depends little on the scene, and the map information built before relocalization is preserved as long as initialization succeeds.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, it is obvious that the drawings in the following description are only embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to the provided drawings without creative efforts.
Fig. 1 is a flowchart of a method for restoring and fusing a sub-map according to an embodiment of the present invention;
FIG. 2 is a flowchart of another method of a sub-map recovery fusion method according to an embodiment of the present invention;
fig. 3 is a schematic structural diagram of a sub-map recovery fusion device according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
In order to make the aforementioned objects, features and advantages of the present invention comprehensible, embodiments accompanied with figures are described in further detail below.
SLAM technology is applicable in many areas, such as autonomous driving, augmented and virtual reality, mobile robotics and drone navigation. However, motion in the real world is often complex: when the camera moves too fast the image is motion-blurred, and visual SLAM tracking fails under high brightness, low brightness, sparse environmental texture and similar conditions, so a visual sensor alone cannot meet practical application requirements. An inertial sensor provides better short-term state estimation during fast motion and is clearly complementary to the camera, and mobile devices nowadays generally carry both a camera and an inertial sensor, so visual-inertial SLAM algorithms have become a research hotspot.
The mainstream visual-inertial fusion frameworks are tightly coupled SLAM algorithms based on nonlinear optimization or filtering. Although both the filtering and the optimization schemes increase the robustness of the system, tracking may still fail in practical applications because of the complexity of the environment and of the motion.
The most widely adopted remedy for tracking failure at present is relocalization. Take ORB SLAM2 as an example: it is a SLAM system based on ORB feature points, ORB being a rotation-invariant feature. ORB SLAM2 uses a bag-of-words model to compute the BoW vector of each image frame and combines it with the feature points to describe the image; when visual tracking fails, ORB SLAM2 matches the BoW vector of the current frame against all data in the image database to find a similar image frame.
With this approach, the current camera pose can be recovered, and relocalization succeeds, only when the image acquired by the camera is very similar to some key frame already in the map. In practice, when the system loses tracking, the device must return to a scene it has already mapped before it can be relocalized and resume tracking and mapping. In many application scenarios, however, such as an unmanned vehicle or an intelligent drone that must keep moving forward, such a relocalization scheme is not suitable.
To address this problem, the invention provides a sub-map fusion algorithm based on ORB SLAM2 to handle tracking failure. The algorithm is built on an ORB SLAM2 framework tightly coupled with an IMU (inertial measurement unit), and tracking and mapping can continue even when the relocalization requirement cannot be met after the system's tracking fails. The method depends little on the scene, and the map information built before relocalization is preserved as long as initialization succeeds.
An embodiment of the invention provides a sub-map recovery and fusion method, whose flowchart is shown in Fig. 1 and which comprises the following steps.
and S10, if the system tracking failure is detected, establishing a current sub-map, wherein the camera pose of the current sub-map is calculated by the inertial measurement unit IMU corresponding to the image in the frame with the earliest time.
In the embodiment of the invention, after the ORB SLAM2 system fails to track, the camera pose is temporarily calculated by using IMU pre-integration, and the camera pose is used as the basis for mapping.
The following describes the process of IMU pre-integration and the process of temporarily calculating the camera pose using the pre-integration:
the IMU pre-integration is calculated using the following equations (1), (2), (3):
Figure BDA0002829810360000071
Figure BDA0002829810360000072
Figure BDA0002829810360000073
wherein i and j are indexes of IMU data; k is an index of the image frame; Δ Rij,Δvij,ΔpijThe IMU is a pre-integral term, namely rotation, speed and displacement of the pre-integral respectively;
Figure BDA0002829810360000074
respectively angular velocity measured by a gyroscope and acceleration measured by an accelerometer,
Figure BDA0002829810360000075
respectively the offset of the angular velocity and the acceleration,
Figure BDA0002829810360000076
discrete time noise of angular velocity and acceleration, respectively, Δ t is the time interval of the IMU data.
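For illustration, the accumulation behind equations (1)-(3) can be sketched in a few lines of Python. This is a minimal sketch under the assumption that the gyroscope and accelerometer samples between two key frames arrive at a fixed interval dt; noise terms are dropped and all names are illustrative rather than taken from any particular SLAM code base:

```python
import numpy as np

def skew(v):
    """Skew-symmetric matrix of a 3-vector."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def so3_exp(phi):
    """Rodrigues formula: rotation vector -> rotation matrix."""
    theta = np.linalg.norm(phi)
    if theta < 1e-10:
        return np.eye(3) + skew(phi)
    K = skew(phi / theta)
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

def preintegrate(gyro, acc, bg, ba, dt):
    """Accumulate the pre-integrated rotation, velocity and displacement
    (equations (1)-(3)) over the IMU samples between two key frames.
    gyro, acc: (N, 3) measurement arrays; bg, ba: current bias estimates."""
    dR, dv, dp = np.eye(3), np.zeros(3), np.zeros(3)
    for w, a in zip(gyro, acc):
        a_corr = a - ba                       # bias-corrected acceleration
        dp = dp + dv * dt + 0.5 * dR @ a_corr * dt**2
        dv = dv + dR @ a_corr * dt
        dR = dR @ so3_exp((w - bg) * dt)      # bias-corrected angular rate
    return dR, dv, dp
```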
The camera pose is then calculated using the following equations (4), (5) and (6):

$$R^{W}_{B_{i+1}} = R^{W}_{B_i}\,\Delta R_{i,i+1}\,\operatorname{Exp}\!\big(J^{g}_{\Delta R}\,b^{g}\big) \tag{4}$$

$$v^{W}_{B_{i+1}} = v^{W}_{B_i} + g^{W}\Delta t_{i,i+1} + R^{W}_{B_i}\big(\Delta v_{i,i+1} + J^{g}_{\Delta v}\,b^{g} + J^{a}_{\Delta v}\,b^{a}\big) \tag{5}$$

$$p^{W}_{B_{i+1}} = p^{W}_{B_i} + v^{W}_{B_i}\,\Delta t_{i,i+1} + \tfrac{1}{2}\,g^{W}\Delta t_{i,i+1}^{2} + R^{W}_{B_i}\big(\Delta p_{i,i+1} + J^{g}_{\Delta p}\,b^{g} + J^{a}_{\Delta p}\,b^{a}\big) \tag{6}$$

where W denotes the world coordinate system (the coordinate system of the first camera frame) and B denotes the carrier (body) coordinate system; R^W_B and p^W_B are the rotation and displacement components of the pose (from the carrier coordinate system to the world coordinate system); v^W_{B_{i+1}} is the velocity of the carrier in the world coordinate system at moment i+1, and p^W_{B_{i+1}} is its displacement in the world coordinate system at moment i+1; J^g_{ΔR} is the Jacobian matrix of the pre-integrated rotation with respect to the gyroscope bias; J^g_{Δv} and J^a_{Δv} are the Jacobian matrices of the pre-integrated velocity with respect to the gyroscope bias and the accelerometer bias, and J^g_{Δp} and J^a_{Δp} are the corresponding Jacobian matrices of the pre-integrated displacement; g^W is the gravity vector in the world coordinate system, and Δt_{i,i+1} is the time interval between moments i and i+1.
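The propagation in equations (4)-(6) produces a temporary camera pose from the pre-integrated terms. The sketch below keeps only the structure of the update: the bias-correction Jacobian terms are dropped, and the gravity constant is an assumed value:

```python
import numpy as np

G_W = np.array([0.0, 0.0, -9.81])   # gravity in the world frame (assumed value)

def propagate_state(R_wb, v_w, p_w, dR, dv, dp, dt_ij):
    """Propagate the carrier rotation, velocity and position from moment i
    to moment i+1 using the pre-integrated terms (cf. equations (4)-(6));
    the bias-correction Jacobian terms are omitted for brevity."""
    R_next = R_wb @ dR
    v_next = v_w + G_W * dt_ij + R_wb @ dv
    p_next = p_w + v_w * dt_ij + 0.5 * G_W * dt_ij**2 + R_wb @ dp
    return R_next, v_next, p_next
```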
The ORB SLAM2 system creates multiple maps during operation, each of which is referred to as a sub-map. ORB SLAM2 tracks based on vision; after the system's tracking fails, a new sub-map, namely the current sub-map, is created while the camera pose is temporarily calculated by the IMU, and visual tracking then continues based on ORB SLAM2.
S20: acquire the previous sub-map whose establishment time is closest to that of the current sub-map, and perform feature matching on the first N frames of first images with the earliest acquisition times corresponding to the current sub-map and the last M frames of second images with the latest acquisition times corresponding to the previous sub-map.
In this embodiment, sub-map matching is first attempted as a fast match between the first N frames of the current sub-map and the last M frames of the previous sub-map; the relative sizes of N and M are not restricted. In this way the maps can be matched quickly and the pose recovered soon after the system loses tracking.
Once the system has initialized successfully, feature point matching is performed between the first N frames of the current sub-map and the last M frames of the previous sub-map; if they match, the sub-maps can be fused.
It should be noted that the images in this embodiment all refer to key frames. Key frames are selected in the ORB SLAM2 system according to conditions such as the time interval and the number of matched points between frames, which avoids information redundancy and reduces memory usage.
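As an illustration only, a key-frame decision of this kind reduces to a couple of conditions; the thresholds below are made-up placeholders, not the values actually used by ORB SLAM2:

```python
def should_insert_keyframe(frames_since_last_kf, n_tracked_points,
                           min_gap=20, min_tracked=50):
    """Insert a new key frame when enough frames have passed since the last
    one, or when the current frame tracks too few map points (illustrative)."""
    return frames_since_last_kf >= min_gap or n_tracked_points < min_tracked
```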
In a specific implementation, in step S20, the performing feature matching on the first N frames of first images with the earliest acquisition times corresponding to the current sub-map and the last M frames of second images with the latest acquisition times corresponding to the previous sub-map may include the following steps:
for a first feature point in the first image, determining, by searching an image dictionary database, first candidate feature points having the same bag-of-words vector as the first feature point from among the second feature points of the second image; and selecting, from the first candidate feature points, a first target feature point with the highest descriptor similarity, wherein the first target feature point and the first feature point form a first feature point pair.
In this embodiment, an image contains many feature points, each feature point has a corresponding bag-of-words vector, and the bag-of-words vectors of the feature points of each image are recorded in an image dictionary database. Feature point matching between the first N frames of the current sub-map and the last M frames of the previous sub-map is therefore performed according to the bag-of-words vectors through the function SearchByBoW(), yielding a series of matched feature point pairs: possible matches are looked up in the image dictionary database according to the bag-of-words vector of the first feature point, and the descriptor similarity of the matched feature points is then computed to obtain the best match.
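The word-then-descriptor matching can be pictured with the sketch below. It assumes every feature already carries a vocabulary word id and a 256-bit ORB descriptor stored as 32 bytes; the distance threshold is illustrative, and the function is a simplified stand-in for the behaviour of SearchByBoW(), not its actual implementation:

```python
import numpy as np
from collections import defaultdict

def hamming(d1, d2):
    """Hamming distance between two binary ORB descriptors (uint8 arrays)."""
    return int(np.unpackbits(np.bitwise_xor(d1, d2)).sum())

def match_by_bow(words_a, desc_a, words_b, desc_b, max_dist=50):
    """For each feature of image A, consider only features of image B that
    fall in the same vocabulary word, and keep the closest descriptor."""
    buckets = defaultdict(list)
    for j, w in enumerate(words_b):
        buckets[w].append(j)

    pairs = []
    for i, w in enumerate(words_a):
        candidates = buckets.get(w, [])
        if not candidates:
            continue
        dists = [hamming(desc_a[i], desc_b[j]) for j in candidates]
        best = int(np.argmin(dists))
        if dists[best] <= max_dist:
            pairs.append((i, candidates[best]))   # a matched feature point pair
    return pairs
```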
S30: if the feature matching of the first images and the second images is successful, solve the first camera pose transformation between the current sub-map and the previous sub-map based on at least one first feature point pair obtained by the feature matching.
In this embodiment, the camera pose is optimized and solved through the function PoseOptimization() based on the at least one first feature point pair obtained in step S20, and the pose transformation T^{matched}_{cur} between the image of the current sub-map and the matched image of the previous sub-map is then solved; this pose transformation T^{matched}_{cur} is taken as the camera pose transformation between the current sub-map and the previous sub-map.
S40: fuse the current sub-map and the previous sub-map based on the first camera pose transformation.
In this embodiment, the camera poses of all images in the current sub-map are adjusted into the previous sub-map, and the adjusted camera poses are calculated with equation (7):

$$T^{w1}_{c_i} = T^{w1}_{matched}\;T^{matched}_{cur}\;\big(T^{w2}_{cur}\big)^{-1}\;T^{w2}_{c_i} \tag{7}$$

where i is the index of an image in the current sub-map, w2 is the world coordinate system of the current sub-map, w1 is the world coordinate system of the matched map, cur denotes the current image, and matched denotes the matched image in the previous sub-map; T^{a}_{b} denotes the SE(3) transformation from coordinate system b to coordinate system a, i.e. it includes both rotation and translation.
The coordinates of all map points in the current sub-map are likewise adjusted into the previous sub-map; the coordinate transformation of a 3D point between the two sub-maps is given by equation (8):

$$P^{w1} = T^{w1}_{w2}\,P^{w2} \tag{8}$$

where T^{w1}_{w2} is the SE(3) transformation from the w2 coordinate system to the w1 coordinate system (the point coordinates are taken in homogeneous form).
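A compact illustration of this fusion step with 4x4 homogeneous matrices, under the assumption that key-frame poses are stored as camera-to-world SE(3) transforms and that T_w1_w2, the transform taking w2 coordinates into w1 coordinates, has already been assembled from the matched pair of frames; all names are illustrative:

```python
import numpy as np

def fuse_submaps(T_w1_w2, poses_w2, points_w2):
    """Move every key-frame pose and every 3D map point of the current
    sub-map (world frame w2) into the previous sub-map's world frame w1,
    in the spirit of equations (7) and (8)."""
    # adjusted camera-to-world poses of the current sub-map's key frames
    adjusted_poses = [T_w1_w2 @ T for T in poses_w2]

    # adjusted 3D map point coordinates, via homogeneous coordinates
    pts_h = np.hstack([points_w2, np.ones((points_w2.shape[0], 1))])
    adjusted_points = (T_w1_w2 @ pts_h.T).T[:, :3]
    return adjusted_poses, adjusted_points
```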
In other embodiments, to improve the accuracy of map matching, a longer-range sub-map matching based on the Sim(3) transformation may additionally be adopted; this allows a global optimization of the map and improves the accuracy of the system. On the basis of the sub-map recovery fusion method shown in Fig. 1, the following steps may also be adopted; the flowchart is shown in Fig. 2.
and S50, if the feature matching between the first image and the second image fails, performing feature matching on a third image of the frame corresponding to the current sub-map and having the latest acquisition time and a fourth image of each frame corresponding to the previous sub-map.
In this embodiment, for this sub-map matching the last frame of the current sub-map is matched globally against all frames of the previous sub-map; this mode matches the maps accurately and recovers the pose after the system loses tracking. The matching again combines the bag-of-words vectors with the descriptors: possible matches are looked up in the image dictionary database according to the bag-of-words vectors of the feature points in the third image, and the descriptor similarity with the matched feature points is then computed to obtain the best match.
In a specific implementation, in step S50, the performing feature matching on the one frame of third image with the latest acquisition time corresponding to the current sub-map and each frame of fourth image corresponding to the previous sub-map may include the following steps:
for a third feature point in the third image, determining, by searching an image dictionary database, second candidate feature points having the same bag-of-words vector as the third feature point from among the fourth feature points of the fourth images; screening, from the fourth images that contain second candidate feature points, a plurality of frames of candidate images with the largest numbers of second candidate feature points; for the third feature point in the third image, determining, by searching the image dictionary database, third candidate feature points having the same bag-of-words vector as the third feature point from among the fourth feature points of the candidate images; and selecting, from the third candidate feature points, a second target feature point with the highest descriptor similarity, wherein the second target feature point and the third feature point form a second feature point pair.
In this embodiment, according to the bag-of-words vectors of the third feature points in the third image, the DetectRelocalizationCandidates() interface of the ORB SLAM2 system is called to retrieve the image dictionary database and obtain the fourth images of the previous sub-map that match, and the matched fourth images are then screened with a screening strategy to obtain several better-matching candidate images.
Further, the function SearchByBoW() is used to search for ORB feature matches between each candidate image and the third image. The basic principle of the feature point matching is as before: possible matches are looked up in the image dictionary database according to the bag-of-words vector of the third feature point, and the descriptor similarity of the matched feature points is then computed to obtain the best match. Feature matching is performed between every candidate image and the third image.
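The retrieval that precedes this per-candidate matching can be pictured as a simple shared-word count over the previous sub-map's key frames; this sketches only the idea behind the candidate detection, not the scoring actually implemented in ORB SLAM2:

```python
def retrieve_candidates(query_words, keyframe_words, top_k=5):
    """Score every key frame of the previous sub-map by the number of
    vocabulary words it shares with the query frame and keep the best ones."""
    q = set(query_words)
    scored = [(len(q & set(words)), idx)
              for idx, words in enumerate(keyframe_words)]
    scored.sort(reverse=True)
    return [idx for score, idx in scored[:top_k] if score > 0]
```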
S60: if the feature matching of the third image and the fourth images is successful, perform a Sim(3) transformation on at least one second feature point pair obtained by the feature matching to obtain the second camera pose transformation between the current sub-map and the previous sub-map.
A monocular SLAM system has 7 degrees of freedom: 3 for translation, 3 for rotation and 1 scale factor. In this embodiment, the translation and rotation between the third image and the matched image are calculated from the at least one second feature point pair obtained in step S50 by calling the function ComputeSim3().
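For illustration, the closed-form similarity estimate can also be written in the SVD-based Umeyama form, which solves the same seven-degree-of-freedom problem; the sketch assumes the matched second feature point pairs come with 3D map point coordinates, and it returns s, R, t such that dst ≈ s·R·src + t:

```python
import numpy as np

def umeyama_sim3(src, dst):
    """Closed-form similarity (scale, rotation, translation) from n >= 3
    non-degenerate 3D-3D correspondences, with dst ≈ s * R @ src + t."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    xs, xd = src - mu_s, dst - mu_d
    cov = xd.T @ xs / src.shape[0]               # 3x3 cross-covariance
    U, D, Vt = np.linalg.svd(cov)
    S = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:
        S[2, 2] = -1.0                           # keep R a proper rotation
    R = U @ S @ Vt
    var_src = (xs ** 2).sum() / src.shape[0]
    s = np.trace(np.diag(D) @ S) / var_src       # optimal scale
    t = mu_d - s * R @ mu_s
    return s, R, t
```

Three correspondences drawn from a feature point pair group are already enough to fix the seven degrees of freedom, provided the three points are not collinear.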
In a specific implementation, in step S60, the obtaining a second camera pose transformation between the current sub-map and the previous sub-map by performing a Sim(3) transformation on at least one second feature point pair obtained by the feature matching may include the following steps:
dividing the second feature point pairs into corresponding feature point pair groups according to the candidate image in which the second target feature point of each second feature point pair is located; determining a current feature point pair group to be processed, randomly selecting three second feature point pairs from the current feature point pair group, and performing a Sim(3) computation on the three selected second feature point pairs to obtain a candidate camera pose transformation between the third image and the candidate image corresponding to the current feature point pair group; performing a reprojection operation on the second feature point pairs in the current feature point pair group based on the candidate camera pose transformation, so as to determine the second feature point pairs that are inliers in the current feature point pair group; if the number of inlier second feature point pairs in the current feature point pair group is greater than or equal to a preset threshold, taking the candidate camera pose transformation as the second camera pose transformation between the current sub-map and the previous sub-map; and if the number of inlier second feature point pairs in the current feature point pair group is smaller than the preset threshold, returning to the step of determining the current feature point pair group to be processed, until all the feature point pair groups have been traversed.
In this embodiment, second feature point pairs whose second target feature points lie in the same candidate image are put into the same feature point pair group, and processing is carried out group by group.
For the current feature point pair group to be processed, three second feature point pairs are selected at random from the group, and the camera pose transformation between the two frames, namely the candidate camera pose transformation, is obtained through a Sim(3) computation. All feature points of one frame that form second feature point pairs are then projected into the other frame with the candidate camera pose transformation, once from the third image into the fourth image and once from the fourth image into the third image, and after the two projections the reprojection error of each second feature point pair between the two frames is calculated. If the reprojection error is greater than or equal to a preset error threshold, the corresponding second feature point pair is an outlier; otherwise it is an inlier.
Finally, if the number of inliers is greater than or equal to a preset threshold, the candidate camera pose transformation computed in this round is accepted as valid; otherwise a new current feature point pair group to be processed is selected.
Of course, if a candidate camera pose transformation computed for the current feature point pair group is invalid, three second feature point pairs may be re-drawn from the same group, and only when the candidate camera pose transformations from several draws are all invalid is a new current feature point pair group selected.
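The inlier test described above amounts to re-projecting each matched 3D point into the other image with the candidate transformation and thresholding the pixel error. The sketch below shows one projection direction with a simple pinhole model; the threshold value and the names are illustrative, and the real check is carried out in both directions:

```python
import numpy as np

def project(K, X_cam):
    """Pinhole projection of a 3D point expressed in the camera frame."""
    x = K @ X_cam
    return x[:2] / x[2]

def find_inliers(K, s, R, t, pts_3d_src, obs_px_dst, thresh_px=9.0):
    """Apply the candidate Sim(3) (s, R, t) to the source-frame map points,
    re-project them into the destination image and keep the pairs whose
    reprojection error stays below the threshold."""
    inliers = []
    for i, (X, uv) in enumerate(zip(pts_3d_src, obs_px_dst)):
        X_dst = s * R @ X + t            # candidate similarity transform
        if X_dst[2] <= 0:                # behind the camera: reject
            continue
        if np.linalg.norm(project(K, X_dst) - uv) < thresh_px:
            inliers.append(i)
    return inliers
```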
On this basis, in order to obtain more accurate translation and rotation, the embodiment of the present invention may further include the following steps:
if the number of inlier second feature point pairs in the current feature point pair group is greater than or equal to the preset threshold, determining, based on the result of the reprojection operation, the second feature point pairs that are outliers in the current feature point pair group, and deleting those second feature point pairs; and optimizing, based on the second camera pose transformation, the inlier second feature point pairs remaining in the current feature point pair group after the outliers are deleted, and adjusting the second camera pose transformation according to the optimization result.
In this embodiment, based on the reprojection error obtained for each second feature point pair, the pairs whose reprojection error is greater than or equal to the preset error threshold are treated as outliers, and these abnormal points are removed. After an initial translation and rotation has been obtained from the Sim(3) computation, the function SearchBySim3() can be called to find more feature-matched second feature point pairs, a Sim(3) optimization problem is then constructed from the reprojection errors, and a more accurate translation and rotation are obtained through the function Optimizer::OptimizeSim3(). In this way the pose transformation S^{matched}_{cur} between the third image of the current sub-map and the matched fourth image of the previous sub-map is obtained, and this pose transformation S^{matched}_{cur} is taken as the camera pose transformation between the current sub-map and the previous sub-map; here the transformation matrix is a Sim(3).
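As a much simpler stand-in for the SearchBySim3()/OptimizeSim3() refinement described above, the similarity can at least be re-estimated on the surviving inliers once the outliers have been removed, reusing umeyama_sim3() from the earlier sketch:

```python
import numpy as np

def refine_sim3(src_pts, dst_pts, inlier_idx):
    """Drop the outlier pairs and re-fit the similarity on the inliers only;
    a crude substitute for the reprojection-error-based graph optimization."""
    idx = np.asarray(inlier_idx, dtype=int)
    return umeyama_sim3(src_pts[idx], dst_pts[idx])
```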
S70: fuse the current sub-map and the previous sub-map based on the second camera pose transformation.
In the embodiment of the present invention, for the process of performing map fusion based on the second camera pose transformation, reference may be made to the disclosure of performing map fusion based on the first camera pose transformation in step S40, which is not described herein again.
Finally, the embodiment may also call the function RunGlobalBundleAdjustment() once to perform a global optimization of the map.
With the sub-map recovery fusion method provided by the embodiment of the invention, tracking and mapping can continue even when the relocalization requirement cannot be met after the system's tracking fails. The method depends little on the scene, and the map information built before relocalization is preserved as long as initialization succeeds.
Based on the sub-map recovery fusion method provided in the foregoing embodiment, an embodiment of the present invention further provides a device for executing the sub-map recovery fusion method, where a schematic structural diagram of the device is shown in fig. 3, and the device includes:
the map establishing module 10, configured to establish a current sub-map if a system tracking failure is detected, where the camera pose of the earliest frame of the current sub-map is calculated by an inertial measurement unit (IMU);
the first feature matching module 20, configured to obtain the previous sub-map whose establishment time is closest to that of the current sub-map, and to perform feature matching on the first N frames of first images with the earliest acquisition times corresponding to the current sub-map and the last M frames of second images with the latest acquisition times corresponding to the previous sub-map;
the first pose calculation module 30, configured to, if the feature matching of the first images and the second images is successful, solve a first camera pose transformation between the current sub-map and the previous sub-map based on at least one first feature point pair obtained by the feature matching;
and the first map fusion module 40 is configured to fuse the current sub-map and the previous sub-map based on the first camera pose transformation.
Optionally, the first feature matching module 20 is specifically configured to:
for a first feature point in the first image, determining, by searching an image dictionary database, first candidate feature points having the same bag-of-words vector as the first feature point from among the second feature points of the second image; and selecting, from the first candidate feature points, a first target feature point with the highest descriptor similarity, wherein the first target feature point and the first feature point form a first feature point pair.
Optionally, the apparatus further comprises:
the second feature matching module is used for performing, if the feature matching between the first images and the second images fails, feature matching on the one frame of third image with the latest acquisition time corresponding to the current sub-map and each frame of fourth image corresponding to the previous sub-map;
the second pose calculation module is used for performing, if the feature matching of the third image and the fourth images is successful, a Sim(3) transformation on at least one second feature point pair obtained by the feature matching to obtain a second camera pose transformation between the current sub-map and the previous sub-map;
and the second map fusion module is used for fusing the current sub-map and the previous sub-map based on the second camera pose transformation.
Optionally, the second feature matching module is specifically configured to:
for a third feature point in the third image, determining, by searching an image dictionary database, second candidate feature points having the same bag-of-words vector as the third feature point from among the fourth feature points of the fourth images; screening, from the fourth images that contain second candidate feature points, a plurality of frames of candidate images with the largest numbers of second candidate feature points; for the third feature point in the third image, determining, by searching the image dictionary database, third candidate feature points having the same bag-of-words vector as the third feature point from among the fourth feature points of the candidate images; and selecting, from the third candidate feature points, a second target feature point with the highest descriptor similarity, wherein the second target feature point and the third feature point form a second feature point pair.
Optionally, the second pose calculation module is specifically configured to:
dividing the second feature point pairs into corresponding feature point pair groups according to the candidate image in which the second target feature point of each second feature point pair is located; determining a current feature point pair group to be processed, randomly selecting three second feature point pairs from the current feature point pair group, and performing a Sim(3) computation on the three selected second feature point pairs to obtain a candidate camera pose transformation between the third image and the candidate image corresponding to the current feature point pair group; performing a reprojection operation on the second feature point pairs in the current feature point pair group based on the candidate camera pose transformation, so as to determine the second feature point pairs that are inliers in the current feature point pair group; if the number of inlier second feature point pairs in the current feature point pair group is greater than or equal to a preset threshold, taking the candidate camera pose transformation as the second camera pose transformation between the current sub-map and the previous sub-map; and if the number of inlier second feature point pairs in the current feature point pair group is smaller than the preset threshold, returning to the step of determining the current feature point pair group to be processed, until all the feature point pair groups have been traversed.
Optionally, the second pose calculation module is further configured to:
if the number of inlier second feature point pairs in the current feature point pair group is greater than or equal to the preset threshold, determining, based on the result of the reprojection operation, the second feature point pairs that are outliers in the current feature point pair group, and deleting those second feature point pairs; and optimizing, based on the second camera pose transformation, the inlier second feature point pairs remaining in the current feature point pair group after the outliers are deleted, and adjusting the second camera pose transformation according to the optimization result.
The sub-map recovery fusion device provided by the embodiment of the invention can likewise continue tracking and mapping even when the relocalization requirement cannot be met after the system's tracking fails. The method depends little on the scene, and the map information built before relocalization is preserved as long as initialization succeeds.
The method and the device for restoring and fusing the sub-map provided by the invention are described in detail, a specific example is applied in the text to explain the principle and the implementation mode of the invention, and the description of the embodiment is only used for helping to understand the method and the core idea of the invention; meanwhile, for a person skilled in the art, according to the idea of the present invention, there may be variations in the specific embodiments and the application scope, and in summary, the content of the present specification should not be construed as a limitation to the present invention.
It should be noted that, in the present specification, the embodiments are all described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments may be referred to each other. The device disclosed by the embodiment corresponds to the method disclosed by the embodiment, so that the description is simple, and the relevant points can be referred to the method part for description.
It is further noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include or include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (10)

1. A method for restoring and fusing a sub-map, the method comprising:
if a system tracking failure is detected, establishing a current sub-map, wherein the camera pose of the earliest frame of the current sub-map is calculated through an inertial measurement unit (IMU);
acquiring a previous sub-map whose establishment time is closest to that of the current sub-map, and performing feature matching on the first N frames of first images with the earliest acquisition times corresponding to the current sub-map and the last M frames of second images with the latest acquisition times corresponding to the previous sub-map;
if the feature matching of the first images and the second images is successful, solving a first camera pose transformation between the current sub-map and the previous sub-map based on at least one first feature point pair obtained by the feature matching;
fusing the current sub-map and the previous sub-map based on the first camera pose transformation.
2. The method of claim 1, wherein the performing feature matching on the first N frames of first images with the earliest acquisition times corresponding to the current sub-map and the last M frames of second images with the latest acquisition times corresponding to the previous sub-map comprises:
for a first feature point in the first image, determining, by searching an image dictionary database, first candidate feature points having the same bag-of-words vector as the first feature point from among the second feature points of the second image;
and selecting, from the first candidate feature points, a first target feature point with the highest descriptor similarity, wherein the first target feature point and the first feature point form a first feature point pair.
3. The method of claim 1, further comprising:
if the feature matching between the first images and the second images fails, performing feature matching on the one frame of third image with the latest acquisition time corresponding to the current sub-map and each frame of fourth image corresponding to the previous sub-map;
if the feature matching of the third image and the fourth images is successful, performing a Sim(3) transformation on at least one second feature point pair obtained by the feature matching to obtain a second camera pose transformation between the current sub-map and the previous sub-map;
and fusing the current sub-map and the previous sub-map based on the second camera pose transformation.
4. The method according to claim 3, wherein the performing feature matching on the one frame of third image with the latest acquisition time corresponding to the current sub-map and the frames of fourth images corresponding to the previous sub-map comprises:
for a third feature point in the third image, determining, by searching an image dictionary database, second candidate feature points having the same bag-of-words vector as the third feature point from among the fourth feature points of the fourth images;
screening, from the fourth images that contain second candidate feature points, a plurality of frames of candidate images with the largest numbers of second candidate feature points;
for the third feature point in the third image, determining, by searching the image dictionary database, third candidate feature points having the same bag-of-words vector as the third feature point from among the fourth feature points of the candidate images;
and selecting, from the third candidate feature points, a second target feature point with the highest descriptor similarity, wherein the second target feature point and the third feature point form a second feature point pair.
5. The method according to claim 3, wherein the obtaining of the second camera pose transformation between the current sub-map and the previous sub-map by performing a Sim(3) transformation on at least one second feature point pair obtained by the feature matching comprises:
dividing the second feature point pairs into corresponding feature point pair groups according to the candidate image in which the second target feature point of each second feature point pair is located;
determining a current feature point pair group to be processed, randomly selecting three second feature point pairs from the current feature point pair group, and performing a Sim(3) computation on the three selected second feature point pairs to obtain a candidate camera pose transformation between the third image and the candidate image corresponding to the current feature point pair group;
performing a reprojection operation on the second feature point pairs in the current feature point pair group based on the candidate camera pose transformation, so as to determine the second feature point pairs that are inliers in the current feature point pair group;
if the number of inlier second feature point pairs in the current feature point pair group is greater than or equal to a preset threshold, taking the candidate camera pose transformation as the second camera pose transformation between the current sub-map and the previous sub-map;
and if the number of inlier second feature point pairs in the current feature point pair group is smaller than the preset threshold, returning to the step of determining the current feature point pair group to be processed, until all the feature point pair groups have been traversed.
6. The method of claim 5, further comprising:
if the number of inlier second feature point pairs in the current feature point pair group is greater than or equal to the preset threshold, determining, based on the result of the reprojection operation, the second feature point pairs that are outliers in the current feature point pair group, and deleting those second feature point pairs;
and optimizing, based on the second camera pose transformation, the inlier second feature point pairs remaining in the current feature point pair group after the outliers are deleted, and adjusting the second camera pose transformation according to the optimization result.
7. A device for restoring and fusing a sub-map, the device comprising:
the map establishing module is used for establishing a current sub-map if a system tracking failure is detected, wherein the camera pose of the earliest frame of the current sub-map is calculated through an inertial measurement unit (IMU);
the first feature matching module is used for acquiring a previous sub-map whose establishment time is closest to that of the current sub-map, and performing feature matching on the first N frames of first images with the earliest acquisition times corresponding to the current sub-map and the last M frames of second images with the latest acquisition times corresponding to the previous sub-map;
the first pose calculation module is used for solving first camera pose transformation between the current sub-map and the previous sub-map based on at least one first feature point pair obtained by feature matching if the feature matching of the first image and the second image is successful;
and the first map fusion module is used for fusing the current sub-map and the previous sub-map based on the first camera pose transformation.
8. The device of claim 7, further comprising:
a second feature matching module, configured to, if the feature matching between the first images and the second images fails, perform feature matching between a frame of third image with the latest acquisition time corresponding to the current sub-map and the frames of fourth images corresponding to the previous sub-map;
a second pose calculation module, configured to, if the feature matching between the third image and the fourth images succeeds, perform a Sim(3) transformation on at least one second feature point pair obtained by the feature matching to obtain a second camera pose transformation between the current sub-map and the previous sub-map;
and a second map fusion module, configured to fuse the current sub-map with the previous sub-map based on the second camera pose transformation.
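Claims 7 and 8 describe the device as a pipeline of modules: build a new sub-map on tracking failure, seed its first frame's pose from the IMU, try frame-to-frame feature matching against the most recent previous sub-map, and fall back to the bag-of-words plus Sim(3) route when that fails. The skeleton below shows one way such modules could be wired together; every class and method name is a placeholder chosen for illustration, not an interface defined by the patent.

```python
class SubMapFusionDevice:
    """Illustrative wiring of the modules recited in claims 7 and 8."""

    def __init__(self, imu, matcher, bow_matcher, pose_solver, sim3_solver, fuser):
        self.imu = imu                    # inertial measurement unit interface
        self.matcher = matcher            # first feature matching module
        self.bow_matcher = bow_matcher    # second (BoW-based) feature matching module
        self.pose_solver = pose_solver    # first pose calculation module
        self.sim3_solver = sim3_solver    # second pose calculation module
        self.fuser = fuser                # map fusion module

    def on_tracking_failure(self, map_store):
        # Map establishing module: start a new sub-map whose first frame pose
        # comes from IMU integration rather than visual tracking.
        current = map_store.create_submap(initial_pose=self.imu.integrate_pose())
        previous = map_store.latest_submap(before=current)

        # Primary path: match the earliest frames of the current sub-map against
        # the latest frames of the previous one and solve a camera pose transform.
        pairs = self.matcher.match(current.first_frames(), previous.last_frames())
        if pairs:
            T = self.pose_solver.solve(pairs)
            return self.fuser.fuse(current, previous, T)

        # Fallback path (claim 8): bag-of-words matching plus a Sim(3) transform.
        pairs = self.bow_matcher.match(current.latest_frame(), previous.all_frames())
        if pairs:
            T = self.sim3_solver.solve(pairs)
            return self.fuser.fuse(current, previous, T)
        return None  # fusion deferred until a later sub-map can be linked
```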
9. The device of claim 8, wherein the second feature matching module is specifically configured to:
for a third feature point in the third image, determine, by searching an image dictionary database, second candidate feature points whose bag-of-words vectors are the same as that of the third feature point from among the fourth feature points of the fourth images; screen, from the fourth images containing second candidate feature points, a plurality of frames of candidate images having the largest numbers of second candidate feature points; for a third feature point in the third image, determine, by searching the image dictionary database, third candidate feature points whose bag-of-words vectors are the same as that of the third feature point from among the fourth feature points of the candidate images; and select, from the third candidate feature points, a second target feature point with the highest descriptor similarity, wherein the second target feature point and the third feature point form a second feature point pair.
10. The device of claim 9, wherein the second pose calculation module is specifically configured to:
divide the second feature point pairs into corresponding feature point pair groups according to the candidate image in which the second target feature point of each second feature point pair is located; determine a current feature point pair group to be processed, randomly select three second feature point pairs from the current feature point pair group, and perform a Sim(3) transformation on the selected three second feature point pairs to obtain a candidate camera pose transformation between the third image and the candidate image corresponding to the current feature point pair group; perform a reprojection operation on the second feature point pairs in the current feature point pair group based on the candidate camera pose transformation, so as to determine the second feature point pairs that are inliers in the current feature point pair group; if the number of second feature point pairs that are inliers in the current feature point pair group is greater than or equal to a preset threshold, take the candidate camera pose transformation as the second camera pose transformation between the current sub-map and the previous sub-map; and if the number of second feature point pairs that are inliers in the current feature point pair group is smaller than the preset threshold, return to the step of determining a current feature point pair group to be processed, until all feature point pair groups have been traversed.
CN202011457290.4A 2020-12-11 2020-12-11 Sub-map recovery fusion method and device Pending CN112509006A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011457290.4A CN112509006A (en) 2020-12-11 2020-12-11 Sub-map recovery fusion method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011457290.4A CN112509006A (en) 2020-12-11 2020-12-11 Sub-map recovery fusion method and device

Publications (1)

Publication Number Publication Date
CN112509006A true CN112509006A (en) 2021-03-16

Family

ID=74973665

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011457290.4A Pending CN112509006A (en) 2020-12-11 2020-12-11 Sub-map recovery fusion method and device

Country Status (1)

Country Link
CN (1) CN112509006A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113238557A (en) * 2021-05-17 2021-08-10 珠海市一微半导体有限公司 Mapping abnormity identification and recovery method, chip and mobile robot

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180053056A1 (en) * 2016-08-22 2018-02-22 Magic Leap, Inc. Augmented reality display device with deep learning sensors
CN108665540A (en) * 2018-03-16 2018-10-16 浙江工业大学 Robot localization based on binocular vision feature and IMU information and map structuring system
CN109166149A (en) * 2018-08-13 2019-01-08 武汉大学 A kind of positioning and three-dimensional wire-frame method for reconstructing and system of fusion binocular camera and IMU
CN109307508A (en) * 2018-08-29 2019-02-05 中国科学院合肥物质科学研究院 A kind of panorama inertial navigation SLAM method based on more key frames
CN109465832A (en) * 2018-12-18 2019-03-15 哈尔滨工业大学(深圳) High-precision vision and the tight fusion and positioning method of IMU and system
CN109509230A (en) * 2018-11-13 2019-03-22 武汉大学 A kind of SLAM method applied to more camera lens combined type panorama cameras
CN110533587A (en) * 2019-07-03 2019-12-03 浙江工业大学 A kind of SLAM method of view-based access control model prior information and map recovery
CN110866497A (en) * 2019-11-14 2020-03-06 合肥工业大学 Robot positioning and image building method and device based on dotted line feature fusion
CN110967009A (en) * 2019-11-27 2020-04-07 云南电网有限责任公司电力科学研究院 Navigation positioning and map construction method and device for transformer substation inspection robot
US20200109954A1 (en) * 2017-06-30 2020-04-09 SZ DJI Technology Co., Ltd. Map generation systems and methods
CN111583136A (en) * 2020-04-25 2020-08-25 华南理工大学 Method for simultaneously positioning and establishing image of autonomous mobile platform in rescue scene

Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180053056A1 (en) * 2016-08-22 2018-02-22 Magic Leap, Inc. Augmented reality display device with deep learning sensors
US20200109954A1 (en) * 2017-06-30 2020-04-09 SZ DJI Technology Co., Ltd. Map generation systems and methods
CN108665540A (en) * 2018-03-16 2018-10-16 浙江工业大学 Robot localization based on binocular vision feature and IMU information and map structuring system
CN109166149A (en) * 2018-08-13 2019-01-08 武汉大学 A kind of positioning and three-dimensional wire-frame method for reconstructing and system of fusion binocular camera and IMU
CN109307508A (en) * 2018-08-29 2019-02-05 中国科学院合肥物质科学研究院 A kind of panorama inertial navigation SLAM method based on more key frames
CN109509230A (en) * 2018-11-13 2019-03-22 武汉大学 A kind of SLAM method applied to more camera lens combined type panorama cameras
CN109465832A (en) * 2018-12-18 2019-03-15 哈尔滨工业大学(深圳) High-precision vision and the tight fusion and positioning method of IMU and system
CN110533587A (en) * 2019-07-03 2019-12-03 浙江工业大学 A kind of SLAM method of view-based access control model prior information and map recovery
CN110866497A (en) * 2019-11-14 2020-03-06 合肥工业大学 Robot positioning and image building method and device based on dotted line feature fusion
CN110967009A (en) * 2019-11-27 2020-04-07 云南电网有限责任公司电力科学研究院 Navigation positioning and map construction method and device for transformer substation inspection robot
CN111583136A (en) * 2020-04-25 2020-08-25 华南理工大学 Method for simultaneously positioning and establishing image of autonomous mobile platform in rescue scene

Non-Patent Citations (6)

* Cited by examiner, † Cited by third party
Title
CHENG YUAN et al.: "A Novel Fault-Tolerant Navigation and Positioning Method with Stereo-Camera/Micro Electro Mechanical Systems Inertial Measurement Unit (MEMS-IMU) in Hostile Environment", Micromachines, vol. 9, no. 12, 27 November 2018, pages 1-19 *
MARY B. ALATISE et al.: "Pose Estimation of a Mobile Robot Based on Fusion of IMU Data and Vision Data Using an Extended Kalman Filter", Sensors, vol. 17, no. 10, 21 September 2017, pages 1-22 *
XIANG Lianghua: "Research and Implementation of a Visual-Inertial Fusion Localization Algorithm in Outdoor Environments", China Master's Theses Full-text Database, Information Science and Technology, no. 2019, 15 January 2019, pages 140-1526 *
ZHANG Yulong: "Keyframe-Based Visual-Inertial SLAM Algorithm", China Master's Theses Full-text Database, Information Science and Technology, no. 2019, 15 April 2019, pages 140-293 *
XU Feng: "Research on Multi-Sensor Integrated Navigation Algorithms Based on Monocular Vision", China Master's Theses Full-text Database, Information Science and Technology, no. 2019, 15 February 2019, pages 140-664 *
CHEN Chang: "Research on Localization and Mapping Technology for Inspection Robots Based on Vision and Inertial Navigation Fusion", China Master's Theses Full-text Database, Information Science and Technology, no. 2019, 15 September 2019, pages 138-651 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113238557A (en) * 2021-05-17 2021-08-10 珠海市一微半导体有限公司 Mapping abnormity identification and recovery method, chip and mobile robot
CN113238557B (en) * 2021-05-17 2024-05-07 珠海一微半导体股份有限公司 Method for identifying and recovering abnormal drawing, computer readable storage medium and mobile robot

Similar Documents

Publication Publication Date Title
CN107990899B (en) Positioning method and system based on SLAM
CN109166149B (en) Positioning and three-dimensional line frame structure reconstruction method and system integrating binocular camera and IMU
CN110411441B (en) System and method for multi-modal mapping and localization
KR101725060B1 (en) Apparatus for recognizing location mobile robot using key point based on gradient and method thereof
CN110490900B (en) Binocular vision positioning method and system under dynamic environment
CN109520497B (en) Unmanned aerial vehicle autonomous positioning method based on vision and imu
US20190204084A1 (en) Binocular vision localization method, device and system
Rambach et al. Learning to fuse: A deep learning approach to visual-inertial camera pose estimation
WO2017163596A1 (en) Autonomous navigation using visual odometry
CN107735797B (en) Method for determining a movement between a first coordinate system and a second coordinate system
CN111274847B (en) Positioning method
CN108519102B (en) Binocular vision mileage calculation method based on secondary projection
CN110986969B (en) Map fusion method and device, equipment and storage medium
CN111127524A (en) Method, system and device for tracking trajectory and reconstructing three-dimensional image
WO2020000395A1 (en) Systems and methods for robust self-relocalization in pre-built visual map
CN114623817B (en) Self-calibration-contained visual inertial odometer method based on key frame sliding window filtering
CN110533719B (en) Augmented reality positioning method and device based on environment visual feature point identification technology
CN111540011A (en) Hybrid metric-topology camera based positioning
CN111882602B (en) Visual odometer implementation method based on ORB feature points and GMS matching filter
CN110827353A (en) Robot positioning method based on monocular camera assistance
CN116295412A (en) Depth camera-based indoor mobile robot dense map building and autonomous navigation integrated method
CN114485640A (en) Monocular vision inertia synchronous positioning and mapping method and system based on point-line characteristics
CN113012224A (en) Positioning initialization method and related device, equipment and storage medium
CN117152249A (en) Multi-unmanned aerial vehicle collaborative mapping and perception method and system based on semantic consistency
CN112731503B (en) Pose estimation method and system based on front end tight coupling

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination