CN111141295A - Automatic map recovery method based on monocular ORB-SLAM - Google Patents
- Publication number: CN111141295A (application CN201911325034.7A)
- Authority: CN (China)
- Legal status: Granted
Classifications
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/26—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
- G01C21/28—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network with correlation of data from several navigational instruments
- G01C21/30—Map- or contour-matching
- G01C21/32—Structuring or formatting of map data
Abstract
The invention discloses an automatic map recovery method based on monocular ORB-SLAM. By improving the ORB-SLAM mapping workflow, the method addresses map-tracking failures caused by complex environments: with RTK information assisting vision, tracking and mapping can be recovered quickly without returning to the original scene for relocation, and the map information held before the loss is automatically merged into the current mapping system. This improves mapping efficiency and ensures the scale consistency and completeness of the visual map.
Description
Technical Field
The invention relates to the technical field of visual map construction, in particular to an automatic map recovery method based on monocular ORB-SLAM.
Background
Visual Simultaneous Localization and Mapping (SLAM) was proposed in the 1990s and has since become one of the core technologies of visual autonomous navigation. It extracts and processes information from continuous image sequences, incrementally constructs a visual map consistent with an unknown environment, and, combined with traditional navigation modes such as GPS and inertial navigation, provides an automatic method for mapping the geographic information of a real environment.
Monocular ORB-SLAM (SLAM based on ORB features, i.e. Oriented FAST and Rotated BRIEF, a feature detector combining a FAST corner with a BRIEF descriptor) performs feature extraction and matching on continuous images, estimates and optimizes the camera trajectory with a tracking mechanism that balances computational cost and robustness, and then recovers a sparse feature map; it is currently one of the most robust visual reference mapping methods. However, visual information is easily affected by the environment: under violent motion, image blur, or weak texture, the camera motion between frames may be too large or too few image features may be extracted, so that both the constant-velocity motion model and the key-frame matching model of the tracking mechanism fail. In some environments the camera cannot quickly return to the original scene for relocation, leaving the system unable to continue tracking and mapping for long periods, which lowers mapping efficiency and leaves parts of the scene unmapped.
When the map is lost due to a tracking failure, a local sub-map method can be adopted: SLAM maps the scene in segments and then transfers the sub-map information into a global map. However, this method cannot establish links between sub-maps, so the global consistency of the map cannot be guaranteed, and for monocular vision the sub-maps cannot be spliced directly because of scale uncertainty.
Disclosure of Invention
The invention aims to overcome the defects of the prior art and provide an automatic map recovery method based on monocular ORB-SLAM.
The invention adopts the following technical scheme for solving the technical problems:
the invention provides an automatic map recovery method based on monocular ORB-SLAM, which comprises the following steps:
step 1, synchronously acquiring an image sequence and real-time dynamic differential RTK data, further tracking and constructing a map based on a monocular ORB-SLAM system, and recovering map information;
step 2, if the map building tracking fails, saving the current map information, and recording the key frame and the map point serial number;
step 3, resetting the visual map construction system, and reconstructing a map by utilizing monocular initialization;
step 4, automatically recovering and fusing lost map information based on RTK information assistance;
and 5, if the visual tracking fails again, repeating the steps 2-4 until the map construction is finished.
As a further optimization scheme of the monocular ORB-SLAM-based automatic map recovery method, the step 1 is as follows:
aligning the RTK positioning information with the video data according to timestamps by linear interpolation; then, based on the monocular ORB-SLAM mapping framework, performing map tracking and construction with the camera coordinate system of the initial key frame as the visual coordinate system. The map information comprises the key-frame sequence, the map points recovered by triangulation, and the co-visibility relations between key frames;
the position coordinate of the map point in the visual coordinate system is [ X ]v1,Xv2,...,Xvn]The pose information estimated for the sequence of key frames isThe position coordinate of the key frame in the current visual coordinate system is [ Y ]v1,Yv2,...,Yvm], wherein wherein ,XvjThe position coordinate of the jth map point in the current visual coordinate system is shown, j is 1,2.. n, n is the number of map points,respectively a rotation matrix and a translation vector of the ith key frame down-converted from the camera coordinate to the current visual coordinate system, YviThe position coordinate of the ith key frame in the current visual coordinate system is 1,2 … m, and m is the number of key framesThe number, superscript T is transposed.
As a further optimization scheme of the monocular ORB-SLAM-based automatic map recovery method, the map building and tracking failure in the step 2 means that the monocular ORB-SLAM system cannot match the current frame feature information with a map, and further incremental map building failure is caused;
if map tracking and construction based on the monocular ORB-SLAM system fails, it is judged whether the number N of current key frames is greater than 5; if so, the current map information is automatically saved to the corresponding map path, the maximum sequence numbers of key frames and map points before the tracking loss, LastKF_id and LastMP_id, are recorded, and a flag LastMapFlag indicating whether an old map exists is set (true indicates that an old map exists).
Step 3 specifically means that the relevant map-construction variables in the monocular ORB-SLAM system are cleared and reset, initial map information is recovered in a new visual coordinate system, and the sequence numbers of new-map key frames and map points are offset by LastKF_id and LastMP_id respectively, to prevent the sequence numbers of the old and new maps from overlapping.
As a further optimization scheme of the monocular ORB-SLAM-based automatic map recovery method, the automatic recovery and fusion of lost map information in step 4 means that after M key frames and the corresponding map point information have been recovered in the new visual coordinate system, it is judged whether an old map exists; if the old-map flag LastMapFlag is true, the map information is automatically loaded, and under the constraint of the RTK geographic information the coordinates of the old-map key frames and map points are converted into the current new visual coordinate system and fused into the current map construction; M is an integer between 14 and 26.
As a further optimization scheme of the monocular ORB-SLAM-based automatic map recovery method, the key frame and map point coordinates of the old map are converted into a current new visual coordinate system and are fused into the current map construction; the method specifically comprises the following steps:
4.1, establishing a similar transformation model of the visual map from a visual coordinate system to a real geographic coordinate system to obtain the scaling, translation and rotation relations of the visual information;
$$X_g = s_{vg} R_{vg} X_v + t_{vg}$$
where $s_{vg}$ is a scale factor satisfying $s_{vg} > 0$, $R_{vg}$ is a rotation matrix in three-dimensional space, $t_{vg} = (t_x, t_y, t_z)^{T}$ is the translation vector, $X_g$ is the position coordinate of the map in the real geographic coordinate system, $X_v$ is the position coordinate of the map in the visual coordinate system, and $t_x$, $t_y$, $t_z$ are the translation amounts along the x, y and z axes of the visual coordinate system;
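The similarity model above can be sketched as a one-line transform. The function name and the row-wise point layout are illustrative assumptions:

```python
import numpy as np

def apply_sim3(s, R, t, X):
    """Apply X_g = s * R @ X_v + t to points X given row-wise (n x 3)."""
    X = np.atleast_2d(np.asarray(X, dtype=float))
    return (s * (np.asarray(R) @ X.T)).T + np.asarray(t, dtype=float)

# pure scaling by 2 plus a translation
X_g = apply_sim3(2.0, np.eye(3), [1.0, 1.0, 1.0], [[1.0, 2.0, 3.0]])
# X_g == [[3.0, 5.0, 7.0]]
```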
4.2, establishing an objective function, and solving the similarity transformation parameters by minimizing a least-squares objective over the aligned RTK data and the key-frame trajectory recovered by monocular vision:
$$e^2(R_{vg}, t_{vg}, s_{vg}) = \frac{1}{m}\sum_{i=1}^{m} \left\| r_i - \left( s_{vg} R_{vg} c_i + t_{vg} \right) \right\|^2$$
where the key-frame set to be estimated is $(c_1, \ldots, c_i, \ldots, c_m)$ and the aligned RTK data set is $(r_1, \ldots, r_i, \ldots, r_m)$; $c_i$ is the coordinate of the ith key frame in the visual coordinate system, $r_i$ is the coordinate of the ith key frame in the geographic coordinate system, $R_{vg}$, $t_{vg}$ and $s_{vg}$ are the rotation matrix, translation vector and scale factor taking the map from the visual coordinate system to the geographic coordinate system, $m$ is the number of key frames, and $e^2(R_{vg}, t_{vg}, s_{vg})$ is the mean square error between the similarity-transformed visual estimates and the aligned RTK data;
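A standard way to minimize this least-squares objective in closed form is the Umeyama method. The sketch below is an assumption about how step 4.2 could be implemented, not the patent's own code:

```python
import numpy as np

def umeyama(src, dst):
    """Closed-form least-squares similarity fit: dst ≈ s * R @ src + t
    (Umeyama, 1991). src and dst are n x 3 arrays of paired points."""
    src = np.asarray(src, dtype=float)
    dst = np.asarray(dst, dtype=float)
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    cs, cd = src - mu_s, dst - mu_d
    cov = cd.T @ cs / len(src)          # cross-covariance of the two clouds
    U, D, Vt = np.linalg.svd(cov)
    S = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:
        S[2, 2] = -1.0                  # guard against a reflection solution
    R = U @ S @ Vt
    var_s = (cs ** 2).sum() / len(src)  # variance of the source cloud
    s = np.trace(np.diag(D) @ S) / var_s
    t = mu_d - s * R @ mu_s
    return s, R, t
```

Given paired $(c_i, r_i)$, this returns the similarity parameters directly, with no iterative optimization required.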
4.3, solving the coordinate transformation parameters of the old map from the original visual coordinate system to the new visual coordinate system;
the similarity transformation parameters $(R_1, t_1, s_1)$ of the new map and $(R_2, t_2, s_2)$ of the old map are solved respectively according to the objective function in step 4.2; based on the constraint relation that the real geographic information imposes on the two map segments, the three-dimensional point information of the old map is converted into the current new visual coordinate system;
$$X''_{vj} = s_{21} R_{21} X'_{vj} + t_{21}$$
where $X'_{vj}$ is the position of the jth old-map point in the original visual coordinate system, $X''_{vj}$ is its position in the new visual coordinate system, and $R_{21}$, $t_{21}$, $s_{21}$ are the rotation matrix, translation vector and scale factor taking the old map from the original visual coordinate system to the new visual coordinate system, obtained from the two estimated transformations as $s_{21} = s_2 / s_1$, $R_{21} = R_1^{T} R_2$, $t_{21} = \frac{1}{s_1} R_1^{T} (t_2 - t_1)$; $R_1$, $t_1$, $s_1$ are the rotation matrix, translation vector and scale factor taking the new map from the new visual coordinate system to the geographic coordinate system, $R_2$, $t_2$, $s_2$ are those taking the old map from the original visual coordinate system to the geographic coordinate system, and the superscript $T$ denotes transposition;
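The composition of the two estimated transformations follows directly from equating the geographic coordinates produced by each. Names are illustrative, and the Sim(3) parameters are assumed to have been estimated already:

```python
import numpy as np

def old_to_new(s1, R1, t1, s2, R2, t2):
    """Compose old->geo (s2, R2, t2) with the inverse of new->geo
    (s1, R1, t1) to get the old->new similarity (s21, R21, t21)."""
    R1, R2 = np.asarray(R1), np.asarray(R2)
    t1, t2 = np.asarray(t1, dtype=float), np.asarray(t2, dtype=float)
    s21 = s2 / s1
    R21 = R1.T @ R2
    t21 = R1.T @ (t2 - t1) / s1
    return s21, R21, t21
```

By construction, mapping an old-map point through (s21, R21, t21) and then through (s1, R1, t1) reproduces the geographic coordinates given by (s2, R2, t2).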
4.4, according to the similarity transformation relation, converting the pose information of the old map key frame into the current new visual coordinate system;
the pose of the ith old key frame in the original visual coordinate system is $(R'_{vi}, t'_{vi})$, and its pose in the new visual coordinate system is $(R''_{vi}, t''_{vi})$, where $R''_{vi} = R_{21} R'_{vi}$ and $t''_{vi} = s_{21} R_{21} t'_{vi} + t_{21}$; here $R'_{vi}$ and $t'_{vi}$ are the rotation matrix and translation vector of the ith old key frame in the original visual coordinate system, and $R''_{vi}$ and $t''_{vi}$ are the rotation matrix and translation vector of the ith old key frame in the new visual coordinate system;
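Step 4.4 can be sketched under the convention used above, in which a key-frame pose (R, t) takes camera coordinates to world (visual) coordinates. The helper name is an assumption:

```python
import numpy as np

def transform_keyframe_pose(R_old, t_old, s21, R21, t21):
    """Carry a camera-to-world key-frame pose from the old visual
    frame into the new one under the similarity (s21, R21, t21)."""
    R_old = np.asarray(R_old)
    t_old = np.asarray(t_old, dtype=float)
    R21 = np.asarray(R21)
    t21 = np.asarray(t21, dtype=float)
    R_new = R21 @ R_old
    t_new = s21 * R21 @ t_old + t21
    return R_new, t_new
```

In this convention the camera center equals t, so it transforms exactly like a map point, and the converted old key frames land consistently among the converted old map points.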
and 4.5, the old map information is added into the current new-map variables, yielding key frames and map points whose positions are reasonably distributed in the new visual coordinate system and whose physical scale is consistent with the current new map; at the same time, the visual bag-of-words of the old key frames is computed, and the index information related to the old map construction is added into the new map-construction variables of the monocular ORB-SLAM system so that it participates in the current system's mapping, providing more redundant map information for tracking relocation and loop-closure correction.
Compared with the prior art, the invention adopting the technical scheme has the following technical effects:
(1) aiming at the problem that tracking is easy to interrupt due to the influence of environment on vision, new map information can be quickly recovered, relocation to the original scene is avoided, and the map building efficiency is improved;
(2) based on geographic information constraint provided by Real-time kinematic (RTK), map information before loss can be automatically added into the current map construction, the consistency of the physical scales of the front and rear maps is ensured, and the efficiency and the integrity of the map construction are improved;
(3) the method improves the robustness of visual mapping, can continuously carry out geographic information mapping on the scene for a long time, and provides a complete matching reference library for visual navigation.
Drawings
FIG. 1 is a schematic flow chart illustrating the present invention.
Fig. 2 is a schematic diagram of coordinate transformation of a map before loss.
Fig. 3 is a schematic diagram of the effect of map fusion.
Detailed Description
The technical scheme of the invention is explained in further detail below with reference to the accompanying drawings:
the invention provides an automatic map recovery method based on monocular ORB-SLAM, which solves the problem of map tracking failure caused by complex environment influence by improving ORB-SLAM map building flow, can quickly recover tracking map building by means of a form of RTK information assisted vision without returning to an original scene for relocation, automatically integrates map information before loss into a current map building system, improves map building efficiency and ensures scale consistency and integrity of visual map building.
Fig. 1 is a flowchart of a method for automatic map recovery based on monocular ORB-SLAM, which is mainly divided into the following five steps:
synchronously acquiring an image sequence and RTK data, processing the data, tracking and constructing a map based on a monocular ORB-SLAM system, and recovering map information;
step two, the visual tracking fails, the current map information is saved, and the key frame and the map point serial number are recorded;
clearing and resetting system map construction variables, and reconstructing a map based on monocular initialization;
step four, judging whether to load and fuse the old map information into the current map construction;
and step five, if the tracking fails, repeating the step two, the step three and the step four until the map construction is finished.
Further, for the multi-sensor data processing problem in step one, data with failed differential positioning are first removed, and then the RTK information and the visual data are aligned by linear interpolation, resolving their frequency mismatch.
The camera coordinate system of the initial key frame is taken as the current visual coordinate system; all visual map coordinates are expressed in this system, and the map information comprises the key-frame sequence, the triangulated map points, and the co-visibility relations between key frames.
Further, the failure of visual tracking in the step two means that the current image feature information cannot be continuously tracked and mapped under the influence of motion and illumination, and the camera cannot be quickly repositioned back to the original scene.
If the number of current key frames N is greater than 5, the map information is saved, the maximum sequence numbers of the current key frames and map points are recorded, and the old-map flag is set to LastMapFlag = true.
Further, whether to load and fuse the old map information is decided according to the construction state of the current map. First, because few key frames and map points are available for matching right after recovery, tracking is easily lost again, so the system waits until the new map has recovered a 20-frame key-frame sequence. Second, it is judged whether the old-map flag LastMapFlag is true; if so, the old map information is loaded, the coordinates of the old-map key frames and map points are converted into the current new visual coordinate system under the constraint of the RTK real geographic information, and they are then fused into the current system's map construction, ensuring the physical-scale consistency and completeness of the global map.
as shown in FIG. 2, the method for transforming the map information before loss from the original visual coordinate system to the new visual coordinate system is schematically illustrated, and the similar transformation relations between the new map and the visual coordinate system corresponding to the old map before loss and the real geographic coordinate system are first respectively obtained through the constraint of RTK information, wherein the similar transformation relations are respectively (R)1,t1,s1)、(R2,t2,s2) And according to the constraint relation of the two geographic information, the similarity transformation relation (R) from the old map to the new map is obtained21,t21,s21) And further, the position information of the old map key frame and the map point in the new visual coordinate system is deduced, and the scale consistency of the old map and the current newly-constructed map is ensured.
Fig. 3 is a schematic diagram of the map-fusion effect of the designed automatic recovery when a tracking failure occurs.
The above description is only for the specific embodiment of the present invention, but the scope of the present invention is not limited thereto, and any changes or substitutions that can be easily conceived by those skilled in the art within the technical scope of the present invention are included in the scope of the present invention.
Claims (6)
1. An automatic map recovery method based on monocular ORB-SLAM is characterized by comprising the following steps:
step 1, synchronously acquiring an image sequence and real-time dynamic differential RTK data, further tracking and constructing a map based on a monocular ORB-SLAM system, and recovering map information;
step 2, if the map building tracking fails, saving the current map information, and recording the key frame and the map point serial number;
step 3, resetting the visual map construction system, and reconstructing a map by utilizing monocular initialization;
step 4, automatically recovering and fusing lost map information based on RTK information assistance;
and 5, if the visual tracking fails again, repeating the steps 2-4 until the map construction is finished.
2. The monocular ORB-SLAM-based automated map retrieval method of claim 1, wherein step 1 is specifically as follows:
aligning the RTK positioning information with the video data according to timestamps by linear interpolation; then, based on the monocular ORB-SLAM mapping framework, performing map tracking and construction with the camera coordinate system of the initial key frame as the visual coordinate system, wherein the map information comprises the key-frame sequence, the map points recovered by triangulation, and the co-visibility relations between key frames;
the position coordinates of the map points in the visual coordinate system are $[X_{v1}, X_{v2}, \ldots, X_{vn}]$, the pose information estimated for the key-frame sequence is $[(R_{v1}, t_{v1}), \ldots, (R_{vm}, t_{vm})]$, and the position coordinates of the key frames in the current visual coordinate system are $[Y_{v1}, Y_{v2}, \ldots, Y_{vm}]$, where $X_{vj}$ is the position coordinate of the jth map point in the current visual coordinate system, $j = 1, 2, \ldots, n$, and $n$ is the number of map points; $R_{vi}$ and $t_{vi}$ are the rotation matrix and translation vector taking the ith key frame from the camera coordinate system to the current visual coordinate system, so that $Y_{vi} = t_{vi}$ is the position coordinate of the ith key frame in the current visual coordinate system, $i = 1, 2, \ldots, m$, and $m$ is the number of key frames; the superscript $T$ denotes transposition.
3. The monocular ORB-SLAM-based automatic map recovery method as claimed in claim 1, wherein the map construction tracking failure in step 2 means that the monocular ORB-SLAM system cannot match the current frame feature information with the map, thereby causing an incremental map construction failure;
if map tracking and construction based on the monocular ORB-SLAM system fails, it is judged whether the number N of current key frames is greater than 5; if so, the current map information is automatically saved to the corresponding map path, the maximum sequence numbers of key frames and map points before the tracking loss, LastKF_id and LastMP_id, are recorded, and a flag LastMapFlag indicating whether an old map exists is set (true indicates that an old map exists).
4. The monocular ORB-SLAM-based automatic map recovery method as claimed in claim 1, wherein step 3 specifically means that the relevant map-construction variables in the monocular ORB-SLAM system are cleared and reset, initial map information is recovered in a new visual coordinate system, and the sequence numbers of new-map key frames and map points are offset by LastKF_id and LastMP_id respectively, to prevent the sequence numbers of the old and new maps from overlapping.
5. The monocular ORB-SLAM-based automatic map restoration method according to claim 1, wherein the automatic recovery and fusion of lost map information in step 4 means that after M key frames and the corresponding map point information have been recovered in the new visual coordinate system, it is judged whether an old map exists; if the old-map flag LastMapFlag is true, the map information is automatically loaded, and under the constraint of the RTK geographic information the coordinates of the old-map key frames and map points are converted into the current new visual coordinate system and fused into the current map construction; M is an integer between 14 and 26.
6. The monocular ORB-SLAM based automated map restoration method of claim 5, wherein the old map keyframe and map point coordinates are transformed into the current new visual coordinate system and fused into the current map construction; the method specifically comprises the following steps:
4.1, establishing a similar transformation model of the visual map from a visual coordinate system to a real geographic coordinate system to obtain the scaling, translation and rotation relations of the visual information;
$$X_g = s_{vg} R_{vg} X_v + t_{vg}$$
where $s_{vg}$ is a scale factor satisfying $s_{vg} > 0$, $R_{vg}$ is a rotation matrix in three-dimensional space, $t_{vg} = (t_x, t_y, t_z)^{T}$ is the translation vector, $X_g$ is the position coordinate of the map in the real geographic coordinate system, $X_v$ is the position coordinate of the map in the visual coordinate system, and $t_x$, $t_y$, $t_z$ are the translation amounts along the x, y and z axes of the visual coordinate system;
4.2, establishing an objective function, and solving the similarity transformation parameters by minimizing a least-squares objective over the aligned RTK data and the key-frame trajectory recovered by monocular vision:
$$e^2(R_{vg}, t_{vg}, s_{vg}) = \frac{1}{m}\sum_{i=1}^{m} \left\| r_i - \left( s_{vg} R_{vg} c_i + t_{vg} \right) \right\|^2$$
where the key-frame set to be estimated is $(c_1, \ldots, c_i, \ldots, c_m)$ and the aligned RTK data set is $(r_1, \ldots, r_i, \ldots, r_m)$; $c_i$ is the coordinate of the ith key frame in the visual coordinate system, $r_i$ is the coordinate of the ith key frame in the geographic coordinate system, $R_{vg}$, $t_{vg}$ and $s_{vg}$ are the rotation matrix, translation vector and scale factor taking the map from the visual coordinate system to the geographic coordinate system, $m$ is the number of key frames, and $e^2(R_{vg}, t_{vg}, s_{vg})$ is the mean square error between the similarity-transformed visual estimates and the aligned RTK data;
4.3, solving the coordinate transformation parameters of the old map from the original visual coordinate system to the new visual coordinate system;
the similarity transformation parameters $(R_1, t_1, s_1)$ of the new map and $(R_2, t_2, s_2)$ of the old map are solved respectively according to the objective function in step 4.2; based on the constraint relation that the real geographic information imposes on the two map segments, the three-dimensional point information of the old map is converted into the current new visual coordinate system;
$$X''_{vj} = s_{21} R_{21} X'_{vj} + t_{21}$$
where $X'_{vj}$ is the position of the jth old-map point in the original visual coordinate system, $X''_{vj}$ is its position in the new visual coordinate system, and $R_{21}$, $t_{21}$, $s_{21}$ are the rotation matrix, translation vector and scale factor taking the old map from the original visual coordinate system to the new visual coordinate system, obtained from the two estimated transformations as $s_{21} = s_2 / s_1$, $R_{21} = R_1^{T} R_2$, $t_{21} = \frac{1}{s_1} R_1^{T} (t_2 - t_1)$; $R_1$, $t_1$, $s_1$ are the rotation matrix, translation vector and scale factor taking the new map from the new visual coordinate system to the geographic coordinate system, $R_2$, $t_2$, $s_2$ are those taking the old map from the original visual coordinate system to the geographic coordinate system, and the superscript $T$ denotes transposition;
4.4, according to the similarity transformation relation, converting the pose information of the old map key frame into the current new visual coordinate system;
the pose of the ith old key frame in the original visual coordinate system is $(R'_{vi}, t'_{vi})$, and its pose in the new visual coordinate system is $(R''_{vi}, t''_{vi})$, where $R''_{vi} = R_{21} R'_{vi}$ and $t''_{vi} = s_{21} R_{21} t'_{vi} + t_{21}$; here $R'_{vi}$ and $t'_{vi}$ are the rotation matrix and translation vector of the ith old key frame in the original visual coordinate system, and $R''_{vi}$ and $t''_{vi}$ are the rotation matrix and translation vector of the ith old key frame in the new visual coordinate system;
and 4.5, the old map information is added into the current new-map variables, yielding key frames and map points whose positions are reasonably distributed in the new visual coordinate system and whose physical scale is consistent with the current new map; at the same time, the visual bag-of-words of the old key frames is computed, and the index information related to the old map construction is added into the new map-construction variables of the monocular ORB-SLAM system so that it participates in the current system's mapping, providing more redundant map information for tracking relocation and loop-closure correction.
Priority Applications (1)

| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201911325034.7A | 2019-12-20 | 2019-12-20 | Automatic map restoration method based on monocular ORB-SLAM |

Publications (2)

| Publication Number | Publication Date |
|---|---|
| CN111141295A | 2020-05-12 |
| CN111141295B | 2023-04-25 |
Family
ID=70519074
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201911325034.7A Active CN111141295B (en) | 2019-12-20 | 2019-12-20 | Automatic map restoration method based on monocular ORB-SLAM |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111141295B (en) |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111862120A (en) * | 2020-07-22 | 2020-10-30 | 苏州大学 | Monocular SLAM scale recovery method |
CN112004196A (en) * | 2020-08-24 | 2020-11-27 | 唯羲科技有限公司 | Positioning method, positioning device, terminal and computer storage medium |
CN113190022A (en) * | 2021-03-18 | 2021-07-30 | 浙江大学 | Underwater cabled robot positioning system and method based on visual SLAM |
CN113238557A (en) * | 2021-05-17 | 2021-08-10 | 珠海市一微半导体有限公司 | Mapping abnormity identification and recovery method, chip and mobile robot |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107193279A (en) * | 2017-05-09 | 2017-09-22 | 复旦大学 | Robot localization and map structuring system based on monocular vision and IMU information |
CN109901207A (en) * | 2019-03-15 | 2019-06-18 | 武汉大学 | A kind of high-precision outdoor positioning method of Beidou satellite system and feature combinations |
CN109991636A (en) * | 2019-03-25 | 2019-07-09 | 启明信息技术股份有限公司 | Map constructing method and system based on GPS, IMU and binocular vision |
CN110533587A (en) * | 2019-07-03 | 2019-12-03 | 浙江工业大学 | A kind of SLAM method of view-based access control model prior information and map recovery |
-
2019
- 2019-12-20 CN CN201911325034.7A patent/CN111141295B/en active Active
Non-Patent Citations (3)
Title |
---|
周绍磊; 吴修振; 刘刚; 张嵘; 徐海刚: "A monocular vision ORB-SLAM/INS integrated navigation method" * |
张福斌; 刘宗伟: "A tightly coupled navigation algorithm for a monocular camera, three-axis gyroscope, and odometer" * |
李承; 胡钊政; 胡月志; 吴华伟: "High-precision positioning algorithm for intelligent vehicles based on GPS and image fusion" * |
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111862120A (en) * | 2020-07-22 | 2020-10-30 | 苏州大学 | Monocular SLAM scale recovery method |
CN111862120B (en) * | 2020-07-22 | 2023-07-11 | 苏州大学 | Monocular SLAM scale recovery method |
CN112004196A (en) * | 2020-08-24 | 2020-11-27 | 唯羲科技有限公司 | Positioning method, positioning device, terminal and computer storage medium |
CN112004196B (en) * | 2020-08-24 | 2021-10-29 | 唯羲科技有限公司 | Positioning method, positioning device, terminal and computer storage medium |
CN113190022A (en) * | 2021-03-18 | 2021-07-30 | 浙江大学 | Underwater cabled robot positioning system and method based on visual SLAM |
CN113238557A (en) * | 2021-05-17 | 2021-08-10 | 珠海市一微半导体有限公司 | Mapping anomaly identification and recovery method, chip, and mobile robot |
CN113238557B (en) * | 2021-05-17 | 2024-05-07 | 珠海一微半导体股份有限公司 | Mapping anomaly identification and recovery method, computer-readable storage medium, and mobile robot |
Also Published As
Publication number | Publication date |
---|---|
CN111141295B (en) | 2023-04-25 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111141295A (en) | Automatic map recovery method based on monocular ORB-SLAM | |
CN110033489B (en) | Method, device and equipment for evaluating vehicle positioning accuracy | |
Ventura et al. | Global localization from monocular slam on a mobile phone | |
Toft et al. | Long-term 3d localization and pose from semantic labellings | |
CN109544636A (en) | Fast monocular visual odometry navigation and positioning method fusing the feature-point method and the direct method | |
CN111561923A (en) | SLAM (simultaneous localization and mapping) mapping method and system based on multi-sensor fusion | |
CN105096386A (en) | Method for automatically generating geographic maps for large-range complex urban environment | |
US10152828B2 (en) | Generating scene reconstructions from images | |
CN104200523A (en) | Large-scale scene three-dimensional reconstruction method for fusion of additional information | |
CN104537709A (en) | Real-time three-dimensional reconstruction key frame determination method based on position and orientation changes | |
CN111415417B (en) | Mobile robot topology experience map construction method integrating sparse point cloud | |
CN111882602B (en) | Visual odometer implementation method based on ORB feature points and GMS matching filter | |
CN110176060B (en) | Dense three-dimensional reconstruction method and system based on multi-scale geometric consistency guidance | |
CN112991534B (en) | Indoor semantic map construction method and system based on multi-granularity object model | |
CN110599545A (en) | Feature-based dense map construction system | |
CN111860651A (en) | Monocular vision-based semi-dense map construction method for mobile robot | |
CN112731503B (en) | Pose estimation method and system based on front end tight coupling | |
CN112991436B (en) | Monocular vision SLAM method based on object size prior information | |
CN109509208B (en) | High-precision three-dimensional point cloud acquisition method, system, device and storage medium | |
CN110021041B (en) | Unmanned scene incremental gridding structure reconstruction method based on binocular camera | |
CN113554102A (en) | Aviation image DSM matching method for cost calculation dynamic programming | |
CN115965673B (en) | Centralized multi-robot positioning method based on binocular vision | |
CN112305558A (en) | Mobile robot track determination method and device by using laser point cloud data | |
CN112146647A (en) | Binocular vision positioning method and chip for ground texture | |
Wang et al. | Stream Query Denoising for Vectorized HD Map Construction |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||