CN112132745B - Multi-sub-map splicing feature fusion method based on geographic information

Info

Publication number: CN112132745B
Application number: CN201910554020.6A
Authority: CN (China)
Prior art keywords: sub-map, coordinate system, geographic
Other languages: Chinese (zh)
Other versions: CN112132745A
Inventors: 赵科东, 孙永荣, 薛源, 赵伟
Assignee (current and original): Nanjing University of Aeronautics and Astronautics
Priority and filing date: 2019-06-25
Publication dates: 2020-12-25 (CN112132745A), 2022-01-04 (grant of CN112132745B)
Legal status: Active (granted)

Classifications

All under G06T (G PHYSICS; G06 COMPUTING, CALCULATING OR COUNTING; G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL):

    • G06T3/4038: Scaling the whole image or part thereof for image mosaicing, i.e. plane images composed of plane sub-images
    • G06T17/05: Three-dimensional [3D] modelling; Geographic models
    • G06T5/50: Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
    • G06T7/70: Image analysis; Determining position or orientation of objects or cameras
    • G06T2207/20221: Image fusion; Image merging

Abstract

The invention discloses a multi-sub-map splicing feature fusion method based on geographic information, relating to the fields of map surveying and mapping and navigation positioning. The method comprises the following steps: aligning the vision sensor with high-precision GPS information; establishing the mapping relation between a plurality of sub-maps and a geographic coordinate system; performing coordinate transformation on the three-dimensional feature points of all the sub-maps; performing coordinate transformation on the keyframe pose data of all the sub-maps; and optimizing the keyframes and feature points of the global map. The method can splice a plurality of sub-maps simply by using geographic position constraints, and after splicing it yields the scale factor between the global visual coordinate system and the geographic coordinate system. Applied to surveying and mapping work, it needs no expensive professional sensing equipment; mapping can be accomplished mainly with a camera and a GPS receiver, which reduces the mapping cost.

Description

Multi-sub-map splicing feature fusion method based on geographic information
Technical Field
The invention relates to the fields of map surveying and mapping and navigation positioning, and in particular to a multi-sub-map splicing feature fusion method based on geographic information.
Background
A military surveying and mapping vehicle needs to survey large-range road-information thematic maps in the field; a guided-missile vehicle needs long-endurance, high-precision vehicle-mounted navigation under satellite signal rejection; and military combat vehicles are required to integrate combat and patrol. To meet these needs, an integrated surveying-mapping and positioning system and method urgently need to be explored, so as to achieve surveying and mapping in training and navigation in wartime.
In the common GPS/INS integrated navigation mode that introduces GPS information, military vehicles face unavailable or spoofed satellite signals in wartime, so the positioning requirements of a long-endurance navigation system are difficult to meet. Visual navigation information is therefore introduced, both to provide an automated engineering method for field surveying of large-range road-information thematic maps and to keep the navigation system positioning with high precision without satellites. When satellite signals are available, i.e. in the mapping stage, road information is surveyed and a road-information thematic map usable for subsequent visual navigation is generated; when satellite signals are rejected, i.e. in the navigation stage, the current position of the carrier is solved from feature information in real-time camera images, achieving global navigation positioning in the geographic sense.
Map construction is the product of the surveying vehicle's work in the mapping stage, the basis for correct positioning of military combat vehicles in the navigation stage, and one of the key technologies for correct operation of the integrated surveying-mapping and positioning system. At present, vision-based SLAM mapping is prone to tracking failure. If the system is not reset, subsequent areas cannot be mapped, and one can only wait for the system to return to an already-mapped area for relocalization and tracking; if the system is reset, the map it has already built cannot remain in memory, and the mapped data are lost. In addition, the scale of the mapping environment is an important factor affecting the mapping success rate. While the system is in mapping mode, the keyframe and feature-point data in memory grow as the continuous working time and the covered geographic area increase. The larger the environment, the more likely the system is to run into problems of accuracy, robustness, and computational efficiency, and the less effectively memory resources are used.
For this reason, a common approach is to map in segments and then fuse the stitched maps. Most current research on map fusion focuses on the transformation and stitching of two-dimensional road topologies, obtaining the optimal rigid transformation by optimizing over the overlapping part of the maps. However, such transformation methods are not robust enough and cannot handle substantial map deformation. For example, when the system runs open-loop for a long period and a local map becomes severely distorted, a suitable transformation cannot be found between local maps even if the two sub-maps share a significant overlap area. Furthermore, for monocular SLAM, the visual coordinate system used to describe the keyframes, the co-visible points, and the camera motion parameters typically has no physical scale, i.e. it differs in scale from the real physical world, and the scale factor differs between sub-maps. It is therefore very important to design a map stitching method that integrates road topology and visual feature points.
Disclosure of Invention
To address the deficiencies in the background art, the invention provides a method for stitching monocular visual SLAM sub-maps by fusing RTK information. The invention does more than simply splice the topological maps and feature-point maps of the segment-wise built local sub-maps: it fuses and stitches the region-by-region local sub-maps in an offline processing mode. The fused map not only has higher global accuracy, but also provides the scale factor between the monocular visual SLAM frame and the real geographic coordinate system, supplying a complete global map for subsequent visual navigation.
To solve the above technical problems, the invention adopts the following technical scheme:
a multi-sub-map splicing feature fusion method based on geographic information, which specifically comprises the following steps:
step one, aligning the vision sensor with high-precision GPS information;
step two, establishing the mapping relation between a plurality of sub-maps and a geographic coordinate system;
step three, performing coordinate transformation on the three-dimensional feature points of all the sub-maps;
step four, performing coordinate transformation on the keyframe pose data of all the sub-maps;
and step five, optimizing the keyframes and feature points of the global map.
As a further preferable scheme of the multi-sub-map stitching feature fusion method based on geographic information, in step one, aligning the vision sensor with the high-precision GPS information means aligning the visual keyframes with the RTK data by linear interpolation: to estimate the geographic coordinates at the image acquisition time $t_k$, first find on the time line the two RTK acquisition times nearest to $t_k$, $t_i$ and $t_{i+1}$, with geographic coordinates $g(t_i)$ and $g(t_{i+1})$; the geographic coordinates $g(t_k)$ at time $t_k$ are then obtained as:

$$g(t_k) = g(t_i) + \frac{t_k - t_i}{t_{i+1} - t_i}\left(g(t_{i+1}) - g(t_i)\right) \quad (1)$$
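A minimal sketch of this alignment in Python follows, assuming the RTK timestamps are sorted and each fix carries geographic coordinates; the function and variable names are illustrative assumptions, not taken from the patent.

```python
# Hedged sketch of step one: aligning asynchronously acquired visual keyframes
# with RTK fixes by linear interpolation (equation (1)).
import bisect

def interpolate_rtk(t_k, rtk_times, rtk_coords):
    """Estimate geographic coordinates g(t_k) at keyframe time t_k.

    rtk_times  : sorted list of RTK acquisition timestamps
    rtk_coords : list of (x, y, z) geographic coordinates, one per timestamp
    """
    # Locate the bracketing RTK fixes t_i <= t_k <= t_{i+1}
    i = bisect.bisect_right(rtk_times, t_k) - 1
    if i < 0 or i + 1 >= len(rtk_times):
        raise ValueError("t_k lies outside the RTK time line")
    t_i, t_i1 = rtk_times[i], rtk_times[i + 1]
    g_i, g_i1 = rtk_coords[i], rtk_coords[i + 1]
    # Equation (1): g(t_k) = g(t_i) + (t_k - t_i)/(t_{i+1} - t_i)*(g(t_{i+1}) - g(t_i))
    w = (t_k - t_i) / (t_i1 - t_i)
    return tuple(a + w * (b - a) for a, b in zip(g_i, g_i1))
```

For example, with fixes at t = 0 s and t = 1 s, a keyframe at t_k = 0.25 s receives coordinates one quarter of the way from g(0) to g(1).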
As a further preferable scheme of the multi-sub-map stitching feature fusion method based on geographic information, in step two, establishing the mapping relation between the plurality of sub-maps and the geographic coordinate system means establishing the mapping relation between a plurality of sub-maps located in different visual coordinate systems and the geographic coordinate system, with the following mapping model:
Define the visual coordinate system $O_v\text{-}X_vY_vZ_v$ and the geographic coordinate system $O_g\text{-}X_gY_gZ_g$. For the image acquisition position corresponding to a keyframe image, denote its coordinates in the visual coordinate system by $X_v = (x_v, y_v, z_v)^T$ and its coordinates in the corresponding geographic coordinate system by $X_g = (x_g, y_g, z_g)^T$; the two then satisfy the following similarity transformation relation:

$$X_g = s_{vg} R_{vg} X_v + t_{vg} \quad (2)$$

where $s_{vg}$ is a scale factor satisfying $s_{vg} > 0$, $R_{vg}$ is a rotation matrix in three-dimensional space, and $t_{vg} = (t_x, t_y, t_z)^T$ is a translation vector.
From equation (2), the transformation from the geographic coordinate system to the visual coordinate system follows as equation (3):

$$X_v = \frac{1}{s_{vg}} R_{vg}^T \left( X_g - t_{vg} \right) \quad (3)$$

which can be rewritten as equation (4):

$$X_v = s_{gv} R_{gv} X_g + t_{gv} \quad (4)$$

Clearly, equation (4) is the inverse transform of equation (2), and the inverse of a similarity transform satisfies:

$$s_{gv} = \frac{1}{s_{vg}}, \quad R_{gv} = R_{vg}^T, \quad t_{gv} = -\frac{1}{s_{vg}} R_{vg}^T t_{vg} \quad (5)$$
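The similarity model and its inverse can be captured in a small helper class; this is a sketch under the assumption of numpy arrays, with class and method names chosen for illustration.

```python
# Hedged sketch of the similarity model of equations (2)-(5).
import numpy as np

class Sim3:
    """Similarity transform X_out = s * R @ X_in + t, as in equation (2)."""
    def __init__(self, s, R, t):
        self.s = float(s)                  # scale factor, s > 0
        self.R = np.asarray(R, float)      # 3x3 rotation matrix
        self.t = np.asarray(t, float)      # translation vector

    def apply(self, X):
        # Equation (2): map visual coordinates to geographic coordinates
        return self.s * self.R @ X + self.t

    def inverse(self):
        # Equation (5): s' = 1/s, R' = R^T, t' = -(1/s) R^T t
        return Sim3(1.0 / self.s, self.R.T,
                    -(1.0 / self.s) * (self.R.T @ self.t))
```

Composing `T.inverse().apply(T.apply(X))` returns `X`, which is a quick way to sanity-check equation (5).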
As a further preferable scheme of the multi-sub-map stitching feature fusion method based on geographic information, in step three, performing coordinate transformation on the three-dimensional feature points of all sub-maps means transforming the three-dimensional feature point coordinates of every sub-map except the first: the feature points of the first sub-map are kept unchanged, and the feature point coordinates of the other sub-maps are transformed to satisfy the geographic position constraints. The coordinate transformation relation is solved as follows:
Consider the case of N sub-maps, and denote by $(R_k, t_k, s_k)$ the similarity transformation model parameters between the k-th sub-map's visual coordinate system and the geographic coordinate system. Let the sub-map have $N_k$ feature points, let $X_{kj}$ be the position coordinates of its j-th feature point in the visual coordinate system, and let $Y_{kj}$ be its position coordinates in the geographic coordinate system; then:

$$Y_{kj} = s_k R_k X_{kj} + t_k \quad (6)$$

Transforming into the visual coordinate system of the first sub-map according to equation (4) gives:

$$X'_{kj} = \frac{1}{s_1} R_1^T \left( Y_{kj} - t_1 \right) \quad (7)$$

Combining equations (6) and (7) yields the new similarity transformation:

$$X'_{kj} = s_{k1} R_{k1} X_{kj} + t_{k1} \quad (8)$$

where

$$s_{k1} = \frac{s_k}{s_1}, \quad R_{k1} = R_1^T R_k, \quad t_{k1} = \frac{1}{s_1} R_1^T \left( t_k - t_1 \right) \quad (9)$$

Applying the coordinate transformations of equations (8) and (9) to the feature point positions of all sub-maps yields global-map feature points consistent with the geographic position distribution and with the scale of the first sub-map.
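The patent defines the per-sub-map parameters $(R_k, t_k, s_k)$ but does not spell out how they are estimated; a common closed-form choice, assumed here, is Umeyama alignment between keyframe positions in the visual frame and their RTK-derived geographic coordinates from step one. The sketch below also applies the chained transform of equations (8) and (9); all names are illustrative.

```python
# Assumed estimator (Umeyama alignment) plus the transform of eqs. (8)-(9).
import numpy as np

def umeyama(X, Y):
    """Least-squares similarity Y ~ s * R @ X + t for 3xN point sets."""
    n = X.shape[1]
    mx, my = X.mean(axis=1, keepdims=True), Y.mean(axis=1, keepdims=True)
    Xc, Yc = X - mx, Y - my
    U, D, Vt = np.linalg.svd(Yc @ Xc.T / n)
    S = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:
        S[2, 2] = -1.0                      # guard against a reflection
    R = U @ S @ Vt
    s = (D * np.diag(S)).sum() / ((Xc ** 2).sum() / n)
    t = (my - s * R @ mx).ravel()
    return s, R, t

def to_first_submap(X_k, s_k, R_k, t_k, s_1, R_1, t_1):
    """Map 3xN feature points of sub-map k into sub-map 1's visual frame."""
    # Equation (9): chained similarity parameters
    s_k1 = s_k / s_1
    R_k1 = R_1.T @ R_k
    t_k1 = (R_1.T @ (t_k - t_1)) / s_1
    # Equation (8)
    return s_k1 * R_k1 @ X_k + t_k1[:, None]
```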
As a further preferable scheme of the multi-sub-map stitching feature fusion method based on geographic information, in step three, this three-dimensional feature point coordinate transformation method keeps the difference between the scale of the whole map and the scales of the sub-maps as small as possible, which yields better positioning results during navigation.
As a further preferable scheme of the multi-sub-map stitching feature fusion method based on geographic information, in step four, performing coordinate transformation on the keyframe pose data of all sub-maps means transforming the keyframe pose coordinates of every sub-map except the first: the keyframes of the first sub-map are kept unchanged, and the keyframe coordinates of the other sub-maps are transformed to satisfy the geographic position constraints. The coordinate transformation relation is solved as follows:
Let the k-th sub-map have $F_k$ keyframes, and denote the camera pose corresponding to its i-th keyframe by $(R_{ki}^{(c)}, t_{ki}^{(c)})$, where the superscript (c) denotes the camera. Here $(R_{ki}^{(c)}, t_{ki}^{(c)})$ are the rigid-body transformation model parameters from the visual coordinate system to the camera coordinate system, satisfying:

$$X_c = R_{ki}^{(c)} X_v + t_{ki}^{(c)} \quad (10)$$

Denote by $(R'^{(c)}_{ki}, t'^{(c)}_{ki})$ the rigid-body transformation model parameters relative to the global-map visual coordinate system. According to equation (9):

$$R'^{(c)}_{ki} = R^{(c)}_{ki} R_{k1}^T \quad (11)$$

Solving for $\tilde{t}_{ki}$, the coordinates of the origin of the camera coordinate system in the visual coordinate system, from equation (10):

$$\tilde{t}_{ki} = -\left(R^{(c)}_{ki}\right)^T t^{(c)}_{ki} \quad (12)$$

Transforming into the global-map visual coordinate system according to equation (8) gives:

$$\tilde{t}'_{ki} = s_{k1} R_{k1} \tilde{t}_{ki} + t_{k1} \quad (13)$$

In the global-map visual coordinate system, the origin of the camera coordinate system satisfies:

$$\tilde{t}'_{ki} = -\left(R'^{(c)}_{ki}\right)^T t'^{(c)}_{ki} \quad (14)$$

Solving equations (13) and (14) yields:

$$t'^{(c)}_{ki} = -R'^{(c)}_{ki} \left( s_{k1} R_{k1} \tilde{t}_{ki} + t_{k1} \right) \quad (15)$$

Applying the coordinate transformations of equations (11) and (15) to the keyframe camera pose data of all sub-maps yields global-map keyframe camera poses consistent with the geographic position distribution and with the scale of the first sub-map.
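A sketch of this pose update under the same assumptions follows; `R_c, t_c` are the visual-to-camera parameters of equation (10), and the chained parameters `(s_k1, R_k1, t_k1)` come from equation (9). Names are illustrative.

```python
# Hedged sketch of equations (10)-(15): re-expressing a keyframe camera pose
# in the global (first sub-map) visual coordinate system.
import numpy as np

def transform_pose(R_c, t_c, s_k1, R_k1, t_k1):
    # Equation (12): camera origin expressed in the k-th visual frame
    c = -R_c.T @ t_c
    # Equation (13): camera origin in the global visual frame
    c_new = s_k1 * R_k1 @ c + t_k1
    # Equation (11): rotation relative to the global visual frame
    R_new = R_c @ R_k1.T
    # Equations (14)-(15): translation consistent with the new camera origin
    t_new = -R_new @ c_new
    return R_new, t_new
```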
As a further preferable scheme of the multi-sub-map stitching feature fusion method based on geographic information, in step five, optimizing the keyframes and feature points of the global map means optimizing the map after the preliminary fusion transformation: the co-visibility relations of the global map are established and graph optimization is applied, yielding a high-precision fused map.
Denote by $M_j$ the j-th sub-map and by $M_k$ the k-th sub-map; the global map optimization process can be described as:
1) retrieving the constraints between the two sub-maps, $C_{j+k}$;
2) seeking the optimal global map by graph optimization under the constraint set $\{C_j \cup C_k \cup C_{j+k}\}$.
The specific steps are as follows:
1) take a feature point of $M_j$ and search for matching feature points in its neighborhood in $M_k$;
2) perform the feature-point matching decision, judging whether the two are the same feature point, to establish an inter-map constraint;
3) search all matching point pairs and establish all inter-map constraints;
4) perform the optimization with g2o.
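The patent performs this refinement with g2o; as a stand-in to make the idea concrete, the sketch below uses scipy.optimize.least_squares to refine a Sim(3) correction of the appended sub-map so that matched feature-point pairs (the inter-map constraints $C_{j+k}$) coincide. In the actual pipeline each matched pair would instead become an edge in the g2o graph alongside the intra-map constraints $C_j$ and $C_k$; every name here is illustrative.

```python
# Simplified stand-in for the g2o refinement: least-squares fit of a Sim(3)
# correction that pulls matched point pairs together.
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def residuals(params, P_j, P_k):
    """P_j, P_k: 3xN matched feature points from sub-maps M_j and M_k."""
    log_s, rvec, t = params[0], params[1:4], params[4:7]
    R = Rotation.from_rotvec(rvec).as_matrix()
    # One inter-map constraint per matched pair: corrected M_k point minus M_j point
    return (np.exp(log_s) * (R @ P_k) + t[:, None] - P_j).ravel()

def refine_correction(P_j, P_k):
    x0 = np.zeros(7)   # identity start: log s = 0, rotation vector = 0, t = 0
    sol = least_squares(residuals, x0, args=(P_j, P_k))
    return sol.x       # [log s, rvec (3), t (3)] of the fitted correction
```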
The invention can splice the segment-wise built local sub-maps into a complete global map, and it also fuses the three-dimensional feature point information for subsequent visual navigation.
Compared with the prior art, the technical scheme of the invention has the following technical effects:
1) applied to surveying and mapping work, it needs no expensive professional sensing equipment; mapping can be accomplished mainly with a camera and a GPS receiver, which reduces the mapping cost;
2) the surveying and mapping process is convenient and simple and can be carried out as multiple small-area mapping jobs, widening the application scenarios;
3) the invention is highly robust to system tracking failures, improving surveying efficiency and mapping accuracy;
4) the map fused by the method can be used for subsequent navigation and provides real scale and position information.
Drawings
FIG. 1 is a flow chart of a method of the present invention;
FIG. 2 is a sub-map fusion diagram of the present invention;
FIG. 3 is a sub-map fusion effect diagram of the present invention.
Detailed Description
The technical scheme of the invention is further explained in detail below with reference to the accompanying drawings:
it will be understood by those skilled in the art that, unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the prior art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
The invention provides a method for fusing monocular visual SLAM sub-maps using RTK information. It does more than simply splice the topological maps and feature-point maps of the segment-wise built local sub-maps. The map fused by the method not only has higher global accuracy, but also provides the scale factor between the monocular visual SLAM frame and the real geographic coordinate system, supplying a complete global map for subsequent visual navigation.
As shown in fig. 1, which is a flowchart of a multi-sub map stitching feature fusion method based on geographic information, the map fusion method of the present invention can be divided into the following five steps:
step one, as shown in a block 101 in fig. 1, aligning an asynchronously acquired visual keyframe with RTK information to obtain real position information of each keyframe;
step two, as shown in block 102 in fig. 1, a mapping relationship between a plurality of sub-maps located under different visual coordinate systems and a geographic coordinate system is established.
step three, as shown in block 103 in fig. 1, retaining the mapping relation between the visual coordinate system of the first sub-map and the geographic coordinate system, and performing coordinate transformation on the three-dimensional feature points of all remaining sub-maps to obtain global-map feature points consistent with the geographic position distribution and the scale of the first sub-map;
step four, as shown in block 104 in fig. 1, retaining the mapping relation between the visual coordinate system of the first sub-map and the geographic coordinate system, and performing coordinate transformation on the keyframe pose data of all remaining sub-maps to obtain global-map keyframe camera poses consistent with the geographic position distribution and the scale of the first sub-map;
and step five, as shown in block 105 in fig. 1, establishing the co-visibility relations of the global map and processing them with graph optimization to obtain a high-precision fused map.
FIG. 2 is a schematic view of sub-map fusion according to the present invention;
fig. 3 is a diagram illustrating the effect of sub-map fusion according to the present invention.
The foregoing is only a partial embodiment of the present invention. It should be noted that those skilled in the art can make various improvements and refinements without departing from the principle of the present invention, and these improvements and refinements should also be regarded as falling within the protection scope of the present invention.

Claims (6)

1. A multi-sub-map splicing feature fusion method based on geographic information, characterized in that the method specifically comprises the following steps:
step one, aligning the vision sensor with high-precision GPS information;
step two, establishing the mapping relation between a plurality of sub-maps and a geographic coordinate system;
step three, performing coordinate transformation on the three-dimensional feature points of all the sub-maps;
step four, performing coordinate transformation on the keyframe pose data of all the sub-maps;
step five, optimizing the keyframes and feature points of the global map;
in step two, establishing the mapping relation between the plurality of sub-maps and the geographic coordinate system means establishing the mapping relation between a plurality of sub-maps located in different visual coordinate systems and the geographic coordinate system, with the following mapping model:
define the visual coordinate system $O_v\text{-}X_vY_vZ_v$ and the geographic coordinate system $O_g\text{-}X_gY_gZ_g$; for the image acquisition position corresponding to a keyframe image, denote its coordinates in the visual coordinate system by $X_v = (x_v, y_v, z_v)^T$ and its coordinates in the corresponding geographic coordinate system by $X_g = (x_g, y_g, z_g)^T$; the two then satisfy the following similarity transformation relation:

$$X_g = s_{vg} R_{vg} X_v + t_{vg} \quad (2)$$

where $s_{vg}$ is a scale factor satisfying $s_{vg} > 0$, $R_{vg}$ is a rotation matrix in three-dimensional space, and $t_{vg} = (t_x, t_y, t_z)^T$ is a translation vector;
from equation (2), the transformation from the geographic coordinate system to the visual coordinate system follows as equation (3):

$$X_v = \frac{1}{s_{vg}} R_{vg}^T \left( X_g - t_{vg} \right) \quad (3)$$

which can be rewritten as equation (4):

$$X_v = s_{gv} R_{gv} X_g + t_{gv} \quad (4)$$

clearly, equation (4) is the inverse transform of equation (2), and the inverse of a similarity transform satisfies:

$$s_{gv} = \frac{1}{s_{vg}}, \quad R_{gv} = R_{vg}^T, \quad t_{gv} = -\frac{1}{s_{vg}} R_{vg}^T t_{vg} \quad (5)$$
2. The multi-sub-map splicing feature fusion method based on geographic information as claimed in claim 1, wherein: in step one, aligning the vision sensor with the high-precision GPS information means aligning the visual keyframes with the RTK data by linear interpolation: to estimate the geographic coordinates at the image acquisition time $t_k$, first find on the time line the two RTK acquisition times nearest to $t_k$, $t_i$ and $t_{i+1}$, with geographic coordinates $g(t_i)$ and $g(t_{i+1})$; the geographic coordinates $g(t_k)$ at time $t_k$ are then:

$$g(t_k) = g(t_i) + \frac{t_k - t_i}{t_{i+1} - t_i}\left(g(t_{i+1}) - g(t_i)\right) \quad (1)$$
3. The method for fusing the splicing features of the multiple sub-maps based on the geographic information as claimed in claim 1, wherein: in step three, performing coordinate transformation on the three-dimensional feature points of all sub-maps means transforming the three-dimensional feature point coordinates of every sub-map except the first: the feature points of the first sub-map are kept unchanged, and the feature point coordinates of the other sub-maps are transformed to satisfy the geographic position constraints; the coordinate transformation relation is solved as follows:
consider the case of N sub-maps, and denote by $(R_k, t_k, s_k)$ the similarity transformation model parameters between the k-th sub-map's visual coordinate system and the geographic coordinate system; let the sub-map have $N_k$ feature points, let $X_{kj}$ be the position coordinates of its j-th feature point in the visual coordinate system, and let $Y_{kj}$ be its position coordinates in the geographic coordinate system; then:

$$Y_{kj} = s_k R_k X_{kj} + t_k \quad (6)$$

transforming into the visual coordinate system of the first sub-map according to equation (4) gives:

$$X'_{kj} = \frac{1}{s_1} R_1^T \left( Y_{kj} - t_1 \right) \quad (7)$$

combining equations (6) and (7) yields the new similarity transformation:

$$X'_{kj} = s_{k1} R_{k1} X_{kj} + t_{k1} \quad (8)$$

where

$$s_{k1} = \frac{s_k}{s_1}, \quad R_{k1} = R_1^T R_k, \quad t_{k1} = \frac{1}{s_1} R_1^T \left( t_k - t_1 \right) \quad (9)$$

applying the coordinate transformations of equations (8) and (9) to the feature point positions of all sub-maps yields global-map feature points consistent with the geographic position distribution and with the scale of the first sub-map.
4. The method for fusing the splicing features of the multiple sub-maps based on the geographic information as claimed in claim 1, wherein: in step three, the three-dimensional feature point coordinate transformation method keeps the difference between the scale of the whole map and the scales of the sub-maps as small as possible, which yields better positioning results during navigation.
5. The method for fusing the splicing features of the multiple sub-maps based on the geographic information as claimed in claim 3, wherein: in step four, performing coordinate transformation on the keyframe pose data of all sub-maps means transforming the keyframe pose coordinates of every sub-map except the first: the keyframes of the first sub-map are kept unchanged, and the keyframe coordinates of the other sub-maps are transformed to satisfy the geographic position constraints; the coordinate transformation relation is solved as follows:
let the k-th sub-map have $F_k$ keyframes, and denote the camera pose corresponding to its i-th keyframe by $(R_{ki}^{(c)}, t_{ki}^{(c)})$, where the superscript (c) denotes the camera; here $(R_{ki}^{(c)}, t_{ki}^{(c)})$ are the rigid-body transformation model parameters from the visual coordinate system to the camera coordinate system, satisfying:

$$X_c = R_{ki}^{(c)} X_v + t_{ki}^{(c)} \quad (10)$$

denote by $(R'^{(c)}_{ki}, t'^{(c)}_{ki})$ the rigid-body transformation model parameters relative to the global-map visual coordinate system; according to equation (9):

$$R'^{(c)}_{ki} = R^{(c)}_{ki} R_{k1}^T \quad (11)$$

solving for $\tilde{t}_{ki}$, the coordinates of the origin of the camera coordinate system in the visual coordinate system, from equation (10):

$$\tilde{t}_{ki} = -\left(R^{(c)}_{ki}\right)^T t^{(c)}_{ki} \quad (12)$$

transforming into the global-map visual coordinate system according to equation (8) gives:

$$\tilde{t}'_{ki} = s_{k1} R_{k1} \tilde{t}_{ki} + t_{k1} \quad (13)$$

in the global-map visual coordinate system, the origin of the camera coordinate system satisfies:

$$\tilde{t}'_{ki} = -\left(R'^{(c)}_{ki}\right)^T t'^{(c)}_{ki} \quad (14)$$

solving equations (13) and (14) yields:

$$t'^{(c)}_{ki} = -R'^{(c)}_{ki} \left( s_{k1} R_{k1} \tilde{t}_{ki} + t_{k1} \right) \quad (15)$$

applying the coordinate transformations of equations (11) and (15) to the keyframe camera pose data of all sub-maps yields global-map keyframe camera poses consistent with the geographic position distribution and with the scale of the first sub-map.
6. The method for fusing the splicing features of the multiple sub-maps based on the geographic information as claimed in claim 1, wherein: in step five, optimizing the keyframes and feature points of the global map means optimizing the map after the preliminary fusion transformation: the co-visibility relations of the global map are established and graph optimization is applied, yielding a high-precision fused map;
denote by $M_j$ the j-th sub-map and by $M_k$ the k-th sub-map; the global map optimization process can be described as:
1) retrieving the constraints between the two sub-maps, $C_{j+k}$;
2) seeking the optimal global map by graph optimization under the constraint set $\{C_j \cup C_k \cup C_{j+k}\}$;
the specific steps are as follows:
1) take a feature point of $M_j$ and search for matching feature points in its neighborhood in $M_k$;
2) perform the feature-point matching decision, judging whether the two are the same feature point, to establish an inter-map constraint;
3) search all matching point pairs and establish all inter-map constraints;
4) perform the optimization with g2o.
CN201910554020.6A, filed 2019-06-25 (priority 2019-06-25): Multi-sub-map splicing feature fusion method based on geographic information. Status: Active, granted as CN112132745B.



Legal Events

Code Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant