CN113052761A - Laser point cloud map fusion method, device and computer readable storage medium - Google Patents
- Publication number
- CN113052761A (application CN201911368775.3A)
- Authority
- CN
- China
- Prior art keywords
- point cloud
- map
- cloud map
- point
- clouds
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/40—Scaling of whole images or parts thereof, e.g. expanding or contracting
- G06T3/4038—Image mosaicing, e.g. composing plane images from plane sub-images
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/40—Scaling of whole images or parts thereof, e.g. expanding or contracting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/30—Determination of transform parameters for the alignment of images, i.e. image registration
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10028—Range image; Depth image; 3D point clouds
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Processing Or Creating Images (AREA)
Abstract
The invention discloses a laser point cloud map fusion method, a laser point cloud map fusion device, and a computer readable storage medium. The method comprises the following steps: stitching and matching a first local point cloud and a second local point cloud that contain a common area to obtain a first pose transformation matrix; projecting all point clouds in the second point cloud map into the reference coordinate system of the point clouds in the first point cloud map through the first pose transformation matrix; matching and stitching all point clouds in the projected second point cloud map with all point clouds in the first point cloud map again to obtain a second pose transformation matrix; and projecting all point clouds in the second point cloud map into the reference coordinate system of the point clouds in the first point cloud map again through the second pose transformation matrix, and fusing all point clouds in the re-projected second point cloud map with all point clouds in the first point cloud map. An efficient laser point cloud map fusion scheme is thereby realized, and the fault tolerance of the map building process is improved.
Description
Technical Field
The invention relates to the technical field of robots, and in particular to a laser point cloud map fusion method, a laser point cloud map fusion device, and a computer readable storage medium.
Background
In the prior art, when a robot needs to travel accurately to a destination, it requires a map analogous to one drawn by a human. In the field of robotics, describing and recognizing the environment relies mainly on map information: a robot generally uses an environment map to describe its current surroundings, and the form of map description differs with the algorithms and sensors used. Map representation methods fall mainly into four types: grid maps, feature point maps, direct representation, and topological maps. The grid map is currently the most widely used map storage method for robots and differs little from a map as people perceive it; it is essentially a bitmap picture in which each pixel represents the probability that the corresponding location in the actual environment is occupied by an obstacle. On the one hand, a grid map can represent many features of the spatial environment, and a robot can use it for path planning; on the other hand, a grid map does not directly store the raw sensor data, so it is relatively economical in space and time, which is why laser radar systems also use grid maps.
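The occupancy-grid idea described above can be sketched in a few lines. The resolution, map size, and hit probability below are illustrative assumptions, not values from the patent:

```python
import numpy as np

# Minimal occupancy-grid sketch: each cell stores the probability that
# the corresponding patch of the environment contains an obstacle.
resolution = 0.05          # metres per cell (assumed value)
width, height = 200, 200   # a 10 m x 10 m map at 5 cm resolution
grid = np.full((height, width), 0.5)  # 0.5 = unknown occupancy

def mark_hit(grid, x, y, resolution, p_hit=0.9):
    """Raise the occupancy probability of the cell containing point (x, y)."""
    col = int(x / resolution)
    row = int(y / resolution)
    if 0 <= row < grid.shape[0] and 0 <= col < grid.shape[1]:
        grid[row, col] = max(grid[row, col], p_hit)

# a laser return observed at (1.0 m, 2.0 m) marks one cell as likely occupied
mark_hit(grid, 1.0, 2.0, resolution)
```

A real implementation would update cells probabilistically (e.g. log-odds) along each laser ray rather than overwriting a single cell.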
However, in the prior art, when a map is constructed with a laser sensor, success cannot be guaranteed on every attempt, and various errors may occur along the way. If an error occurs while mapping a small scene, the mapping operation can simply be repeated. If an error occurs while mapping a large scene, however — for example a map exceeding 10000 square meters — repeating the mapping may take a great deal of time, the accuracy of the rebuilt map is not guaranteed, and any further error forces yet another rebuild. Such mapping schemes in the prior art therefore have significant technical defects: on the one hand, the fault tolerance of the system is low; on the other hand, the execution efficiency of mapping is poor.
Disclosure of Invention
In order to overcome the technical defects in the prior art, the invention provides a laser point cloud map fusion method, which comprises the following steps:
stitching and matching a first local point cloud and a second local point cloud that contain a common area to obtain a first pose transformation matrix;
projecting all point clouds in the second point cloud map into a reference coordinate system of the point clouds in the first point cloud map through the first pose transformation matrix;
matching and stitching all point clouds in the projected second point cloud map with all point clouds in the first point cloud map again to obtain a second pose transformation matrix;
and projecting all point clouds in the second point cloud map into the reference coordinate system of the point clouds in the first point cloud map again through the second pose transformation matrix, and fusing all point clouds in the re-projected second point cloud map with all point clouds in the first point cloud map.
Optionally, before the first local point cloud and the second local point cloud containing the common area are stitched and matched to obtain the first pose transformation matrix, the method comprises:
determining a target area, extracting all point clouds of the target area in the first point cloud map as the first local point cloud, and extracting all point clouds of the target area in the second point cloud map as the second local point cloud.
Optionally, the determining of a target area, the extracting of the first local point cloud of the target area in the first point cloud map, and the extracting of the second local point cloud of the target area in the second point cloud map include:
the first point cloud map and the second point cloud map both containing a description of the same environment, which serves as the common area;
and selecting a map area containing the common area as the target area through a preset shape frame.
Optionally, the projecting of all point clouds in the second point cloud map into the reference coordinate system of the point clouds in the first point cloud map through the first pose transformation matrix includes:
converting all point clouds in the second point cloud map from the second coordinate system of the second point cloud map to the reference coordinate system of the first point cloud map through the first pose transformation matrix.
Optionally, the projecting of all point clouds in the second point cloud map into the reference coordinate system of the point clouds in the first point cloud map again through the second pose transformation matrix includes:
updating the coordinates of all point clouds in the second point cloud map under the reference coordinate system through the second pose transformation matrix.
Optionally, the fusing of all point clouds in the re-projected second point cloud map with all point clouds in the first point cloud map further includes:
fusing all point clouds in the first point cloud map with all the updated point clouds of the second point cloud map under the reference coordinate system.
Optionally, after the fusing of all point clouds in the re-projected second point cloud map with all point clouds in the first point cloud map, the method further comprises:
rendering the fused point cloud map to obtain a rasterized laser map.
Optionally, after the rendering of the fused point cloud map to obtain the rasterized laser map, the method further comprises:
determining a corresponding erase area in the laser map through a preset shape frame according to a map reconstruction requirement, and deleting all point clouds in the erase area;
and determining a newly built map corresponding to the map reconstruction requirement, and fusing the newly built map into the laser map after the deletion.
The invention also provides a laser point cloud map fusion device, which comprises a memory, a processor, and a computer program stored on the memory and executable on the processor; when the computer program is executed by the processor, the laser point cloud map fusion method described above is implemented.
The invention also provides a computer readable storage medium, which stores a laser point cloud map fusion program; when the laser point cloud map fusion program is executed by a processor, the steps of the laser point cloud map fusion method described above are implemented.
The method has the advantages that a first pose transformation matrix is obtained by stitching and matching the first local point cloud and the second local point cloud that contain the common area; all point clouds in the second point cloud map are projected into the reference coordinate system of the point clouds in the first point cloud map through the first pose transformation matrix; all point clouds in the projected second point cloud map and all point clouds in the first point cloud map are matched and stitched again to obtain a second pose transformation matrix; and all point clouds in the second point cloud map are projected into the reference coordinate system again through the second pose transformation matrix, and all point clouds in the re-projected second point cloud map are fused with all point clouds in the first point cloud map. An efficient and accurate map fusion scheme is realized, the fault tolerance of the map building process is improved, and iterative updating of each area of the map is facilitated.
Drawings
The invention will be further described with reference to the accompanying drawings and examples, in which:
fig. 1 is a first flowchart of a laser point cloud map fusion method according to an embodiment of the present invention.
Detailed Description
It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
In the following description, suffixes such as "module", "component", or "unit" are used to denote elements only to facilitate the description of the present invention and have no specific meaning in themselves; thus "module", "component", and "unit" may be used interchangeably.
Example one
Fig. 1 is a first flowchart of a laser point cloud map fusion method according to an embodiment of the present invention. The embodiment provides a map fusion method comprising the following steps:
S1, stitching and matching a first local point cloud and a second local point cloud that contain a common area to obtain a first pose transformation matrix;
S2, projecting all point clouds in the second point cloud map into a reference coordinate system of the point clouds in the first point cloud map through the first pose transformation matrix;
S3, matching and stitching all point clouds in the projected second point cloud map with all point clouds in the first point cloud map again to obtain a second pose transformation matrix;
and S4, projecting all point clouds in the second point cloud map into the reference coordinate system of the point clouds in the first point cloud map again through the second pose transformation matrix, and fusing all point clouds in the re-projected second point cloud map with all point clouds in the first point cloud map.
In this embodiment, it should first be explained that, for a map with large coverage, or when large-scale map building is required, small maps each covering a certain area are built according to the local geographic characteristics, or map units of a certain scale are built, and these small maps or map units are then fused to obtain map information with complete coverage. In the following embodiments, a first map and a second map that have been drawn separately are assumed, and the specific manner in which the second map is merged into the first map is described in detail.
Specifically, a first local point cloud and a second local point cloud containing a common area are first stitched and matched to obtain a first pose transformation matrix. The separately drawn first map and second map are adjacent and share a certain common area. On this basis, a first local point cloud containing the common area is selected in the first map, and a second local point cloud containing the common area is selected in the second map, so that the first local point cloud is established under a first coordinate system and the second local point cloud under a second coordinate system different from the first. Since the two local point clouds both contain the predetermined common area but lie in different coordinate systems, a stitching and matching operation on them yields a pose transformation matrix suitable for coordinate conversion, referred to herein as the first pose transformation matrix.
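The patent does not name the matching algorithm used to obtain the pose transformation matrix; scan-matching methods such as ICP are typical in practice. As a hedged sketch, the core sub-step — recovering the rigid transform that best aligns two sets of corresponding points — can be solved in closed form with the Kabsch/SVD method:

```python
import numpy as np

def rigid_transform(src, dst):
    """Least-squares rotation R and translation t mapping src onto dst
    (Kabsch/SVD). src, dst: (N, d) arrays of corresponding points.
    Returns a (d+1) x (d+1) homogeneous pose transformation matrix."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)          # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # guard against reflections
    D = np.eye(src.shape[1])
    D[-1, -1] = d
    R = Vt.T @ D @ U.T
    t = c_dst - R @ c_src
    T = np.eye(src.shape[1] + 1)
    T[:-1, :-1], T[:-1, -1] = R, t
    return T

# toy check: rotate a 2-D triangle by 90 degrees, translate, and recover the pose
theta = np.pi / 2
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
src = np.array([[0., 0.], [1., 0.], [0., 2.]])
dst = src @ R_true.T + np.array([3., 1.])
T = rigid_transform(src, dst)
```

A full ICP loop would alternate nearest-neighbour correspondence search with this closed-form solve until convergence; only the solve is shown here.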
Specifically, after the first pose transformation matrix is obtained, all point clouds in the second point cloud map are projected into the reference coordinate system of the point clouds in the first point cloud map through the first pose transformation matrix; all point clouds in the projected second point cloud map and all point clouds in the first point cloud map are then matched and stitched again to obtain a second pose transformation matrix. In this embodiment, once the first pose transformation matrix suitable for coordinate conversion is obtained, a coordinate conversion operation is performed on all laser point clouds in the second map: all laser point clouds in the second map, originally established under the second coordinate system, are converted into the reference coordinate system corresponding to the first coordinate system, thereby realizing the projection of the second point cloud map.
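Projecting every point of the second map into the first map's reference frame is, in matrix terms, a single homogeneous-coordinate multiplication. A minimal sketch — the 4×4 pose matrix below is an assumed example value, not taken from the patent:

```python
import numpy as np

def project(points, T):
    """Apply a 4x4 homogeneous pose transformation matrix T to an (N, 3)
    point cloud, returning the points expressed in the target frame."""
    homo = np.hstack([points, np.ones((points.shape[0], 1))])  # (N, 4)
    return (homo @ T.T)[:, :3]

# assumed example pose: shift map2's cloud 2 m along x into map1's frame
T1 = np.eye(4)
T1[0, 3] = 2.0
map2_points = np.array([[0., 0., 0.], [1., 1., 0.]])
projected = project(map2_points, T1)   # now in map1's reference coordinate system
```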
Specifically, after the first projection of all point clouds in the second point cloud map is completed, all point clouds in the second point cloud map are projected into the reference coordinate system of the point clouds in the first point cloud map again through the second pose transformation matrix, and all point clouds in the re-projected second point cloud map are fused with all point clouds in the first point cloud map. The stitching and matching operation that yields the second pose transformation matrix takes as input all laser point clouds of the second map and all laser point clouds of the first map, both now expressed in the first coordinate system, so the resulting second pose transformation matrix has higher conversion accuracy than the first. On this basis, to improve the accuracy of map fusion, this embodiment projects the second point cloud map into the reference coordinate system once more through the second pose transformation matrix, completing the conversion and update of the coordinates of all laser point clouds in the second map; finally, all the updated laser point clouds of the second map are inserted among the laser point clouds of the first map, giving a new map in which the first map and the second map are fused.
In this embodiment, optionally, before the first local point cloud and the second local point cloud are stitched and matched, one or more target areas are determined; in the first point cloud map, all point clouds of the target areas are extracted as the first local point cloud, and in the second point cloud map, all point clouds of the same target areas are extracted as the second local point cloud.
In this embodiment, optionally, the first point cloud map and the second point cloud map both contain a description of the same environment, which serves as the common area. The determined target area or areas include the common area produced where the first map intersects the second map. It can be understood that, if the intersection of the first map and the second map produces several independent common areas, target areas containing one or more of these common areas are chosen as point cloud extraction areas according to the actual stitching and matching requirements. Specifically, the common area of the first and second point cloud maps is determined first, and a map area containing the common area is then framed by a preset shape frame as the target area, where the preset shape frame includes, but is not limited to, square frames, circular frames, and other frame types.
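Extracting a local point cloud inside a rectangular target frame reduces to a coordinate mask. A sketch assuming an axis-aligned box frame (the coordinates below are illustrative; a circular frame would use a distance test instead):

```python
import numpy as np

def select_region(points, xy_min, xy_max):
    """Return the points of an (N, 3) cloud whose x-y coordinates fall
    inside an axis-aligned rectangular frame (the 'preset shape frame')."""
    mask = np.all((points[:, :2] >= xy_min) & (points[:, :2] <= xy_max), axis=1)
    return points[mask]

cloud = np.array([[0.5, 0.5, 0.], [5.0, 5.0, 0.], [1.5, 0.2, 0.]])
# frame the assumed common area [0, 2] x [0, 2] to get a local point cloud
local = select_region(cloud, xy_min=[0., 0.], xy_max=[2., 2.])
```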
In this embodiment, optionally, all point clouds in the second point cloud map are converted from the second coordinate system of the second point cloud map to the reference coordinate system of the first point cloud map through the first pose transformation matrix; or, if the first map is to be fused into the second map, all point clouds in the first point cloud map are converted from the first coordinate system of the first point cloud map to the reference coordinate system of the second point cloud map through the first pose transformation matrix.
In this embodiment, optionally, all point clouds of the second point cloud map under the first coordinate system are updated through the second pose transformation matrix; or, if the first map is to be fused into the second map, all point clouds of the first point cloud map under the second coordinate system are updated through the second pose transformation matrix.
In this embodiment, optionally, the coordinates of all point clouds in the second point cloud map under the reference coordinate system are updated through the second pose transformation matrix; or, if the first map is to be fused into the second map, the coordinates of all point clouds in the first point cloud map under the reference coordinate system are updated through the second pose transformation matrix.
In this embodiment, optionally, the fused point cloud map is rendered to obtain a rasterized laser map.
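Rendering the fused cloud into a rasterized laser map can be sketched as a binning step. The resolution and the occupancy rule below (any cell hit by at least one laser point is marked occupied) are simplifying assumptions, not details from the patent:

```python
import numpy as np

def rasterize(points, resolution, size):
    """Render a fused 2-D point cloud into a binary occupancy grid:
    any cell containing at least one laser point is marked occupied (1)."""
    rows, cols = size
    grid = np.zeros((rows, cols), dtype=np.uint8)
    idx = np.floor(points[:, :2] / resolution).astype(int)  # (col, row) indices
    valid = ((idx[:, 0] >= 0) & (idx[:, 0] < cols) &
             (idx[:, 1] >= 0) & (idx[:, 1] < rows))
    idx = idx[valid]                       # drop points outside the map bounds
    grid[idx[:, 1], idx[:, 0]] = 1         # index as (row = y, col = x)
    return grid

fused = np.array([[0.12, 0.07], [0.95, 0.85], [-1.0, 0.0]])
grid = rasterize(fused, resolution=0.1, size=(10, 10))
```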
In this embodiment, optionally, a corresponding erase area is determined in the laser map through a preset shape frame according to a map reconstruction requirement, and all point clouds in the erase area are deleted; then, a newly built map corresponding to the map reconstruction requirement is determined and fused into the laser map after the deletion.
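The erase-and-rebuild step can likewise be sketched with a mask and a concatenation; the region bounds and point values below are illustrative assumptions:

```python
import numpy as np

def erase_region(points, xy_min, xy_max):
    """Delete every point whose x-y coordinates fall inside the erase
    frame, keeping the rest of the map unchanged."""
    inside = np.all((points[:, :2] >= xy_min) & (points[:, :2] <= xy_max), axis=1)
    return points[~inside]

old_map = np.array([[0.5, 0.5, 0.], [5., 5., 0.], [6., 6., 0.]])
# erase the stale area [4, 7] x [4, 7] ...
kept = erase_region(old_map, xy_min=[4., 4.], xy_max=[7., 7.])
# ... then fuse the newly built map of that area into the deleted map
new_patch = np.array([[5.1, 5.1, 0.]])
rebuilt = np.vstack([kept, new_patch])
```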
The method has the advantages that the first pose transformation matrix is obtained by stitching and matching the first local point cloud and the second local point cloud that contain the common area; all point clouds in the second point cloud map are projected into the reference coordinate system of the point clouds in the first point cloud map through the first pose transformation matrix; all point clouds in the projected second point cloud map and all point clouds in the first point cloud map are matched and stitched again to obtain a second pose transformation matrix; and all point clouds in the second point cloud map are projected into the reference coordinate system again through the second pose transformation matrix, and all point clouds in the re-projected second point cloud map are fused with all point clouds in the first point cloud map. An efficient and accurate map fusion scheme is realized, the fault tolerance of the map building process is improved, and iterative updating of each area of the map is facilitated.
Example two
Based on the above embodiment, this embodiment describes in detail the specific conversion process of fusing the second map (map2) into the first map (map1):
First, a part of the point cloud is selected from map1 — most or all of it lying in the area where map1 and map2 overlap — and the selected points are unified into a coordinate system O′i.

The relationship between the coordinate system O′i and the original coordinate system Oi of map1 is:

O′i = Ti * Oi

Then, in the same area of map2, a part of the point cloud is likewise selected and unified into a coordinate system O′j.

The relationship between the coordinate system O′j and the original coordinate system Oj of map2 is:

O′j = Tj * Oj

The two pieces of laser point cloud are stitched and matched to obtain a pose transformation matrix Tij.

Then, all point clouds of map2 are converted into the original coordinate system Oi of map1; the conversion equation is:

P′i = Ti⁻¹ * Tij * Tj * Pj

where:

Pj is the laser point cloud coordinates of map2 under the Oj coordinate system;

P′i is the laser point cloud coordinates of map2 under the Oi coordinate system.

Then, the P′i point cloud and the laser point cloud coordinates of map1 under the Oi coordinate system are matched and stitched again to obtain a more accurate pose transformation matrix.

Finally, all P′i point clouds are pose-converted with this more accurate pose transformation matrix to obtain P″i, where P″i is the updated, more accurate laser point cloud coordinates of map2 under the Oi coordinate system. Inserting P″i into map1 under the same Oi coordinate system yields the laser point cloud map after map1 and map2 are fused.
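The chain of conversions in this example can be checked numerically. The sketch below works in 2-D for brevity, and the overall composition T = inv(Ti) @ Tij @ Tj mapping Oj into Oi is a reconstruction inferred from the surrounding definitions (the original equation is not reproduced in this text), so treat it as an assumption:

```python
import numpy as np

def se2(theta, tx, ty):
    """Homogeneous 2-D pose matrix from a rotation angle and a translation."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, tx],
                     [s,  c, ty],
                     [0., 0., 1.]])

Ti  = se2(0.0, 1.0, 0.0)        # map1: Oi  -> O'i  (assumed example values)
Tj  = se2(0.0, 0.0, 2.0)        # map2: Oj  -> O'j
Tij = se2(np.pi / 2, 0.5, 0.0)  # from stitching the two local clouds: O'j -> O'i

# assumed first-stage composition: express map2 points directly in Oi
T_first = np.linalg.inv(Ti) @ Tij @ Tj

Pj = np.array([1.0, 0.0, 1.0])  # a map2 point in Oj, homogeneous coordinates
Pi = T_first @ Pj               # the same point expressed in Oi
```

The second-stage refinement would re-run the matching of the sketch after L118 on the full clouds and apply the refined matrix the same way.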
EXAMPLE III
Based on the above embodiment, the present invention further provides a laser point cloud map fusion device, which includes a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the computer program, when executed by the processor, implements the laser point cloud map fusion method as described in any one of the above.
It should be noted that the device embodiment and the method embodiment belong to the same concept, and specific implementation processes thereof are detailed in the method embodiment, and technical features in the method embodiment are correspondingly applicable in the device embodiment, which is not described herein again.
Example four
Based on the above embodiment, the present invention further provides a computer-readable storage medium, where a laser point cloud map fusion program is stored, and when the laser point cloud map fusion program is executed by a processor, the steps of the laser point cloud map fusion method described in any one of the above are implemented.
It should be noted that the media embodiment and the method embodiment belong to the same concept, and specific implementation processes thereof are detailed in the method embodiment, and technical features in the method embodiment are correspondingly applicable in the media embodiment, which is not described herein again.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present invention.
While the present invention has been described with reference to the embodiments shown in the drawings, the present invention is not limited to the embodiments, which are illustrative and not restrictive, and it will be apparent to those skilled in the art that various changes and modifications can be made therein without departing from the spirit and scope of the invention as defined in the appended claims.
Claims (10)
1. A laser point cloud map fusion method is characterized by comprising the following steps:
stitching and matching a first local point cloud and a second local point cloud that contain a common area to obtain a first pose transformation matrix;
projecting all point clouds in the second point cloud map into a reference coordinate system of the point clouds in the first point cloud map through the first pose transformation matrix;
matching and stitching all point clouds in the projected second point cloud map with all point clouds in the first point cloud map again to obtain a second pose transformation matrix;
and projecting all point clouds in the second point cloud map into the reference coordinate system of the point clouds in the first point cloud map again through the second pose transformation matrix, and fusing all point clouds in the re-projected second point cloud map with all point clouds in the first point cloud map.
2. The laser point cloud map fusion method of claim 1, wherein before the stitching and matching of the first local point cloud and the second local point cloud containing the common area to obtain the first pose transformation matrix, the method comprises:
determining a target area, extracting all point clouds in the target area as the first local point cloud in the first point cloud map, and extracting all point clouds in the target area as the second local point cloud in the second point cloud map.
3. The laser point cloud map fusion method of claim 2, wherein the determining of a target area, the extracting of the first local point cloud of the target area in the first point cloud map, and the extracting of the second local point cloud of the target area in the second point cloud map comprise:
the first point cloud map and the second point cloud map both containing a description of the same environment, which serves as the common area;
and selecting a map area containing the common area as the target area through a preset shape frame.
4. The laser point cloud map fusion method of claim 1, wherein the projecting of all point clouds in the second point cloud map into the reference coordinate system of the point clouds in the first point cloud map through the first pose transformation matrix comprises:
converting all point clouds in the second point cloud map from a second coordinate system of the second point cloud map to the reference coordinate system of the first point cloud map through the first pose transformation matrix.
5. The laser point cloud map fusion method of claim 1, wherein the re-projecting all point clouds in the second point cloud map to a reference coordinate system of point clouds in the first point cloud map by the second pose transformation matrix comprises:
and updating the coordinates of all point clouds in the second point cloud map under the reference coordinate system through the second pose transformation matrix.
6. The laser point cloud map fusion method of claim 5, wherein fusing all point clouds in the second point cloud map after the re-projection with all point clouds in the first point cloud map further comprises:
and fusing all point clouds in the first point cloud map with all point clouds updated in the second point cloud map under the reference coordinate system.
7. The laser point cloud map fusion method of claim 1, wherein after the fusing of all point clouds in the re-projected second point cloud map with all point clouds in the first point cloud map, the method further comprises:
and rendering the fused point cloud map to obtain a rasterized laser map.
8. The laser point cloud map fusion method of claim 7, wherein after the rendering of the fused point cloud map to obtain the rasterized laser map, the method further comprises:
determining a corresponding erasing area in the laser map through a preset shape frame according to the map reconstruction requirement, and deleting all point clouds in the erasing area;
and determining a newly-built map corresponding to the map reconstruction requirement, and fusing the newly-built map to the deleted laser map.
9. A laser point cloud map fusion apparatus, comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the computer program, when executed by the processor, implements the laser point cloud map fusion method of any one of claims 1 to 8.
10. A computer-readable storage medium, characterized in that the computer-readable storage medium has stored thereon a laser point cloud map fusion program which, when executed by a processor, implements the steps of the laser point cloud map fusion method of any one of claims 1 to 8.
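Outside the claim language, the coordinate conversion of claim 4 is, in conventional terms, a rigid-body transform: each point of the second map is expressed in homogeneous coordinates and multiplied by the 4x4 pose transformation matrix. A minimal sketch of that step (NumPy; the function name and the example matrix values are illustrative, not taken from the patent):

```python
import numpy as np

def transform_point_cloud(points, pose):
    """Project an (N, 3) point cloud into another frame via a 4x4 pose matrix."""
    # Append a homogeneous coordinate, apply the transform, then drop it again.
    homogeneous = np.hstack([points, np.ones((points.shape[0], 1))])
    return (homogeneous @ pose.T)[:, :3]

# Example: a pose that translates the second map by (1, 2, 0)
# into the first map's reference frame.
pose = np.eye(4)
pose[:3, 3] = [1.0, 2.0, 0.0]
cloud = np.array([[0.0, 0.0, 0.0], [1.0, 1.0, 1.0]])
print(transform_point_cloud(cloud, pose))  # [[1. 2. 0.] [2. 3. 1.]]
```

The re-projection of claim 5 is the same operation with the second (refined) pose matrix in place of the first.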
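The patent does not specify the rendering procedure of claim 7; one common reading of "rasterized laser map" is a 2-D occupancy grid in which each fused point marks its cell as occupied. A hedged sketch under that assumption (the cell resolution and the choice of the minimum corner as grid origin are assumptions):

```python
import numpy as np

def rasterize(points, resolution=0.5):
    """Render an (N, 3) point cloud into a 2-D occupancy grid (1 = occupied)."""
    xy = points[:, :2]                             # project to the ground plane
    origin = xy.min(axis=0)                        # assumed grid origin: min corner
    cells = np.floor((xy - origin) / resolution).astype(int)
    grid = np.zeros(cells.max(axis=0) + 1, dtype=np.uint8)
    grid[cells[:, 0], cells[:, 1]] = 1             # mark occupied cells
    return grid, origin

cloud = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
grid, origin = rasterize(cloud)
print(grid.shape)  # (3, 3)
```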
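The erase-and-refuse step of claim 8 can be read as: drop every point of the fused map that falls inside the user-designated region, then concatenate the newly built map. A sketch using an axis-aligned box as the "preset shape frame" (the box form is an assumption; the claim requires only a preset shape):

```python
import numpy as np

def erase_and_fuse(laser_map, box_min, box_max, new_map):
    """Delete points inside the erase box, then fuse in the rebuilt map."""
    inside = np.all((laser_map >= box_min) & (laser_map <= box_max), axis=1)
    kept = laser_map[~inside]            # points outside the erasing area survive
    return np.vstack([kept, new_map])    # fused result

old = np.array([[0.0, 0.0, 0.0], [5.0, 5.0, 0.0]])
new = np.array([[5.1, 5.0, 0.0]])
fused = erase_and_fuse(old, np.array([4.0, 4.0, -1.0]), np.array([6.0, 6.0, 1.0]), new)
print(fused)  # [[0.  0.  0. ] [5.1 5.  0. ]]
```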
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911368775.3A CN113052761B (en) | 2019-12-26 | 2019-12-26 | Laser point cloud map fusion method, device and computer readable storage medium |
PCT/CN2020/133821 WO2021129349A1 (en) | 2019-12-26 | 2020-12-04 | Laser point cloud map merging method, apparatus, and computer readable storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113052761A true CN113052761A (en) | 2021-06-29 |
CN113052761B CN113052761B (en) | 2024-01-30 |
Family
ID=76505519
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201911368775.3A Active CN113052761B (en) | 2019-12-26 | 2019-12-26 | Laser point cloud map fusion method, device and computer readable storage medium |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN113052761B (en) |
WO (1) | WO2021129349A1 (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114359463A (en) * | 2022-03-20 | 2022-04-15 | 宁波博登智能科技有限公司 | Point cloud marking system and method for ground identification |
WO2023040137A1 (en) * | 2021-09-16 | 2023-03-23 | 上海仙途智能科技有限公司 | Data processing |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113658194B (en) * | 2021-07-23 | 2024-06-07 | 佛山缔乐视觉科技有限公司 | Point cloud splicing method and device based on reference object and storage medium |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103761721A (en) * | 2013-12-30 | 2014-04-30 | 西北工业大学 | Stereoscopic vision fast image stitching method suitable for space tether-robot |
EP2894602A1 (en) * | 2014-01-13 | 2015-07-15 | Honeywell International Inc. | A method of using image warping for geo-registration feature matching in vision-aided positioning |
CN106339981A (en) * | 2016-08-25 | 2017-01-18 | 安徽协创物联网技术有限公司 | Panorama stitching method |
CN106504320A (en) * | 2016-11-02 | 2017-03-15 | 华东师范大学 | A kind of based on GPU and the real-time three-dimensional reconstructing method towards depth image |
CN108257089A (en) * | 2018-01-12 | 2018-07-06 | 北京航空航天大学 | A kind of method of the big visual field video panorama splicing based on iteration closest approach |
CN109064506A (en) * | 2018-07-04 | 2018-12-21 | 百度在线网络技术(北京)有限公司 | Accurately drawing generating method, device and storage medium |
CN110243370A (en) * | 2019-05-16 | 2019-09-17 | 西安理工大学 | A kind of three-dimensional semantic map constructing method of the indoor environment based on deep learning |
Also Published As
Publication number | Publication date |
---|---|
WO2021129349A1 (en) | 2021-07-01 |
CN113052761B (en) | 2024-01-30 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN113052761B (en) | Laser point cloud map fusion method, device and computer readable storage medium | |
CN110163064B (en) | Method and device for identifying road marker and storage medium | |
CN112764053B (en) | Fusion positioning method, device, equipment and computer readable storage medium | |
CN107917712B (en) | Synchronous positioning and map construction method and device | |
US10354433B2 (en) | Method and apparatus for generating an abstract texture for a building facade or model | |
EP4116462A2 (en) | Method and apparatus of processing image, electronic device, storage medium and program product | |
US11209277B2 (en) | Systems and methods for electronic mapping and localization within a facility | |
Kim et al. | UAV-UGV cooperative 3D environmental mapping | |
Sohn et al. | An implicit regularization for 3D building rooftop modeling using airborne lidar data | |
US20190051029A1 (en) | Annotation Generation for an Image Network | |
WO2022237821A1 (en) | Method and device for generating traffic sign line map, and storage medium | |
KR101767006B1 (en) | The method and apparatus of updated object detection of the construction layers using uav image | |
CN112700497A (en) | Loop detection method, device, equipment and computer readable storage medium | |
CN115375868A (en) | Map display method, remote sensing map display method, computing device and storage medium | |
CN113932796A (en) | High-precision map lane line generation method and device and electronic equipment | |
CN116958478B (en) | City building model programming generation method, device, equipment and storage medium | |
CN111132181B (en) | Ray tracing technology method and device applied to wireless communication network | |
CN111489421A (en) | Satellite map segmentation building recombination method | |
CN113656633B (en) | Geographical space information transmission method and device based on OFD (office file format) file | |
Fu et al. | Costmap construction and pseudo-lidar conversion method of mobile robot based on monocular camera | |
CN116907471A (en) | Multi-element joint map representation and updating method and device based on SLAM | |
CN117611759B (en) | Three-dimensional model-based scoring map generation method, device, terminal and storage medium | |
CN117433509A (en) | Photovoltaic regional road network manufacturing method, system and photovoltaic regional road network fault navigation method | |
WO2023053485A1 (en) | Information processing device, information processing method, and information processing program | |
CN114838729A (en) | Path planning method, device and equipment |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
CB02 | Change of applicant information | Address after: 518000 Room 401, block D, building 7, Shenzhen International Innovation Valley, Dashi 1st Road, Xili community, Xili street, Nanshan District, Shenzhen, Guangdong. Applicant after: JUXING TECHNOLOGY (SHENZHEN) Co.,Ltd. Address before: 518000 building 101, building R3b, Gaoxin industrial village, No.018, Gaoxin South 7th Road, community, high tech Zone, Yuehai street, Nanshan District, Shenzhen City, Guangdong Province. Applicant before: JUXING TECHNOLOGY (SHENZHEN) Co.,Ltd. |
GR01 | Patent grant | ||