CN117949968B - Laser radar SLAM positioning method, device, computer equipment and storage medium - Google Patents


Info

Publication number
CN117949968B
CN117949968B CN202410348136.5A
Authority
CN
China
Prior art keywords
point cloud
identification
points
laser
slam
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202410348136.5A
Other languages
Chinese (zh)
Other versions
CN117949968A (en)
Inventor
李广来
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Qiyu Innovation Technology Co ltd
Original Assignee
Shenzhen Qiyu Innovation Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Qiyu Innovation Technology Co ltd filed Critical Shenzhen Qiyu Innovation Technology Co ltd
Priority to CN202410348136.5A priority Critical patent/CN117949968B/en
Publication of CN117949968A publication Critical patent/CN117949968A/en
Application granted granted Critical
Publication of CN117949968B publication Critical patent/CN117949968B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems

Landscapes

  • Length Measuring Devices By Optical Means (AREA)
  • Optical Radar Systems And Details Thereof (AREA)

Abstract

The invention relates to a laser radar SLAM positioning method, device, equipment and storage medium. A target scene is scanned by a laser radar and a vision sensor to obtain a laser SLAM point cloud containing a visual image, identification points being arranged in the target scene. When the scene degradation degree of the target scene meets a preset condition, semantic recognition processing is performed on the laser SLAM point cloud to identify the identification points, extract their three-dimensional coordinates, and determine their prior features. The feature points of the laser SLAM point cloud and the information of the feature points are then acquired, and on that basis pose optimization is performed in combination with the three-dimensional coordinates and prior features of the identification points to obtain positioning data, effectively solving the technical problems of difficult feature matching and unstable positioning that laser radar SLAM algorithms face in low-texture scenes such as tunnels and long corridors.

Description

Laser radar SLAM positioning method, device, computer equipment and storage medium
Technical Field
The present invention relates to the field of data processing technologies, and in particular, to a laser radar SLAM positioning method, a device, a computer device, and a storage medium.
Background
Lidar SLAM refers to simultaneous localization and mapping (SLAM) using a lidar sensor: the lidar scans the surroundings while the system simultaneously determines its own position in an unknown environment. By continuously acquiring three-dimensional information about the environment, the SLAM algorithm can build a map and update the position of the robot, vehicle, or other device on that map. It is a key technology for autonomous navigation and environment perception and is widely applied in fields such as autonomous vehicles, robots, and unmanned aerial vehicles.
However, in low-texture scenes such as tunnels and long corridors, the lidar SLAM algorithm may face challenges because the environment lacks obvious features, which can lead to the following problems:
1. Difficult feature matching: stable feature points are hard to extract in areas lacking texture. In lidar SLAM, feature points are typically used to match laser data across time steps to achieve localization and mapping. In low-texture areas it may be difficult to find distinctive features, resulting in matching failures.
2. Unstable positioning: areas lacking texture provide little stable positioning information. In SLAM, the robot estimates its own position through feature matching; in a texture-poor area the positioning may become unstable, causing the robot pose to drift or the position to be estimated inaccurately.
Disclosure of Invention
The main object of the present invention is to provide a laser radar SLAM positioning method, a device, a computer device and a storage medium that perform pose optimization on the basis of the feature points of the laser SLAM point cloud in combination with the three-dimensional coordinates and prior features of identification points, thereby effectively solving the technical problems of difficult feature matching and unstable positioning that laser radar SLAM algorithms face in low-texture scenes such as tunnels and long corridors.
To achieve the above object, the present invention provides a laser radar SLAM positioning method comprising the following steps: scanning a target scene with a laser radar and a vision sensor to obtain a laser SLAM point cloud containing a visual image, identification points being arranged in the target scene; determining a scene degradation degree of the target scene based on the laser SLAM point cloud containing the visual image; when the scene degradation degree of the target scene meets a preset condition, performing semantic recognition processing on the laser SLAM point cloud to identify the identification points in the laser SLAM point cloud, extracting the three-dimensional coordinates of the identification points, and determining the prior features of the identification points; and acquiring the feature points of the laser SLAM point cloud and the information of the feature points, and performing pose optimization on the basis of the feature points and their information in combination with the three-dimensional coordinates and prior features of the identification points to obtain positioning data.
Further, performing semantic recognition processing on the laser SLAM point cloud to identify an identification point in the laser SLAM point cloud, including: and performing semantic recognition processing on the laser SLAM point cloud through a deep learning semantic segmentation network to recognize identification points in the laser SLAM point cloud.
Further, based on the feature points and the information of the feature points, pose optimization is performed by combining the three-dimensional coordinates of the identification points and the prior features to obtain positioning data, including: acquiring a weight value of the identification point in pose optimization; and based on the weight value, on the basis of the characteristic points and the information of the characteristic points, combining the three-dimensional coordinates of the identification points and the prior characteristics, and performing pose optimization to obtain positioning data.
Further, obtaining the weight value of the identification point in pose optimization comprises the following steps: and determining a weight value of the identification point in pose optimization based on the scene degradation degree of the target scene, wherein the scene degradation degree is in direct proportion to the weight value.
Further, the shape of the identification point includes: triangle, square, round, five-pointed star.
Further, the target scene includes a tunnel scene.
Further, the identification point is arranged at any one position of the ground, the top and the two sides in the tunnel scene.
The invention further provides a laser radar SLAM positioning device, comprising: a scanning unit for scanning a target scene with a laser radar and a vision sensor to obtain a laser SLAM point cloud containing a visual image, identification points being arranged in the target scene; a determining unit for determining a scene degradation degree of the target scene based on the laser SLAM point cloud containing the visual image; an identification unit for performing, when the scene degradation degree of the target scene meets a preset condition, semantic recognition processing on the laser SLAM point cloud to identify the identification points in the laser SLAM point cloud, extract their three-dimensional coordinates, and determine their prior features; and an acquisition unit for acquiring the feature points of the laser SLAM point cloud and the information of the feature points, and performing pose optimization on that basis in combination with the three-dimensional coordinates and prior features of the identification points to obtain positioning data.
The invention also provides a computer device comprising a memory and a processor, the memory having stored therein a computer program, the processor implementing the steps of any of the methods described above when the computer program is executed.
The invention also provides a computer readable storage medium having stored thereon a computer program which when executed by a processor performs the steps of the method of any of the preceding claims.
The laser radar SLAM positioning method, device, equipment and storage medium provided by the invention yield the following beneficial effects: 1. Enhanced positioning robustness: the identification points are often distinct features in the environment and can strengthen the robustness of the system's positioning, especially when laser point cloud matching is difficult. 2. Reduced positioning drift: the prior information of the identification points can serve as a strong constraint, and incorporating their position information into the optimization process corrects estimation errors more effectively.
Drawings
FIG. 1 is a schematic diagram showing steps of a laser radar SLAM positioning method according to an embodiment of the present invention;
FIG. 2 is a block diagram of a laser radar SLAM positioning device according to an embodiment of the present invention;
fig. 3 is a block diagram schematically illustrating a structure of a computer device according to an embodiment of the present invention.
The achievement of the objects, functional features and advantages of the present invention will be further described with reference to the accompanying drawings, in conjunction with the embodiments.
Detailed Description
The present invention will be described in further detail with reference to the drawings and examples, in order to make the objects, technical solutions and advantages of the present invention more apparent. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention.
The laser radar SLAM positioning method provided by the invention is executed by a computer device.
Referring to fig. 1, a flow chart of a laser radar SLAM positioning method provided by the invention includes the following steps:
S1, scanning a target scene through a laser radar and a vision sensor to obtain a laser SLAM point cloud containing a vision image, wherein identification points are arranged in the target scene.
By fusing the three-dimensional point cloud obtained by the laser radar with the visual image obtained by the vision sensor, a point cloud subgraph fusing laser information and visual information is generated, providing more comprehensive and accurate environment perception and improving the positioning and mapping performance of the laser radar SLAM system.
Illustrating: the method for fusing the laser information and the visual information can be realized by the following steps: 1, sensor synchronization to ensure that data of the lidar and data of the vision sensor are synchronized in time; 2, projecting the three-dimensional point cloud onto a visual image plane to obtain two-dimensional points under a camera coordinate system; 3. extracting features from the camera image, and establishing a corresponding relation between feature points in the three-dimensional point cloud and feature points in the visual image by matching the feature points; and 4, fusing the two-dimensional characteristic points of the visual image obtained by matching with the three-dimensional information of the three-dimensional point cloud to generate a laser SLAM point cloud fused with the laser information and the visual information.
Furthermore, in the field of lidar SLAM localization, arranging identification points in the target scene brings certain benefits, especially in challenging tunnel scenes: the identification points add environmental features and thereby improve the laser radar SLAM system's perception of the environment.
S2, determining the scene degradation degree of the target scene based on the laser SLAM point cloud containing the visual image.
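The patent does not spell out how the scene degradation degree is computed. One common proxy, sketched below under that assumption, is the eigenvalue spread of the point-cloud covariance: a long corridor concentrates variance along one axis, while a well-structured scene spreads it in all directions:

```python
import numpy as np

def scene_degradation(points):
    """Degradation score in [0, 1) from the point-cloud covariance.

    ratio = smallest / largest eigenvalue of the covariance; an isotropic
    scene gives a ratio near 1 (low degradation), while a corridor- or
    tunnel-like scene gives a ratio near 0 (high degradation).
    """
    cov = np.cov(points.T)                  # 3x3 sample covariance
    eigvals = np.linalg.eigvalsh(cov)       # ascending order
    ratio = eigvals[0] / eigvals[-1]
    return 1.0 - ratio

rng = np.random.default_rng(0)
# Corridor-like cloud: stretched 50x along one axis
corridor = rng.normal(size=(1000, 3)) * np.array([50.0, 1.0, 1.0])
# Room-like cloud: roughly isotropic
room = rng.normal(size=(1000, 3))
# corridor scores much higher degradation than room
```

A threshold on this score would then serve as the "preset condition" of step S3.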
And S3, under the condition that the scene degradation degree of the target scene meets the preset condition, carrying out semantic identification processing on the laser SLAM point cloud so as to identify the identification point in the laser SLAM point cloud, extracting the three-dimensional coordinates of the identification point, and determining the prior characteristic of the identification point.
In one embodiment, performing semantic recognition processing on the laser SLAM point cloud to identify an identification point in the laser SLAM point cloud includes: and performing semantic recognition processing on the laser SLAM point cloud through a deep learning semantic segmentation network to recognize identification points in the laser SLAM point cloud.
Illustrating: identifying identified points in the laser SLAM point cloud by deep learning semantic segmentation network may include the steps of: 1. collecting a dataset comprising visual images, the dataset comprising identification points with correct labels for use as training data for a deep learning semantic segmentation network; 2. training a deep learning semantic segmentation network by using the prepared data set, wherein the semantic segmentation network has the task of assigning each pixel of the input image to a specific semantic class, and one class corresponds to an identification point; 3. ensuring that the laser SLAM point cloud and the visual image are aligned in time and space in order to correspond semantic segmentation results obtained from the visual image to the laser SLAM point cloud; 4. applying the trained semantic segmentation network to a new visual image, and acquiring a semantic label of each pixel to generate a semantic segmentation map with the same size as the image; 5. projecting the laser SLAM point cloud onto a corresponding image plane to obtain a three-dimensional point cloud corresponding to the semantic segmentation map; 6. fusing the semantic label of each pixel in the semantic segmentation map with the three-dimensional coordinates of the corresponding point cloud, and screening out the points in the laser SLAM point cloud identified as the identification points according to semantic information, so as to obtain the identification points in the laser SLAM point cloud.
The three-dimensional coordinates of the points in the laser SLAM point cloud identified as identification points are taken as the three-dimensional coordinates of the identification points.
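Steps 5 and 6 above — looking up each projected point's semantic label and keeping only the marker-class hits — can be sketched as follows; the pixel coordinates, label map, and class index here are illustrative assumptions:

```python
import numpy as np

def marker_points_from_labels(points, uv, labels, marker_class=1):
    """Select the 3-D points whose projected pixel carries the marker label.

    points : (N, 3) LiDAR points
    uv     : (N, 2) integer pixel coordinates of each point (u=column, v=row),
             assumed precomputed by projecting the cloud onto the image plane
    labels : (H, W) semantic segmentation map
    """
    h, w = labels.shape
    u, v = uv[:, 0], uv[:, 1]
    inside = (u >= 0) & (u < w) & (v >= 0) & (v < h)   # points inside the image
    hits = np.zeros(points.shape[0], dtype=bool)
    hits[inside] = labels[v[inside], u[inside]] == marker_class
    return points[hits]

# Toy 4x4 label map with one marker pixel at row 1, column 2
labels = np.zeros((4, 4), dtype=int)
labels[1, 2] = 1
points = np.array([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]])
uv = np.array([[2, 1], [0, 0]])            # first point projects onto the marker pixel
markers = marker_points_from_labels(points, uv, labels)
# markers contains only the first point
```

The surviving 3-D coordinates are exactly the "three-dimensional coordinates of the identification points" referred to above.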
The identification points are specific points arranged in advance in the target scene to assist positioning and navigation. The prior features refer to information known before an actual measurement or localization is made; specifically, they comprise the position and pose at which the identification points are arranged in the target scene.
S4, acquiring characteristic points of the laser SLAM point cloud and information of the characteristic points, and carrying out pose optimization by combining three-dimensional coordinates and priori features of the identification points on the basis of the characteristic points and the information of the characteristic points to obtain positioning data.
Specifically, on the basis of the feature points of the laser SLAM point cloud and their information, pose optimization combining the three-dimensional coordinates and prior features of the identification points to obtain positioning data can be realized through the following steps: identifying feature points in the laser SLAM point cloud and matching them against the known identification points in the scene; analyzing the spatial relationship, including distance and direction, between the feature points and the identification points to determine their correspondence; using a filter (such as a Kalman filter or a particle filter) or another fusion algorithm to integrate the feature-point information with the identification-point data, combining information from different sources and improving the system's understanding of the scene; combining the three-dimensional coordinates and prior features (position and pose information) of the identification points with the feature-point information to form a comprehensive dataset for subsequent pose estimation; and performing an initial pose optimization estimate on the fused data to obtain positioning data.
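As a minimal illustration of fusing the two observation sources described above, the sketch below estimates a translation-only pose correction by weighted least squares over feature-point and identification-point residuals. A real SLAM back end would optimize a full 6-DoF pose; the function names, data, and weights here are assumptions for illustration:

```python
import numpy as np

def optimize_translation(feat_obs, feat_map, marker_obs, marker_map, w_marker=1.0):
    """Weighted least-squares translation from two residual sources.

    Each observed point should equal its map counterpart minus the sensor
    translation t; stacking both sources with weights gives a closed-form t
    (the weighted mean of the per-pair displacements).
    """
    residual_sets = [(feat_map - feat_obs, 1.0),        # laser feature points, weight 1
                     (marker_map - marker_obs, w_marker)]  # identification points
    num = sum(w * diffs.sum(axis=0) for diffs, w in residual_sets)
    den = sum(w * len(diffs) for diffs, w in residual_sets)
    return num / den

feat_obs = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
feat_map = feat_obs + np.array([2.0, 0.0, 0.0])         # true translation (2, 0, 0)
marker_obs = np.array([[5.0, 5.0, 0.0]])
marker_map = marker_obs + np.array([2.0, 0.0, 0.0])
t = optimize_translation(feat_obs, feat_map, marker_obs, marker_map, w_marker=3.0)
# Both sources agree, so t recovers the true translation (2, 0, 0)
```

When the sources disagree, the marker weight `w_marker` pulls the estimate toward the identification-point constraint, which is the role the prior features play in the optimization.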
In summary, the present invention performs pose optimization on the basis of the feature points and their information in combination with the three-dimensional coordinates and prior features of the identification points to obtain positioning data, thereby achieving the following technical effects: 1. Enhanced positioning robustness: the identification points are distinct features in the environment that provide stable landmark information and strengthen the system's positioning robustness, especially when laser point cloud matching is difficult. 2. Reduced positioning drift: the prior information of the identification points serves as a strong constraint, and incorporating their position information into the optimization process corrects errors in the robot's trajectory estimate more effectively.
Overall, the use of identification points yields a more robust and accurate laser radar SLAM positioning result.
In one embodiment, performing pose optimization on the basis of the feature points and their information, in combination with the three-dimensional coordinates and prior features of the identification points, to obtain positioning data includes: acquiring a weight value of the identification points in pose optimization; and, based on the weight value, performing pose optimization on the basis of the feature points and their information in combination with the three-dimensional coordinates and prior features of the identification points to obtain positioning data.
By introducing a weight value for the identification points, the relative contributions of the different observation sources to positioning (for example, the laser point cloud versus the identification points) can be balanced more effectively, preventing an excessive error from any single source from affecting the whole system and thereby enhancing system robustness.
In one embodiment, obtaining the weight value of the identification point in pose optimization includes: and determining a weight value of the identification point in pose optimization based on the scene degradation degree, wherein the scene degradation degree is proportional to the weight value.
The weight of the identification points is adjusted according to the actual state of the target scene: when the scene is heavily degraded and provides few reliable features of its own, the identification points play a larger role; when the scene provides rich features, the influence of the identification points is reduced. The system can thus respond more intelligently to changes in the environment, making the use of identification points more flexible and effective.
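A simple linear schedule captures this proportionality between degradation degree and weight; the weight bounds below are illustrative assumptions, not values from the patent:

```python
def marker_weight(degradation, w_min=0.1, w_max=5.0):
    """Map a scene degradation degree in [0, 1] linearly to a marker weight.

    Higher degradation (tunnel, long corridor) -> identification points are
    trusted more; in a feature-rich scene point-cloud matching dominates.
    w_min and w_max are illustrative bounds.
    """
    degradation = min(max(degradation, 0.0), 1.0)   # clamp to [0, 1]
    return w_min + (w_max - w_min) * degradation

# Heavily degraded scene -> near-maximal marker weight;
# feature-rich scene -> near-minimal marker weight.
w_tunnel = marker_weight(0.95)
w_room = marker_weight(0.05)
```

The resulting weight would then feed directly into the weighted pose optimization of the previous embodiment.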
In one embodiment, the shape of the identification point includes: triangle, square, round, five-pointed star.
In one embodiment, the target scene includes any one of a tunnel scene and a corridor scene.
Illustrating: the identification points can be arranged on at least any one position of the ground, the top and the two sides in the tunnel scene.
Illustrating: the identification points can be arranged on at least any one position of the ground, the top and the two sides in the corridor scene.
Referring to fig. 2, a schematic structural diagram of a laser radar SLAM positioning device according to the present invention includes:
The scanning unit 1 is used for scanning a target scene through a laser radar and a vision sensor to obtain a laser SLAM point cloud containing a vision image, wherein identification points are arranged in the target scene;
A determining unit 2, configured to determine a scene degradation degree of the target scene based on the laser SLAM point cloud including the visual image;
The identifying unit 3 is configured to perform semantic identification processing on the laser SLAM point cloud to identify an identification point in the laser SLAM point cloud, extract a three-dimensional coordinate of the identification point, and determine a priori feature of the identification point when a scene degradation degree of the target scene meets a preset condition;
And the acquisition unit 4 is used for acquiring the characteristic points of the laser SLAM point cloud and the information of the characteristic points, and carrying out pose optimization by combining the three-dimensional coordinates and the priori characteristics of the identification points on the basis of the information of the characteristic points and the characteristic points to obtain positioning data.
In this embodiment, for specific implementation of each unit in the above embodiment of the apparatus, please refer to the description in the above embodiment of the method, and no further description is given here.
Referring to fig. 3, in an embodiment of the present invention there is further provided a computer device, which may be a server and whose internal structure may be as shown in fig. 3. The computer device includes a processor, a memory, a display screen, an input device, a network interface, and a database connected by a system bus. The processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, computer programs, and a database. The internal memory provides an environment for the operation of the operating system and the computer programs in the non-volatile storage medium. The database of the computer device is used to store the data of this embodiment. The network interface of the computer device is used to communicate with an external terminal through a network connection. The computer program, when executed by the processor, implements the above method.
It will be appreciated by those skilled in the art that the architecture shown in fig. 3 is merely a block diagram of a portion of the architecture in connection with the present inventive arrangements and is not intended to limit the computer devices to which the present inventive arrangements are applicable.
An embodiment of the present invention also provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the above method. It is understood that the computer readable storage medium in this embodiment may be a volatile readable storage medium or a nonvolatile readable storage medium.
In summary, the method scans the target scene with a laser radar and a vision sensor to obtain a laser SLAM point cloud containing a visual image, identification points being arranged in the target scene; performs semantic recognition processing on the laser SLAM point cloud to identify the identification points and extract their three-dimensional coordinates; acquires the prior features of the identification points, comprising their positions and poses in the target scene; and performs pose optimization on the basis of the feature points and their information in combination with the three-dimensional coordinates and prior features of the identification points to obtain positioning data, effectively solving the technical problems of difficult feature matching and unstable positioning that laser radar SLAM algorithms face in low-texture scenes such as tunnels and long corridors.
The following beneficial effects are realized: 1. Enhanced positioning robustness: the identification points are often distinct features in the environment and strengthen the robustness of the system's positioning, especially when laser point cloud matching is difficult. 2. Reduced positioning drift: the prior information of the identification points serves as a strong constraint, and incorporating their position information into the optimization process corrects estimation errors more effectively.
Those skilled in the art will appreciate that implementing all or part of the above methods may be accomplished by a computer program stored on a non-transitory computer-readable storage medium which, when executed, may include the steps of the method embodiments described above. Any reference to memory, storage, database, or other medium provided by the present invention and used in embodiments may include non-volatile and/or volatile memory. Non-volatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory can include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, apparatus, article, or method that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, apparatus, article, or method. Without further limitation, an element introduced by the phrase "comprising a ..." does not exclude the presence of other identical elements in the process, apparatus, article, or method that comprises that element.
The foregoing description is only of the preferred embodiments of the present invention and is not intended to limit the scope of the invention, and all equivalent structures or equivalent processes using the descriptions and drawings of the present invention or direct or indirect application in other related technical fields are included in the scope of the present invention.

Claims (9)

1. The laser radar SLAM positioning method is characterized by comprising the following steps of:
scanning a target scene through a laser radar and a vision sensor to obtain a laser SLAM point cloud containing a vision image, wherein identification points are arranged in the target scene;
Determining a scene degradation degree of the target scene based on the laser SLAM point cloud containing the visual image;
when the scene degradation degree of the target scene meets a preset condition, performing semantic recognition processing on the laser SLAM point cloud to identify the identification points in the laser SLAM point cloud, extracting the three-dimensional coordinates of the identification points, and determining the prior features of the identification points;
acquiring feature points of the laser SLAM point cloud and information of the feature points, and performing pose optimization based on the feature points and the information of the feature points, in combination with the three-dimensional coordinates and the prior features of the identification points, to obtain positioning data;
wherein performing the semantic recognition processing on the laser SLAM point cloud to identify the identification points in the laser SLAM point cloud comprises: collecting a dataset of visual images containing correctly labelled identification points, for use as training data for a deep-learning semantic segmentation network; training the semantic segmentation network on the collected dataset, the task of the network being to assign each pixel of an input image to a specific semantic class, one class corresponding to the identification points; aligning the laser SLAM point cloud and the visual images in time and space; applying the trained semantic segmentation network to a new visual image and obtaining a semantic label for each pixel, so as to generate a semantic segmentation map of the same size as the image; projecting the laser SLAM point cloud onto the corresponding image plane to obtain the three-dimensional point cloud corresponding to the semantic segmentation map; and fusing the semantic label of each pixel in the semantic segmentation map with the three-dimensional coordinates of the corresponding point-cloud points, and screening out, according to the semantic information, the points in the laser SLAM point cloud recognized as identification points, thereby obtaining the identification points in the laser SLAM point cloud;
wherein the laser SLAM point cloud containing the visual image is obtained by the following steps: synchronizing the sensors so that the data of the lidar and the data of the vision sensor are synchronized in time; projecting the three-dimensional point cloud onto the visual image plane to obtain two-dimensional points in the camera coordinate system; extracting features from the camera image and establishing, by matching the feature points, a correspondence between feature points in the three-dimensional point cloud and feature points in the visual image; and fusing the matched two-dimensional feature points of the visual image with the three-dimensional information of the three-dimensional point cloud to generate a laser SLAM point cloud in which the laser information and the visual information are fused.
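The projection-and-screening step of claim 1 (project the point cloud onto the image plane, look up the semantic label at each projected pixel, and keep the points labelled as identification points) can be sketched as follows. The patent does not give an implementation, so the pinhole camera model, the extrinsics `R`, `t`, the intrinsics `K`, and the label value `marker_label` are illustrative assumptions, not the claimed method:

```python
import numpy as np

def screen_identification_points(points_lidar, seg_map, K, R, t, marker_label=1):
    """Project lidar points into the camera image and keep those whose
    projected pixel carries the semantic label of an identification point."""
    # Transform points from the lidar frame to the camera frame.
    pts_cam = points_lidar @ R.T + t                 # shape (N, 3)
    # Discard points behind the camera.
    in_front = pts_cam[:, 2] > 0
    pts_cam = pts_cam[in_front]
    # Pinhole projection to pixel coordinates.
    uv = pts_cam @ K.T
    uv = uv[:, :2] / uv[:, 2:3]
    u = np.round(uv[:, 0]).astype(int)
    v = np.round(uv[:, 1]).astype(int)
    # Keep only projections that land inside the segmentation map.
    h, w = seg_map.shape
    inside = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    # Fuse: look up each projected pixel's label and screen by semantics.
    hits = inside.copy()
    hits[inside] = seg_map[v[inside], u[inside]] == marker_label
    return points_lidar[in_front][hits]
```

In this sketch the fusion of claim 1 reduces to an index lookup: a 3-D point inherits the semantic label of the pixel it projects onto, and only marker-labelled points survive.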
2. The lidar SLAM positioning method of claim 1, wherein performing pose optimization based on the feature points and the information of the feature points, in combination with the three-dimensional coordinates and the prior features of the identification points, to obtain positioning data comprises:
acquiring a weight value of the identification points in the pose optimization;
and performing pose optimization based on the weight value, the feature points, and the information of the feature points, in combination with the three-dimensional coordinates and the prior features of the identification points, to obtain the positioning data.
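The patent does not name a particular optimizer for the weighted pose optimization of claim 2. One minimal illustration of how per-point weights (larger for identification points) can enter a pose estimate is a weighted closed-form rigid alignment; the 2-D formulation and all names below are assumptions for illustration only:

```python
import numpy as np

def weighted_pose_estimate(src, dst, weights):
    """Weighted rigid alignment in 2-D: find R, t minimising
    sum_i w_i * ||R @ src_i + t - dst_i||^2 in closed form (Kabsch)."""
    w = weights / weights.sum()
    # Weighted centroids: heavily weighted correspondences
    # (e.g. identification points) pull the solution towards themselves.
    mu_s = w @ src
    mu_d = w @ dst
    # Weighted cross-covariance and its SVD.
    cov = (src - mu_s).T @ np.diag(w) @ (dst - mu_d)
    U, _, Vt = np.linalg.svd(cov)
    # Guard against a reflection solution.
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, d]) @ U.T
    t = mu_d - R @ mu_s
    return R, t
```

With noisy correspondences, raising the weight of the identification points makes the recovered pose fit them more tightly, which is the qualitative behaviour the claim relies on in degraded scenes.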
3. The lidar SLAM positioning method of claim 2, wherein acquiring the weight value of the identification points in the pose optimization comprises:
determining the weight value of the identification points in the pose optimization based on the scene degradation degree of the target scene, wherein the weight value is directly proportional to the scene degradation degree.
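Claim 3 states only that the weight value is directly proportional to the scene degradation degree; it fixes neither the constant of proportionality nor any bound. A hypothetical mapping, with an illustrative gain and clamp that are not part of the patent, might look like:

```python
def marker_weight(degradation_degree, base_weight=1.0, gain=4.0, max_weight=10.0):
    """Weight of identification points in pose optimization: grows linearly
    with the scene degradation degree, clamped to a maximum (gain and clamp
    are illustrative assumptions)."""
    return min(base_weight + gain * degradation_degree, max_weight)
```

The clamp keeps a severely degraded scene (e.g. a long featureless tunnel) from letting the markers dominate the optimization entirely.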
4. The lidar SLAM positioning method of claim 1, wherein the shape of the identification points comprises a triangle, a square, a circle, or a five-pointed star.
5. The lidar SLAM positioning method of claim 1, wherein the target scene comprises a tunnel scene.
6. The lidar SLAM positioning method of claim 5, wherein the identification points are located on at least one of the ground, the ceiling, and the two side walls of the tunnel scene.
7. A lidar SLAM positioning device, comprising:
a scanning unit, configured to scan a target scene by a lidar and a vision sensor to obtain a laser SLAM point cloud containing a visual image, identification points being arranged in the target scene;
a determining unit, configured to determine a scene degradation degree of the target scene based on the laser SLAM point cloud containing the visual image;
a recognition unit, configured to perform, when the scene degradation degree of the target scene meets a preset condition, semantic recognition processing on the laser SLAM point cloud to identify the identification points in the laser SLAM point cloud, extract the three-dimensional coordinates of the identification points, and determine the prior features of the identification points;
an acquisition unit, configured to acquire feature points of the laser SLAM point cloud and information of the feature points, and to perform pose optimization based on the feature points and the information of the feature points, in combination with the three-dimensional coordinates and the prior features of the identification points, to obtain positioning data;
wherein the lidar SLAM positioning device is further configured to: collect a dataset of visual images containing correctly labelled identification points, for use as training data for a deep-learning semantic segmentation network; train the semantic segmentation network on the collected dataset, the task of the network being to assign each pixel of an input image to a specific semantic class, one class corresponding to the identification points; align the laser SLAM point cloud and the visual images in time and space; apply the trained semantic segmentation network to a new visual image and obtain a semantic label for each pixel, so as to generate a semantic segmentation map of the same size as the image; project the laser SLAM point cloud onto the corresponding image plane to obtain the three-dimensional point cloud corresponding to the semantic segmentation map; and fuse the semantic label of each pixel in the semantic segmentation map with the three-dimensional coordinates of the corresponding point-cloud points, and screen out, according to the semantic information, the points in the laser SLAM point cloud recognized as identification points, thereby obtaining the identification points in the laser SLAM point cloud;
wherein the laser SLAM point cloud containing the visual image is obtained by the following steps: synchronizing the sensors so that the data of the lidar and the data of the vision sensor are synchronized in time; projecting the three-dimensional point cloud onto the visual image plane to obtain two-dimensional points in the camera coordinate system; extracting features from the camera image and establishing, by matching the feature points, a correspondence between feature points in the three-dimensional point cloud and feature points in the visual image; and fusing the matched two-dimensional feature points of the visual image with the three-dimensional information of the three-dimensional point cloud to generate a laser SLAM point cloud in which the laser information and the visual information are fused.
8. A computer device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor, when executing the computer program, implements the steps of the method of any one of claims 1 to 6.
9. A computer-readable storage medium on which a computer program is stored, characterized in that the computer program, when executed by a processor, implements the steps of the method of any one of claims 1 to 6.
CN202410348136.5A 2024-03-26 2024-03-26 Laser radar SLAM positioning method, device, computer equipment and storage medium Active CN117949968B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410348136.5A CN117949968B (en) 2024-03-26 2024-03-26 Laser radar SLAM positioning method, device, computer equipment and storage medium

Publications (2)

Publication Number Publication Date
CN117949968A CN117949968A (en) 2024-04-30
CN117949968B true CN117949968B (en) 2024-06-21

Family

ID=90799681

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410348136.5A Active CN117949968B (en) 2024-03-26 2024-03-26 Laser radar SLAM positioning method, device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN117949968B (en)

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113052903A (en) * 2021-03-17 2021-06-29 浙江大学 Vision and radar fusion positioning method for mobile robot

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11768292B2 (en) * 2018-03-14 2023-09-26 Uatc, Llc Three-dimensional object detection
CN111461245B (en) * 2020-04-09 2022-11-04 武汉大学 Wheeled robot semantic mapping method and system fusing point cloud and image
KR20230112296A (en) * 2022-01-20 2023-07-27 금오공과대학교 산학협력단 Implementation of a Mobile Target Search System with 3D SLAM and Object Localization in Indoor Environments
CN114638909A (en) * 2022-03-24 2022-06-17 杭州电子科技大学 Substation semantic map construction method based on laser SLAM and visual fusion
CN114782626B (en) * 2022-04-14 2024-06-07 国网河南省电力公司电力科学研究院 Transformer substation scene map building and positioning optimization method based on laser and vision fusion
CN117367427A (en) * 2023-10-07 2024-01-09 安徽大学 Multi-mode slam method applicable to vision-assisted laser fusion IMU in indoor environment


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Research on SLAM Technology Fusing Lidar and Vision; Zha Yuan; China Masters' Theses Full-text Database, Information Science and Technology; 2024-03-15 (No. 03); I136-763 *

Also Published As

Publication number Publication date
CN117949968A (en) 2024-04-30

Similar Documents

Publication Publication Date Title
Giubilato et al. An evaluation of ROS-compatible stereo visual SLAM methods on a nVidia Jetson TX2
CN110462343A (en) The automated graphics for vehicle based on map mark
EP3519770A1 (en) Methods and systems for generating and using localisation reference data
EP2423871A1 (en) Apparatus and method for generating an overview image of a plurality of images using an accuracy information
JP2018124787A (en) Information processing device, data managing device, data managing system, method, and program
Chien et al. Visual odometry driven online calibration for monocular lidar-camera systems
CN113870343A (en) Relative pose calibration method and device, computer equipment and storage medium
US20210387636A1 (en) Method for estimating distance to and location of autonomous vehicle by using mono camera
CN111735439A (en) Map construction method, map construction device and computer-readable storage medium
KR20230003803A (en) Automatic calibration through vector matching of the LiDAR coordinate system and the camera coordinate system
CN114998276A (en) Robot dynamic obstacle real-time detection method based on three-dimensional point cloud
Kaufmann et al. Shadow-based matching for precise and robust absolute self-localization during lunar landings
CN114792338A (en) Vision fusion positioning method based on prior three-dimensional laser radar point cloud map
US20200191577A1 (en) Method and system for road image reconstruction and vehicle positioning
CN117949968B (en) Laser radar SLAM positioning method, device, computer equipment and storage medium
Aggarwal Machine vision based SelfPosition estimation of mobile robots
US11348278B2 (en) Object detection
US11514588B1 (en) Object localization for mapping applications using geometric computer vision techniques
Li-Chee-Ming et al. Augmenting visp’s 3d model-based tracker with rgb-d slam for 3d pose estimation in indoor environments
Liu et al. A method of simultaneous location and mapping based on RGB-D cameras
Hu et al. Accurate fiducial mapping for pose estimation using manifold optimization
Markiewicz The example of using intensity orthoimages in tls data registration-A case study
Das et al. Sensor fusion in autonomous vehicle using LiDAR and camera sensor with Odometry
Al-Isawi et al. Pose estimation for mobile and flying robots via vision system
CN117906598B (en) Positioning method and device of unmanned aerial vehicle equipment, computer equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant