WO2021017072A1 - Lidar-based SLAM closed-loop detection method and detection system - Google Patents
Lidar-based SLAM closed-loop detection method and detection system
- Publication number
- WO2021017072A1 (application PCT/CN2019/102850)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- closed
- loop detection
- module
- loop
- detection result
- Prior art date
Links
Classifications
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J9/00—Programme-controlled manipulators
- B25J9/16—Programme controls
- B25J9/1656—Programme controls characterised by programming, planning systems for manipulators
- B25J9/1664—Programme controls characterised by programming, planning systems for manipulators characterised by motion, path, trajectory planning
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S17/00—Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
- G01S17/88—Lidar systems specially adapted for specific applications
- G01S17/89—Lidar systems specially adapted for specific applications for mapping or imaging
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S17/00—Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
- G01S17/88—Lidar systems specially adapted for specific applications
- G01S17/93—Lidar systems specially adapted for specific applications for anti-collision purposes
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
- G06T17/05—Geographic models
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/10—Terrestrial scenes
Definitions
- the present invention claims the priority of an earlier application filed with the State Intellectual Property Office of China on August 1, 2019, with application number 201910707655.5 and the invention title "A lidar-based SLAM closed-loop detection method and detection system";
- the content of that application is incorporated herein by reference.
- This application belongs to the field of cleaning equipment, and in particular relates to a lidar-based SLAM closed-loop detection method and detection system.
- Autonomous robots must perceive, judge, and adapt to their environment well enough to complete the many tasks humans assign to them.
- the autonomy of a robot means completing tasks through the robot's own decision-making in an unknown environment.
- SLAM: simultaneous localization and mapping.
- closed-loop detection, also known as loop detection.
- loop detection means recognizing that the robot has returned to a previously visited position, forming a loop.
- closed-loop detection aims to reduce the accumulated error when constructing environmental maps.
- closed-loop detection is also a difficult point in SLAM: a successful detection significantly reduces accumulated error and helps the robot avoid obstacles and navigate more accurately and quickly, while a wrong detection result may distort the map.
- laser SLAM closed-loop detection lags relative to the actual loop: by the time laser SLAM detects the loop, re-optimizing the map may deform the local map;
- a path that is not a closed loop but is mistakenly treated as one may likewise deform the local map.
- the embodiments of the present application provide a lidar-based SLAM closed-loop detection method and detection system to solve the problems that laser SLAM closed-loop detection in the prior art suffers from hysteresis and large accumulated errors, is prone to false closed-loop detections, and degrades the construction of environmental maps.
- the first aspect of the embodiments of the present application provides a lidar-based SLAM closed-loop detection method, including:
- the closed-loop detection is performed through the visual closed-loop detection module to obtain the first closed-loop detection result;
- the second aspect of the embodiments of the present application provides a lidar-based SLAM closed-loop detection system, including:
- the visual closed-loop detection module is configured to perform closed-loop detection based on camera data collected by the camera to obtain a first closed-loop detection result; based on the first closed-loop detection result, send closed-loop detection information to the laser SLAM module;
- the laser SLAM module is used to perform image optimization operations when the closed-loop detection information is acquired.
- the visual closed-loop detection module added in the embodiments of the present application assists the laser SLAM module: it can detect closed loops in a relatively timely manner, even confirm local closed loops in advance, reduce accumulated error, and filter out wrong closed loops. This makes the laser SLAM module's map composition more timely and accurate, realizes closed-loop detection for the laser SLAM module, and improves the accuracy of environment-map construction and subsequent path planning.
- FIG. 1 is a first flow chart of a lidar-based SLAM closed-loop detection method provided by an embodiment of the present application
- FIG. 2 is a second flowchart of a lidar-based SLAM closed-loop detection method provided by an embodiment of the present application
- FIG. 3 is a structural block diagram of a SLAM closed-loop detection system based on lidar provided by an embodiment of the present application;
- Fig. 4 is a structural diagram of a terminal provided by an embodiment of the present application.
- the term "if" can be interpreted, depending on the context, as "when", "once", "in response to determining", or "in response to detecting".
- similarly, the phrase "if determined" or "if [described condition or event] is detected" can be interpreted, depending on the context, as "once determined", "in response to determining", "once [described condition or event] is detected", or "in response to detecting [described condition or event]".
- FIG. 1 is a first flow chart of a method for closed-loop detection of SLAM based on lidar provided by an embodiment of the present application.
- a lidar-based SLAM closed-loop detection method includes the following steps:
- Step 101 Based on the camera data collected by the camera, a closed-loop detection is performed through a visual closed-loop detection module to obtain a first closed-loop detection result.
- the camera data specifically includes image data obtained by the camera collecting external images.
- the camera data is obtained by the camera capturing images along the robot's path.
- the visual closed-loop detection module performs image comparison on the camera data collected by the camera and judges whether the robot has previously passed the current position point, thereby realizing closed-loop detection. Specifically, visual information of the current point, collected by a monocular camera, a binocular camera, a multi-camera rig, or a fisheye camera, is compared with the visual information of past points to judge whether the robot has already walked past the current position point.
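As an illustrative sketch (not part of the patent), the image-comparison step can be pictured as computing a similarity score between a global descriptor of the current frame and descriptors of past frames; the cosine similarity of non-negative bag-of-words histograms is one common choice. The function name and descriptor form are assumptions for illustration, since the patent does not fix a similarity measure.

```python
import numpy as np

def frame_similarity(desc_a: np.ndarray, desc_b: np.ndarray) -> float:
    """Cosine similarity between two global image descriptors
    (e.g. non-negative bag-of-words histograms), in [0, 1].
    Hypothetical helper; the patent leaves the measure unspecified."""
    denom = float(np.linalg.norm(desc_a) * np.linalg.norm(desc_b))
    if denom == 0.0:
        return 0.0
    return float(np.dot(desc_a, desc_b) / denom)
```

A score near 1 indicates the current view closely resembles a past view, which is the raw signal the module thresholds.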
- the visual closed-loop detection module may be the visual closed-loop detection part of the existing robot's visual SLAM module.
- performing closed-loop detection through the visual closed-loop detection module based on the camera data collected by the camera to obtain the first closed-loop detection result includes:
- if, among the image frames of the historical location points, the similarity between a second image frame and the first image frame is greater than a threshold, determining that the first closed-loop detection result is a closed loop; and/or,
- if the similarities between both the previous and the next image frame of the second image frame and the first image frame are greater than the threshold, determining that the first closed-loop detection result is a closed loop.
- the current location point is specifically the current location point of the robot on the travel path.
- while traveling, the robot collects multiple images at each point along its path. Among the multiple images collected at each travel point,
- a key image frame is determined, and the key image frame and the travel point are stored together to form a database of image frames of historical position points (that is, a key frame library).
- the first image frame of the current location point is matched with the recorded image frame of the historical location point in the database through the visual closed-loop detection module.
- the above-mentioned robot is, for example, a sweeper.
- each historical position of the robot can be put in one-to-one correspondence with the number of the key image frame taken there by the camera.
- the current position of the sweeper is likewise recorded, e.g. as Position(sweeper position, key frame number).
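A minimal sketch of the Position(sweeper position, key frame number) record and the key frame library it accumulates; the class and method names are hypothetical, since the patent only describes the record informally.

```python
from dataclasses import dataclass, field

@dataclass
class KeyframeRecord:
    position: tuple   # robot position on the travel path, e.g. (x, y)
    keyframe_id: int  # number of the key image frame taken there

@dataclass
class KeyframeLibrary:
    records: list = field(default_factory=list)

    def add(self, position: tuple, keyframe_id: int) -> None:
        # store the key image frame number together with the travel point
        self.records.append(KeyframeRecord(position, keyframe_id))

    def lookup(self, keyframe_id: int):
        # recover the historical position associated with a key frame
        for r in self.records:
            if r.keyframe_id == keyframe_id:
                return r
        return None
```

The one-to-one pairing is what lets a visual match against a stored key frame be mapped back to a concrete historical position.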
- the first image frame is the key image frame among the multiple image frames collected by the robot at the current position point;
- the key image frame may be, for example, the image frame with the best definition.
- if, among the image frames of the historical location points, the similarity between the second image frame and the first image frame is greater than the threshold, the first closed-loop detection result is determined to be a closed loop.
- alternatively, if the similarity between the second image frame and the first image frame is greater than the threshold, and the similarities between the previous and next image frames of the second image frame and the first image frame are also greater than the threshold, the first closed-loop detection result is determined to be a closed loop.
- the laser SLAM module is then provided with a visual closed-loop signal.
- the image frame at the current position is compared with the key image frames in the key frame library. If the similarity between the current image frame and a key image frame in the database, as well as the two frames before and after that key image frame, exceeds a threshold (for example, 50%), the closed loop is established and a visual closed-loop signal is provided to the laser SLAM module.
- the previous and next image frames are the image frames corresponding to the location points immediately before and after the location point corresponding to the second image frame.
- when judging similarity, image feature points can be extracted from the image frames, and it is judged whether the displacement of the feature points matches the displacement between the position points; when they match, the similarity is determined to be greater than the set value.
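The decision criteria above (match the historical keyframe itself, and/or both of its neighbouring frames, against a threshold such as 50%) can be sketched as a single predicate. This is an illustrative reading of the "and/or" clause, not the patent's exact formulation; the function name is an assumption.

```python
def is_closed_loop(sim_key: float, sim_prev: float, sim_next: float,
                   threshold: float = 0.5) -> bool:
    """First closed-loop decision: the matched historical keyframe,
    and/or both of its neighbouring frames, must exceed the
    similarity threshold (0.5 mirrors the 50% example)."""
    key_matches = sim_key > threshold
    neighbours_match = sim_prev > threshold and sim_next > threshold
    return key_matches or neighbours_match
```

Requiring the neighbouring frames as well makes the test stricter against one-off visual aliasing, since a genuine revisit should also match the views just before and after the stored point.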
- Step 102 Based on the first closed-loop detection result, send closed-loop detection information to the laser SLAM module through the visual closed-loop detection module.
- the closed-loop detection information is generated based on the first closed-loop detection result.
- the closed-loop detection information may include the first closed-loop detection result.
- Closed-loop detection is performed through the visual closed-loop detection module; when a closed loop is detected, a visual closed-loop signal and the closed-loop detection result are sent directly to the laser SLAM module, actively notifying it that a closed loop exists in the current path, so that the laser SLAM module can use the result directly to perform image optimization operations.
- Step 103 When the laser SLAM module obtains the closed-loop detection information, perform an image optimization operation.
- the laser SLAM module directly uses the closed-loop detection information from the visual closed-loop detection module as the basis for closed-loop detection, decides on that basis whether the map needs to be updated, and decides whether to perform the map optimization and update operation.
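A sketch of how the laser SLAM module might consume the visual module's result; the class and method names are assumptions, and the actual map optimization is abstracted to a placeholder.

```python
class LaserSlamModule:
    """Hypothetical stand-in for the laser SLAM module: it takes the
    visual module's closed-loop information as the basis for deciding
    whether to run the map optimization/update operation."""

    def __init__(self) -> None:
        self.map_optimized = False

    def on_closed_loop_info(self, loop_detected: bool) -> bool:
        # Step 103: perform the image optimization operation only when
        # the received closed-loop detection information reports a loop.
        if loop_detected:
            self.optimize_map()
        return self.map_optimized

    def optimize_map(self) -> None:
        # placeholder for pose-graph re-optimization of the map
        self.map_optimized = True
```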
- the addition of the visual closed-loop detection module assists the laser SLAM module: it can detect closed loops in a relatively timely manner, even confirm local closed loops in advance, reduce accumulated error, and filter out wrong closed loops, making the laser SLAM module's map composition more timely and accurate, realizing closed-loop detection for the laser SLAM module, and improving the accuracy of environment-map construction and subsequent path planning.
- the embodiments of the present application also provide different implementations of the SLAM closed-loop detection method based on lidar.
- FIG. 2 is a second flowchart of a lidar-based SLAM closed-loop detection method provided by an embodiment of the present application.
- a lidar-based SLAM closed-loop detection method includes the following steps:
- Step 201 Perform closed-loop detection through the laser SLAM module to obtain a second closed-loop detection result, and send a verification request to the visual closed-loop detection module.
- the verification request carries the second closed-loop detection result.
- This process occurs before the closed-loop detection information is sent to the laser SLAM module through the visual closed-loop detection module based on the first closed-loop detection result.
- before the visual closed-loop detection module sends closed-loop detection information to the laser SLAM module, the laser SLAM module also scans the environment during the robot's travel to collect laser point clouds. It first performs closed-loop detection on the collected point cloud to obtain a detection result, then generates a verification request based on that result and sends it to the visual closed-loop detection module, so that the visual module helps verify the correctness of the second closed-loop detection result, eliminating wrong closed-loop detections by the laser SLAM module and further ensuring the accuracy of map composition.
- when the laser SLAM module detects a closed-loop signal, it sends it to the visual closed-loop detection module for verification, to confirm whether the map information for which the closed loop was detected is truly a closed loop. If there is a closed loop, the visual module sends a visual closed-loop signal to the laser SLAM module; if not, it returns to the current image frame and ends the image traversal and comparison process.
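The verification exchange can be sketched as the visual module's reply to a laser verification request; per the description, a pass is reported only when both results agree and both indicate a closed loop. The function name and string replies are assumptions for illustration.

```python
def verify_laser_loop(second_result: bool, first_result: bool) -> str:
    """Visual module's reply to the laser module's verification request.
    second_result: the laser SLAM module's closed-loop detection result.
    first_result:  the visual module's own closed-loop detection result."""
    if second_result == first_result and second_result:
        return "confirmed"   # verification-passed feedback information
    return "error"           # closed-loop detection error information
```

Note that agreement alone is not enough: two modules agreeing that there is *no* loop produces no confirmation, matching the requirement that both results be closed loops.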
- Step 202 based on the camera data collected by the camera, perform closed-loop detection through the visual closed-loop detection module to obtain a first closed-loop detection result.
- step 202 is implemented in the same manner as step 101 in the foregoing embodiment and is not repeated here. The order in which steps 201 and 202 occur is not restricted.
- Step 203 When the visual closed-loop detection module receives the verification request, based on the first closed-loop detection result, send closed-loop detection information to the laser SLAM module through the visual closed-loop detection module.
- sending closed-loop detection information to the laser SLAM module through the visual closed-loop detection module includes:
- the visual closed-loop detection module outputs verification-passed closed-loop detection feedback information to the laser SLAM module.
- when the visual closed-loop detection module verifies the closed-loop detection result of the laser SLAM module, it compares that result with its own closed-loop detection result to determine whether the two are consistent. If they are consistent and both are closed loops, the verification result confirms that the laser SLAM module has detected a closed loop.
- the method further includes: if the first closed-loop detection result is inconsistent with the second closed-loop detection result, sending verification information of closed-loop detection error to the laser SLAM module through the visual closed-loop detection module.
- when the visual closed-loop detection module verifies the closed-loop detection result of the laser SLAM module and finds the two results inconsistent, it sends closed-loop detection error verification information to the laser SLAM module. The laser SLAM module then performs closed-loop detection again until the results of the two modules are consistent and both are closed loops, or the laser SLAM module simply takes the visual closed-loop detection module's result as authoritative and performs the follow-up operations.
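The retry-until-consistent behaviour, with fallback to the visual result, might look like the following sketch; the retry bound and function names are assumptions, since the patent does not specify them.

```python
def resolve_loop(laser_detect, visual_result: bool,
                 max_retries: int = 3) -> bool:
    """Re-run laser closed-loop detection until it agrees with the
    visual result and both indicate a closed loop; after repeated
    disagreement, the laser module takes the visual closed-loop
    detection module's result as authoritative."""
    for _ in range(max_retries):
        laser_result = laser_detect()
        if laser_result == visual_result and laser_result:
            return True  # consistent, and both are closed loops
    return visual_result  # defer to the visual closed-loop module
```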
- Step 204 When the laser SLAM module obtains the closed-loop detection information, perform an image optimization operation.
- This step is implemented in the same manner as step 103 in the foregoing embodiment, and will not be repeated here.
- before performing image optimization operations, the laser SLAM module needs to determine that the closed-loop detection result is a closed loop and has passed the result verification of the visual closed-loop detection module. If the two modules' closed-loop detection results are consistent, the detection result is considered reliable enough, the final result is determined to be a closed loop, and the subsequent image optimization operations are performed.
- the visual closed-loop detection module assists the laser SLAM module to eliminate closed-loop false detections to improve detection accuracy.
- the addition of the visual closed-loop detection module assists the laser SLAM module: it can detect closed loops in a relatively timely manner, even confirm local closed loops in advance, reduce accumulated error, and filter out wrong closed loops, making the laser SLAM module's map composition more timely and accurate, realizing closed-loop detection for the laser SLAM module, and improving the accuracy of environment-map construction and subsequent path planning.
- FIG. 3 is a structural diagram of a lidar-based SLAM closed-loop detection system provided by an embodiment of the present application. For ease of description, only parts related to the embodiment of the present application are shown.
- the SLAM closed-loop detection system based on lidar includes:
- the visual closed-loop detection module is configured to perform closed-loop detection based on camera data collected by the camera to obtain a first closed-loop detection result; based on the first closed-loop detection result, send closed-loop detection information to the laser SLAM module;
- the laser SLAM module is used to perform image optimization operations when the closed-loop detection information is acquired.
- the laser SLAM module is also used for:
- the visual closed-loop detection module is also used for:
- the step of sending closed-loop detection information to the laser SLAM module through the visual closed-loop detection module based on the first closed-loop detection result is performed.
- the visual closed-loop detection module is also used for:
- output verification-passed closed-loop detection feedback information to the laser SLAM module.
- the visual closed-loop detection module is also used for: if the first closed-loop detection result is inconsistent with the second closed-loop detection result, sending closed-loop detection error verification information to the laser SLAM module.
- the visual closed-loop detection module is also used for:
- if, among the image frames of the historical location points, the similarity between a second image frame and the first image frame is greater than a threshold, determine that the first closed-loop detection result is a closed loop; and/or,
- if the similarities between both the previous and the next image frame of the second image frame and the first image frame are greater than the threshold, determine that the first closed-loop detection result is a closed loop.
- the addition of the visual closed-loop detection module assists the laser SLAM module: it can detect closed loops in a relatively timely manner, even confirm local closed loops in advance, reduce accumulated error, and filter out wrong closed loops, making the laser SLAM module's map composition more timely and accurate, realizing closed-loop detection for the laser SLAM module, and improving the accuracy of environment-map construction and subsequent path planning.
- the lidar-based SLAM closed-loop detection system provided by the embodiments of the present application can implement the various processes of the foregoing embodiments of the lidar-based SLAM closed-loop detection method, and can achieve the same technical effects. To avoid repetition, details are not described here.
- Fig. 4 is a structural diagram of a terminal provided by an embodiment of the present application.
- the terminal 4 of this embodiment includes a processor 40, a memory 41, and a computer program 42 stored in the memory 41 and running on the processor 40.
- the terminal may be a robot, such as a sweeping robot or a warehouse cargo handling robot.
- the terminal 4 may include, but is not limited to, a processor 40 and a memory 41. Those skilled in the art can understand that FIG. 4 is only an example of the terminal 4 and does not constitute a limitation on it; the terminal may include more or fewer components than shown, combine certain components, or use different components. For example, the terminal may also include input and output devices, network access devices, buses, and so on.
- the so-called processor 40 may be a central processing unit (CPU), another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, etc.
- the general-purpose processor may be a microprocessor or the processor may also be any conventional processor or the like.
- the memory 41 may be an internal storage unit of the terminal 4, such as a hard disk or memory of the terminal 4.
- the memory 41 may also be an external storage device of the terminal 4, such as a plug-in hard disk equipped on the terminal 4, a smart memory card (Smart Media Card, SMC), or a Secure Digital (SD) card, Flash Card, etc. Further, the memory 41 may also include both an internal storage unit of the terminal 4 and an external storage device.
- the memory 41 is used to store the computer program and other programs and data required by the terminal.
- the memory 41 can also be used to temporarily store data that has been output or will be output.
- the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, that is, they may be located in one place, or they may be distributed on multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
- each unit in each embodiment of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit.
- the above-mentioned integrated unit can be implemented in the form of hardware or software functional unit.
- the integrated module/unit is implemented in the form of a software functional unit and sold or used as an independent product, it can be stored in a computer readable storage medium.
- all or part of the processes in the above method embodiments of this application can also be completed by a computer program instructing the relevant hardware.
- the computer program can be stored in a computer-readable storage medium. When the program is executed by the processor, the steps of the foregoing method embodiments can be implemented.
- the computer program includes computer program code, and the computer program code may be in the form of source code, object code, executable file, or some intermediate forms.
- the computer-readable medium may include any entity or device capable of carrying the computer program code: a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, etc.
- the addition of the visual closed-loop detection module assists the laser SLAM module: it can detect closed loops in a relatively timely manner, even confirm local closed loops in advance, reduce accumulated error, and filter out wrong closed loops, making the laser SLAM module's map composition more timely and accurate, realizing closed-loop detection for the laser SLAM module, and improving the accuracy of environment-map construction and subsequent path planning.
Abstract
Description
Claims (10)
- 1. A lidar-based SLAM closed-loop detection method, characterized by comprising: performing closed-loop detection through a visual closed-loop detection module based on camera data collected by a camera, to obtain a first closed-loop detection result; based on the first closed-loop detection result, sending closed-loop detection information to a laser SLAM module through the visual closed-loop detection module; and, when the laser SLAM module acquires the closed-loop detection information, performing an image optimization operation.
- 2. The lidar-based SLAM closed-loop detection method according to claim 1, characterized in that, before the sending of closed-loop detection information to the laser SLAM module through the visual closed-loop detection module based on the first closed-loop detection result, the method further comprises: performing closed-loop detection through the laser SLAM module to obtain a second closed-loop detection result, and sending a verification request to the visual closed-loop detection module, wherein the verification request carries the second closed-loop detection result; and, when the visual closed-loop detection module receives the verification request, executing the step of sending closed-loop detection information to the laser SLAM module through the visual closed-loop detection module based on the first closed-loop detection result.
- 3. The lidar-based SLAM closed-loop detection method according to claim 2, characterized in that the sending of closed-loop detection information to the laser SLAM module through the visual closed-loop detection module based on the first closed-loop detection result comprises: if the first closed-loop detection result is consistent with the second closed-loop detection result, and both the first and the second closed-loop detection results are closed loops, outputting verification-passed closed-loop detection feedback information to the laser SLAM module through the visual closed-loop detection module.
- 4. The lidar-based SLAM closed-loop detection method according to claim 3, characterized by further comprising: if the first closed-loop detection result is inconsistent with the second closed-loop detection result, sending closed-loop detection error verification information to the laser SLAM module through the visual closed-loop detection module.
- 5. The lidar-based SLAM closed-loop detection method according to claim 1, characterized in that the performing of closed-loop detection through the visual closed-loop detection module based on the camera data collected by the camera to obtain the first closed-loop detection result comprises: matching, through the visual closed-loop detection module, a first image frame of the current position point against recorded image frames of historical position points; if, among the image frames of the historical position points, the similarity between a second image frame and the first image frame is greater than a threshold, determining that the first closed-loop detection result is a closed loop; and/or, if, among the image frames of the historical position points, the similarities between both the previous and the next image frame of the second image frame and the first image frame are greater than the threshold, determining that the first closed-loop detection result is a closed loop.
- 6. A lidar-based SLAM closed-loop detection system, characterized by comprising: a visual closed-loop detection module, configured to perform closed-loop detection based on camera data collected by a camera to obtain a first closed-loop detection result; and a laser SLAM module, configured to perform an image optimization operation when the closed-loop detection information is acquired; wherein the visual closed-loop detection module sends closed-loop detection information to the laser SLAM module based on the first closed-loop detection result.
- 7. The lidar-based SLAM closed-loop detection system according to claim 6, characterized in that the laser SLAM module is further configured to: perform closed-loop detection to obtain a second closed-loop detection result, and send a verification request to the visual closed-loop detection module, wherein the verification request carries the second closed-loop detection result; and the visual closed-loop detection module is further configured to: upon receiving the verification request, execute the step of sending closed-loop detection information to the laser SLAM module through the visual closed-loop detection module based on the first closed-loop detection result.
- 8. The lidar-based SLAM closed-loop detection system according to claim 7, characterized in that the visual closed-loop detection module is further configured to: if the first closed-loop detection result is consistent with the second closed-loop detection result, and both are closed loops, output verification-passed closed-loop detection feedback information to the laser SLAM module.
- 9. The lidar-based SLAM closed-loop detection system according to claim 8, characterized in that the visual closed-loop detection module is further configured to: if the first closed-loop detection result is inconsistent with the second closed-loop detection result, send closed-loop detection error verification information to the laser SLAM module.
- 10. The lidar-based SLAM closed-loop detection system according to claim 6, characterized in that the visual closed-loop detection module is further configured to: match a first image frame of the current position point against recorded image frames of historical position points; if, among the image frames of the historical position points, the similarity between a second image frame and the first image frame is greater than a threshold, determine that the first closed-loop detection result is a closed loop; and/or, if the similarities between both the previous and the next image frame of the second image frame and the first image frame are greater than the threshold, determine that the first closed-loop detection result is a closed loop.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910707655.5A CN110587597B (zh) | 2019-08-01 | 2019-08-01 | Lidar-based SLAM closed-loop detection method and detection system |
CN201910707655.5 | 2019-08-01 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2021017072A1 (zh) | 2021-02-04 |
Family
ID=68853316
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2019/102850 WO2021017072A1 (zh) | 2019-08-01 | 2019-09-27 | Lidar-based SLAM closed-loop detection method and detection system |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN110587597B (zh) |
WO (1) | WO2021017072A1 (zh) |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113744236A (zh) * | 2021-08-30 | 2021-12-03 | 阿里巴巴达摩院(杭州)科技有限公司 | Loop-closure detection method, apparatus, storage medium, and computer program product |
CN114034299A (zh) * | 2021-11-08 | 2022-02-11 | 中南大学 | Navigation system based on active laser SLAM |
CN114608552A (zh) * | 2022-01-19 | 2022-06-10 | 达闼机器人股份有限公司 | Robot mapping method, system, apparatus, device, and storage medium |
WO2024007807A1 (zh) * | 2022-07-06 | 2024-01-11 | 杭州萤石软件有限公司 | Error correction method and apparatus, and mobile device |
Families Citing this family (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111145634B (zh) * | 2019-12-31 | 2022-02-22 | 深圳市优必选科技股份有限公司 | Method and device for correcting a map |
CN113552864A (zh) * | 2020-04-15 | 2021-10-26 | 深圳市镭神智能系统有限公司 | Positioning method and device for a self-moving body, self-moving body, and storage medium |
CN111856441B (zh) * | 2020-06-09 | 2023-04-25 | 北京航空航天大学 | Train positioning method based on fusion of vision and millimeter-wave radar |
CN112595322B (zh) * | 2020-11-27 | 2024-05-07 | 浙江同善人工智能技术有限公司 | Laser SLAM method fusing ORB loop-closure detection |
CN113829353B (zh) * | 2021-06-07 | 2023-06-13 | 深圳市普渡科技有限公司 | Robot, map construction method, device, and storage medium |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106153048A (zh) * | 2016-08-11 | 2016-11-23 | 广东技术师范学院 | Multi-sensor-based robot indoor positioning and mapping system |
CN106272423A (zh) * | 2016-08-31 | 2017-01-04 | 哈尔滨工业大学深圳研究生院 | Multi-robot collaborative mapping and positioning method for large-scale environments |
CN106885574A (zh) * | 2017-02-15 | 2017-06-23 | 北京大学深圳研究生院 | Monocular-vision robot simultaneous localization and mapping method based on a re-tracking strategy |
CN106897666A (zh) * | 2017-01-17 | 2017-06-27 | 上海交通大学 | Closed-loop detection method for indoor scene recognition |
CN108537844A (zh) * | 2018-03-16 | 2018-09-14 | 上海交通大学 | Visual SLAM loop-closure detection method fusing geometric information |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9201133B2 (en) * | 2011-11-11 | 2015-12-01 | The Board Of Trustees Of The Leland Stanford Junior University | Method and system for signal-based localization |
CN107246876B (zh) * | 2017-07-31 | 2020-07-07 | 中北润良新能源汽车(徐州)股份有限公司 | Method and system for autonomous positioning and map construction of a driverless car |
CN107529650B (zh) * | 2017-08-16 | 2021-05-18 | 广州视源电子科技股份有限公司 | Closed-loop detection method, apparatus and computer device |
CN109509230B (zh) * | 2018-11-13 | 2020-06-23 | 武汉大学 | SLAM method for a multi-lens combined panoramic camera |
2019
- 2019-08-01: CN application CN201910707655.5A filed; published as CN110587597B (zh); status: Active
- 2019-09-27: WO application PCT/CN2019/102850 filed; published as WO2021017072A1 (zh); status: Application Filing
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106153048A (zh) * | 2016-08-11 | 2016-11-23 | 广东技术师范学院 | Multi-sensor-based robot indoor positioning and mapping system |
CN106272423A (zh) * | 2016-08-31 | 2017-01-04 | 哈尔滨工业大学深圳研究生院 | Multi-robot collaborative mapping and localization method for large-scale environments |
CN106897666A (zh) * | 2017-01-17 | 2017-06-27 | 上海交通大学 | Closed-loop detection method for indoor scene recognition |
CN106885574A (zh) * | 2017-02-15 | 2017-06-23 | 北京大学深圳研究生院 | Monocular vision robot simultaneous localization and mapping method based on a re-tracking strategy |
CN108537844A (zh) * | 2018-03-16 | 2018-09-14 | 上海交通大学 | Visual SLAM loop closure detection method fusing geometric information |
Non-Patent Citations (2)
Title |
---|
LIANG, XIAO: "Indoor SLAM for Robots Based on Laser and Mono-vision Fusion", INFORMATION & TECHNOLOGY, CHINA MASTER'S THESES FULL-TEXT DATABASE, 1 December 2015 (2015-12-01), pages 1 - 52, XP055776709, [retrieved on 20210216] * |
LIANG, XIAO: "Indoor Slam for Robots Based on Laser And Mono-vision Fusion", INFORMATION & TECHNOLOGY, CHINA MASTER'S THESES FULL-TEXT DATABASE, no. 4, 15 April 2016 (2016-04-15), XP055776701, ISSN: 1674-0246 * |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113744236A (zh) * | 2021-08-30 | 2021-12-03 | 阿里巴巴达摩院(杭州)科技有限公司 | Loop closure detection method, apparatus, storage medium, and computer program product |
CN114034299A (zh) * | 2021-11-08 | 2022-02-11 | 中南大学 | Navigation system based on active laser SLAM |
CN114034299B (zh) * | 2021-11-08 | 2024-04-26 | 中南大学 | Navigation system based on active laser SLAM |
CN114608552A (zh) * | 2022-01-19 | 2022-06-10 | 达闼机器人股份有限公司 | Robot mapping method, system, apparatus, device, and storage medium |
WO2024007807A1 (zh) * | 2022-07-06 | 2024-01-11 | 杭州萤石软件有限公司 | Error correction method and apparatus, and mobile device |
Also Published As
Publication number | Publication date |
---|---|
CN110587597A (zh) | 2019-12-20 |
CN110587597B (zh) | 2020-09-22 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2021017072A1 (zh) | Lidar-based SLAM closed-loop detection method and detection system | |
US11629964B2 (en) | Navigation map updating method and apparatus and robot using the same | |
CN108027877B (zh) | Systems and methods for non-obstacle area detection | |
US20220371602A1 (en) | Vehicle positioning method, apparatus, and controller, intelligent vehicle, and system | |
CN111709975B (zh) | Multi-target tracking method and apparatus, electronic device, and storage medium | |
US20200209880A1 (en) | Obstacle detection method and apparatus and robot using the same | |
CN111209978B (zh) | Three-dimensional visual relocalization method and apparatus, computing device, and storage medium | |
WO2023016271A1 (zh) | Pose determination method, electronic device, and readable storage medium | |
Xu et al. | Model-agnostic multi-agent perception framework | |
WO2023273169A1 (zh) | 2.5D map construction method fusing vision and laser | |
CN110853085B (zh) | Semantic SLAM-based mapping method and apparatus, and electronic device | |
WO2020181426A1 (zh) | Lane line detection method and device, mobile platform, and storage medium | |
CN112201078B (zh) | Graph neural network-based automatic parking space detection method | |
CN115376109B (zh) | Obstacle detection method, obstacle detection apparatus, and storage medium | |
CN111784730A (zh) | Object tracking method and apparatus, electronic device, and storage medium | |
CN114550116A (zh) | Object recognition method and apparatus | |
WO2022036981A1 (zh) | Robot, and map construction method and apparatus therefor | |
Dev et al. | Steering angle estimation for autonomous vehicle | |
CN113587928A (zh) | Navigation method, apparatus, electronic device, storage medium, and computer program product | |
EP4279950A1 (en) | Fault diagnosis and handling method for vehicle-mounted laser radar, apparatus, medium and vehicle | |
CN113469045B (zh) | Visual positioning method and system for unmanned container trucks, electronic device, and storage medium | |
Choi et al. | State Machine and Downhill Simplex Approach for Vision‐Based Nighttime Vehicle Detection | |
CN111656404B (zh) | Image processing method and system, and movable platform | |
CN114429631A (zh) | Three-dimensional object detection method, apparatus, device, and storage medium | |
CN114049615B (zh) | Method, apparatus, and edge computing device for fusion association of traffic objects in a driving environment | |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 19939504 Country of ref document: EP Kind code of ref document: A1 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 19939504 Country of ref document: EP Kind code of ref document: A1 |
|
32PN | Ep: public notification in the ep bulletin as address of the addressee cannot be established |
Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 25/08/2022) |
|