WO2021017072A1 - Lidar-based SLAM closed-loop detection method and detection system - Google Patents

Lidar-based SLAM closed-loop detection method and detection system

Info

Publication number
WO2021017072A1
Authority
WO
WIPO (PCT)
Prior art keywords
closed
loop detection
module
loop
detection result
Prior art date
Application number
PCT/CN2019/102850
Other languages
English (en)
French (fr)
Inventor
叶力荣
任娟娟
张国栋
闫瑞君
孙振坤
Original Assignee
深圳市银星智能科技股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳市银星智能科技股份有限公司
Publication of WO2021017072A1


Classifications

    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B25: HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J: MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00: Programme-controlled manipulators
    • B25J9/16: Programme controls
    • B25J9/1656: Programme controls characterised by programming, planning systems for manipulators
    • B25J9/1664: Programme controls characterised by programming, planning systems for manipulators characterised by motion, path, trajectory planning
    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01S: RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00: Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/88: Lidar systems specially adapted for specific applications
    • G01S17/89: Lidar systems specially adapted for specific applications for mapping or imaging
    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01S: RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00: Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/88: Lidar systems specially adapted for specific applications
    • G01S17/93: Lidar systems specially adapted for specific applications for anti-collision purposes
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00: Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/05: Geographic models
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00: Scenes; Scene-specific elements
    • G06V20/10: Terrestrial scenes

Definitions

  • The present invention claims the priority of an earlier application filed with the State Intellectual Property Office of China on August 1, 2019, with application number 201910707655.5 and invention title "A lidar-based SLAM closed-loop detection method and detection system".
  • The content of the earlier application is incorporated herein by reference.
  • This application belongs to the field of cleaning equipment, and in particular relates to a lidar-based SLAM closed-loop detection method and detection system.
  • Autonomous robots are required to perceive, judge, and adapt to their environment well, so that they can complete the many tasks humans want them to complete.
  • A robot's autonomy lies in completing tasks through its own decision-making in an unknown environment.
  • In unknown environments, SLAM (simultaneous localization and mapping) technology is typically used for environment detection and path navigation.
  • Closed-loop detection, also known as loop detection, refers to the robot returning to a previously visited position, thereby forming a loop.
  • As a key part of SLAM, closed-loop detection aims to reduce the accumulated error when constructing environment maps.
  • Closed-loop detection is also a difficult point in SLAM: if it succeeds, the accumulated error can be significantly reduced, helping the robot perform obstacle avoidance and navigation more accurately and quickly, whereas a wrong detection result may distort the map.
  • For lidar-based SLAM in particular, closed-loop detection has two main problems. First, laser SLAM closed-loop detection is relatively lagging: when the laser SLAM detects a loop and the map is then re-optimized, the local map may be deformed.
  • Second, in similar environments the detection may be wrong, or accumulated error may arise in large environments; a path that is not a closed loop may be mistaken for one, which can likewise deform the local map.
  • The embodiments of the present application provide a lidar-based SLAM closed-loop detection method and detection system to solve the problems in the prior art that laser SLAM closed-loop detection lags, has large accumulated error, easily causes closed-loop false detection, and affects the construction of environment maps and subsequent path planning.
  • the first aspect of the embodiments of the present application provides a lidar-based SLAM closed-loop detection method, including:
  • the closed-loop detection is performed through the visual closed-loop detection module to obtain the first closed-loop detection result;
  • the second aspect of the embodiments of the present application provides a lidar-based SLAM closed-loop detection system, including:
  • the visual closed-loop detection module is configured to perform closed-loop detection based on camera data collected by the camera to obtain a first closed-loop detection result; based on the first closed-loop detection result, send closed-loop detection information to the laser SLAM module;
  • the laser SLAM module is used to perform image optimization operations when the closed-loop detection information is acquired.
  • The addition of the visual closed-loop detection module in the embodiments of the present application assists the laser SLAM module: a closed loop can be detected relatively promptly, a local closed loop can even be confirmed in advance, accumulated error is reduced, and wrong closed loops are filtered out, making the laser SLAM module's mapping more timely and accurate, realizing closed-loop detection for the laser SLAM module, and improving the accuracy of environment map construction and subsequent path planning.
  • The addition of the visual closed-loop detection module assists the laser SLAM module: it can detect a closed loop relatively promptly, even confirm a local closed loop in advance, reduce accumulated error, and filter out wrong closed loops, making the laser SLAM module's mapping more timely and accurate, realizing closed-loop detection for the laser SLAM module, and improving the accuracy of environment map construction and subsequent path planning.
  • FIG. 1 is a first flowchart of a lidar-based SLAM closed-loop detection method provided by an embodiment of the present application;
  • FIG. 2 is a second flowchart of a lidar-based SLAM closed-loop detection method provided by an embodiment of the present application;
  • FIG. 3 is a structural block diagram of a lidar-based SLAM closed-loop detection system provided by an embodiment of the present application;
  • FIG. 4 is a structural diagram of a terminal provided by an embodiment of the present application.
  • Depending on the context, the term "if" can be interpreted as "when", "once", "in response to determining", or "in response to detecting".
  • Similarly, the phrase "if it is determined" or "if [the described condition or event] is detected" can be interpreted, depending on the context, as "once it is determined", "in response to determining", "once [the described condition or event] is detected", or "in response to detecting [the described condition or event]".
  • FIG. 1 is a first flowchart of a lidar-based SLAM closed-loop detection method provided by an embodiment of the present application.
  • As shown in FIG. 1, the lidar-based SLAM closed-loop detection method includes the following steps:
  • Step 101 Based on the camera data collected by the camera, a closed-loop detection is performed through a visual closed-loop detection module to obtain a first closed-loop detection result.
  • The camera data specifically includes image data obtained by the camera capturing external images.
  • The camera data is obtained by the camera taking images along the robot's travel path.
  • The visual closed-loop detection module specifically performs image comparison processing on the camera data collected by the camera and judges whether the robot has previously passed the current position point, thereby realizing closed-loop detection. Specifically, the visual information of the current point collected by a monocular camera, a binocular camera, a multi-camera rig, or a fisheye camera can be compared with the visual information of previously visited points to judge whether the robot has passed the current position point.
  • the visual closed-loop detection module may be the visual closed-loop detection part of the existing robot's visual SLAM module.
  • Performing closed-loop detection through the visual closed-loop detection module based on the camera data collected by the camera to obtain the first closed-loop detection result includes:
  • if the similarity between a second image frame among the image frames of the historical position points and the first image frame is greater than a threshold, determining that the first closed-loop detection result is a closed loop; and/or,
  • if the similarities between the first image frame and both the previous image frame and the next image frame of the second image frame are each greater than a threshold, determining that the first closed-loop detection result is a closed loop.
  • the current location point is specifically the current location point of the robot on the travel path.
  • While the robot is traveling, it collects images at each travel point along its path; multiple images are collected at each travel point, one key image frame is determined from the multiple images collected at that point, and the key image frame is stored in correspondence with the travel point, forming a database of recorded image frames of historical position points (that is, a key frame library).
  • the first image frame of the current location point is matched with the recorded image frame of the historical location point in the database through the visual closed-loop detection module.
  • The above-mentioned robot is, for example, a sweeping robot.
  • In a specific implementation, the robot's historical positions can be put in one-to-one correspondence with the indices of the key image frames taken by the camera.
  • Each time a key frame is saved, the sweeper's current position is also recorded, for example as Position(sweeper position, key frame index).
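As an illustration of the key frame library and the Position(sweeper position, key frame index) record described above, the following sketch stores one key frame per travel point; the class and function names, and the use of a numeric sharpness score to pick the "best definition" frame, are assumptions for illustration rather than the patent's implementation.

```python
from dataclasses import dataclass

@dataclass
class KeyframeRecord:
    """One key frame library entry: Position(sweeper position, key frame index)."""
    position: tuple   # robot position at the travel point, e.g. (x, y)
    frame_index: int  # index of the key image frame
    frame: object     # the key image frame itself

class KeyframeLibrary:
    """Key frame library: one key image frame stored per travel point."""
    def __init__(self):
        self.records = []

    def add_travel_point(self, position, frames, sharpness):
        # From the multiple images collected at this travel point, keep the
        # one with the best definition (here: the highest sharpness score).
        best = max(range(len(frames)), key=lambda i: sharpness[i])
        record = KeyframeRecord(position=position,
                                frame_index=len(self.records),
                                frame=frames[best])
        self.records.append(record)
        return record
```

For example, `lib.add_travel_point((0, 0), [img_a, img_b], [0.4, 0.9])` would store `img_b` together with position `(0, 0)` and the next key frame index.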
  • The first image frame is a key image frame among the multiple image frames collected by the robot at the current position point; the key image frame may be the image frame with the best definition.
  • If the similarity between the second image frame among the image frames of the historical position points and the first image frame is greater than a threshold, the first closed-loop detection result is determined to be a closed loop.
  • If the similarities between the first image frame and both the previous image frame and the next image frame of the second image frame are each greater than the threshold, the first closed-loop detection result is determined to be a closed loop.
  • If the similarity between the second image frame and the first image frame is greater than the threshold and the similarities between the first image frame and both the previous and next image frames of the second image frame are also greater than the threshold, the first closed-loop detection result is determined to be a closed loop.
  • The image frame at the current position is compared with the key image frames in the key frame library; when the similarity between the current image frame and a key image frame in the database is greater than the threshold (for example, 50%), the closed loop is established and a visual closed-loop signal is provided to the laser SLAM module.
  • Likewise, when the similarities between the current image frame and a key image frame in the database and the two frames before and after that key image frame are all greater than the threshold (for example, 50%), the closed loop is established and a visual closed-loop signal is provided to the laser SLAM module.
  • The previous image frame and the next image frame are specifically the image frames corresponding to the position points immediately before and after the position point corresponding to the second image frame. Combining these image comparisons improves the accuracy of closed-loop detection.
  • Specifically, when judging similarity, image feature points can be extracted from the image frames, and it is judged whether the displacement of the feature points matches the displacement between the position points; when they match, the similarity is determined to be greater than the set value.
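The matching logic described above (a match against a historical key frame, and/or against both neighbours of that key frame) can be sketched as follows; the `similarity` function is a placeholder for whatever image comparison is used (for example the feature-point displacement matching just described), and the 0.5 threshold echoes the 50% example above.

```python
def detect_closed_loop(current_frame, keyframes, similarity, threshold=0.5):
    """First closed-loop detection result: True means a closed loop.

    keyframes:  historical key image frames, ordered by travel point.
    similarity: assumed comparison function (frame_a, frame_b) -> [0, 1].
    """
    for i, second_frame in enumerate(keyframes):
        # Case 1: the current frame matches a historical key frame.
        if similarity(current_frame, second_frame) > threshold:
            return True
        # Case 2: the current frame matches both the previous and the
        # next frame of that key frame.
        if 0 < i < len(keyframes) - 1:
            if (similarity(current_frame, keyframes[i - 1]) > threshold and
                    similarity(current_frame, keyframes[i + 1]) > threshold):
                return True
    return False
```

A real system would index the library rather than scan it linearly, but the two threshold cases map directly onto the judgment rules above.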
  • Step 102 Based on the first closed-loop detection result, send closed-loop detection information to the laser SLAM module through the visual closed-loop detection module.
  • the closed-loop detection information is generated based on the first closed-loop detection result.
  • the closed-loop detection information may include the first closed-loop detection result.
  • Closed-loop detection is performed through the visual closed-loop detection module; when a closed loop is detected, a visual closed-loop signal is sent directly to the laser SLAM module, giving the closed-loop detection result to the laser SLAM module and actively notifying it that a closed loop exists on the current path, so that the laser SLAM module can directly use the closed-loop detection result to perform the image optimization operation.
  • Step 103 When the laser SLAM module obtains the closed-loop detection information, perform an image optimization operation.
  • The laser SLAM module directly uses the closed-loop detection information of the visual closed-loop detection module as the basis for closed-loop detection, takes it as the basis for deciding whether the map needs to be updated, and decides whether to perform the map optimization and update operation.
  • The addition of the visual closed-loop detection module assists the laser SLAM module: it can detect a closed loop relatively promptly, even confirm a local closed loop in advance, reduce accumulated error, and filter out wrong closed loops, making the laser SLAM module's mapping more timely and accurate, realizing closed-loop detection for the laser SLAM module, and improving the accuracy of environment map construction and subsequent path planning.
  • the embodiments of the present application also provide different implementations of the SLAM closed-loop detection method based on lidar.
  • FIG. 2 is a second flowchart of a lidar-based SLAM closed-loop detection method provided by an embodiment of the present application.
  • As shown in FIG. 2, the lidar-based SLAM closed-loop detection method includes the following steps:
  • Step 201 Perform closed-loop detection through the laser SLAM module to obtain a second closed-loop detection result, and send a verification request to the visual closed-loop detection module.
  • the verification request carries the second closed-loop detection result.
  • This process occurs before the closed-loop detection information is sent to the laser SLAM module through the visual closed-loop detection module based on the first closed-loop detection result.
  • Before the visual closed-loop detection module sends closed-loop detection information to the laser SLAM module, the laser SLAM module also scans the environment during the robot's travel to collect laser point clouds. The laser SLAM module first performs closed-loop detection on its own based on the collected point cloud to obtain a detection result, then generates a verification request based on that closed-loop detection result and sends it to the visual closed-loop detection module, so that the visual closed-loop detection module can help verify whether the second closed-loop detection result is correct, eliminating erroneous closed-loop detection results of the laser SLAM module and further ensuring the accuracy of map construction.
  • When the laser SLAM module detects a closed-loop signal, it sends it to the visual closed-loop detection module for detection, to confirm whether the map information indicated by the detected closed loop truly forms a closed loop. If a closed loop exists, a visual closed-loop signal is sent to the laser SLAM module; if not, the current image frame is returned and the image traversal and comparison process ends.
  • Step 202 Based on the camera data collected by the camera, perform closed-loop detection through the visual closed-loop detection module to obtain a first closed-loop detection result.
  • Step 202 is implemented in the same manner as step 101 in the foregoing embodiment, and will not be repeated here. The order of execution between step 202 and step 201 is not restricted.
  • Step 203 When the visual closed-loop detection module receives the verification request, based on the first closed-loop detection result, send closed-loop detection information to the laser SLAM module through the visual closed-loop detection module.
  • sending closed-loop detection information to the laser SLAM module through the visual closed-loop detection module includes:
  • the visual closed-loop detection module outputs closed-loop detection feedback information indicating that the verification has passed to the laser SLAM module.
  • When the visual closed-loop detection module verifies the closed-loop detection result of the laser SLAM module, it specifically compares the laser SLAM module's detection result with the closed-loop detection result it made itself to determine whether the two are consistent. If they are consistent and both indicate a closed loop, the verification result confirms that the laser SLAM module has detected a closed loop.
  • the method further includes: if the first closed-loop detection result is inconsistent with the second closed-loop detection result, sending verification information of closed-loop detection error to the laser SLAM module through the visual closed-loop detection module.
  • When the visual closed-loop detection module verifies the closed-loop detection result of the laser SLAM module, it compares the laser SLAM module's detection result with the closed-loop detection result it made itself to determine whether the two are consistent. If they are not consistent, the visual closed-loop detection module sends verification information indicating a closed-loop detection error to the laser SLAM module, and the laser SLAM module performs closed-loop detection again until the closed-loop detection results of the laser SLAM module and the visual closed-loop detection module are consistent and both indicate a closed loop; alternatively, the laser SLAM module directly takes the closed-loop detection result of the visual closed-loop detection module as final and performs the subsequent operations.
  • Step 204 When the laser SLAM module obtains the closed-loop detection information, perform an image optimization operation.
  • This step is implemented in the same manner as step 103 in the foregoing embodiment, and will not be repeated here.
  • Before the laser SLAM module performs the image optimization operation, it needs to determine that the closed-loop detection result is a closed loop and that the result has passed verification by the visual closed-loop detection module. If the closed-loop detection results of the two modules are consistent, the detection result is considered sufficiently reliable, the final detection result is determined to be a closed loop, and the subsequent image optimization operation is performed.
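The cooperative verification described in steps 201-204 can be sketched as the following exchange; the function names, boolean detection results, and retry limit are illustrative assumptions, not the patent's implementation.

```python
def verify_laser_loop(laser_result, visual_result):
    """Visual module checks the laser SLAM module's second detection result.

    Returns "verified" when both results agree and both indicate a closed
    loop, otherwise "error" (closed-loop detection error feedback).
    """
    if laser_result is True and visual_result is True:
        return "verified"
    return "error"

def laser_slam_step(detect_laser, detect_visual, max_retries=3):
    """One cooperative round: re-detect until the results agree, then fall
    back to the visual module's result, mirroring steps 201-204 above."""
    for _ in range(max_retries):
        if verify_laser_loop(detect_laser(), detect_visual()) == "verified":
            return "optimize_map"  # closed loop confirmed: optimize the map
    # Fall back: take the visual closed-loop detection result as final.
    return "optimize_map" if detect_visual() else "no_loop"
```

The retry-then-fallback shape reflects the two alternatives the text allows: re-detection until the modules agree, or directly trusting the visual module's result.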
  • the visual closed-loop detection module assists the laser SLAM module to eliminate closed-loop false detections to improve detection accuracy.
  • The addition of the visual closed-loop detection module assists the laser SLAM module: it can detect a closed loop relatively promptly, even confirm a local closed loop in advance, reduce accumulated error, and filter out wrong closed loops, making the laser SLAM module's mapping more timely and accurate, realizing closed-loop detection for the laser SLAM module, and improving the accuracy of environment map construction and subsequent path planning.
  • FIG. 3 is a structural block diagram of a lidar-based SLAM closed-loop detection system provided by an embodiment of the present application. For ease of description, only the parts related to the embodiment of the present application are shown.
  • the SLAM closed-loop detection system based on lidar includes:
  • the visual closed-loop detection module is configured to perform closed-loop detection based on camera data collected by the camera to obtain a first closed-loop detection result; based on the first closed-loop detection result, send closed-loop detection information to the laser SLAM module;
  • the laser SLAM module is used to perform image optimization operations when the closed-loop detection information is acquired.
  • the laser SLAM module is also used for:
  • the visual closed-loop detection module is also used for:
  • the step of sending closed-loop detection information to the laser SLAM module through the visual closed-loop detection module based on the first closed-loop detection result is performed.
  • the visual closed-loop detection module is also used for:
  • closed-loop detection feedback information indicating that the verification has passed is output to the laser SLAM module.
  • the visual closed-loop detection module is also used for:
  • the visual closed-loop detection module is also used for:
  • if the similarity between a second image frame among the image frames of the historical position points and the first image frame is greater than a threshold, determining that the first closed-loop detection result is a closed loop; and/or,
  • if the similarities between the first image frame and both the previous image frame and the next image frame of the second image frame are each greater than a threshold, determining that the first closed-loop detection result is a closed loop.
  • The addition of the visual closed-loop detection module assists the laser SLAM module: it can detect a closed loop relatively promptly, even confirm a local closed loop in advance, reduce accumulated error, and filter out wrong closed loops, making the laser SLAM module's mapping more timely and accurate, realizing closed-loop detection for the laser SLAM module, and improving the accuracy of environment map construction and subsequent path planning.
  • the lidar-based SLAM closed-loop detection system provided by the embodiments of the present application can implement the various processes of the foregoing embodiments of the lidar-based SLAM closed-loop detection method, and can achieve the same technical effects. To avoid repetition, details are not described here.
  • Fig. 4 is a structural diagram of a terminal provided by an embodiment of the present application.
  • the terminal 4 of this embodiment includes a processor 40, a memory 41, and a computer program 42 stored in the memory 41 and running on the processor 40.
  • the terminal may be a robot, such as a sweeping robot or a warehouse cargo handling robot.
  • The terminal 4 may include, but is not limited to, a processor 40 and a memory 41. Those skilled in the art can understand that FIG. 4 is only an example of the terminal 4 and does not constitute a limitation on it; the terminal may include more or fewer components than shown, a combination of certain components, or different components. For example, the terminal may also include input and output devices, network access devices, buses, and so on.
  • The so-called processor 40 may be a central processing unit (CPU), or it may be another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, etc.
  • A general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
  • the memory 41 may be an internal storage unit of the terminal 4, such as a hard disk or memory of the terminal 4.
  • The memory 41 may also be an external storage device of the terminal 4, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, or a flash card equipped on the terminal 4. Further, the memory 41 may include both an internal storage unit of the terminal 4 and an external storage device.
  • the memory 41 is used to store the computer program and other programs and data required by the terminal.
  • the memory 41 can also be used to temporarily store data that has been output or will be output.
  • the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, that is, they may be located in one place, or they may be distributed on multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
  • each unit in each embodiment of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit.
  • the above-mentioned integrated unit can be implemented in the form of hardware or software functional unit.
  • If the integrated module/unit is implemented in the form of a software functional unit and sold or used as an independent product, it can be stored in a computer-readable storage medium.
  • All or part of the processes in the methods of the above embodiments of this application can also be completed by instructing the relevant hardware through a computer program.
  • The computer program can be stored in a computer-readable storage medium; when the program is executed by the processor, the steps of the foregoing method embodiments can be implemented.
  • The computer program includes computer program code, and the computer program code may be in the form of source code, object code, an executable file, or some intermediate form.
  • The computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electrical carrier signal, a telecommunications signal, and a software distribution medium.

Abstract

A lidar-based SLAM closed-loop detection method and detection system, applicable to the field of cleaning equipment. The method includes: performing closed-loop detection through a visual closed-loop detection module based on camera data collected by a camera to obtain a first closed-loop detection result (101); on the basis of the first closed-loop detection result, sending closed-loop detection information through the visual closed-loop detection module to a laser simultaneous localization and mapping (SLAM) module (102); and, when the laser SLAM module obtains the closed-loop detection information, performing an image optimization operation (103), improving the accuracy of environment map construction and subsequent path planning.

Description

Lidar-based SLAM closed-loop detection method and detection system
The present invention claims the priority of the earlier application filed with the State Intellectual Property Office of China on August 1, 2019, with application number 201910707655.5 and invention title "A lidar-based SLAM closed-loop detection method and detection system"; the content of the earlier application is incorporated herein by reference.
Technical field
This application belongs to the field of cleaning equipment, and in particular relates to a lidar-based SLAM closed-loop detection method and detection system.
Background
With advances in technologies such as signal processing, artificial intelligence, and mechanical manufacturing, the demand for robots with environmental perception, autonomous navigation, and human-machine interaction has grown increasingly strong. Autonomous robots are required to perceive, judge, and adapt to their environment well, so that they can complete the many tasks humans want them to complete. A robot's autonomy lies in completing tasks through its own decision-making in an unknown environment. In unknown environments, SLAM (simultaneous localization and mapping) technology is typically used to detect the unknown environment and navigate paths.
Closed-loop detection, also known as loop detection, refers to the robot returning to a previous position, thereby forming a loop. As a key part of SLAM, closed-loop detection aims to reduce the accumulated error when constructing environment maps.
Closed-loop detection is also a difficult point in SLAM: if it succeeds, the accumulated error can be significantly reduced, helping the robot perform obstacle avoidance and navigation more accurately and quickly, whereas a wrong detection result may distort the map. For lidar-based SLAM in particular, closed-loop detection has two main problems. First, laser SLAM closed-loop detection is relatively lagging: when the laser SLAM detects a loop and graph optimization is then performed, the local map may be deformed. Second, in similar environments the detection may be wrong, or accumulated error may arise in large environments; a path that is not a closed loop may be mistaken for one, which can likewise deform the local map. When these problems exist, the mapping quality deteriorates, and subsequent path planning and other tasks are adversely affected.
Technical problem
The embodiments of the present application provide a lidar-based SLAM closed-loop detection method and detection system to solve the problems in the prior art that laser SLAM closed-loop detection lags, has large accumulated error, easily causes closed-loop false detection, and affects environment map construction and subsequent path planning.
Technical solution
A first aspect of the embodiments of the present application provides a lidar-based SLAM closed-loop detection method, including:
performing closed-loop detection through a visual closed-loop detection module based on camera data collected by a camera to obtain a first closed-loop detection result;
on the basis of the first closed-loop detection result, sending closed-loop detection information through the visual closed-loop detection module to a laser SLAM module; and
when the laser SLAM module obtains the closed-loop detection information, performing an image optimization operation.
A second aspect of the embodiments of the present application provides a lidar-based SLAM closed-loop detection system, including:
a visual closed-loop detection module, configured to perform closed-loop detection based on camera data collected by a camera to obtain a first closed-loop detection result, and to send closed-loop detection information to a laser SLAM module on the basis of the first closed-loop detection result; and
a laser SLAM module, configured to perform an image optimization operation when the closed-loop detection information is obtained.
As can be seen from the above, in the embodiments of the present application, the addition of the visual closed-loop detection module assists the laser SLAM module: a closed loop can be detected relatively promptly, a local closed loop can even be confirmed in advance, accumulated error is reduced, and wrong closed loops are filtered out, making the laser SLAM module's mapping more timely and accurate, realizing closed-loop detection for the laser SLAM module, and improving the accuracy of environment map construction and subsequent path planning.
Beneficial effects
The addition of the visual closed-loop detection module assists the laser SLAM module: it can detect a closed loop relatively promptly, even confirm a local closed loop in advance, reduce accumulated error, and filter out wrong closed loops, making the laser SLAM module's mapping more timely and accurate, realizing closed-loop detection for the laser SLAM module, and improving the accuracy of environment map construction and subsequent path planning.
Brief description of the drawings
To explain the technical solutions in the embodiments of the present invention or in the prior art more clearly, the drawings needed for the embodiments are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present invention; for those of ordinary skill in the art, other variants can be obtained from these drawings without creative effort.
FIG. 1 is a first flowchart of a lidar-based SLAM closed-loop detection method provided by an embodiment of the present application;
FIG. 2 is a second flowchart of a lidar-based SLAM closed-loop detection method provided by an embodiment of the present application;
FIG. 3 is a structural block diagram of a lidar-based SLAM closed-loop detection system provided by an embodiment of the present application;
FIG. 4 is a structural diagram of a terminal provided by an embodiment of the present application.
Embodiments of the present invention
In the following description, for the purpose of illustration rather than limitation, specific details such as particular system structures and techniques are set forth in order to provide a thorough understanding of the embodiments of the present application. However, it will be clear to those skilled in the art that the present application can also be implemented in other embodiments without these specific details. In other cases, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so that unnecessary details do not obscure the description of the present application.
It should be understood that, when used in this specification and the appended claims, the term "comprising" indicates the presence of the described features, wholes, steps, operations, elements, and/or components, but does not exclude the presence or addition of one or more other features, wholes, steps, operations, elements, components, and/or sets thereof.
It should also be understood that the terminology used in this specification is for the purpose of describing particular embodiments only and is not intended to limit the application. As used in this specification and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms unless the context clearly indicates otherwise.
It should be further understood that the term "and/or" used in this specification and the appended claims refers to any and all possible combinations of one or more of the associated listed items, and includes these combinations.
As used in this specification and the appended claims, the term "if" can be interpreted, depending on the context, as "when", "once", "in response to determining", or "in response to detecting". Similarly, the phrase "if it is determined" or "if [the described condition or event] is detected" can be interpreted, depending on the context, as "once it is determined", "in response to determining", "once [the described condition or event] is detected", or "in response to detecting [the described condition or event]".
It should be understood that the sequence numbers of the steps in this embodiment do not imply an order of execution; the execution order of the processes should be determined by their functions and internal logic, and should not constitute any limitation on the implementation of the embodiments of the present application.
To illustrate the technical solutions described in this application, specific embodiments are described below.
参见图1,图1是本申请实施例提供的一种基于激光雷达的SLAM闭环检测方法的流程图一。如图1所示,一种基于激光雷达的SLAM闭环检测方法,该方法包括以下步骤:
步骤101,基于相机采集到的相机数据,通过视觉闭环检测模块进行闭环检测,得到第一闭环检测结果。
该相机数据具体包括相机对外部图像进行采集得到的图像数据。该相机数据为机器人在行进路径中相机进行图像拍摄得到。
视觉闭环检测模块具体为对由相机采集到的相机数据进行图像比对处理,判断机器人是否走过当前位置点,实现闭环检测。具体可以是基于单目摄像头或者双目摄像头或者多目摄像头或者鱼眼相机采集到的当前点和以往走过的点的视觉信息进行比较,判断机器人是否走过当前位置点,来实现闭环检测。
其中,可选地,该视觉闭环检测模块可以是现有机器人的视觉SLAM模块中的视觉闭环检测部分。
具体地,作为一可选的实施方式,所述基于相机采集到的相机数据,通过视觉闭环检测模块进行闭环检测,得到第一闭环检测结果,包括:
通过视觉闭环检测模块将当前位置点的第一图像帧与记录得到的历史位置点的图像帧进行匹配;
若所述历史位置点的图像帧中,第二图像帧与所述第一图像帧的相似度大于阈值,则确定所述第一闭环检测结果为闭环;和/或,
若所述历史位置点的图像帧中,第二图像帧的前一图像帧及后一图像帧与所述第一图像帧间的相似度均大于阈值,则确定所述第一闭环检测结果为闭环。
The current position point is specifically the robot's current location on its travel path.
Whether with the laser SLAM module or the visual loop closure detection module, false loop closures are easily detected in similar-looking environments; the current frame must therefore be matched against the keyframes in the keyframe database to determine whether a loop closure truly exists.
As the robot travels, it captures images at each waypoint along its path. Multiple images are captured at each waypoint; one key image frame is selected from them and stored in association with that waypoint, forming a database of recorded image frames of historical position points (i.e., the keyframe database). Through the visual loop closure detection module, the first image frame of the current position point is matched against the image frames of historical position points in this database.
The robot is, for example, a cleaning robot. In a specific implementation, the robot's historical positions may be paired one-to-one with the sequence numbers of the key image frames captured by the camera: each time a keyframe is saved, the cleaner's current position is recorded along with it, e.g., as Position(cleaner position, keyframe number).
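The keyframe bookkeeping described above can be sketched as follows. This is an illustrative minimal sketch, not the patent's implementation; the class and method names are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class KeyframeStore:
    # entries holds (robot position, keyframe id) pairs in visit order,
    # mirroring the Position(cleaner position, keyframe number) records above
    entries: list = field(default_factory=list)

    def add(self, position, keyframe_id):
        """Record the robot's position each time a keyframe is saved."""
        self.entries.append((position, keyframe_id))

    def neighbors(self, index):
        """Keyframe ids of the waypoints immediately before and after
        entry `index`, used later for the neighbor-frame similarity check."""
        prev_id = self.entries[index - 1][1] if index > 0 else None
        next_id = self.entries[index + 1][1] if index + 1 < len(self.entries) else None
        return prev_id, next_id
```

A store populated with three waypoints then yields, for the middle entry, the keyframe ids of its two adjacent waypoints.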
In the above steps, the first image frame is one key image frame among the multiple image frames captured by the robot at the current position point; it may be the frame with the best sharpness.
The loop closure decision may specifically proceed as follows:
If, among the image frames of the historical position points, the similarity between a second image frame and the first image frame is greater than a threshold, the first loop closure detection result is determined to be a loop closure.
If, among the image frames of the historical position points, the similarities between the first image frame and both the preceding and following image frames of the second image frame are greater than the threshold, the first loop closure detection result is determined to be a loop closure.
If, among the image frames of the historical position points, the similarity between a second image frame and the first image frame is greater than the threshold and the similarities between the first image frame and both the preceding and following image frames of the second image frame are also greater than the threshold, the first loop closure detection result is determined to be a loop closure.
The image frame at the current position is compared against the key image frames in the keyframe database; when its similarity to some key image frame in the database exceeds the threshold, e.g., 50%, the loop closure holds and a visual loop closure signal is provided to the laser SLAM module.
The image frame at the current position is compared against the key image frames in the keyframe database; when its similarities to both of the two frames adjacent to some key image frame in the database exceed the threshold, e.g., 50%, the loop closure holds and a visual loop closure signal is provided to the laser SLAM module.
The image frame at the current position is compared against the key image frames in the keyframe database; when its similarities to some key image frame and to both of that key frame's adjacent frames all exceed the threshold, e.g., 50%, the loop closure holds and a visual loop closure signal is provided to the laser SLAM module.
Here, the preceding and following image frames are the image frames corresponding to the waypoints immediately before and after the waypoint of the second image frame, i.e., the frames of the adjacent waypoints on either side. Combining these comparisons improves the accuracy of loop closure detection.
Specifically, when judging similarity, image feature points may be extracted from the image frames, and it may be checked whether the displacement of the feature points matches the displacement between the position points; when they match, the similarity is determined to be greater than the set value.
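The threshold rule described above can be sketched as follows, assuming similarities have already been computed (e.g., from feature matching) and normalized to [0, 1]. The function name and the 0.5 default (mirroring the 50% example) are illustrative, not from the patent.

```python
def is_loop_closure(sim_to_kf, sim_to_prev, sim_to_next, threshold=0.5):
    """Declare a loop closure when the current frame's similarity to a
    stored keyframe exceeds the threshold, and/or when its similarities to
    both the preceding and following keyframes exceed the threshold.
    Neighbor similarities may be None at the ends of the trajectory."""
    direct = sim_to_kf > threshold
    neighbor = (sim_to_prev is not None and sim_to_next is not None
                and sim_to_prev > threshold and sim_to_next > threshold)
    return direct or neighbor
```

The third case in the text (keyframe and both neighbors all above threshold) is covered as well, since it satisfies both branches of the `or`.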
Step 102: based on the first loop closure detection result, send loop closure detection information to the laser SLAM module through the visual loop closure detection module.
The loop closure detection information is generated based on the first loop closure detection result; specifically, it may contain the first loop closure detection result.
Loop closure detection is performed through the visual loop closure detection module; when a loop closure is detected, a visual loop closure signal is sent directly to the laser SLAM module, handing the detection result to the laser SLAM module and actively informing it that a loop closure exists on the current path, so that the laser SLAM module can use the result directly and perform a graph optimization operation.
Step 103: when the laser SLAM module receives the loop closure detection information, perform a graph optimization operation.
The laser SLAM module uses the loop closure detection information from the visual loop closure detection module directly as its loop closure evidence, and on that basis decides whether the map needs to be updated, i.e., whether to perform the map optimization and update operation.
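As a toy illustration of what the optimization step accomplishes (not the patent's implementation), consider a 1-D trajectory: a confirmed loop closure reveals the accumulated drift between the return pose and the stored pose, and the correction can be spread over the intervening poses. Real systems optimize a full pose graph with dedicated solvers.

```python
def distribute_drift(poses, loop_start, loop_end):
    """Spread the drift revealed by a loop closure linearly over the poses
    between loop_start and loop_end, so the corrected trajectory closes
    the loop (pose at loop_end matches pose at loop_start)."""
    drift = poses[loop_end] - poses[loop_start]  # accumulated error at loop closure
    span = loop_end - loop_start
    corrected = list(poses)
    for i in range(loop_start + 1, loop_end + 1):
        # each pose absorbs a share of the drift proportional to its index in the loop
        corrected[i] -= drift * (i - loop_start) / span
    return corrected
```

This is the sense in which filtering out false closures matters: a spurious constraint would distort every pose inside the (nonexistent) loop.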
In the embodiments of the present application, the addition of the visual loop closure detection module assists the laser SLAM module: loop closures can be detected in a timely manner, and local loop closures can even be confirmed in advance, reducing accumulated error and filtering out false loop closures. This makes the mapping of the laser SLAM module more timely and accurate, realizes loop closure detection for the laser SLAM module, and improves the accuracy of environment map construction and subsequent path planning.
The embodiments of the present application also provide other implementations of the lidar-based SLAM loop closure detection method.
Referring to Fig. 2, Fig. 2 is a second flowchart of a lidar-based SLAM loop closure detection method provided by an embodiment of the present application. As shown in Fig. 2, the lidar-based SLAM loop closure detection method includes the following steps:
Step 201: perform loop closure detection through the laser SLAM module to obtain a second loop closure detection result, and send a verification request to the visual loop closure detection module.
The verification request carries the second loop closure detection result.
This process occurs before the loop closure detection information is sent to the laser SLAM module through the visual loop closure detection module based on the first loop closure detection result.
Before the visual loop closure detection module sends loop closure detection information to the laser SLAM module, the laser SLAM module also scans the environment as the robot travels, collecting laser point clouds. It first performs its own loop closure detection based on the collected point clouds to obtain a detection result, and generates a verification request based on that result to send to the visual loop closure detection module, so that the visual module can help verify whether the second loop closure detection result is correct, exclude erroneous loop closure results from the laser SLAM module, and further ensure the accuracy of the constructed map.
When the laser SLAM module detects a loop closure signal, it sends it to the visual loop closure detection module for checking, to confirm whether the detected loop in the map information is truly a closure. If a loop closure exists, a visual loop closure signal is sent to the laser SLAM module; if not, the current image frame is returned and the image traversal and comparison process ends.
Step 202: based on the camera data collected by the camera, perform loop closure detection through the visual loop closure detection module to obtain the first loop closure detection result.
This step is implemented in the same way as step 101 in the foregoing implementation and is not repeated here. Steps 201 and 202 may occur in either order.
Step 203: when the visual loop closure detection module receives the verification request, send loop closure detection information to the laser SLAM module through the visual loop closure detection module based on the first loop closure detection result.
Further, as an optional implementation, sending the loop closure detection information to the laser SLAM module through the visual loop closure detection module based on the first loop closure detection result includes:
if the first loop closure detection result is consistent with the second loop closure detection result, and both the first and second loop closure detection results indicate a loop closure, outputting, through the visual loop closure detection module, loop closure detection feedback information indicating that verification passed to the laser SLAM module.
When the visual loop closure detection module verifies the loop closure detection result of the laser SLAM module, it specifically compares the laser SLAM module's result with its own loop closure detection result to check whether the two are consistent; if they are consistent and both indicate a loop closure, the verification result confirms that the laser SLAM module has detected a loop closure.
Further, the method also includes: if the first loop closure detection result is inconsistent with the second loop closure detection result, sending, through the visual loop closure detection module, verification information indicating a loop closure detection error to the laser SLAM module.
When the results are inconsistent, the visual loop closure detection module sends verification information indicating a loop closure detection error to the laser SLAM module; the laser SLAM module then performs loop closure detection again, until the results of the two modules are consistent and both indicate a loop closure, or the laser SLAM module simply takes the visual loop closure detection module's result as authoritative and proceeds with subsequent operations.
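The cross-check above can be sketched as follows. The function and the message strings are illustrative (the patent does not name them); the logic is: agreement with both results closed yields a pass, any disagreement yields a detection error.

```python
def validate_laser_closure(visual_closed, laser_closed):
    """The visual module compares its own result (visual_closed) with the
    second result carried in the laser module's verification request
    (laser_closed). Only when both agree that a loop exists is a pass
    returned; on disagreement, a detection error is reported so the laser
    module re-detects or defers to the visual result."""
    if visual_closed == laser_closed:
        return "LOOP_CONFIRMED" if visual_closed else "NO_LOOP"
    return "DETECTION_ERROR"
```

Note the agreed-but-not-closed case ("NO_LOOP") is an assumption here; the patent only specifies the pass and error messages.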
Step 204: when the laser SLAM module receives the loop closure detection information, perform a graph optimization operation.
This step is implemented in the same way as step 103 in the foregoing implementation and is not repeated here.
Before performing the graph optimization operation, the laser SLAM module needs to confirm that the loop closure detection result is a loop closure and that it has passed verification by the visual loop closure detection module. When the two results are consistent, the detection result is considered sufficiently reliable, the final result is determined to be a loop closure, and the subsequent graph optimization is performed. The visual loop closure detection module thus assists the laser SLAM module in rejecting false loop closure detections, improving detection accuracy.
In the embodiments of the present application, the addition of the visual loop closure detection module assists the laser SLAM module: loop closures can be detected in a timely manner, and local loop closures can even be confirmed in advance, reducing accumulated error and filtering out false loop closures. This makes the mapping of the laser SLAM module more timely and accurate, realizes loop closure detection for the laser SLAM module, and improves the accuracy of environment map construction and subsequent path planning.
Referring to Fig. 3, Fig. 3 is a structural diagram of a lidar-based SLAM loop closure detection system provided by an embodiment of the present application; for ease of description, only the parts relevant to the embodiment are shown.
The lidar-based SLAM loop closure detection system includes:
a visual loop closure detection module, configured to perform loop closure detection based on camera data collected by a camera to obtain a first loop closure detection result, and to send loop closure detection information to a laser SLAM module based on the first loop closure detection result; and
the laser SLAM module, configured to perform a graph optimization operation upon receiving the loop closure detection information.
The laser SLAM module is further configured to:
perform loop closure detection to obtain a second loop closure detection result, and send a verification request to the visual loop closure detection module, wherein the verification request carries the second loop closure detection result.
The visual loop closure detection module is further configured to:
upon receiving the verification request, perform the step of sending the loop closure detection information to the laser SLAM module based on the first loop closure detection result.
The visual loop closure detection module is further configured to:
if the first loop closure detection result is consistent with the second loop closure detection result, and both the first and second loop closure detection results indicate a loop closure, output loop closure detection feedback information indicating that verification passed to the laser SLAM module.
The visual loop closure detection module is further configured to:
if the first loop closure detection result is inconsistent with the second loop closure detection result, send verification information indicating a loop closure detection error to the laser SLAM module.
The visual loop closure detection module is further configured to:
match a first image frame of the current position point against recorded image frames of historical position points;
if, among the image frames of the historical position points, the similarity between a second image frame and the first image frame is greater than a threshold, determine that the first loop closure detection result is a loop closure; and/or,
if, among the image frames of the historical position points, the similarities between the first image frame and both the preceding and following image frames of the second image frame are greater than the threshold, determine that the first loop closure detection result is a loop closure.
In the embodiments of the present application, the addition of the visual loop closure detection module assists the laser SLAM module: loop closures can be detected in a timely manner, and local loop closures can even be confirmed in advance, reducing accumulated error and filtering out false loop closures. This makes the mapping of the laser SLAM module more timely and accurate, realizes loop closure detection for the laser SLAM module, and improves the accuracy of environment map construction and subsequent path planning.
The lidar-based SLAM loop closure detection system provided by the embodiments of the present application can implement each process of the above embodiments of the lidar-based SLAM loop closure detection method and achieve the same technical effects; to avoid repetition, details are not repeated here.
Fig. 4 is a structural diagram of a terminal provided by an embodiment of the present application. As shown in the figure, the terminal 4 of this embodiment includes: a processor 40, a memory 41, and a computer program 42 stored in the memory 41 and executable on the processor 40. The terminal may be a robot, for example, a cleaning robot or a warehouse cargo-handling robot.
The terminal 4 may include, but is not limited to, the processor 40 and the memory 41. Those skilled in the art will understand that Fig. 4 is merely an example of the terminal 4 and does not constitute a limitation on it; the terminal may include more or fewer components than shown, combine certain components, or use different components. For example, the terminal may also include input/output devices, network access devices, a bus, etc.
The so-called processor 40 may be a central processing unit (CPU), or another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, etc. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
The memory 41 may be an internal storage unit of the terminal 4, such as a hard disk or internal memory of the terminal 4. The memory 41 may also be an external storage device of the terminal 4, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, or a flash card provided on the terminal 4. Further, the memory 41 may include both an internal storage unit and an external storage device of the terminal 4. The memory 41 is used to store the computer program and other programs and data required by the terminal, and may also be used to temporarily store data that has been output or is to be output.
Those skilled in the art can clearly understand that, for convenience and brevity of description, the division into the above functional units and modules is merely used as an example; in practical applications, the above functions may be allocated to different functional units and modules as required, i.e., the internal structure of the apparatus may be divided into different functional units or modules to complete all or part of the functions described above. The functional units and modules in the embodiments may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit; the integrated unit may be implemented in the form of hardware or in the form of a software functional unit. In addition, the specific names of the functional units and modules are merely for ease of mutual distinction and are not used to limit the protection scope of the present application. For the specific working processes of the units and modules in the above system, reference may be made to the corresponding processes in the foregoing method embodiments, which are not repeated here.
In the above embodiments, the descriptions of the respective embodiments each have their own emphasis; for parts not detailed in one embodiment, reference may be made to the related descriptions of other embodiments.
Those of ordinary skill in the art will appreciate that the units and algorithm steps of the examples described in connection with the embodiments disclosed herein can be implemented by electronic hardware, or by a combination of computer software and electronic hardware. Whether these functions are executed in hardware or software depends on the specific application and design constraints of the technical solution. Skilled artisans may implement the described functions using different methods for each particular application, but such implementations should not be considered beyond the scope of the present application.
The units described as separate components may or may not be physically separated, and the components shown as units may or may not be physical units; they may be located in one place or distributed across multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, the functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
If the integrated module/unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on this understanding, all or part of the processes in the methods of the above embodiments of the present application may also be completed by instructing the relevant hardware through a computer program; the computer program may be stored in a computer-readable storage medium, and when executed by a processor, the computer program can implement the steps of each of the above method embodiments. The computer program includes computer program code, which may be in source code form, object code form, an executable file, certain intermediate forms, etc. The computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disc, computer memory, read-only memory (ROM), random access memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, etc. It should be noted that the content contained in the computer-readable medium may be appropriately increased or decreased according to the requirements of legislation and patent practice in the jurisdiction; for example, in some jurisdictions, according to legislation and patent practice, computer-readable media do not include electrical carrier signals and telecommunications signals.
The above embodiments are merely intended to illustrate the technical solutions of the present application, not to limit them. Although the present application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that they may still modify the technical solutions recorded in the foregoing embodiments, or make equivalent substitutions for some of the technical features therein; such modifications or substitutions do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present application, and shall all be included within the protection scope of the present application.
Industrial Applicability
The addition of the visual loop closure detection module assists the laser SLAM module: loop closures can be detected in a timely manner, and local loop closures can even be confirmed in advance, reducing accumulated error and filtering out false loop closures. This makes the mapping of the laser SLAM module more timely and accurate, realizes loop closure detection for the laser SLAM module, and improves the accuracy of environment map construction and subsequent path planning.
 

Claims (10)

  1. A lidar-based SLAM loop closure detection method, characterized by comprising: performing loop closure detection through a visual loop closure detection module based on camera data collected by a camera, to obtain a first loop closure detection result; sending loop closure detection information to a laser SLAM module through the visual loop closure detection module based on the first loop closure detection result; and when the laser SLAM module receives the loop closure detection information, performing a graph optimization operation.
  2. The lidar-based SLAM loop closure detection method according to claim 1, characterized in that before the sending of the loop closure detection information to the laser SLAM module through the visual loop closure detection module based on the first loop closure detection result, the method further comprises: performing loop closure detection through the laser SLAM module to obtain a second loop closure detection result, and sending a verification request to the visual loop closure detection module, wherein the verification request carries the second loop closure detection result; and when the visual loop closure detection module receives the verification request, performing the step of sending the loop closure detection information to the laser SLAM module through the visual loop closure detection module based on the first loop closure detection result.
  3. The lidar-based SLAM loop closure detection method according to claim 2, characterized in that the sending of the loop closure detection information to the laser SLAM module through the visual loop closure detection module based on the first loop closure detection result comprises: if the first loop closure detection result is consistent with the second loop closure detection result, and both the first loop closure detection result and the second loop closure detection result indicate a loop closure, outputting, through the visual loop closure detection module, loop closure detection feedback information indicating that verification passed to the laser SLAM module.
  4. The lidar-based SLAM loop closure detection method according to claim 3, characterized by further comprising: if the first loop closure detection result is inconsistent with the second loop closure detection result, sending, through the visual loop closure detection module, verification information indicating a loop closure detection error to the laser SLAM module.

  5. The lidar-based SLAM loop closure detection method according to claim 1, characterized in that the performing of loop closure detection through the visual loop closure detection module based on the camera data collected by the camera to obtain the first loop closure detection result comprises: matching, through the visual loop closure detection module, a first image frame of a current position point against recorded image frames of historical position points; if, among the image frames of the historical position points, the similarity between a second image frame and the first image frame is greater than a threshold, determining that the first loop closure detection result is a loop closure; and/or, if, among the image frames of the historical position points, the similarities between the first image frame and both the preceding and following image frames of the second image frame are greater than the threshold, determining that the first loop closure detection result is a loop closure.
  6. A lidar-based SLAM loop closure detection system, characterized by comprising: a visual loop closure detection module, configured to perform loop closure detection based on camera data collected by a camera to obtain a first loop closure detection result; and a laser SLAM module, configured to perform a graph optimization operation upon receiving loop closure detection information; wherein the visual loop closure detection module sends the loop closure detection information to the laser SLAM module based on the first loop closure detection result.
  7. The lidar-based SLAM loop closure detection system according to claim 6, characterized in that the laser SLAM module is further configured to: perform loop closure detection to obtain a second loop closure detection result, and send a verification request to the visual loop closure detection module, wherein the verification request carries the second loop closure detection result; and the visual loop closure detection module is further configured to: upon receiving the verification request, perform the step of sending the loop closure detection information to the laser SLAM module based on the first loop closure detection result.
  8. The lidar-based SLAM loop closure detection system according to claim 7, characterized in that the visual loop closure detection module is further configured to: if the first loop closure detection result is consistent with the second loop closure detection result, and both the first loop closure detection result and the second loop closure detection result indicate a loop closure, output loop closure detection feedback information indicating that verification passed to the laser SLAM module.
  9. The lidar-based SLAM loop closure detection system according to claim 8, characterized in that the visual loop closure detection module is further configured to: if the first loop closure detection result is inconsistent with the second loop closure detection result, send verification information indicating a loop closure detection error to the laser SLAM module.
  10. The lidar-based SLAM loop closure detection system according to claim 6, characterized in that the visual loop closure detection module is further configured to: match a first image frame of a current position point against recorded image frames of historical position points; if, among the image frames of the historical position points, the similarity between a second image frame and the first image frame is greater than a threshold, determine that the first loop closure detection result is a loop closure; and/or, if, among the image frames of the historical position points, the similarities between the first image frame and both the preceding and following image frames of the second image frame are greater than the threshold, determine that the first loop closure detection result is a loop closure.
PCT/CN2019/102850 2019-08-01 2019-09-27 基于激光雷达的 slam 闭环检测方法及检测系统 WO2021017072A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910707655.5A CN110587597B (zh) 2019-08-01 2019-08-01 一种基于激光雷达的slam闭环检测方法及检测系统
CN201910707655.5 2019-08-01

Publications (1)

Publication Number Publication Date
WO2021017072A1 true WO2021017072A1 (zh) 2021-02-04

Family

ID=68853316

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/102850 WO2021017072A1 (zh) 2019-08-01 2019-09-27 基于激光雷达的 slam 闭环检测方法及检测系统

Country Status (2)

Country Link
CN (1) CN110587597B (zh)
WO (1) WO2021017072A1 (zh)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113744236A (zh) * 2021-08-30 2021-12-03 阿里巴巴达摩院(杭州)科技有限公司 回环检测方法、装置、存储介质及计算机程序产品
CN114034299A (zh) * 2021-11-08 2022-02-11 中南大学 一种基于主动激光slam的导航系统
CN114608552A (zh) * 2022-01-19 2022-06-10 达闼机器人股份有限公司 一种机器人建图方法、系统、装置、设备及存储介质
WO2024007807A1 (zh) * 2022-07-06 2024-01-11 杭州萤石软件有限公司 一种误差校正方法、装置及移动设备

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111145634B (zh) * 2019-12-31 2022-02-22 深圳市优必选科技股份有限公司 一种校正地图的方法及装置
CN113552864A (zh) * 2020-04-15 2021-10-26 深圳市镭神智能系统有限公司 自移动主体的定位方法、装置、自移动主体及存储介质
CN111856441B (zh) * 2020-06-09 2023-04-25 北京航空航天大学 一种基于视觉与毫米波雷达融合的列车定位方法
CN112595322B (zh) * 2020-11-27 2024-05-07 浙江同善人工智能技术有限公司 一种融合orb闭环检测的激光slam方法
CN113829353B (zh) * 2021-06-07 2023-06-13 深圳市普渡科技有限公司 机器人、地图构建方法、装置和存储介质

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106153048A (zh) * 2016-08-11 2016-11-23 广东技术师范学院 一种基于多传感器的机器人室内定位及制图系统
CN106272423A (zh) * 2016-08-31 2017-01-04 哈尔滨工业大学深圳研究生院 一种针对大尺度环境的多机器人协同制图与定位的方法
CN106885574A (zh) * 2017-02-15 2017-06-23 北京大学深圳研究生院 一种基于重跟踪策略的单目视觉机器人同步定位与地图构建方法
CN106897666A (zh) * 2017-01-17 2017-06-27 上海交通大学 一种室内场景识别的闭环检测方法
CN108537844A (zh) * 2018-03-16 2018-09-14 上海交通大学 一种融合几何信息的视觉slam回环检测方法

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9201133B2 (en) * 2011-11-11 2015-12-01 The Board Of Trustees Of The Leland Stanford Junior University Method and system for signal-based localization
CN107246876B (zh) * 2017-07-31 2020-07-07 中北润良新能源汽车(徐州)股份有限公司 一种无人驾驶汽车自主定位与地图构建的方法及系统
CN107529650B (zh) * 2017-08-16 2021-05-18 广州视源电子科技股份有限公司 闭环检测方法、装置及计算机设备
CN109509230B (zh) * 2018-11-13 2020-06-23 武汉大学 一种应用于多镜头组合式全景相机的slam方法

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106153048A (zh) * 2016-08-11 2016-11-23 广东技术师范学院 一种基于多传感器的机器人室内定位及制图系统
CN106272423A (zh) * 2016-08-31 2017-01-04 哈尔滨工业大学深圳研究生院 一种针对大尺度环境的多机器人协同制图与定位的方法
CN106897666A (zh) * 2017-01-17 2017-06-27 上海交通大学 一种室内场景识别的闭环检测方法
CN106885574A (zh) * 2017-02-15 2017-06-23 北京大学深圳研究生院 一种基于重跟踪策略的单目视觉机器人同步定位与地图构建方法
CN108537844A (zh) * 2018-03-16 2018-09-14 上海交通大学 一种融合几何信息的视觉slam回环检测方法

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
LIANG, XIAO: "INDOOR SLAM FOR ROBOTS BASED ON LASER AND MONO-VISION FUSION", INFORMATION & TECHNOLOGY, CHINA MASTER'S THESES FULL-TEXT DATABASE, 1 December 2015 (2015-12-01), pages 1 - 52, XP055776709, [retrieved on 20210216] *
LIANG, XIAO: "Indoor Slam for Robots Based on Laser And Mono-vision Fusion", INFORMATION & TECHNOLOGY, CHINA MASTER'S THESES FULL-TEXT DATABASE, no. 4, 15 April 2016 (2016-04-15), XP055776701, ISSN: 1674-0246 *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113744236A (zh) * 2021-08-30 2021-12-03 阿里巴巴达摩院(杭州)科技有限公司 回环检测方法、装置、存储介质及计算机程序产品
CN114034299A (zh) * 2021-11-08 2022-02-11 中南大学 一种基于主动激光slam的导航系统
CN114034299B (zh) * 2021-11-08 2024-04-26 中南大学 一种基于主动激光slam的导航系统
CN114608552A (zh) * 2022-01-19 2022-06-10 达闼机器人股份有限公司 一种机器人建图方法、系统、装置、设备及存储介质
WO2024007807A1 (zh) * 2022-07-06 2024-01-11 杭州萤石软件有限公司 一种误差校正方法、装置及移动设备

Also Published As

Publication number Publication date
CN110587597A (zh) 2019-12-20
CN110587597B (zh) 2020-09-22

Similar Documents

Publication Publication Date Title
WO2021017072A1 (zh) 基于激光雷达的 slam 闭环检测方法及检测系统
US11629964B2 (en) Navigation map updating method and apparatus and robot using the same
CN108027877B (zh) 用于非障碍区检测的系统和方法
US20220371602A1 (en) Vehicle positioning method, apparatus, and controller, intelligent vehicle, and system
CN111709975B (zh) 多目标跟踪方法、装置、电子设备及存储介质
US20200209880A1 (en) Obstacle detection method and apparatus and robot using the same
CN111209978B (zh) 三维视觉重定位方法、装置及计算设备、存储介质
WO2023016271A1 (zh) 位姿确定方法、电子设备及可读存储介质
Xu et al. Model-agnostic multi-agent perception framework
WO2023273169A1 (zh) 一种融合视觉与激光的2.5d地图构建方法
CN110853085B (zh) 基于语义slam的建图方法和装置及电子设备
WO2020181426A1 (zh) 一种车道线检测方法、设备、移动平台及存储介质
CN112201078B (zh) 一种基于图神经网络的自动泊车停车位检测方法
CN115376109B (zh) 障碍物检测方法、障碍物检测装置以及存储介质
CN111784730A (zh) 一种对象跟踪方法、装置、电子设备及存储介质
CN114550116A (zh) 一种对象识别方法和装置
WO2022036981A1 (zh) 机器人及其地图构建方法和装置
Dev et al. Steering angle estimation for autonomous vehicle
CN113587928A (zh) 导航方法、装置、电子设备、存储介质及计算机程序产品
EP4279950A1 (en) Fault diagnosis and handling method for vehicle-mounted laser radar, apparatus, medium and vehicle
CN113469045B (zh) 无人集卡的视觉定位方法、系统、电子设备和存储介质
Choi et al. State Machine and Downhill Simplex Approach for Vision‐Based Nighttime Vehicle Detection
CN111656404B (zh) 图像处理方法、系统及可移动平台
CN114429631A (zh) 三维对象检测方法、装置、设备以及存储介质
CN114049615B (zh) 行驶环境中交通对象融合关联方法、装置及边缘计算设备

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19939504

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19939504

Country of ref document: EP

Kind code of ref document: A1

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 25/08/2022)

122 Ep: pct application non-entry in european phase

Ref document number: 19939504

Country of ref document: EP

Kind code of ref document: A1