WO2021147545A1 - Depth map processing method, small obstacle detection method and system, robot and medium - Google Patents

Depth map processing method, small obstacle detection method and system, robot and medium

Info

Publication number
WO2021147545A1
WO2021147545A1 (PCT/CN2020/135024, priority CN2020135024W)
Authority
WO
WIPO (PCT)
Prior art keywords
depth map
image
camera
structured light
point cloud
Prior art date
Application number
PCT/CN2020/135024
Other languages
English (en)
French (fr)
Inventor
刘勇
朱俊安
黄寅
郭璁
Original Assignee
深圳市普渡科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳市普渡科技有限公司
Priority to EP20916169.4A priority Critical patent/EP4083917A4/en
Priority to US17/794,045 priority patent/US20230063535A1/en
Publication of WO2021147545A1 publication Critical patent/WO2021147545A1/zh

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/80Geometric correction
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00Three dimensional [3D] modelling, e.g. data description of 3D objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/30Determination of transform parameters for the alignment of images, i.e. image registration
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/50Depth or shape recovery
    • G06T7/521Depth or shape recovery from laser ranging, e.g. using interferometry; from the projection of structured light
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/50Depth or shape recovery
    • G06T7/55Depth or shape recovery from multiple images
    • G06T7/593Depth or shape recovery from multiple images from stereo images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/80Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G06T7/85Stereo camera calibration
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/10Image acquisition
    • G06V10/12Details of acquisition arrangements; Constructional details thereof
    • G06V10/14Optical characteristics of the device performing the acquisition or on the illumination arrangements
    • G06V10/145Illumination specially adapted for pattern recognition, e.g. using gratings
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/24Aligning, centring, orientation detection or correction of the image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/26Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V10/267Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion by performing operations on regions, e.g. growing, shrinking or watersheds
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V10/759Region-based matching
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/761Proximity, similarity or dissimilarity measures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/80Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
    • G06V10/803Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level of input or preprocessed data
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/10Terrestrial scenes
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/56Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/58Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/60Type of objects
    • G06V20/64Three-dimensional objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10004Still image; Photographic image
    • G06T2207/10012Stereo images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10028Range image; Depth image; 3D point clouds
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30248Vehicle exterior or interior
    • G06T2207/30252Vehicle exterior; Vicinity of vehicle
    • G06T2207/30261Obstacle

Definitions

  • the invention relates to the technical field of robots, in particular to a depth map processing method, a small obstacle detection method and system, a robot and a medium.
  • Obstacle detection is an important part of autonomous robot navigation; it has been studied extensively in the robotics and vision fields and applied in some consumer-grade products. If small obstacles cannot be detected accurately, the safety of the robot's movement is compromised. In practice, however, small obstacles vary widely in type and are small in size, so detecting them remains a challenge.
  • Obstacle detection in robot scenarios generally obtains three-dimensional information about the target (such as its three-dimensional position and contour) through range sensors or algorithms.
  • Because small obstacles are small, more accurate three-dimensional information must be obtained to detect them, which places higher demands on the measurement accuracy and resolution of sensors and algorithms.
  • Active sensors such as radar or sonar have high measurement accuracy but low resolution; depth cameras based on infrared structured light can achieve higher resolution but are susceptible to sunlight interference: when the interference is strong, the image contains holes, and robustness is insufficient when the imaged target is small.
  • Passive approaches such as binocular stereo matching or stereo matching over image sequences, if indiscriminate dense reconstruction is used, are computationally expensive and struggle to reconstruct small targets, especially when there is a lot of background noise.
  • the detection scheme and algorithm must be applicable to the many types of small obstacles. If obstacles are detected directly, the detection targets must be defined in advance, which limits robustness.
  • the purpose of the present invention is to provide a depth map processing method, a small obstacle detection method and system, a robot, and a medium that improve the robustness and accuracy of the scheme and algorithm, and thereby the accuracy of detecting various small obstacles.
  • a depth map processing method including:
  • Sensor calibration: the sensors include a binocular camera and a structured light depth camera; the binocular camera acquires a left image and a right image, and the structured light depth camera acquires a structured light depth map; the intrinsic and distortion parameters of the binocular camera are calibrated, its extrinsic parameters are calibrated, and the extrinsic parameters between the binocular camera and the structured light depth camera are calibrated;
  • Distortion and epipolar correction: performing distortion and epipolar correction on the left and right images;
  • Data alignment: using the extrinsic parameters of the structured light depth camera to align the structured light depth map to the coordinate system of the left and right images, obtaining a binocular depth map;
  • Sparse stereo matching: performing sparse stereo matching on the hole regions of the structured light depth map to obtain disparity, converting the disparity into depth, and using the depth to fuse the structured light depth map with the binocular depth map, reconstructing a robust depth map.
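The data-alignment step above can be illustrated with a minimal sketch that reprojects the structured light depth map into the left camera's frame using the calibrated extrinsics. The function name and the pinhole-model conventions are illustrative, not from the patent:

```python
import numpy as np

def align_depth_to_left(depth, K_sl, K_left, R, t):
    """Reproject a structured-light depth map into the left camera frame.

    K_sl, K_left: 3x3 intrinsics; R (3x3), t (3,) map structured-light
    camera coordinates into left camera coordinates (the extrinsics).
    Returns a depth map registered to the left image (0 = no data).
    """
    h, w = depth.shape
    v, u = np.indices((h, w))
    z = depth.ravel()
    valid = z > 0
    # Back-project valid pixels to 3-D points in the structured-light frame.
    rays = np.linalg.inv(K_sl) @ np.vstack([u.ravel(), v.ravel(), np.ones(h * w)])
    pts = rays[:, valid] * z[valid]
    # Transform into the left camera frame and project with its intrinsics.
    pts_left = R @ pts + t.reshape(3, 1)
    proj = K_left @ pts_left
    uu = np.round(proj[0] / proj[2]).astype(int)
    vv = np.round(proj[1] / proj[2]).astype(int)
    out = np.zeros_like(depth)
    inb = (uu >= 0) & (uu < w) & (vv >= 0) & (vv < h) & (pts_left[2] > 0)
    # When several points land on one pixel, the last write wins (sketch only;
    # a z-buffer would keep the nearest point instead).
    out[vv[inb], uu[inb]] = pts_left[2][inb]
    return out
```

With identity intrinsics and extrinsics the map is returned unchanged, which is a quick sanity check of the geometry.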
  • the sparse stereo matching operation specifically includes:
  • extracting the hole mask, performing sparse stereo matching on the image within the hole mask, and obtaining the disparity.
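Hole-mask extraction can be sketched minimally, assuming the common structured-light convention that a zero depth value marks an invalid (hole) pixel; the function name is illustrative:

```python
import numpy as np

def hole_mask_and_pixels(depth):
    """Extract the hole mask of a structured-light depth map and the pixel
    coordinates inside it. Sparse stereo matching is then run only on these
    pixels instead of the whole image."""
    mask = depth == 0                   # True where the sensor returned no depth
    vs, us = np.nonzero(mask)           # rows (v) and columns (u) of hole pixels
    return mask.astype(np.uint8), np.stack([us, vs], axis=1)
```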
  • the distortion and epipolar correction operation specifically includes:
  • constraining the matching points used for sparse stereo matching to lie aligned on a single horizontal line.
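After rectification, corresponding points lie on the same image row, so the match for each hole pixel reduces to a 1-D search along that row. A sketch using a simple sum-of-absolute-differences cost; the window size and disparity range are illustrative choices, not values from the patent:

```python
import numpy as np

def match_on_scanline(left, right, u, v, half=3, max_disp=32):
    """Find the disparity of rectified-left pixel (u, v) by searching only
    along the same row of the rectified right image (epipolar constraint).
    Returns the disparity that minimises the SAD cost."""
    ref = left[v - half:v + half + 1, u - half:u + half + 1].astype(np.float32)
    best_d, best_cost = 0, np.inf
    for d in range(0, max_disp + 1):
        uc = u - d                       # candidate column in the right image
        if uc - half < 0:
            break
        cand = right[v - half:v + half + 1, uc - half:uc + half + 1].astype(np.float32)
        cost = np.abs(ref - cand).sum()
        if cost < best_cost:
            best_cost, best_d = cost, d
    return best_d
```

This is the step whose cost the constraint reduces: without rectification the search would be two-dimensional.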
  • the present invention also provides a small obstacle detection method.
  • the detection method includes the depth map processing method described above.
  • the detection method is used to detect small obstacles on the ground.
  • the method further includes:
  • a fusion scheme is used to perform three-dimensional reconstruction on the target image to obtain a complete dense point cloud.
  • the sparse stereo matching operation further includes:
  • dividing the robust depth map into blocks, converting the depth map in each block into a point cloud, and fitting the point cloud to the ground with a plane model; if the point cloud in a block does not satisfy the plane assumption, the point cloud in that block is removed, otherwise it is kept;
  • performing a secondary check on the kept blocks with a deep neural network, growing the blocks that pass the secondary check into regions based on the plane normal and the centre of gravity, and segmenting out the three-dimensional plane equation and boundary point cloud of a large ground region;
  • computing, for every block that fails the secondary check, the distance from its point cloud to the ground it belongs to, and segmenting out points whose distance exceeds a threshold as a suspected obstacle;
  • mapping the point cloud of the suspected obstacle to the image as seed points for region segmentation, growing the seeds, and extracting the complete obstacle region;
  • mapping the obstacle region back to a complete point cloud, completing the three-dimensional detection of the small obstacle.
  • the secondary check can exclude blocks whose depth maps fit the plane model but are not ground, thereby improving the overall accuracy of small obstacle detection.
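The per-block planarity test can be sketched with a least-squares plane fitted via SVD; the RMS tolerance is an assumed threshold, not a value from the patent:

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane through an (N, 3) point cloud via SVD.
    Returns unit normal n and offset d with n·p + d ≈ 0, plus the RMS residual."""
    c = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - c, full_matrices=False)
    n = vt[-1]                           # direction of least variance
    d = -n @ c
    r = (points - c) @ n                 # signed point-to-plane residuals
    return n, d, np.sqrt((r ** 2).mean())

def block_is_planar(points, tol=0.01):
    """Keep a block's point cloud only if it fits a plane within tol (metres,
    RMS). Blocks failing this test are discarded per the method above."""
    _, _, rms = fit_plane(points)
    return rms <= tol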
  • the binocular camera includes a left camera and a right camera
  • the ground image acquired by the left camera includes a left image
  • the ground image acquired by the right camera includes a right image
  • the structured light depth camera acquires the structured light depth map of the ground.
  • the present invention also provides a small obstacle detection system, which includes the small obstacle detection method as described above.
  • it further includes a robot body, the binocular camera and the structured light depth camera are arranged on the robot body, and the binocular camera and the structured light depth camera are arranged obliquely toward the direction of the ground.
  • a robot comprising a processor and a memory, a computer program being stored in the memory, the processor being used to execute the computer program to implement the depth map processing method described above or the small obstacle detection method described above.
  • a computer storage medium stores a computer program, and when the computer program is executed, it executes the above-mentioned depth map processing method or the above-mentioned small obstacle detection method.
  • the ground coverage of the images collected by the binocular camera and the structured light depth camera is enlarged, and the completeness of small obstacle recognition is significantly improved.
  • with the depth map processing method and small obstacle detection method and system provided by the present invention, full-image stereo matching is not needed; sparse stereo matching is performed only on the hole regions of the structured light depth map, which significantly reduces the overall computational cost of depth map processing and improves the robustness of the system.
  • FIG. 1 shows a schematic flowchart of a depth map processing method involved in an embodiment of the present invention
  • FIG. 2 shows a schematic flowchart of a small obstacle detection method according to an embodiment of the present invention
  • FIG. 3 shows a schematic flow chart of the sparse stereo matching of the small obstacle detection method according to the embodiment of the present invention.
  • the embodiment of the present invention relates to a depth map processing method, a small obstacle detection method and system.
  • the depth map processing method 100 involved in this embodiment includes:
  • Sensor calibration: the sensors include a binocular camera and a structured light depth camera; the binocular camera acquires a left image and a right image, and the structured light depth camera acquires a structured light depth map; the intrinsic and distortion parameters of the binocular camera are calibrated, its extrinsic parameters are calibrated, and the extrinsic parameters between the binocular camera and the structured light depth camera are calibrated;
  • Distortion and epipolar correction: performing distortion and epipolar correction on the left image and the right image;
  • Data alignment: using the extrinsic parameters of the structured light depth camera to align the structured light depth map to the coordinate system of the left and right images, obtaining a binocular depth map;
  • Sparse stereo matching: performing sparse stereo matching on the hole regions of the structured light depth map to obtain disparity, converting the disparity into depth, and using the depth to fuse the structured light depth map with the binocular depth map, reconstructing a robust depth map.
  • the operation of the sparse stereo matching (104) specifically includes:
  • the hole mask is extracted, sparse stereo matching is performed on the image within the hole mask, and the disparity is obtained.
  • the operation of the distortion and epipolar correction (102) specifically includes:
  • the matching points used for sparse stereo matching are constrained to lie aligned on a single horizontal line.
  • the embodiment of the present invention also relates to a small obstacle detection method 200, which includes the depth map processing method described above; the depth map processing method is not repeated here.
  • the detection method is used to detect small obstacles on the ground, and the method further includes:
  • the sparse stereo matching operation further includes:
  • the depth map in a block that fits the plane model but is not on the ground can be excluded through the secondary check, thereby improving the overall accuracy of small obstacle detection.
  • the binocular camera includes a left camera and a right camera
  • the ground image acquired by the left camera includes a left image
  • the ground image acquired by the right camera includes a right image
  • the structured light depth camera acquires the structured light depth map of the ground.
  • the embodiment of the present invention also relates to a small obstacle detection system.
  • the detection system includes the small obstacle detection method as described above; details of the detection method are not repeated here.
  • the small obstacle detection system further includes a robot body; the binocular camera and the structured light depth camera are disposed on the robot body and are tilted toward the ground.
  • the ground coverage of the images collected by the binocular camera and the structured light depth camera is enlarged, and the completeness of small obstacle recognition is significantly improved.
  • an embodiment of the present invention further provides a robot, which includes a processor, a memory, a network interface, and a database connected through a system bus.
  • the robot's processor is used to provide calculation and control capabilities.
  • the robot's memory includes non-volatile storage media and internal memory.
  • the non-volatile storage medium stores an operating system, a computer program, and a database.
  • the internal memory provides an environment for the operation of the operating system and computer programs in the non-volatile storage medium.
  • the robot's network interface is used to communicate with external terminals through a network connection.
  • the computer program is executed by the processor to realize a depth map processing method or a small obstacle detection method.
  • the embodiment of the present invention also provides a computer storage medium, the computer storage medium stores a computer program, and when the computer program is executed, it executes the above-mentioned depth map processing method or small obstacle detection method.
  • Non-volatile memory may include read only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory.
  • Volatile memory may include random access memory (RAM) or external cache memory.
  • RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM), among others.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Software Systems (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Optics & Photonics (AREA)
  • Geometry (AREA)
  • Computer Graphics (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a depth map processing method and a small obstacle detection method and system, comprising: sensor calibration, distortion and epipolar correction, data alignment, and sparse stereo matching. According to the depth map processing method and the small obstacle detection method and system provided by the invention, full-image stereo matching is not required; sparse stereo matching is performed only on the hole regions of the structured-light depth map, which significantly reduces the overall computational cost of depth map processing and improves the robustness of the system.

Description

Depth map processing method, small obstacle detection method and system, robot and medium
This application is based on, and claims priority from, Chinese invention patent application No. 202010062330.9, filed on January 20, 2020 and entitled "Depth map processing method, small obstacle detection method and system".
Technical Field
The invention relates to the technical field of robots, and in particular to a depth map processing method, a small obstacle detection method and system, a robot, and a medium.
Background
Obstacle detection is an important part of autonomous robot navigation; it has been studied extensively in the robotics and vision fields and applied in some consumer-grade products. If small obstacles cannot be detected accurately, the safety of the robot's movement is compromised. In practice, however, small obstacles vary widely in type and are small in size, so detecting them remains a challenge.
Summary of the Invention
Obstacle detection in robot scenarios generally obtains three-dimensional information about the target (such as its three-dimensional position and contour) through range sensors or algorithms. On the one hand, because small obstacles are small, more accurate three-dimensional information must be obtained to detect them, which places higher demands on the measurement accuracy and resolution of sensors and algorithms. Among active sensors, radar and sonar offer high measurement accuracy but low resolution; depth cameras based on infrared structured light achieve higher resolution but are susceptible to sunlight interference: when the interference is strong, the image contains holes, and robustness is insufficient when the imaged target is small. Passive approaches such as binocular stereo matching or stereo matching over image sequences, if indiscriminate dense reconstruction is used, are computationally expensive and struggle to reconstruct small targets, especially in the presence of heavy background noise. On the other hand, small obstacles come in many varieties, so the detection scheme and algorithm must be applicable to all of them. Detecting obstacles directly requires the detection targets to be defined in advance, which limits robustness.
In view of this, the purpose of the invention is to provide a depth map processing method, a small obstacle detection method and system, a robot, and a medium that improve the robustness and accuracy of the scheme and algorithm, and thereby the accuracy of detecting various small obstacles.
To achieve the above purpose, embodiments of the invention provide the following technical solutions:
A depth map processing method, comprising:
Sensor calibration: the sensors include a binocular camera and a structured-light depth camera; the binocular camera acquires a left image and a right image, and the structured-light depth camera acquires a structured-light depth map; the intrinsic and distortion parameters of the binocular camera are calibrated, its extrinsic parameters are calibrated, and the extrinsic parameters between the binocular camera and the structured-light depth camera are calibrated;
Distortion and epipolar correction: distortion and epipolar correction are performed on the left image and the right image;
Data alignment: using the extrinsic parameters of the structured-light depth camera, the structured-light depth map is aligned to the coordinate system of the left and right images to obtain a binocular depth map;
Sparse stereo matching: sparse stereo matching is performed on the hole regions of the structured-light depth map to obtain disparity, the disparity is converted into depth, and the depth is used to fuse the structured-light depth map with the binocular depth map, reconstructing a robust depth map.
In this way, full-image stereo matching is not required; sparse stereo matching is performed only on the hole regions of the structured-light depth map, which significantly reduces the overall computational cost of depth map processing and improves the robustness of the system.
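The disparity-to-depth conversion and fusion in the last step can be sketched as follows. For a rectified pair, Z = f·b/d with focal length f in pixels and baseline b; the simple fill-the-holes fusion rule is an illustrative choice, not the patent's prescribed fusion:

```python
import numpy as np

def disparity_to_depth(disp, focal, baseline):
    """Convert disparity (pixels) to metric depth with Z = f * b / d.
    Zero disparity (no match) is left as zero depth."""
    depth = np.zeros_like(disp, dtype=np.float64)
    m = disp > 0
    depth[m] = focal * baseline / disp[m]
    return depth

def fuse_depth(sl_depth, stereo_depth):
    """Fill the holes of the structured-light depth map with the depth
    recovered by sparse stereo matching; elsewhere the structured-light
    measurement is kept (a simple priority fusion, for illustration)."""
    fused = sl_depth.copy()
    holes = sl_depth == 0
    fused[holes] = stereo_depth[holes]
    return fused
```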
The sparse stereo matching operation specifically includes:
extracting a hole mask, performing sparse stereo matching on the image within the hole mask, and obtaining the disparity.
The distortion and epipolar correction operation specifically includes:
constraining the matching points used for sparse stereo matching to be aligned on a single horizontal line.
In this case, the time required for subsequent stereo matching is significantly reduced and the accuracy is greatly improved.
The invention also provides a small obstacle detection method. The detection method includes the depth map processing method described above and is used to detect small obstacles on the ground. The method further includes:
acquiring ground images with the binocular camera and the structured-light depth camera respectively;
performing dense reconstruction of the dominant background of the ground image, and using the binocular camera to perform sparse feature reconstruction at image positions with large gradients;
extracting a three-dimensional point cloud with visual processing techniques, and using a "background subtraction" detection method to separate and detect the point cloud of the small obstacle;
mapping the point cloud of the small obstacle to the image and performing image segmentation to obtain a target image;
performing three-dimensional reconstruction of the target image with a fusion scheme to obtain a complete dense point cloud.
In this case, densely reconstructing the dominant background of the ground image and using the "background subtraction" detection method to separate and detect the point cloud of the small obstacle improve the overall robustness of the method; by mapping the small obstacle to the image and performing image segmentation, the small obstacle can be segmented completely, enabling accurate three-dimensional reconstruction. The method therefore safeguards the accuracy of small obstacle detection.
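Selecting the image positions with large gradients for sparse feature reconstruction, as described above, can be sketched with central-difference gradients; the threshold value is an assumption for illustration:

```python
import numpy as np

def high_gradient_pixels(gray, thresh=30.0):
    """Return (u, v) coordinates of pixels whose gradient magnitude exceeds
    thresh (grey levels per pixel). Only these positions are reconstructed
    sparsely with the binocular camera; low-texture pixels are skipped."""
    g = gray.astype(np.float32)
    gx = np.zeros_like(g)
    gy = np.zeros_like(g)
    gx[:, 1:-1] = (g[:, 2:] - g[:, :-2]) / 2.0   # central difference, x
    gy[1:-1, :] = (g[2:, :] - g[:-2, :]) / 2.0   # central difference, y
    mag = np.hypot(gx, gy)
    vs, us = np.nonzero(mag > thresh)
    return np.stack([us, vs], axis=1)
```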
The sparse stereo matching operation further includes:
dividing the robust depth map into blocks, converting the depth map in each block into a point cloud, and fitting the point cloud to the ground with a plane model; if the point cloud in a block does not satisfy the plane assumption, the point cloud in that block is removed, otherwise it is kept;
performing a secondary check on the kept blocks with a deep neural network, growing the blocks that pass the secondary check into regions based on the plane normal and the centre of gravity, and segmenting out the three-dimensional plane equation and boundary point cloud of a large ground region;
computing, for every block that fails the secondary check, the distance from its point cloud to the ground it belongs to; if the distance exceeds a threshold, the points are segmented out as a suspected obstacle;
mapping the point cloud of the suspected obstacle to the image as seed points for region segmentation, growing the seed points, and extracting the complete obstacle region;
mapping the obstacle region back to a complete point cloud, completing the three-dimensional detection of the small obstacle.
In this case, the secondary check can exclude blocks whose depth maps fit the plane model but are not ground, improving the overall accuracy of small obstacle detection.
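The distance test that turns failed blocks into suspected obstacles can be sketched as follows; the 2 cm threshold is an assumption, since the text only requires the distance to exceed "a threshold":

```python
import numpy as np

def suspected_obstacle_points(points, normal, d, height_thresh=0.02):
    """For a block that failed the secondary check, keep the points whose
    distance to the fitted ground plane n·p + d = 0 exceeds height_thresh
    (metres). These points are the suspected small obstacle and later serve
    as seed points for region growing in the image."""
    n = np.asarray(normal, dtype=float)
    n = n / np.linalg.norm(n)            # ensure a unit normal
    dist = np.abs(points @ n + d)        # unsigned point-to-plane distance
    return points[dist > height_thresh]
```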
The binocular camera includes a left camera and a right camera; the ground image acquired by the left camera includes the left image, the ground image acquired by the right camera includes the right image, and the structured-light depth camera acquires the structured-light depth map of the ground.
The invention also provides a small obstacle detection system, the detection system including the small obstacle detection method described above.
The system further includes a robot body; the binocular camera and the structured-light depth camera are disposed on the robot body and are tilted toward the ground.
A robot, comprising a processor and a memory, a computer program being stored in the memory, the processor being configured to execute the computer program to implement the depth map processing method described above or the small obstacle detection method described above.
A computer storage medium storing a computer program which, when executed, performs the depth map processing method described above or the small obstacle detection method described above.
In this case, the ground coverage of the images collected by the binocular camera and the structured-light depth camera is enlarged, significantly improving the completeness of small obstacle recognition.
According to the depth map processing method and the small obstacle detection method and system provided by the invention, full-image stereo matching is not required; sparse stereo matching is performed only on the hole regions of the structured-light depth map, which significantly reduces the overall computational cost of depth map processing and improves the robustness of the system.
Brief Description of the Drawings
FIG. 1 is a schematic flowchart of the depth map processing method according to an embodiment of the invention;
FIG. 2 is a schematic flowchart of the small obstacle detection method according to an embodiment of the invention;
FIG. 3 is a schematic flowchart of the sparse stereo matching of the small obstacle detection method according to an embodiment of the invention.
Detailed Description
Preferred embodiments of the invention are described in detail below with reference to the drawings. In the following description, identical components are given identical reference signs and duplicate descriptions are omitted. The drawings are only schematic; the proportions of the components relative to one another, their shapes, and the like may differ from reality.
Embodiments of the invention relate to a depth map processing method, and a small obstacle detection method and system.
As shown in FIG. 1, the depth map processing method 100 of this embodiment includes:
101. Sensor calibration: the sensors include a binocular camera and a structured-light depth camera; the binocular camera acquires a left image and a right image, and the structured-light depth camera acquires a structured-light depth map; the intrinsic and distortion parameters of the binocular camera are calibrated, its extrinsic parameters are calibrated, and the extrinsic parameters between the binocular camera and the structured-light depth camera are calibrated;
102. Distortion and epipolar correction: distortion and epipolar correction are performed on the left image and the right image;
103. Data alignment: using the extrinsic parameters of the structured-light depth camera, the structured-light depth map is aligned to the coordinate system of the left and right images to obtain a binocular depth map;
104. Sparse stereo matching: sparse stereo matching is performed on the hole regions of the structured-light depth map to obtain disparity, the disparity is converted into depth, and the depth is used to fuse the structured-light depth map with the binocular depth map, reconstructing a robust depth map.
In this way, full-image stereo matching is not required; sparse stereo matching is performed only on the hole regions of the structured-light depth map, which significantly reduces the overall computational cost of depth map processing and improves the robustness of the system.
In this embodiment, the sparse stereo matching operation (104) specifically includes:
extracting the hole mask, performing sparse stereo matching on the image within the hole mask, and obtaining the disparity.
In this embodiment, the distortion and epipolar correction operation (102) specifically includes:
constraining the matching points used for sparse stereo matching to be aligned on a single horizontal line.
In this case, the time required for subsequent stereo matching is significantly reduced and the accuracy is greatly improved.
Embodiments of the invention also relate to a small obstacle detection method 200, which includes the depth map processing method described above; the depth map processing method is not repeated here.
As shown in FIG. 2, in this embodiment the detection method is used to detect small obstacles on the ground, and the method further includes:
201. acquiring ground images with the binocular camera and the structured-light depth camera respectively;
202. performing dense reconstruction of the dominant background of the ground image, and using the binocular camera to perform sparse feature reconstruction at image positions with large gradients;
203. extracting a three-dimensional point cloud with visual processing techniques, and using a "background subtraction" detection method to separate and detect the point cloud of the small obstacle;
204. mapping the point cloud of the small obstacle to the image and performing image segmentation to obtain a target image;
205. performing three-dimensional reconstruction of the target image with a fusion scheme to obtain a complete dense point cloud.
In this case, densely reconstructing the dominant background of the ground image and using the "background subtraction" detection method to separate and detect the point cloud of the small obstacle improve the overall robustness of the method; by mapping the small obstacle to the image and performing image segmentation, the small obstacle can be segmented completely, enabling accurate three-dimensional reconstruction. The method therefore safeguards the accuracy of small obstacle detection.
As shown in FIG. 3, in this embodiment the sparse stereo matching operation further includes:
2041. dividing the robust depth map into blocks, converting the depth map in each block into a point cloud, and fitting the point cloud to the ground with a plane model; if the point cloud in a block does not satisfy the plane assumption, the point cloud in that block is removed, otherwise it is kept;
2042. performing a secondary check on the kept blocks with a deep neural network, growing the blocks that pass the secondary check into regions based on the plane normal and the centre of gravity, and segmenting out the three-dimensional plane equation and boundary point cloud of a large ground region;
2043. computing, for every block that fails the secondary check, the distance from its point cloud to the ground it belongs to; if the distance exceeds a threshold, segmenting the points out as a suspected obstacle;
2044. mapping the point cloud of the suspected obstacle to the image as seed points for region segmentation, growing the seed points, and extracting the complete obstacle region;
2045. mapping the obstacle region back to a complete point cloud, completing the three-dimensional detection of the small obstacle.
In this case, the secondary check can exclude blocks whose depth maps fit the plane model but are not ground, improving the overall accuracy of small obstacle detection.
In this embodiment, the binocular camera includes a left camera and a right camera; the ground image acquired by the left camera includes the left image, the ground image acquired by the right camera includes the right image, and the structured-light depth camera acquires the structured-light depth map of the ground.
Embodiments of the invention also relate to a small obstacle detection system. The detection system includes the small obstacle detection method described above; it is not described again here.
In this embodiment, the small obstacle detection system further includes a robot body; the binocular camera and the structured-light depth camera are mounted on the robot body and are tilted toward the ground.
In this case, the ground coverage of the images collected by the binocular camera and the structured-light depth camera is enlarged, significantly improving the completeness of small obstacle recognition.
Optionally, embodiments of the invention further provide a robot comprising a processor, a memory, a network interface, and a database connected by a system bus. The processor of the robot provides computing and control capabilities. The memory of the robot includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, a computer program, and a database. The internal memory provides an environment for running the operating system and the computer program stored on the non-volatile storage medium. The network interface of the robot communicates with external terminals over a network connection. When executed by the processor, the computer program implements a depth map processing method or a small obstacle detection method.
Embodiments of the invention also provide a computer storage medium storing a computer program which, when executed, performs the depth map processing method or the small obstacle detection method described above.
A person of ordinary skill in the art will understand that all or part of the procedures of the above embodiment methods can be accomplished by a computer program instructing the relevant hardware; the computer program may be stored in a non-volatile computer-readable storage medium and, when executed, may include the procedures of the embodiments of the above methods. Any reference to memory, storage, a database, or other media used in the embodiments provided by the invention may include non-volatile and/or volatile memory. Non-volatile memory may include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory may include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM), among others.
The embodiments described above do not limit the scope of protection of this technical solution. Any modification, equivalent replacement, or improvement made within the spirit and principles of the above embodiments shall fall within the scope of protection of this technical solution.

Claims (10)

  1. A depth map processing method, characterized in that the method comprises:
    sensor calibration, wherein the sensors include a binocular camera and a structured-light depth camera, the binocular camera acquires a left image and a right image, the structured-light depth camera acquires a structured-light depth map, and the intrinsic and distortion parameters of the binocular camera, the extrinsic parameters of the binocular camera, and the extrinsic parameters between the binocular camera and the structured-light depth camera are calibrated;
    distortion and epipolar correction, wherein distortion and epipolar correction are performed on the left image and the right image;
    data alignment, wherein the structured-light depth map is aligned to the coordinate system of the left and right images using the extrinsic parameters of the structured-light depth camera, obtaining a binocular depth map;
    sparse stereo matching, wherein sparse stereo matching is performed on the hole regions of the structured-light depth map to obtain disparity, the disparity is converted into depth, and the depth is used to fuse the structured-light depth map with the binocular depth map, reconstructing a robust depth map.
  2. The depth map processing method according to claim 1, characterized in that the sparse stereo matching operation specifically comprises:
    extracting a hole mask, performing sparse stereo matching on the image within the hole mask, and obtaining the disparity.
  3. The depth map processing method according to claim 1, characterized in that the distortion and epipolar correction operation specifically comprises:
    constraining the matching points used for sparse stereo matching to be aligned on a single horizontal line.
  4. A small obstacle detection method, characterized in that the detection method comprises the depth map processing method according to any one of claims 1 to 3 and is used to detect small obstacles on the ground, the method further comprising:
    acquiring ground images with the binocular camera and the structured-light depth camera respectively;
    performing dense reconstruction of the dominant background of the ground image, and using the binocular camera to perform sparse feature reconstruction at image positions with large gradients;
    extracting a three-dimensional point cloud with visual processing techniques, and using a "background subtraction" detection method to separate and detect the point cloud of the small obstacle;
    mapping the point cloud of the small obstacle to the image and performing image segmentation to obtain a target image;
    performing three-dimensional reconstruction of the target image with a fusion scheme to obtain a complete dense point cloud.
  5. The small obstacle detection method according to claim 4, characterized in that the sparse stereo matching operation further comprises:
    dividing the robust depth map into blocks, converting the depth map in each block into a point cloud, and fitting the point cloud to the ground with a plane model; if the point cloud in a block does not satisfy the plane assumption, removing the point cloud in that block, otherwise keeping it;
    performing a secondary check on the kept blocks with a deep neural network, growing the blocks that pass the secondary check into regions based on the plane normal and the centre of gravity, and segmenting out the three-dimensional plane equation and boundary point cloud of a large ground region;
    computing, for every block that fails the secondary check, the distance from its point cloud to the ground it belongs to, and, if the distance exceeds a threshold, segmenting the points out as a suspected obstacle;
    mapping the point cloud of the suspected obstacle to the image as seed points for region segmentation, growing the seed points, and extracting the complete obstacle region;
    mapping the obstacle region back to a complete point cloud, completing the three-dimensional detection of the small obstacle.
  6. The small obstacle detection method according to claim 4, characterized in that the binocular camera comprises a left camera and a right camera, the ground image acquired by the left camera comprises the left image, the ground image acquired by the right camera comprises the right image, and the structured-light depth camera acquires the structured-light depth map of the ground.
  7. A small obstacle detection system, characterized in that the detection system comprises the small obstacle detection method according to any one of claims 4 to 6.
  8. The small obstacle detection system according to claim 7, characterized by further comprising a robot body, wherein the binocular camera and the structured-light depth camera are disposed on the robot body and are tilted toward the ground.
  9. A robot, characterized in that the robot comprises a processor and a memory, a computer program being stored in the memory, the processor being configured to execute the computer program to implement the depth map processing method according to any one of claims 1 to 3 or the small obstacle detection method according to any one of claims 4 to 6.
  10. A computer storage medium, characterized in that the computer storage medium stores a computer program which, when executed, performs the depth map processing method according to any one of claims 1 to 3 or the small obstacle detection method according to any one of claims 4 to 6.
PCT/CN2020/135024 2020-01-20 2020-12-09 Depth map processing method, small obstacle detection method and system, robot and medium WO2021147545A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
EP20916169.4A EP4083917A4 (en) 2020-01-20 2020-12-09 DEPTH IMAGE PROCESSING METHOD, SMALL OBSTACLE DETECTION METHOD AND SYSTEM, ROBOT AND MEDIUM
US17/794,045 US20230063535A1 (en) 2020-01-20 2020-12-09 Depth image processing method, small obstacle detection method and system, robot, and medium

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010062330.9A 2020-01-20 2020-01-20 Depth map processing method, small obstacle detection method and system
CN202010062330.9 2020-01-20

Publications (1)

Publication Number Publication Date
WO2021147545A1 true WO2021147545A1 (zh) 2021-07-29

Family

ID=70949024

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/135024 WO2021147545A1 (zh) 2020-01-20 2020-12-09 Depth map processing method, small obstacle detection method and system, robot and medium

Country Status (4)

Country Link
US (1) US20230063535A1 (zh)
EP (1) EP4083917A4 (zh)
CN (1) CN111260715B (zh)
WO (1) WO2021147545A1 (zh)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111260715B (zh) * 2020-01-20 2023-09-08 深圳市普渡科技有限公司 Depth map processing method, small obstacle detection method and system
CN113208882A (zh) * 2021-03-16 2021-08-06 宁波职业技术学院 Deep-learning-based intelligent obstacle avoidance method and system for the blind
CN113436269B (zh) * 2021-06-15 2023-06-30 影石创新科技股份有限公司 Dense image stereo matching method and apparatus, and computer device
CN113554713A (zh) * 2021-07-14 2021-10-26 大连四达高技术发展有限公司 Visual positioning and inspection method for hole making by an aircraft-skin mobile robot
CN115880448B (zh) * 2022-12-06 2024-05-14 西安工大天成科技有限公司 Three-dimensional measurement method and apparatus based on binocular imaging

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106796728A (zh) * 2016-11-16 2017-05-31 深圳市大疆创新科技有限公司 Method, apparatus, computer system, and mobile device for generating a three-dimensional point cloud
CN108230403A (zh) * 2018-01-23 2018-06-29 北京易智能科技有限公司 Obstacle detection method based on space partitioning
US20190130773A1 (en) * 2017-11-02 2019-05-02 Autel Robotics Co., Ltd. Obstacle avoidance method and device, moveable object and computer readable storage medium
CN110264510A (zh) * 2019-05-28 2019-09-20 北京邮电大学 Method for extracting depth-of-field information from binocularly captured images
CN111260715A (zh) * 2020-01-20 2020-06-09 深圳市普渡科技有限公司 Depth map processing method, small obstacle detection method and system

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2015074078A1 (en) * 2013-11-18 2015-05-21 Pelican Imaging Corporation Estimating depth from projected texture using camera arrays
CN103955920B (zh) * 2014-04-14 2017-04-12 桂林电子科技大学 Binocular-vision obstacle detection method based on three-dimensional point cloud segmentation
CN106504284B (zh) * 2016-10-24 2019-04-12 成都通甲优博科技有限责任公司 Depth map acquisition method combining stereo matching with structured light
CN108629812A (zh) * 2018-04-11 2018-10-09 深圳市逗映科技有限公司 Distance measurement method based on a binocular camera
CN109615652B (zh) * 2018-10-23 2020-10-27 西安交通大学 Depth information acquisition method and apparatus
CN110580724B (zh) * 2019-08-28 2022-02-25 贝壳技术有限公司 Method, apparatus, and storage medium for calibrating a binocular camera group
CN110686599B (zh) * 2019-10-31 2020-07-03 中国科学院自动化研究所 Three-dimensional measurement method, system, and apparatus based on colour Gray-code structured light
CN111260773B (zh) * 2020-01-20 2023-10-13 深圳市普渡科技有限公司 Three-dimensional reconstruction method, detection method, and detection system for small obstacles

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106796728A (zh) * 2016-11-16 2017-05-31 深圳市大疆创新科技有限公司 Method, apparatus, computer system, and mobile device for generating a three-dimensional point cloud
US20190130773A1 (en) * 2017-11-02 2019-05-02 Autel Robotics Co., Ltd. Obstacle avoidance method and device, moveable object and computer readable storage medium
CN108230403A (zh) * 2018-01-23 2018-06-29 北京易智能科技有限公司 Obstacle detection method based on space partitioning
CN110264510A (zh) * 2019-05-28 2019-09-20 北京邮电大学 Method for extracting depth-of-field information from binocularly captured images
CN111260715A (zh) * 2020-01-20 2020-06-09 深圳市普渡科技有限公司 Depth map processing method, small obstacle detection method and system

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP4083917A4 *

Also Published As

Publication number Publication date
CN111260715A (zh) 2020-06-09
CN111260715B (zh) 2023-09-08
EP4083917A1 (en) 2022-11-02
EP4083917A4 (en) 2024-02-14
US20230063535A1 (en) 2023-03-02

Similar Documents

Publication Publication Date Title
WO2021147548A1 (zh) Three-dimensional reconstruction method for small obstacles, detection method and system, robot, and medium
WO2021147545A1 (zh) Depth map processing method, small obstacle detection method and system, robot, and medium
US11010924B2 (en) Method and device for determining external parameter of stereoscopic camera
JP6852936B1 (ja) Drone visual odometry method based on depth point-line features
CN112785702A (zh) SLAM method based on tight coupling of a 2D lidar and a binocular camera
CN107560592B (zh) Accurate distance measurement method for targets tracked by a linked photoelectric tracker
CN106650701B (zh) Obstacle detection method and apparatus for indoor shadowed environments based on binocular vision
WO2012166814A1 (en) Online environment mapping
WO2021063128A1 (zh) Pose positioning method for an active rigid body in a single-camera environment, and related device
CN102982334B (zh) Sparse disparity acquisition method based on target edge features and grey-level similarity
Cheng et al. Building boundary extraction from high resolution imagery and lidar data
CN107885224A (zh) UAV obstacle avoidance method based on trinocular stereo vision
KR20210090384A (ko) 3D object detection method and apparatus using camera and lidar sensors
CN111798507A (zh) Power transmission line safe-distance measurement method, computer device, and storage medium
CN110992463B (zh) Three-dimensional reconstruction method and system for transmission conductor sag based on trinocular vision
CN114549549B (zh) Dynamic target modelling and tracking method based on instance segmentation in dynamic environments
Wang Automatic extraction of building outline from high resolution aerial imagery
Parmehr et al. Automatic registration of optical imagery with 3d lidar data using local combined mutual information
WO2020133080A1 (zh) Object positioning method and apparatus, computer device, and storage medium
Paudel et al. 2D-3D camera fusion for visual odometry in outdoor environments
CN116883590A (zh) Three-dimensional face point cloud optimization method, medium, and system
Jang et al. Topographic information extraction from KOMPSAT satellite stereo data using SGM
He et al. Planar constraints for an improved uav-image-based dense point cloud generation
Chen et al. 3d map building based on stereo vision
Yoshisada et al. Indoor map generation from multiple LiDAR point clouds

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20916169

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2020916169

Country of ref document: EP

Effective date: 20220729

NENP Non-entry into the national phase

Ref country code: DE