WO2023185375A1 - Depth map generation system and method, and autonomous mobile device - Google Patents


Info

Publication number
WO2023185375A1
Authority
WO
WIPO (PCT)
Prior art keywords
baseline
depth
pixel
speckle
image
Prior art date
Application number
PCT/CN2023/079554
Other languages
French (fr)
Chinese (zh)
Inventor
杜斌
Original Assignee
杭州萤石软件有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 杭州萤石软件有限公司
Publication of WO2023185375A1 publication Critical patent/WO2023185375A1/en

Classifications

    • G PHYSICS › G06 COMPUTING; CALCULATING OR COUNTING › G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL › G06T7/00 Image analysis › G06T7/50 Depth or shape recovery › G06T7/514 Depth or shape recovery from specularities
    • G PHYSICS › G06 COMPUTING; CALCULATING OR COUNTING › G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL › G06T5/00 Image enhancement or restoration › G06T5/50 Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
    • G PHYSICS › G06 COMPUTING; CALCULATING OR COUNTING › G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL › G06T7/00 Image analysis › G06T7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G PHYSICS › G06 COMPUTING; CALCULATING OR COUNTING › G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL › G06T2207/00 Indexing scheme for image analysis or image enhancement › G06T2207/10 Image acquisition modality › G06T2207/10028 Range image; Depth image; 3D point clouds
    • G PHYSICS › G06 COMPUTING; CALCULATING OR COUNTING › G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL › G06T2207/00 Indexing scheme for image analysis or image enhancement › G06T2207/20 Special algorithmic details › G06T2207/20212 Image combination › G06T2207/20221 Image fusion; Image merging
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS › Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE › Y02A TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE › Y02A90/00 Technologies having an indirect contribution to adaptation to climate change › Y02A90/30 Assessment of water resources

Abstract

A depth map generation system and method, and an autonomous mobile device. The method comprises: a plurality of projectors (103) projecting light; a sensor (102) acquiring an image to be processed; a processor (101) comparing the image to be processed with a preset speckle image, so as to obtain disparities corresponding to pixels at speckle locations in the image to be processed; for any baseline, on the basis of the baseline and the disparities corresponding to the pixels at the speckle locations, calculating depth values, which correspond to the baseline, of the pixels at the speckle locations; for any baseline, on the basis of the depth values, which correspond to the baseline, of the pixels at the speckle locations, generating a depth map corresponding to the baseline; and fusing depth maps corresponding to baselines, so as to obtain a target depth map. Therefore, a plurality of baselines for different distances between a plurality of projectors (103) and a sensor (102) can be used to respectively calculate corresponding depth maps, and the calculated depth maps are fused, thereby improving the accuracy of depth calculation.

Description

Depth map generation system and method, and autonomous mobile device
This application claims priority to Chinese patent application No. 202210328464.X, filed with the Chinese Patent Office on March 30, 2022 and entitled "Depth map generation system and method, and autonomous mobile device", which is incorporated herein by reference in its entirety.
Technical field
This application relates to the field of information technology, and in particular to a depth map generation system and method and an autonomous mobile device.
Background
At present, with the popularization of smart devices, automatic obstacle-avoidance devices are increasingly widely used. For example, a floor-sweeping robot can recognize and avoid surrounding obstacles. To identify obstacles, a projector projects speckles, a sensor captures images containing the speckles, and depth is then computed from the captured images to generate a depth map, which is used to recognize and avoid the obstacles.
In the prior art, improving the accuracy of depth computation generally requires deploying multiple sensors for capture and measuring with a three-dimensional ranging method based on the baselines between the different sensors and the projector. However, this approach not only places high demands on the chip, but also leads to high system complexity.
Summary
The purpose of the embodiments of this application is to provide a depth map generation system and method and an autonomous mobile device, so as to improve the accuracy of depth computation. The specific technical solutions are as follows:
In a first aspect, an embodiment of this application provides a depth map generation system, including a sensor, a processor, and multiple projectors;
the multiple projectors are configured to project light;
the sensor is configured to capture an image to be processed, where the image to be processed includes speckles formed by the reflected light projected by the multiple projectors;
the processor is configured to: compare the image to be processed with a preset speckle image to obtain the disparity corresponding to each pixel at each speckle position in the image to be processed, where the preset speckle image includes the initial reference position of each pixel at each speckle position; for any baseline, calculate, based on the baseline and the disparity corresponding to each pixel at each speckle position, the depth value of each pixel at each speckle position corresponding to the baseline, where a baseline represents the distance between a projector and the sensor, and different projectors are at different distances from the sensor; for any baseline, generate a depth map corresponding to the baseline based on the depth values of the pixels at the speckle positions corresponding to the baseline; and fuse the depth maps corresponding to the baselines to obtain a target depth map.
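The per-baseline depth computation the processor performs is standard structured-light triangulation, Z = f·b/d. A minimal sketch, assuming a rectified camera with focal length f in pixels and a flat list of per-speckle disparities (the function name and data layout are illustrative, not taken from the patent):

```python
def depth_from_disparity(disparities, f, baseline):
    """Per-speckle depth from disparity: Z = f * baseline / d.

    A disparity of 0 marks a pixel with no speckle match; its depth is
    left at 0.0 so later stages can treat it as "no measurement".
    """
    return [f * baseline / d if d > 0 else 0.0 for d in disparities]
```

For three projectors at distances s1 < s2 < s3 from the sensor, calling this once per baseline yields the per-baseline depth maps that are fused afterwards.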
Optionally, the processor is specifically configured to: for any baseline, calculate the depth values of the pixels at multiple speckle positions based on the baseline and the disparity corresponding to each pixel at each speckle position; and, for any baseline, extract, according to the preset depth range corresponding to the baseline, the depth values of the speckles whose depth values fall within that preset depth range, to obtain the depth values of the pixels at the speckle positions corresponding to the baseline, where each baseline corresponds to one preset depth range, and different baselines correspond to different preset depth ranges.
Optionally, the processor is specifically configured to: for any baseline, select the plane equation corresponding to the baseline; and calculate the depth of each pixel at each speckle position according to the selected plane equation and the disparity corresponding to each pixel at each speckle position, to obtain the depth value of each pixel at each speckle position.
Optionally, the multiple projectors are specifically configured to project light alternately, one projector at a time;
the sensor is specifically configured to capture an image to be processed once every preset time interval.
Optionally, among the multiple projectors, the exposure duration of the projector closest to the sensor is shorter than that of the projector farthest from the sensor.
In a second aspect, an embodiment of this application provides a depth map generation method, applied to the processor of a depth map generation system, where the depth map generation system includes a sensor, a processor, and multiple projectors. The method includes:
obtaining an image to be processed, where the image to be processed includes speckles formed by the reflected light projected by the multiple projectors;
comparing the image to be processed with a preset speckle image to obtain the disparity corresponding to each pixel at each speckle position in the image to be processed, where the preset speckle image includes the initial reference position of each pixel at each speckle position;
for any baseline, calculating, based on the baseline and the disparity corresponding to each pixel at each speckle position, the depth value of each pixel at each speckle position corresponding to the baseline, where a baseline represents the distance between a projector and the sensor, and different projectors are at different distances from the sensor;
for any baseline, generating a depth map corresponding to the baseline based on the depth values of the pixels at the speckle positions corresponding to the baseline; and
fusing the depth maps corresponding to the baselines to obtain a target depth map.
Optionally, calculating, for any baseline, the depth value of each pixel at each speckle position corresponding to the baseline based on the baseline and the disparity corresponding to each pixel at each speckle position includes:
for any baseline, calculating the depth values of the pixels at multiple speckle positions based on the baseline and the disparity corresponding to each pixel at each speckle position; and
for any baseline, extracting, according to the preset depth range corresponding to the baseline, the depth values of the speckles whose depth values fall within that preset depth range, to obtain the depth values of the pixels at the speckle positions corresponding to the baseline, where each baseline corresponds to one preset depth range, and different baselines correspond to different preset depth ranges.
Optionally, calculating, for any baseline, the depth value of each pixel at each speckle position corresponding to the baseline based on the baseline and the disparity corresponding to each pixel at each speckle position includes:
for any baseline, selecting the plane equation corresponding to the baseline; and
calculating the depth of each pixel at each speckle position according to the selected plane equation and the disparity corresponding to each pixel at each speckle position, to obtain the depth value of each pixel at each speckle position.
In a third aspect, an embodiment of this application provides an autonomous mobile device, including a depth map generation system, where the depth map generation system includes a sensor, a processor, and multiple projectors;
the multiple projectors are configured to project light;
the sensor is configured to capture an image to be processed, where the image to be processed includes speckles formed by the reflected light projected by the multiple projectors;
the processor is configured to: compare the image to be processed with a preset speckle image to obtain the disparity corresponding to each pixel at each speckle position in the image to be processed, where the preset speckle image includes the initial reference position of each pixel at each speckle position; for any baseline, calculate, based on the baseline and the disparity corresponding to each pixel at each speckle position, the depth value of each pixel at each speckle position corresponding to the baseline, where a baseline represents the distance between a projector and the sensor, and different projectors are at different distances from the sensor; for any baseline, generate a depth map corresponding to the baseline based on the depth values of the pixels at the speckle positions corresponding to the baseline; and fuse the depth maps corresponding to the baselines to obtain a target depth map.
Optionally, the processor is specifically configured to: for any baseline, calculate the depth values of the pixels at multiple speckle positions based on the baseline and the disparity corresponding to each pixel at each speckle position; and, for any baseline, extract, according to the preset depth range corresponding to the baseline, the depth values of the speckles whose depth values fall within that preset depth range, to obtain the depth values of the pixels at the speckle positions corresponding to the baseline, where each baseline corresponds to one preset depth range, and different baselines correspond to different preset depth ranges.
Optionally, the processor is specifically configured to: for any baseline, select the plane equation corresponding to the baseline; and calculate the depth of each pixel at each speckle position according to the selected plane equation and the disparity corresponding to each pixel at each speckle position, to obtain the depth value of each pixel at each speckle position.
Optionally, the multiple projectors are specifically configured to project light alternately, one projector at a time;
the sensor is specifically configured to capture an image to be processed once every preset time interval.
Optionally, among the multiple projectors, the exposure duration of the projector closest to the sensor is shorter than that of the projector farthest from the sensor.
In another aspect, an embodiment of this application provides a computer-readable storage medium storing a computer program that, when executed by a processor, implements any of the depth map generation methods described above.
An embodiment of this application further provides a computer program product containing instructions that, when run on a computer, causes the computer to perform any of the depth map generation methods described above.
Beneficial effects of the embodiments of this application:
The embodiments of this application provide a depth map generation system and method and an autonomous mobile device. The depth map generation system includes a sensor, a processor, and multiple projectors; the multiple projectors are configured to project light; the sensor is configured to capture an image to be processed, where the image to be processed includes speckles formed by the reflected light projected by the multiple projectors; the processor is configured to: compare the image to be processed with a preset speckle image to obtain the disparity corresponding to each pixel at each speckle position in the image to be processed, where the preset speckle image includes the initial reference position of each pixel at each speckle position; for any baseline, calculate, based on the baseline and the disparity corresponding to each pixel at each speckle position, the depth value of each pixel at each speckle position corresponding to the baseline, where a baseline represents the distance between a projector and the sensor, and different projectors are at different distances from the sensor; for any baseline, generate a depth map corresponding to the baseline based on the depth values of the pixels at the speckle positions corresponding to the baseline; and fuse the depth maps corresponding to the baselines to obtain a target depth map.
With the depth map generation system of the embodiments of this application, multiple baselines, corresponding to the different distances between the multiple projectors and the sensor, can be used to generate corresponding depth maps separately, and the generated depth maps can be fused, thereby improving the accuracy of depth computation.
Of course, implementing any product or method of this application does not necessarily require achieving all of the above advantages at the same time.
Brief description of the drawings
The drawings described here are provided for a further understanding of this application and constitute a part of this application. The illustrative embodiments of this application and their descriptions are used to explain this application and do not constitute an undue limitation of this application.
Figure 1 is a schematic structural diagram of a depth map generation system provided by an embodiment of this application;
Figure 2 is a schematic diagram of the baselines provided by an embodiment of this application;
Figure 3 shows the correspondence between the depth ranges of different baselines provided by an embodiment of this application;
Figure 4 is another schematic structural diagram of a depth map generation system provided by an embodiment of this application;
Figure 5 is a schematic flowchart of a depth map generation method provided by an embodiment of this application;
Figure 6 is another schematic flowchart of a depth map generation method provided by an embodiment of this application;
Figure 7 is a schematic structural diagram of an autonomous mobile device provided by an embodiment of this application;
Figure 8 is a schematic structural diagram of an electronic device provided by an embodiment of this application.
Detailed description
To make the purpose, technical solutions, and advantages of this application clearer, this application is further described in detail below with reference to the drawings and embodiments. Obviously, the described embodiments are only some, rather than all, of the embodiments of this application. Based on the embodiments in this application, all other embodiments obtained by a person of ordinary skill in the art without creative effort fall within the scope of protection of this application.
First, technical terms that may be used in the embodiments of this application are explained:
Baseline: the horizontal distance between the aperture centers of two cameras, or between a camera aperture center and the center of a speckle projector, is called the baseline of a depth camera.
Disparity: binocular disparity is also called stereoscopic parallax. The closer an object is to the image sensors, the greater the difference between the two sensors' captures of the object; this difference forms the binocular disparity. Subsequent algorithms can use measurements of this disparity to estimate the distance from the object to the camera.
Speckle projector: a device used to project speckles onto the photographed object.
In a first aspect, an embodiment of this application provides a depth map generation system. Referring to Figure 1, the system includes a sensor 102, a processor 101, and multiple projectors 103;
the multiple projectors 103 are configured to project light;
the sensor 102 is configured to capture an image to be processed, where the image to be processed includes speckles formed by the reflected light projected by the multiple projectors;
the processor 101 is configured to: compare the image to be processed with a preset speckle image to obtain the disparity corresponding to each pixel at each speckle position in the image to be processed, where the preset speckle image includes the initial reference position of each pixel at each speckle position; for any baseline, calculate, based on the baseline and the disparity corresponding to each pixel at each speckle position, the depth value of each pixel at each speckle position corresponding to the baseline, where a baseline represents the distance between a projector and the sensor, and different projectors are at different distances from the sensor; for any baseline, generate a depth map corresponding to the baseline based on the depth values of the pixels at the speckle positions corresponding to the baseline; and fuse the depth maps corresponding to the baselines to obtain a target depth map.
The multiple projectors in the embodiments of this application may be physically independent projectors. In actual use, the projectors should be mounted on the same horizontal line as far as possible, so as to reduce the time lost in computation. When the projectors are installed on a smart mobile device, the projectors and the sensor may be installed on the same side or on different sides of the device. Each projector is at a different distance from the sensor, so the baselines formed are also different. For example, referring to Figure 2, if the distances between projector 1, projector 2, projector 3 and the sensor are s1, s2, and s3 respectively, three baselines s1, s2, and s3 are formed.
After the sensor captures the image to be processed, it may also preprocess the captured image. Specifically, the preprocessing methods include contrast enhancement, histogram equalization, binarization, and the like; preprocessing can enhance image features and equalize brightness.
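As a concrete illustration of the preprocessing mentioned above, the following is a self-contained sketch of histogram equalization and binarization for an 8-bit grayscale image stored as a list of rows (pure Python for clarity; a real pipeline would typically use an image-processing library):

```python
def equalize_histogram(img, levels=256):
    """Histogram equalization for an 8-bit grayscale image (list of rows)."""
    flat = [p for row in img for p in row]
    hist = [0] * levels
    for p in flat:
        hist[p] += 1
    # cumulative distribution of pixel values
    cdf, total = [], 0
    for h in hist:
        total += h
        cdf.append(total)
    cdf_min = next(c for c in cdf if c > 0)
    n = len(flat)
    if n == cdf_min:          # constant image: nothing to equalize
        return [row[:] for row in img]
    # map each value through the normalized CDF back to [0, levels-1]
    lut = [round((c - cdf_min) * (levels - 1) / (n - cdf_min)) for c in cdf]
    return [[lut[p] for p in row] for row in img]

def binarize(img, threshold):
    """Simple fixed-threshold binarization to a 0/1 image."""
    return [[1 if p >= threshold else 0 for p in row] for row in img]
```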
Comparing the image to be processed with the preset speckle image to obtain the disparity corresponding to each pixel at each speckle position can be done by matching each pixel at each speckle position in the captured image against the preset speckle image, obtaining the disparity corresponding to each such pixel and generating a corresponding disparity map. When, for any baseline, the depth of each speckle is computed from the baseline and the pixels at the speckle positions, the corresponding depth values can be computed from the generated disparity map and a preset plane equation to generate the depth map. For example, the speckle image captured by the sensor is matched against the reference-plane speckle image recorded during the calibration stage to compute a disparity map; the depth map is then computed from this disparity map, the reference plane equation, the camera focal length, and the distance between the speckle projector and the sensor.
For any baseline, the depth value of each pixel at each speckle position corresponding to the baseline can be computed from the baseline and the corresponding disparities using various preset algorithms, for example a monocular depth algorithm. For instance, depth is computed pairwise between the sensor and the projectors: the sensor and projector 1 use the monocular depth calculation method to compute depth map m1 with S1 as the baseline; the sensor and projector 2 compute depth map m2 with S2 as the baseline; and the sensor and projector 3 compute depth map m3 with S3 as the baseline. The specific calculation process can be found in the subsequent embodiments.
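The image-matching step that produces the disparities can be illustrated with a toy 1-D block matcher: each patch of the captured scanline is compared against the reference scanline at candidate shifts, and the shift with the smallest sum of absolute differences (SAD) is taken as the disparity. This is a deliberately simplified sketch; the patent does not specify the matching algorithm:

```python
def match_disparity(row, ref_row, patch=3, max_disp=16):
    """1-D SAD patch matching of a captured scanline against the
    reference speckle scanline; returns an integer disparity per pixel.
    Border pixels and unmatched pixels keep disparity 0."""
    n = len(row)
    disp = [0] * n
    half = patch // 2
    for x in range(half, n - half):
        best, best_d = None, 0
        # candidate shifts that keep the reference patch in bounds
        for d in range(0, min(max_disp, x - half) + 1):
            sad = sum(abs(row[x - half + i] - ref_row[x - d - half + i])
                      for i in range(patch))
            if best is None or sad < best:
                best, best_d = sad, d
        disp[x] = best_d
    return disp
```

With the disparities in hand, Z = f·b/d converts them to depth for each baseline.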
In the embodiments of this application, the inventor found through research that when depth values are computed from different baselines, the best accuracy of each baseline tends to lie within a certain depth range, as shown in Figure 3. Therefore, when computing and generating the depth map for each baseline, only depths within a certain distance range are computed for that baseline, which improves the accuracy of the computation. The depth maps generated for the baselines are then fused to obtain the target depth map, so that a complete depth map is obtained through fusion.
In the prior art, multiple sensors are deployed to improve the accuracy of depth computation. However, connecting a large number of image sensors not only places high demands on the SOC (System on Chip), but using multiple SOCs also raises multi-SOC synchronization problems, resulting in high system complexity, which is detrimental to the system's real-time performance and reliability, and also increases cost. With the method of the embodiments of this application, multiple projectors not only improve the accuracy of depth computation, but a projector generally only needs PWM (Pulse Width Modulation) control, so the circuit is simple and easy to control.
It can be seen that with the depth map generation system of the embodiments of this application, multiple baselines representing the different distances between the projectors and the sensor can be used to compute corresponding depth maps separately, and the computed depth maps can be fused, thereby improving the accuracy of depth computation.
Optionally, the processor is specifically configured to: for any baseline, select the plane equation corresponding to the baseline; and calculate the depth of each pixel at each speckle position according to the selected plane equation and the disparity corresponding to each pixel at each speckle position, to obtain the depth value of each pixel at each speckle position.
In actual use, the sensor intrinsics can be calibrated in advance and then used to rectify the image. For example, a reference plane is selected, and an existing camera calibration algorithm determines the plane equation of the reference plane in the camera coordinate system as well as the baseline distances s1, s2, s3 between the projectors and the sensor. Each projector projects its speckle image onto the reference plane, and a reference image R1, R2, R3 is recorded for each projector. Based on the maximum disparity value dmax set by the algorithm, the depth intervals [0, d1), [d1, d2), [d2, d3), [d3, +∞) are computed, where f is the equivalent focal length of the rectified camera, thereby calibrating the sensor against each projector in turn.
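The interval boundaries themselves are not spelled out in the surviving text. Since the minimum depth measurable with baseline s at the maximum searchable disparity dmax is f·s/dmax, one plausible construction (an assumption, marked as such in the code, not the patent's stated formula) is:

```python
def depth_intervals(f, baselines, d_max):
    """Boundaries d1 < d2 < d3 for baselines s1 < s2 < s3.

    Assumes d_i = f * s_i / d_max, i.e. the depth at which baseline s_i
    reaches the matcher's maximum disparity; the source elides the exact
    formula, so this is an illustrative reconstruction.
    """
    bounds = [f * s / d_max for s in sorted(baselines)]
    edges = [0.0] + bounds + [float("inf")]
    # pair consecutive edges into half-open intervals [lo, hi)
    return list(zip(edges[:-1], edges[1:]))
```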
其中,通过平面方程和视差,计算各散斑的深度时,需要获取并根据基线、视差、视场、深度等之间的对应关系,计算不同视差对应的深度。因此,在实际使用过程中,可以预先针对不同基线创建平面方程,该平面方程可以表示该基线对应的视差和深度的对应关系。在计算深度时,可以先根据基线选取对应的平面方程,然后根据视差计算对应的深度,从而提高计算的效率和精度。Among them, when calculating the depth of each speckle through plane equations and parallax, it is necessary to obtain and calculate the depth corresponding to different parallax based on the correspondence between the baseline, parallax, field of view, depth, etc. Therefore, during actual use, plane equations can be created in advance for different baselines, and the plane equations can represent the corresponding relationship between the parallax and depth corresponding to the baseline. When calculating depth, you can first select the corresponding plane equation based on the baseline, and then calculate the corresponding depth based on the parallax, thereby improving calculation efficiency and accuracy.
Optionally, the processor is specifically configured to: for any baseline, calculate the depth values of the pixels at multiple speckle positions according to that baseline and the disparity corresponding to each pixel at each speckle position; and, for any baseline, according to the preset depth range corresponding to that baseline, extract the depth values of the speckles whose depth values fall within that preset depth range, to obtain the depth value of each pixel at each speckle position corresponding to that baseline, where each baseline corresponds to one preset depth range and different baselines correspond to different preset depth ranges.
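The per-baseline range extraction described above can be sketched as follows (a minimal illustration only, not the patented implementation; the array layout and the use of NaN to mark pixels outside the range are assumptions):

```python
import numpy as np

def extract_in_range(depth, depth_range):
    """Keep only depth values inside the half-open interval [lo, hi);
    pixels outside the baseline's preset range are marked invalid (NaN)."""
    lo, hi = depth_range
    out = np.full_like(depth, np.nan, dtype=float)
    mask = (depth >= lo) & (depth < hi)
    out[mask] = depth[mask]
    return out

# Each baseline keeps only the depths inside its own preset range, e.g.
# the range assigned to one baseline might be [1.0 m, 2.0 m):
kept = extract_in_range(np.array([0.5, 1.2, 2.5]), (1.0, 2.0))
```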
For example, depth calculation is performed pairwise between the sensor and the projectors: the sensor and projector 1 use the monocular depth calculation method to compute depth map m1 with s1 as the baseline; the sensor and projector 2 compute depth map m2 with s2 as the baseline; and the sensor and projector 3 compute depth map m3 with s3 as the baseline. During fusion, m1, computed with the minimum baseline s1, serves as the reference: for each point, the final depth map m selects the corresponding value from one of the depth maps according to which of the depth intervals [0, d1), [d1, d2), [d2, d3), [d3, +∞), determined during calibration, the point in m1 falls into. The formula is as follows:
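The selection formula itself appears in a figure that is not reproduced in this text, so the interval-to-map assignment below is one plausible reading (each boundary di taken as the near limit of baseline si's working range, so longer baselines serve farther intervals); a minimal sketch under that assumption:

```python
import numpy as np

def fuse_depth_maps(m1, m2, m3, d1, d2, d3):
    """Fuse the per-baseline depth maps m1, m2, m3. The reference map m1
    (shortest baseline s1) decides, per pixel, which map supplies the
    output. Pixels nearer than d1 are left as 0 ('no depth') -- how the
    patent handles [0, d1) is an assumption here."""
    m = np.zeros_like(m1)
    m = np.where((m1 >= d1) & (m1 < d2), m1, m)  # [d1, d2)   -> m1
    m = np.where((m1 >= d2) & (m1 < d3), m2, m)  # [d2, d3)   -> m2
    m = np.where(m1 >= d3, m3, m)                # [d3, +inf) -> m3
    return m
```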
Finally, for any baseline, a depth map is generated from the extracted depth values.
Optionally, the multiple projectors are specifically configured to project light through each projector alternately;
the sensor is specifically configured to collect an image to be processed once every preset time interval.
Optionally, among the multiple projectors, the exposure duration of the projector closest to the sensor is shorter than the exposure duration of the projector farthest from the sensor.
In actual use, the exposure durations of the different projectors can be proportional to the corresponding baselines. For example, with three projectors whose baselines are denoted d1, d2, d3 and whose exposure times are denoted t1, t2, t3, the exposure times can be computed by the following steps:
If the time to synthesize one frame of the depth map is T, then t1 + t2 + t3 = T;
t1 : t2 : t3 = d1 : d2 : d3;
therefore, t1 = d1*T/(d1 + d2 + d3);
t2 = d2*T/(d1 + d2 + d3);
t3 = d3*T/(d1 + d2 + d3).
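The derivation above simply splits the frame budget T across the projectors in proportion to their baselines; a minimal sketch:

```python
def exposure_times(T, baselines):
    """Split the total frame time T across projectors in proportion to
    their baselines: longer baseline -> longer exposure."""
    total = sum(baselines)
    return [b * T / total for b in baselines]

# With T = 30 ms and baselines in the ratio 1:2:3, the exposures come out
# as 5 ms, 10 ms, and 15 ms, matching the worked example in the text.
times = exposure_times(30.0, [1.0, 2.0, 3.0])
```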
Referring to Figure 4, the depth map generation system includes an image sensor, multiple projectors, a projector drive circuit, image signal processing modules (image signal processors), depth image processing modules, digital signal processing modules (digital signal processors), an encoder, and a projector control module.
The projector control module is used to control the multiple projectors to encode the space alternately. Each projector may have its own image signal processor: the image encoded by projector 1 is input to image signal processing module 1, the image encoded by projector 2 to image signal processing module 2, and the image encoded by projector 3 to image signal processing module 3. Image signal processing modules 1, 2, and 3 may be three physically independent modules, or a single module used with time-division multiplexing. Likewise, DPU (depth processing unit) 1, DPU 2, DPU 3, digital signal processor 1, and digital signal processor 2 may be multiple physically independent modules or a single time-division-multiplexed module.
For the image acquisition process with three projectors, taking an output frame rate of 30 FPS as an example, the total time to acquire the three projector image frames is 1/30 s, so the average acquisition time per frame is 1/90 s. Since the projected energy decreases with distance, to make the near and far energy more uniform and the depth map quality higher, the exposure duration of a short-baseline projector's image frame is shorter than that of a long-baseline projector's image frame. For example, projector image frame 1 is exposed for 5 ms, projector image frame 2 for 10 ms, and projector image frame 3 for 15 ms.
In the embodiments of the present application, multiple projectors and sensors form a multi-view system. In the absence of texture, this system can illuminate an object with infrared structured light from a projector, projecting an artificial texture onto the object's surface, and then obtain a depth map through a dual-mode depth acquisition approach, yielding a depth map of higher quality and stronger reliability. In general, the distances s1 and s2 between the projector and the two cameras are smaller than the distance s3 between the two cameras. In a bright outdoor environment, the structured-light intensity at close range is high, so the monocular mode can be used to obtain the depth map; at long range, where the structured light is attenuated, the binocular depth calculation mode can be used to obtain the fused depth map. With this dual-mode depth acquisition method, the resulting image quality is higher than with a traditional structured-light or binocular solution.
To illustrate the solution of the present application, a description is given below with reference to Figure 5 and specific embodiments:
Depth map generation method:
1. Offline processing:
Calibrate the sensor and each projector pairwise offline: select a reference plane, and use an existing camera calibration algorithm to determine the plane equation F of the reference plane in the camera coordinate system, as well as the baseline distances s1, s2, s3 between the projectors and the sensor.
Each projector projects a speckle pattern onto the reference plane, and reference images R1, R2, R3 are recorded for the respective projectors. From the maximum disparity value dmax set by the algorithm, the depth intervals [0, d1), [d1, d2), [d2, d3), [d3, +∞) are computed, where f is the equivalent focal length of the corrected camera.
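The text does not spell out how the interval boundaries are derived from f, the baselines, and dmax. Under the standard triangulation relation depth = f·s/disparity, one plausible reading, used purely as an illustration, is that each boundary di is the nearest depth at which baseline si still keeps its disparity within dmax:

```python
def depth_thresholds(f, baselines, dmax):
    """Assumed boundary: for baseline s, the disparity reaches dmax at
    depth f*s/dmax, so that is the near limit of s's working range.
    f in pixels and baselines in meters give thresholds in meters."""
    return [f * s / dmax for s in baselines]

# With ascending baselines the thresholds d1 < d2 < d3 come out ascending,
# yielding the intervals [0, d1), [d1, d2), [d2, d3), [d3, +inf).
```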
2. Online processing:
Step 1: the laser speckle projectors perform spatial encoding.
Step 2: the sensor collects images and preprocesses them. Preprocessing methods include contrast enhancement, histogram equalization, binarization, and the like; preprocessing can enhance image features and equalize brightness.
Step 3: depth calculation is performed pairwise between the sensor and the projectors: the sensor and projector 1 use the monocular depth calculation method to compute depth map m1 with s1 as the baseline; the sensor and projector 2 compute depth map m2 with s2 as the baseline; and the sensor and projector 3 compute depth map m3 with s3 as the baseline.
The monocular depth calculation proceeds as follows: correct the image using the offline-calibrated camera intrinsic parameters; match the speckle image captured by the camera against the reference-plane speckle image recorded during calibration to obtain a disparity map; and compute the depth map from this disparity map, the reference plane equation, the camera focal length, and the baseline distance between the projector and the sensor.
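The text names the inputs of the triangulation but not the explicit formula. A common form for reference-plane speckle systems, used here only as an illustrative assumption (including its sign convention), relates depth z to the disparity d against the reference plane at depth z0 via d = f·s·(1/z − 1/z0):

```python
def depth_from_disparity(disp, f, s, z0):
    """Reference-plane model (assumed): disp = f*s*(1/z - 1/z0), hence
    z = 1 / (disp/(f*s) + 1/z0). disp and f in pixels, s and z0 in
    meters; zero disparity means the point lies on the reference plane."""
    return 1.0 / (disp / (f * s) + 1.0 / z0)
```

With this convention, a positive disparity corresponds to a point nearer than the reference plane.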
Step 4: fuse the three depth maps obtained in the previous step: m1, computed with the minimum baseline s1, serves as the reference, and for each point the final depth map m selects the corresponding value from one of the depth maps according to which of the depth intervals [0, d1), [d1, d2), [d2, d3), [d3, +∞), determined during calibration, the point in m1 falls into.
A second aspect of the embodiments of the present application provides a depth map generation method, applied to a processor in a depth map generation system, the depth map generation system including a sensor, a processor, and multiple projectors. Referring to Figure 6, the method includes:
Step S61: obtain an image to be processed, where the image to be processed includes speckles formed by reflected light projected by the multiple projectors;
Step S62: compare the image to be processed with a preset speckle image to obtain the disparity corresponding to each pixel at each speckle position in the image to be processed, where the preset speckle image includes the initial reference position of each pixel at each speckle position;
Step S63: for any baseline, calculate the depth value of each pixel at each speckle position corresponding to that baseline according to that baseline and the disparity corresponding to each pixel at each speckle position, where a baseline represents the distance between a projector and the sensor, and different projectors are at different distances from the sensor;
Step S64: for any baseline, generate the depth map corresponding to that baseline according to the depth value of each pixel at each speckle position corresponding to that baseline;
Step S65: fuse the depth maps corresponding to the baselines to obtain a target depth map.
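Steps S61 to S65 can be sketched end to end as follows (a structural outline only: the per-pixel matcher is a stand-in, the triangulation uses an assumed reference-plane relation, and all names are illustrative):

```python
import numpy as np

def generate_target_depth(image, references, baselines, f, z0, ranges):
    """S61-S65: match the captured image against each projector's reference
    pattern (S62), triangulate a depth map per baseline (S63/S64), keep each
    baseline's preset depth range, and merge the results (S65)."""
    fused = np.zeros_like(image, dtype=float)
    for ref, s, (lo, hi) in zip(references, baselines, ranges):
        disp = image - ref                          # stand-in for block matching
        depth = 1.0 / (disp / (f * s) + 1.0 / z0)   # assumed reference-plane model
        mask = (depth >= lo) & (depth < hi)
        fused[mask] = depth[mask]                   # each range filled by its baseline
    return fused
```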
Optionally, for any baseline, calculating the depth value of each pixel at each speckle position corresponding to that baseline according to that baseline and the disparity corresponding to each pixel at each speckle position includes:
for any baseline, calculating the depth values of the pixels at multiple speckle positions according to that baseline and the disparity corresponding to each pixel at each speckle position;
for any baseline, according to the preset depth range corresponding to that baseline, extracting the depth values of the speckles whose depth values fall within that preset depth range, to obtain the depth value of each pixel at each speckle position corresponding to that baseline, where each baseline corresponds to one preset depth range and different baselines correspond to different preset depth ranges.
Optionally, for any baseline, calculating the depth value of each pixel at each speckle position corresponding to that baseline according to that baseline and the disparity corresponding to each pixel at each speckle position includes:
for any baseline, selecting the plane equation corresponding to that baseline;
calculating the depth of each pixel at each speckle position according to the selected plane equation and the disparity corresponding to each pixel at each speckle position, to obtain the depth value of each pixel at each speckle position.
It can be seen that, with the depth map generation method of the embodiments of the present application, multiple baselines, corresponding to the different distances between the various projectors and the sensor, can be used to generate corresponding depth maps separately, and the generated depth maps can be fused, thereby improving the accuracy of depth calculation.
A third aspect of the embodiments of the present application provides an autonomous mobile device including a depth map generation system, the depth map generation system including a sensor, a processor, and multiple projectors;
the multiple projectors are configured to project light;
the sensor is configured to collect an image to be processed, where the image to be processed includes speckles formed by reflected light projected by the multiple projectors;
the processor is configured to: compare the image to be processed with a preset speckle image to obtain the disparity corresponding to each pixel at each speckle position in the image to be processed, where the preset speckle image includes the initial reference position of each pixel at each speckle position; for any baseline, calculate the depth value of each pixel at each speckle position corresponding to that baseline according to that baseline and the disparity corresponding to each pixel at each speckle position, where a baseline represents the distance between a projector and the sensor, and different projectors are at different distances from the sensor; for any baseline, generate the depth map corresponding to that baseline according to the depth value of each pixel at each speckle position corresponding to that baseline; and fuse the depth maps corresponding to the baselines to obtain a target depth map.
Optionally, the processor is specifically configured to: for any baseline, calculate the depth values of the pixels at multiple speckle positions according to that baseline and the disparity corresponding to each pixel at each speckle position; and, for any baseline, according to the preset depth range corresponding to that baseline, extract the depth values of the speckles whose depth values fall within that preset depth range, to obtain the depth value of each pixel at each speckle position corresponding to that baseline, where each baseline corresponds to one preset depth range and different baselines correspond to different preset depth ranges.
Optionally, the processor is specifically configured to: for any baseline, select the plane equation corresponding to that baseline; and calculate the depth of each pixel at each speckle position according to the selected plane equation and the disparity corresponding to each pixel at each speckle position, to obtain the depth value of each pixel at each speckle position.
Optionally, the multiple projectors are specifically configured to project light through each projector alternately;
the sensor is specifically configured to collect an image to be processed once every preset time interval.
Optionally, among the multiple projectors, the exposure duration of the projector closest to the sensor is shorter than the exposure duration of the projector farthest from the sensor.
Specifically, the autonomous mobile device may be a device capable of moving automatically, such as a mobile robot or a sweeping robot. The device can measure depth and generate depth maps with the depth map generation system, and then move or stop autonomously according to the detection results. For example, referring to Figure 7, multiple projectors and one sensor may be installed on each of the front and rear of the autonomous mobile device; the processor can control the front and rear projectors to project speckles forward and backward, collect speckle images through the front and rear sensors, and then input the collected images into the processor for calculation, obtaining corresponding depth calculation results, such as the clustering of obstacles in front of or behind the device, so that the device stops when depth detection finds an obstacle ahead, or continues when depth detection finds that the obstacle ahead has been removed.
It can be seen that, with the processor of the embodiments of the present application, multiple baselines, corresponding to the different distances between the various projectors and the sensor, can be used to generate corresponding depth maps separately, and the generated depth maps can be fused, thereby improving the accuracy of depth calculation.
An embodiment of the present application further provides an electronic device, as shown in Figure 8, including a processor 801, a communication interface 802, a memory 803, and a communication bus 804, where the processor 801, the communication interface 802, and the memory 803 communicate with one another through the communication bus 804;
the memory 803 is configured to store a computer program;
the processor 801 is configured to implement the following steps when executing the program stored in the memory 803:
obtaining an image to be processed, where the image to be processed includes speckles formed by reflected light projected by multiple projectors;
comparing the image to be processed with a preset speckle image to obtain the disparity corresponding to each pixel at each speckle position in the image to be processed, where the preset speckle image includes the initial reference position of each pixel at each speckle position;
for any baseline, calculating the depth value of each pixel at each speckle position corresponding to that baseline according to that baseline and the disparity corresponding to each pixel at each speckle position, where a baseline represents the distance between a projector and the sensor, and different projectors are at different distances from the sensor;
for any baseline, generating the depth map corresponding to that baseline according to the depth value of each pixel at each speckle position corresponding to that baseline;
fusing the depth maps corresponding to the baselines to obtain a target depth map.
The communication bus of the above electronic device may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The communication bus may be divided into an address bus, a data bus, a control bus, and so on. For ease of presentation, only one thick line is used in the figure, but this does not mean that there is only one bus or only one type of bus.
The communication interface is used for communication between the above electronic device and other devices.
The memory may include Random Access Memory (RAM) and may also include Non-Volatile Memory (NVM), for example at least one disk storage. Optionally, the memory may also be at least one storage device located remotely from the aforementioned processor.
The above processor may be a general-purpose processor, including a Central Processing Unit (CPU), a Network Processor (NP), and the like; it may also be a Digital Signal Processor (DSP), an Application-Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
In yet another embodiment provided by the present application, a computer-readable storage medium is further provided, which stores a computer program; when the computer program is executed by a processor, the steps of any of the above depth map generation methods are implemented.
In yet another embodiment provided by the present application, a computer program product containing instructions is further provided, which, when run on a computer, causes the computer to execute any of the depth map generation methods in the above embodiments.
The above embodiments may be implemented in whole or in part by software, hardware, firmware, or any combination thereof. When implemented in software, they may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on a computer, the processes or functions described in the embodiments of the present application are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another by wired (e.g., coaxial cable, optical fiber, Digital Subscriber Line (DSL)) or wireless (e.g., infrared, radio, microwave) means. The computer-readable storage medium may be any available medium accessible to a computer, or a data storage device such as a server or data center integrating one or more available media. The available medium may be a magnetic medium (e.g., floppy disk, hard disk, magnetic tape), an optical medium (e.g., DVD), a semiconductor medium (e.g., a Solid State Disk (SSD)), or the like.
It should be noted that, in this document, relational terms such as "first" and "second" are used only to distinguish one entity or operation from another, and do not necessarily require or imply any such actual relationship or order between these entities or operations. Moreover, the terms "comprise", "include", and any variants thereof are intended to cover a non-exclusive inclusion, so that a process, method, article, or device that includes a list of elements includes not only those elements, but also other elements not expressly listed, or elements inherent to such a process, method, article, or device. Without further limitation, an element defined by the phrase "comprises a ..." does not exclude the presence of additional identical elements in the process, method, article, or device that includes the element.
Each embodiment in this specification is described in a related manner; for identical or similar parts among the embodiments, reference may be made to one another, and each embodiment focuses on its differences from the others. In particular, since the method, autonomous mobile device, computer-readable storage medium, and computer program product embodiments are essentially similar to the method embodiments, their description is relatively brief; for relevant details, refer to the partial description of the method embodiments.
The above are only preferred embodiments of the present application and are not intended to limit its scope of protection. Any modifications, equivalent replacements, improvements, and the like made within the spirit and principles of the present application fall within the scope of protection of the present application.

Claims (14)

  1. A depth map generation system, comprising a sensor, a processor, and a plurality of projectors;
    the plurality of projectors being configured to project light;
    the sensor being configured to collect an image to be processed, wherein the image to be processed comprises speckles formed by reflected light projected by the plurality of projectors;
    the processor being configured to: compare the image to be processed with a preset speckle image to obtain a disparity corresponding to each pixel at each speckle position in the image to be processed, wherein the preset speckle image comprises an initial reference position of each pixel at each speckle position; for any baseline, calculate a depth value of each pixel at each speckle position corresponding to the baseline according to the baseline and the disparity corresponding to each pixel at each speckle position, wherein a baseline represents a distance between a projector and the sensor, and distances between different projectors and the sensor are different; for any baseline, generate a depth map corresponding to the baseline according to the depth value of each pixel at each speckle position corresponding to the baseline; and fuse the depth maps corresponding to the baselines to obtain a target depth map.
  2. The system according to claim 1, wherein
    the processor is specifically configured to: for any baseline, calculate depth values of pixels at a plurality of speckle positions according to the baseline and the disparity corresponding to each pixel at each speckle position; and, for any baseline, according to a preset depth range corresponding to the baseline, extract depth values of speckles whose depth values fall within the preset depth range, to obtain the depth value of each pixel at each speckle position corresponding to the baseline, wherein each baseline corresponds to one preset depth range, and different baselines correspond to different preset depth ranges.
  3. The system according to claim 1, wherein
    the processor is specifically configured to: for any baseline, select a plane equation corresponding to the baseline; and calculate the depth of each pixel at each speckle position according to the selected plane equation and the disparity corresponding to each pixel at each speckle position, to obtain the depth value of each pixel at each speckle position.
  4. The system according to claim 1, wherein
    the plurality of projectors are specifically configured to project light through each of the projectors alternately;
    the sensor is specifically configured to collect an image to be processed once every preset time interval.
  5. The system according to claim 1, wherein
    among the plurality of projectors, the exposure duration of the projector closest to the sensor is shorter than the exposure duration of the projector farthest from the sensor.
  6. A depth map generation method, applied to a processor in a depth map generation system, the depth map generation system comprising a sensor, a processor, and a plurality of projectors, the method comprising:
    acquiring a to-be-processed image, wherein the to-be-processed image includes speckles formed by reflection of light projected by the plurality of projectors;
    comparing the to-be-processed image with a preset speckle image to obtain a disparity corresponding to each pixel at each speckle position in the to-be-processed image, wherein the preset speckle image comprises an initial reference position of each pixel at each speckle position;
    for any baseline, calculating, based on that baseline and the disparity corresponding to each pixel at each speckle position, a depth value of each pixel at each speckle position corresponding to that baseline, wherein a baseline represents the distance between a projector and the sensor, and the distances between different projectors and the sensor are different;
    for any baseline, generating a depth map corresponding to that baseline based on the depth values of each pixel at each speckle position corresponding to that baseline; and
    fusing the depth maps corresponding to the baselines to obtain a target depth map.
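The per-baseline depth computation and fusion steps of claim 6 can be sketched as follows. The triangulation formula depth = f·B/d is the standard one for a projector–sensor pair; the focal length value, the per-pixel invalidity convention (disparity ≤ 0), and the fusion rule (nearest valid depth wins) are illustrative assumptions, since the patent does not specify a particular fusion strategy here.

```python
import numpy as np

def depth_from_disparity(disparity, baseline_m, focal_px):
    """Standard triangulation: depth = f * B / d, computed only where d > 0."""
    depth = np.zeros_like(disparity, dtype=np.float64)
    valid = disparity > 0
    depth[valid] = focal_px * baseline_m / disparity[valid]
    return depth

def fuse_depth_maps(depth_maps):
    """Fuse per-baseline depth maps: keep the nearest valid depth per pixel.
    Pixels with no valid depth in any map come out as 0."""
    stack = np.stack(depth_maps).astype(np.float64)
    stack[stack <= 0] = np.inf      # treat invalid pixels as infinitely far
    fused = stack.min(axis=0)       # nearest valid depth wins
    fused[np.isinf(fused)] = 0.0    # no baseline produced a depth here
    return fused
```

With a hypothetical 0.05 m baseline and 500 px focal length, a disparity of 2 px yields a depth of 12.5 m, and a pixel with zero disparity stays invalid until another baseline's map fills it in.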
  7. The method according to claim 6, wherein said calculating, for any baseline, the depth value of each pixel at each speckle position corresponding to that baseline based on that baseline and the disparity corresponding to each pixel at each speckle position comprises:
    for any baseline, calculating depth values of pixels at a plurality of speckle positions based on that baseline and the disparity corresponding to each pixel at each speckle position; and
    for any baseline, extracting, according to a preset depth range corresponding to that baseline, the depth values of the speckles whose depth values fall within that preset depth range, to obtain the depth value of each pixel at each speckle position corresponding to that baseline, wherein each baseline corresponds to one preset depth range, and different baselines correspond to different preset depth ranges.
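The range gating of claim 7 can be sketched as below: each baseline keeps only depths inside its own preset working range, reflecting that a short baseline measures near objects more reliably and a long baseline far ones. The specific range values and baseline names are hypothetical; the patent leaves the ranges as configurable presets.

```python
import numpy as np

# Hypothetical per-baseline working ranges in metres (illustrative values).
DEPTH_RANGES = {
    "short_baseline": (0.1, 1.0),
    "long_baseline": (1.0, 5.0),
}

def filter_by_depth_range(depth_map, baseline_name):
    """Keep only depth values inside this baseline's preset range;
    zero out everything else so the fusion step can ignore it."""
    lo, hi = DEPTH_RANGES[baseline_name]
    return np.where((depth_map >= lo) & (depth_map <= hi), depth_map, 0.0)
```

Applied to the same raw depths, the two baselines thus retain complementary, non-overlapping subsets of pixels.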
  8. The method according to claim 6, wherein said calculating, for any baseline, the depth value of each pixel at each speckle position corresponding to that baseline based on that baseline and the disparity corresponding to each pixel at each speckle position comprises:
    for any baseline, selecting a plane equation corresponding to that baseline; and
    calculating the depth of each pixel at each speckle position based on the selected plane equation and the disparity corresponding to each pixel at each speckle position, to obtain the depth value of each pixel at each speckle position.
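Claim 8's plane-equation approach matches the common monocular structured-light setup in which the preset speckle image is recorded against a reference plane at a known calibration depth z0. One common formulation relates depth to disparity against that plane via 1/Z = 1/Z0 + d/(f·B); the sign convention for d (positive disparity meaning nearer than the reference plane) and the parameter values are assumptions for illustration, not details from the patent.

```python
def depth_from_reference_plane(disparity_px, z0_m, baseline_m, focal_px):
    """Depth from disparity measured against a reference plane at depth z0.
    Uses 1/Z = 1/Z0 + d/(f*B); the sign convention for d is an assumption."""
    denom = focal_px * baseline_m + disparity_px * z0_m
    if denom <= 0:
        return None  # disparity outside the model's valid range
    return focal_px * baseline_m * z0_m / denom
```

With a hypothetical reference plane at 1.0 m, baseline 0.05 m, and focal length 500 px, zero disparity recovers the plane depth itself, and a positive disparity yields a nearer depth.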
  9. An autonomous mobile device, comprising a depth map generation system, the depth map generation system comprising a sensor, a processor, and a plurality of projectors, wherein
    the plurality of projectors are configured to project light;
    the sensor is configured to capture a to-be-processed image, wherein the to-be-processed image includes speckles formed by reflection of light projected by the plurality of projectors; and
    the processor is configured to: compare the to-be-processed image with a preset speckle image to obtain a disparity corresponding to each pixel at each speckle position in the to-be-processed image, wherein the preset speckle image comprises an initial reference position of each pixel at each speckle position; for any baseline, calculate, based on that baseline and the disparity corresponding to each pixel at each speckle position, a depth value of each pixel at each speckle position corresponding to that baseline, wherein a baseline represents the distance between a projector and the sensor, and the distances between different projectors and the sensor are different; for any baseline, generate a depth map corresponding to that baseline based on the depth values of each pixel at each speckle position corresponding to that baseline; and fuse the depth maps corresponding to the baselines to obtain a target depth map.
  10. The autonomous mobile device according to claim 9, wherein
    the processor is specifically configured to: for any baseline, calculate depth values of pixels at a plurality of speckle positions based on that baseline and the disparity corresponding to each pixel at each speckle position; and, for any baseline, extract, according to a preset depth range corresponding to that baseline, the depth values of the speckles whose depth values fall within that preset depth range, to obtain the depth value of each pixel at each speckle position corresponding to that baseline, wherein each baseline corresponds to one preset depth range, and different baselines correspond to different preset depth ranges.
  11. The autonomous mobile device according to claim 9, wherein
    the processor is specifically configured to: for any baseline, select a plane equation corresponding to that baseline; and calculate the depth of each pixel at each speckle position based on the selected plane equation and the disparity corresponding to each pixel at each speckle position, to obtain the depth value of each pixel at each speckle position.
  12. The autonomous mobile device according to claim 9, wherein
    the plurality of projectors are specifically configured to project light through each of the projectors in alternation; and
    the sensor is specifically configured to capture a to-be-processed image once every preset time interval.
  13. The autonomous mobile device according to claim 9, wherein
    among the plurality of projectors, the exposure duration of the projector closest to the sensor is shorter than the exposure duration of the projector farthest from the sensor.
  14. A computer-readable storage medium having a computer program stored therein, wherein the computer program, when executed by a processor, implements the method steps of any one of claims 6-8.
PCT/CN2023/079554 2022-03-30 2023-03-03 Depth map generation system and method, and autonomous mobile device WO2023185375A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202210328464.X 2022-03-30
CN202210328464.XA CN114627174A (en) 2022-03-30 2022-03-30 Depth map generation system and method and autonomous mobile device

Publications (1)

Publication Number Publication Date
WO2023185375A1

Family

ID=81904659

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/079554 WO2023185375A1 (en) 2022-03-30 2023-03-03 Depth map generation system and method, and autonomous mobile device

Country Status (2)

Country Link
CN (1) CN114627174A (en)
WO (1) WO2023185375A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114627174A (en) * 2022-03-30 2022-06-14 杭州萤石软件有限公司 Depth map generation system and method and autonomous mobile device

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10715787B1 (en) * 2019-03-22 2020-07-14 Ulsee Inc. Depth imaging system and method for controlling depth imaging system thereof
CN112150528A (en) * 2019-06-27 2020-12-29 Oppo广东移动通信有限公司 Depth image acquisition method, terminal and computer readable storage medium
CN112927280A (en) * 2021-03-11 2021-06-08 北京的卢深视科技有限公司 Method and device for acquiring depth image and monocular speckle structured light system
CN113311451A (en) * 2021-05-07 2021-08-27 西安交通大学 Laser speckle projection ToF depth sensing method and device
CN113936050A (en) * 2021-10-21 2022-01-14 北京的卢深视科技有限公司 Speckle image generation method, electronic device, and storage medium
CN114627174A (en) * 2022-03-30 2022-06-14 杭州萤石软件有限公司 Depth map generation system and method and autonomous mobile device


Also Published As

Publication number Publication date
CN114627174A (en) 2022-06-14

Similar Documents

Publication Publication Date Title
US9424650B2 (en) Sensor fusion for depth estimation
US8718326B2 (en) System and method for extracting three-dimensional coordinates
EP3416370B1 (en) Photography focusing method, device, and apparatus for terminal
WO2018120027A1 (en) Method and apparatus for detecting obstacles
CN107113415A (en) The method and apparatus for obtaining and merging for many technology depth maps
WO2014044126A1 (en) Coordinate acquisition device, system and method for real-time 3d reconstruction, and stereoscopic interactive device
CN112150528A (en) Depth image acquisition method, terminal and computer readable storage medium
US20090214107A1 (en) Image processing apparatus, method, and program
US20220156954A1 (en) Stereo matching method, image processing chip and mobile vehicle
WO2023185375A1 (en) Depth map generation system and method, and autonomous mobile device
CN108924408B (en) Depth imaging method and system
CN113052066B (en) Multi-mode fusion method based on multi-view and image segmentation in three-dimensional target detection
JPWO2011125937A1 (en) Calibration data selection device, selection method, selection program, and three-dimensional position measurement device
CN104639927A (en) Method for shooting stereoscopic image and electronic device
JP2018538709A (en) Method and apparatus for generating data representing a pixel beam
WO2023142352A1 (en) Depth image acquisition method and device, terminal, imaging system and medium
WO2022218161A1 (en) Method and apparatus for target matching, device, and storage medium
JP2019511851A (en) Method and apparatus for generating data representative of a pixel beam
US11879993B2 (en) Time of flight sensor module, method, apparatus and computer program for determining distance information based on time of flight sensor data
CN111028299A (en) System and method for calculating spatial distance of calibration points based on point attribute data set in image
TW201727578A (en) An apparatus and a method for generating data representing a pixel beam
CN105423916A (en) Measurement method and measurement system for object dimension
CN113014899B (en) Binocular image parallax determination method, device and system
EP3350982B1 (en) An apparatus and a method for generating data representing a pixel beam
JP3998863B2 (en) Depth detection device and imaging device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23777753

Country of ref document: EP

Kind code of ref document: A1