WO2020223868A1 - Terrain information processing method and apparatus, and unmanned vehicle - Google Patents


Info

Publication number
WO2020223868A1
WO2020223868A1 (PCT/CN2019/085655; CN2019085655W)
Authority
WO
WIPO (PCT)
Prior art keywords
height
map
frame
height map
fusion
Prior art date
Application number
PCT/CN2019/085655
Other languages
French (fr)
Chinese (zh)
Inventor
刘晓洋
郑杨杨
张晓炜
Original Assignee
深圳市大疆创新科技有限公司
Priority date
Filing date
Publication date
Application filed by 深圳市大疆创新科技有限公司
Priority to CN201980011875.2A (CN111712855A)
Priority to PCT/CN2019/085655 (WO2020223868A1)
Publication of WO2020223868A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/50 Depth or shape recovery
    • G06T 7/55 Depth or shape recovery from multiple images
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10028 Range image; Depth image; 3D point clouds
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30248 Vehicle exterior or interior
    • G06T 2207/30252 Vehicle exterior; Vicinity of vehicle


Abstract

A terrain information processing method and apparatus, and an unmanned vehicle. The method comprises: obtaining N frames of depth maps acquired by a depth sensor (S301); performing terrain segmentation on each frame of depth map to obtain a terrain area in each frame (S302); obtaining a height map of the terrain area in each frame of depth map according to the terrain area in that frame (S303); and obtaining, according to the height maps of the terrain areas in the frames of depth maps, a fused height map in which the N frames of height maps are fused, the fused height map being used for terrain estimation of the terrain area (S304). Because the fused height map combines the height information of the terrain area across the N frames of height maps, the influence of noise in any single frame of height map on the height values of the terrain area is reduced, and the height values of the fused height map are closer to the actual height values of the terrain area. The terrain estimation of the terrain area obtained by means of the fused height map is therefore more accurate.

Description

Ground information processing method and apparatus, and unmanned vehicle

Technical Field
The embodiments of the present application relate to the field of unmanned driving technology, and in particular to a ground information processing method and apparatus, and an unmanned vehicle.

Background
Unmanned driving technology has developed rapidly in recent years. Ground model estimation is one of its basic technologies: after ground estimation, the estimated ground model can be used to convert lane lines from the camera perspective to a top-down perspective for subsequent lane-line processing, to detect the drivable area for an unmanned vehicle, or to estimate the current pose of an unmanned vehicle for a map-based online localization system.
The current ground model estimation pipeline is: obtain multiple frames of depth maps from the camera perspective while the vehicle is moving, perform ground segmentation on each frame of depth map to obtain the ground area, and then estimate the ground according to the ground area of each individual frame to obtain a ground model. However, this approach causes serious loss of ground information, so the accuracy of the obtained ground model is low.

Summary
The embodiments of the present application provide a ground information processing method and apparatus, and an unmanned vehicle, which obtain a fused height map so that the ground estimation of the ground area obtained through the fused height map is more accurate.
In a first aspect, an embodiment of the present application provides a ground information processing method, including:
obtaining N frames of depth maps collected by a depth sensor, where N is an integer greater than or equal to 2;
performing ground segmentation on each frame of depth map to obtain the ground area in each frame of depth map;
obtaining a height map of the ground area in each frame of depth map according to the ground area in that frame; and
obtaining, according to the height maps of the ground areas in the frames of depth maps, a fused height map in which the N frames of height maps are fused, where the fused height map is used for ground estimation of the ground area.
In a second aspect, an embodiment of the present application provides a ground information processing apparatus, including a depth sensor and a processor.
The depth sensor is configured to collect depth maps.
The processor is configured to: obtain N frames of depth maps collected by the depth sensor, where N is an integer greater than or equal to 2; perform ground segmentation on each frame of depth map to obtain the ground area in each frame of depth map; obtain a height map of the ground area in each frame of depth map according to the ground area in that frame; and obtain, according to the height maps of the ground areas in the frames of depth maps, a fused height map in which the N frames of height maps are fused, where the fused height map is used for ground estimation of the ground area.
In a third aspect, an embodiment of the present application provides an unmanned vehicle, including a depth sensor and a processor.
The depth sensor is configured to collect depth maps.
The processor is configured to: obtain N frames of depth maps collected by the depth sensor, where N is an integer greater than or equal to 2; perform ground segmentation on each frame of depth map to obtain the ground area in each frame of depth map; obtain a height map of the ground area in each frame of depth map according to the ground area in that frame; and obtain, according to the height maps of the ground areas in the frames of depth maps, a fused height map in which the N frames of height maps are fused, where the fused height map is used for ground estimation of the ground area.
In a fourth aspect, an embodiment of the present application provides an unmanned vehicle, including a vehicle body and the ground information processing apparatus according to the second aspect, where the ground information processing apparatus is installed on the vehicle body.
In a fifth aspect, an embodiment of the present application provides a readable storage medium storing a computer program; when executed, the computer program implements the ground information processing method according to the first aspect.
In a sixth aspect, an embodiment of the present application provides a program product including a computer program stored in a readable storage medium; at least one processor of a ground information processing apparatus or an unmanned vehicle can read the computer program from the readable storage medium and execute it, causing the ground information processing apparatus or unmanned vehicle to implement the ground information processing method according to the first aspect.
With the ground information processing method and apparatus and the unmanned vehicle provided by the embodiments of the present application, N frames of depth maps collected by a depth sensor are obtained; ground segmentation is performed on each frame of depth map to obtain the ground area in each frame; a height map of the ground area in each frame of depth map is obtained according to the ground area in that frame; and, according to the height maps of the ground areas in the frames of depth maps, a fused height map in which the N frames of height maps are fused is obtained and used for ground estimation of the ground area. Because the fused height map combines the height information of the ground area across the N frames of height maps, the influence of noise in any single frame of height map on the height values of the ground area is reduced, and the height values of the fused height map are closer to the actual height values of the ground area. The ground estimation of the ground area obtained through the fused height map is therefore more accurate.
Brief Description of the Drawings
To describe the technical solutions in the embodiments of the present application or in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present application, and a person of ordinary skill in the art can derive other drawings from them without creative effort.
FIG. 1 is a schematic architecture diagram of an unmanned vehicle 100 according to an embodiment of the present application;
FIG. 2 is a schematic diagram of an application scenario according to an embodiment of the present application;
FIG. 3 is a flowchart of a ground information processing method according to an embodiment of the present application;
FIG. 4 is a schematic diagram of a height map according to an embodiment of the present application;
FIG. 5 is a schematic structural diagram of a ground information processing apparatus according to an embodiment of the present application;
FIG. 6 is a schematic structural diagram of an unmanned vehicle according to an embodiment of the present application;
FIG. 7 is a schematic structural diagram of an unmanned vehicle according to another embodiment of the present application.
Detailed Description
To make the purposes, technical solutions, and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments are described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present application. Based on the embodiments of the present application, all other embodiments obtained by a person of ordinary skill in the art without creative effort fall within the protection scope of the present application.
It should be noted that when a component is described as being "fixed to" another component, it may be directly on the other component, or an intermediate component may be present. When a component is described as being "connected to" another component, it may be directly connected to the other component, or an intermediate component may be present at the same time.
Unless otherwise defined, all technical and scientific terms used herein have the same meanings as commonly understood by those skilled in the technical field of the present application. The terms used in the specification of the present application are only for the purpose of describing specific embodiments and are not intended to limit the application. The term "and/or" as used herein includes any and all combinations of one or more of the associated listed items.
Some embodiments of the present application are described in detail below with reference to the accompanying drawings. Provided that no conflict arises, the following embodiments and the features in the embodiments may be combined with each other.
The embodiments of the present application provide a ground information processing method and apparatus, and an unmanned vehicle. FIG. 1 is a schematic architecture diagram of an unmanned vehicle 100 according to an embodiment of the present application.
The unmanned vehicle 100 may include a sensing system 110, a control system 120, and a mechanical system 130.
The sensing system 110 is used to measure state information of the unmanned vehicle 100, that is, the sensed data of the unmanned vehicle 100, which may represent position information and/or state information of the unmanned vehicle 100, for example position, angle, velocity, acceleration, and angular velocity. The sensing system 110 may include, for example, at least one of a visual sensor (for example, multiple monocular or binocular vision devices), a lidar, a millimeter-wave radar, an inertial measurement unit (IMU), a global navigation satellite system, a gyroscope, an ultrasonic sensor, an electronic compass, and a barometer. For example, the global navigation satellite system may be the Global Positioning System (GPS).
After the sensing system 110 obtains the sensed data, it may transmit the sensed data to the control system 120. The control system 120 is used to make decisions, based on the sensed data, that control how the unmanned vehicle 100 travels, for example: at what speed to travel, with what braking deceleration to brake, whether to change lanes, or whether to turn left or right. The control system 120 may include, for example, a computing platform, such as an on-board supercomputing platform, or at least one device with processing capability such as a central processing unit or a distributed processing unit. The control system 120 may also include communication links for the various data transmissions on the vehicle.
The control system 120 may output one or more control commands to the mechanical system 130 according to the determined decision. The mechanical system 130 is used to control the unmanned vehicle 100 in response to the one or more control commands from the control system 120 so as to carry out the above decision. For example, the mechanical system 130 may drive the wheels of the unmanned vehicle 100 to rotate, thereby providing power for the travel of the unmanned vehicle 100; the rotation speed of the wheels affects the speed of the vehicle. The mechanical system 130 may include, for example, at least one of a mechanical body engine/motor, a drive-by-wire control system, and the like.
It should be understood that the above naming of the components of the unmanned vehicle is for identification purposes only and should not be understood as limiting the embodiments of the present application.
FIG. 2 is a schematic diagram of an application scenario according to an embodiment of the present application. As shown in FIG. 2, an unmanned vehicle can travel on the ground, and while traveling on the ground it can collect environment information (for example, through the aforementioned sensing system 110), which may include ground information, and then process that ground information; how the processing is performed is described in the following embodiments of the present application.
FIG. 3 is a flowchart of a ground information processing method according to an embodiment of the present application. As shown in FIG. 3, the method of this embodiment may include:
S301: Obtain N frames of depth maps collected by a depth sensor.
In this embodiment, the depth sensor may collect depth maps sequentially at its acquisition frequency, for example N frames of depth maps in total, where N is an integer greater than or equal to 2; this embodiment can obtain the N frames of depth maps collected by the depth sensor.
The method of this embodiment can be applied to an unmanned vehicle, but this embodiment is not limited thereto; it can also be applied to other movable platforms, such as robots. This embodiment takes application to an unmanned vehicle as an example. The unmanned vehicle may be equipped with a depth sensor, which can be used to collect depth maps of the environment the vehicle passes through while moving on the ground; accordingly, the N frames of depth maps collected by the depth sensor during the ground movement of the unmanned vehicle can be obtained.
Optionally, the depth sensor includes a binocular camera, a time-of-flight (TOF) sensor, or a lidar, which is not limited in this embodiment.
S302: Perform ground segmentation on each frame of depth map to obtain the ground area in each frame of depth map.
In this embodiment, ground segmentation can be performed on each frame of depth map collected by the depth sensor to obtain the ground area in each frame. For the ground segmentation procedure, reference may be made to u-disparity-based schemes, which are not described again in this embodiment; it should be noted that this embodiment is also not limited to the above u-disparity-based schemes.
S303: Obtain a height map of the ground area in each frame of depth map according to the ground area in that frame.
In this embodiment, after the ground area of each frame of depth map is obtained, the height map of the ground area in that frame is obtained according to the ground area.
Optionally, a possible implementation of S303 is: first obtain, according to the ground area in each frame of depth map, the point cloud data corresponding to the ground area; then obtain the height map of the ground area in each frame according to that point cloud data. Obtaining the point cloud data corresponding to the ground area in each frame of depth map may include determining each depth pixel in the ground area of the frame as a point of the cloud, thereby obtaining the point cloud data corresponding to the ground area. Then, according to the point cloud data corresponding to the ground area in each frame, the height map of the ground area in that frame is obtained; for example, the point cloud data corresponding to the ground area is orthographically projected into a top-down view, and the resulting height map under the top-down perspective is the height map of the ground area in that frame of depth map. The height value of each height pixel in the obtained height map is the height of the cloud point corresponding to that height pixel.
Optionally, the top-down perspective is the downward direction of the world coordinate system.
Optionally, if multiple points of the cloud project onto the same height pixel of the height map, the height value of that height pixel in the height map is the minimum of the height values of those points. For example, if multiple cloud points have the same x and y coordinate values, this is equivalent to multiple points lying on the vertical column above one ground point; in a scene with traffic lights on the ground, for instance, the same x and y coordinates of the ground area correspond to multiple cloud points with different z coordinates. In this case, the minimum height among these points can be taken as the height value of the height pixel corresponding to that ground point in the height map.
Optionally, multiple cloud points or multiple depth pixels of the depth map may correspond to one height pixel. For example, a depth pixel region containing multiple depth pixels in the x and y directions (for example, 2×2) may correspond to one height pixel: first the minimum height value corresponding to each of those depth pixels can be obtained, and then the average, maximum, or minimum of those minimum height values is determined as the height value of the height pixel.
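The orthographic projection with the per-cell minimum rule described in S303 can be sketched as follows. The cell size, grid shape, and the use of NaN for cells no point projects into are illustrative conventions, not part of the patent.

```python
import numpy as np

def point_cloud_to_height_map(points, cell_size=0.1, grid_shape=(10, 10)):
    """Orthographically project ground points (x, y, z) onto a top-down
    grid; when several points fall into the same cell (e.g. a traffic
    light hanging above a road point), keep the minimum height."""
    height_map = np.full(grid_shape, np.nan)
    ix = (points[:, 0] / cell_size).astype(int)
    iy = (points[:, 1] / cell_size).astype(int)
    valid = (ix >= 0) & (ix < grid_shape[0]) & (iy >= 0) & (iy < grid_shape[1])
    for x, y, z in zip(ix[valid], iy[valid], points[valid, 2]):
        # empty cell: take z; occupied cell: keep the smaller height
        if np.isnan(height_map[x, y]) or z < height_map[x, y]:
            height_map[x, y] = z
    return height_map
```

The 2×2 pooling variant mentioned above would simply aggregate the per-cell minima of neighbouring cells again (by mean, max, or min) before emitting one height pixel.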
It should be noted that, in implementation, S302 and S303 may be executed after all N frames of depth maps collected by the depth sensor have been obtained. Alternatively, S302 and S303 may be executed each time a frame of depth map is acquired: for example, obtain the first frame of depth map collected by the depth sensor and apply S302 and S303 to it, then obtain the second frame of depth map and apply S302 and S303 to it, and so on, thereby obtaining the height map of the ground area in each of the N frames of depth maps.
S304: Obtain, according to the height maps of the ground areas in the frames of depth maps, a fused height map in which the N frames of height maps are fused, where the fused height map is used for ground estimation of the ground area.
In this embodiment, one frame of height map is obtained according to the height maps of the ground areas in the N frames of depth maps; this single height map is the fused height map in which the N frames of height maps are fused, and it can be used for ground estimation of the ground area. Because the fused height map combines the height information of the ground area across the N frames of height maps, the influence of noise in any single frame on the height values of the ground area is reduced, and the height values of the fused height map are closer to the actual height values of the ground area.
Optionally, the height value of each height pixel in the fused height map is the average of that height pixel's height values across the N frames of height maps.
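The per-pixel averaging described above can be sketched as follows, assuming the N height maps are already aligned in the same top-down grid and unobserved cells are marked NaN (these conventions are mine, not the patent's); cells missing from some frames are averaged over the frames that did observe them.

```python
import numpy as np

def fuse_height_maps(height_maps):
    """Fuse N aligned height maps by per-pixel averaging, ignoring
    NaN entries so a cell unseen in some frames still gets a value."""
    stack = np.stack(height_maps)   # shape (N, H, W)
    return np.nanmean(stack, axis=0)
```

Averaging across frames is what damps per-frame sensor noise: a single outlier height in one frame contributes only 1/N of the fused value.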
With the ground information processing method provided in this embodiment, N frames of depth maps collected by a depth sensor are obtained; ground segmentation is performed on each frame of depth map to obtain the ground area in each frame; a height map of the ground area in each frame of depth map is obtained according to the ground area in that frame; and, according to the height maps of the ground areas in the frames of depth maps, a fused height map in which the N frames of height maps are fused is obtained and used for ground estimation of the ground area. Because the fused height map combines the height information of the ground area across the N frames of height maps, the influence of noise in any single frame on the height values of the ground area is reduced, and the height values of the fused height map are closer to the actual height values of the ground area. The ground estimation of the ground area obtained through the fused height map is therefore more accurate.
In some embodiments, a possible implementation of S304 may include S3041 and S3042:
S3041: Obtain, according to the first N-1 frames of height maps among the N frames of height maps, a fused height map in which the first N-1 frames of height maps are fused.
S3042: Obtain, according to the fused height map of the first N-1 frames of height maps and the Nth frame of height map, a fused height map in which the first N frames of height maps are fused.
In this embodiment, the N frames of height maps are the height maps of the ground regions in the N frames of depth maps, namely the first frame of height map, the second frame of height map, ..., the (N-1)-th frame of height map, and the N-th frame of height map. A fused height map in which the first N-1 frames of height maps are fused may first be obtained from the first frame of height map, the second frame of height map, ..., and the (N-1)-th frame of height map. Then, from that fused height map and the N-th frame of height map (i.e., the height map of the ground region in the N-th frame of depth map), the fused height map of the first N frames of height maps is obtained. Optionally, the height value of each height pixel in the fused height map of the first N frames may be the average of that height pixel's height values over the first N frames of height maps.
Optionally, one possible implementation of S3041 above is: obtain a fused height map in which the first i-1 frames of the first N-1 frames of height maps are fused, i being an integer greater than or equal to 2 and less than or equal to N-1; obtain, from the fused height map of the first i-1 frames of height maps and the i-th frame of height map, the fused height map of the first i frames of height maps; and then update i to i+1 until i equals N-1, thereby obtaining the fused height map of the first N-1 frames of height maps.
In this embodiment, starting with i equal to 2, the first frame among the N-1 frames of height maps is the first frame of height map, and the fused height map of the first frame alone is the first frame of height map itself. From the first and second frames of height maps, the fused height map of the first two frames is obtained; after the third frame of height map is obtained, the fused height map of the first three frames is obtained from the fused height map of the first two frames and the third frame of height map; and so on, until the fused height map of the first N-2 frames is obtained. After the (N-1)-th frame of height map is obtained, the fused height map of the first N-1 frames is obtained from the fused height map of the first N-2 frames and the (N-1)-th frame of height map. Optionally, after the fused height map of the first two frames is obtained, the first and second frames of height maps may be deleted; after the fused height map of the first three frames is obtained, the fused height map of the first two frames and the third frame of height map may be deleted; and so on, so that after the fused height map of the first N-1 frames is obtained, the fused height map of the first N-2 frames and the (N-1)-th frame of height map may be deleted, thereby reducing storage pressure and improving processing performance.
Taking the fusion of the first and second frames of height maps as an example, the height pixels of the fused height map of the first two frames are the same as those of the second frame of height map; what differs is the height value assigned to each height pixel. If a height pixel of the second frame of height map also exists in the first frame of height map, its height value in the fused height map of the first two frames is the average of its height values in the second and first frames of height maps; if a height pixel of the second frame of height map does not exist in the first frame of height map, its height value in the fused height map of the first two frames is its height value in the second frame of height map.
Correspondingly, one possible implementation of S3042 above is to determine the average of the height values of a given height pixel in the fused height map of the first N-1 frames of height maps and in the N-th frame of height map as the height value of that height pixel in the fused height map of the first N frames of height maps.
The height pixels of the fused height map of the first N frames of height maps are the same as those of the N-th frame of height map; what differs is the height value assigned to each height pixel. If a height pixel of the N-th frame of height map also exists in the height maps of the first N-1 frames, its height value in the fused height map of the first N frames is the average of its height value in the N-th frame of height map and its height value in the fused height map of the first N-1 frames; if a height pixel of the N-th frame of height map does not exist in the fused height map of the first N-1 frames, its height value in the fused height map of the first N frames is its height value in the N-th frame of height map.
Accordingly, S3042 above can be expressed by formulas: obtain the fused height map of the first N frames of height maps from the fused height map of the first N-1 frames of height maps, the N-th frame of height map, Formula 1, and Formula 2;
Formula 1: h_{N,j} = (w_{N-1,j} × h_{N-1,j} + Z_{N,j}) / (w_{N-1,j} + 1);
Formula 2: w_{N,j} = w_{N-1,j} + 1;
where h_{N,j} denotes the height value of height pixel j in the fused height map of the first N frames of height maps, Z_{N,j} denotes the height value of height pixel j in the N-th frame of height map, h_{N-1,j} denotes the height value of height pixel j in the fused height map of the first N-1 frames of height maps, w_{N,j} denotes the number of times height pixel j appears in the N frames of height maps, and w_{N-1,j} denotes the number of times height pixel j appears in the first N-1 frames of height maps.
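For illustration only (the data structures and function names below are not part of the application), the incremental fusion of S3041/S3042 and Formulas 1 and 2 can be sketched with each height map represented as a dictionary keyed by world-aligned grid cells, which naturally handles height pixels that appear in only some frames:

```python
def fuse_next_frame(fused, counts, frame):
    """One fusion step: fold the newest frame's height map into the running fusion.

    fused:  dict cell -> h_{N-1,j}, fused height map of the first N-1 frames
    counts: dict cell -> w_{N-1,j}, number of frames each height pixel appeared in
    frame:  dict cell -> Z_{N,j},   height map of the N-th frame
    The result keeps exactly the height pixels of the newest frame.
    """
    new_fused, new_counts = {}, {}
    for cell, z in frame.items():
        if cell in fused:
            w = counts[cell]
            # Formula 1: weighted running average over the frames the pixel appeared in
            new_fused[cell] = (w * fused[cell] + z) / (w + 1)
            # Formula 2: w_{N,j} = w_{N-1,j} + 1
            new_counts[cell] = w + 1
        else:
            # Pixel absent from the earlier frames: take the newest frame's value
            new_fused[cell] = z
            new_counts[cell] = 1
    return new_fused, new_counts


def fuse_height_maps(height_maps):
    """Fuse N frames of height maps incrementally; only the running fusion is kept."""
    fused = dict(height_maps[0])
    counts = {cell: 1 for cell in fused}
    for frame in height_maps[1:]:
        fused, counts = fuse_next_frame(fused, counts, frame)
    return fused
```

Because each step needs only the previous fused height map and the newest frame, the earlier per-frame height maps can be discarded, matching the storage-saving scheme described above; the running average produced by Formula 1 equals the plain average of the pixel's height values over the frames in which it appears.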
Referring to FIG. 4, a schematic diagram of height maps provided by an embodiment of this application, take N equal to 3 as an example: the solid-line box at the upper left is, for example, the first frame of height map; the solid-line box in the middle is, for example, the second frame of height map; and the solid-line box at the lower right is, for example, the third frame of height map. Relative to the world frame, each frame of height map is translated in the plane of the height map without rotation, and the translation is guaranteed to be an integer number of height pixels. The second frame of height map is translated by one height pixel horizontally and one height pixel vertically relative to the first frame, and the third frame of height map is translated by one height pixel horizontally and one height pixel vertically relative to the second frame.
When the first and second frames of height maps are fused, the height pixels of the fused height map of the first two frames are the same as those of the second frame of height map. The height pixels in the last column and last row of the second frame of height map do not appear in the first frame of height map; therefore, those height pixels appear once in the first two frames of height maps, and their height values in the fused height map of the first two frames equal their height values in the second frame of height map. The height pixels of the second frame of height map other than the last column and last row all appear in the first frame of height map; therefore, those height pixels appear twice, and their height values in the fused height map of the first two frames equal the average of their height values in the second and first frames of height maps.
When the fused height map of the first two frames and the third frame of height map are fused (that is, when the first, second, and third frames of height maps are fused), the height pixels of the fused height map of the first three frames are the same as those of the third frame of height map. The height pixels in the last column and last row of the third frame of height map do not appear in the fused height map of the first two frames (i.e., in the first or second frame of height map); therefore, those height pixels appear once in the first three frames of height maps, and their height values in the fused height map of the first three frames equal their height values in the third frame of height map. The height pixels in the second-to-last column and second-to-last row of the third frame of height map appear in the second frame of height map but not in the first; therefore, those height pixels appear twice, and their height values in the fused height map of the first three frames equal the average of their height values in the second and third frames of height maps. The height pixels of the third frame of height map other than the last and second-to-last columns and the last and second-to-last rows appear in the first, second, and third frames of height maps; therefore, those height pixels appear three times, and their height values in the fused height map of the first three frames equal the average of their height value in the fused height map of the first two frames and their height value in the third frame of height map (i.e., the average of their height values in the first, second, and third frames of height maps).
In some embodiments, after performing S304 above, the unmanned vehicle further estimates the ground of the ground region according to the fused height map in which the N frames of height maps are fused. Because the fused height map combines the height information of the ground region from N frames of height maps, estimating the ground of the ground region from the single fused height map obtained by fusing multiple frames of height maps yields a more accurate ground estimation result.
Optionally, one possible implementation of estimating the ground of the ground region according to the fused height map of the N frames of height maps may be: fitting a ground model of the ground region according to the fused height map of the ground region and geographic location information of the ground region, the ground model being a function expressing the height of the ground region in terms of the geographic location information of the ground region.
The geographic location information may include longitude and latitude, or it may include map position information constructed in real time by the unmanned vehicle (e.g., SLAM position information); this embodiment is not limited thereto.
Optionally, after the ground model of the ground region is obtained by fitting, a ground model diagram of the ground region may further be generated from the ground model and displayed to the user through a display interface, so that the user can view the ground estimation result intuitively.
Optionally, the ground model includes a B-spline surface model or a polynomial surface model.
Taking the B-spline surface model as an example, the ground model may be expressed as:
f(x, y) = Σ_a Σ_b W_{a,b} · B_{a,n}(x) · B_{b,m}(y)
where B_{a,n}(x) and B_{b,m}(y) represent the geographic location information of the ground region, f(x, y) represents the height of the ground region, W_{a,b} are weight coefficients, n is the order in the x dimension, and m is the order in the y dimension.
For a B-spline, the higher the order, the more complex the curves it can express but the higher the risk of overfitting; the lower the order, the simpler the curves it can express but the less prone it is to overfitting. The number of B-spline control points likewise affects the complexity of the curve and the risk of overfitting, so choosing the distribution and the orders (n, m) of the control points reasonably is critical. In the ground-fitting example, the x direction is the front-rear direction of the unmanned vehicle, and the y direction is its left-right direction. In practical scenes, the observation distance in the x direction (front-rear) is usually long and the ground may slope up or down, while in the y direction (left-right) the road width on the ground is usually limited and the field of view (FOV) of the depth sensor is also limited. Therefore, the control points and orders may be configured as follows: the order n in the x dimension is 2, with a control point set roughly every 20 meters and the forward control points spanning 0 to 100 meters; the order m in the y dimension is 1, with control points set at 20 meters to the left and right respectively, for a total of 2 control points.
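As a minimal sketch of how such a surface is evaluated (the knot vectors and helper names below are illustrative assumptions, not from the application, which only fixes the orders n = 2, m = 1 and the rough control-point spacing), the tensor-product model f(x, y) = Σ_a Σ_b W_{a,b} B_{a,n}(x) B_{b,m}(y) can be computed with the Cox-de Boor recursion:

```python
def bspline_basis(i, p, knots, x):
    """Cox-de Boor recursion: value of the i-th degree-p B-spline basis function at x."""
    if p == 0:
        return 1.0 if knots[i] <= x < knots[i + 1] else 0.0
    left = right = 0.0
    if knots[i + p] != knots[i]:
        left = (x - knots[i]) / (knots[i + p] - knots[i]) * bspline_basis(i, p - 1, knots, x)
    if knots[i + p + 1] != knots[i + 1]:
        right = (knots[i + p + 1] - x) / (knots[i + p + 1] - knots[i + 1]) * bspline_basis(i + 1, p - 1, knots, x)
    return left + right


def ground_height(x, y, weights, kx, ky, n=2, m=1):
    """f(x, y) = sum_a sum_b W[a][b] * B_{a,n}(x) * B_{b,m}(y)."""
    return sum(
        weights[a][b] * bspline_basis(a, n, kx, x) * bspline_basis(b, m, ky, y)
        for a in range(len(weights))
        for b in range(len(weights[0]))
    )


# Illustrative clamped knot vectors: degree 2 in x with interior knots every 20 m
# over 0..100 m (7 x-control-points), degree 1 in y over +/-20 m (2 y-control-points).
kx = [0.0] * 3 + [20.0, 40.0, 60.0, 80.0] + [100.0] * 3
ky = [-20.0] * 2 + [20.0] * 2
```

Fitting the weight coefficients W_{a,b} to the fused height map would then reduce to a linear least-squares problem in this basis (e.g., via `scipy.interpolate.LSQBivariateSpline` with kx=2, ky=1); a polynomial surface model can be fitted analogously.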
It should be noted that, after the unmanned vehicle of this embodiment performs S304 above, another device may instead estimate the ground of the ground region according to the fused height map.
An embodiment of this application further provides a computer storage medium storing program instructions; when the program is executed, it may include some or all of the steps of the ground information processing method in FIG. 3 and its corresponding embodiments.
FIG. 5 is a schematic structural diagram of a ground information processing apparatus provided by an embodiment of this application. As shown in FIG. 5, the ground information processing apparatus 500 of this embodiment may include a depth sensor 501 and a processor 502, which may be connected via a bus.
The depth sensor 501 is configured to collect depth maps.
The processor 502 is configured to: acquire N frames of depth maps collected by the depth sensor 501, N being an integer greater than or equal to 2; perform ground segmentation on each frame of depth map to obtain the ground region in each frame of depth map; obtain, from the ground region in each frame of depth map, a height map of the ground region in that frame; and obtain, from the height maps of the ground regions in the individual frames of depth maps, a fused height map in which the N frames of height maps are fused, the fused height map being used for ground estimation of the ground region.
In some embodiments, the processor 502 is specifically configured to:
obtain, from the first N-1 frames of the N frames of height maps, a fused height map in which the first N-1 frames of height maps are fused; and
obtain, from the fused height map of the first N-1 frames of height maps and the N-th frame of height map, a fused height map in which the first N frames of height maps are fused.
In some embodiments, the processor 502 is specifically configured to:
obtain a fused height map in which the first i-1 frames of the first N-1 frames of height maps are fused, i being an integer greater than or equal to 2 and less than or equal to N-1;
obtain, from the fused height map of the first i-1 frames of height maps and the i-th frame of height map, the fused height map of the first i frames of height maps; and
update i to i+1 until i equals N-1, thereby obtaining the fused height map of the first N-1 frames of height maps.
In some embodiments, the height value of each height pixel in the fused height map is the average of that height pixel's height values over the N frames of height maps.
In some embodiments, the processor 502 is further configured to, after obtaining the fused height map in which the N frames of height maps are fused according to the height maps corresponding to the individual frames of depth maps, estimate the ground of the ground region according to the fused height map.
In some embodiments, the processor 502 is specifically configured to:
fit a ground model of the ground region according to the fused height map of the ground region and geographic location information of the ground region;
the ground model being a function expressing the height of the ground region in terms of the geographic location information of the ground region.
In some embodiments, the geographic location information includes longitude and latitude.
In some embodiments, the ground model includes a B-spline surface model or a polynomial surface model.
In some embodiments, the processor 502 is specifically configured to:
obtain, from the ground region in each frame of depth map, point cloud data corresponding to the ground region; and
obtain, from the point cloud data corresponding to the ground region in each frame of depth map, the height map of the ground region in that frame.
In some embodiments, the processor 502 is specifically configured to:
orthographically project the point cloud data corresponding to the ground region in each frame of depth map into a top view, obtaining the height map in the top-view perspective.
In some embodiments, if multiple points of the point cloud data are projected onto the same height pixel of the height map, the height value of that height pixel in the height map is the minimum of the height values corresponding to those points.
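A minimal sketch of this orthographic projection (the grid resolution and function name are illustrative assumptions, with ground points given as (x, y, z) world coordinates and z as the height axis) can be written as:

```python
def project_to_height_map(ground_points, cell_size=0.1):
    """Orthographically project ground points into a top-view height map.

    ground_points: iterable of (x, y, z) world coordinates of ground points
    cell_size:     illustrative edge length of one height pixel, in meters
    Returns a dict mapping (col, row) grid cells to height values; when several
    points fall onto the same height pixel, the minimum height value is kept.
    """
    height_map = {}
    for x, y, z in ground_points:
        cell = (int(x // cell_size), int(y // cell_size))
        if cell not in height_map or z < height_map[cell]:
            height_map[cell] = z
    return height_map
```

Keeping the minimum per cell makes the resulting height map a conservative estimate of the ground surface under the projected points.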
In some embodiments, the top-view perspective is the downward direction of the world coordinate system.
In some embodiments, the ground information processing apparatus 500 is applied in an unmanned vehicle, and the depth sensor 501 is mounted on board the unmanned vehicle.
The processor 502 is specifically configured to acquire the N frames of depth maps collected by the depth sensor while the unmanned vehicle is moving.
In some embodiments, the depth sensor 501 includes a binocular camera, a time-of-flight (TOF) sensor, or a lidar.
Optionally, the ground information processing apparatus 500 of this embodiment may further include a memory (not shown in the figure) for storing program code; when the program code is executed, the ground information processing apparatus 500 can implement the technical solutions described above.
The apparatus of this embodiment can be used to execute the technical solutions of FIG. 3 and its corresponding method embodiments; its implementation principles and technical effects are similar and are not repeated here.
FIG. 6 is a schematic structural diagram of an unmanned vehicle provided by an embodiment of this application. As shown in FIG. 6, the unmanned vehicle 600 of this embodiment may include a depth sensor 601 and a processor 602, which may be connected via a bus.
The depth sensor 601 is configured to collect depth maps.
The processor 602 is configured to: acquire N frames of depth maps collected by the depth sensor 601, N being an integer greater than or equal to 2; perform ground segmentation on each frame of depth map to obtain the ground region in each frame of depth map; obtain, from the ground region in each frame of depth map, a height map of the ground region in that frame; and obtain, from the height maps of the ground regions in the individual frames of depth maps, a fused height map in which the N frames of height maps are fused, the fused height map being used for ground estimation of the ground region.
In some embodiments, the processor 602 is specifically configured to:
obtain, from the first N-1 frames of the N frames of height maps, a fused height map in which the first N-1 frames of height maps are fused; and
obtain, from the fused height map of the first N-1 frames of height maps and the N-th frame of height map, a fused height map in which the first N frames of height maps are fused.
In some embodiments, the processor 602 is specifically configured to:
obtain a fused height map in which the first i-1 frames of the first N-1 frames of height maps are fused, i being an integer greater than or equal to 2 and less than or equal to N-1;
obtain, from the fused height map of the first i-1 frames of height maps and the i-th frame of height map, the fused height map of the first i frames of height maps; and
update i to i+1 until i equals N-1, thereby obtaining the fused height map of the first N-1 frames of height maps.
In some embodiments, the height value of each height pixel in the fused height map is the average of that height pixel's height values over the N frames of height maps.
In some embodiments, the processor 602 is further configured to, after obtaining the fused height map in which the N frames of height maps are fused according to the height maps corresponding to the individual frames of depth maps, estimate the ground of the ground region according to the fused height map.
In some embodiments, the processor 602 is specifically configured to:
fit a ground model of the ground region according to the fused height map of the ground region and geographic location information of the ground region;
the ground model being a function expressing the height of the ground region in terms of the geographic location information of the ground region.
In some embodiments, the geographic location information includes longitude and latitude.
In some embodiments, the ground model includes a B-spline surface model or a polynomial surface model.
In some embodiments, the processor 602 is specifically configured to:
obtain, from the ground region in each frame of depth map, point cloud data corresponding to the ground region; and
obtain, from the point cloud data corresponding to the ground region in each frame of depth map, the height map of the ground region in that frame.
In some embodiments, the processor 602 is specifically configured to:
orthographically project the point cloud data corresponding to the ground region in each frame of depth map into a top view, obtaining the height map in the top-view perspective.
In some embodiments, if multiple points of the point cloud data are projected onto the same height pixel of the height map, the height value of that height pixel in the height map is the minimum of the height values corresponding to those points.
In some embodiments, the top-view perspective is the downward direction of the world coordinate system.
In some embodiments, the processor 602 is specifically configured to acquire the N frames of depth maps collected by the depth sensor 601 while the unmanned vehicle is moving.
In some embodiments, the depth sensor 601 includes a binocular camera, a time-of-flight (TOF) sensor, or a lidar.
Optionally, the unmanned vehicle 600 of this embodiment may further include a memory (not shown in the figure) for storing program code; when the program code is executed, the unmanned vehicle 600 can implement the technical solutions described above.
The unmanned vehicle of this embodiment can be used to execute the technical solutions of FIG. 3 and its corresponding method embodiments; its implementation principles and technical effects are similar and are not repeated here.
图7为本申请另一实施例提供的无人驾驶车辆的结构示意图,如图7所示,本实施例的无人驾驶车辆700可以包括:车辆本体701以及地面信息处理装置702。FIG. 7 is a schematic structural diagram of an unmanned vehicle provided by another embodiment of the application. As shown in FIG. 7, the unmanned vehicle 700 of this embodiment may include a vehicle body 701 and a ground information processing device 702.
其中,所述地面信息处理装置702安装于所述车辆本体701上。地面信息处理装置702可以是独立于车辆本体701的装置。Wherein, the ground information processing device 702 is installed on the vehicle body 701. The ground information processing device 702 may be a device independent of the vehicle body 701.
其中，地面信息处理装置702可以采用图5所示装置实施例的结构，其对应地，可以执行图3及其对应方法实施例的技术方案，其实现原理和技术效果类似，此处不再赘述。The ground information processing device 702 can adopt the structure of the device embodiment shown in FIG. 5 and, correspondingly, can implement the technical solutions of FIG. 3 and its corresponding method embodiments. The implementation principles and technical effects are similar and will not be repeated here.
本领域普通技术人员可以理解：实现上述方法实施例的全部或部分步骤可以通过程序指令相关的硬件来完成，前述的程序可以存储于一计算机可读取存储介质中，该程序在执行时，执行包括上述方法实施例的步骤；而前述的存储介质包括：只读内存（Read-Only Memory，ROM）、随机存取存储器（Random Access Memory，RAM）、磁碟或者光盘等各种可以存储程序代码的介质。A person of ordinary skill in the art can understand that all or part of the steps of the above method embodiments may be implemented by hardware instructed by a program; the foregoing program may be stored in a computer-readable storage medium, and when executed, the program performs the steps of the foregoing method embodiments. The foregoing storage medium includes various media capable of storing program code, such as read-only memory (ROM), random access memory (RAM), a magnetic disk, or an optical disc.
最后应说明的是：以上各实施例仅用以说明本申请的技术方案，而非对其限制；尽管参照前述各实施例对本申请进行了详细的说明，本领域的普通技术人员应当理解：其依然可以对前述各实施例所记载的技术方案进行修改，或者对其中部分或者全部技术特征进行等同替换；而这些修改或者替换，并不使相应技术方案的本质脱离本申请各实施例技术方案的范围。Finally, it should be noted that the above embodiments are only used to illustrate the technical solutions of this application, not to limit them. Although this application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that they may still modify the technical solutions described in the foregoing embodiments, or make equivalent replacements of some or all of the technical features therein; such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the scope of the technical solutions of the embodiments of this application.
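The frame-by-frame fusion used throughout this application (merging the fused height map of the first i-1 frames with the i-th frame until all N frames are combined, with each height pixel ending up as the average of its values over the N frames) can be sketched with a running mean. This is an illustrative Python sketch, not part of the application; all maps are assumed to share the same grid, and NaN handling for unobserved cells is omitted.

```python
import numpy as np

def fuse_height_maps(height_maps):
    """Incrementally fuse a sequence of height maps.

    Mirrors the recursion in the claims: the fused map of the first
    i-1 frames is combined with the i-th frame, for i = 2..N. With a
    running mean, each pixel of the result equals the average of that
    pixel over the N frames.
    """
    fused = height_maps[0].astype(float).copy()
    for i, hm in enumerate(height_maps[1:], start=2):
        # running mean update: fused_i = fused_{i-1} + (hm - fused_{i-1}) / i
        fused += (hm - fused) / i
    return fused
```

The running-mean form keeps only one map in memory regardless of N, which matches the incremental structure of the recursion better than storing all N frames and averaging at the end.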

Claims (44)

  1. 一种地面信息处理方法,其特征在于,包括:A ground information processing method, characterized in that it comprises:
    获取深度传感器采集的N帧深度图,N为大于等于2的整数;Obtain N frames of depth maps collected by the depth sensor, where N is an integer greater than or equal to 2;
    对每帧深度图进行地面分割,获得每帧深度图中的地面区域;Perform ground segmentation on each depth map to obtain the ground area in each depth map;
    根据每帧深度图中的地面区域,获得每帧深度图中地面区域的高度图;According to the ground area in the depth map of each frame, obtain the height map of the ground area in the depth map of each frame;
    根据各帧深度图中地面区域的高度图,获得N帧高度图融合后的融合高度图,所述融合高度图用于地面区域的地面估计。According to the height map of the ground area in each frame depth map, a fusion height map after N frames of height map fusion is obtained, and the fusion height map is used for ground estimation of the ground area.
  2. 根据权利要求1所述的方法,其特征在于,所述根据各帧深度图中地面区域对应的高度图,获得N帧高度图融合后的融合高度图,包括:The method according to claim 1, wherein the obtaining a fused height map after fusion of N frames of height maps according to the height map corresponding to the ground area in each frame depth map comprises:
    根据N帧高度图中前N-1帧高度图,获得前N-1帧高度图融合后的融合高度图;According to the first N-1 frame height map of the N frame height map, obtain the fused height map after the fusion of the first N-1 frame height map;
    根据前N-1帧高度图融合后的融合高度图以及第N帧高度图,获得前N帧高度图融合后的融合高度图。According to the fusion height map after the fusion of the height maps of the first N-1 frames and the height map of the Nth frame, the fusion height map after the fusion of the height maps of the first N frames is obtained.
  3. 根据权利要求2所述的方法,其特征在于,所述根据N帧高度图中前N-1帧高度图,获得前N-1帧高度图融合后的融合高度图,包括:The method according to claim 2, wherein the obtaining a fused height map after the fusion of the first N-1 frame height maps according to the first N-1 frame height maps of the N frame height map comprises:
    获取前N-1帧高度图中前i-1帧高度图融合后的融合高度图,i为大于等于2且小于等于N-1的整数;Obtain the fused height map after the fusion of the height map of the previous i-1 frame in the height map of the previous N-1 frames, where i is an integer greater than or equal to 2 and less than or equal to N-1;
    根据前i-1帧高度图融合后的融合高度图以及第i帧高度图,获得前i帧高度图的融合高度图;Obtain the fusion height map of the previous i frame height map according to the fusion height map after the fusion of the height map of the previous i-1 frame and the height map of the i-th frame;
    更新i等于i+1,直至i等于N-1,从而获得前N-1帧高度图的融合高度图。Update i to be equal to i+1 until i is equal to N-1, so as to obtain the fused height map of the height map of the previous N-1 frames.
  4. 根据权利要求1-3任一项所述的方法,其特征在于,所述融合高度图中的每个高度像素点的高度值为该高度像素点在N帧高度图中高度值的平均值。The method according to any one of claims 1 to 3, wherein the height value of each height pixel in the fusion height map is the average value of the height value of the height pixel in the N frames of height maps.
  5. 根据权利要求1-4任一项所述的方法,其特征在于,所述根据各帧深度图中地面区域对应的高度图,获得N帧高度图融合后的融合高度图之后,还包括:The method according to any one of claims 1 to 4, wherein after obtaining a fused height map after fusion of N frames of height maps according to the height map corresponding to the ground area in each frame depth map, the method further comprises:
    根据所述融合高度图,对所述地面区域的地面进行估计。According to the fusion height map, the ground of the ground area is estimated.
  6. 根据权利要求5所述的方法,其特征在于,所述根据所述融合高度图,对所述地面区域的地面进行估计,包括:The method according to claim 5, wherein the estimating the ground of the ground area according to the fusion height map comprises:
    根据地面区域的所述融合高度图以及地面区域的地理位置信息,拟合获得地面区域的地面模型;According to the fusion height map of the ground area and the geographic location information of the ground area, a ground model of the ground area is obtained by fitting;
    所述地面模型为地面区域的高度关于地面区域的地理位置信息的函数。The ground model is a function of the height of the ground area with respect to the geographic location information of the ground area.
  7. 根据权利要求6所述的方法,其特征在于,所述地理位置信息包括经度和纬度。The method according to claim 6, wherein the geographic location information includes longitude and latitude.
  8. 根据权利要求6或7所述的方法,其特征在于,所述地面模型包括:B样条曲面模型,或者,多项式曲面模型。The method according to claim 6 or 7, wherein the ground model comprises: a B-spline surface model or a polynomial surface model.
  9. 根据权利要求1-8任一项所述的方法,其特征在于,所述根据每帧深度图中的地面区域,获得每帧深度图中地面区域的高度图,包括:The method according to any one of claims 1-8, wherein the obtaining a height map of the ground area in the depth map of each frame according to the ground area in the depth map of each frame comprises:
    根据每帧深度图中的地面区域,获得地面区域对应的点云数据;Obtain the point cloud data corresponding to the ground area according to the ground area in each depth map;
    根据每帧深度图中地面区域对应的点云数据,获得每帧深度图中地面区域的高度图。According to the point cloud data corresponding to the ground area in the depth map of each frame, the height map of the ground area in the depth map of each frame is obtained.
  10. 根据权利要求9所述的方法,其特征在于,所述根据每帧深度图中地面区域对应的点云数据,获得每帧深度图中地面区域的高度图,包括:The method according to claim 9, wherein the obtaining a height map of the ground area in each frame of the depth map according to the point cloud data corresponding to the ground area in the depth map of each frame comprises:
    将每帧深度图中地面区域对应的点云数据正交投影到俯视图视角下,得到俯视图视角下的所述高度图。Orthogonally project the point cloud data corresponding to the ground area in each frame of the depth map to the top view perspective to obtain the height map in the top view perspective.
  11. 根据权利要求10所述的方法，其特征在于，若多个点云数据投影到高度图的同一高度像素点上，则所述高度图中该高度像素点的高度值为该多个点云数据对应的高度值的最小值。The method according to claim 10, wherein if multiple point cloud data are projected onto the same height pixel of the height map, the height value of that height pixel in the height map is the minimum of the height values corresponding to the multiple point cloud data.
  12. 根据权利要求10或11所述的方法,其特征在于,所述俯视图视角为世界系坐标向下的方向。The method according to claim 10 or 11, wherein the perspective of the top view is a downward direction of world coordinates.
  13. 根据权利要求1-12任一项所述的方法,其特征在于,所述方法应用于无人驾驶车辆中,所述深度传感器机载在所述无人驾驶车辆上;The method according to any one of claims 1-12, wherein the method is applied to an unmanned vehicle, and the depth sensor is airborne on the unmanned vehicle;
    所述获取深度传感器采集的N帧深度图,包括:The acquiring N frames of depth maps collected by the depth sensor includes:
    获取所述无人驾驶车辆在移动过程中,所述深度传感器采集的N帧深度图。Acquire N frames of depth maps collected by the depth sensor when the unmanned vehicle is moving.
  14. 根据权利要求1-13任一项所述的方法,其特征在于,所述深度传感器包括双目相机、飞行时间TOF传感器或激光雷达。The method according to any one of claims 1-13, wherein the depth sensor comprises a binocular camera, a time-of-flight TOF sensor or a lidar.
  15. 一种地面信息处理装置,其特征在于,包括:深度传感器和处理器;A ground information processing device, characterized by comprising: a depth sensor and a processor;
    所述深度传感器,用于采集深度图;The depth sensor is used to collect a depth map;
    所述处理器，用于获取所述深度传感器采集的N帧深度图，N为大于等于2的整数；对每帧深度图进行地面分割，获得每帧深度图中的地面区域；根据每帧深度图中的地面区域，获得每帧深度图中地面区域的高度图；根据各帧深度图中地面区域的高度图，获得N帧高度图融合后的融合高度图，所述融合高度图用于地面区域的地面估计。The processor is configured to: obtain N frames of depth maps collected by the depth sensor, where N is an integer greater than or equal to 2; perform ground segmentation on each frame of depth map to obtain the ground area in each frame of depth map; obtain a height map of the ground area in each frame of depth map according to the ground area in that frame; and obtain a fused height map of the N frames of height maps according to the height maps of the ground areas in the frames, where the fused height map is used for ground estimation of the ground area.
  16. 根据权利要求15所述的装置,其特征在于,所述处理器,具体用于:The device according to claim 15, wherein the processor is specifically configured to:
    根据N帧高度图中前N-1帧高度图,获得前N-1帧高度图融合后的融合高度图;According to the first N-1 frame height map of the N frame height map, obtain the fused height map after the fusion of the first N-1 frame height map;
    根据前N-1帧高度图融合后的融合高度图以及第N帧高度图,获得前N帧高度图融合后的融合高度图。According to the fusion height map after the fusion of the height maps of the first N-1 frames and the height map of the Nth frame, the fusion height map after the fusion of the height maps of the first N frames is obtained.
  17. 根据权利要求16所述的装置,其特征在于,所述处理器,具体用于:The device according to claim 16, wherein the processor is specifically configured to:
    获取前N-1帧高度图中前i-1帧高度图融合后的融合高度图,i为大于等于2且小于等于N-1的整数;Obtain the fused height map after the fusion of the height map of the previous i-1 frame in the height map of the previous N-1 frames, where i is an integer greater than or equal to 2 and less than or equal to N-1;
    根据前i-1帧高度图融合后的融合高度图以及第i帧高度图,获得前i帧高度图的融合高度图;Obtain the fusion height map of the previous i frame height map according to the fusion height map after the fusion of the height map of the previous i-1 frame and the height map of the i-th frame;
    更新i等于i+1,直至i等于N-1,从而获得前N-1帧高度图的融合高度图。Update i to be equal to i+1 until i is equal to N-1, so as to obtain the fused height map of the height map of the previous N-1 frames.
  18. 根据权利要求15-17任一项所述的装置,其特征在于,所述融合高度图中的每个高度像素点的高度值为该高度像素点在N帧高度图中高度值的平均值。The device according to any one of claims 15-17, wherein the height value of each height pixel in the fusion height map is the average value of the height value of the height pixel in the N frames of height maps.
  19. 根据权利要求15-18任一项所述的装置，其特征在于，所述处理器，还用于根据各帧深度图对应的高度图，获得N帧高度图融合后的融合高度图之后，根据所述融合高度图，对所述地面区域的地面进行估计。The device according to any one of claims 15-18, wherein the processor is further configured to, after obtaining the fused height map of the N frames of height maps according to the height map corresponding to each frame of depth map, estimate the ground of the ground area according to the fused height map.
  20. 根据权利要求19所述的装置,其特征在于,所述处理器,具体用于:The device according to claim 19, wherein the processor is specifically configured to:
    根据地面区域的所述融合高度图以及地面区域的地理位置信息,拟合获得地面区域的地面模型;According to the fusion height map of the ground area and the geographic location information of the ground area, a ground model of the ground area is obtained by fitting;
    所述地面模型为地面区域的高度关于地面区域的地理位置信息的函数。The ground model is a function of the height of the ground area with respect to the geographic location information of the ground area.
  21. 根据权利要求20所述的装置,其特征在于,所述地理位置信息包括经度和纬度。The device according to claim 20, wherein the geographic location information includes longitude and latitude.
  22. 根据权利要求20或21所述的装置，其特征在于，所述地面模型包括：B样条曲面模型，或者，多项式曲面模型。The device according to claim 20 or 21, wherein the ground model comprises: a B-spline surface model or a polynomial surface model.
  23. 根据权利要求15-22任一项所述的装置,其特征在于,所述处理器,具体用于:The device according to any one of claims 15-22, wherein the processor is specifically configured to:
    根据每帧深度图中的地面区域,获得地面区域对应的点云数据;Obtain the point cloud data corresponding to the ground area according to the ground area in each depth map;
    根据每帧深度图中地面区域对应的点云数据,获得每帧深度图中地面区域的高度图。According to the point cloud data corresponding to the ground area in the depth map of each frame, the height map of the ground area in the depth map of each frame is obtained.
  24. 根据权利要求23所述的装置,其特征在于,所述处理器,具体用于:The device according to claim 23, wherein the processor is specifically configured to:
    将每帧深度图中地面区域对应的点云数据正交投影到俯视图视角下,得到俯视图视角下的所述高度图。Orthogonally project the point cloud data corresponding to the ground area in each frame of the depth map to the top view perspective to obtain the height map in the top view perspective.
  25. 根据权利要求24所述的装置，其特征在于，若多个点云数据投影到高度图的同一高度像素点上，则所述高度图中该高度像素点的高度值为该多个点云数据对应的高度值的最小值。The device according to claim 24, wherein if multiple point cloud data are projected onto the same height pixel of the height map, the height value of that height pixel in the height map is the minimum of the height values corresponding to the multiple point cloud data.
  26. 根据权利要求24或25所述的装置,其特征在于,所述俯视图视角为世界系坐标向下的方向。The device according to claim 24 or 25, wherein the perspective of the top view is a downward direction of world coordinates.
  27. 根据权利要求15-26任一项所述的装置,其特征在于,所述装置应用于无人驾驶车辆中,所述深度传感器机载在所述无人驾驶车辆上;The device according to any one of claims 15-26, wherein the device is applied to an unmanned vehicle, and the depth sensor is mounted on the unmanned vehicle;
    所述处理器具体用于:获取所述无人驾驶车辆在移动过程中,所述深度传感器采集的N帧深度图。The processor is specifically configured to obtain N frames of depth maps collected by the depth sensor during the movement of the unmanned vehicle.
  28. 根据权利要求15-27任一项所述的装置,其特征在于,所述深度传感器包括双目相机、飞行时间TOF传感器或激光雷达。The device according to any one of claims 15-27, wherein the depth sensor comprises a binocular camera, a time-of-flight TOF sensor or a lidar.
  29. 一种无人驾驶车辆,其特征在于,包括:深度传感器和处理器;An unmanned vehicle, characterized by comprising: a depth sensor and a processor;
    所述深度传感器,用于采集深度图;The depth sensor is used to collect a depth map;
    所述处理器，用于获取所述深度传感器采集的N帧深度图，N为大于等于2的整数；对每帧深度图进行地面分割，获得每帧深度图中的地面区域；根据每帧深度图中的地面区域，获得每帧深度图中地面区域的高度图；根据各帧深度图中地面区域的高度图，获得N帧高度图融合后的融合高度图，所述融合高度图用于地面区域的地面估计。The processor is configured to: obtain N frames of depth maps collected by the depth sensor, where N is an integer greater than or equal to 2; perform ground segmentation on each frame of depth map to obtain the ground area in each frame of depth map; obtain a height map of the ground area in each frame of depth map according to the ground area in that frame; and obtain a fused height map of the N frames of height maps according to the height maps of the ground areas in the frames, where the fused height map is used for ground estimation of the ground area.
  30. 根据权利要求29所述的无人驾驶车辆,其特征在于,所述处理器,具体用于:The unmanned vehicle according to claim 29, wherein the processor is specifically configured to:
    根据N帧高度图中前N-1帧高度图，获得前N-1帧高度图融合后的融合高度图；According to the first N-1 frame height maps of the N frames of height maps, obtain the fused height map after fusion of the first N-1 frame height maps;
    根据前N-1帧高度图融合后的融合高度图以及第N帧高度图,获得前N帧高度图融合后的融合高度图。According to the fusion height map after the fusion of the height maps of the first N-1 frames and the height map of the Nth frame, the fusion height map after the fusion of the height maps of the first N frames is obtained.
  31. 根据权利要求30所述的无人驾驶车辆,其特征在于,所述处理器,具体用于:The unmanned vehicle according to claim 30, wherein the processor is specifically configured to:
    获取前N-1帧高度图中前i-1帧高度图融合后的融合高度图,i为大于等于2且小于等于N-1的整数;Obtain the fused height map after the fusion of the height map of the previous i-1 frame in the height map of the previous N-1 frames, where i is an integer greater than or equal to 2 and less than or equal to N-1;
    根据前i-1帧高度图融合后的融合高度图以及第i帧高度图,获得前i帧高度图的融合高度图;Obtain the fusion height map of the previous i frame height map according to the fusion height map after the fusion of the height map of the previous i-1 frame and the height map of the i-th frame;
    更新i等于i+1,直至i等于N-1,从而获得前N-1帧高度图的融合高度图。Update i to be equal to i+1 until i is equal to N-1, so as to obtain the fused height map of the height map of the previous N-1 frames.
  32. 根据权利要求29-31任一项所述的无人驾驶车辆，其特征在于，所述融合高度图中的每个高度像素点的高度值为该高度像素点在N帧高度图中高度值的平均值。The unmanned vehicle according to any one of claims 29-31, wherein the height value of each height pixel in the fused height map is the average of the height values of that height pixel in the N frames of height maps.
  33. 根据权利要求29-32任一项所述的无人驾驶车辆，其特征在于，所述处理器，还用于根据各帧深度图对应的高度图，获得N帧高度图融合后的融合高度图之后，根据所述融合高度图，对所述地面区域的地面进行估计。The unmanned vehicle according to any one of claims 29-32, wherein the processor is further configured to, after obtaining the fused height map of the N frames of height maps according to the height map corresponding to each frame of depth map, estimate the ground of the ground area according to the fused height map.
  34. 根据权利要求33所述的无人驾驶车辆,其特征在于,所述处理器,具体用于:The unmanned vehicle according to claim 33, wherein the processor is specifically configured to:
    根据地面区域的所述融合高度图以及地面区域的地理位置信息,拟合获得地面区域的地面模型;According to the fusion height map of the ground area and the geographic location information of the ground area, a ground model of the ground area is obtained by fitting;
    所述地面模型为地面区域的高度关于地面区域的地理位置信息的函数。The ground model is a function of the height of the ground area with respect to the geographic location information of the ground area.
  35. 根据权利要求34所述的无人驾驶车辆,其特征在于,所述地理位置信息包括经度和纬度。The unmanned vehicle according to claim 34, wherein the geographic location information includes longitude and latitude.
  36. 根据权利要求34或35所述的无人驾驶车辆,其特征在于,所述地面模型包括:B样条曲面模型,或者,多项式曲面模型。The unmanned vehicle according to claim 34 or 35, wherein the ground model comprises: a B-spline surface model or a polynomial surface model.
  37. 根据权利要求29-36任一项所述的无人驾驶车辆,其特征在于,所述处理器,具体用于:The unmanned vehicle according to any one of claims 29-36, wherein the processor is specifically configured to:
    根据每帧深度图中的地面区域,获得地面区域对应的点云数据;Obtain the point cloud data corresponding to the ground area according to the ground area in each depth map;
    根据每帧深度图中地面区域对应的点云数据，获得每帧深度图中地面区域的高度图。According to the point cloud data corresponding to the ground area in each frame of depth map, obtain the height map of the ground area in each frame of depth map.
  38. 根据权利要求37所述的无人驾驶车辆,其特征在于,所述处理器,具体用于:The unmanned vehicle according to claim 37, wherein the processor is specifically configured to:
    将每帧深度图中地面区域对应的点云数据正交投影到俯视图视角下,得到俯视图视角下的所述高度图。Orthogonally project the point cloud data corresponding to the ground area in each frame of the depth map to the top view perspective to obtain the height map in the top view perspective.
  39. 根据权利要求38所述的无人驾驶车辆，其特征在于，若多个点云数据投影到高度图的同一高度像素点上，则所述高度图中该高度像素点的高度值为该多个点云数据对应的高度值的最小值。The unmanned vehicle according to claim 38, wherein if multiple point cloud data are projected onto the same height pixel of the height map, the height value of that height pixel in the height map is the minimum of the height values corresponding to the multiple point cloud data.
  40. 根据权利要求38或39所述的无人驾驶车辆,其特征在于,所述俯视图视角为世界系坐标向下的方向。The unmanned vehicle according to claim 38 or 39, wherein the perspective of the top view is a downward direction of world coordinates.
  41. 根据权利要求29-40任一项所述的无人驾驶车辆，其特征在于，所述处理器具体用于：获取所述无人驾驶车辆在移动过程中，所述深度传感器采集的N帧深度图。The unmanned vehicle according to any one of claims 29-40, wherein the processor is specifically configured to obtain N frames of depth maps collected by the depth sensor while the unmanned vehicle is moving.
  42. 根据权利要求29-41任一项所述的无人驾驶车辆,其特征在于,所述深度传感器包括双目相机、飞行时间TOF传感器或激光雷达。The unmanned vehicle according to any one of claims 29-41, wherein the depth sensor comprises a binocular camera, a time-of-flight TOF sensor or a lidar.
  43. 一种无人驾驶车辆,其特征在于,包括:车辆本体以及如权利要求15-28任一项所述的地面信息处理装置,其中,所述地面信息处理装置安装于所述车辆本体上。An unmanned vehicle, characterized by comprising: a vehicle body and the ground information processing device according to any one of claims 15-28, wherein the ground information processing device is installed on the vehicle body.
  44. 一种可读存储介质,其特征在于,所述可读存储介质上存储有计算机程序;所述计算机程序在被执行时,实现如权利要求1-14任一项所述的地面信息处理方法。A readable storage medium, characterized in that a computer program is stored on the readable storage medium; when the computer program is executed, the ground information processing method according to any one of claims 1-14 is realized.
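As one hypothetical realization of the ground model of claims 6-8 (the height of the ground area as a function of its geographic position, for example a polynomial surface), a least-squares fit could be sketched as below. The degree, coordinate units, and function names are illustrative assumptions; the application does not prescribe a particular fitting method.

```python
import numpy as np

def fit_polynomial_surface(lon, lat, height, degree=2):
    """Least-squares fit of a polynomial surface z = f(lon, lat).

    lon, lat, height: 1-D arrays of geographic coordinates and fused
    heights sampled from the height map. Returns the coefficients and
    a callable model giving the estimated ground height at any point.
    """
    # design matrix with all monomials lon^p * lat^q, p + q <= degree
    terms = [(p, q) for p in range(degree + 1) for q in range(degree + 1 - p)]
    A = np.column_stack([lon**p * lat**q for p, q in terms])
    coeffs, *_ = np.linalg.lstsq(A, height, rcond=None)

    def model(lon_q, lat_q):
        return sum(c * lon_q**p * lat_q**q for c, (p, q) in zip(coeffs, terms))

    return coeffs, model
```

A B-spline surface, the other model named in claim 8, would replace the monomial design matrix with spline basis functions over a knot grid; the least-squares structure stays the same.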
PCT/CN2019/085655 2019-05-06 2019-05-06 Terrain information processing method and apparatus, and unmanned vehicle WO2020223868A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201980011875.2A CN111712855A (en) 2019-05-06 2019-05-06 Ground information processing method and device and unmanned vehicle
PCT/CN2019/085655 WO2020223868A1 (en) 2019-05-06 2019-05-06 Terrain information processing method and apparatus, and unmanned vehicle

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2019/085655 WO2020223868A1 (en) 2019-05-06 2019-05-06 Terrain information processing method and apparatus, and unmanned vehicle

Publications (1)

Publication Number Publication Date
WO2020223868A1 true WO2020223868A1 (en) 2020-11-12

Family

ID=72536769

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/085655 WO2020223868A1 (en) 2019-05-06 2019-05-06 Terrain information processing method and apparatus, and unmanned vehicle

Country Status (2)

Country Link
CN (1) CN111712855A (en)
WO (1) WO2020223868A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112365575B (en) * 2020-11-10 2022-06-21 广州极飞科技股份有限公司 Ground plane data measuring method, device, mobile equipment and readable storage medium

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070005306A1 (en) * 2005-06-22 2007-01-04 Deere & Company, A Delaware Corporation Method and system for sensor signal fusion
CN107016697A (en) * 2017-04-11 2017-08-04 杭州光珀智能科技有限公司 A kind of height measurement method and device
CN107330925A (en) * 2017-05-11 2017-11-07 北京交通大学 A kind of multi-obstacle avoidance detect and track method based on laser radar depth image
CN107507160A (en) * 2017-08-22 2017-12-22 努比亚技术有限公司 A kind of image interfusion method, terminal and computer-readable recording medium
CN108229366A (en) * 2017-12-28 2018-06-29 北京航空航天大学 Deep learning vehicle-installed obstacle detection method based on radar and fusing image data
CN108629231A (en) * 2017-03-16 2018-10-09 百度在线网络技术(北京)有限公司 Obstacle detection method, device, equipment and storage medium
CN109145677A (en) * 2017-06-15 2019-01-04 百度在线网络技术(北京)有限公司 Obstacle detection method, device, equipment and storage medium
CN109270545A (en) * 2018-10-23 2019-01-25 百度在线网络技术(北京)有限公司 A kind of positioning true value method of calibration, device, equipment and storage medium

Also Published As

Publication number Publication date
CN111712855A (en) 2020-09-25


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19928025

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19928025

Country of ref document: EP

Kind code of ref document: A1