WO2020006765A1 - Ground detection method, related device, and computer-readable storage medium - Google Patents

Ground detection method, related device, and computer-readable storage medium

Info

Publication number
WO2020006765A1
WO2020006765A1 PCT/CN2018/094906 CN2018094906W WO2020006765A1 WO 2020006765 A1 WO2020006765 A1 WO 2020006765A1 CN 2018094906 W CN2018094906 W CN 2018094906W WO 2020006765 A1 WO2020006765 A1 WO 2020006765A1
Authority
WO
WIPO (PCT)
Prior art keywords
ground
coordinate system
point cloud
dimensional point
world coordinate
Prior art date
Application number
PCT/CN2018/094906
Other languages
English (en)
French (fr)
Inventor
李业
廉士国
Original Assignee
深圳前海达闼云端智能科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳前海达闼云端智能科技有限公司 filed Critical 深圳前海达闼云端智能科技有限公司
Priority to CN201880001111.0A priority Critical patent/CN108885791B/zh
Priority to PCT/CN2018/094906 priority patent/WO2020006765A1/zh
Publication of WO2020006765A1 publication Critical patent/WO2020006765A1/zh


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/50 - Depth or shape recovery
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/70 - Determining position or orientation of objects or cameras
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/10 - Image acquisition modality
    • G06T2207/10028 - Range image; Depth image; 3D point clouds

Definitions

  • The present application relates to the field of detection technology, and in particular to a ground detection method, a related device, and a computer-readable storage medium.
  • In fields such as blind guidance, robotics, and autonomous driving, ground detection is an extremely important key technology.
  • Traditional ground detection methods based on RGB images generally rely on prior information such as the color and edges of the ground, so they are widely used in simple environments but are not applicable in complex environments.
  • With the development of three-dimensional sensor technology, ground detection methods based on depth images are gradually being applied in complex environments.
  • During research on the prior art, the inventors found that although the ground detection methods based on depth images in the prior art no longer rely on prior information such as the color and edges of the ground, they usually need to constrain the position and attitude of the sensor, and are therefore not universally applicable.
  • A technical problem to be solved by some embodiments of the present application is to provide a ground detection method, a related device, and a computer-readable storage medium that solve the above technical problem.
  • An embodiment of the present application provides a ground detection method, which includes: acquiring a depth map and the attitude angle of a camera; constructing a three-dimensional point cloud in a world coordinate system according to the depth map and the attitude angle of the camera; acquiring an initial ground area according to the three-dimensional point cloud in the world coordinate system; and calculating the inclination angle of the initial ground area and determining the ground detection result according to the inclination angle.
  • An embodiment of the present application further provides a ground detection device.
  • The ground detection device includes: a first acquisition module for acquiring a depth map and the attitude angle of a camera; a construction module for constructing a three-dimensional point cloud in a world coordinate system according to the depth map and the attitude angle of the camera; a second acquisition module for acquiring an initial ground area according to the three-dimensional point cloud in the world coordinate system; and a detection module for calculating the inclination angle of the initial ground area and determining the ground detection result according to the inclination angle.
  • An embodiment of the present application further provides an electronic device including: at least one processor; and a memory communicatively connected to the at least one processor; wherein the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor to enable the at least one processor to execute the ground detection method involved in any method embodiment of the present application.
  • An embodiment of the present application further provides a computer-readable storage medium storing computer instructions, the computer instructions being used to cause a computer to execute the ground detection method involved in any method embodiment of the present application.
  • Compared with the prior art, the embodiments of the present application construct a three-dimensional point cloud in the world coordinate system from the acquired depth map and the attitude angle of the camera and perform ground detection based on that point cloud, without requiring the position and attitude of the sensor to be constrained, and are therefore universally applicable.
  • FIG. 1 is a flowchart of a ground detection method in a first embodiment of the present application
  • FIG. 2 is a relationship diagram between a pixel coordinate system and a camera coordinate system in the first embodiment of the present application
  • FIG. 3 is a relationship diagram between the camera coordinate system and the world coordinate system in the first embodiment of the present application.
  • FIG. 4 is a flowchart of a ground detection method in a second embodiment of the present application.
  • FIG. 5 is a block diagram of a ground detection device in a third embodiment of the present application.
  • FIG. 6 is a block diagram of a ground detection device in a fourth embodiment of the present application.
  • FIG. 7 is a structural example diagram of an electronic device in a fifth embodiment of the present application.
  • the first embodiment of the present application relates to a ground detection method.
  • the entity executing the ground detection method may be a blind-guiding helmet or an intelligent robot.
  • the specific process of the ground detection method is shown in Figure 1, and includes the following steps:
  • In step 101, a depth map and the attitude angle of the camera are acquired.
  • Specifically, in this embodiment, the depth map is acquired by a depth camera, and the attitude angle of the camera is acquired by an attitude sensor.
  • After the depth map is acquired, scale normalization is applied to it, and the subsequent ground detection steps are performed on the scale-normalized depth map, which accelerates the computation so that the ground detection result is obtained quickly.
  • In a specific implementation, the scale normalization of the depth map is performed as follows: a scale normalization factor is calculated according to the depth map and a preset normalization scale, and the scale-normalized depth map is calculated according to the depth map and the scale normalization factor.
  • The specific calculation process is as follows: formula (1) is used to calculate the scale normalization factor, where S represents the scale normalization factor, W represents the width of the depth map, H represents the height of the depth map, and Norm represents the preset normalization scale; Norm is a preset known quantity that remains the same for every depth map.
  • Formula (2) is then used to calculate the scale-normalized depth map, where W_s represents the width of the scale-normalized depth map and H_s represents its height; the scale-normalized depth map is determined by W_s and H_s.
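  • The text above does not reproduce formulas (1) and (2) themselves, so the following Python sketch of the scale-normalization step rests on an assumption: that the factor S scales the larger image dimension down to the preset scale Norm and that the map is resized by 1/S. The function and parameter names are illustrative only.

```python
import cv2  # assumed available for resizing
import numpy as np

def normalize_depth_scale(depth: np.ndarray, norm: int = 256):
    """Scale-normalize a depth map (H x W array of depth values).

    Assumption: S = max(W, H) / Norm (formula (1) is not given in the
    text and may differ), and the normalized size is (W/S, H/S) per an
    assumed reading of formula (2).
    """
    h, w = depth.shape
    s = max(w, h) / float(norm)                      # scale normalization factor S (assumed)
    w_s, h_s = int(round(w / s)), int(round(h / s))  # normalized width and height
    # Nearest-neighbour resizing avoids blending depth values across object edges.
    depth_s = cv2.resize(depth, (w_s, h_s), interpolation=cv2.INTER_NEAREST)
    return depth_s, s
```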
  • In step 102, a three-dimensional point cloud in the world coordinate system is constructed according to the depth map and the attitude angle of the camera.
  • Specifically, a three-dimensional point cloud in the camera coordinate system is constructed from the scale-normalized depth map, and the three-dimensional point cloud in the world coordinate system is then constructed from the point cloud in the camera coordinate system and the attitude angle of the camera.
  • Formula (3) is used to construct the three-dimensional point cloud in the camera coordinate system, where u and v are the position coordinates of a pixel in the normalized depth map, M_3×4 is the intrinsic parameter matrix of the camera, and X_c, Y_c, and Z_c are the coordinate values of the three-dimensional point in the camera coordinate system.
  • Z_c is the depth value of the pixel in the normalized depth map and is a known quantity.
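  • Formula (3) is not reproduced in this text; as an illustration, a minimal sketch of the standard pinhole back-projection is given below, with the intrinsic matrix reduced to focal lengths fx, fy and principal point cx, cy, which is an assumption about the camera model rather than the patent's exact formula.

```python
import numpy as np

def depth_to_camera_points(depth: np.ndarray, fx: float, fy: float,
                           cx: float, cy: float) -> np.ndarray:
    """Back-project a scale-normalized depth map into the camera frame.

    Standard pinhole model (assumed): for a pixel (u, v) with depth Zc,
    Xc = (u - cx) * Zc / fx and Yc = (v - cy) * Zc / fy.
    Returns an (N, 3) array of camera-frame points with valid depth.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z_c = depth.astype(np.float64)
    x_c = (u - cx) * z_c / fx
    y_c = (v - cy) * z_c / fy
    pts = np.stack([x_c, y_c, z_c], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]   # drop pixels without a valid depth measurement
```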
  • Formula (4) is used to construct the three-dimensional point cloud in the world coordinate system, where X_w, Y_w, and Z_w are the coordinate values of the three-dimensional point cloud in the world coordinate system, and α, β, and γ are the attitude angles of the camera.
  • A rectangular coordinate system o-uv in units of pixels, established with the upper-left corner of the depth image as the origin, is called the pixel coordinate system.
  • The horizontal coordinate u and the vertical coordinate v of a pixel are, respectively, the column number and the row number at which the pixel is located in the image array.
  • The origin o_1 of the image coordinate system o_1-xy is defined as the intersection of the camera optical axis with the depth image plane; the x-axis is parallel to the u-axis and the y-axis is parallel to the v-axis.
  • The camera coordinate system O_c-X_cY_cZ_c takes the camera optical center O_c as its origin; the X_c and Y_c axes are parallel to the x- and y-axes of the image coordinate system, and the Z_c axis is the optical axis of the camera, perpendicular to the image plane and intersecting it at o_1.
  • The origin O_w of the world coordinate system O_w-X_wY_wZ_w coincides with the origin O_c of the camera coordinate system, both being the camera optical center; the horizontal direction to the right is the positive direction of the X_w axis, the vertical downward direction is the positive direction of the Y_w axis, and the direction perpendicular to the X_wY_w plane and pointing straight ahead is the positive direction of the Z_w axis, thereby establishing the world coordinate system.
  • In step 103, an initial ground area is obtained according to the three-dimensional point cloud in the world coordinate system.
  • Specifically, automatic threshold segmentation in the height direction is applied to the three-dimensional point cloud in the world coordinate system to obtain a second ground area.
  • Fixed threshold segmentation in the distance direction is applied to the three-dimensional point cloud in the world coordinate system to obtain a third ground area.
  • The initial ground area is then obtained from the second ground area and the third ground area.
  • The coordinate values X_w, Y_w, and Z_w of the three-dimensional point cloud in the world coordinate system are the coordinate sets in the three directions: Y_w is the coordinate set in the height direction, Z_w is the coordinate set in the distance direction, and X_w is the coordinate set in the left-right direction.
  • The height direction in the embodiments of the present application refers to the direction designated by the Y_w axis of the world coordinate system, and the distance direction refers to the direction designated by the Z_w axis of the world coordinate system, pointing straight ahead.
  • In a specific implementation, automatic threshold segmentation in the height direction is applied to the three-dimensional point cloud in the world coordinate system, and the second ground area is obtained as follows: a first segmentation threshold is calculated from a region of interest (ROI) in the height direction selected by the user in the three-dimensional point cloud in the world coordinate system, and a second segmentation threshold is calculated from the ground height of the depth map of the frame preceding the current depth map.
  • According to the first segmentation threshold and the second segmentation threshold, automatic threshold segmentation in the height direction is applied to the three-dimensional point cloud in the world coordinate system, and the second ground area is obtained using formula (5), which is expressed as follows: Y_mask = a*ThdY_roi + b*ThdY_pre.
  • Here a and b are weighting coefficients, which can be set by the user according to actual needs, ThdY_roi is the first segmentation threshold, ThdY_pre is the second segmentation threshold, and Y_mask is the second ground area.
  • When obtaining the two segmentation thresholds, the automatic threshold segmentation algorithms that can be used include the mean method, the Gaussian method, and Otsu's method; since automatic threshold segmentation algorithms are already mature, they are not described further in this embodiment.
  • Fixed threshold segmentation in the distance direction is applied to the three-dimensional point cloud in the world coordinate system, and the third ground area is obtained as follows: the minimum coordinate value in the distance direction selected by the user in the three-dimensional point cloud in the world coordinate system is taken as the third segmentation threshold and denoted Z_min;
  • the maximum coordinate value in the distance direction selected by the user in the three-dimensional point cloud in the world coordinate system is taken as the fourth segmentation threshold and denoted Z_max.
  • According to the third segmentation threshold and the fourth segmentation threshold, fixed threshold segmentation in the distance direction is applied to the three-dimensional point cloud in the world coordinate system to obtain the third ground area, denoted Z_mask; that is, the area obtained by retaining the points whose Z_w values lie between Z_min and Z_max is the third ground area.
  • According to the second ground area and the third ground area, the initial ground area is obtained using formula (6): Gnd_o = Y_mask ∩ Z_mask, where Gnd_o is the initial ground area, Y_mask is the second ground area, and Z_mask is the third ground area.
  • The specific physical meaning of the formula is that the second ground area determines the suspected ground area in the height direction, and the third ground area further limits the extent of the second ground area in the distance direction, thereby ensuring the accuracy of the finally obtained initial ground area.
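  • As an illustration of step 103, the sketch below blends the ROI-derived threshold with the previous frame's ground height (formula (5)), applies a fixed distance range, and intersects the two masks (formula (6)). The comparison direction of the height mask and the default values are assumptions, not values fixed by the text.

```python
import numpy as np

def initial_ground_area(points_w: np.ndarray, thd_y_roi: float, thd_y_pre: float,
                        a: float = 0.5, b: float = 0.5,
                        z_min: float = 0.5, z_max: float = 5.0) -> np.ndarray:
    """Return a boolean mask (Gnd_o) over the world-frame point cloud.

    points_w is (N, 3) with columns (X_w, Y_w, Z_w); Y_w is the height
    direction (positive downward per the coordinate definition) and Z_w
    is the forward distance.
    """
    y_w, z_w = points_w[:, 1], points_w[:, 2]
    thd_y = a * thd_y_roi + b * thd_y_pre        # formula (5): blended height threshold
    y_mask = y_w >= thd_y                        # second ground area (comparison direction assumed)
    z_mask = (z_w >= z_min) & (z_w <= z_max)     # third ground area: Z_min <= Z_w <= Z_max
    return y_mask & z_mask                       # formula (6): Gnd_o = Y_mask ∩ Z_mask
```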
  • In step 104, the inclination angle of the initial ground area is calculated, and the ground detection result is determined according to the inclination angle.
  • After the initial ground area is obtained, plane fitting can be performed on it to obtain the general equation Ax + By + Cz = D of the plane containing the initial ground area; the points of the initial ground area are used as known quantities, and the least-squares method or the random sample consensus (RANSAC) algorithm is applied to the initial ground area to obtain the plane of the initial ground area.
  • Other fitting methods may also be used to perform the plane fitting on the initial ground area; the specific plane-fitting method is not limited in the embodiments of the present application.
  • From the general equation of the plane, the normal vector of the initial ground area can be determined.
  • According to the normal vector of the initial ground area and the vertically upward unit vector, formula (7) is used to calculate the inclination angle θ of the initial ground area.
  • The maximum inclination angle of horizontal ground is set to θ_0, and
  • the maximum inclination angle of sloped ground is set to θ_1, where 0 < θ_0 < θ_1.
  • The criterion for judging the initial ground area is set as shown in formula (8).
  • According to the criterion of formula (8), whether the ground is detected is judged from the magnitude of the inclination angle. If the ground is detected, the initial ground area is screened according to the distances from all points in the three-dimensional point cloud to the initial ground area to obtain a first ground area; otherwise, ground detection is performed directly on the next-frame depth map.
  • After the ground is detected, the type of the ground is determined from the magnitude of its inclination angle using formula (8).
  • The types of ground include: horizontal ground, uphill ground, and downhill ground.
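  • A sketch of this fitting-and-classification step is given below: a least-squares plane fit over the initial ground area, the inclination angle taken as the angle between the plane normal and the vertical unit vector (one plausible reading of formula (7)), and a three-way decision in the spirit of formula (8). The threshold values θ_0 and θ_1 and the exact forms of both formulas are assumptions; RANSAC could replace the least-squares fit.

```python
import numpy as np

def classify_ground(ground_pts: np.ndarray,
                    theta0: float = np.deg2rad(5.0),
                    theta1: float = np.deg2rad(20.0)):
    """Fit a plane to the initial ground area and classify it.

    Returns (theta, label), where label is 'horizontal', 'slope', or 'none'.
    """
    # Least-squares plane through the centroid: the normal is the right
    # singular vector associated with the smallest singular value.
    centered = ground_pts - ground_pts.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    normal = vt[-1]
    up = np.array([0.0, -1.0, 0.0])               # vertical-up unit vector (Y_w points down)
    cos_t = abs(normal @ up) / np.linalg.norm(normal)
    theta = np.arccos(np.clip(cos_t, -1.0, 1.0))  # assumed form of formula (7)
    if theta <= theta0:
        return theta, "horizontal"
    if theta <= theta1:
        return theta, "slope"   # distinguishing uphill from downhill needs the slope's sign
    return theta, "none"
```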
  • The ground detection method provided in this embodiment constructs a three-dimensional point cloud in the world coordinate system from the acquired depth map and the attitude angle of the camera and performs ground detection based on that point cloud, without requiring the position and attitude of the sensor to be constrained; it is therefore universally applicable.
  • The second embodiment of the present application relates to a ground detection method.
  • This embodiment is a further improvement on the first embodiment.
  • The specific improvements are that the way the initial ground area is screened is described in detail, and the ground height of the next-frame depth map is updated according to the first ground area, which increases the accuracy of ground detection.
  • The flow of the ground detection method in this embodiment is shown in FIG. 4.
  • Steps 201 to 209 are included, where steps 201 to 203 are substantially the same as steps 101 to 103 in the first embodiment and are not repeated here; the following mainly describes the differences.
  • For technical details not described exhaustively in this embodiment, refer to the ground detection method provided in the first embodiment.
  • After step 203, step 204 is performed.
  • In step 205, whether the ground is detected is judged according to the inclination angle. If the ground is detected, step 206 is performed; otherwise, step 209 is performed.
  • In step 206, the initial ground area is screened to obtain a first ground area.
  • Specifically, a ground undulation tolerance σ is set, and the distances from all points in the three-dimensional point cloud to the initial ground area are calculated, where p is any point in the three-dimensional point cloud. The points belonging to the first ground area are determined according to formula (9), and the first ground area is obtained from the plane formed by the determined points.
  • Here Gnd_1 is the first ground area, σ is the ground undulation tolerance, and Dist_p is the distance from the point p in the three-dimensional point cloud to the initial ground area.
  • In step 207, the average height of the first ground area is calculated.
  • The average height of the first ground area can be determined from all the points contained in it; specifically, it can be calculated using formula (10), where H is the average height of the first ground area, k is the number of points contained in the first ground area, and P_i(y) is the y-coordinate value corresponding to the i-th point in the first ground area.
  • In step 208, the ground height of the next-frame depth map is updated according to the average height of the first ground area.
  • After the average height of the first ground area is calculated, the obtained ground height is passed to the next frame, thereby updating the ground height of the next-frame depth map.
  • In step 209, ground detection is performed on the next-frame depth map.
  • If it is judged from the inclination angle that no ground is currently detected, ground detection is performed directly on the next-frame depth map; if the ground is currently detected, the ground height of the next frame is first updated with the average ground height determined from the current frame, and ground detection is then performed on the next-frame depth map.
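  • A sketch of steps 206 to 208 is given below: screen the cloud by point-to-plane distance against the undulation tolerance σ (one plausible reading of formula (9)), average the height of the retained points (formula (10)), and carry that height forward for the next frame. The plane is represented by its fitted coefficients, and the default tolerance is an assumed value.

```python
import numpy as np

def refine_and_update(points_w: np.ndarray, plane: tuple, prev_state: dict,
                      sigma: float = 0.05):
    """Screen the point cloud against the fitted plane and update frame state.

    plane is (A, B, C, D) for the plane equation A*x + B*y + C*z = D;
    prev_state is a dict carrying the ground height between frames.
    """
    a, b, c, d = plane
    n = np.array([a, b, c])
    dist = np.abs(points_w @ n - d) / np.linalg.norm(n)   # point-to-plane distances
    first_ground = points_w[dist <= sigma]     # formula (9), assumed: keep Dist_p <= sigma
    if len(first_ground) == 0:
        return None
    avg_height = first_ground[:, 1].mean()     # formula (10): mean of the y-coordinates
    prev_state["ground_height"] = avg_height   # used to derive ThdY_pre for the next frame
    return first_ground, avg_height
```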
  • the ground detection method provided in this embodiment constructs a three-dimensional point cloud in the world coordinate system by using the acquired depth map and the attitude angle of the camera, and performs ground detection based on the three-dimensional point cloud in the world coordinate system.
  • The ground height of the next-frame depth map is updated with the ground detection result of the current-frame depth map, which reflects temporal continuity and makes the detection result more accurate.
  • the third embodiment of the present application relates to a ground detection device.
  • the specific structure is shown in FIG. 5.
  • the ground detection device includes a first acquisition module 301, a construction module 302, a second acquisition module 303, and a detection module 304.
  • the first acquisition module 301 is configured to acquire a depth map and an attitude angle of the camera.
  • a construction module 302 is configured to construct a three-dimensional point cloud in a world coordinate system according to a depth map and an attitude angle.
  • the second acquisition module 303 is configured to acquire an initial ground area according to a three-dimensional point cloud in the world coordinate system.
  • the detection module 304 is configured to calculate an inclination angle of an initial ground area, and determine a ground detection result according to the inclination angle.
  • this embodiment is a device example corresponding to the first embodiment, and this embodiment can be implemented in cooperation with the first embodiment.
  • the related technical details mentioned in the first embodiment are still valid in this embodiment, and in order to reduce repetition, details are not repeated here. Accordingly, the related technical details mentioned in this embodiment can also be applied in the first embodiment.
  • the fourth embodiment of the present application relates to a ground detection device.
  • This embodiment is substantially the same as the third embodiment, and the specific structure is shown in FIG. 6.
  • the main improvement is that the fourth embodiment specifically describes the detection module 304 in the third embodiment.
  • the detection module 304 specifically includes: a judgment sub-module 3041, a screening sub-module 3042, a calculation sub-module 3043, an update sub-module 3044, and a detection sub-module 3045.
  • the judging sub-module 3041 is used to judge, according to the magnitude of the inclination angle, whether the ground is detected. If the ground is detected, the initial ground area is screened by the screening sub-module 3042; otherwise, ground detection is performed directly on the next-frame depth map by the detection sub-module 3045.
  • a screening sub-module 3042 is configured to filter the initial ground area according to the distances from all points in the three-dimensional point cloud to the initial ground area to obtain a first ground area.
  • a calculation sub-module 3043 is configured to calculate an average height of the first ground area.
  • An update submodule 3044 is configured to update the ground height of the depth map of the next frame according to the average height of the first ground area.
  • the detection sub-module 3045 is used to perform ground detection directly on the next-frame depth map when the judging sub-module 3041 determines from the inclination angle that the ground is not currently detected, and to perform ground detection on the next-frame depth map with the updated ground height when the judging sub-module 3041 determines that the ground is currently detected.
  • this embodiment is a device example corresponding to the second embodiment, and this embodiment can be implemented in cooperation with the second embodiment. Relevant technical details mentioned in the second embodiment are still valid in this embodiment, and in order to reduce repetition, details are not repeated here. Accordingly, related technical details mentioned in this embodiment can also be applied in the second embodiment.
  • a fifth embodiment of the present application relates to an electronic device, and a specific structure thereof is shown in FIG. 7. It includes at least one processor 501; and a memory 502 communicatively connected to the at least one processor 501.
  • the memory 502 stores instructions executable by the at least one processor 501, and the instructions are executed by the at least one processor 501, so that the at least one processor 501 can execute a ground detection method.
  • the processor 501 uses a central processing unit (CPU) as an example, and the memory 502 uses a random access memory (RAM) as an example.
  • the processor 501 and the memory 502 may be connected through a bus or in other manners; in FIG. 7, connection through a bus is taken as an example.
  • the memory 502, as a non-volatile computer-readable storage medium, can be used to store non-volatile software programs, non-volatile computer-executable programs, and modules; the program implementing the ground detection method in the embodiments of the present application is stored in the memory 502.
  • the processor 501 executes the various functional applications and data processing of the device by running the non-volatile software programs, instructions, and modules stored in the memory 502, that is, implements the above-mentioned ground detection method.
  • the memory 502 may include a storage program area and a storage data area, where the storage program area may store an operating system and an application program required for at least one function; the storage data area may store a list of options and the like.
  • the memory may include a high-speed random access memory, and may further include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or other non-volatile solid-state storage device.
  • the memory 502 may optionally include a memory remotely set relative to the processor 501, and these remote memories may be connected to an external device through a network. Examples of such networks include, but are not limited to, the Internet, intranets, local area networks, mobile communication networks, and combinations thereof.
  • One or more program modules are stored in the memory 502 and, when executed by the one or more processors 501, perform the ground detection method in any of the above method embodiments.
  • the above product can execute the method provided in the embodiments of the present application and has the functional modules and beneficial effects corresponding to the execution of the method; for technical details not described exhaustively in this embodiment, refer to the method provided in the embodiments of the present application.
  • the sixth embodiment of the present application relates to a computer-readable storage medium.
  • a computer program is stored in the computer-readable storage medium, and when the computer program is executed by a processor, the ground detection method involved in any method embodiment of the present application can be implemented.
  • Those skilled in the art can understand that all or part of the steps of the methods in the above embodiments can be implemented by a program instructing the relevant hardware; the program is stored in a storage medium and includes several instructions for causing a device (which may be a single-chip microcomputer, a chip, or the like) or a processor to execute all or part of the steps of the methods described in the embodiments of the present application.
  • The foregoing storage media include various media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

A ground detection method, a related device, and a computer-readable storage medium, relating to the field of detection technology. The method includes: acquiring a depth map and the attitude angle of a camera (101); constructing a three-dimensional point cloud in a world coordinate system according to the depth map and the attitude angle of the camera (102); acquiring an initial ground area according to the three-dimensional point cloud in the world coordinate system (103); and calculating the inclination angle of the initial ground area and determining a ground detection result according to the inclination angle (104). A three-dimensional point cloud in the world coordinate system is constructed from the acquired depth map and the attitude angle of the camera, and ground detection is performed based on that point cloud, without the position and attitude of the sensor having to be constrained, so the method is universally applicable.

Description

Ground detection method, related device, and computer-readable storage medium
Technical Field
The present application relates to the field of detection technology, and in particular to a ground detection method, a related device, and a computer-readable storage medium.
Background Art
In fields such as blind guidance, robotics, and autonomous driving, ground detection is an extremely important key technology. Traditional ground detection methods based on RGB images generally rely on prior information such as the color and edges of the ground; they are therefore widely used in simple environments but are not applicable in complex environments. With the development of three-dimensional sensor technology, ground detection methods based on depth images are gradually being applied in complex environments.
Technical Problem
During research on the prior art, the inventors found that although the ground detection methods based on depth images in the prior art no longer rely on prior information such as the color and edges of the ground, they usually need to constrain the position and attitude of the sensor and are therefore not universally applicable.
Technical Solution
A technical problem to be solved by some embodiments of the present application is to provide a ground detection method, a related device, and a computer-readable storage medium that solve the above technical problem.
An embodiment of the present application provides a ground detection method, including: acquiring a depth map and the attitude angle of a camera; constructing a three-dimensional point cloud in a world coordinate system according to the depth map and the attitude angle of the camera; acquiring an initial ground area according to the three-dimensional point cloud in the world coordinate system; and calculating the inclination angle of the initial ground area and determining the ground detection result according to the inclination angle.
An embodiment of the present application further provides a ground detection device, including: a first acquisition module for acquiring a depth map and the attitude angle of a camera; a construction module for constructing a three-dimensional point cloud in a world coordinate system according to the depth map and the attitude angle of the camera; a second acquisition module for acquiring an initial ground area according to the three-dimensional point cloud in the world coordinate system; and a detection module for calculating the inclination angle of the initial ground area and determining the ground detection result according to the inclination angle.
An embodiment of the present application further provides an electronic device, including: at least one processor; and a memory communicatively connected to the at least one processor; wherein the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor to enable the at least one processor to execute the ground detection method involved in any method embodiment of the present application.
An embodiment of the present application further provides a computer-readable storage medium storing computer instructions, the computer instructions being used to cause a computer to execute the ground detection method involved in any method embodiment of the present application.
Beneficial Effects
Compared with the prior art, the embodiments of the present application construct a three-dimensional point cloud in the world coordinate system from the acquired depth map and the attitude angle of the camera and perform ground detection based on that point cloud, without requiring the position and attitude of the sensor to be constrained, and are therefore universally applicable.
Brief Description of the Drawings
One or more embodiments are illustrated by the figures in the corresponding drawings; these illustrations do not limit the embodiments. Elements with the same reference numerals in the drawings denote similar elements, and unless otherwise stated, the figures in the drawings are not drawn to scale.
FIG. 1 is a flowchart of the ground detection method in the first embodiment of the present application;
FIG. 2 is a diagram of the relationship between the pixel coordinate system and the camera coordinate system in the first embodiment of the present application;
FIG. 3 is a diagram of the relationship between the camera coordinate system and the world coordinate system in the first embodiment of the present application;
FIG. 4 is a flowchart of the ground detection method in the second embodiment of the present application;
FIG. 5 is a block diagram of the ground detection device in the third embodiment of the present application;
FIG. 6 is a block diagram of the ground detection device in the fourth embodiment of the present application;
FIG. 7 is a structural example diagram of the electronic device in the fifth embodiment of the present application.
Embodiments of the Invention
To make the objectives, technical solutions, and advantages of the present application clearer, some embodiments of the present application are described in further detail below with reference to the drawings and embodiments. It should be understood that the specific embodiments described here are intended only to explain the present application and are not intended to limit it.
The first embodiment of the present application relates to a ground detection method; the entity executing the ground detection method may be a blind-guiding helmet or an intelligent robot. The specific flow of the ground detection method is shown in FIG. 1 and includes the following steps:
In step 101, a depth map and the attitude angle of the camera are acquired.
Specifically, in this embodiment, the depth map is acquired by a depth camera, and the attitude angle of the camera is acquired by an attitude sensor.
After the depth map is acquired, scale normalization is applied to it, and the subsequent ground detection steps are performed on the scale-normalized depth map, which accelerates the computation so that the ground detection result is obtained quickly.
In a specific implementation, the scale normalization of the depth map is performed as follows: a scale normalization factor is calculated according to the depth map and a preset normalization scale, and the scale-normalized depth map is calculated according to the depth map and the scale normalization factor. The specific calculation process is as follows:
The scale normalization factor is calculated using formula (1), which is expressed as follows:
Figure PCTCN2018094906-appb-000001
where S represents the scale normalization factor, W represents the width of the depth map, H represents the height of the depth map, and Norm represents the preset normalization scale. Norm is a preset known quantity, which remains the same for every depth map.
The scale-normalized depth map is calculated using formula (2), which is expressed as follows:
Figure PCTCN2018094906-appb-000002
where W_s represents the width of the scale-normalized depth map and H_s represents the height of the scale-normalized depth map. The scale-normalized depth map is determined by W_s and H_s.
In step 102, a three-dimensional point cloud in the world coordinate system is constructed according to the depth map and the attitude angle of the camera.
Specifically, a three-dimensional point cloud in the camera coordinate system is constructed from the scale-normalized depth map, and the three-dimensional point cloud in the world coordinate system is constructed from the point cloud in the camera coordinate system and the attitude angle of the camera. Formula (3) is used to construct the three-dimensional point cloud in the camera coordinate system and is expressed as follows:
Figure PCTCN2018094906-appb-000003
where u and v are the position coordinates of a pixel in the normalized depth map, M_3×4 is the intrinsic parameter matrix of the camera, and X_c, Y_c, and Z_c are the coordinate values of the three-dimensional point in the camera coordinate system; Z_c is the depth value of the pixel in the normalized depth map and is a known quantity.
Formula (4) is used to construct the three-dimensional point cloud in the world coordinate system.
Figure PCTCN2018094906-appb-000004
where X_w, Y_w, and Z_w are the coordinate values of the three-dimensional point cloud in the world coordinate system, and α, β, and γ are the attitude angles of the camera.
It should be noted that, when determining the directions of the coordinate systems, the standard image coordinate system is set to o_1-xy; the relationship between the camera coordinate system and the pixel coordinate system is shown in FIG. 2, and the relationship between the camera coordinate system and the world coordinate system is shown in FIG. 3.
As shown in FIG. 2, the rectangular coordinate system o-uv in units of pixels, established with the upper-left corner of the depth image as the origin, is called the pixel coordinate system. The horizontal coordinate u and the vertical coordinate v of a pixel are, respectively, the column number and the row number at which the pixel is located in the image array. The origin o_1 of the image coordinate system o_1-xy is defined as the intersection of the camera optical axis with the depth image plane; the x-axis is parallel to the u-axis and the y-axis is parallel to the v-axis. The camera coordinate system O_c-X_cY_cZ_c takes the camera optical center O_c as its origin; the X_c and Y_c axes are parallel to the x- and y-axes of the image coordinate system, and the Z_c axis is the optical axis of the camera, perpendicular to the image plane and intersecting it at o_1.
As shown in FIG. 3, the origin O_w of the world coordinate system O_w-X_wY_wZ_w coincides with the origin O_c of the camera coordinate system, both being the camera optical center; the horizontal direction to the right is taken as the positive direction of the X_w axis, the vertical downward direction as the positive direction of the Y_w axis, and the direction perpendicular to the X_wY_w plane and pointing straight ahead as the positive direction of the Z_w axis, thereby establishing the world coordinate system.
In step 103, an initial ground area is obtained according to the three-dimensional point cloud in the world coordinate system.
Specifically, automatic threshold segmentation in the height direction is applied to the three-dimensional point cloud in the world coordinate system to obtain a second ground area; fixed threshold segmentation in the distance direction is applied to the three-dimensional point cloud in the world coordinate system to obtain a third ground area; and the initial ground area is obtained from the second ground area and the third ground area.
The coordinate values X_w, Y_w, and Z_w of the three-dimensional point cloud in the world coordinate system are the coordinate sets in the three directions: Y_w is the coordinate set in the height direction, Z_w is the coordinate set in the distance direction, and X_w is the coordinate set in the left-right direction.
It should be noted that the height direction in the embodiments of the present application refers to the direction designated by the Y_w axis of the world coordinate system, and the distance direction refers to the direction designated by the Z_w axis of the world coordinate system, pointing straight ahead.
In a specific implementation, automatic threshold segmentation in the height direction is applied to the three-dimensional point cloud in the world coordinate system, and the second ground area is obtained as follows: a first segmentation threshold is calculated from a region of interest (ROI) in the height direction selected by the user in the three-dimensional point cloud in the world coordinate system, and a second segmentation threshold is calculated from the ground height of the depth map of the frame preceding the current depth map. According to the first segmentation threshold and the second segmentation threshold, automatic threshold segmentation in the height direction is applied to the three-dimensional point cloud in the world coordinate system, and the second ground area is obtained using formula (5), which is expressed as follows:
Y_mask = a*ThdY_roi + b*ThdY_pre    (5)
where a and b are weighting coefficients, which can be set by the user according to actual needs, ThdY_roi is the first segmentation threshold, ThdY_pre is the second segmentation threshold, and Y_mask is the second ground area.
It should be noted that, when obtaining the first segmentation threshold and the second segmentation threshold, the automatic threshold segmentation algorithms that can be used include the mean method, the Gaussian method, and Otsu's method; since automatic threshold segmentation algorithms are already mature, they are not described further in this embodiment.
Fixed threshold segmentation in the distance direction is applied to the three-dimensional point cloud in the world coordinate system, and the third ground area is obtained as follows: the minimum coordinate value in the distance direction selected by the user in the three-dimensional point cloud in the world coordinate system is taken as the third segmentation threshold and denoted Z_min; the maximum coordinate value in the distance direction selected by the user in the three-dimensional point cloud in the world coordinate system is taken as the fourth segmentation threshold and denoted Z_max; according to the third segmentation threshold and the fourth segmentation threshold, fixed threshold segmentation in the distance direction is applied to the three-dimensional point cloud in the world coordinate system to obtain the third ground area, denoted Z_mask. That is, the area obtained by retaining the points whose Z_w values lie between Z_min and Z_max is the third ground area.
According to the second ground area and the third ground area, the initial ground area is obtained using formula (6), which is expressed as follows:
Gnd_o = Y_mask ∩ Z_mask    (6)
where Gnd_o is the initial ground area, Y_mask is the second ground area, and Z_mask is the third ground area. The specific physical meaning of this formula is that the second ground area determines the suspected ground area in the height direction, and the third ground area further limits the extent of the second ground area in the distance direction, thereby ensuring the accuracy of the finally obtained initial ground area.
In step 104, the inclination angle of the initial ground area is calculated, and the ground detection result is determined according to the inclination angle.
It should be noted that, after the initial ground area is obtained, plane fitting can be performed on it to obtain the general equation Ax + By + Cz = D of the plane containing the initial ground area.
When performing the plane fitting, the points of the initial ground area are used as known quantities, and the least-squares method or the random sample consensus (RANSAC) algorithm is applied to the initial ground area to obtain the general equation of the plane containing the initial ground area. Of course, other fitting methods may also be used to fit the plane of the initial ground area; the specific plane-fitting method is not limited in the embodiments of the present application.
From the general equation of the plane, the normal vector of the initial ground area can be determined:
Figure PCTCN2018094906-appb-000005
According to the normal vector
Figure PCTCN2018094906-appb-000006
and the vertically upward unit vector
Figure PCTCN2018094906-appb-000007
the inclination angle of the initial ground area is calculated using formula (7), which is expressed as follows:
Figure PCTCN2018094906-appb-000008
where θ is the inclination angle of the initial ground area,
Figure PCTCN2018094906-appb-000009
is the normal vector of the initial ground area, and
Figure PCTCN2018094906-appb-000010
is the vertically upward unit vector.
Specifically, the maximum inclination angle of horizontal ground is set to θ_0 and the maximum inclination angle of sloped ground to θ_1, where 0 < θ_0 < θ_1, and the criterion for judging the initial ground area is set as shown in formula (8):
Figure PCTCN2018094906-appb-000011
According to the criterion of formula (8), whether the ground is detected is judged from the magnitude of the inclination angle. If the ground is detected, the initial ground area is screened according to the distances from all points in the three-dimensional point cloud to the initial ground area to obtain a first ground area; otherwise, ground detection is performed directly on the next-frame depth map.
It should be noted that, after the ground is detected, the type of the ground is determined from the magnitude of its inclination angle using formula (8); the types of ground include horizontal ground, uphill ground, and downhill ground.
Compared with the prior art, the ground detection method provided in this embodiment constructs a three-dimensional point cloud in the world coordinate system from the acquired depth map and the attitude angle of the camera and performs ground detection based on that point cloud, without requiring the position and attitude of the sensor to be constrained; it is therefore universally applicable.
The second embodiment of the present application relates to a ground detection method. This embodiment is a further improvement on the first embodiment; the specific improvements are that the way the initial ground area is screened is described in detail and, in addition, the ground height of the next-frame depth map is updated according to the first ground area, which increases the accuracy of ground detection. The flow of the ground detection method in this embodiment is shown in FIG. 4.
Specifically, this embodiment includes steps 201 to 209, of which steps 201 to 203 are substantially the same as steps 101 to 103 in the first embodiment and are not repeated here; the differences are mainly described below. For technical details not described exhaustively in this embodiment, refer to the ground detection method provided in the first embodiment.
After step 203, step 204 is performed.
In step 205, whether the ground is detected is judged according to the inclination angle; if the ground is detected, step 206 is performed, otherwise step 209 is performed.
In step 206, the initial ground area is screened to obtain a first ground area.
Specifically, a ground undulation tolerance σ is set, and the distances from all points in the three-dimensional point cloud to the initial ground area are calculated, where p is any point in the three-dimensional point cloud. The points belonging to the first ground area are determined according to formula (9), and the first ground area is obtained from the plane formed by the determined points.
Figure PCTCN2018094906-appb-000012
where Gnd_1 is the first ground area, σ is the ground undulation tolerance, and Dist_p is the distance from the point p in the three-dimensional point cloud to the initial ground area.
In step 207, the average height of the first ground area is calculated.
Specifically, after the first ground area is obtained, its average height can be determined from all the points it contains; it can be calculated using formula (10), which is expressed as follows:
Figure PCTCN2018094906-appb-000013
where H is the average height of the first ground area, k is the number of points contained in the first ground area, and P_i(y) is the y-coordinate value corresponding to the i-th point in the first ground area.
In step 208, the ground height of the next-frame depth map is updated according to the average height of the first ground area.
After the average height of the first ground area is calculated, the obtained ground height is passed to the next frame, thereby updating the ground height of the next-frame depth map.
In step 209, ground detection is performed on the next-frame depth map.
It should be noted that, if it is judged from the inclination angle that no ground is currently detected, ground detection is performed directly on the next-frame depth map; if it is judged that the ground is currently detected, the ground height of the next frame is first updated with the average ground height determined from the current frame, and ground detection is then performed on the next-frame depth map.
Compared with the prior art, the ground detection method provided in this embodiment constructs a three-dimensional point cloud in the world coordinate system from the acquired depth map and the attitude angle of the camera and performs ground detection based on that point cloud, without requiring the position and attitude of the sensor to be constrained; it is therefore universally applicable. In addition, the ground height of the next-frame depth map is updated with the ground detection result of the current-frame depth map, which reflects temporal continuity and makes the detection result more accurate.
The third embodiment of the present application relates to a ground detection device, the specific structure of which is shown in FIG. 5.
As shown in FIG. 5, the ground detection device includes a first acquisition module 301, a construction module 302, a second acquisition module 303, and a detection module 304.
The first acquisition module 301 is used to acquire a depth map and the attitude angle of the camera.
The construction module 302 is used to construct a three-dimensional point cloud in the world coordinate system according to the depth map and the attitude angle.
The second acquisition module 303 is used to acquire an initial ground area according to the three-dimensional point cloud in the world coordinate system.
The detection module 304 is used to calculate the inclination angle of the initial ground area and determine the ground detection result according to the inclination angle.
It is readily apparent that this embodiment is a device embodiment corresponding to the first embodiment and can be implemented in cooperation with the first embodiment. The related technical details mentioned in the first embodiment remain valid in this embodiment and, to reduce repetition, are not repeated here; correspondingly, the related technical details mentioned in this embodiment can also be applied in the first embodiment.
The fourth embodiment of the present application relates to a ground detection device. This embodiment is substantially the same as the third embodiment, and the specific structure is shown in FIG. 6. The main improvement is that the fourth embodiment describes the detection module 304 of the third embodiment in detail. The detection module 304 specifically includes a judging sub-module 3041, a screening sub-module 3042, a calculation sub-module 3043, an update sub-module 3044, and a detection sub-module 3045.
The judging sub-module 3041 is used to judge, according to the magnitude of the inclination angle, whether the ground is detected; if the ground is detected, the initial ground area is screened by the screening sub-module 3042, otherwise ground detection is performed directly on the next-frame depth map by the detection sub-module 3045.
The screening sub-module 3042 is used to screen the initial ground area according to the distances from all points in the three-dimensional point cloud to the initial ground area, so as to obtain a first ground area.
The calculation sub-module 3043 is used to calculate the average height of the first ground area.
The update sub-module 3044 is used to update the ground height of the next-frame depth map according to the average height of the first ground area.
The detection sub-module 3045 is used to perform ground detection directly on the next-frame depth map when the judging sub-module 3041 determines from the inclination angle that the ground is not currently detected, and to perform ground detection on the next-frame depth map with the updated ground height when the judging sub-module 3041 determines that the ground is currently detected.
It is readily apparent that this embodiment is a device embodiment corresponding to the second embodiment and can be implemented in cooperation with the second embodiment. The related technical details mentioned in the second embodiment remain valid in this embodiment and, to reduce repetition, are not repeated here; correspondingly, the related technical details mentioned in this embodiment can also be applied in the second embodiment.
The device embodiments described above are merely illustrative and do not limit the scope of protection of the present application. In practical applications, those skilled in the art may select some or all of the modules according to actual needs to achieve the purposes of the solutions of this embodiment, and no limitation is imposed here.
The fifth embodiment of the present application relates to an electronic device, the specific structure of which is shown in FIG. 7. It includes at least one processor 501 and a memory 502 communicatively connected to the at least one processor 501. The memory 502 stores instructions executable by the at least one processor 501, and the instructions are executed by the at least one processor 501 so that the at least one processor 501 can execute the ground detection method.
In this embodiment, the processor 501 is exemplified by a central processing unit (CPU), and the memory 502 is exemplified by a random access memory (RAM). The processor 501 and the memory 502 may be connected through a bus or in other manners; in FIG. 7, connection through a bus is taken as an example. As a non-volatile computer-readable storage medium, the memory 502 can be used to store non-volatile software programs, non-volatile computer-executable programs, and modules; the program implementing the ground detection method in the embodiments of the present application is stored in the memory 502. The processor 501 executes the various functional applications and data processing of the device by running the non-volatile software programs, instructions, and modules stored in the memory 502, that is, implements the above ground detection method.
The memory 502 may include a program storage area and a data storage area, where the program storage area may store the operating system and the application programs required for at least one function, and the data storage area may store option lists and the like. In addition, the memory may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device. In some embodiments, the memory 502 may optionally include memory located remotely from the processor 501, and such remote memory may be connected to the external device through a network. Examples of such networks include, but are not limited to, the Internet, intranets, local area networks, mobile communication networks, and combinations thereof.
One or more program modules are stored in the memory 502 and, when executed by the one or more processors 501, perform the ground detection method in any of the above method embodiments.
The above product can execute the method provided in the embodiments of the present application and has the functional modules and beneficial effects corresponding to the execution of the method; for technical details not described exhaustively in this embodiment, refer to the method provided in the embodiments of the present application.
The sixth embodiment of the present application relates to a computer-readable storage medium in which a computer program is stored; when the computer program is executed by a processor, the ground detection method involved in any method embodiment of the present application can be implemented.
Those skilled in the art can understand that all or part of the steps of the methods in the above embodiments can be implemented by a program instructing the relevant hardware; the program is stored in a storage medium and includes several instructions for causing a device (which may be a single-chip microcomputer, a chip, or the like) or a processor to execute all or part of the steps of the methods described in the embodiments of the present application. The foregoing storage media include various media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
Those of ordinary skill in the art can understand that the above embodiments are specific embodiments for implementing the present application, and that in practical applications various changes may be made to them in form and detail without departing from the spirit and scope of the present application.

Claims (12)

  1. A ground detection method, comprising:
    acquiring a depth map and the attitude angle of a camera;
    constructing a three-dimensional point cloud in a world coordinate system according to the depth map and the attitude angle of the camera;
    acquiring an initial ground area according to the three-dimensional point cloud in the world coordinate system;
    calculating the inclination angle of the initial ground area, and determining a ground detection result according to the inclination angle.
  2. The ground detection method according to claim 1, wherein, before the constructing of the three-dimensional point cloud in the world coordinate system according to the depth map and the attitude angle of the camera, the ground detection method further comprises:
    calculating a scale normalization factor according to the depth map and a preset normalization scale;
    calculating a scale-normalized depth map according to the depth map and the scale normalization factor.
  3. The ground detection method according to claim 2, wherein constructing the three-dimensional point cloud in the world coordinate system according to the depth map and the attitude angle of the camera comprises:
    constructing a three-dimensional point cloud in a camera coordinate system according to the scale-normalized depth map;
    constructing the three-dimensional point cloud in the world coordinate system according to the three-dimensional point cloud in the camera coordinate system and the attitude angle of the camera.
  4. The ground detection method according to any one of claims 1 to 3, wherein determining the ground detection result according to the inclination angle comprises:
    judging, according to the magnitude of the inclination angle, whether the ground is detected; if the ground is detected, screening the initial ground area according to the distances from all points in the three-dimensional point cloud to the initial ground area to obtain a first ground area;
    otherwise, performing ground detection directly on the next-frame depth map.
  5. The ground detection method according to claim 4, wherein, after the ground is detected, the ground detection method further comprises:
    determining the type of the ground according to the magnitude of the inclination angle, wherein the types of the ground include: horizontal ground, uphill ground, and downhill ground.
  6. The ground detection method according to claim 4, wherein, after the first ground area is obtained, the ground detection method further comprises:
    calculating the average height of the first ground area;
    updating the ground height of the next-frame depth map according to the average height of the first ground area;
    performing ground detection on the next-frame depth map with the updated ground height.
  7. The ground detection method according to any one of claims 1 to 6, wherein acquiring the initial ground area according to the three-dimensional point cloud in the world coordinate system comprises:
    performing automatic threshold segmentation in the height direction on the three-dimensional point cloud in the world coordinate system to obtain a second ground area;
    performing fixed threshold segmentation in the distance direction on the three-dimensional point cloud in the world coordinate system to obtain a third ground area;
    obtaining the initial ground area according to the second ground area and the third ground area.
  8. The ground detection method according to claim 7, wherein performing automatic threshold segmentation in the height direction on the three-dimensional point cloud in the world coordinate system to obtain the second ground area comprises:
    calculating a first segmentation threshold according to a region of interest in the height direction selected by the user in the three-dimensional point cloud in the world coordinate system;
    calculating a second segmentation threshold according to the ground height of the depth map of the previous frame;
    performing, according to the first segmentation threshold and the second segmentation threshold, automatic threshold segmentation in the height direction on the three-dimensional point cloud in the world coordinate system to obtain the second ground area.
  9. The ground detection method according to claim 7, wherein performing fixed threshold segmentation in the distance direction on the three-dimensional point cloud in the world coordinate system to obtain the third ground area comprises:
    taking the minimum coordinate value in the distance direction selected by the user in the three-dimensional point cloud in the world coordinate system as a third segmentation threshold;
    taking the maximum coordinate value in the distance direction selected by the user in the three-dimensional point cloud in the world coordinate system as a fourth segmentation threshold;
    performing, according to the third segmentation threshold and the fourth segmentation threshold, fixed threshold segmentation in the distance direction on the three-dimensional point cloud in the world coordinate system to obtain the third ground area.
  10. A ground detection device, comprising:
    a first acquisition module for acquiring a depth map and the attitude angle of a camera;
    a construction module for constructing a three-dimensional point cloud in a world coordinate system according to the depth map and the attitude angle;
    a second acquisition module for acquiring an initial ground area according to the three-dimensional point cloud in the world coordinate system;
    a detection module for calculating the inclination angle of the initial ground area and determining a ground detection result according to the inclination angle.
  11. An electronic device, comprising:
    at least one processor; and
    a memory communicatively connected to the at least one processor; wherein
    the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor to enable the at least one processor to execute the ground detection method according to any one of claims 1 to 9.
  12. A computer-readable storage medium storing a computer program which, when executed by a processor, implements the ground detection method according to any one of claims 1 to 9.
PCT/CN2018/094906 2018-07-06 2018-07-06 Ground detection method, related device, and computer-readable storage medium WO2020006765A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201880001111.0A CN108885791B (zh) 2018-07-06 2018-07-06 地面检测方法、相关装置及计算机可读存储介质
PCT/CN2018/094906 WO2020006765A1 (zh) 2018-07-06 2018-07-06 地面检测方法、相关装置及计算机可读存储介质

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2018/094906 WO2020006765A1 (zh) 2018-07-06 2018-07-06 地面检测方法、相关装置及计算机可读存储介质

Publications (1)

Publication Number Publication Date
WO2020006765A1 true WO2020006765A1 (zh) 2020-01-09

Family

ID=64325003

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/094906 WO2020006765A1 (zh) 2018-07-06 2018-07-06 地面检测方法、相关装置及计算机可读存储介质

Country Status (2)

Country Link
CN (1) CN108885791B (zh)
WO (1) WO2020006765A1 (zh)


Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110136174B (zh) * 2019-05-22 2021-06-22 北京华捷艾米科技有限公司 一种目标对象跟踪方法和装置
CN110378246A (zh) * 2019-06-26 2019-10-25 深圳前海达闼云端智能科技有限公司 地面检测方法、装置、计算机可读存储介质及电子设备
CN110399807B (zh) * 2019-07-04 2021-07-16 达闼机器人有限公司 检测地面障碍物的方法、装置、可读存储介质及电子设备
CN112750205B (zh) * 2019-10-30 2023-05-16 南京深视光点科技有限公司 平面动态检测系统及检测方法
CN111476841B (zh) * 2020-03-04 2020-12-29 哈尔滨工业大学 一种基于点云和图像的识别定位方法及系统
CN111586299B (zh) * 2020-05-09 2021-10-19 北京华捷艾米科技有限公司 一种图像处理方法和相关设备
CN112819752A (zh) * 2021-01-05 2021-05-18 中国铁建重工集团股份有限公司 紧固件状态检测方法、系统和可读存储介质


Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2013035612A1 (ja) * 2011-09-09 2013-03-14 日本電気株式会社 障害物検知装置、障害物検知方法及び障害物検知プログラム
CN103955920B (zh) * 2014-04-14 2017-04-12 桂林电子科技大学 基于三维点云分割的双目视觉障碍物检测方法
CN104143194B (zh) * 2014-08-20 2017-09-08 清华大学 一种点云分割方法及装置
CN105426828B (zh) * 2015-11-10 2019-02-15 浙江宇视科技有限公司 人脸检测方法、装置及系统

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104361575A (zh) * 2014-10-20 2015-02-18 湖南戍融智能科技有限公司 深度图像中的自动地面检测及摄像机相对位姿估计方法
US20160154999A1 (en) * 2014-12-02 2016-06-02 Nokia Technologies Oy Objection recognition in a 3d scene
CN106813568A (zh) * 2015-11-27 2017-06-09 阿里巴巴集团控股有限公司 物体测量方法及装置
CN106214437A (zh) * 2016-07-22 2016-12-14 杭州视氪科技有限公司 一种智能盲人辅助眼镜
CN108235774A (zh) * 2018-01-10 2018-06-29 深圳前海达闼云端智能科技有限公司 信息处理方法、装置、云处理设备以及计算机程序产品

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113781628A (zh) * 2020-11-26 2021-12-10 北京沃东天骏信息技术有限公司 一种三维场景搭建方法和装置
CN113140002A (zh) * 2021-03-22 2021-07-20 北京中科慧眼科技有限公司 基于双目立体相机的道路状况检测方法、系统和智能终端
CN113140002B (zh) * 2021-03-22 2022-12-13 北京中科慧眼科技有限公司 基于双目立体相机的道路状况检测方法、系统和智能终端
CN112862017A (zh) * 2021-04-01 2021-05-28 北京百度网讯科技有限公司 点云数据的标注方法、装置、设备和介质
CN112862017B (zh) * 2021-04-01 2023-08-01 北京百度网讯科技有限公司 点云数据的标注方法、装置、设备和介质
CN113658226A (zh) * 2021-08-26 2021-11-16 中国人民大学 一种限高装置高度检测方法和系统
CN113658226B (zh) * 2021-08-26 2023-09-05 中国人民大学 一种限高装置高度检测方法和系统
CN114029953A (zh) * 2021-11-18 2022-02-11 上海擎朗智能科技有限公司 基于深度传感器确定地平面的方法、机器人及机器人系统
CN114029953B (zh) * 2021-11-18 2022-12-20 上海擎朗智能科技有限公司 基于深度传感器确定地平面的方法、机器人及机器人系统
CN114743169A (zh) * 2022-04-11 2022-07-12 南京领行科技股份有限公司 一种对象的异常检测方法、装置、电子设备及存储介质
WO2024060209A1 (zh) * 2022-09-23 2024-03-28 深圳市速腾聚创科技有限公司 一种处理点云的方法和雷达

Also Published As

Publication number Publication date
CN108885791B (zh) 2022-04-08
CN108885791A (zh) 2018-11-23


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18925565

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 14.04.2021)

122 Ep: pct application non-entry in european phase

Ref document number: 18925565

Country of ref document: EP

Kind code of ref document: A1