WO2022165672A1 - Point cloud processing method, apparatus, and computer-readable storage medium - Google Patents

Point cloud processing method, apparatus, and computer-readable storage medium (Download PDF)

Info

Publication number
WO2022165672A1
Authority
WO
WIPO (PCT)
Prior art keywords
point cloud
point
distance
points
surrounding
Prior art date
Application number
PCT/CN2021/075078
Other languages
English (en)
French (fr)
Inventor
黄胜
梁家斌
Original Assignee
深圳市大疆创新科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳市大疆创新科技有限公司 filed Critical 深圳市大疆创新科技有限公司
Priority to PCT/CN2021/075078 priority Critical patent/WO2022165672A1/zh
Publication of WO2022165672A1 publication Critical patent/WO2022165672A1/zh

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/70Denoising; Smoothing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/761Proximity, similarity or dissimilarity measures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/60Type of objects
    • G06V20/64Three-dimensional objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10028Range image; Depth image; 3D point clouds

Definitions

  • the present application relates to the technical field of point cloud processing, and in particular, to a point cloud processing method, apparatus, and computer-readable storage medium.
  • a point cloud can represent a three-dimensional real scene, which is composed of multiple independent point cloud points, and each point cloud point can include three-dimensional coordinates and attribute information.
  • the point cloud inevitably contains noise point cloud points; a noise point cloud point is a point cloud point that has no corresponding entity in the real scene.
  • the existence of noise point cloud points makes the point cloud inconsistent with the real scene and degrades the display effect, and it also reduces the recognition accuracy when the point cloud is used for object recognition and other processing. Therefore, it is necessary to determine the noise point cloud points in the point cloud and remove them.
  • the embodiments of the present application provide a point cloud processing method, device and computer-readable storage medium, one purpose of which is to accurately determine the noise point cloud points in the point cloud so that they can be removed.
  • a first aspect of the embodiments of the present application provides a point cloud processing method, including:
  • acquiring an initial point cloud; for a first point cloud point in the initial point cloud, determining an actual distance between the first point cloud point and surrounding point cloud points, and determining a reference distance between the first point cloud point and the surrounding point cloud points, the reference distance being negatively correlated with the point cloud density of the area where the first point cloud point is located;
  • determining, according to a comparison result of the actual distance and the reference distance, whether the first point cloud point is a noise point cloud point.
  • a second aspect of the embodiments of the present application provides a point cloud processing device, including: a processor and a memory storing a computer program, the processor implements the following steps when executing the computer program:
  • acquiring an initial point cloud; for a first point cloud point in the initial point cloud, determining an actual distance between the first point cloud point and surrounding point cloud points, and determining a reference distance between the first point cloud point and the surrounding point cloud points, the reference distance being negatively correlated with the point cloud density of the area where the first point cloud point is located;
  • determining, according to a comparison result of the actual distance and the reference distance, whether the first point cloud point is a noise point cloud point.
  • a third aspect of the embodiments of the present application provides a computer-readable storage medium, where the computer-readable storage medium stores a computer program which, when executed by a processor, implements the point cloud processing method provided in the first aspect above.
  • for a point cloud point located in an area with a high point cloud density, the corresponding actual distance can be compared with a smaller reference distance, and for a point cloud point located in an area with a low point cloud density, the corresponding actual distance can be compared with a larger reference distance; thus, whether a point cloud point lies in an area with a high or a low point cloud density, it can be accurately judged whether it is a noise point cloud point, which greatly improves the accuracy of noise point cloud point determination.
  • FIG. 1 is a schematic diagram of a point cloud provided by an embodiment of the present application.
  • FIG. 2 is a flowchart of a point cloud processing method provided by an embodiment of the present application.
  • FIG. 3 is a schematic diagram of a mapping relationship between a point cloud point and a first image provided by an embodiment of the present application.
  • FIG. 4 is a schematic structural diagram of a point cloud processing apparatus provided by an embodiment of the present application.
  • a point cloud can represent a three-dimensional real scene, which consists of multiple independent point cloud points, and each point cloud point can include its own three-dimensional coordinates and attribute information.
  • the point cloud can be obtained by laser scanning.
  • the target scene can be scanned by a laser radar to obtain the point cloud corresponding to the target scene.
  • the point cloud can also be obtained by 3D reconstruction using images. For example, multiple images with a certain degree of overlap can be taken of the target scene, and these images can be reconstructed in 3D through a multi-view geometry algorithm to obtain the point cloud corresponding to the target scene.
  • noise point cloud points are point cloud points that do not have corresponding entities in the real scene. For example, in a real scene there is no object in front of the sensor, but in the acquired point cloud corresponding to that scene there is a point cloud point in front of it; since that point cloud point does not exist in the real scene, it is a noise point cloud point.
  • the presence of noise point cloud points distorts the point cloud's representation of the real scene and affects its usability. For example, if the point cloud is used for output viewing, a noise point cloud point will make people mistakenly believe that there is an object at its location in the real scene; if the point cloud is used for intelligent processing such as object recognition, noise point cloud points reduce the recognition accuracy. Therefore, it is necessary to determine the noise point cloud points in the point cloud so that they can be removed.
  • whether a point cloud point is a noise point cloud point can be determined according to the distance from the point cloud point to the surrounding point cloud points. Because noise point cloud points are relatively isolated, if a point cloud point is a noise point cloud point, its distance to the surrounding point cloud points will be relatively large; a threshold can therefore be set, and if the distance between a point cloud point and the surrounding point cloud points is greater than the threshold, the point cloud point can be determined to be a noise point cloud point.
  • all point cloud points in the point cloud use the same threshold to determine whether they are noise point cloud points, which may cause misjudgment in some cases.
  • the point cloud can contain areas with different point cloud densities, such as area B in Figure 1, where the point cloud density is higher and the distances between point cloud points are small, and area A in Figure 1, where the point cloud density is lower and the distances between point cloud points are larger.
  • if the threshold is determined according to the distances between point cloud points in area B, a large number of point cloud points in area A that are not noise will be misjudged as noise point cloud points; if the threshold is determined according to the distances between point cloud points in area A, point cloud points in area B that really are noise (noise point cloud point b) will be misjudged as not being noise.
  • FIG. 2 is a flowchart of the point cloud processing method provided by the embodiment of the present application. The method includes:
  • the initial point cloud can be obtained in various ways; for example, it can be obtained by 3D reconstruction using multiple images corresponding to the target scene, or by scanning the target scene with a laser. There are, of course, other acquisition methods as well.
  • the initial point cloud can also be obtained by probing the target scene with ultrasonic waves, and so on.
  • the initial point cloud includes a plurality of point cloud points.
  • the above steps S204-S208 may be performed for each point cloud point in the initial point cloud, that is, the first point cloud point may be any point cloud point in the initial point cloud.
  • the above steps S204-S208 may also be performed on the point cloud points in a specific area in the initial point cloud, that is, the first point cloud point may be any point cloud point in the specific area in the initial point cloud.
  • the actual distance corresponding to the first point cloud point is the actual distance between the first point cloud point and the surrounding point cloud points; that is, in the point cloud, the actual distance can be calculated from the geometric coordinates of the first point cloud point and the geometric coordinates of the surrounding point cloud points.
  • the distance between the first point cloud point and the nearest surrounding point cloud point may be determined as the actual distance.
  • for example, if the three nearest point cloud points around the first point cloud point are point cloud point A, point cloud point B, and point cloud point C, the distance between the first point cloud point and point cloud point B, the one closest to the first point cloud point, can be determined as the actual distance.
  • the distances between the first point cloud point and the nearest multiple surrounding point cloud points may also be fused, so that the distance obtained by fusion is determined as the actual distance.
  • N may be an integer greater than or equal to 1.
  • other point cloud points except the first point cloud point may be traversed, and the distance to the first point cloud point may be calculated for each of the other point cloud points, so that the calculated distances can be sorted and the N surrounding point cloud points closest to the first point cloud point can be determined.
  • a KD tree corresponding to the initial point cloud can be constructed, so that N surrounding point cloud points corresponding to the first point cloud point can be found through the KD tree.
  • the so-called reference distance can be the distance the first point cloud point should have to the surrounding point cloud points under the point cloud density of the area where it is located; in other words, if the first point cloud point is not a noise point cloud point, then under the point cloud density of its area, its distance to the surrounding point cloud points should be the reference distance. It can be understood that the reference distance is negatively correlated with the point cloud density in the area where the first point cloud point is located.
  • if the point cloud density in the area where the first point cloud point is located is high, the reference distance corresponding to the first point cloud point is small, because under such a point cloud density the distance between a non-noise first point cloud point and the surrounding point cloud points is relatively small; if the point cloud density in the area where the first point cloud point is located is low, the corresponding reference distance is large, because under such a point cloud density the distance between a non-noise first point cloud point and the surrounding point cloud points is relatively large.
  • the noise point cloud point is usually a relatively isolated point, and its distance to the surrounding point cloud points is larger than the distance between a non-noise point cloud point and the surrounding point cloud points. Therefore, for the first point cloud point, if the actual distance corresponding to the first point cloud point is significantly larger than the reference distance corresponding to the first point cloud point, it can be determined that the first point cloud point is a noise point cloud point.
  • the difference between the actual distance and the reference distance can be calculated to obtain a distance difference. If the distance difference is greater than a difference threshold, it can be determined that the actual distance corresponding to the first point cloud point is significantly larger than its corresponding reference distance, and the first point cloud point can be determined to be a noise point cloud point.
  • the difference threshold may be set according to actual requirements or experience, or may be determined according to the distance difference corresponding to each point cloud point in the initial point cloud. Specifically, the actual distance and the reference distance may be determined for each point cloud point in the initial point cloud, so that each point cloud point can determine its corresponding distance difference.
  • the comparison between the actual distance and the reference distance can also be performed by means of a ratio.
  • the ratio of the corresponding actual distance to the reference distance can be calculated to obtain a distance ratio. If the distance ratio is greater than a ratio threshold, it can be determined that the actual distance corresponding to the first point cloud point is significantly greater than its corresponding reference distance, and the first point cloud point can be determined to be a noise point cloud point.
  • the ratio threshold may be set according to actual requirements or experience, or may be determined according to the distance ratio corresponding to each point cloud point in the initial point cloud; specifically, the actual distance and the reference distance can be determined for each point cloud point in the initial point cloud, so that each point cloud point can determine its corresponding distance ratio.
  • the mean mean2 and standard deviation std_dev2 of the distance ratios can be determined, and the ratio threshold can be calculated as threshold2 = mean2 + k*std_dev2 (k can be a natural number, which can be adjusted according to requirements; for example, if more points should be retained, k can be set larger).
  • for a point cloud point located in an area with a high point cloud density, the corresponding actual distance can be compared with a smaller reference distance, and for a point cloud point located in an area with a low point cloud density, the corresponding actual distance can be compared with a larger reference distance; thus, whether a point cloud point lies in an area with a high or a low point cloud density, it can be accurately judged whether it is a noise point cloud point, which greatly improves the accuracy of noise point cloud point determination.
  • the reference distance corresponding to the first point cloud point is the distance that the first point cloud point should have from the surrounding point cloud points under the point cloud density of the region when the first point cloud point is regarded as a non-noise point.
  • the algorithm used in the three-dimensional reconstruction can be selected in various ways, for example, it can be an SFM algorithm, an MVS algorithm, etc., which is not limited in this application.
  • the first image may be any image that can observe the first point cloud point.
  • the first image may be the best observed image of the first point cloud point.
  • the best observation image can be defined in various ways. For example, the image that observes the first point cloud point most clearly can be determined as the best observation image; as another example, the image whose camera lens direction is perpendicular to the surface of the object where the first point cloud point is located can be determined as the best observation image. It is understandable that the perpendicularity here is not absolute: as long as the direction of the camera lens generally faces the first point cloud point, the camera lens direction can be considered perpendicular to the surface of the object where the first point cloud point is located. As yet another example, the image whose camera position is closest to the first point cloud point can be determined as the best observation image.
  • the camera lens direction may be determined according to the pose of the camera in one embodiment.
  • when determining the best observation image, it can be determined by combining the clarity of the image, the camera lens direction corresponding to the image, and the camera position corresponding to the image; for example, the image whose camera position is closest to the first point cloud point and whose camera lens direction most directly faces the first point cloud point can be determined as the best observation image.
  • Ground Sample Distance (GSD) can represent the real-world distance corresponding to a single pixel in the image, that is, the width the pixel maps to in the real world. Since the first point cloud point can be well observed in the first image, the generation (three-dimensional reconstruction) of the first point cloud point and its surrounding point cloud points is strongly related to the first image. In one embodiment, the first point cloud point and its surrounding point cloud points can each be considered to correspond to a pixel in the first image; if the first point cloud point is not noise, the distance between the first point cloud point and the surrounding point cloud points should be exactly the real-world distance between pixels in the first image, that is, the ground sampling distance corresponding to the first image.
  • FIG. 3 is a schematic diagram of a mapping relationship between a point cloud point and a first image provided by an embodiment of the present application.
  • the reference distance between the first point cloud point and the surrounding point cloud points may be a target ground sampling distance, and the target ground sampling distance may be obtained by fusing multiple ground sampling distances corresponding to multiple first images used to observe the first point cloud point. For example, P first images in which the first point cloud point is well observed can be determined from the multiple images used for 3D reconstruction, the ground sampling distance corresponding to each of the P first images can be determined, the P ground sampling distances can be fused, and the fused ground sampling distance can be determined as the reference distance between the first point cloud point and the surrounding point cloud points.
  • the reference distance between the first point cloud point and the surrounding point cloud points can be determined according to the frequency of laser scanning in the area where the first point cloud point is located.
  • different scanning frequencies can be used for different objects in the scene. For an object with a simple structure, such as a four-legged table, since only a small number of point cloud points are needed to represent its structure, it can be scanned with a lower scanning frequency. For objects with complex structures, such as statues with textures, since a large number of point cloud points are required to represent their structures, they can be scanned with a higher scanning frequency.
  • if an area is scanned with a higher scanning frequency, the corresponding point cloud density of this area in the point cloud will be higher; if an area is scanned with a lower scanning frequency, the point cloud density of the corresponding area in the point cloud will be lower. That is, the point cloud density of an area is positively correlated with the laser scanning frequency of the area.
  • the reference distance is negatively correlated with the laser scanning frequency; that is, for point cloud points in an area with a high laser scanning frequency and a high point cloud density, the corresponding reference distance is small, and for point cloud points in an area with a low laser scanning frequency and a low point cloud density, the corresponding reference distance is large. Thus, by comparing the actual distance and the reference distance corresponding to a point cloud point, the noise point cloud points among point cloud points in areas of different point cloud densities can be correctly determined.
  • whether each point cloud point is a noise point cloud point can be determined according to its corresponding actual distance and reference distance. After the noise point cloud points in the initial point cloud are determined, these noise point cloud points can be removed to obtain the target point cloud, that is, the initial point cloud after the noise point cloud points have been removed.
  • the target point cloud can be used for output display. Since the noise point cloud points are removed, the target point cloud can more accurately reflect the real scene and have a better display effect.
  • the target point cloud may be used for specifying processing, and the specified processing may include one or more of the following processing: object recognition, object mapping, obstacle avoidance, and the like.
  • Object recognition can identify the object categories corresponding to different point cloud points in the target point cloud.
  • Object mapping can measure the size information corresponding to the object according to the point cloud points in the target point cloud.
  • Obstacle avoidance can be applied to movable platforms such as unmanned aerial vehicles. Since the noise point cloud points have been removed from the target point cloud, the drone can perform the obstacle avoidance function with higher accuracy and reduce the occurrence of false obstacle avoidance (that is, performing an obstacle avoidance action when there is no obstacle).
  • for a point cloud point located in an area with a high point cloud density, the corresponding actual distance can be compared with a smaller reference distance, and for a point cloud point located in an area with a low point cloud density, the corresponding actual distance can be compared with a larger reference distance; thus, whether a point cloud point lies in an area with a high or a low point cloud density, it can be accurately judged whether it is a noise point cloud point, which greatly improves the accuracy of noise point cloud point determination.
  • FIG. 4 is a schematic structural diagram of a point cloud processing apparatus provided by an embodiment of the present application.
  • the apparatus may include: a processor 410 and a memory 420 storing a computer program, and the processor implements the following steps when executing the computer program:
  • acquiring an initial point cloud; for a first point cloud point in the initial point cloud, determining an actual distance between the first point cloud point and surrounding point cloud points, and determining a reference distance between the first point cloud point and the surrounding point cloud points, the reference distance being negatively correlated with the point cloud density of the area where the first point cloud point is located;
  • determining, according to a comparison result of the actual distance and the reference distance, whether the first point cloud point is a noise point cloud point.
  • the initial point cloud is obtained by performing three-dimensional reconstruction using multiple images corresponding to the target scene.
  • the reference distance between the first point cloud point and the surrounding point cloud points is a ground sampling distance corresponding to the first image used to observe the first point cloud point in the plurality of images.
  • the direction of the camera lens corresponding to the first image is perpendicular to the surface of the object where the first point cloud point is located.
  • the distance between the camera position corresponding to the first image and the first point cloud point is less than or equal to a distance threshold.
  • the initial point cloud is obtained by scanning the target scene with a laser.
  • the reference distance between the first point cloud point and the surrounding point cloud points is determined according to the laser scanning frequency of the area where the first point cloud point is located.
  • the actual distance between the first point cloud point and the surrounding point cloud points is obtained by merging the distances between the first point cloud point and the nearest multiple surrounding point cloud points.
  • the nearest multiple surrounding point cloud points are obtained by searching according to the KD tree corresponding to the initial point cloud.
  • the comparison result includes a distance ratio between the actual distance and the reference distance.
  • when the processor determines, according to the comparison result between the actual distance and the reference distance, whether the first point cloud point is a noise point cloud point, the processor is configured to:
  • if the distance ratio corresponding to the first point cloud point is greater than a ratio threshold, determine that the first point cloud point is a noise point cloud point.
  • the ratio threshold is determined according to the distance ratio corresponding to each point cloud point in the initial point cloud.
  • the processor is also configured to:
  • if it is determined that the first point cloud point is a noise point cloud point, remove the first point cloud point.
  • the initial point cloud is used for output display after noise point cloud points are removed.
  • the initial point cloud is used for specifying processing after removing noise point cloud points, and the specified processing includes one or more of the following: object recognition, object mapping, and obstacle avoidance.
  • for a point cloud point located in an area with a high point cloud density, the corresponding actual distance can be compared with a smaller reference distance, and for a point cloud point located in an area with a low point cloud density, the corresponding actual distance can be compared with a larger reference distance; thus, whether a point cloud point lies in an area with a high or a low point cloud density, it can be accurately judged whether it is a noise point cloud point, which greatly improves the accuracy of noise point cloud point determination.
  • Embodiments of the present application further provide a computer-readable storage medium, where the computer-readable storage medium stores a computer program, and when the computer program is executed by a processor, implements the point cloud processing method provided by the embodiments of the present application.
  • Embodiments of the present application may take the form of a computer program product implemented on one or more storage media (including, but not limited to, disk storage, CD-ROM, optical storage, etc.) having program code embodied therein.
  • Computer-usable storage media includes permanent and non-permanent, removable and non-removable media, and storage of information can be accomplished by any method or technology.
  • Information may be computer readable instructions, data structures, modules of programs, or other data.
  • Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read only memory (ROM), Electrically Erasable Programmable Read Only Memory (EEPROM), Flash Memory or other memory technology, Compact Disc Read Only Memory (CD-ROM), Digital Versatile Disc (DVD) or other optical storage, Magnetic tape cassettes, magnetic tape magnetic disk storage or other magnetic storage devices or any other non-transmission medium that can be used to store information that can be accessed by a computing device.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Databases & Information Systems (AREA)
  • Computing Systems (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

Embodiments of the present application disclose a point cloud processing method, including: acquiring an initial point cloud; for a first point cloud point in the initial point cloud, determining an actual distance between the first point cloud point and surrounding point cloud points, and determining a reference distance between the first point cloud point and the surrounding point cloud points, the reference distance being negatively correlated with the point cloud density of the area where the first point cloud point is located; and determining, according to a comparison result of the actual distance and the reference distance, whether the first point cloud point is a noise point cloud point. The point cloud processing method, apparatus, and computer-readable storage medium disclosed in the embodiments of the present application can accurately determine the noise point cloud points in a point cloud, so that the noise point cloud points can be removed.

Description

Point cloud processing method, apparatus, and computer-readable storage medium
Technical Field
The present application relates to the technical field of point cloud processing, and in particular to a point cloud processing method, apparatus, and computer-readable storage medium.
Background Art
A point cloud can represent a three-dimensional real scene. It is composed of multiple independent point cloud points, and each point cloud point can include three-dimensional coordinates and attribute information. A point cloud inevitably contains noise point cloud points, i.e., point cloud points that have no corresponding entity in the real scene. The presence of noise point cloud points makes the point cloud inconsistent with the real scene and degrades the display effect, and it also reduces the recognition accuracy when the point cloud is used for processing such as object recognition. It is therefore necessary to determine the noise point cloud points in the point cloud and remove them.
Summary of the Invention
In view of this, the embodiments of the present application provide a point cloud processing method, apparatus, and computer-readable storage medium, one purpose of which is to accurately determine the noise point cloud points in a point cloud so that they can be removed.
A first aspect of the embodiments of the present application provides a point cloud processing method, including:
acquiring an initial point cloud;
for a first point cloud point in the initial point cloud, determining an actual distance between the first point cloud point and surrounding point cloud points, and determining a reference distance between the first point cloud point and the surrounding point cloud points, the reference distance being negatively correlated with the point cloud density of the area where the first point cloud point is located; and
determining, according to a comparison result of the actual distance and the reference distance, whether the first point cloud point is a noise point cloud point.
A second aspect of the embodiments of the present application provides a point cloud processing apparatus, including a processor and a memory storing a computer program, wherein the processor implements the following steps when executing the computer program:
acquiring an initial point cloud;
for a first point cloud point in the initial point cloud, determining an actual distance between the first point cloud point and surrounding point cloud points, and determining a reference distance between the first point cloud point and the surrounding point cloud points, the reference distance being negatively correlated with the point cloud density of the area where the first point cloud point is located; and
determining, according to a comparison result of the actual distance and the reference distance, whether the first point cloud point is a noise point cloud point.
A third aspect of the embodiments of the present application provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the point cloud processing method provided in the first aspect above.
With the point cloud processing method provided by the embodiments of the present application, the actual distance corresponding to a point cloud point located in an area of high point cloud density is compared with a smaller reference distance, while the actual distance corresponding to a point cloud point located in an area of low point cloud density is compared with a larger reference distance. Thus, whether a point cloud point lies in an area of high or low point cloud density, it can be accurately judged whether it is a noise point cloud point, which greatly improves the accuracy of noise point cloud point determination.
Brief Description of the Drawings
In order to explain the technical solutions in the embodiments of the present application more clearly, the drawings required in the description of the embodiments are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present application, and those of ordinary skill in the art can obtain other drawings from them without creative effort.
FIG. 1 is a schematic diagram of a point cloud provided by an embodiment of the present application.
FIG. 2 is a flowchart of a point cloud processing method provided by an embodiment of the present application.
FIG. 3 is a schematic diagram of a mapping relationship between point cloud points and a first image provided by an embodiment of the present application.
FIG. 4 is a schematic structural diagram of a point cloud processing apparatus provided by an embodiment of the present application.
Detailed Description of the Embodiments
The technical solutions in the embodiments of the present application will be described clearly and completely below with reference to the drawings in the embodiments of the present application. Obviously, the described embodiments are only some of the embodiments of the present application, not all of them. Based on the embodiments of the present application, all other embodiments obtained by those of ordinary skill in the art without creative effort fall within the scope of protection of the present application.
A point cloud can represent a three-dimensional real scene. It is composed of multiple independent point cloud points, and each point cloud point can include its own three-dimensional coordinates and attribute information. A point cloud can be acquired in various ways. In one embodiment, it can be obtained by laser scanning; for example, a target scene can be scanned by a lidar to obtain the point cloud corresponding to the target scene. In one embodiment, a point cloud can also be obtained by three-dimensional reconstruction from images; for example, multiple images with a certain degree of overlap can be captured of the target scene, and these images can be reconstructed in three dimensions by a multi-view geometry algorithm to obtain the point cloud corresponding to the target scene.
No matter how the point cloud is acquired, the acquired point cloud inevitably contains noise point cloud points. A noise point cloud point is a point cloud point that has no corresponding entity in the real scene. For example, in the real scene there is no object in front of the sensor, yet the acquired point cloud of that scene contains a point cloud point in front of it; since that point cloud point does not exist in the real scene, it is a noise point cloud point.
The presence of noise point cloud points distorts the point cloud's representation of the real scene and affects its usability. For example, if the point cloud is used for output viewing, noise point cloud points will make people mistakenly believe that an object exists at the corresponding location in the real scene; if the point cloud is used for intelligent processing such as object recognition, noise point cloud points will reduce the recognition accuracy. It is therefore necessary to determine the noise point cloud points in the point cloud so that they can be removed.
In one embodiment, whether a point cloud point is a noise point cloud point can be determined according to the distance from that point cloud point to the surrounding point cloud points. Since noise point cloud points are usually relatively isolated, if a point cloud point is a noise point cloud point, its distance to the surrounding point cloud points will be relatively large. A threshold can therefore be set: if the distance from a point cloud point to the surrounding point cloud points is greater than the threshold, the point cloud point can be determined to be a noise point cloud point.
However, in the above embodiment, all point cloud points in the point cloud use the same threshold to judge whether they are noise point cloud points, which may cause misjudgment in some cases. Referring to FIG. 1, a point cloud can contain areas with different point cloud densities, for example area B in FIG. 1, where the point cloud density is higher and the distances between point cloud points are small, and area A in FIG. 1, where the point cloud density is lower and the distances between point cloud points are larger. If all point cloud points use the same threshold to judge whether they are noise, then, taking FIG. 1 as an example, a threshold determined from the distances between point cloud points in area B will cause a large number of non-noise point cloud points in area A to be misjudged as noise, whereas a threshold determined from the distances between point cloud points in area A will cause point cloud points in area B that really are noise (noise point cloud point b) to be misjudged as non-noise.
To this end, an embodiment of the present application provides a point cloud processing method. Reference can be made to FIG. 2, which is a flowchart of the point cloud processing method provided by the embodiment of the present application. The method includes:
S202: acquiring an initial point cloud.
S204: for a first point cloud point in the initial point cloud, determining an actual distance between the first point cloud point and surrounding point cloud points.
S206: determining a reference distance between the first point cloud point and the surrounding point cloud points.
S208: determining, according to a comparison result of the actual distance and the reference distance, whether the first point cloud point is a noise point cloud point.
As mentioned above, a point cloud can be acquired in various ways. Here, the initial point cloud can be a point cloud acquired in any of these ways: for example, it can be obtained by three-dimensional reconstruction using multiple images corresponding to the target scene, or by scanning the target scene with a laser. Of course, there are other acquisition methods as well; for example, the initial point cloud can also be obtained by probing the target scene with ultrasonic waves, and so on.
The initial point cloud includes a plurality of point cloud points. In one embodiment, steps S204-S208 above can be performed for each point cloud point in the initial point cloud, i.e., the first point cloud point can be any point cloud point in the initial point cloud. In one embodiment, steps S204-S208 can also be performed for the point cloud points in a specific area of the initial point cloud, i.e., the first point cloud point can be any point cloud point in that specific area of the initial point cloud.
For the first point cloud point, its corresponding actual distance can be determined. The actual distance corresponding to the first point cloud point is the actual distance between the first point cloud point and the surrounding point cloud points; that is, in the point cloud, the actual distance can be calculated from the geometric coordinates of the first point cloud point and the geometric coordinates of the surrounding point cloud points.
There are various ways to determine the actual distance corresponding to the first point cloud point. In one embodiment, the distance between the first point cloud point and the nearest surrounding point cloud point can be determined as the actual distance. For example, if the three nearest point cloud points around the first point cloud point are point cloud point A, point cloud point B, and point cloud point C, the distance between the first point cloud point and point cloud point B, the closest of the three, can be determined as the actual distance. In one embodiment, the distances between the first point cloud point and the nearest multiple surrounding point cloud points can also be fused, and the fused distance determined as the actual distance. Continuing the example of point cloud points A, B, and C, the distances from the first point cloud point to points A, B, and C can each be determined and the three distances fused; the fusion can be done in various ways, for example by weighted fusion of the three distances or by averaging them, and the fused distance can be determined as the actual distance corresponding to the first point cloud point.
There are also various ways to determine the N surrounding point cloud points nearest to the first point cloud point (N can be an integer greater than or equal to 1). In one embodiment, the point cloud points other than the first point cloud point can be traversed, the distance from each of them to the first point cloud point can be calculated, and the calculated distances can be sorted to determine the N surrounding point cloud points nearest to the first point cloud point. In one embodiment, a KD tree corresponding to the initial point cloud can be constructed, and the N surrounding point cloud points corresponding to the first point cloud point can be found through the KD tree.
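As an illustrative sketch only (this implementation is not part of the original disclosure), the KD-tree neighbor search and the distance fusion described above can be expressed as follows; the use of scipy.spatial.cKDTree, the choice of N, and averaging as the fusion rule are assumptions made for this example.

```python
import numpy as np
from scipy.spatial import cKDTree

def actual_distances(points: np.ndarray, n_neighbors: int = 3) -> np.ndarray:
    """Per-point actual distance: fused (averaged) distance to the N nearest neighbors.

    points: (M, 3) array of point cloud coordinates.
    Returns an (M,) array of actual distances, one per point cloud point.
    """
    tree = cKDTree(points)                        # KD tree built over the initial point cloud
    # Query k = N + 1 neighbors because the nearest hit is the query point itself (distance 0).
    dists, _ = tree.query(points, k=n_neighbors + 1)
    return dists[:, 1:].mean(axis=1)              # drop the self-distance, fuse by averaging
```

Using only the single nearest neighbor (dists[:, 1]) instead of the mean corresponds to the first embodiment above, in which the distance to the closest surrounding point cloud point is taken directly as the actual distance.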
For the first point cloud point, its reference distance to the surrounding point cloud points can be determined. The so-called reference distance is the distance the first point cloud point should have to the surrounding point cloud points under the point cloud density of the area where it is located; in other words, if the first point cloud point is not a noise point cloud point, then, under the point cloud density of its area, its distance to the surrounding point cloud points should be the reference distance. It can be understood that the reference distance is negatively correlated with the point cloud density of the area where the first point cloud point is located. If the point cloud density of the area where the first point cloud point is located is high, the reference distance corresponding to the first point cloud point is small, because under such a point cloud density the distance between a non-noise first point cloud point and the surrounding point cloud points is relatively small; if the point cloud density of the area is low, the corresponding reference distance is large, because under such a point cloud density the distance between a non-noise first point cloud point and the surrounding point cloud points is relatively large.
After the actual distance and the reference distance corresponding to the first point cloud point are determined, whether the first point cloud point is a noise point cloud point can be determined according to the comparison result of the actual distance and the reference distance. As mentioned above, a noise point cloud point is usually a relatively isolated point, and its distance to the surrounding point cloud points is larger than the distance between a non-noise point cloud point and the surrounding point cloud points. Therefore, for the first point cloud point, if the actual distance corresponding to the first point cloud point is significantly larger than the reference distance corresponding to the first point cloud point, it can be determined that the first point cloud point is a noise point cloud point.
There are various ways to compare the actual distance with the reference distance. In one embodiment, the difference between the actual distance and the reference distance can be calculated to obtain a distance difference; if the distance difference is greater than a difference threshold, it can be determined that the actual distance corresponding to the first point cloud point is significantly larger than its corresponding reference distance, and the first point cloud point can be determined to be a noise point cloud point. Here, the difference threshold can be set according to actual requirements or experience, or it can be determined according to the distance differences corresponding to the point cloud points in the initial point cloud. Specifically, the actual distance and the reference distance can be determined for each point cloud point in the initial point cloud, so that each point cloud point can determine its corresponding distance difference. In one example, the mean mean1 and the standard deviation std_dev1 of the distance differences can be determined, and the difference threshold can be calculated as threshold1 = mean1 + k*std_dev1 (k can be a natural number and can be adjusted according to requirements; for example, if more points should be retained, k can be set larger).
In one embodiment, the comparison between the actual distance and the reference distance can also be performed by means of a ratio. Specifically, for the first point cloud point, the ratio of its corresponding actual distance to its reference distance can be calculated to obtain a distance ratio; if the distance ratio is greater than a ratio threshold, it can be determined that the actual distance corresponding to the first point cloud point is significantly larger than its corresponding reference distance, and the first point cloud point can be determined to be a noise point cloud point. Here, the ratio threshold can be set according to actual requirements or experience, or it can be determined according to the distance ratios corresponding to the point cloud points in the initial point cloud. Specifically, the actual distance and the reference distance can be determined for each point cloud point in the initial point cloud, so that each point cloud point can determine its corresponding distance ratio. In one example, the mean mean2 and the standard deviation std_dev2 of the distance ratios can be determined, and the ratio threshold can be calculated as threshold2 = mean2 + k*std_dev2 (k can be a natural number and can be adjusted according to requirements; for example, if more points should be retained, k can be set larger).
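A minimal sketch of the ratio-based decision just described, assuming the actual and reference distances have already been computed for every point in the initial point cloud; the function name, the use of NumPy, and the default value of k are illustrative assumptions, and k plays the role of the tunable natural number mentioned above.

```python
import numpy as np

def noise_mask_by_ratio(actual: np.ndarray, reference: np.ndarray, k: float = 2.0) -> np.ndarray:
    """Flag points whose distance ratio exceeds threshold2 = mean2 + k * std_dev2."""
    ratios = actual / reference                      # per-point distance ratio
    threshold2 = ratios.mean() + k * ratios.std()    # ratio threshold derived from all points
    return ratios > threshold2                       # True where the point is judged to be noise

# The difference-based variant described above is analogous:
#   diffs = actual - reference
#   threshold1 = diffs.mean() + k * diffs.std()
#   noise = diffs > threshold1
```

Removing the flagged points, for example target = points[~noise_mask_by_ratio(actual, reference)], yields the target point cloud discussed further below.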
With the point cloud processing method provided by the embodiments of the present application, the actual distance corresponding to a point cloud point located in an area of high point cloud density is compared with a smaller reference distance, while the actual distance corresponding to a point cloud point located in an area of low point cloud density is compared with a larger reference distance. Thus, whether a point cloud point lies in an area of high or low point cloud density, it can be accurately judged whether it is a noise point cloud point, which greatly improves the accuracy of noise point cloud point determination.
As mentioned above, the reference distance corresponding to the first point cloud point is the distance it should have to the surrounding point cloud points, under the point cloud density of its area, when it is regarded as a non-noise point. In one embodiment, if the initial point cloud is obtained by three-dimensional reconstruction using multiple images corresponding to the target scene, then, for the first point cloud point, the ground sampling distance corresponding to a first image among the multiple images that is used to observe the first point cloud point can be determined as the reference distance corresponding to the first point cloud point. Here, various algorithms can be chosen for the three-dimensional reconstruction, for example an SFM algorithm or an MVS algorithm, which is not limited in this application.
Among the multiple images, there may be several images in which the first point cloud point can be observed. In one embodiment, the first image can be any image in which the first point cloud point can be observed; in one embodiment, the first image can be the best observation image of the first point cloud point. The best observation image can be defined in various ways. For example, the image that observes the first point cloud point most clearly can be determined as the best observation image. As another example, the image whose camera lens direction is perpendicular to the surface of the object where the first point cloud point is located can be determined as the best observation image; it can be understood that the perpendicularity here is not absolute, and in practice, as long as the camera lens direction generally faces the first point cloud point, the camera lens direction can be considered perpendicular to the surface of the object where the first point cloud point is located. As yet another example, the image whose camera position is closest to the first point cloud point can be determined as the best observation image. Here, in one embodiment, the camera lens direction can be determined according to the pose of the camera.
In one embodiment, when determining the best observation image, multiple factors can be combined, such as the clarity of the image, the camera lens direction corresponding to the image, and the camera position corresponding to the image; for example, the image whose camera position is closest to the first point cloud point and whose camera lens direction most directly faces the first point cloud point can be determined as the best observation image.
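The patent does not prescribe a concrete scoring rule for the best observation image; as a hedged illustration only, one way to combine the camera-distance and lens-direction criteria above is to score every candidate image and keep the highest-scoring one. The camera pose arrays, the surface normal, and the scoring formula below are assumptions made for this sketch.

```python
import numpy as np

def best_observation_image(point: np.ndarray, normal: np.ndarray,
                           cam_positions: np.ndarray, cam_view_dirs: np.ndarray) -> int:
    """Index of the candidate image that best observes `point`.

    point: (3,) point cloud point; normal: (3,) unit outward surface normal at the point.
    cam_positions: (P, 3) camera centers; cam_view_dirs: (P, 3) unit lens directions.
    Prefers cameras that are close to the point and whose lens direction faces the
    surface head-on (i.e., is roughly anti-parallel to the outward normal).
    """
    dists = np.linalg.norm(cam_positions - point, axis=1)   # camera-to-point distances
    facing = -cam_view_dirs @ normal                         # 1.0 when looking straight at the surface
    score = facing / (dists + 1e-9)                          # favor close, well-aligned views
    return int(np.argmax(score))
```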
The ground sampling distance (GSD, Ground Sample Distance) can represent the real-world distance corresponding to a single pixel in an image, i.e., the width that the pixel maps to in the real world. Since the first point cloud point can be well observed in the first image, the generation (three-dimensional reconstruction) of the first point cloud point and its surrounding point cloud points is strongly related to the first image. In one embodiment, the first point cloud point and its surrounding point cloud points can each be considered to correspond to a pixel in the first image; if the first point cloud point is not noise, the distance between the first point cloud point and the surrounding point cloud points should be exactly the real-world distance between pixels in the first image, i.e., the ground sampling distance corresponding to the first image. The ground sampling distance corresponding to the first image is also negatively correlated with the point cloud density of the area where the first point cloud point is located: the larger the ground sampling distance, the larger the distances between point cloud points and the lower the point cloud density; conversely, the smaller the ground sampling distance, the smaller the distances between point cloud points and the higher the point cloud density. Reference can be made to FIG. 3, which is a schematic diagram of the mapping relationship between point cloud points and the first image provided by an embodiment of the present application.
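The patent does not give a formula for the ground sampling distance. As an illustration only, a common photogrammetric approximation for roughly fronto-parallel or near-nadir views computes it from the physical pixel size, the focal length, and the distance from the camera to the observed surface; the function and the numbers below are assumptions, not values from the disclosure.

```python
def ground_sampling_distance(pixel_size_m: float, focal_length_m: float, distance_m: float) -> float:
    """Approximate GSD: the real-world width covered by one image pixel."""
    return pixel_size_m * distance_m / focal_length_m

# Example: a 3.9 um pixel pitch, an 8.8 mm focal length, and a surface 100 m away
# give roughly 0.044 m per pixel, i.e., about 4.4 cm of ground per pixel.
gsd = ground_sampling_distance(3.9e-6, 8.8e-3, 100.0)
```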
In one embodiment, the reference distance between the first point cloud point and the surrounding point cloud points can be a target ground sampling distance, and the target ground sampling distance can be obtained by fusing multiple ground sampling distances corresponding to multiple first images used to observe the first point cloud point. For example, P first images in which the first point cloud point can be well observed can be determined from the multiple images used for three-dimensional reconstruction, the ground sampling distance corresponding to each of the P first images can be determined, the P ground sampling distances can be fused, and the fused ground sampling distance can be determined as the reference distance between the first point cloud point and the surrounding point cloud points.
In one embodiment, if the initial point cloud is obtained by scanning the target scene with a laser, the reference distance between the first point cloud point and the surrounding point cloud points can be determined according to the laser scanning frequency of the area where the first point cloud point is located. When scanning a scene with a laser, different scanning frequencies can be used for different objects in the scene. For an object with a simple structure, such as a four-legged table, only a small number of point cloud points are needed to represent its structure, so it can be scanned at a lower scanning frequency. For an object with a complex structure, such as a textured statue, a large number of point cloud points are needed to represent its structure, so it can be scanned at a higher scanning frequency. It can be understood that if an area is scanned at a higher scanning frequency, the point cloud density of the corresponding area in the point cloud is higher, and if an area is scanned at a lower scanning frequency, the point cloud density of the corresponding area is lower; that is, the point cloud density of an area is positively correlated with the laser scanning frequency of that area.
Denoting the laser scanning frequency as f and the reference distance as rd, in one embodiment the relationship between the reference distance and the laser scanning frequency can be expressed as rd = (1/f)*c, where c is a constant that can be customized for different laser scanners. In the above expression, the reference distance is negatively correlated with the laser scanning frequency: for point cloud points in an area with a high laser scanning frequency and a high point cloud density, the corresponding reference distance is small, and for point cloud points in an area with a low laser scanning frequency and a low point cloud density, the corresponding reference distance is large. Thus, by comparing the actual distance and the reference distance corresponding to a point cloud point, the noise point cloud points among point cloud points in areas of different point cloud densities can be correctly determined.
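Putting the pieces together for the laser-scanned case, the sketch below derives a per-point reference distance from a per-point scanning frequency using the rd = (1/f)*c relationship above and reuses the helpers sketched earlier. The scanner constant c, the per-point frequency array, and the default k are assumptions made for illustration.

```python
import numpy as np

def denoise_laser_cloud(points: np.ndarray, scan_freq: np.ndarray,
                        c: float = 1.0, k: float = 2.0) -> np.ndarray:
    """Return the target point cloud with noise point cloud points removed (laser-scanned case)."""
    # assumes actual_distances() and noise_mask_by_ratio() from the earlier sketches are in scope
    reference = c / scan_freq                        # rd = (1/f) * c, one value per point
    actual = actual_distances(points)                # fused distance to the nearest neighbors
    noise = noise_mask_by_ratio(actual, reference, k)
    return points[~noise]                            # keep only the non-noise points
```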
For the initial point cloud, in one embodiment, each point cloud point in it can be judged as a noise point cloud point or not according to its corresponding actual distance and reference distance. After the noise point cloud points in the initial point cloud are determined, these noise point cloud points can be removed to obtain a target point cloud, i.e., the initial point cloud with the noise point cloud points removed.
In one embodiment, the target point cloud can be used for output display; since the noise point cloud points have been removed, the target point cloud reflects the real scene more accurately and has a better display effect. In one embodiment, the target point cloud can be used for specified processing, and the specified processing can include one or more of the following: object recognition, object surveying and mapping, obstacle avoidance, and the like. Object recognition can identify the object categories corresponding to different point cloud points in the target point cloud; object surveying and mapping can measure the size information of an object from the point cloud points in the target point cloud; obstacle avoidance can be applied to movable platforms such as unmanned aerial vehicles, and since the noise point cloud points have been removed from the target point cloud, the drone can perform the obstacle avoidance function with higher accuracy and reduce the occurrence of false obstacle avoidance (that is, performing an obstacle avoidance action when there is no obstacle).
With the point cloud processing method provided by the embodiments of the present application, the actual distance corresponding to a point cloud point located in an area of high point cloud density is compared with a smaller reference distance, while the actual distance corresponding to a point cloud point located in an area of low point cloud density is compared with a larger reference distance. Thus, whether a point cloud point lies in an area of high or low point cloud density, it can be accurately judged whether it is a noise point cloud point, which greatly improves the accuracy of noise point cloud point determination.
Reference can now be made to FIG. 4, which is a schematic structural diagram of the point cloud processing apparatus provided by an embodiment of the present application. The apparatus can include a processor 410 and a memory 420 storing a computer program, and the processor implements the following steps when executing the computer program:
acquiring an initial point cloud;
for a first point cloud point in the initial point cloud, determining an actual distance between the first point cloud point and surrounding point cloud points, and determining a reference distance between the first point cloud point and the surrounding point cloud points, the reference distance being negatively correlated with the point cloud density of the area where the first point cloud point is located; and
determining, according to a comparison result of the actual distance and the reference distance, whether the first point cloud point is a noise point cloud point.
Optionally, the initial point cloud is obtained by three-dimensional reconstruction using multiple images corresponding to the target scene.
Optionally, the reference distance between the first point cloud point and the surrounding point cloud points is a ground sampling distance corresponding to a first image, among the multiple images, used to observe the first point cloud point.
Optionally, the camera lens direction corresponding to the first image is perpendicular to the surface of the object where the first point cloud point is located.
Optionally, the distance between the camera position corresponding to the first image and the first point cloud point is less than or equal to a distance threshold.
Optionally, the initial point cloud is obtained by scanning the target scene with a laser.
Optionally, the reference distance between the first point cloud point and the surrounding point cloud points is determined according to the laser scanning frequency of the area where the first point cloud point is located.
Optionally, the actual distance between the first point cloud point and the surrounding point cloud points is obtained by fusing the distances between the first point cloud point and the nearest multiple surrounding point cloud points.
Optionally, the nearest multiple surrounding point cloud points are found according to a KD tree corresponding to the initial point cloud.
Optionally, the comparison result includes a distance ratio of the actual distance to the reference distance.
Optionally, when determining, according to the comparison result of the actual distance and the reference distance, whether the first point cloud point is a noise point cloud point, the processor is configured to:
if the distance ratio corresponding to the first point cloud point is greater than a ratio threshold, determine that the first point cloud point is a noise point cloud point.
Optionally, the ratio threshold is determined according to the distance ratios corresponding to the point cloud points in the initial point cloud.
Optionally, the processor is further configured to:
if it is determined that the first point cloud point is a noise point cloud point, remove the first point cloud point.
Optionally, the initial point cloud is used for output display after the noise point cloud points are removed.
Optionally, the initial point cloud is used for specified processing after the noise point cloud points are removed, and the specified processing includes one or more of the following: object recognition, object surveying and mapping, and obstacle avoidance.
For the specific implementation of the point cloud processing apparatus in the various embodiments provided above, reference can be made to the relevant descriptions given earlier, which are not repeated here.
With the point cloud processing apparatus provided by the embodiments of the present application, the actual distance corresponding to a point cloud point located in an area of high point cloud density is compared with a smaller reference distance, while the actual distance corresponding to a point cloud point located in an area of low point cloud density is compared with a larger reference distance. Thus, whether a point cloud point lies in an area of high or low point cloud density, it can be accurately judged whether it is a noise point cloud point, which greatly improves the accuracy of noise point cloud point determination.
Embodiments of the present application further provide a computer-readable storage medium storing a computer program which, when executed by a processor, implements the point cloud processing method provided by the embodiments of the present application.
Multiple embodiments have been provided above for each subject of protection. On the basis that no conflict or contradiction exists, those skilled in the art can freely combine the various embodiments according to actual circumstances, thereby forming various different technical solutions. Due to space limitations, this document cannot describe all of the combined technical solutions, but it can be understood that these undescribed technical solutions also fall within the scope disclosed by the embodiments of the present application.
Embodiments of the present application may take the form of a computer program product implemented on one or more storage media (including, but not limited to, magnetic disk storage, CD-ROM, optical storage, etc.) containing program code. Computer-usable storage media include permanent and non-permanent, removable and non-removable media, and information storage can be accomplished by any method or technology. Information can be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technologies, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape or magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device.
It should be noted that, in this document, relational terms such as first and second are used only to distinguish one entity or operation from another and do not necessarily require or imply any such actual relationship or order between these entities or operations. The terms "comprise", "include", or any other variant thereof are intended to cover non-exclusive inclusion, so that a process, method, article, or device that includes a series of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, method, article, or device. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the existence of other identical elements in the process, method, article, or device that includes that element.
The method and apparatus provided by the embodiments of the present invention have been described in detail above. Specific examples have been used herein to explain the principles and implementations of the present invention, and the description of the above embodiments is only intended to help understand the method of the present invention and its core idea. Meanwhile, for those of ordinary skill in the art, there will be changes in the specific implementation and the scope of application based on the idea of the present invention. In summary, the content of this specification should not be construed as limiting the present invention.

Claims (31)

  1. A point cloud processing method, comprising:
    acquiring an initial point cloud;
    for a first point cloud point in the initial point cloud, determining an actual distance between the first point cloud point and surrounding point cloud points, and determining a reference distance between the first point cloud point and the surrounding point cloud points, the reference distance being negatively correlated with a point cloud density of an area where the first point cloud point is located; and
    determining, according to a comparison result of the actual distance and the reference distance, whether the first point cloud point is a noise point cloud point.
  2. The method according to claim 1, wherein the initial point cloud is obtained by three-dimensional reconstruction using multiple images corresponding to a target scene.
  3. The method according to claim 2, wherein the reference distance between the first point cloud point and the surrounding point cloud points is a ground sampling distance corresponding to a first image, among the multiple images, used to observe the first point cloud point.
  4. The method according to claim 3, wherein a camera lens direction corresponding to the first image is perpendicular to a surface of an object where the first point cloud point is located.
  5. The method according to claim 3, wherein a distance between a camera position corresponding to the first image and the first point cloud point is less than or equal to a distance threshold.
  6. The method according to claim 1, wherein the initial point cloud is obtained by scanning a target scene with a laser.
  7. The method according to claim 6, wherein the reference distance between the first point cloud point and the surrounding point cloud points is determined according to a laser scanning frequency of an area where the first point cloud point is located.
  8. The method according to claim 1, wherein the actual distance between the first point cloud point and the surrounding point cloud points is obtained by fusing distances between the first point cloud point and a nearest plurality of surrounding point cloud points.
  9. The method according to claim 8, wherein the nearest plurality of surrounding point cloud points are found according to a KD tree corresponding to the initial point cloud.
  10. The method according to claim 1, wherein the comparison result comprises a distance ratio of the actual distance to the reference distance.
  11. The method according to claim 10, wherein the determining, according to the comparison result of the actual distance and the reference distance, whether the first point cloud point is a noise point cloud point comprises:
    if the distance ratio corresponding to the first point cloud point is greater than a ratio threshold, determining that the first point cloud point is a noise point cloud point.
  12. The method according to claim 11, wherein the ratio threshold is determined according to the distance ratios corresponding to the point cloud points in the initial point cloud.
  13. The method according to claim 1, further comprising:
    if it is determined that the first point cloud point is a noise point cloud point, removing the first point cloud point.
  14. The method according to claim 13, wherein the initial point cloud is used for output display after the noise point cloud points are removed.
  15. The method according to claim 13, wherein the initial point cloud is used for specified processing after the noise point cloud points are removed, and the specified processing comprises one or more of the following: object recognition, object surveying and mapping, and obstacle avoidance.
  16. A point cloud processing apparatus, comprising a processor and a memory storing a computer program, wherein the processor implements the following steps when executing the computer program:
    acquiring an initial point cloud;
    for a first point cloud point in the initial point cloud, determining an actual distance between the first point cloud point and surrounding point cloud points, and determining a reference distance between the first point cloud point and the surrounding point cloud points, the reference distance being negatively correlated with a point cloud density of an area where the first point cloud point is located; and
    determining, according to a comparison result of the actual distance and the reference distance, whether the first point cloud point is a noise point cloud point.
  17. The apparatus according to claim 16, wherein the initial point cloud is obtained by three-dimensional reconstruction using multiple images corresponding to a target scene.
  18. The apparatus according to claim 17, wherein the reference distance between the first point cloud point and the surrounding point cloud points is a ground sampling distance corresponding to a first image, among the multiple images, used to observe the first point cloud point.
  19. The apparatus according to claim 18, wherein a camera lens direction corresponding to the first image is perpendicular to a surface of an object where the first point cloud point is located.
  20. The apparatus according to claim 18, wherein a distance between a camera position corresponding to the first image and the first point cloud point is less than or equal to a distance threshold.
  21. The apparatus according to claim 16, wherein the initial point cloud is obtained by scanning a target scene with a laser.
  22. The apparatus according to claim 21, wherein the reference distance between the first point cloud point and the surrounding point cloud points is determined according to a laser scanning frequency of an area where the first point cloud point is located.
  23. The apparatus according to claim 16, wherein the actual distance between the first point cloud point and the surrounding point cloud points is obtained by fusing distances between the first point cloud point and a nearest plurality of surrounding point cloud points.
  24. The apparatus according to claim 23, wherein the nearest plurality of surrounding point cloud points are found according to a KD tree corresponding to the initial point cloud.
  25. The apparatus according to claim 16, wherein the comparison result comprises a distance ratio of the actual distance to the reference distance.
  26. The apparatus according to claim 25, wherein, when determining, according to the comparison result of the actual distance and the reference distance, whether the first point cloud point is a noise point cloud point, the processor is configured to:
    if the distance ratio corresponding to the first point cloud point is greater than a ratio threshold, determine that the first point cloud point is a noise point cloud point.
  27. The apparatus according to claim 26, wherein the ratio threshold is determined according to the distance ratios corresponding to the point cloud points in the initial point cloud.
  28. The apparatus according to claim 16, wherein the processor is further configured to:
    if it is determined that the first point cloud point is a noise point cloud point, remove the first point cloud point.
  29. The apparatus according to claim 28, wherein the initial point cloud is used for output display after the noise point cloud points are removed.
  30. The apparatus according to claim 28, wherein the initial point cloud is used for specified processing after the noise point cloud points are removed, and the specified processing comprises one or more of the following: object recognition, object surveying and mapping, and obstacle avoidance.
  31. A computer-readable storage medium storing a computer program which, when executed by a processor, implements the point cloud processing method according to any one of claims 1 to 15.
PCT/CN2021/075078 2021-02-03 2021-02-03 点云处理方法、装置和计算机可读存储介质 WO2022165672A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/CN2021/075078 WO2022165672A1 (zh) 2021-02-03 2021-02-03 点云处理方法、装置和计算机可读存储介质

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2021/075078 WO2022165672A1 (zh) 2021-02-03 2021-02-03 点云处理方法、装置和计算机可读存储介质

Publications (1)

Publication Number Publication Date
WO2022165672A1 true WO2022165672A1 (zh) 2022-08-11

Family

ID=82740685

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/075078 WO2022165672A1 (zh) 2021-02-03 2021-02-03 点云处理方法、装置和计算机可读存储介质

Country Status (1)

Country Link
WO (1) WO2022165672A1 (zh)


Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7995054B2 (en) * 2005-11-21 2011-08-09 Leica Geosystems Ag Identification of edge regions from 3D point data
CN109147038A (zh) * 2018-08-21 2019-01-04 北京工业大学 基于三维点云处理的管道三维建模方法
CN111861933A (zh) * 2020-07-29 2020-10-30 北方工业大学 基于空间划分的点云去噪方法及装置
CN112257722A (zh) * 2020-11-11 2021-01-22 南京工业大学 基于抗差非线性高斯-赫尔默特模型的点云拟合方法

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
ZHANG Yi, LIU Xu-min, GUAN Yong: "Density-Based Detection for Outliers and Noises", Journal of Computer Applications, vol. 30, no. 3, 1 March 2020 (2020-03-01), pages 802-805+809, XP055956939, ISSN: 1001-9081 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115564673A (zh) * 2022-09-26 2023-01-03 浙江省测绘科学技术研究院 三维点云地下车库柱状物提取与矢量自动生成方法及系统
CN115564673B (zh) * 2022-09-26 2024-03-15 浙江省测绘科学技术研究院 三维点云地下车库柱状物提取与矢量自动生成方法及系统

Similar Documents

Publication Publication Date Title
US8199977B2 (en) System and method for extraction of features from a 3-D point cloud
US20190018730A1 (en) Point cloud filter method and apparatus
US9412040B2 (en) Method for extracting planes from 3D point cloud sensor data
WO2021120846A1 (zh) 三维重建方法、设备以及计算机可读介质
US8290305B2 (en) Registration of 3D point cloud data to 2D electro-optical image data
CN107392958B (zh) 一种基于双目立体摄像机确定物体体积的方法及装置
Kamencay et al. Improved Depth Map Estimation from Stereo Images Based on Hybrid Method.
CN107004256B (zh) 用于噪声深度或视差图像的实时自适应滤波的方法和装置
JP2020507853A (ja) 3次元点群の再構成のための方法および装置
WO2002073540A1 (en) Generation of a three-dimensional representation from multiple images using octrees
JP6934224B2 (ja) 三次元形状モデル生成装置、三次元形状モデル生成方法及びプログラム
Kim et al. Evaluation of 3D feature descriptors for multi-modal data registration
WO2021102913A1 (zh) 图像处理方法、装置及存储介质
CN111325763B (zh) 一种基于光场重聚焦的遮挡预测方法和装置
WO2022165672A1 (zh) 点云处理方法、装置和计算机可读存储介质
JP2020071793A (ja) 目標検出プログラム、目標検出装置、及び目標検出方法
WO2019121056A1 (fr) Methode de reconnaissance d'objets dans une scene observee en trois dimensions
CN112508803A (zh) 一种三维点云数据的去噪方法、装置及存储介质
WO2021062776A1 (zh) 一种参数标定方法、装置及设备
Teutsch et al. A parallel point cloud clustering algorithm for subset segmentation and outlier detection
CN114638996A (zh) 基于对抗学习的模型训练方法、装置、设备和存储介质
US10223803B2 (en) Method for characterising a scene by computing 3D orientation
GB2597238A (en) A computer implemented method of generating a parametric structural design model
US9710963B2 (en) Primitive fitting apparatus and method using point cloud
WO2022041119A1 (zh) 三维点云处理方法及装置

Legal Events

Date Code Title Description
NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21923700

Country of ref document: EP

Kind code of ref document: A1