WO2021052283A1 - 处理三维点云数据的方法和计算设备 - Google Patents

处理三维点云数据的方法和计算设备 Download PDF

Info

Publication number
WO2021052283A1
Authority
WO
WIPO (PCT)
Prior art keywords
point cloud
vector
neighboring points
point
ordered
Prior art date
Application number
PCT/CN2020/114995
Other languages
English (en)
French (fr)
Inventor
肖聪
王志美
Original Assignee
华为技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 华为技术有限公司 filed Critical 华为技术有限公司
Publication of WO2021052283A1 publication Critical patent/WO2021052283A1/zh

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 19/00 Manipulating 3D models or images for computer graphics
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 19/00 Manipulating 3D models or images for computer graphics
    • G06T 19/20 Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/60 Analysis of geometric attributes
    • G06T 7/66 Analysis of geometric attributes of image moments or centre of gravity
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2200/00 Indexing scheme for image data processing or generation, in general
    • G06T 2200/04 Indexing scheme for image data processing or generation, in general involving 3D image data
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10028 Range image; Depth image; 3D point clouds

Definitions

  • This application relates to the field of data processing, and in particular to a method and computing device for processing three-dimensional point cloud data.
  • 3D point clouds are increasingly applied to 3D scene modeling, for example in intelligent robot navigation, autonomous driving, and somatosensory games.
  • With the development of autonomous driving, lidar has become one of the important sensors for vehicles. It is very difficult to analyze a point cloud scene simply by using the three-dimensional coordinate features of the three-dimensional point cloud.
  • In contrast, the normal vector of the point cloud can describe local spatial characteristics. It is the most widely used feature for analyzing three-dimensional point cloud scenes and can be widely applied to data registration, segmentation, and recognition.
  • In an existing calculation method, the principal component analysis (PCA) method can be used to roughly calculate the normal vector of the target point, and then, through plane fitting, multiple points close to the fitted plane are screened out to form a point set Q.
  • the present application provides a method for processing three-dimensional point cloud data, which can reduce the computational complexity of processing three-dimensional point cloud data.
  • a method for processing three-dimensional point cloud data, used in a computing device connected to a sensor, the method including: acquiring three-dimensional point cloud data obtained by the sensor scanning a target scene; determining an ordered point cloud array according to the three-dimensional point cloud data, the ordered point cloud array including a plurality of points; determining a plurality of first neighboring points of a target point in the ordered point cloud array, the target point being any point in the ordered point cloud array; and determining the normal vector of the target point according to the plurality of first neighboring points.
  • the three-dimensional point cloud data acquired by the sensor is converted into an ordered point cloud array, the neighboring points of the target point are selected in the ordered point cloud array, and the normal vector of the target point is then calculated from the selected neighboring points. Since the neighboring points are selected in the ordered point cloud array, the position information of the neighboring points in the ordered point cloud space is considered when calculating the normal vector of the target point, and the constructed plane can more accurately approximate the local tangent plane of the target point in space, so that the calculated normal vector of the target point is more accurate.
  • the neighboring points of the target point are selected in the ordered point cloud sequence, instead of sequentially calculating the distance between each point in the point cloud data and the target point to filter the neighboring points, which reduces the computational complexity. It improves the real-time performance and calculation accuracy of processing three-dimensional point cloud data.
  • the determining the normal vector of the target point according to the plurality of first neighboring points includes: obtaining a non-parallel first vector and second vector according to the plurality of first neighboring points; and determining the normal vector of the target point according to the result of the cross product of the first vector and the second vector.
  • the neighboring points of the target point in the ordered point cloud array are used to construct two non-parallel vectors, and the normal vector of the three-dimensional point cloud data is determined according to the non-parallel vectors.
  • This method uses the arrangement characteristics of the point array of the ordered point cloud array, simplifies the complexity of calculating the normal vector, and improves the real-time performance and calculation accuracy of processing three-dimensional point cloud data.
  • the obtaining a non-parallel first vector and second vector according to the multiple first neighboring points includes: obtaining the first vector and the second vector according to two first neighboring points and the target point; or obtaining the first vector and the second vector according to three first neighboring points; or obtaining the first vector and the second vector according to three first neighboring points and the target point; or obtaining the first vector and the second vector according to four first neighboring points.
  • the method further includes: determining a plurality of second neighboring points of the target point in the ordered point cloud array, where the plurality of second neighboring points are not the same or not completely the same as the plurality of first neighboring points; and obtaining a non-parallel third vector and fourth vector according to the plurality of second neighboring points. Determining the normal vector of the target point according to the cross product result of the first vector and the second vector includes: determining the normal vector of the target point according to the cross product result of the first vector and the second vector and the cross product result of the third vector and the fourth vector.
  • the three-dimensional point cloud data is dense point cloud data
  • the determining an ordered point cloud array according to the three-dimensional point cloud data includes: down-sampling the three-dimensional point cloud data to obtain down-sampled three-dimensional point cloud data; and converting the down-sampled three-dimensional point cloud data to obtain the ordered point cloud array.
  • the distance between any one of the plurality of first neighboring points and the target point in the ordered point cloud array is less than the search step.
  • the method further includes: determining the search step length according to the distance between the sensor and the target point in the target scene.
  • the search step can be dynamically set according to the distance between the sensor origin and the position of the target point in the target scene, and the search step is used to select the neighboring points of the target point in the ordered point cloud array.
  • By dynamically setting the search step, the range for selecting neighboring points can be determined according to different spatial resolutions, so that the plane fitted from the selected neighboring points is closer to the local tangent plane of the target point in space, and the calculated normal vector of the target point is more accurate.
  • a computing device for processing three-dimensional point cloud data, the computing device being connected to a sensor and including: an acquisition module for acquiring three-dimensional point cloud data obtained by the sensor scanning a target scene;
  • a determining module for determining an ordered point cloud array according to the three-dimensional point cloud data, the ordered point cloud array including a plurality of points; the determining module is also used to determine a plurality of first neighboring points of a target point in the ordered point cloud array, the target point being any point in the ordered point cloud array; the determining module is further configured to determine the normal vector of the target point according to the plurality of first neighboring points.
  • the computing device of the second aspect is based on the same inventive concept as the method for processing three-dimensional point cloud data of the first aspect. Therefore, for the beneficial technical effects that the technical solution of the second aspect can achieve, refer to the description of the first aspect; details are not repeated here.
  • the determining module is specifically configured to obtain a non-parallel first vector and a second vector according to the multiple first neighboring points; and according to the first vector And the result of the cross product of the second vector to determine the normal vector of the target point.
  • the determining module is specifically configured to obtain the first vector and the second vector according to two first neighboring points and the target point; or obtain the first vector and the second vector according to three first neighboring points; or obtain the first vector and the second vector according to three first neighboring points and the target point; or obtain the first vector and the second vector according to four first neighboring points.
  • the determining module is further configured to determine multiple second neighboring points of the target point in the ordered point cloud array, the multiple second neighboring points being not the same or not completely the same as the plurality of first neighboring points; obtain non-parallel third and fourth vectors according to the plurality of second neighboring points; and determine the normal vector of the target point according to the cross product result of the first vector and the second vector and the cross product result of the third vector and the fourth vector.
  • the three-dimensional point cloud data is dense point cloud data
  • the determining module is specifically configured to down-sample the three-dimensional point cloud data to obtain down-sampled three-dimensional point cloud data, and convert the down-sampled three-dimensional point cloud data to obtain the ordered point cloud array.
  • the distance between any one of the plurality of first neighboring points and the target point in the ordered point cloud array is less than the search step.
  • the determining module is further configured to determine the search step length according to the distance between the sensor and the target point in the target scene.
  • a computing device including a processor and a memory, where the memory is used to store computer-executable instructions, and when the computing device runs, the processor executes the computer-executable instructions in the memory so that the computing device executes the method steps in the first aspect and any one of the possible implementation manners of the first aspect.
  • a non-transitory readable storage medium including program instructions.
  • When the program instructions are executed by a computing device, the computing device executes the method in the first aspect and any one of the possible implementation manners of the first aspect.
  • a computer program product which includes program instructions.
  • When the program instructions are executed by a computing device, the computing device executes the method in the first aspect and any one of the possible implementation manners of the first aspect.
  • Fig. 1 is a schematic diagram of a dense point cloud according to an embodiment of the present application.
  • Fig. 2 is a schematic diagram of a sparse point cloud according to an embodiment of the present application.
  • Fig. 3 is a schematic diagram of an application scenario of an embodiment of the present application.
  • Figure 4 is a schematic diagram of the process of using PCA to process 3D point cloud data.
  • FIG. 5 is a schematic diagram of a process of processing three-dimensional point cloud data according to an embodiment of the present application.
  • Fig. 6 is a schematic diagram of the coordinate system of the sensor of the embodiment of the present application.
  • FIG. 7 is a schematic diagram of an ordered point cloud array according to an embodiment of the present application.
  • FIG. 8 is a schematic diagram of constructing a non-parallel vector according to an embodiment of the present application.
  • FIG. 9 is a schematic diagram of an ordered point cloud array according to an embodiment of the present application.
  • FIG. 10 is a schematic diagram of a process of processing three-dimensional point cloud data according to another embodiment of the present application.
  • FIG. 11 is a schematic structural diagram of a computing device according to an embodiment of the present application.
  • FIG. 12 is a schematic structural diagram of a computing device according to another embodiment of the present application.
  • Three-dimensional point cloud: also known as laser point cloud (point cloud data, PCD) or point cloud.
  • a point cloud is a set of massive points expressing the spatial distribution and surface characteristics of a target, obtained by using a laser to acquire, under the same spatial reference system, the three-dimensional space coordinates of each sampling point on the surface of the object.
  • Although the 3D point cloud lacks detailed texture information, it contains rich 3D spatial information.
  • Disordered point cloud means that the points in the three-dimensional point cloud are randomly arranged, and the points exist independently.
  • Ordered point cloud means that the points in the point cloud are arranged in order in the real three-dimensional space. In addition to containing their own spatial coordinate information, the points in the ordered point cloud are also arranged in coordinate rows and coordinate columns similar to the pixels in the image.
  • An ordered point cloud can be understood as a depth image with three-dimensional space information. The neighboring point of a certain point in the ordered point cloud is also its neighboring point in the three-dimensional space.
  • the three-dimensional point cloud, disordered point cloud, and ordered point cloud may also be referred to as three-dimensional point cloud data, disordered point cloud data, and ordered point cloud data, respectively.
  • the point arrangement array in the ordered point cloud data can be called an ordered point cloud array.
  • Point cloud normal vector: the point cloud data is sampled on the surface of an object, and the normal vector of the object surface is the point cloud normal vector.
  • the point cloud normal vector is an important feature of the 3D point cloud, which can provide rich 3D spatial information and can be widely used in the target detection of the 3D point cloud.
  • Dense point cloud: a dense point cloud can clearly show the outline, features, etc. of the object, and can restore the appearance of the object more vividly.
  • FIG. 1 is a schematic diagram of a dense point cloud according to an embodiment of the application.
  • Sparse point cloud: the source of the sparse point cloud is feature points, which are points in the image that have obvious features and are easy to detect and match, such as corners and edge points of buildings.
  • Lidar uses the emitted laser beams to detect the position of the target. According to the number of beams emitted, lidars can be divided into single-beam and multi-beam lidars. The more beams, the wider the field of view and the denser the point cloud obtained. The fewer the beams, the sparser the point cloud obtained.
  • the point cloud scanned by Velodyne's 16-line/32-line lidar can be understood as a sparse point cloud.
  • FIG. 2 is a schematic diagram of a sparse point cloud according to an embodiment of the application.
  • Principal component analysis is a method of statistical analysis and simplification of data sets. It uses an orthogonal transformation to linearly transform the observations of a series of possibly correlated variables, thereby projecting them onto the values of a series of linearly uncorrelated variables. These uncorrelated variables are called principal components. Specifically, a principal component can be regarded as a linear equation that contains a series of linear coefficients to indicate the projection direction.
  • Down-sampling: also called sub-sampling or image reduction, that is, reducing the number of sampling points.
  • For an N*M image, if the down-sampling coefficient is x, one point is taken every x points in each row and column of the original image to form a new image.
  • For example, grid sampling samples the points in one grid cell into a single point to achieve down-sampling, where N, M, and x are all integers greater than or equal to 1.
  • Fig. 3 is a schematic diagram of a possible application scenario of an embodiment of the present application. This scenario can be applied to obstacle detection and recognition in an autonomous driving scenario, for example.
  • a sensor 120 and a computing device 130 may be installed in the vehicle 110.
  • the sensor 120 is used to detect and scan three-dimensional point cloud data in the target scene.
  • the aforementioned sensor 120 may include a laser radar, a contact scanner, a depth camera, etc., which is not limited in this application.
  • the computing device 130 is connected to the sensor 120, and is used to obtain the three-dimensional point cloud data scanned by the sensor 120 and process the three-dimensional point cloud data. Then, according to the processing results, the characteristic information of the target scene is analyzed in order to make reasonable decisions and path planning.
  • the scene in FIG. 3 is merely an illustration, and the method of the present application can also be applied to other types of scenes, as long as the scene involves processing three-dimensional point cloud data. For example, it can also be applied to scenes such as intelligent robot navigation and somatosensory games.
  • Figure 4 is a schematic diagram of the process of using PCA to process 3D point cloud data. As shown in Figure 4, the method includes the following steps:
  • three-dimensional point cloud data can be obtained from sensors.
  • To speed up the point search in subsequent processing, a k-dimensional tree (k-d tree) can be established.
  • a k-d tree (short for k-dimensional tree) is a data structure that organizes points in a k-dimensional Euclidean space.
  • the k-d tree can be used in a variety of applications, such as multi-dimensional key-value search (such as range search and nearest neighbor search).
  • the k-d tree is a special case of binary space partitioning.
  • S402 Calculate the rough normal vector N1 of each point in the three-dimensional point cloud data.
  • PCA can be used to calculate the rough normal vector N1 of the target point P based on k neighboring points, and the target point is any point in the three-dimensional point cloud data.
  • solving the normal vector N1 includes solving the covariance matrix, eigenvalues and eigenvectors of k neighboring points.
  • the eigenvector corresponding to the maximum eigenvalue is the normal vector at the target point P.
  • In formula (1), C represents the covariance matrix, k represents the number of neighboring points of the target point, P_i denotes the i-th neighboring point, u represents the mean of the three-dimensional space coordinates of the k neighboring points, and T represents matrix transposition.
  • plane fitting includes a variety of methods, such as least squares plane fitting and so on. After the fitting plane is obtained, the normal vector N2 of the fitting plane can be calculated.
  • the size of the foregoing predetermined threshold may be determined according to practice.
  • the foregoing z points may account for 60% to 80% of the k neighbor points.
  • the size of the foregoing predetermined threshold may be determined according to practice.
  • the foregoing s points may account for 60% to 80% of the z neighboring points.
  • the normal vector of the target point P can be recalculated by taking s points in the point set Q as neighbor points.
  • This application provides a method for processing three-dimensional point cloud data.
  • the method uses the neighboring points of the target point in the ordered point cloud array to calculate the normal vector of each point in the three-dimensional point cloud.
  • the method uses the ordered point cloud The arrangement of the points of the array simplifies the complexity of calculating the normal vector, and improves the real-time performance and calculation accuracy of processing three-dimensional point cloud data.
  • FIG. 5 is a schematic flowchart of a method for processing three-dimensional point cloud data according to an embodiment of the present application. As shown in Figure 5, the method includes the following steps.
  • S501 Acquire 3D point cloud data obtained by scanning the target scene by the sensor.
  • In an in-vehicle system, for example, the sensor can be a lidar.
  • the three-dimensional point cloud data acquired by the sensor may be disordered point cloud data or an ordered point cloud array.
  • the arrangement of multiple points in the ordered point cloud array is the same as the real three-dimensional space, and each point includes three-dimensional space coordinate information.
  • the disordered point cloud data can be converted into an ordered point cloud array.
  • the horizontal and vertical distribution rates of the lidar can be used to convert a disordered point cloud into an ordered point cloud.
  • the row direction can be arranged in order along the laser transmitter of the lidar
  • the column direction can be arranged in the scanning order of the lidar to obtain a two-dimensional ordered data structure. For example, for a 32-line radar, scanning 360° returns 2000 points of data, then the ordered point cloud of the 32-line lidar is a 32 ⁇ 2000 structure.
  • FIG. 6 shows the coordinate system of the sensor in this embodiment of the application.
  • the coordinate system includes X-axis, Y-axis, and Z-axis that are perpendicular to each other, and O represents the origin of the coordinate.
  • the distance between the point P (x, y, z) in the three-dimensional point cloud and the origin of the sensor is denoted as R.
  • formulas (2)-(5) can be used to sort the point clouds to form an ordered point cloud array.
  • the row coordinates and column coordinates of the target point P can be calculated according to formulas, where the row coordinates and column coordinates of the target point P can be expressed as (row, col).
  • ω represents the angle between the point P and the XOY plane, and α represents the angle between the projection of the point P on the XOY plane and the Y axis.
  • FIG. 7 is a schematic diagram of an ordered point cloud array according to an embodiment of the application.
  • each black dot in Figure 7 represents a point in the three-dimensional space, and each point contains three-dimensional space coordinate information.
  • the row coordinates and column coordinates of the points in the ordered point cloud array represent the storage structure of the ordered point cloud array, and the points in the ordered point cloud array themselves do not contain row coordinate and column coordinate information.
  • the three-dimensional point cloud data obtained from the sensor is an ordered point cloud array
  • the three-dimensional point cloud data can be directly used as an ordered point cloud array, and the subsequent steps are performed.
  • S503 Determine multiple first neighboring points of the target point P in the ordered point cloud array, where the target point P is any one of the multiple points in the ordered point cloud array.
  • the neighboring points of the target point P are the neighboring points of the target point P in the ordered point cloud array. Therefore, when selecting the neighboring points, the relative position relationship between the target point and the neighboring points in the ordered point cloud array is considered, that is, the row coordinate information and column coordinate information of the target point and the neighboring points in the ordered point cloud array.
  • when selecting the neighboring points used to construct the non-parallel vectors, the neighboring points of the target point P in the ordered point cloud array are selected, that is, the neighboring points are adjacent to the target point P in the ordered point cloud array, so that the plane constructed from the two non-parallel vectors can more accurately approximate the spatial local tangent plane of the target point P.
  • determining the normal vector of the target point according to the plurality of first neighboring points can be understood as approximately regarding the plane composed of the target point P and the neighboring points as the tangent plane of the space where the target point P is located, and taking the normal vector of this plane as the normal vector of the target point P.
  • the specific manner of calculating the normal vector of the target point P may include multiple manners, and the description will be continued below in conjunction with examples.
  • the three-dimensional point cloud data acquired by the sensor is converted into an ordered point cloud array, the neighboring points of the target point are selected in the ordered point cloud array, and the normal vector of the target point P is then calculated from the selected neighboring points. Since the neighboring points are selected in the ordered point cloud array, the position information of the neighboring points in the ordered point cloud space is taken into account when calculating the normal vector of the target point P, and the constructed plane can more accurately approximate the spatial local tangent plane of the target point P, so that the calculated normal vector of the target point P is more accurate.
  • the neighboring points of the target point are selected in the ordered point cloud sequence, instead of sequentially calculating the distance between each point in the point cloud data and the target point to filter the neighboring points, which reduces the computational complexity. It improves the real-time performance and calculation accuracy of processing three-dimensional point cloud data.
  • the normal vector of the target point may be used to analyze the characteristics of the target scene.
  • the characteristic information of the target scene may be determined according to the normal vector of the target point P, and the target may be analyzed according to the characteristic information of the target scene.
  • the target point and its neighboring points can be used to construct two non-parallel vectors. And based on the tangent plane formed by these two non-parallel vectors, the normal vector of the tangent plane is calculated.
  • the determining a normal vector of the target point according to the plurality of first neighboring points includes: obtaining a non-parallel first vector and a second vector according to the plurality of first neighboring points; Determine the normal vector of the target point according to the result of the cross product of the first vector and the second vector.
  • the target point P may participate in the construction of the two non-parallel vectors, or may not participate in the construction of the two non-parallel vectors.
  • These two non-parallel vectors can be constructed from three points or four points.
  • FIG. 8 shows several ways of constructing non-parallel vectors in an embodiment of the present application.
  • these two non-parallel vectors can be composed of the target point P and two adjacent points. That is, through these three points, two non-parallel vectors are formed.
  • there are also multiple implementation manners for constructing two vectors from three points which is not limited in the embodiment of the present application, as long as the constructed two vectors meet the non-parallel condition.
  • the two non-parallel vectors can be composed of three adjacent points of the target point P, that is, the target point itself is not included.
  • the two non-parallel vectors can be formed by the target point P and three adjacent points of the target point P, that is, two non-parallel vectors are formed by four points.
  • there are also multiple implementation manners for constructing two vectors from four points which is not limited in the embodiment of the present application, as long as the constructed two vectors meet the non-parallel condition.
  • the two non-parallel vectors can be composed of four adjacent points of the target point P, that is, the target point P itself is not included.
  • the points in FIG. 8 are points in an ordered point cloud sequence, and when the neighboring points of the target point are selected, the neighboring points are selected in the ordered point cloud sequence.
  • the three-dimensional space coordinates of the target point P or neighboring points are used instead of the row and column coordinates of each point.
  • the two vectors in Fig. 8(c) are constructed based on three-dimensional space coordinates. Therefore, the two vectors constructed based on three-dimensional space coordinates are non-parallel vectors.
  • the three-dimensional space coordinates of the four selected neighboring points of the target point P are marked as P1 (x1, y1, z1), P2 (x2, y2, z2), P3 (x3, y3, z3), P4 (x4, y4, z4).
  • These four adjacent points can form two non-parallel vectors v1 and v2, which are expressed as: v1 (x1-x2, y1-y2, z1-z2), v2 (x3-x4, y3-y4, z3-z4).
  • Normal represents the normal vector
  • i, j, and k respectively represent the components of the X, Y, and Z axes in three different directions.
  • the right-hand spiral rule can be used to determine the direction of the cross-multiplication of the two vectors, so as to achieve the same direction of the normal vector at each point under the same viewing angle.
  • the normal vector obtained from the first vector and the second vector may be directly determined as the normal vector of the target point P.
  • different neighboring points can be reselected around the target point P, the normal vector is calculated multiple times, and the normal vectors calculated multiple times are weighted and averaged to obtain the normal vector of the target point P.
  • the method further includes: determining a plurality of second neighboring points of the target point in the ordered point cloud array, where the plurality of second neighboring points are not completely the same as the plurality of first neighboring points; and obtaining a non-parallel third vector and fourth vector according to the plurality of second neighboring points. Determining the normal vector of the target point according to the result of the cross product of the first vector and the second vector includes: determining the normal vector of the target point according to the cross product result of the first vector and the second vector and the cross product result of the third vector and the fourth vector.
  • the above-mentioned multiple second neighboring points being not the same or not completely the same as the multiple first neighboring points may include the situation where at least one of the multiple second neighboring points is different from the multiple first neighboring points, or the situation where there is no intersection between the multiple second neighboring points and the multiple first neighboring points.
  • the three-dimensional point cloud data acquired by the sensor is converted into an ordered point cloud array, the neighboring points of the target point are selected in the ordered point cloud array to construct two non-parallel vectors, and the normal vector of the target point P is then calculated from the plane constructed by the non-parallel vectors. Since the neighboring points are selected in the ordered point cloud array, the position information of the neighboring points in the ordered point cloud space is taken into account when calculating the normal vector of the target point P, and the constructed plane can more accurately approximate the spatial local tangent plane of the target point P, so that the calculated normal vector of the target point P is more accurate.
  • the normal vector of the target point P can be calculated by searching at least two adjacent points, which greatly reduces the computational complexity of processing three-dimensional point cloud data and saves computational resources.
  • the point cloud data can be sampled to the required resolution by downsampling, and then the normal vector is calculated.
  • the spatial resolution of the sensor for different positions in space is different.
  • the farther the object to be scanned is from the sensor the lower its resolution.
  • the resolution res of the horizontal space at the target point P can be expressed as res = d*alpha, where d is the distance between the target point P and the sensor origin and alpha is the horizontal scanning resolution of the sensor.
  • the distance between the selected neighboring points and the target point P in the ordered point cloud array should be less than the search step, so that the accuracy of the target point P can be guaranteed. Since the distance d between the sensor origin and the position of the target point P in the target scene differs, and the spatial resolution also differs, the size of the search step can be determined according to the distance d. For example, the smaller the distance d, the greater the spatial resolution with which the sensor scans the target point P, and the closer the neighboring points of the target point P are to the target point P in the real three-dimensional scene. A larger search step can then be chosen, that is, the target point P has more neighboring points to choose from.
  • For example, if the search step is denoted by step, it can be expressed as:
  • step = k/res + b     (7)
  • where k and b are linear parameters, and res represents the resolution of the horizontal space where the target point P is located.
  • the distance information of the neighboring points in the ordered point cloud space is considered, and the points whose distance from the target point is within the search step are selected as the neighboring points, thus It is ensured that the distance between the selected neighboring point and the target point in the real three-dimensional space is also close enough, so that the normal vector of the target point P obtained by calculation is more accurate.
  • The search step may also be a preset threshold, which affects the accuracy of the final normal vector.
  • FIG. 9 is a schematic diagram of an ordered point cloud array according to an embodiment of the present application.
  • within the search step, the points P_L and P_R to the left and right of the target point P can be selected along the row direction of the target point P, where the distances between the points P_L, P_R and the target point P are smaller than the search step.
  • similarly, points P_U and P_D can be selected in the upper and lower adjacent rows of the target point P, respectively, where the distances between the points P_U, P_D and the target point P are smaller than the search step.
  • the distance between the adjacent point and the target point P refers to the distance in the ordered point cloud array, not the distance represented by the three-dimensional space coordinates.
  • the neighboring points can be the 8 points in the upper row, the lower row, and the same row that are immediately adjacent to the target point P.
  • the search step size can be dynamically set according to the distance d between the sensor origin and the position of the target point in the target scene.
  • the search step is used to select the neighboring points of the target point P in the ordered point cloud array.
  • the range for selecting neighboring points can be determined according to different spatial resolutions, so that the plane fitted from the selected neighboring points is closer to the spatial local tangent plane of the target point P, and the calculated normal vector of the target point P is more accurate.
  • FIG. 10 is a schematic diagram of a specific process of calculating a normal vector of three-dimensional point cloud data according to an embodiment of the present application. As shown in Figure 10, the method includes:
  • three-dimensional point cloud data scanned by lidar can be obtained.
  • if the down-sampling coefficient is x, it means that one point is taken every x points in each row and column of the original image to form a new image.
  • a grid sampling method can be used, that is, a point in a grid space is sampled into one point to achieve the purpose of downsampling, where N, M, and x are all integers greater than or equal to 1.
  • the horizontal and vertical distribution rate of lidar can be used to convert a disordered point cloud into an ordered point cloud.
  • the row direction can be arranged in order along the laser transmitter of the lidar
  • the column direction can be arranged in the scanning order of the lidar to obtain a two-dimensional ordered data structure. For example, for a 32-line radar, scanning 360° returns 2000 points of data, then the ordered point cloud of the 32-line lidar is a 32 ⁇ 2000 structure.
  • the disordered point cloud data can be converted into an ordered point cloud array according to the formulas (2)-(5) described in the example in FIG. 6.
  • the search step size can be dynamically determined according to the distance d between the sensor origin and the target point P corresponding to the position in the target scene.
  • the specific manner of determining the search step size can refer to the foregoing description, which will not be repeated here.
  • the manner of selecting neighboring points can refer to the description in the preceding text, which will not be repeated here.
  • at least two neighboring points can be selected to calculate the normal vector of the target point, so the number of the multiple neighboring points N ⁇ 2.
  • enough neighboring points can be selected to construct multiple pairs of non-parallel vectors.
  • one or more pairs of non-parallel vectors can be constructed based on the multiple neighboring points, and one normal vector can be calculated from each pair of non-parallel vectors. If only one normal vector is calculated, that normal vector is determined as the normal vector of the target point. If multiple normal vectors are calculated, the multiple normal vectors can be weighted and averaged to obtain the normal vector of the target point.
  • FIG. 11 is a schematic structural diagram of a computing device 1100 provided in an embodiment of the present application.
  • the computing device 1100 is connected to a sensor, and the computing device 1100 includes:
  • the obtaining module 1110 is used to obtain three-dimensional point cloud data of the target scene scanned by the sensor;
  • the determining module 1120 is configured to determine an ordered point cloud array according to the three-dimensional point cloud data, where the ordered point cloud array includes a plurality of points; the determining module 1120 is also configured to determine a plurality of first neighboring points of a target point in the ordered point cloud array, the target point being any point in the ordered point cloud array; the determining module 1120 is further configured to determine the normal vector of the target point according to the plurality of first neighboring points.
  • the determining module 1120 is specifically configured to obtain a first vector and a second vector that are non-parallel according to the multiple first neighboring points; and according to the first vector and the first vector The result of the cross product of the two vectors determines the normal vector of the target point.
  • the determining module 1120 is specifically configured to obtain the first vector and the second vector according to two first neighboring points and the target point; or obtain the first vector and the second vector according to three first neighboring points; or obtain the first vector and the second vector according to three first neighboring points and the target point; or obtain the first vector and the second vector according to four first neighboring points.
  • the determining module 1120 is further configured to determine a plurality of second neighboring points of the target point in the ordered point cloud array, where the plurality of second neighboring points are not completely the same as the plurality of first neighboring points; obtain non-parallel third and fourth vectors according to the plurality of second neighboring points; and determine the normal vector of the target point according to the cross product result of the first vector and the second vector and the cross product result of the third vector and the fourth vector.
  • the three-dimensional point cloud data is dense point cloud data
  • the determining module 1120 is specifically configured to down-sample the three-dimensional point cloud data to obtain down-sampled three-dimensional point cloud data; And converting the down-sampled three-dimensional point cloud data to obtain the ordered point cloud array.
  • the distance between any one of the plurality of first neighboring points and the target point in the ordered point cloud array is less than the search step.
  • the determining module 1120 is further configured to determine the search step length according to the distance between the origin of the sensor and the position of the target point in the target scene.
  • the computing device 1100 provided in the embodiments of the present application may be implemented by an application-specific integrated circuit (ASIC) or a programmable logic device (PLD).
  • the above PLD may be a complex programmable logic device (CPLD), a field-programmable gate array (FPGA), generic array logic (GAL), or the like.
  • the method for processing three-dimensional point cloud data in the embodiment of the present application can also be processed through software, and each module in the computing device 1100 can also be a software module.
  • the computing device 1100 may correspond to the method described in the embodiment of the present application, and the above-mentioned and other operations and/or functions of each module in the computing device 1100 are the same as those in FIGS. 5 and 10 of the present application. For the sake of brevity, the corresponding process of the method will not be repeated here.
  • FIG. 12 is a schematic structural diagram of a computing device 1200 provided by an embodiment of the present application.
  • the computing device 1200 includes a processor 1210, a memory 1220, a communication interface 1230, and a bus 1240.
  • processor 1210 in the computing device 1200 shown in FIG. 12 may correspond to the determining module 1120 of the computing device 1100 in FIG. 11, and the communication interface 1230 in the computing device 1200 may correspond to the acquiring module 1110 in FIG. 11.
  • the processor 1210 may be connected to the memory 1220.
  • the memory 1220 can be used to store program code and data. Therefore, the memory 1220 may be a storage unit inside the processor 1210, an external storage unit independent of the processor 1210, or a component including both a storage unit inside the processor 1210 and an external storage unit independent of the processor 1210.
  • the computing device 1200 may further include a bus 1240.
  • the memory 1220 and the communication interface 1230 may be connected to the processor 1210 through the bus 1240.
  • the bus 1240 may be a peripheral component interconnect standard (PCI) bus or an extended industry standard architecture (EISA) bus, etc.
  • the bus 1240 can be divided into an address bus, a data bus, a control bus, and the like. For ease of representation, only one line is used to represent in FIG. 12, but it does not mean that there is only one bus or one type of bus.
  • the processor 1210 may adopt a central processing unit (central processing unit, CPU).
  • the processor can also be other general-purpose processors, digital signal processors (DSPs), application-specific integrated circuits (ASICs), field-programmable gate arrays (FPGAs) or other programmable logic devices, discrete gate or transistor logic devices, discrete hardware components, etc.
  • the general-purpose processor may be a microprocessor or the processor may also be any conventional processor or the like.
  • the processor 1210 adopts one or more integrated circuits to execute related programs to implement the technical solutions provided in the embodiments of the present application.
  • the memory 1220 may include a read-only memory and a random access memory, and provides instructions and data to the processor 1210.
  • a part of the processor 1210 may also include a non-volatile random access memory.
  • the processor 1210 may also store device type information.
  • the processor 1210 executes the computer-executable instructions in the memory 1220 to execute the operation steps of the foregoing method.
  • the computing device 1200 may correspond to the corresponding execution subject that executes the method embodiments of FIG. 5 and FIG. 10 according to the embodiment of the present application, and the foregoing and other operations of the various modules in the computing device 1200 and The/or functions are to implement the corresponding processes of the method embodiments in FIG. 5 and FIG. 10 respectively, and for the sake of brevity, details are not described herein again.
  • the disclosed system, device, and method may be implemented in other ways.
  • the device embodiments described above are merely illustrative, for example, the division of the units is only a logical function division, and there may be other divisions in actual implementation, for example, multiple units or components may be combined or It can be integrated into another system, or some features can be ignored or not implemented.
  • the displayed or discussed mutual coupling or direct coupling or communication connection may be indirect coupling or communication connection through some interfaces, devices or units, and may be in electrical, mechanical or other forms.
  • the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, that is, they may be located in one place, or they may be distributed on multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
  • the functional units in the various embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit.
  • the function is implemented in the form of a software functional unit and sold or used as an independent product, it can be stored in a computer readable storage medium.
  • the technical solution of the present application essentially or the part that contributes to the existing technology or the part of the technical solution can be embodied in the form of a software product, and the computer software product is stored in a storage medium, including Several instructions are used to make a computer device (which may be a personal computer, a server, or a network device, etc.) execute all or part of the steps of the methods described in the various embodiments of the present application.
  • the aforementioned storage media include: U disk, mobile hard disk, read-only memory (ROM), random access memory (RAM), magnetic disks, optical disks, and other media that can store program codes.

Abstract

A method for processing three-dimensional point cloud data, which can be used in the field of intelligent driving to reduce the computational overhead of a computing device processing data obtained by a sensor. The method includes: acquiring three-dimensional point cloud data obtained by a sensor such as a lidar scanning a target scene (S501); determining an ordered point cloud array according to the three-dimensional point cloud data, the ordered point cloud array including a plurality of points (S502); determining a plurality of first neighboring points of a target point in the ordered point cloud array, the target point being any one of the plurality of points in the ordered point cloud array (S503); and determining a normal vector of the target point according to the plurality of first neighboring points (S504).

Description

Method for processing three-dimensional point cloud data and computing device
This application claims priority to Chinese patent application No. 201910872307.3, filed with the Chinese Patent Office on September 16, 2019 and entitled "Method for processing three-dimensional point cloud data and computing device", which is incorporated herein by reference in its entirety.
Technical Field
This application relates to the field of data processing, and in particular, to a method for processing three-dimensional point cloud data and a computing device.
Background
With the progress of sensor technology, three-dimensional point clouds are increasingly applied to three-dimensional scene modeling, for example, in fields such as intelligent robot navigation, autonomous driving, and somatosensory games. For example, with the development of autonomous driving, lidar has become one of the important on-board sensors. It is very difficult to analyze a point cloud scene simply by using the three-dimensional coordinate features of the three-dimensional point cloud, whereas the point cloud normal vector can describe local spatial characteristics; it is the most widely used feature for analyzing three-dimensional point cloud scenes and can be widely applied to data registration, segmentation, recognition, and the like. In an existing calculation method, the principal component analysis (PCA) method can first be used to roughly calculate the normal vector of a target point, and then, through plane fitting, multiple points close to the fitted plane are screened out to form a point set Q. Based on the point set Q, PCA is then used again to calculate the normal vector of the target point. In the calculation process, the closer the plane fitted from the neighboring points is to the tangent plane of the target point, the higher the accuracy of the calculated normal vector. For each target point, it is usually necessary to search for 15 to 30 neighboring points to calculate the normal vector. For a sparse point cloud, these neighboring points are very likely not to lie in the same plane, which leads to a large estimation error of the normal vector. In addition, PCA involves matrix calculation, and its overhead is large.
Summary
This application provides a method for processing three-dimensional point cloud data, which can reduce the computational complexity of processing three-dimensional point cloud data.
According to a first aspect, a method for processing three-dimensional point cloud data is provided, used in a computing device connected to a sensor. The method includes: acquiring three-dimensional point cloud data obtained by the sensor scanning a target scene; determining an ordered point cloud array according to the three-dimensional point cloud data, where the ordered point cloud array includes a plurality of points; determining a plurality of first neighboring points of a target point in the ordered point cloud array, where the target point is any point in the ordered point cloud array; and determining a normal vector of the target point according to the plurality of first neighboring points.
In this embodiment of this application, the three-dimensional point cloud data acquired by the sensor is converted into an ordered point cloud array, the neighboring points of the target point are selected in the ordered point cloud array, and the normal vector of the target point is then calculated from the selected neighboring points. Because the neighboring points are selected in the ordered point cloud array, the position information of the neighboring points in the ordered point cloud space is taken into account when calculating the normal vector of the target point, so that the constructed plane can more accurately approximate the local tangent plane of the target point in space, and the calculated normal vector of the target point is more accurate.
In addition, in this embodiment of this application, the neighboring points of the target point are selected in the ordered point cloud sequence, without screening the neighboring points by calculating, one by one, the distance between each point in the point cloud data and the target point, which reduces the computational complexity and improves the real-time performance and calculation accuracy of processing three-dimensional point cloud data.
With reference to the first aspect, in a possible implementation, the determining a normal vector of the target point according to the plurality of first neighboring points includes: obtaining a non-parallel first vector and second vector according to the plurality of first neighboring points; and determining the normal vector of the target point according to a cross product of the first vector and the second vector.
In this embodiment of this application, the neighboring points of the target point in the ordered point cloud array are used to construct two non-parallel vectors, and the normal vector of the three-dimensional point cloud data is determined according to the non-parallel vectors. This method uses the arrangement characteristics of the point array of the ordered point cloud array, simplifies the complexity of calculating the normal vector, and improves the real-time performance and calculation accuracy of processing three-dimensional point cloud data.
With reference to the first aspect, in a possible implementation, the obtaining a non-parallel first vector and second vector according to the plurality of first neighboring points includes: obtaining the first vector and the second vector according to two of the first neighboring points and the target point; or obtaining the first vector and the second vector according to three of the first neighboring points; or obtaining the first vector and the second vector according to three of the first neighboring points and the target point; or obtaining the first vector and the second vector according to four of the first neighboring points.
With reference to the first aspect, in a possible implementation, the method further includes: determining a plurality of second neighboring points of the target point in the ordered point cloud array, where the plurality of second neighboring points are not the same or not completely the same as the plurality of first neighboring points; and obtaining a non-parallel third vector and fourth vector according to the plurality of second neighboring points; and the determining the normal vector of the target point according to the cross product of the first vector and the second vector includes: determining the normal vector of the target point according to the cross product of the first vector and the second vector and the cross product of the third vector and the fourth vector.
With reference to the first aspect, in a possible implementation, the three-dimensional point cloud data is dense point cloud data, and the determining an ordered point cloud array according to the three-dimensional point cloud data includes: down-sampling the three-dimensional point cloud data to obtain down-sampled three-dimensional point cloud data; and converting the down-sampled three-dimensional point cloud data to obtain the ordered point cloud array.
With reference to the first aspect, in a possible implementation, the distance between any one of the plurality of first neighboring points and the target point in the ordered point cloud array is smaller than a search step.
With reference to the first aspect, in a possible implementation, the method further includes: determining the search step according to the distance between the sensor and the target point in the target scene.
In this embodiment of this application, the search step can be set dynamically according to the distance between the sensor origin and the position of the target point in the target scene, and the search step is used to select the neighboring points of the target point in the ordered point cloud array. By dynamically setting the search step, the range for selecting neighboring points can be determined according to different spatial resolutions, so that the plane fitted from the selected neighboring points is closer to the local tangent plane of the target point in space, and the calculated normal vector of the target point is more accurate.
According to a second aspect, a computing device for processing three-dimensional point cloud data is provided, where the computing device is connected to a sensor and includes: an acquisition module, configured to acquire three-dimensional point cloud data obtained by the sensor scanning a target scene; and a determining module, configured to determine an ordered point cloud array according to the three-dimensional point cloud data, where the ordered point cloud array includes a plurality of points; the determining module is further configured to determine a plurality of first neighboring points of a target point in the ordered point cloud array, where the target point is any point in the ordered point cloud array; and the determining module is further configured to determine a normal vector of the target point according to the plurality of first neighboring points.
It should be understood that the computing device of the second aspect is based on the same inventive concept as the method for processing three-dimensional point cloud data of the first aspect. Therefore, for the beneficial technical effects that the technical solution of the second aspect can achieve, refer to the description of the first aspect; details are not repeated here.
With reference to the second aspect, in a possible implementation, the determining module is specifically configured to obtain a non-parallel first vector and second vector according to the plurality of first neighboring points, and determine the normal vector of the target point according to a cross product of the first vector and the second vector.
With reference to the second aspect, in a possible implementation, the determining module is specifically configured to obtain the first vector and the second vector according to two of the first neighboring points and the target point; or obtain the first vector and the second vector according to three of the first neighboring points; or obtain the first vector and the second vector according to three of the first neighboring points and the target point; or obtain the first vector and the second vector according to four of the first neighboring points.
With reference to the second aspect, in a possible implementation, the determining module is further configured to determine a plurality of second neighboring points of the target point in the ordered point cloud array, where the plurality of second neighboring points are not the same or not completely the same as the plurality of first neighboring points; obtain a non-parallel third vector and fourth vector according to the plurality of second neighboring points; and determine the normal vector of the target point according to the cross product of the first vector and the second vector and the cross product of the third vector and the fourth vector.
With reference to the second aspect, in a possible implementation, the three-dimensional point cloud data is dense point cloud data, and the determining module is specifically configured to down-sample the three-dimensional point cloud data to obtain down-sampled three-dimensional point cloud data, and convert the down-sampled three-dimensional point cloud data to obtain the ordered point cloud array.
With reference to the second aspect, in a possible implementation, the distance between any one of the plurality of first neighboring points and the target point in the ordered point cloud array is smaller than a search step.
With reference to the second aspect, in a possible implementation, the determining module is further configured to determine the search step according to the distance between the sensor and the target point in the target scene.
According to a third aspect, a computing device is provided, including a processor and a memory, where the memory is configured to store computer-executable instructions, and when the computing device runs, the processor executes the computer-executable instructions in the memory so that the computing device performs the method steps in the first aspect or any one of the possible implementations of the first aspect.
According to a fourth aspect, a non-transitory readable storage medium is provided, including program instructions. When the program instructions are run by a computing device, the computing device performs the method in the first aspect or any one of the possible implementations of the first aspect.
According to a fifth aspect, a computer program product is provided, including program instructions. When the program instructions are run by a computing device, the computing device performs the method in the first aspect or any one of the possible implementations of the first aspect.
On the basis of the implementations provided in the foregoing aspects, this application may be further combined to provide more implementations.
Brief Description of the Drawings
Fig. 1 is a schematic diagram of a dense point cloud according to an embodiment of this application.
Fig. 2 is a schematic diagram of a sparse point cloud according to an embodiment of this application.
Fig. 3 is a schematic diagram of an application scenario of an embodiment of this application.
Fig. 4 is a schematic flowchart of processing three-dimensional point cloud data by using PCA.
Fig. 5 is a schematic flowchart of processing three-dimensional point cloud data according to an embodiment of this application.
Fig. 6 is a schematic diagram of a coordinate system of a sensor according to an embodiment of this application.
Fig. 7 is a schematic diagram of an ordered point cloud array according to an embodiment of this application.
Fig. 8 is a schematic diagram of constructing non-parallel vectors according to an embodiment of this application.
Fig. 9 is a schematic diagram of an ordered point cloud array according to an embodiment of this application.
Fig. 10 is a schematic flowchart of processing three-dimensional point cloud data according to another embodiment of this application.
Fig. 11 is a schematic structural diagram of a computing device according to an embodiment of this application.
Fig. 12 is a schematic structural diagram of a computing device according to another embodiment of this application.
Detailed Description
The technical solutions in this application are described below with reference to the accompanying drawings.
For ease of understanding, several concepts and terms involved in this application are first introduced below.
Three-dimensional point cloud: also known as laser point cloud (point cloud data, PCD) or point cloud, it is a set of massive points expressing the spatial distribution and surface characteristics of a target, obtained by using a laser to acquire, under the same spatial reference system, the three-dimensional spatial coordinates of each sampling point on the surface of an object. Compared with an image, a three-dimensional point cloud lacks detailed texture information but contains rich three-dimensional spatial information.
Unordered point cloud: an unordered point cloud means that the points in the three-dimensional point cloud are arranged randomly, and the points exist independently of each other.
Ordered point cloud: an ordered point cloud means that the points in the point cloud are arranged in the order of the real three-dimensional space. In addition to containing their own spatial coordinate information, the points in an ordered point cloud are also arranged in coordinate rows and coordinate columns, similar to the pixels in an image. An ordered point cloud can be understood as a depth image with three-dimensional spatial information; a neighboring point of a point in the ordered point cloud is also its neighboring point in three-dimensional space.
Optionally, in the embodiments of this application, the three-dimensional point cloud, the unordered point cloud, and the ordered point cloud may also be referred to as three-dimensional point cloud data, unordered point cloud data, and ordered point cloud data, respectively. The point arrangement array in the ordered point cloud data may be referred to as an ordered point cloud array.
Point cloud normal vector: the point cloud data is sampled on the surface of an object, and the normal vector of the object surface is the point cloud normal vector. The point cloud normal vector is an important feature of the three-dimensional point cloud; it can provide rich three-dimensional spatial information and can be widely used in target detection of the three-dimensional point cloud.
Dense point cloud: a dense point cloud can clearly show the outline, features, and the like of an object, and can restore the appearance of the object more vividly. For example, Fig. 1 is a schematic diagram of a dense point cloud according to an embodiment of this application.
Sparse point cloud: the source of a sparse point cloud is feature points, which are points in an image that have obvious features and are easy to detect and match, such as corners and edge points of buildings. A lidar uses the emitted laser beams to detect the position of a target. According to the number of emitted beams, lidars can be classified into single-beam and multi-beam lidars: the more beams, the wider the field of view and the denser the obtained point cloud; the fewer beams, the sparser the obtained point cloud. For example, the point clouds scanned by Velodyne 16-line/32-line lidars can be understood as sparse point clouds. Fig. 2 is a schematic diagram of a sparse point cloud according to an embodiment of this application.
Principal component analysis: a method for statistical analysis and simplification of data sets. It uses an orthogonal transformation to linearly transform the observations of a series of possibly correlated variables, thereby projecting them onto the values of a series of linearly uncorrelated variables, which are called principal components. Specifically, a principal component can be regarded as a linear equation that contains a series of linear coefficients indicating the projection direction.
Down-sampling: also called sub-sampling or image reduction, that is, reducing the number of sampling points. For an N*M image, if the down-sampling coefficient is x, one point is taken every x points in each row and column of the original image to form a new image. For example, grid sampling samples the points in one grid cell into a single point to achieve down-sampling, where N, M, and x are all integers greater than or equal to 1.
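As an illustration, the following is a minimal sketch of grid (raster) down-sampling of an unordered point cloud; the cubic cell size and the choice of replacing each cell's points by their centroid are illustrative assumptions rather than requirements of the description.

```python
import numpy as np

def grid_downsample(points: np.ndarray, cell: float) -> np.ndarray:
    """Down-sample an (N, 3) point cloud to one point per cubic grid cell.

    All points falling into the same cell of edge length `cell` are replaced
    by their centroid, reducing the number of sampling points.
    """
    cells = {}
    for p in points:
        key = tuple(np.floor(p / cell).astype(int))   # integer cell index
        cells.setdefault(key, []).append(p)
    return np.array([np.mean(group, axis=0) for group in cells.values()])

# Example: reduce a dense cloud of 100,000 points to one point per 0.2 m cell.
cloud = np.random.rand(100_000, 3) * 10.0
sparse = grid_downsample(cloud, cell=0.2)
```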
Fig. 3 is a schematic diagram of a possible application scenario of an embodiment of this application. This scenario can be applied, for example, to obstacle detection and recognition in an autonomous driving scenario. As shown in Fig. 3, a sensor 120 and a computing device 130 may be installed in a vehicle 110. The sensor 120 is configured to detect and scan three-dimensional point cloud data in a target scene. As an example, the sensor 120 may include a lidar, a contact scanner, a depth camera, and the like, which is not limited in this application. The computing device 130 is connected to the sensor 120 and is configured to obtain the three-dimensional point cloud data scanned by the sensor 120 and process the three-dimensional point cloud data, and then analyze the characteristic information of the target scene according to the processing results, so as to make reasonable decisions and path planning.
It should be understood that the scenario in Fig. 3 is merely an illustration; the method of this application can also be applied to other types of scenarios, as long as the scenario involves processing three-dimensional point cloud data. For example, it can also be applied to scenarios such as intelligent robot navigation and somatosensory games.
For ease of understanding, the principle of processing three-dimensional point cloud data in the prior art is first introduced. When calculating the normal vector at a target point P in three-dimensional point cloud data, the plane formed by the target point P and its k neighboring points can be approximately regarded as the tangent plane of the space where the target point P is located, and the normal vector of this plane is then taken as the normal vector of the point P. Therefore, the closer the plane in which the neighboring points lie is to the tangent plane of the target point P, the higher the accuracy of the calculated normal vector. In practice, PCA can be used to calculate the point cloud normal vector.
Fig. 4 is a schematic flowchart of processing three-dimensional point cloud data by using PCA. As shown in Fig. 4, the method includes the following steps:
S401: Acquire three-dimensional point cloud data.
For example, the three-dimensional point cloud data can be obtained from a sensor. To speed up the point search in the subsequent processing, a k-dimensional tree (k-d tree) can be established. In computer science, a k-d tree (short for k-dimensional tree) is a data structure for organizing points in a k-dimensional Euclidean space. The k-d tree can be used in a variety of applications, such as multi-dimensional key-value searches (for example, range search and nearest-neighbor search). The k-d tree is a special case of binary space partitioning.
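As an illustration, the following is a minimal sketch of the k-d tree neighbor search that this prior-art flow relies on, assuming SciPy's cKDTree as the implementation; the cloud size and the neighbor count of 16 are illustrative.

```python
import numpy as np
from scipy.spatial import cKDTree

# Build a k-d tree over an unordered cloud and query the nearest neighbors
# of a target point, as done before applying formula (1).
cloud = np.random.rand(10_000, 3)
tree = cKDTree(cloud)
target = cloud[0]
distances, indices = tree.query(target, k=16)   # nearest 16, incl. the point itself
neighbors = cloud[indices[1:]]                  # drop the target point itself
```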
S402: Calculate a rough normal vector N1 of each point in the three-dimensional point cloud data.
For a target point, PCA can be used to calculate the rough normal vector N1 of the target point P based on its k nearest neighboring points, where the target point is any point in the three-dimensional point cloud data. Solving for the normal vector N1 includes solving the covariance matrix, eigenvalues, and eigenvectors of the k neighboring points, where the eigenvector corresponding to the maximum eigenvalue is taken as the normal vector at the target point P.
The specific calculation is shown in the following formula (1):
C = (1/k) · Σ_{i=1..k} (P_i − u)(P_i − u)^T     (1)
In formula (1), C represents the covariance matrix, k represents the number of neighboring points of the target point, P_i represents the i-th neighboring point, u represents the mean of the three-dimensional spatial coordinates of the k neighboring points, and T represents matrix transposition. After the covariance matrix C is obtained, the eigenvalues of the covariance matrix C are calculated, and the eigenvector corresponding to the maximum eigenvalue is taken as the normal vector of the point P. As an example, the k neighboring points may include 15 to 30 neighboring points.
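As an illustration, the following is a minimal sketch of PCA-based normal estimation built around the covariance matrix of formula (1). It follows the common convention of taking the eigenvector of the smallest eigenvalue (the direction of least variance) as the normal; this eigenvalue choice and the neighbor count are assumptions of the sketch, not the exact procedure stated above.

```python
import numpy as np

def pca_normal(neighbors: np.ndarray) -> np.ndarray:
    """Estimate a surface normal from a (k, 3) array of neighboring points."""
    u = neighbors.mean(axis=0)                      # mean of the k coordinates
    d = neighbors - u
    C = d.T @ d / len(neighbors)                    # 3x3 covariance matrix, formula (1)
    eigenvalues, eigenvectors = np.linalg.eigh(C)   # eigenvalues in ascending order
    return eigenvectors[:, 0]                       # direction of least variance

# Example: points scattered on the z = 0 plane give a normal close to (0, 0, 1).
pts = np.random.rand(20, 3)
pts[:, 2] = 0.0
print(pca_normal(pts))
```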
S403: Fit a plane by using the k neighboring points of the target point P.
Plane fitting can be done in multiple ways, for example, least-squares plane fitting. After the fitted plane is obtained, the normal vector N2 of the fitted plane can be calculated.
S404: From the k neighboring points, select z points whose distance to the fitted plane is smaller than a predetermined threshold, where z < k.
The size of the foregoing predetermined threshold may be determined according to practice. As an example, the foregoing z points may account for 60% to 80% of the k neighboring points.
S405: For the z points selected in S404, determine the angle between the rough normal vector N1 of each point calculated in S402 and the normal vector N2 of the fitted plane, and further determine, among the z points, s points whose angle is smaller than a predetermined threshold, where s < z; the s points obtained after screening form the point set Q.
The size of the foregoing predetermined threshold may be determined according to practice. As an example, the foregoing s points may account for 60% to 80% of the z neighboring points.
S406: Based on the point set Q, use PCA again to calculate the normal vector of the target point P.
Specifically, the normal vector of the target point P can be recalculated according to formula (1), taking the s points in the point set Q as the neighboring points.
However, in the method of Fig. 4, calculating the normal vector of each point in the three-dimensional point cloud data requires searching for multiple neighboring points, for example 15 to 30 points, which is time-consuming. Moreover, for a sparse point cloud, these neighboring points are very likely not to lie in the same plane, resulting in a large estimation error of the normal vector. In addition, PCA involves matrix calculation, and the overhead is also large.
This application provides a method for processing three-dimensional point cloud data. The method uses the neighboring points of a target point in an ordered point cloud array to calculate the normal vector of each point in the three-dimensional point cloud. The method uses the arrangement characteristics of the points of the ordered point cloud array, simplifies the complexity of calculating the normal vector, and improves the real-time performance and calculation accuracy of processing three-dimensional point cloud data.
The method for processing three-dimensional point cloud data according to an embodiment of this application is described in detail below with reference to the accompanying drawings. Fig. 5 is a schematic flowchart of a method for processing three-dimensional point cloud data according to an embodiment of this application. As shown in Fig. 5, the method includes the following steps.
S501: Acquire three-dimensional point cloud data obtained by a sensor scanning a target scene.
For example, in an in-vehicle system, the sensor may be a lidar. The three-dimensional point cloud data acquired by the sensor may be unordered point cloud data or an ordered point cloud array.
S502: Determine an ordered point cloud array according to the three-dimensional point cloud data, where the ordered point cloud array includes a plurality of points.
The plurality of points in the ordered point cloud array are arranged in the same way as in the real three-dimensional space, and each point includes three-dimensional spatial coordinate information.
If the three-dimensional point cloud data obtained from the sensor is unordered point cloud data, the unordered point cloud data can be converted into an ordered point cloud array. For example, if the sensor is a lidar, the horizontal and vertical distribution rates of the lidar can be used to convert the unordered point cloud into an ordered point cloud: the row direction can be arranged in the order of the laser transmitters of the lidar, and the column direction can be arranged in the scanning order of the lidar, to obtain a two-dimensional ordered data structure. For example, for a 32-line radar that returns data of 2000 points when scanning 360°, the ordered point cloud of the 32-line lidar is a 32×2000 structure.
As an example, FIG. 6 shows the coordinate system of the sensor according to an embodiment of this application. The coordinate system includes mutually perpendicular X, Y, and Z axes, and O denotes the coordinate origin. The distance between a point P(x, y, z) in the three-dimensional point cloud and the sensor origin is denoted R. If the three-dimensional point cloud data input by the sensor is unordered point cloud data, formulas (2)–(5) can be used to sort the point cloud into an ordered point cloud array. Specifically, the row coordinate and column coordinate of the target point P can be computed according to these formulas, where the row and column coordinates of the target point P can be denoted (row, col).
ω = arctan(z / sqrt(x² + y²))     (2)
α = arctan(x / y)     (3)
where ω denotes the angle between point P and the XOY plane, and α denotes the angle between the projection of point P on the XOY plane and the Y axis.
Assuming that the horizontal scanning angular resolution of the sensor is dx and the vertical angular resolution is dy, the row and column coordinates of point P(x, y, z) in the ordered point cloud are given by the following formulas:
row = ω / dy     (4)
col = α / dx     (5)
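A minimal sketch of the projection in formulas (2)–(5) is given below, for illustration only and under stated assumptions: numpy is available, dx and dy are the horizontal and vertical angular resolutions in radians, rows and cols bound the array size, arctan2 is used in place of arctan to keep the full angular range, and the offsets that shift angles into non-negative indices are an added assumption not taken from the original text:

```python
import numpy as np

def to_ordered_array(points: np.ndarray, dx: float, dy: float,
                     rows: int, cols: int) -> np.ndarray:
    """Scatter an unordered (n, 3) point cloud into a rows x cols x 3 ordered array."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    omega = np.arctan2(z, np.sqrt(x**2 + y**2))   # formula (2): elevation angle
    alpha = np.arctan2(x, y)                      # formula (3): azimuth with respect to the Y axis
    row = np.clip((omega / dy).astype(int) + rows // 2, 0, rows - 1)  # formula (4)
    col = np.clip((alpha / dx).astype(int) + cols // 2, 0, cols - 1)  # formula (5)
    ordered = np.full((rows, cols, 3), np.nan)    # NaN marks cells with no laser return
    ordered[row, col] = points
    return ordered
```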
As an example, FIG. 7 is a schematic diagram of an ordered point cloud array according to an embodiment of this application. As shown in FIG. 7, each black dot represents a point in three-dimensional space, and each point contains three-dimensional spatial coordinate information. It should be noted that the row and column coordinates of the points in the ordered point cloud array describe the storage structure of the array; the points themselves do not contain row or column coordinate information.
Optionally, if the three-dimensional point cloud data obtained from the sensor is already an ordered point cloud array, the three-dimensional point cloud data can be used directly as the ordered point cloud array for the subsequent steps.
S503. Determine a plurality of first neighboring points of a target point P in the ordered point cloud array, where the target point P is any one of the plurality of points in the ordered point cloud array.
The neighboring points of the target point P are its neighboring points in the ordered point cloud array; therefore, what is considered when selecting neighboring points is the relative position of the target point and the neighboring points in the ordered point cloud array, that is, their row and column coordinate information in the array. When selecting the neighboring points used to construct non-parallel vectors, points adjacent to the target point P in the ordered point cloud array are chosen, so that the plane constructed from the two non-parallel vectors can more accurately approximate the local tangent plane of the target point P in space.
S504. Determine the normal vector of the target point according to the plurality of first neighboring points.
Optionally, determining the normal vector of the target point according to the plurality of first neighboring points can be understood as approximately taking the plane formed by the target point P and its neighboring points as the tangent plane of the space where the target point P is located, and taking the normal vector of this plane as the normal vector of the target point P. After the neighboring points of the target point are determined, the normal vector of the target point P can be computed in multiple specific ways, which are described below with examples.
In this embodiment of this application, the three-dimensional point cloud data acquired by the sensor is converted into an ordered point cloud array, the neighboring points of the target point are selected in the ordered point cloud array, and the normal vector of the target point P is then computed from the selected neighboring points. Because the neighboring points are selected in the ordered point cloud array, the position information of the neighboring points in the ordered point cloud space is taken into account when computing the normal vector of the target point P, and the constructed plane can more accurately approximate the local tangent plane of the target point P in space, so that the computed normal vector of the target point P is more accurate.
In addition, in this embodiment of this application, the neighboring points of the target point are selected in the ordered point cloud array, without having to filter neighboring points by computing the distance between each point in the point cloud data and the target point one by one, which reduces the computational complexity and improves the real-time performance and accuracy of processing three-dimensional point cloud data.
Optionally, the normal vector of the target point can be used to analyze the features of the target scene. For example, feature information of the target scene can be determined according to the normal vector of the target point P, and spatial features of the target scene can then be analyzed according to the feature information of the target scene.
Optionally, two non-parallel vectors can be constructed using the target point and its neighboring points, and the normal vector of the tangent plane formed by these two non-parallel vectors can then be computed. In one example, determining the normal vector of the target point according to the plurality of first neighboring points includes: obtaining a non-parallel first vector and second vector according to the plurality of first neighboring points; and determining the normal vector of the target point according to the cross product of the first vector and the second vector.
The target point P may or may not participate in constructing the two non-parallel vectors, and the two non-parallel vectors may be constructed from three points or from four points.
For example, FIG. 8 shows several ways of constructing non-parallel vectors according to embodiments of this application. As shown in (a) of FIG. 8, the two non-parallel vectors may be formed by the target point P and two neighboring points, that is, two non-parallel vectors are constructed from these three points. Specifically, there are multiple ways to construct two vectors from three points, which is not limited in this embodiment of this application, as long as the two constructed vectors satisfy the non-parallel condition.
As shown in (b) of FIG. 8, the two non-parallel vectors may be formed by three neighboring points of the target point P, that is, without including the target point itself.
As shown in (c) of FIG. 8, the two non-parallel vectors may be formed by the target point P and three of its neighboring points, that is, two non-parallel vectors are constructed from four points. Specifically, there are multiple ways to construct two vectors from four points, which is not limited in this embodiment of this application, as long as the two constructed vectors satisfy the non-parallel condition.
As shown in (d) of FIG. 8, the two non-parallel vectors may be formed by four neighboring points of the target point P, that is, without including the target point P itself.
It should be noted that the points in FIG. 8 are points in the ordered point cloud array, and the neighboring points of the target point are selected in the ordered point cloud array. When constructing the vectors, however, the three-dimensional spatial coordinates of the target point P or the neighboring points are used, rather than the row and column coordinates of the points. For example, the two vectors in (c) of FIG. 8 are constructed from three-dimensional spatial coordinates; therefore, these two vectors constructed from three-dimensional spatial coordinates are non-parallel vectors.
For example, taking (d) of FIG. 8 as an example, the three-dimensional spatial coordinates of the four selected neighboring points of the target point P are denoted P1(x1, y1, z1), P2(x2, y2, z2), P3(x3, y3, z3), and P4(x4, y4, z4). These four neighboring points can form two non-parallel vectors v1 and v2, expressed as v1(x1−x2, y1−y2, z1−z2) and v2(x3−x4, y3−y4, z3−z4).
For example, taking the cross product of the two non-parallel vectors v1 and v2 gives the normal vector of the plane constructed from v1 and v2, expressed as follows:
Normal = v1 × v2 = ((y1−y2)(z3−z4) − (z1−z2)(y3−y4)) · i + ((z1−z2)(x3−x4) − (x1−x2)(z3−z4)) · j + ((x1−x2)(y3−y4) − (y1−y2)(x3−x4)) · k
where Normal denotes the normal vector, and i, j, and k denote the components along the three directions of the X, Y, and Z axes, respectively.
Optionally, when computing the normal vector, the direction of the cross product of the two vectors can be determined by the right-hand rule, so that the normal vectors of all points have a consistent direction under the same viewing angle.
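The cross-product computation above might look like the following sketch, given for illustration only and assuming numpy; p1 to p4 are the coordinates of the four neighboring points of FIG. 8(d), and orienting every normal toward the sensor origin is an added assumption standing in for the right-hand-rule convention mentioned above:

```python
import numpy as np

def cross_product_normal(p1, p2, p3, p4, sensor_origin=np.zeros(3)):
    """Normal of the plane spanned by v1 = P1 - P2 and v2 = P3 - P4."""
    v1 = np.asarray(p1) - np.asarray(p2)
    v2 = np.asarray(p3) - np.asarray(p4)
    n = np.cross(v1, v2)                 # i, j, k components of Normal
    n = n / np.linalg.norm(n)
    # Flip so every normal points toward the sensor, giving a consistent viewing direction.
    if np.dot(n, sensor_origin - np.asarray(p1)) < 0:
        n = -n
    return n
```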
Optionally, in S504, the normal vector obtained from the first vector and the second vector can be directly determined as the normal vector of the target point P. Alternatively, to obtain a more accurate normal vector, different neighboring points around the target point P can be reselected, the normal vector can be computed multiple times, and the normal vectors computed multiple times can be weighted and averaged to obtain the normal vector of the target point P.
For example, the method further includes: determining a plurality of second neighboring points of the target point in the ordered point cloud array, where the plurality of second neighboring points are not identical to the plurality of first neighboring points; and obtaining a non-parallel third vector and fourth vector according to the plurality of second neighboring points. Determining the normal vector of the target point according to the cross product of the first vector and the second vector includes: determining the normal vector of the target point according to the cross product of the first vector and the second vector and the cross product of the third vector and the fourth vector.
Specifically, the statement that the plurality of second neighboring points are different from, or not identical to, the plurality of first neighboring points covers both the case where at least one of the second neighboring points differs from the first neighboring points and the case where the second neighboring points have no intersection with the first neighboring points.
In this embodiment of this application, the three-dimensional point cloud data acquired by the sensor is converted into an ordered point cloud array, the neighboring points of the target point are selected in the ordered point cloud array to construct two non-parallel vectors, and the normal vector of the target point P is then computed from the plane constructed by the non-parallel vectors. Because the neighboring points are selected in the ordered point cloud array, the position information of the neighboring points in the ordered point cloud space is taken into account when computing the normal vector of the target point P, and the constructed plane can more accurately approximate the local tangent plane of the target point P in space, so that the computed normal vector of the target point P is more accurate.
In this application, the normal vector of the target point P can be computed by searching for as few as two neighboring points, which greatly reduces the computational complexity of processing three-dimensional point cloud data and saves computing resources.
Optionally, for input dense point cloud data, that is, a dense point cloud, the point cloud data can be downsampled to the required resolution before the normal vectors are computed.
Generally, the spatial resolution of a sensor differs at different positions in space: the farther the scanned object is from the sensor, the lower its resolution. For example, assuming that the distance from the target point P to the sensor origin is d and the horizontal scanning resolution of the sensor is alpha, the horizontal spatial resolution res at the target point P can be expressed by formula (6):
res = d * alpha     (6)
In formula (6), a larger value of alpha indicates a lower angular resolution, and a larger value of res indicates a lower spatial resolution.
When selecting the neighboring points of the target point P, to make the selected neighboring points closer to the tangent plane of the target point P, the distance between a selected neighboring point and the target point P in the ordered point cloud array should be less than a search step, so that the accuracy of the normal vector of the target point P can be guaranteed. Because the distance d between the sensor origin and the position of the target point P in the target scene differs, the spatial resolution also differs; therefore, the search step can be determined according to the distance d. For example, the smaller the distance d, the higher the spatial resolution with which the sensor scans the target point P, and the closer the neighboring points of the target point P are to it in the real three-dimensional scene; a larger search step can then be chosen, that is, more candidate neighboring points are available for the target point P.
For example, if the search step is denoted step, it can be expressed by formula (7):
step = k / res + b     (7)
where k and b are linear parameters, and res denotes the horizontal spatial resolution at the target point P.
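A sketch of formulas (6) and (7) follows, for illustration only; k_lin, b, and alpha are tuning parameters that would be chosen in practice, and the clamp to at least one array cell is an added assumption:

```python
import numpy as np

def search_step(point: np.ndarray, alpha: float, k_lin: float, b: float) -> float:
    """Distance-adaptive search step in the ordered array, per formulas (6) and (7)."""
    d = np.linalg.norm(point)   # distance from the sensor origin to the target point P
    res = d * alpha             # formula (6): horizontal spatial resolution at P
    step = k_lin / res + b      # formula (7): larger step for nearby, finely sampled points
    return max(step, 1.0)       # never search less than one array cell
```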
In this embodiment of this application, the distance information of the neighboring points in the ordered point cloud space is therefore taken into account when computing the normal vector of the target point P: points whose distance from the target point is within the search step are selected as neighboring points, which ensures that the selected neighboring points are also sufficiently close to the target point in real three-dimensional space, so that the computed normal vector of the target point P is more accurate.
In some scenarios, even among the neighboring points determined by the search step there may still be neighboring points whose three-dimensional spatial distance from the target point exceeds a preset threshold, which affects the accuracy of the finally determined normal vector. To reduce such cases, after the neighboring points filtered by the search step are determined, the three-dimensional spatial distance between each filtered neighboring point and the target point can be computed, neighboring points whose three-dimensional spatial distance exceeds the preset threshold can be excluded, and the remaining neighboring points can then be used to determine the normal vector of the target point P. This approach improves the accuracy of the normal vector computation while avoiding the increase in computational complexity caused by traversing all points in the three-dimensional point cloud data.
As an example, FIG. 9 is a schematic diagram of an ordered point cloud array according to an embodiment of this application. If four neighboring points need to be selected, points P_R and P_L to the left and right of the target point P can be selected along the row direction of the target point P according to the search step, where the distances between P_R, P_L and the target point P are each less than the search step. Similarly, points P_U and P_D can be selected in the rows above and below the target point P, where the distances between P_U, P_D and the target point P are each less than the search step. Here, the distance between a neighboring point and the target point P refers to their distance in the ordered point cloud array, not the distance expressed by three-dimensional spatial coordinates. For example, assume that the unit distance between two adjacent points in the same row or the same column of the ordered point cloud array is identical and denoted a. If the search step is 1.5a, the neighboring points can be selected from among the eight points immediately adjacent to the target point P in the row above, the row below, and the same row.
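The neighbor selection of FIG. 9 might be sketched as follows, for illustration only; ordered is the rows x cols x 3 array from the earlier sketch, NaN entries mark empty cells, and the optional three-dimensional distance check is one way to realize the threshold-based exclusion described above:

```python
import numpy as np

def pick_neighbors(ordered: np.ndarray, r: int, c: int,
                   step: float, max_dist_3d: float = np.inf):
    """Return the ordered-array neighbors of the point at (r, c) within the search step."""
    p = ordered[r, c]
    neighbors = []
    radius = int(step)                      # array-distance radius, e.g. 1 for step = 1.5a
    rows, cols = ordered.shape[:2]
    for dr in range(-radius, radius + 1):
        for dc in range(-radius, radius + 1):
            if dr == 0 and dc == 0:
                continue
            rr, cc = r + dr, c + dc
            if 0 <= rr < rows and 0 <= cc < cols and not np.isnan(ordered[rr, cc]).any():
                q = ordered[rr, cc]
                if np.linalg.norm(q - p) <= max_dist_3d:   # optional 3-D distance filter
                    neighbors.append(q)
    return neighbors
```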
In this embodiment of this application, the search step can be set dynamically according to the distance d between the sensor origin and the position of the target point in the target scene, and the search step is used to select the neighboring points of the target point P in the ordered point cloud array. By dynamically setting the search step, the range for selecting neighboring points can be determined according to the spatial resolution, so that the plane fitted by the selected neighboring points is closer to the local tangent plane of the target point P in space, and the computed normal vector of the target point P is more accurate.
FIG. 10 is a schematic flowchart of computing the normal vectors of three-dimensional point cloud data according to an embodiment of this application. As shown in FIG. 10, the method includes:
S1001. Obtain three-dimensional point cloud data.
For example, three-dimensional point cloud data scanned by a lidar can be obtained.
S1002. If the three-dimensional point cloud data is a dense point cloud, downsample the three-dimensional point cloud data to obtain downsampled three-dimensional point cloud data. If the three-dimensional point cloud data is a sparse point cloud, no downsampling is required.
For example, for an N*M image with a downsampling factor of x, one point is taken every x points in each row and each column of the original image to form a new image. Alternatively, grid sampling can be used, which collapses the points within one grid cell into a single point to achieve downsampling, where N, M, and x are all integers greater than or equal to 1.
S1003. If the three-dimensional point cloud data is an unordered point cloud, process the unordered point cloud to obtain an ordered point cloud array. If the three-dimensional point cloud data is already an ordered point cloud array, no processing is required.
For example, the horizontal and vertical resolution of the lidar can be used to convert the unordered point cloud into an ordered point cloud: the row direction can be arranged in the order of the lidar's laser emitters, and the column direction can be sorted in the lidar's scanning order, yielding a two-dimensional ordered data structure. For example, for a 32-beam lidar that returns 2000 points per 360° scan, the ordered point cloud of the 32-beam lidar has a 32×2000 structure.
Alternatively, the unordered point cloud data can be converted into an ordered point cloud array according to formulas (2)–(5) described in the example of FIG. 6.
It should be noted that the above manners of converting to an ordered point cloud are merely examples; this embodiment of this application does not limit the manner of conversion, and the unordered point cloud data may also be converted into an ordered point cloud array in other manners.
S1004. Traverse each point in the ordered point cloud array and perform the subsequent steps for it, until the traversal is complete. As an example, in one computation pass, a target point P is selected.
S1005. Determine a search step according to the selected target point P.
Optionally, the search step can be determined dynamically according to the distance d between the sensor origin and the position in the target scene corresponding to the target point P. For the specific manner of determining the search step, refer to the foregoing description; details are not repeated here.
S1006. Select a plurality of neighboring points according to the search step.
For the manner of selecting neighboring points, refer to the foregoing description; details are not repeated here. According to the foregoing description, as few as two neighboring points need to be selected to compute the normal vector of the target point, so the number N of the plurality of neighboring points satisfies N ≥ 2. Optionally, to make the obtained normal vector of the target point sufficiently accurate, enough neighboring points can be selected to construct multiple pairs of non-parallel vectors.
S1007. Compute the normal vector of the target point according to the selected plurality of neighboring points.
Optionally, one or more pairs of non-parallel vectors can be constructed from the plurality of neighboring points, and one normal vector can be computed from each pair of non-parallel vectors. If only one normal vector is computed, it is determined as the normal vector of the target point. If multiple normal vectors are computed, they can be weighted and averaged to obtain the normal vector of the target point.
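Putting the steps together, one possible sketch of S1007 is shown below, for illustration only; it pairs the vectors through the target point as in FIG. 8(a), averages the per-pair unit normals with equal weights (one simple choice of the weighted average mentioned above), and orients the result toward the sensor origin as an added assumption:

```python
import numpy as np

def point_normal(p: np.ndarray, neighbors: list, sensor_origin=np.zeros(3)) -> np.ndarray:
    """Normal of target point p from one or more pairs of non-parallel vectors."""
    normals = []
    for i in range(0, len(neighbors) - 1, 2):
        q1, q2 = neighbors[i], neighbors[i + 1]
        v1, v2 = q1 - p, q2 - p               # FIG. 8(a): vectors through the target point
        n = np.cross(v1, v2)
        if np.linalg.norm(n) < 1e-9:          # skip (nearly) parallel pairs
            continue
        n = n / np.linalg.norm(n)
        if np.dot(n, sensor_origin - p) < 0:  # consistent orientation toward the sensor
            n = -n
        normals.append(n)
    if not normals:
        return np.zeros(3)
    n = np.mean(normals, axis=0)              # equal-weight average of the per-pair normals
    return n / np.linalg.norm(n)
```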
The method for processing three-dimensional point cloud data provided in the embodiments of this application has been described in detail above with reference to FIG. 1 to FIG. 10. Embodiments of the apparatus of this application are described in detail below with reference to FIG. 11 and FIG. 12. It should be understood that the descriptions of the method embodiments and of the apparatus embodiments correspond to each other; therefore, for parts not described in detail, reference may be made to the foregoing method embodiments.
FIG. 11 is a schematic structural diagram of a computing device 1100 provided by an embodiment of this application. The computing device 1100 is connected to a sensor and includes:
an obtaining module 1110, configured to obtain three-dimensional point cloud data of a target scene scanned by the sensor; and
a determining module 1120, configured to determine an ordered point cloud array according to the three-dimensional point cloud data, where the ordered point cloud array includes a plurality of points. The determining module 1120 is further configured to determine a plurality of first neighboring points of a target point in the ordered point cloud array, where the target point is any point in the ordered point cloud array. The determining module 1120 is further configured to determine the normal vector of the target point according to the plurality of first neighboring points.
In a possible implementation, the determining module 1120 is specifically configured to obtain a non-parallel first vector and second vector according to the plurality of first neighboring points, and determine the normal vector of the target point according to the cross product of the first vector and the second vector.
In a possible implementation, the determining module 1120 is specifically configured to obtain the first vector and the second vector from two of the first neighboring points and the target point; or obtain the first vector and the second vector from three of the first neighboring points; or obtain the first vector and the second vector from three of the first neighboring points and the target point; or obtain the first vector and the second vector from four of the first neighboring points.
In a possible implementation, the determining module 1120 is further configured to determine a plurality of second neighboring points of the target point in the ordered point cloud array, where the plurality of second neighboring points are not identical to the plurality of first neighboring points; obtain a non-parallel third vector and fourth vector according to the plurality of second neighboring points; and specifically determine the normal vector of the target point according to the cross product of the first vector and the second vector and the cross product of the third vector and the fourth vector.
In a possible implementation, the three-dimensional point cloud data is dense point cloud data, and the determining module 1120 is specifically configured to downsample the three-dimensional point cloud data to obtain downsampled three-dimensional point cloud data, and convert the downsampled three-dimensional point cloud data to obtain the ordered point cloud array.
In a possible implementation, the distance between any one of the plurality of first neighboring points and the target point in the ordered point cloud array is less than a search step.
In a possible implementation, the determining module 1120 is further configured to determine the search step according to the distance between the origin of the sensor and the position of the target point in the target scene.
It should be understood that the computing device 1100 provided in this embodiment of this application may be implemented by an application-specific integrated circuit (ASIC) or a programmable logic device (PLD), where the PLD may be a complex programmable logical device (CPLD), a field-programmable gate array (FPGA), generic array logic (GAL), or any combination thereof. Alternatively, the method for processing three-dimensional point cloud data in the embodiments of this application may be implemented by software, in which case the modules of the computing device 1100 may also be software modules.
The computing device 1100 according to this embodiment of this application may correspondingly perform the methods described in the embodiments of this application, and the foregoing and other operations and/or functions of the modules of the computing device 1100 are respectively intended to implement the corresponding procedures of the methods in FIG. 5 and FIG. 10 of this application. For brevity, details are not repeated here.
FIG. 12 is a schematic structural diagram of a computing device 1200 provided by an embodiment of this application. The computing device 1200 includes a processor 1210, a memory 1220, a communication interface 1230, and a bus 1240.
It should be understood that the processor 1210 in the computing device 1200 shown in FIG. 12 may correspond to the determining module 1120 of the computing device 1100 in FIG. 11, and the communication interface 1230 in the computing device 1200 may correspond to the obtaining module 1110 in FIG. 11.
The processor 1210 may be connected to the memory 1220. The memory 1220 may be configured to store program code and data. Therefore, the memory 1220 may be a storage unit inside the processor 1210, an external storage unit independent of the processor 1210, or a component including both a storage unit inside the processor 1210 and an external storage unit independent of the processor 1210.
Optionally, the computing device 1200 may further include a bus 1240, and the memory 1220 and the communication interface 1230 may be connected to the processor 1210 via the bus 1240. The bus 1240 may be a peripheral component interconnect (PCI) bus, an extended industry standard architecture (EISA) bus, or the like. The bus 1240 may be divided into an address bus, a data bus, a control bus, and the like. For ease of representation, only one line is used in FIG. 12, but this does not mean that there is only one bus or one type of bus.
It should be understood that, in this embodiment of this application, the processor 1210 may be a central processing unit (CPU). The processor may also be another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or another programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. The general-purpose processor may be a microprocessor, or the processor may be any conventional processor. Alternatively, the processor 1210 may use one or more integrated circuits to execute related programs, so as to implement the technical solutions provided in the embodiments of this application.
The memory 1220 may include a read-only memory and a random access memory, and provides instructions and data to the processor 1210. A part of the processor 1210 may further include a nonvolatile random access memory. For example, the processor 1210 may also store information about the device type.
When the computing device 1200 runs, the processor 1210 executes the computer-executable instructions in the memory 1220 to perform the operation steps of the foregoing methods.
It should be understood that the computing device 1200 according to this embodiment of this application may correspond to the corresponding execution body of the method embodiments in FIG. 5 and FIG. 10 according to the embodiments of this application, and the foregoing and other operations and/or functions of the modules of the computing device 1200 are respectively intended to implement the corresponding procedures of the method embodiments in FIG. 5 and FIG. 10. For brevity, details are not repeated here.
A person of ordinary skill in the art may be aware that the units and algorithm steps of the examples described with reference to the embodiments disclosed in this specification can be implemented by electronic hardware or a combination of computer software and electronic hardware. Whether these functions are performed by hardware or software depends on the specific application and design constraints of the technical solution. A person skilled in the art may use different methods to implement the described functions for each specific application, but such implementations should not be considered beyond the scope of this application.
A person skilled in the art can clearly understand that, for the convenience and brevity of description, for the specific working processes of the systems, apparatuses, and units described above, reference may be made to the corresponding processes in the foregoing method embodiments; details are not repeated here.
In the several embodiments provided in this application, it should be understood that the disclosed systems, apparatuses, and methods may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative; for example, the division of the units is merely a logical functional division, and there may be other divisions in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not performed. In addition, the mutual couplings or direct couplings or communication connections shown or discussed may be indirect couplings or communication connections through some interfaces, apparatuses, or units, and may be in electrical, mechanical, or other forms.
The units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; they may be located in one place or distributed across multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
In addition, the functional units in the embodiments of this application may be integrated into one processing unit, or each unit may exist separately physically, or two or more units may be integrated into one unit.
If the functions are implemented in the form of software functional units and sold or used as independent products, they may be stored in a computer-readable storage medium. Based on such an understanding, the technical solutions of this application essentially, or the part contributing to the prior art, or a part of the technical solutions may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for instructing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or some of the steps of the methods described in the embodiments of this application. The foregoing storage medium includes various media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
The foregoing descriptions are merely specific implementations of this application, but the protection scope of this application is not limited thereto. Any variation or replacement readily figured out by a person skilled in the art within the technical scope disclosed in this application shall fall within the protection scope of this application. Therefore, the protection scope of this application shall be subject to the protection scope of the claims.

Claims (15)

  1. A method for processing three-dimensional point cloud data, used in a computing device, wherein the computing device is connected to a sensor, and the method comprises:
    obtaining three-dimensional point cloud data obtained by the sensor scanning a target scene;
    determining an ordered point cloud array according to the three-dimensional point cloud data, wherein the ordered point cloud array comprises a plurality of points;
    determining a plurality of first neighboring points of a target point in the ordered point cloud array, wherein the target point is any point in the ordered point cloud array; and
    determining a normal vector of the target point according to the plurality of first neighboring points.
  2. The method according to claim 1, wherein the determining a normal vector of the target point according to the plurality of first neighboring points comprises:
    obtaining a non-parallel first vector and second vector according to the plurality of first neighboring points; and
    determining the normal vector of the target point according to a cross product of the first vector and the second vector.
  3. The method according to claim 2, wherein the obtaining a non-parallel first vector and second vector according to the plurality of first neighboring points comprises:
    obtaining the first vector and the second vector from two of the first neighboring points and the target point; or
    obtaining the first vector and the second vector from three of the first neighboring points; or
    obtaining the first vector and the second vector from three of the first neighboring points and the target point; or
    obtaining the first vector and the second vector from four of the first neighboring points.
  4. The method according to claim 2 or 3, further comprising:
    determining a plurality of second neighboring points of the target point in the ordered point cloud array, wherein the plurality of second neighboring points are different from the plurality of first neighboring points; and
    obtaining a non-parallel third vector and fourth vector according to the plurality of second neighboring points;
    wherein the determining the normal vector of the target point according to the cross product of the first vector and the second vector comprises:
    determining the normal vector of the target point according to the cross product of the first vector and the second vector and a cross product of the third vector and the fourth vector.
  5. The method according to any one of claims 1 to 4, wherein the three-dimensional point cloud data is dense point cloud data, and the determining an ordered point cloud array according to the three-dimensional point cloud data comprises:
    downsampling the three-dimensional point cloud data to obtain downsampled three-dimensional point cloud data; and
    converting the downsampled three-dimensional point cloud data to obtain the ordered point cloud array.
  6. The method according to any one of claims 1 to 5, wherein a distance between any one of the plurality of first neighboring points and the target point in the ordered point cloud array is less than a search step.
  7. The method according to claim 6, further comprising:
    determining the search step according to a distance between the sensor and the target point in the target scene.
  8. A computing device for processing three-dimensional point cloud data, wherein the computing device is connected to a sensor, and the computing device comprises:
    an obtaining module, configured to obtain three-dimensional point cloud data obtained by the sensor scanning a target scene; and
    a determining module, configured to determine an ordered point cloud array according to the three-dimensional point cloud data, wherein the ordered point cloud array comprises a plurality of points;
    wherein the determining module is further configured to determine a plurality of first neighboring points of a target point in the ordered point cloud array, wherein the target point is any point in the ordered point cloud array; and
    the determining module is further configured to determine a normal vector of the target point according to the plurality of first neighboring points.
  9. The computing device according to claim 8, wherein the determining module is configured to:
    obtain a non-parallel first vector and second vector according to the plurality of first neighboring points; and
    determine the normal vector of the target point according to a cross product of the first vector and the second vector.
  10. The computing device according to claim 9, wherein the determining module is configured to:
    obtain the first vector and the second vector from two of the first neighboring points and the target point; or
    obtain the first vector and the second vector from three of the first neighboring points; or
    obtain the first vector and the second vector from three of the first neighboring points and the target point; or
    obtain the first vector and the second vector from four of the first neighboring points.
  11. The computing device according to claim 9 or 10, wherein the determining module is further configured to:
    determine a plurality of second neighboring points of the target point in the ordered point cloud array, wherein the plurality of second neighboring points are different from the plurality of first neighboring points;
    obtain a non-parallel third vector and fourth vector according to the plurality of second neighboring points; and
    determine the normal vector of the target point according to the cross product of the first vector and the second vector and a cross product of the third vector and the fourth vector.
  12. The computing device according to any one of claims 8 to 11, wherein the three-dimensional point cloud data is dense point cloud data, and the determining module is configured to:
    downsample the three-dimensional point cloud data to obtain downsampled three-dimensional point cloud data; and
    convert the downsampled three-dimensional point cloud data to obtain the ordered point cloud array.
  13. The computing device according to any one of claims 8 to 12, wherein a distance between any one of the plurality of first neighboring points and the target point in the ordered point cloud array is less than a search step.
  14. The computing device according to claim 13, wherein the determining module is further configured to:
    determine the search step according to a distance between the sensor and the target point in the target scene.
  15. A computing device, comprising:
    a memory, configured to store computer instructions; and
    a processor, configured to invoke the computer instructions from the memory, wherein when the processor executes the computer instructions, the computing device is caused to perform the method according to any one of claims 1 to 7.