WO2022141116A1 - Three-dimensional point cloud segmentation method and apparatus, and movable platform - Google Patents

Three-dimensional point cloud segmentation method and apparatus, and movable platform

Info

Publication number
WO2022141116A1
Authority
WO
WIPO (PCT)
Prior art keywords
dimensional
point cloud
point
dimensional point
points
Prior art date
Application number
PCT/CN2020/141076
Other languages
English (en)
French (fr)
Inventor
李星河
郑杨杨
刘晓洋
Original Assignee
深圳市大疆创新科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳市大疆创新科技有限公司 filed Critical 深圳市大疆创新科技有限公司
Priority to PCT/CN2020/141076 priority Critical patent/WO2022141116A1/zh
Priority to CN202080070568.4A priority patent/CN114556442A/zh
Publication of WO2022141116A1 publication Critical patent/WO2022141116A1/zh


Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/26 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
    • G01C21/34 Route searching; Route guidance
    • G01C21/36 Input/output arrangements for on-board computers
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00 Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/02 Systems using the reflection of electromagnetic waves other than radio waves
    • G01S17/50 Systems of measurement based on relative movement of target
    • G01S17/58 Velocity or trajectory determination systems; Sense-of-movement determination systems
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/10 Pre-processing; Data cleansing
    • G06F18/15 Statistical pre-processing, e.g. techniques for normalisation or restoring missing data
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/26 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle

Definitions

  • the present disclosure relates to the technical field of computer vision, and in particular, to a three-dimensional point cloud segmentation method and device, and a movable platform.
  • a path planning module on the movable platform can perform decision planning on the traveling state (eg, pose and speed) of the movable platform.
  • the point cloud acquisition device on the movable platform needs to collect a 3D point cloud of the surrounding environment and perform point cloud segmentation to distinguish the ground and obstacles in the 3D point cloud, and further to distinguish dynamic and static objects among the obstacles. Therefore, point cloud segmentation is an important part of decision planning for the driving state of the movable platform.
  • traditional point cloud segmentation methods are generally based either on local features or on a global model of the road surface on which the movable platform travels. Both methods place certain requirements on point cloud density, so 3D point clouds must be accumulated over a period of time. However, when segmenting point clouds of dynamic objects, both methods are prone to motion blur, which reduces the reliability of point cloud segmentation.
  • the embodiments of the present disclosure propose a three-dimensional point cloud segmentation method and device, and a movable platform, so as to reliably perform point cloud segmentation on a three-dimensional point cloud of a dynamic object.
  • a method for segmenting a 3D point cloud collected by a movable platform, the method comprising: determining, based on the scanning angle and scanning time at which a lidar on the movable platform collects the 3D point cloud, the 3D points in the 3D point cloud that are adjacent in physical space to a 3D point; obtaining the normal vector of the local area where the 3D point and its adjacent 3D points are located; and performing point cloud segmentation on the 3D point based on the normal vector.
  • a three-dimensional point cloud segmentation device including a processor, the device being configured to perform point cloud segmentation on a three-dimensional point cloud collected by a movable platform, the processor being configured to perform the following steps: determining, based on the scanning angle and scanning time at which the lidar on the movable platform collects the 3D point cloud, the 3D points adjacent in physical space to a 3D point in the 3D point cloud; obtaining the normal vector of the local area where the 3D point and its adjacent 3D points are located; and performing point cloud segmentation on the 3D point based on the normal vector.
  • a movable platform, comprising: a casing; a point cloud collection device, provided on the casing, for collecting a three-dimensional point cloud; and a three-dimensional point cloud segmentation device, provided inside the casing, for executing the method described in any embodiment of the present disclosure.
  • a computer-readable storage medium on which a computer program is stored, and when the program is executed by a processor, implements the method described in any of the embodiments of the present disclosure.
  • the adjacent 3D points in the physical space are determined based on the scanning angle and scanning time when the lidar collects the 3D point cloud, and the normal vector of the local area where the 3D point and its adjacent 3D points are located is determined.
  • through this point cloud segmentation, the natural geometric constraints generated by lidar scanning can be extracted and used as the basis for point cloud segmentation.
  • the geometric features between point clouds are utilized to improve the reliability of point cloud segmentation.
  • FIG. 1 is a schematic diagram of a point cloud segmentation process according to some embodiments.
  • FIG. 2 is a schematic diagram of a decision-making planning process during travel of a mobile platform according to some embodiments.
  • FIG. 3 is a flowchart of a point cloud segmentation method according to an embodiment of the present disclosure.
  • FIG. 4 is a schematic diagram of a point cloud arrangement manner according to an embodiment of the present disclosure.
  • FIG. 5A is a schematic diagram of a normal vector of a ground point according to an embodiment of the present disclosure.
  • FIG. 5B is a schematic diagram of a normal vector of a non-ground point according to an embodiment of the present disclosure.
  • FIG. 6A is a schematic diagram of a scanning manner of a radar according to an embodiment of the present disclosure.
  • FIG. 6B is a schematic diagram of an application scenario of an embodiment of the present disclosure.
  • FIG. 6C is a schematic diagram of a three-dimensional point cloud obtained by scanning in the application scenario shown in FIG. 6B .
  • FIG. 7 is an overall flowchart of a point cloud segmentation process according to an embodiment of the present disclosure.
  • FIG. 8 is a schematic diagram of a point cloud segmentation apparatus according to an embodiment of the present disclosure.
  • FIG. 9 is a schematic diagram of a movable platform according to an embodiment of the present disclosure.
  • although the terms first, second, third, etc. may be used in this disclosure to describe various pieces of information, such information should not be limited by these terms. These terms are only used to distinguish information of the same type from each other.
  • first information may also be referred to as the second information, and similarly, the second information may also be referred to as the first information, without departing from the scope of the present disclosure.
  • the word "if" as used herein can be interpreted as "at the time of", "when", or "in response to determining".
  • a path planning module on the movable platform can be used to make decision planning on the traveling state of the movable platform.
  • point cloud segmentation is an important part of decision-making planning for the driving state of the mobile platform.
  • FIG. 1 is a schematic diagram of a point cloud segmentation process in some embodiments.
  • a 3D point cloud can be collected by a point cloud collection device on the movable platform; then, in step 102, for a movable platform (such as an unmanned vehicle) traveling on the ground, the collected 3D point cloud can be divided into ground points and non-ground points.
  • more generally, the collected 3D point cloud can be segmented into 3D points on the road on which the movable platform is traveling and 3D points off that road.
  • the following description will be made by taking the driving road as the ground.
  • in step 103, if a 3D point is a ground point, step 104 is performed to add a ground point label to the 3D point; otherwise, step 105 is performed to carry out dynamic-static segmentation on the 3D point, that is, to segment the 3D point into static points that are stationary and dynamic points that are in motion.
  • in step 106, if a 3D point is a static point, step 107 is performed to add a static point label to the 3D point; otherwise, step 108 is performed to add a dynamic point label to the 3D point, and in step 109 the labeled 3D point cloud is output to downstream modules.
  • all or part of the three-dimensional points in the three-dimensional point cloud can be labeled.
  • the label may include at least one of a first label used to characterize whether the 3D point is a ground point and a second label used to characterize whether the 3D point is a static point, and may also include labels used to characterize other information about the 3D point.
  • the downstream module may be a planning module on the movable platform, such as an Electronic Control Unit (ECU) or a Central Processing Unit (CPU).
  • the planning module can make decision planning on the driving state of the movable platform based on the labels of the 3D points.
  • the driving state may include at least one of a pose and a speed of the movable platform.
  • FIG. 2 is a schematic diagram of a decision planning process according to some embodiments.
  • the planning module can receive the 3D point cloud and read the tags carried in the 3D point cloud.
  • in step 203, it may be determined based on the label whether a three-dimensional point in the three-dimensional point cloud is a point on the road (eg, the ground) on which the movable platform travels.
  • if so, step 204 is executed to identify the three-dimensional points belonging to the lane line from the ground points, and the pose of the movable platform is determined according to the direction of the lane line, so that the movable platform can travel along the direction of the lane line.
  • if not, step 205 is executed to determine whether the non-ground point is a static point. If yes, step 206 is executed to determine the pose of the movable platform according to the orientation of the static point.
  • if not, step 207 is executed to determine at least one of the pose and speed of the movable platform according to the orientation and speed of the dynamic point. For example, if the dynamic point is on the pre-planned travel path of the movable platform, and the moving speed of the dynamic point is less than or equal to the moving speed of the movable platform, the movable platform is controlled to slow down, or the pose of the movable platform is adjusted so that the movable platform bypasses the dynamic point.
  • the movable platform can be controlled to travel at the same speed as the dynamic point.
  • point cloud segmentation is an important part of decision-making and planning for the driving state of the mobile platform, and accurate point cloud segmentation is helpful for accurate decision-making and planning of the driving state of the mobile platform.
  • there are generally two traditional 3D point cloud segmentation methods. One is segmentation based on local features: a frame of 3D point cloud is first projected into a 2D grid map or a 3D voxel map, then one or several adjacent grids or voxels are taken as objects, features such as absolute height, relative height, coordinate variance, local normal vector, and principal component vector are extracted, and point cloud segmentation is performed based on these features.
  • the other is segmentation based on a global model: the Random Sample Consensus (RANSAC) method is used to select candidate points and perform multiple global fits of the road surface on which the movable platform travels, and after the optimal solution is found, the fitted road surface model is used to segment the point cloud.
  • both of the above two methods of point cloud segmentation are prone to motion blur, which reduces the reliability of point cloud segmentation.
  • the above two point cloud segmentation methods also have the following problems: the point cloud segmentation method based on local features needs to traverse the three-dimensional point cloud to build a map, which incurs large time and storage costs, and when the point cloud noise is large, the local features may fail locally.
  • the point cloud segmentation method based on the global model needs to use a low-complexity model to ensure that it is not biased by outliers, so it cannot adapt well to complex road environments and its segmentation accuracy is low.
  • the present disclosure provides a three-dimensional point cloud segmentation method, as shown in FIG. 3 , the method may include:
  • Step 301: Determine the 3D points adjacent in physical space to the 3D points in the 3D point cloud, based on the scanning angle and scanning time at which the lidar on the movable platform collects the 3D point cloud;
  • Step 302: Obtain the normal vector of the local area where the three-dimensional point and its adjacent three-dimensional points are located;
  • Step 303: Perform point cloud segmentation on the three-dimensional point based on the normal vector.
  • the present disclosure can segment the three-dimensional point cloud collected by the lidar into three-dimensional points on the road where the movable platform travels, and three-dimensional points outside the road where the mobile platform travels.
  • the present disclosure determines the 3D points that are adjacent in physical space based on the scanning angle and scanning time at which the lidar collects the 3D point cloud, and performs point cloud segmentation based on the normal vector of the local area where a 3D point and its adjacent 3D points are located. In this way, the natural geometric constraints generated by lidar scanning can be extracted as the basis for point cloud segmentation, and the geometric features between point clouds are utilized during the segmentation process, which improves the reliability of point cloud segmentation, especially for radars with significant motion blur.
  • the present disclosure does not need to make plane assumptions, and can segment relatively complex road surfaces; it does not need to build a grid map or a voxel map, nor does it need random sampling, so it is very lightweight: the segmentation result can be obtained by traversing the point cloud only once, and high-frequency segmentation results can be output without accumulating point clouds for a long time.
  • adjacent 3D points in physical space adjacent to the 3D points in the 3D point cloud may be determined.
  • the adjacent three-dimensional points of a three-dimensional point in physical space may include one or more three-dimensional points whose distances from the three-dimensional point in physical space are less than a predetermined distance threshold. Since the scanning angle of the lidar changes continuously while collecting the three-dimensional point cloud, the adjacent three-dimensional points can be determined based on the scanning angle and scanning time at which the lidar collects the three-dimensional point cloud. For example, if the difference between the scanning angles at which the lidar scans three-dimensional point A and three-dimensional point B is smaller than a preset angle difference, and the difference between their scanning times is smaller than a preset time length, then three-dimensional point A is considered an adjacent three-dimensional point of three-dimensional point B. If the difference between the scanning angles of three-dimensional point A and three-dimensional point B is greater than or equal to the preset angle difference, or the difference between their scanning times is greater than or equal to the preset time length, then three-dimensional point A is not considered an adjacent three-dimensional point of three-dimensional point B.
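  • as a non-limiting illustrative sketch (not part of the original disclosure), the adjacency rule above could be expressed as follows; the threshold values are assumptions, since the disclosure only refers to a "preset angle difference" and a "preset time length":

```python
# Hypothetical thresholds; the disclosure does not give concrete values.
MAX_ANGLE_DIFF_DEG = 0.5   # assumed preset angle difference
MAX_TIME_DIFF_S = 0.001    # assumed preset time length

def are_adjacent(angle_a_deg, time_a_s, angle_b_deg, time_b_s,
                 max_angle=MAX_ANGLE_DIFF_DEG, max_dt=MAX_TIME_DIFF_S):
    """Two scanned 3D points are treated as neighbors in physical space
    only if both their scan-angle difference and their scan-time
    difference fall below the preset thresholds."""
    return (abs(angle_a_deg - angle_b_deg) < max_angle
            and abs(time_a_s - time_b_s) < max_dt)
```

  • in this sketch, a point pair fails the test as soon as either the angle gap or the time gap is too large, matching the "or" condition of the negative case described above.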
  • the 3D point cloud can be acquired by various lidars.
  • the three-dimensional point cloud can be acquired by a single-line lidar.
  • the three-dimensional points adjacent to a three-dimensional point include at least one three-dimensional point continuously collected while the single-line lidar scans along a first direction, and at least one three-dimensional point continuously collected while the single-line lidar scans along a second direction.
  • the first direction is different from the second direction.
  • the first direction is perpendicular to the second direction.
  • the first direction may be a direction perpendicular to the travel road surface of the movable platform
  • the second direction may be a direction perpendicular to both the first direction and the travel direction of the movable platform.
  • the first direction may be a vertical direction
  • the second direction may be a direction perpendicular to the running direction of the vehicle on the level ground.
  • the first direction may be a direction perpendicular to the glass plane (generally a horizontal direction)
  • the second direction may be a direction on the glass plane that is perpendicular to the driving direction of the robot.
  • the three-dimensional point cloud may be acquired by a multi-line lidar.
  • the three-dimensional points adjacent to the three-dimensional point include three-dimensional points obtained by successively scanning at least two adjacent lines of laser light in the multi-line laser radar.
  • the three-dimensional point cloud may be acquired by an array lidar, and the three-dimensional points adjacent to a three-dimensional point include the three-dimensional points obtained by scanning an m × n array block of the array lidar at one time, where m and n are both integers greater than 1. Collecting the 3D point cloud with a multi-line lidar or an array lidar can effectively improve collection efficiency.
  • Beam0 to Beam5 in the figure represent different lines of the multi-line lidar
  • Scan0 to Scan5 represent the 0th to 5th scans
  • each black dot in the figure represents a three-dimensional point obtained by scanning
  • the point in the i-th row and j-th column indicates the three-dimensional point obtained by the j-th scan of the i-th laser line.
  • six lines are taken as an example. In practical applications, the number of lines of the laser radar may also be other numbers, which is not limited in the present disclosure, and each line may be scanned simultaneously or sequentially.
  • the number of scans is not limited to six.
  • the three-dimensional points obtained by scanning are arranged in the form of a two-dimensional array along two dimensions: the different laser lines of the lidar and the scan number. It should be noted that this arrangement is only for ease of understanding; it does not mean that the solution of the present disclosure actually arranges the three-dimensional point cloud into a two-dimensional array, nor does the position of each three-dimensional point in the two-dimensional array represent the real position of the corresponding 3D point in physical space. However, the adjacency relationship of the 3D points in the 2D array is the same as their real adjacency relationship in physical space, and this relationship is used to indicate whether two 3D points are adjacent 3D points.
  • for example, if the 3D point in row 1, column 1 of the two-dimensional array (called point A) is adjacent to the 3D point in row 1, column 2 (called point B), then in physical space point A is also an adjacent 3D point of point B. Since the difference between the scanning angles of two adjacent laser lines is small, and the scanning angles of Beam0 to Beam5 increase or decrease sequentially, the 3D points obtained by scanning two adjacent laser lines are adjacent 3D points. At the same time, since the scanning angle of each laser line changes continuously, the three-dimensional points obtained by two adjacent scans of the same laser line are also adjacent three-dimensional points.
  • the three-dimensional points B1, B2, B3 and B4 may be determined as the three-dimensional points adjacent to the three-dimensional point A. Further, the three-dimensional points B5, B6, B7 and B8 may also be determined as the three-dimensional points adjacent to the three-dimensional point A.
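  • as a non-limiting sketch (not part of the original disclosure), looking up the neighbors B1 to B4, and optionally B5 to B8, in the two-dimensional arrangement of FIG. 4 could be done as follows; the representation of invalid points as `None` is an assumption for illustration:

```python
def neighbors(cloud, i, j, use_diagonals=True):
    """Return the valid neighboring 3D points of cloud[i][j], where the
    cloud is arranged as a 2D array: rows = laser lines (beams),
    columns = scan index. `None` marks an invalid point with no return.
    The 4-neighborhood corresponds to B1..B4; adding the diagonals
    gives B5..B8."""
    offsets = [(-1, 0), (1, 0), (0, -1), (0, 1)]
    if use_diagonals:
        offsets += [(-1, -1), (-1, 1), (1, -1), (1, 1)]
    rows, cols = len(cloud), len(cloud[0])
    result = []
    for di, dj in offsets:
        ni, nj = i + di, j + dj
        if 0 <= ni < rows and 0 <= nj < cols and cloud[ni][nj] is not None:
            result.append(cloud[ni][nj])
    return result
```

  • because the arrangement preserves physical adjacency, this index-based lookup avoids any distance search over the raw point cloud.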
  • the 3D point A can also be regarded as its own adjacent 3D point. In practical applications, not every scan has a return point; the white dots in the figure represent invalid 3D points without return points. Before acquiring the normal vector of the local area, a plurality of consecutive invalid adjacent three-dimensional points in the three-dimensional point cloud may be directly marked as points with unknown attributes.
  • for a multi-line lidar, the invalid 3D points scanned by multiple adjacent laser lines at the same time can be determined as multiple consecutive invalid adjacent 3D points; for an array lidar, the invalid three-dimensional points scanned at one time by a plurality of consecutive laser sources in the same column or the same row are determined as multiple consecutive invalid adjacent three-dimensional points.
  • isolated points with no adjacent 3D points can also be marked as points with unknown attributes.
  • multiple consecutive invalid adjacent 3D points and isolated points can be collectively referred to as unavailable 3D points. By marking the unavailable 3D points as points with unknown attributes, subsequent point cloud segmentation does not need to be performed on these unavailable 3D points.
  • the three-dimensional normal vector of the local area where the three-dimensional point and its adjacent three-dimensional points are located may be obtained first; if the three-dimensional normal vector cannot be obtained, the two-dimensional normal vector of the local area where the three-dimensional point and its adjacent three-dimensional points are located is obtained.
  • both the three-dimensional normal vector and the two-dimensional normal vector should be significant normal vectors.
  • the three-dimensional point's adjacent three-dimensional points can be used to perform principal component analysis on the three-dimensional point to obtain the normal vector of the three-dimensional point.
  • if a 3D point has a sufficient number of adjacent 3D points and the 3D point is not collinear with its adjacent 3D points, a significant 3D normal vector can be obtained. If the number of adjacent 3D points is insufficient, or, when the lidar performs reciprocating scanning, the scanned 3D points degenerate into a collinear configuration because the angular velocity slows near the edge of the scan, then a significant three-dimensional normal vector cannot be obtained. In that case, when a multi-line lidar is used, the three-dimensional points obtained by multiple scans of the same laser line can be acquired, and the two-dimensional normal vector of the local area where these points are located can be determined.
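  • as a non-limiting sketch of the principal component analysis step (not part of the original disclosure), the local normal can be taken as the eigenvector of the neighbors' covariance matrix with the smallest eigenvalue; the `min_points` and `min_planarity` thresholds below are assumptions used to reject insignificant (e.g. collinear) cases:

```python
import numpy as np

def local_normal(points, min_points=3, min_planarity=1e-3):
    """Estimate the normal vector of the local area spanned by a 3D
    point and its neighbors via principal component analysis: the
    normal is the eigenvector of the covariance matrix with the
    smallest eigenvalue. Returns None when there are too few points
    or the points are (near-)collinear, i.e. no significant normal."""
    pts = np.asarray(points, dtype=float)
    if pts.shape[0] < min_points:
        return None
    centered = pts - pts.mean(axis=0)
    cov = centered.T @ centered / pts.shape[0]
    eigvals, eigvecs = np.linalg.eigh(cov)  # eigenvalues in ascending order
    # Collinear points: the two smallest eigenvalues are both ~0, so the
    # normal direction is not uniquely defined -> no significant normal.
    if eigvals[1] < min_planarity * max(eigvals[2], 1e-12):
        return None
    return eigvecs[:, 0]  # unit eigenvector of the smallest eigenvalue
```

  • the degenerate return value models the "not significant" case described above, which then triggers the two-dimensional normal vector fallback.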
  • point cloud segmentation may be performed on the three-dimensional point based on the normal vector. If the three-dimensional normal vector of the local area is obtained, point cloud segmentation may be performed on the three-dimensional point based on the three-dimensional normal vector. If the three-dimensional normal vector of the local area is not obtained, point cloud segmentation may be performed on the three-dimensional point based on the two-dimensional normal vector of the local area.
  • if the angle between the three-dimensional normal vector of the local area and the normal vector of the travel plane of the movable platform is less than a first preset angle, the three-dimensional point may be classified as a point on the travel plane of the movable platform. If the angle between the three-dimensional normal vector of the local area and the normal vector of the travel plane is greater than a second preset angle, the three-dimensional point may be classified as an obstacle. If the angle is greater than or equal to the first preset angle and less than or equal to the second preset angle, the three-dimensional point is marked as a point with unknown attributes. The first preset angle is smaller than the second preset angle.
  • similarly, if the angle between the two-dimensional normal vector of the local area and the normal vector of the travel plane of the movable platform is less than a third preset angle, the three-dimensional point may be classified as a point on the travel plane of the movable platform. If the angle is greater than a fourth preset angle, the three-dimensional point may be classified as an obstacle. If the angle is greater than or equal to the third preset angle and less than or equal to the fourth preset angle, the three-dimensional point is marked as a point with unknown attributes. The third preset angle is smaller than the fourth preset angle.
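  • as a non-limiting sketch (not part of the original disclosure), the angle-threshold classification above could look as follows; the specific threshold values are assumptions chosen within the ranges suggested later in the text:

```python
import numpy as np

# Hypothetical thresholds within the ranges the disclosure suggests.
FIRST_PRESET_ANGLE_DEG = 30.0    # below this: travel-plane (ground) point
SECOND_PRESET_ANGLE_DEG = 60.0   # above this: obstacle

def classify_point(local_normal, plane_normal,
                   low=FIRST_PRESET_ANGLE_DEG, high=SECOND_PRESET_ANGLE_DEG):
    """Classify a 3D point by the angle between the normal of its local
    area and the normal of the travel plane. Angles inside the ambiguous
    band [low, high] yield an 'unknown' label."""
    a = np.asarray(local_normal, dtype=float)
    b = np.asarray(plane_normal, dtype=float)
    # Normal sign is arbitrary, so take the absolute cosine.
    cos_t = abs(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))
    angle = np.degrees(np.arccos(np.clip(cos_t, -1.0, 1.0)))
    if angle < low:
        return "ground"
    if angle > high:
        return "obstacle"
    return "unknown"
```

  • the same function covers the two-dimensional case by passing 2D vectors, since only the included angle matters.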
  • the point cloud segmentation process is exemplified below with reference to FIG. 5A and FIG. 5B .
  • the movable platform is taken as an example of a vehicle traveling on the ground, where the ground may be a plane or a curved surface. Since the angle of the ground generally changes smoothly, point cloud segmentation can be performed on a three-dimensional point based on the angle between the normal vector of its local area and the normal vector of the ground. If the included angle is small, the local area and the ground can be approximately regarded as two parallel planes; that is, the three-dimensional points in the local area will not hinder the driving of the vehicle, so they can be classified as ground points. As shown in FIG. 5A, both 3D point A and 3D point B can be classified as ground points.
  • if the included angle is large, the local area and the ground can be approximately regarded as two mutually perpendicular planes; that is, the local area is a vertical surface relative to the ground, and the three-dimensional points in the local area will hinder the driving of the vehicle, so they can be classified as non-ground points, or more specifically as obstacles. As shown in FIG. 5B, the angle θ1 between the normal vector f_A of the local area where 3D point A is located and the ground normal vector, the angle θ2 between the normal vector f_B of the local area where 3D point B is located and the ground normal vector, and the angle θ3 between the normal vector f_C of the local area where 3D point C is located and the ground normal vector are all relatively large. Therefore, 3D points A, B, and C can be classified as non-ground points.
  • the first preset angle may be less than or equal to 35°, and the second preset angle may be greater than or equal to 65°.
  • the first preset angle is 30° or 20°.
  • the second preset included angle is 60° or 75°.
  • the third preset angle may be less than or equal to 20°, and the fourth preset angle may be greater than or equal to 70°.
  • the third preset angle is 15° or 10°.
  • the fourth preset included angle is 80° or 75°.
  • Various angle thresholds can be flexibly set according to actual needs, which will not be repeated here.
  • the 3D point cloud segmented in the above manner may include multiple 3D points that fail to be segmented, including 3D points with insignificant normal vectors and unavailable 3D points.
  • multiple frames of the 3D point cloud can be obtained; the driving road surface of the movable platform is fitted based on the multiple frames of the 3D point cloud to obtain a fitted model of the driving road surface; and based on the fitted model, secondary point cloud segmentation is performed on the three-dimensional points in the multiple frames that failed to be segmented.
  • the road surface fitting may be implemented by polynomial fitting or other fitting methods, which are not limited in the present disclosure.
  • the pose information of the movable platform when each frame of the multiple frames of the 3D point cloud was collected can be obtained; based on the pose information corresponding to each frame, each frame of the three-dimensional point cloud is transformed into a preset coordinate system; and the road surface is fitted based on the points in the transformed multiple frames that are located on the travel plane of the movable platform.
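  • as a non-limiting sketch (not part of the original disclosure), transforming the frames into one preset coordinate system could be done as follows; representing each pose as a 4 × 4 homogeneous matrix mapping frame coordinates to world coordinates is an assumption, since the disclosure only refers to "pose information":

```python
import numpy as np

def to_common_frame(frames, poses, target_pose):
    """Transform multiple frames of 3D points into one preset coordinate
    system (e.g. the platform's current frame). `frames` is a list of
    (N, 3) point arrays; each pose in `poses` is a 4x4 homogeneous
    matrix mapping that frame's points into the world frame, and
    `target_pose` maps the preset frame into the world frame."""
    inv_target = np.linalg.inv(target_pose)
    merged = []
    for pts, pose in zip(frames, poses):
        pts = np.asarray(pts, dtype=float)
        homo = np.hstack([pts, np.ones((pts.shape[0], 1))])
        # world <- frame, then preset <- world.
        merged.append((homo @ (inv_target @ pose).T)[:, :3])
    return np.vstack(merged)
```

  • once all frames share one coordinate system, their travel-plane points can be fed jointly into the road surface fit.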
  • the preset coordinate system may be the current coordinate system of the movable platform.
  • secondary point cloud segmentation may be performed on the three-dimensional points that failed to be segmented based on their distance to the fitted model. For example, if the distance between a three-dimensional point that failed to be segmented and the fitted model is less than or equal to a preset distance threshold, the point is classified as a point on the travel plane of the movable platform. For another example, if the distance between the point and the fitted model is greater than the preset distance threshold, the point is marked as a point with unknown attributes. In this way, the accuracy of point cloud segmentation can be improved.
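  • as a non-limiting sketch (not part of the original disclosure), the secondary segmentation could fit a simple surface to the already-labeled ground points and re-label the failed points by their residual distance; a plane stands in for the polynomial road model, and the distance threshold is an assumption:

```python
import numpy as np

def fit_road_surface(ground_pts):
    """Least-squares fit of z = a*x + b*y + c to points already labeled
    as ground (a plane as a simple stand-in for the polynomial model)."""
    pts = np.asarray(ground_pts, dtype=float)
    A = np.column_stack([pts[:, 0], pts[:, 1], np.ones(len(pts))])
    coef, *_ = np.linalg.lstsq(A, pts[:, 2], rcond=None)
    return coef

def second_pass(failed_pts, coef, dist_thresh=0.2):
    """Re-label points that failed the first segmentation: points close
    to the fitted surface become ground, the rest stay unknown.
    `dist_thresh` is a hypothetical preset distance threshold."""
    labels = []
    for x, y, z in np.asarray(failed_pts, dtype=float):
        residual = abs(z - (coef[0] * x + coef[1] * y + coef[2]))
        labels.append("ground" if residual <= dist_thresh else "unknown")
    return labels
```

  • using the vertical residual keeps the sketch simple; a perpendicular point-to-surface distance would serve the same role.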
  • the three-dimensional point cloud can be acquired by lidar.
  • Different lidars may use different scanning methods.
  • Figure 6A shows several common radar scanning methods. The gray area in the figure is the scanning range of the lidar, and the arrow indicates the scanning direction of the laser.
  • in scanning mode 1, the lidar performs circular scanning, and the scanning angle θ1 is equal to 360°.
  • in scanning mode 2, the lidar scans back and forth over a relatively large angle θ2, where θ2 can be an angle greater than or equal to 135°.
  • in scanning mode 3, the lidar scans over a relatively small fixed angle θ3, where θ3 can be an angle less than or equal to 45°.
  • the above embodiments divide the scanning modes of the lidar based on the scanning angle. Those skilled in the art can understand that the specific angles given above are only exemplary; in practical applications, other angle thresholds can also be used to divide the different scanning modes.
  • FIG. 6B shows a possible application scenario.
  • a lidar can be installed on the vehicle B, for example, a lidar is installed in front of the vehicle B to collect a three-dimensional point cloud in front of the vehicle B.
  • the lidar can also be installed on the top, the side or other parts of the vehicle B, which will not be repeated here.
  • vehicle B is stationary, and vehicle A is driving in front of vehicle B.
  • vehicle A is gradually moving away from vehicle B.
  • the scanning angle within which the lidar can scan vehicle A during the scanning process is denoted as θ.
  • each frame of the 3D point cloud includes the 3D points on the vehicle A.
  • the time interval between two adjacent frames of the 3D point cloud is relatively small, and the distance vehicle A moves between two adjacent frames is also relatively small, so the spacing between the 3D points on vehicle A in two adjacent frames is likewise small. Visually, the 3D point cloud obtained with scanning mode 3 therefore suffers from a severe "smearing" problem, as shown by 3D point cloud 2 in Figure 6C. Such a "smeared" 3D point cloud is prone to errors during processing.
  • each three-dimensional point in the three-dimensional point cloud may be labeled based on the point cloud segmentation result.
  • the label may include at least one of numbers, letters, and symbols. Taking numeric labels as an example, bit 1 can be used to represent a 3D point on the road where the movable platform is traveling, bit 0 can be used to represent a 3D point outside the road where the movable platform is traveling, and bits 01 can be used to represent a 3D point with unknown attributes.
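As a minimal illustration of the labelling scheme described above (the bit strings follow the example in the text; the function and category names are hypothetical, not from the source):

```python
# Hypothetical label mapping following the example in the text:
# "1" = point on the travel surface, "0" = point off it, "01" = unknown.
LABELS = {"road": "1", "off_road": "0", "unknown": "01"}

def label_points(categories):
    """Map per-point category names to their textual labels."""
    return [LABELS[c] for c in categories]
```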
  • after the labeled 3D point cloud is acquired, the labeled 3D point cloud can be output.
  • the output frame rate of the labeled 3D point cloud may reach a frequency equal to the input frame rate of the 3D point cloud.
  • the frame rate of the output tagged 3D point cloud is equal to 1/n of the frame rate of the input 3D point cloud.
  • the point cloud segmentation result can be used by the planning unit on the movable platform to plan the driving state of the movable platform.
  • the planning unit can determine the position of obstacles on the driving path based on the labels obtained from the segmentation results of the point cloud, thereby deciding whether to control the speed and attitude of the movable platform to avoid obstacles.
  • the point cloud segmentation result can also be output to a multimedia system on the movable platform, such as a display screen, a voice playback system, etc., for outputting multimedia prompt information to the user.
  • FIG. 7 is an overall flowchart of a point cloud segmentation method according to an embodiment of the present disclosure.
  • the three-dimensional point cloud output by the lidar may be continuously acquired, and the frame rate of the three-dimensional point cloud may be 100 Hz, or may be other values.
  • the three-dimensional point cloud may be reorganized according to the scanning time of the lidar, for example, in the form shown in FIG. 4 .
  • this step can also be omitted, and in subsequent steps, adjacent 3D points can be directly obtained from the 3D point cloud according to the scanning time of the lidar for processing.
  • the 3D point cloud can also be reorganized according to the scanning angle of the lidar.
  • step 703 continuity analysis is performed on the 3D points in the 3D point cloud and their adjacent 3D points to determine the unavailable 3D points.
  • step 704 a three-dimensional (3D) normal vector or a two-dimensional (2D) normal vector of each three-dimensional point is extracted.
  • step 705 the point cloud is segmented based on the three-dimensional normal vector or the two-dimensional normal vector extracted in step 704, and the category label of each three-dimensional point in the three-dimensional point cloud is marked.
  • step 706 a labeled 3D point cloud is output, and the output frame rate of the 3D point cloud may be equal to the input frame rate, for example, both are 100 Hz.
  • step 707 it can be determined whether the current accumulation of n frames is full, where n can be predetermined based on the input frame rate of the downstream processing unit that processes the three-dimensional point cloud. If yes, go to step 708; otherwise, go back to step 701.
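The n-frame accumulation check in step 707 might be sketched as follows (a minimal illustration; the class name and return convention are assumptions, and `n` is assumed to come from the downstream unit's input frame rate):

```python
class FrameAccumulator:
    """Per-frame results are emitted immediately, while every n-th frame
    releases the accumulated batch for road-surface refinement (steps 708-712)."""

    def __init__(self, n):
        self.n = n
        self.frames = []

    def push(self, frame):
        """Return the accumulated batch when n frames are full, else None."""
        self.frames.append(frame)
        if len(self.frames) == self.n:
            batch, self.frames = self.frames, []
            return batch  # ready for ground-model fitting
        return None
```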
  • step 708 for the accumulated n frames of 3D point clouds, attitude data when the movable platform collects the n frames of 3D point clouds can be obtained.
  • the n frames of the three-dimensional point cloud may be transformed into the current coordinate system of the movable platform based on the pose data corresponding to the n frames of the three-dimensional point cloud.
  • ground model fitting may be performed using the ground points in the n-frame three-dimensional point cloud.
  • the ground point used in this step may be the ground point marked in step 705 .
  • the existing category labels can be optimized by using the distance between the ground model and the three-dimensional point.
  • a labeled 3D point cloud can be output, and the output frame rate is equal to 1/n of the frame rate of the input 3D point cloud, for example, it can be (100/n) Hz.
  • step 713 it is determined whether or not the program ends. If the procedure is not over, return to step 701 to continue point cloud segmentation.
  • the order in which the steps are written does not imply a strict execution order and does not constitute any limitation on the implementation process; the specific execution order of each step should be determined by its function and possible internal logic.
  • An embodiment of the present disclosure further provides a point cloud segmentation device, including a processor, where the processor is configured to perform the following steps:
  • Point cloud segmentation is performed on the three-dimensional point based on the normal vector.
  • the angular difference between the lidar's scanning angle for the 3D point and its scanning angle for the 3D point adjacent to the 3D point is less than a preset angle difference; and/or the time difference between the lidar's scan time for the three-dimensional point and its scan time for the three-dimensional point adjacent to the three-dimensional point is less than a preset time period.
  • the three-dimensional point cloud is acquired by a single-line lidar, and the three-dimensional points adjacent to the three-dimensional point include at least one three-dimensional point continuously collected while the single-line lidar moves along a first direction and at least one three-dimensional point continuously collected while the single-line lidar moves along a second direction, the first direction being different from the second direction; or the three-dimensional point cloud is collected by a multi-line lidar, and the three-dimensional points adjacent to the point include three-dimensional points obtained by multiple successive scans of at least two adjacent laser lines of the multi-line lidar; or the three-dimensional point cloud is acquired by an array lidar, and the three-dimensional points adjacent to the three-dimensional point include the three-dimensional points obtained by one scan of an m×n array block in the array lidar, where m and n are both integers greater than 1.
  • the processor is further configured to mark the unavailable three-dimensional points in the three-dimensional point cloud as points with unknown properties before acquiring the normal vector of the local area.
  • the unavailable 3D points include a continuous plurality of invalid adjacent 3D points, and isolated points where there are no adjacent 3D points.
  • the multiple consecutive invalid adjacent 3D points include either of the following: invalid 3D points scanned at the same time by adjacent lines of a multi-line lidar; invalid 3D points scanned at one time by multiple consecutive laser sources in the same column or the same row of an array lidar.
  • the processor is configured to: if the 3D normal vector of the local area is obtained, perform point cloud segmentation on the 3D point based on the 3D normal vector; and/or if the 3D normal vector of the local area is not obtained, perform point cloud segmentation on the 3D point based on the two-dimensional normal vector of the local area.
  • the processor is configured to: if the angle between the three-dimensional normal vector of the local area and the normal vector of the travel plane of the movable platform is smaller than a first preset angle, divide the three-dimensional point into a point on the travel plane of the movable platform; and/or if the angle between the three-dimensional normal vector of the local area and the normal vector of the travel plane of the movable platform is greater than a second preset angle, divide the three-dimensional point into an obstacle; and/or if the angle between the three-dimensional normal vector of the local area and the normal vector of the travel plane of the movable platform is greater than or equal to the first preset angle and less than or equal to the second preset angle, mark the three-dimensional point as a point of unknown attribute; wherein the first preset angle is smaller than the second preset angle.
  • the first preset angle is less than or equal to 35°, and the second preset angle is greater than or equal to 65°.
  • the processor is configured to: if the angle between the two-dimensional normal vector of the local area and the normal vector of the travel plane of the movable platform is smaller than a third preset angle, divide the three-dimensional point into a point on the travel plane of the movable platform; and/or if the angle between the two-dimensional normal vector of the local area and the normal vector of the travel plane of the movable platform is greater than a fourth preset angle, divide the three-dimensional point into an obstacle; and/or if the angle between the two-dimensional normal vector of the local area and the normal vector of the travel plane of the movable platform is greater than or equal to the third preset angle and less than or equal to the fourth preset angle, mark the three-dimensional point as a point of unknown attribute; wherein the third preset angle is smaller than the fourth preset angle.
  • the third preset angle is less than or equal to 20°, and the fourth preset angle is greater than or equal to 70°.
  • the processor is further configured to: acquire multiple frames of the 3D point cloud; fit the driving road surface of the movable platform based on the multiple frames of the 3D point cloud to obtain a fitting model of the driving road surface; and perform secondary point cloud segmentation, based on the fitting model, on the three-dimensional points that failed to be segmented in the multiple frames of the 3D point cloud.
  • the processor is configured to: acquire the pose information of the movable platform when it collects each frame of the multiple frames of the 3D point cloud; transform each frame of the three-dimensional point cloud into a preset coordinate system based on the pose information corresponding to that frame; and fit the driving surface of the movable platform based on the points located on the driving plane of the movable platform in the transformed multiple frames of the three-dimensional point cloud.
  • the processor is configured to: perform secondary point cloud segmentation on the three-dimensional points that fail to be segmented based on the distances between the three-dimensional points that fail to be segmented and the fitting model.
  • the processor is configured to: if the distance between a three-dimensional point that failed to be segmented and the fitting model is not greater than a preset distance threshold, segment that point as a point on the travel plane of the movable platform; and/or if the distance between the three-dimensional point that failed to be segmented and the fitting model is greater than the preset distance threshold, mark that point as a point of unknown attribute.
  • the processor is further configured to: label each 3D point in the 3D point cloud based on the point cloud segmentation result.
  • the processor is further configured to: output the labeled 3D point cloud; wherein the output frame rate of the labeled 3D point cloud is equal to the input frame rate of the 3D point cloud.
  • the point cloud segmentation result obtained by performing point cloud segmentation on the three-dimensional point cloud is used by the planning unit on the movable platform to plan the driving state of the movable platform.
  • the scanning angle of the lidar is less than or equal to 45°.
  • FIG. 8 shows a schematic diagram of a more specific hardware structure of a data processing apparatus provided by an embodiment of this specification.
  • the apparatus may include: a processor 801 , a memory 802 , an input/output interface 803 , a communication interface 804 and a bus 805 .
  • the processor 801 , the memory 802 , the input/output interface 803 and the communication interface 804 realize the communication connection among each other within the device through the bus 805 .
  • the processor 801 can be implemented by a general-purpose CPU (Central Processing Unit), a microprocessor, an application-specific integrated circuit (ASIC), or one or more integrated circuits, and is used to execute relevant programs to implement the technical solutions provided by the embodiments of this specification.
  • the memory 802 can be implemented in the form of ROM (Read-Only Memory), RAM (Random Access Memory), a static storage device, a dynamic storage device, and the like.
  • the memory 802 may store an operating system and other application programs. When implementing the technical solutions provided by the embodiments of the present specification through software or firmware, relevant program codes are stored in the memory 802 and invoked by the processor 801 for execution.
  • the input/output interface 803 is used for connecting input/output modules to realize information input and output.
  • the input/output module can be configured in the device as a component (not shown in the figure), or can be externally connected to the device to provide corresponding functions.
  • the input device may include a keyboard, a mouse, a touch screen, a microphone, various sensors, etc.
  • the output device may include a display, a speaker, a vibrator, an indicator light, and the like.
  • the communication interface 804 is used to connect a communication module (not shown in the figure), so as to realize the communication interaction between the device and other devices.
  • the communication module may implement communication through wired means (eg, USB, network cable, etc.), or may implement communication through wireless means (eg, mobile network, WIFI, Bluetooth, etc.).
  • Bus 805 includes a path to transfer information between the various components of the device (eg, processor 801, memory 802, input/output interface 803, and communication interface 804).
  • although the above-mentioned device only shows the processor 801, the memory 802, the input/output interface 803, the communication interface 804 and the bus 805, in the specific implementation process the device may also include other components necessary for normal operation.
  • the above-mentioned device may only include components necessary to implement the solutions of the embodiments of the present specification, rather than all the components shown in the figures.
  • an embodiment of the present disclosure further provides a movable platform 900, including a housing 901; a point cloud collecting device 902, disposed on the housing 901, for collecting a three-dimensional point cloud; and a three-dimensional point cloud segmentation device 903, disposed in the housing 901, for executing the method described in any embodiment of the present disclosure.
  • the movable platform 900 can be a drone, an unmanned vehicle, an unmanned ship, a mobile robot, etc.
  • the point cloud collection device 902 can be a visual sensor (for example, a binocular vision sensor, a trinocular vision sensor, etc.) or a lidar.
  • Embodiments of the present disclosure further provide a computer-readable storage medium on which a computer program is stored; when executed by a processor, the program implements the method described in any of the foregoing embodiments.
  • Computer-readable media include persistent and non-persistent, removable and non-removable media, and may implement information storage by any method or technology.
  • Information may be computer readable instructions, data structures, modules of programs, or other data.
  • Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read only memory (ROM), Electrically Erasable Programmable Read Only Memory (EEPROM), Flash Memory or other memory technology, Compact Disc Read Only Memory (CD-ROM), Digital Versatile Disc (DVD) or other optical storage, Magnetic tape cassettes, magnetic tape magnetic disk storage or other magnetic storage devices or any other non-transmission medium that can be used to store information that can be accessed by a computing device.
  • computer-readable media does not include transitory computer-readable media, such as modulated data signals and carrier waves.
  • a typical implementation device is a computer, which may be in the form of a personal computer, laptop computer, cellular phone, camera phone, smart phone, personal digital assistant, media player, navigation device, email sending and receiving device, game console, tablet computer, wearable device, or a combination of any of these devices.

Abstract

A three-dimensional point cloud segmentation method and apparatus, and a movable platform, used for performing point cloud segmentation on a three-dimensional point cloud collected by the movable platform. The method comprises: determining, based on the scanning angle and scanning time at which a lidar on the movable platform collects the three-dimensional point cloud, adjacent three-dimensional points that are adjacent in physical space to a three-dimensional point in the three-dimensional point cloud (301); acquiring a normal vector of the local area where the three-dimensional point and its adjacent three-dimensional points are located (302); and performing point cloud segmentation on the three-dimensional point based on the normal vector (303).

Description

Three-dimensional point cloud segmentation method and apparatus, and movable platform

Technical Field

The present disclosure relates to the technical field of computer vision, and in particular to a three-dimensional point cloud segmentation method and apparatus, and a movable platform.

Background Art

While a movable platform is traveling, a path-planning (planning) module on the movable platform can make decisions about and plan the platform's driving state (for example, pose and speed). To enable the planning module to do so, a point cloud collection device on the movable platform must collect a three-dimensional point cloud of the surrounding environment and perform point cloud segmentation, so as to distinguish the ground from obstacles in the three-dimensional point cloud and further distinguish dynamic objects from static objects among the obstacles. Point cloud segmentation is therefore a key step in planning the driving state of the movable platform.

Conventional point cloud segmentation is generally based either on local features or on a global model of the road surface on which the movable platform travels. Both approaches place certain requirements on point cloud density and therefore need to accumulate three-dimensional point clouds over a period of time. However, when segmenting dynamic objects, both approaches are prone to motion blur, which reduces the reliability of point cloud segmentation.
Summary of the Invention

In view of this, embodiments of the present disclosure propose a three-dimensional point cloud segmentation method and apparatus, and a movable platform, so as to reliably perform point cloud segmentation on the three-dimensional point clouds of dynamic objects.

According to a first aspect of the embodiments of the present disclosure, a three-dimensional point cloud segmentation method is provided for performing point cloud segmentation on a three-dimensional point cloud collected by a movable platform. The method comprises: determining, based on the scanning angle and scanning time at which a lidar on the movable platform collects the three-dimensional point cloud, adjacent three-dimensional points that are adjacent in physical space to a three-dimensional point in the three-dimensional point cloud; acquiring a normal vector of the local area where the three-dimensional point and its adjacent three-dimensional points are located; and performing point cloud segmentation on the three-dimensional point based on the normal vector.

According to a second aspect of the embodiments of the present disclosure, a three-dimensional point cloud segmentation apparatus is provided, comprising a processor. The apparatus is used for performing point cloud segmentation on a three-dimensional point cloud collected by a movable platform, and the processor is configured to perform the following steps: determining, based on the scanning angle and scanning time at which a lidar on the movable platform collects the three-dimensional point cloud, adjacent three-dimensional points that are adjacent in physical space to a three-dimensional point in the three-dimensional point cloud; acquiring a normal vector of the local area where the three-dimensional point and its adjacent three-dimensional points are located; and performing point cloud segmentation on the three-dimensional point based on the normal vector.

According to a third aspect of the embodiments of the present disclosure, a movable platform is provided, comprising: a housing; a point cloud collection device disposed on the housing for collecting a three-dimensional point cloud; and a three-dimensional point cloud segmentation apparatus disposed in the housing for executing the method described in any embodiment of the present disclosure.

According to a fourth aspect of the embodiments of the present disclosure, a computer-readable storage medium is provided, on which a computer program is stored; when executed by a processor, the program implements the method described in any embodiment of the present disclosure.

By applying the solutions of the embodiments of the present disclosure, adjacent three-dimensional points that are adjacent in physical space to a three-dimensional point are determined based on the scanning angle and scanning time at which the lidar collects the three-dimensional point cloud, and point cloud segmentation is performed based on the normal vector of the local area where the three-dimensional point and its adjacent points are located. The natural geometric constraints produced by lidar scanning can thus be extracted as a basis for segmentation; the geometric features between points are exploited during segmentation, which improves the reliability of point cloud segmentation.
Brief Description of the Drawings

To explain the technical solutions in the embodiments of the present disclosure more clearly, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present disclosure; for those of ordinary skill in the art, other drawings can be obtained from these drawings without creative effort.

FIG. 1 is a schematic diagram of a point cloud segmentation process of some embodiments.

FIG. 2 is a schematic diagram of a decision-planning process during travel of a movable platform of some embodiments.

FIG. 3 is a flowchart of a point cloud segmentation method according to an embodiment of the present disclosure.

FIG. 4 is a schematic diagram of a point cloud arrangement according to an embodiment of the present disclosure.

FIG. 5A is a schematic diagram of the normal vector of a ground point according to an embodiment of the present disclosure.

FIG. 5B is a schematic diagram of the normal vector of a non-ground point according to an embodiment of the present disclosure.

FIG. 6A is a schematic diagram of radar scanning modes according to an embodiment of the present disclosure.

FIG. 6B is a schematic diagram of an application scenario according to an embodiment of the present disclosure.

FIG. 6C is a schematic diagram of the three-dimensional point clouds scanned in the application scenario shown in FIG. 6B.

FIG. 7 is an overall flowchart of a point cloud segmentation process according to an embodiment of the present disclosure.

FIG. 8 is a schematic diagram of a point cloud segmentation apparatus according to an embodiment of the present disclosure.

FIG. 9 is a schematic diagram of a movable platform according to an embodiment of the present disclosure.
Detailed Description

Exemplary embodiments are described in detail here, examples of which are shown in the accompanying drawings. Where the following description refers to the drawings, the same numbers in different drawings denote the same or similar elements unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present disclosure; rather, they are merely examples of apparatuses and methods consistent with some aspects of the present disclosure as detailed in the appended claims.

The terminology used in the present disclosure is for the purpose of describing particular embodiments only and is not intended to limit the present disclosure. The singular forms "a", "said" and "the" used in the specification and appended claims are also intended to include the plural forms unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items.

It should be understood that although the terms first, second, third, etc. may be used in the present disclosure to describe various pieces of information, the information should not be limited to these terms. These terms are only used to distinguish information of the same type from one another. For example, without departing from the scope of the present disclosure, first information may also be called second information, and similarly, second information may also be called first information. Depending on the context, the word "if" as used herein may be interpreted as "when", "while" or "in response to determining".
While a movable platform is traveling, a path-planning (planning) module on the movable platform can make decisions about and plan the platform's driving state, and point cloud segmentation is a key step in this planning. FIG. 1 is a schematic diagram of a point cloud segmentation process of some embodiments. In step 101, a three-dimensional point cloud can be collected by the point cloud collection device on the movable platform. Then, in step 102, for a movable platform traveling on the ground (for example, an unmanned vehicle), the collected three-dimensional point cloud can be subjected to ground segmentation, i.e. the three-dimensional points are divided into ground points and non-ground points. For other types of movable platforms (for example, mobile robots), the collected three-dimensional point cloud can be segmented into points on the travel surface of the movable platform and points off that surface. For ease of description, the travel surface is taken to be the ground below. In step 103, if a three-dimensional point is a ground point, step 104 is executed to attach a ground-point label to the point; otherwise step 105 is executed to perform dynamic/static segmentation of the point, i.e. to divide it into stationary static points and moving dynamic points. In step 106, if the point is a static point, step 107 is executed to attach a static-point label; otherwise step 108 is executed to attach a dynamic-point label, and in step 109 the labeled three-dimensional point cloud is output to a downstream module. All or some of the points in the three-dimensional point cloud may be labeled. The labels may include at least one of a first label indicating whether a point is a ground point and a second label indicating whether a point is a static point, and may further include labels carrying other information about the point.

The downstream module may be a planning module on the movable platform, for example an electronic control unit (ECU) or a central processing unit (CPU). After receiving the labeled three-dimensional point cloud, the planning module can plan the driving state of the movable platform based on the labels of the three-dimensional points. The driving state may include at least one of the pose and the speed of the movable platform. FIG. 2 is a schematic diagram of a decision-planning process of some embodiments. In steps 201 and 202, the planning module receives the three-dimensional point cloud and reads the labels carried in it. In step 203, it is determined based on the labels whether a three-dimensional point in the point cloud is a point on the travel surface of the movable platform (for example, the ground). Taking ground points as an example, if it is, step 204 is executed: the three-dimensional points belonging to lane lines are identified from the ground points, and the attitude of the movable platform is determined according to the direction of the lane lines, so that the platform travels along them. If it is a non-ground point, step 205 is executed to determine whether the non-ground point is a static point. If so, step 206 is executed to determine the pose of the movable platform according to the position of the static point. For example, it is determined whether the static point lies on the pre-planned driving path; if so, the path is re-planned to avoid a collision between the movable platform and the static point. If the non-ground point is a dynamic point, step 207 is executed to determine at least one of the attitude and the speed of the movable platform according to the position and speed of the dynamic point. For example, if the dynamic point lies on the pre-planned driving path of the movable platform and its moving speed is less than or equal to that of the platform, the platform is controlled to decelerate, or its attitude is adjusted so that it passes around the dynamic point. As another example, the movable platform may be controlled to travel at the same speed as the dynamic point.
It can thus be seen that point cloud segmentation is a key step in planning the driving state of a movable platform, and accurate segmentation helps accurate planning. There are generally two conventional approaches to three-dimensional point cloud segmentation. One is segmentation based on local features: a frame of the three-dimensional point cloud is first projected into a two-dimensional grid map or a three-dimensional voxel map; then, taking one or several adjacent grid cells or voxels as the object, features such as absolute height, relative height, coordinate variance, local normal vector and principal component vectors are extracted, and the three-dimensional points within the cell are segmented based on the extracted features. The other is segmentation based on a global model: candidate points are selected by random sample consensus (RANSAC) to globally fit the travel surface of the movable platform multiple times; after the optimal solution is found, the distance between each point and the ground model is analyzed, points near the ground model are segmented as ground points, and points far from the model are segmented as obstacle points.

However, when segmenting dynamic objects, both of the above approaches are prone to motion blur, which reduces the reliability of point cloud segmentation. They also have the following problems. Local-feature-based segmentation needs to traverse the point cloud to build a map, incurring large time and storage overheads, and local features may fail locally when the point cloud is noisy. Global-model-based segmentation must use a low-complexity model to avoid being skewed by outliers, so it cannot adapt well to complex road surfaces, and because of fitting errors its segmentation accuracy for low obstacles is poor.
On this basis, the present disclosure provides a three-dimensional point cloud segmentation method. As shown in FIG. 3, the method may include:

Step 301: based on the scanning angle and scanning time at which a lidar on the movable platform collects the three-dimensional point cloud, determining adjacent three-dimensional points that are adjacent in physical space to a three-dimensional point in the three-dimensional point cloud;

Step 302: acquiring a normal vector of the local area where the three-dimensional point and its adjacent three-dimensional points are located;

Step 303: performing point cloud segmentation on the three-dimensional point based on the normal vector.

The present disclosure can segment the three-dimensional point cloud collected by the lidar into three-dimensional points on the travel surface of the movable platform and three-dimensional points off that surface. Compared with existing methods, which treat a frame of lidar point cloud as one unordered mass of points, the present disclosure determines the physically adjacent points of a three-dimensional point based on the scanning angle and scanning time of the lidar, and performs segmentation based on the normal vector of the local area where the point and its neighbors lie. The natural geometric constraints produced by lidar scanning can thus be extracted as a basis for segmentation; the geometric features between points are exploited, which improves the reliability of segmentation and is especially effective for radars with significant motion blur. In addition, the present disclosure requires no planar assumption and can segment fairly complex road surfaces; it needs no grid or voxel map and no random sampling, is very lightweight, and can produce a segmentation result with a single pass over the point cloud, so it does not need to accumulate point clouds over a long time and can output high-frequency segmentation results.
In step 301, adjacent three-dimensional points that are adjacent in physical space to a three-dimensional point in the point cloud can be determined. The adjacent points of a three-dimensional point may include one or more three-dimensional points whose distance to the point in physical space is less than a preset distance threshold. Since the scanning angle of the lidar changes continuously while collecting the point cloud, the adjacent points can be determined based on the scanning angle and scanning time. For example, if the difference between the lidar's scanning angles for point A and point B is less than a preset angle difference, point A is regarded as an adjacent point of point B. As another example, if the time difference between the lidar's scan times for point A and point B is less than a preset duration, point A is regarded as an adjacent point of point B. If the scanning-angle difference is greater than or equal to the preset angle difference, or the scan-time difference is greater than or equal to the preset duration, point A is not regarded as an adjacent point of point B.
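The angle/time adjacency test described above can be sketched as follows (a minimal illustration; the function name and the default threshold values are placeholders, not values from the source):

```python
def is_adjacent(angle_a, time_a, angle_b, time_b,
                max_angle_diff=1.0, max_time_diff=0.001):
    """Treat two scanned points as physical-space neighbours when both the
    scanning-angle difference and the scan-time difference fall below the
    preset thresholds (defaults here are arbitrary placeholders)."""
    return (abs(angle_a - angle_b) < max_angle_diff
            and abs(time_a - time_b) < max_time_diff)
```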
In some embodiments, the point cloud can be collected by various kinds of lidar. For example, the three-dimensional point cloud may be collected by a single-line lidar. In this case, the adjacent points of a three-dimensional point include at least one point continuously collected while the single-line lidar moves along a first direction and at least one point continuously collected while it moves along a second direction, the first direction being different from the second direction. Optionally, the first direction is perpendicular to the second direction. For example, the first direction may be perpendicular to the travel surface of the movable platform, and the second direction may be perpendicular to both the first direction and the direction of travel of the movable platform. When the movable platform is a vehicle traveling on horizontal ground, the first direction may be the vertical direction, and the second direction may be the direction on the horizontal ground perpendicular to the vehicle's direction of travel. When the movable platform is a window-cleaning robot, the first direction may be the direction perpendicular to the glass plane (generally horizontal), and the second direction may be the direction on the glass plane perpendicular to the robot's direction of travel.

As another example, the three-dimensional point cloud may be collected by a multi-line lidar. In this case, the adjacent points of a three-dimensional point include points obtained by multiple successive scans of at least two adjacent laser lines of the multi-line lidar. As yet another example, the three-dimensional point cloud is collected by an array lidar, and the adjacent points of a three-dimensional point include the points obtained by one scan of an m×n array block in the array lidar, where m and n are both integers greater than 1. Collecting the point cloud with a multi-line or array lidar can effectively improve collection efficiency.

The case of a point cloud collected by a multi-line lidar is illustrated below with reference to FIG. 4. In the figure, Beam0 to Beam5 denote the different lines of the multi-line lidar, and Scan0 to Scan5 denote the 0th to 5th scans. Each black dot represents a scanned three-dimensional point; the point in row i, column j is the point obtained by the j-th scan of the i-th laser line. Six lines are used here as an example; in practice the lidar may have any other number of lines, which the present disclosure does not limit, and the lines may scan simultaneously or in turn. The number of scans is likewise not limited to six. The scanned points are arranged here as a two-dimensional array along two dimensions: the laser line and the scan index. It should be noted that this arrangement is only for ease of understanding; it does not mean the solution of the present disclosure actually arranges the point cloud into a two-dimensional array, and the position of each point in the array does not represent its true position in physical space. However, the adjacency relations in the array are the same as the true adjacency relations in physical space, where an adjacency relation indicates whether two points are adjacent. For example, if the point in row 1, column 1 of the array (point A) is an adjacent point of the point in row 1, column 2 (point B), then in physical space point A is also an adjacent point of point B. Since the difference between the scanning angles of two adjacent laser lines is small, and the scanning angles of Beam0 to Beam5 increase or decrease monotonically, points scanned by two adjacent lines are adjacent points. Likewise, since the scanning angle of each line changes continuously, points obtained by two consecutive scans of the same line are also adjacent points.
Taking point A in the two-dimensional array as an example, points B1, B2, B3 and B4 can be determined as adjacent points of A. Further, points B5, B6, B7 and B8 can also be determined as adjacent points of A. Point A can also be regarded as an adjacent point of itself. In practice, not every scan has a return; the white dots in the figure denote invalid three-dimensional points with no return. Before acquiring the normal vector of the local area, multiple consecutive invalid adjacent points in the point cloud can be directly marked as points of unknown attribute. For a multi-line lidar, invalid points scanned at the same time by multiple adjacent lines can be determined to be multiple consecutive invalid adjacent points; for an array lidar, invalid points scanned at one time by multiple consecutive laser sources in the same column or the same row can be determined to be multiple consecutive invalid adjacent points. In addition, isolated points without any adjacent points can also be marked as points of unknown attribute. The multiple consecutive invalid adjacent points and the isolated points can be collectively called unavailable points; by marking unavailable points as points of unknown attribute, subsequent point cloud segmentation does not need to process them.
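Under the beam-by-scan arrangement above, neighbour lookup can be sketched as follows (the window radius and the boolean validity grid are illustrative assumptions; invalid entries model scans with no return):

```python
def grid_neighbors(valid, i, j, r=1):
    """Return the (row, col) indices of valid neighbours of point (i, j)
    within a (2r+1) x (2r+1) window of the beam-by-scan grid.

    `valid` is a 2D boolean grid: row = laser line, column = scan index.
    """
    rows, cols = len(valid), len(valid[0])
    out = []
    for di in range(-r, r + 1):
        for dj in range(-r, r + 1):
            ni, nj = i + di, j + dj
            if (di or dj) and 0 <= ni < rows and 0 <= nj < cols and valid[ni][nj]:
                out.append((ni, nj))
    return out
```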
In step 302, the three-dimensional normal vector of the local area where the point and its adjacent points lie can be acquired first; if the three-dimensional normal vector is not obtained, the two-dimensional normal vector of the local area where the point and its adjacent points lie is acquired. Both should be significant normal vectors. Principal component analysis can be performed on a point using its adjacent points to obtain the point's normal vector. If a point has enough adjacent points and is not collinear with them, a significant three-dimensional normal vector can be obtained. If the number of adjacent points is insufficient, or if, during the lidar's reciprocating scan, the points scanned at the edges degenerate into collinearity because the angular velocity slows down there, no significant three-dimensional normal vector can be obtained. When a multi-line lidar is used, the points obtained by multiple scans of the same laser line can be acquired, and the two-dimensional normal vector of the local area where those points lie can be determined.
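The PCA-based normal estimation described above might look like the following sketch (the minimum point count and the eigenvalue threshold used to detect degeneracy are assumed values, not from the source):

```python
import numpy as np

def local_normal(points, min_points=3):
    """Estimate the local normal via PCA over a point and its neighbours.

    Returns None when the neighbourhood is too small or degenerate (nearly
    collinear), matching the "no significant normal" case in the text.
    """
    pts = np.asarray(points, dtype=float)
    if len(pts) < min_points:
        return None
    centered = pts - pts.mean(axis=0)
    # The eigenvector of the smallest covariance eigenvalue is the normal.
    w, v = np.linalg.eigh(np.cov(centered.T))
    if w[1] < 1e-9:  # two near-zero eigenvalues -> points are collinear
        return None
    return v[:, 0]
```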
In step 303, point cloud segmentation can be performed on the three-dimensional point based on the normal vector. If the three-dimensional normal vector of the local area is obtained, segmentation can be performed based on the three-dimensional normal vector. If the three-dimensional normal vector of the local area is not obtained, segmentation can be performed based on the two-dimensional normal vector of the local area.

If the angle between the three-dimensional normal vector of the local area and the normal vector of the travel plane of the movable platform is smaller than a first preset angle, the point can be divided into a point on the travel plane of the movable platform. If that angle is greater than a second preset angle, the point can be divided into an obstacle. If that angle is greater than or equal to the first preset angle and less than or equal to the second preset angle, the point can be marked as a point of unknown attribute; the first preset angle is smaller than the second preset angle.

If the angle between the two-dimensional normal vector of the local area and the normal vector of the travel plane of the movable platform is smaller than a third preset angle, the point can be divided into a point on the travel plane of the movable platform. If that angle is greater than a fourth preset angle, the point can be divided into an obstacle. If that angle is greater than or equal to the third preset angle and less than or equal to the fourth preset angle, the point can be marked as a point of unknown attribute; the third preset angle is smaller than the fourth preset angle.
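The threshold-based classification of the preceding paragraphs can be sketched as follows (the label strings are illustrative, and the example thresholds in the test merely pick values in the ranges the text suggests):

```python
import math

def classify_by_normal(normal, plane_normal, low_deg, high_deg):
    """Classify a point from the angle between its local normal and the
    travel-plane normal, using a preset angle pair (low_deg < high_deg)."""
    dot = sum(a * b for a, b in zip(normal, plane_normal))
    na = math.sqrt(sum(a * a for a in normal))
    nb = math.sqrt(sum(b * b for b in plane_normal))
    # Normals are unsigned directions, so use the absolute cosine.
    ang = math.degrees(math.acos(min(1.0, abs(dot) / (na * nb))))
    if ang < low_deg:
        return "travel_plane"
    if ang > high_deg:
        return "obstacle"
    return "unknown"
```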
The segmentation process is illustrated below with reference to FIGS. 5A and 5B. For ease of description, the movable platform is taken to be a vehicle traveling on the ground, where the ground may be a plane or a curved surface. Since the angle of the ground generally changes smoothly, the point can be segmented based on the angle between the normal vector of the local area and the normal vector of the ground. If the angle is small, the local area and the ground can be regarded approximately as two parallel surfaces; that is, the points in the local area will not obstruct the vehicle, so they can be segmented as ground points. As shown in FIG. 5A, the angle θ1 between the normal vector fA of the local area of point A and the ground normal, and the angle θ2 between the normal vector fB of the local area of point B and the ground normal, are both small; therefore, points A and B can both be segmented as ground points.

If the angle is large, the local area and the ground can be regarded approximately as two mutually perpendicular surfaces; that is, the local area is a vertical face relative to the ground, and the points on it will obstruct the vehicle, so they can be segmented as non-ground points, or more specifically as obstacles. As shown in FIG. 5B, the angle θ1 between the normal vector fA of the local area of point A and the ground normal, the angle θ2 between the normal vector fB of the local area of point B and the ground normal, and the angle θ3 between the normal vector fC of the local area of point C and the ground normal are all relatively large; therefore, points A, B and C can all be segmented as non-ground points.

In some embodiments, the first preset angle may be less than or equal to 35°, and the second preset angle may be greater than or equal to 65°. For example, the first preset angle is 30° or 20°; as another example, the second preset angle is 60° or 75°. In some embodiments, the third preset angle may be less than or equal to 20°, and the fourth preset angle may be greater than or equal to 70°. For example, the third preset angle is 15° or 10°; as another example, the fourth preset angle is 80° or 75°. The angle thresholds can be set flexibly according to actual needs and are not described further here.
In some embodiments, the point cloud segmented in the above manner may still contain multiple points for which segmentation failed, including points without significant normal vectors and unavailable points. To further segment these failed points, multiple frames of the point cloud can be acquired; the travel surface of the movable platform is fitted based on the multiple frames of the point cloud to obtain a fitting model of the travel surface; and secondary point cloud segmentation is performed, based on the fitting model, on the points in the multiple frames for which segmentation failed. The road-surface fitting can be implemented by polynomial fitting or other fitting methods, which the present disclosure does not limit.

Specifically, the pose information of the movable platform when it collects each frame of the multiple frames of the point cloud can be acquired; based on the pose information corresponding to each frame, each frame of the point cloud is transformed into a preset coordinate system; and the travel surface of the movable platform is fitted based on the points located on the travel plane of the movable platform in the transformed multiple frames of the point cloud. Optionally, the preset coordinate system may be the current coordinate system of the movable platform.

After the road-surface model is fitted, secondary point cloud segmentation can be performed on the points that failed to be segmented, based on the distance between those points and the fitting model. For example, if the distance between such a point and the fitting model is not greater than a preset distance threshold, the point is segmented as a point on the travel plane of the movable platform. As another example, if the distance between the point and the fitting model is greater than the preset distance threshold, the point is marked as a point of unknown attribute. In this way, the accuracy of point cloud segmentation can be improved.
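A hedged sketch of the polynomial road-surface fitting and secondary segmentation described above (the quadratic surface form, the distance threshold, and the label strings are assumptions; the disclosure leaves the exact fitting method open):

```python
import numpy as np

def fit_and_refine(ground_pts, failed_pts, dist_thresh=0.15):
    """Fit z = f(x, y) with a low-order polynomial over labelled ground
    points, then re-label previously failed points by residual distance."""
    g = np.asarray(ground_pts, dtype=float)
    x, y, z = g[:, 0], g[:, 1], g[:, 2]
    # Design matrix for a quadratic surface in x and y.
    A = np.column_stack([x**2, y**2, x * y, x, y, np.ones_like(x)])
    coef, *_ = np.linalg.lstsq(A, z, rcond=None)

    f = np.asarray(failed_pts, dtype=float)
    Af = np.column_stack([f[:, 0]**2, f[:, 1]**2, f[:, 0] * f[:, 1],
                          f[:, 0], f[:, 1], np.ones(len(f))])
    resid = np.abs(f[:, 2] - Af @ coef)
    return ["travel_plane" if r <= dist_thresh else "unknown" for r in resid]
```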
In some embodiments, the point cloud may be collected by a lidar. Different lidars may use different scanning modes. FIG. 6A shows several common radar scanning modes; the gray areas in the figure are the scanning ranges of the lidar, and the arrows indicate the scanning direction of the laser. In scanning mode 1, the lidar performs circular scanning, with a scanning angle θ1 equal to 360°. In scanning mode 2, the lidar scans back and forth over a relatively large angle θ2, where θ2 may be an angle greater than or equal to 135°. In scanning mode 3, the lidar scans over a relatively small fixed angle θ3, where θ3 may be an angle less than or equal to 45°. The above embodiments divide the scanning modes based on scanning angle; those skilled in the art will understand that the specific angles given are merely exemplary, and other angle thresholds may be used in practice to divide the scanning modes.

FIG. 6B shows a possible application scenario. A lidar can be installed on vehicle B, for example at the front of vehicle B, to collect the point cloud ahead of vehicle B. Of course, the lidar can also be installed on the top, the side or other parts of vehicle B, which is not repeated here. For ease of analysis, assume vehicle B is stationary and vehicle A is driving ahead of it; at successive times t1, t2 and t3, vehicle A gradually moves away from vehicle B.

In practice, since the number of points in each frame of the point cloud is small and insufficient for processing, multiple frames need to be accumulated into one accumulated frame, which is then processed. Different scanning modes, after accumulation, yield point clouds of different forms. Denote by θ the scanning angle within which the lidar can scan vehicle A during scanning. For ease of understanding, assume θ2 and θ3 above are 180° and 36° respectively, and that θ equals θ3. If the lidar scans in mode 1 or mode 2, it can see vehicle A for only 1/10 of its whole scanning period, so only 1/10 of the point cloud frames contain points on vehicle A, and the spacing between the points on vehicle A in different frames is relatively large, as shown by point cloud 1 in FIG. 6C. If the lidar scans in mode 3, it can see vehicle A throughout the whole scanning period, so every frame contains points on vehicle A; and because the time interval between two adjacent frames is small, the distance vehicle A moves between them is also small, so the spacing between the points on vehicle A in adjacent frames is small. Visually, the point cloud obtained with mode 3 therefore suffers from a severe "smearing" problem, as shown by point cloud 2 in FIG. 6C. Such a "smeared" point cloud is prone to errors during processing.

Therefore, when processing a point cloud collected by a lidar scanning in mode 3, segmentation can be performed using only a single frame. Since the approach of the present disclosure needs no grid or voxel map and no random sampling, is very lightweight, and can produce a segmentation result with a single pass over the point cloud, a fairly accurate segmentation result can be obtained even though a single frame contains few points. Meanwhile, in scenarios with high real-time requirements, one frame of segmentation results can be produced for every input frame, improving the real-time performance of segmentation. For lidars using scanning modes 1 and 2, multiple frames can further be accumulated, and secondary segmentation can be performed using the distances between the fitted road surface and the points, improving the accuracy of the segmentation results.
Further, each point in the point cloud can be labeled based on the segmentation result. The labels may include at least one of numbers, letters and symbols. Taking numeric labels as an example, bit 1 may denote a point on the travel surface of the movable platform, bit 0 a point off the travel surface, and bits 01 a point of unknown attribute.

After the labeled point cloud is obtained, it can be output. In embodiments where only one frame is used for segmentation, the output frame rate of the labeled point cloud can reach the input frame rate of the point cloud. In embodiments that accumulate n frames and fit the road model, the output frame rate of the labeled point cloud equals 1/n of the input frame rate.

In practice, the segmentation result can be used by the planning unit on the movable platform to plan the platform's driving state. For example, the planning unit can determine the positions of obstacles on the driving path based on the labels obtained from the segmentation result, and thus decide whether to control the speed and attitude of the movable platform to avoid them. The segmentation result can also be output to a multimedia system on the movable platform, such as a display screen or a voice playback system, to output multimedia prompt information to the user.
FIG. 7 is an overall flowchart of the point cloud segmentation method of an embodiment of the present disclosure.

In step 701, the point cloud output by the lidar can be acquired continuously; the frame rate of the point cloud may be 100 Hz or another value.

In step 702, the point cloud can be reorganized according to the lidar's scan times, for example into the form shown in FIG. 4. This step can also be omitted; in subsequent steps, adjacent points can be obtained directly from the point cloud according to the scan times for processing. Instead of reorganizing by scan time, the point cloud can also be reorganized according to the lidar's scanning angle.

In step 703, continuity analysis is performed on the points in the point cloud and their adjacent points to determine the unavailable points.

In step 704, the three-dimensional (3D) or two-dimensional (2D) normal vector of each point is extracted.

In step 705, point cloud segmentation is performed based on the normal vectors extracted in step 704, and the category label of each point in the point cloud is marked.

In step 706, the labeled point cloud is output; the output frame rate may equal the input frame rate, for example both 100 Hz.

In step 707, it can be determined whether n frames have currently been accumulated, where n can be predetermined based on the input frame rate of the downstream processing unit that processes the point cloud. If so, go to step 708; otherwise return to step 701.

In step 708, for the accumulated n frames, the attitude data of the movable platform when collecting the n frames can be acquired.

In step 709, the n frames can be transformed into the current coordinate system of the movable platform based on the attitude data corresponding to the n frames.

In step 710, ground-model fitting can be performed using the ground points in the n frames; the ground points used in this step may be those marked in step 705.

In step 711, the existing category labels can be optimized using the distances between the ground model and the points.

In step 712, the labeled point cloud can be output at a frame rate equal to 1/n of the input frame rate, for example (100/n) Hz.

In step 713, it is determined whether the program ends. If not, return to step 701 to continue point cloud segmentation.

Those skilled in the art will understand that in the above methods of the specific embodiments, the order in which the steps are written does not imply a strict execution order or constitute any limitation on the implementation process; the specific execution order of each step should be determined by its function and possible internal logic.
本公开实施例还提供一种点云分割装置,包括处理器,所述处理器用于执行以下步骤:
基于可移动平台上的激光雷达采集三维点云时的扫描角度和扫描时刻,确定与所述三维点云中的三维点在物理空间中邻近的邻近三维点;
获取所述三维点与所述三维点的邻近三维点所在的局部区域的法向量;
基于所述法向量对所述三维点进行点云分割。
在一些实施例中,所述激光雷达对所述三维点的扫描角度与对所述三维点的邻近三维点的扫描角度之间的角度差小于预设角度差;和/或所述激光雷达对所述三维点的扫描时间与对所述三维点的邻近三维点的扫描时间之间的时间差小于预设时长。
在一些实施例中,所述三维点云由单线激光雷达采集得到,所述三维点的邻近三维点包括所述单线激光雷达沿着第一方向移动时连续采集到的至少一个三维点,以及所述单线激光雷达沿着第二方向移动时连续采集到的至少一个三维点,所述第一方向不同于所述第二方向;或者所述三维点云由多线激光雷达采集得到,所述三维点的邻近三维点包括所述多线激光雷达中相邻的至少两线激光连续多次扫描得到的三维点;或者所述三维点云由阵列激光雷达采集得到,所述三维点的邻近三维点包括所述阵列激光雷达中的m×n的阵列块一次扫描得到的三维点,m和n均为大于1的整数。
在一些实施例中,所述处理器还用于:在获取所述局部区域的法向量之前,将所述三维点云中不可用的三维点标记为未知属性的点。
在一些实施例中,所述不可用的三维点包括连续的多个无效的邻近三维点,以及不存在邻近三维点的孤立点。
在一些实施例中,所述连续的多个无效的邻近三维点包括以下任意一种:多线激光雷达的相邻多线激光同一次扫描到的无效三维点;阵列激光雷达同一列或者同一行的连续多个激光源一次扫描到的无效三维点。
在一些实施例中,所述处理器用于:若获取到所述局部区域的三维法向量,基于所述三维法向量对所述三维点进行点云分割;和/或若未获取到所述局部区域的三维法向量,基于所述局部区域的二维法向量对所述三维点进行点云分割。
在一些实施例中,所述处理器用于:若所述局部区域的三维法向量与所述可移动平台行驶平面的法向量的夹角小于第一预设夹角,将所述三维点分割为所述可移动平台行驶平面上的点;和/或若所述局部区域的三维法向量与所述可移动平台行驶平面的法向量的夹角大于第二预设夹角,将所述三维点分割为障碍物;和/或若所述局部区域的三维法向量与所述可移动平台行驶平面的法向量的夹角大于或等于所述第一预设夹角,且小于或等于所述第二预设夹角,将所述三维点标记为未知属性的点;其中,所述第一预设夹角小于所述第二预设夹角。
在一些实施例中,所述第一预设夹角小于或等于35°,所述第二预设夹角大于或等于65°。
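上述按局部法向量与行驶平面法向量的夹角进行分类的规则,可以用下面的 Python 片段示意(阈值取文中给出的 35° 与 65°;法向量方向可正可反,故取夹角的绝对余弦):

```python
import math

def classify_by_normal(normal, ground_normal, a1_deg=35.0, a2_deg=65.0):
    """按局部法向量与行驶平面法向量的夹角对三维点分类。

    夹角 < a1_deg → 'ground';夹角 > a2_deg → 'obstacle';
    介于两者之间 → 'unknown'(未知属性的点)。
    """
    dot = sum(a * b for a, b in zip(normal, ground_normal))
    na = math.sqrt(sum(a * a for a in normal))
    nb = math.sqrt(sum(b * b for b in ground_normal))
    cosang = min(1.0, abs(dot) / (na * nb))  # 取绝对值,忽略法向量朝向
    ang = math.degrees(math.acos(cosang))
    if ang < a1_deg:
        return 'ground'
    if ang > a2_deg:
        return 'obstacle'
    return 'unknown'
```

二维法向量的分类逻辑相同,仅把阈值换成第三、第四预设夹角即可。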
在一些实施例中,所述处理器用于:若所述局部区域的二维法向量与所述可移动平台行驶平面的法向量的夹角小于第三预设夹角,将所述三维点分割为所述可移动平台行驶平面上的点;和/或若所述局部区域的二维法向量与所述可移动平台行驶平面的法向量的夹角大于第四预设夹角,将所述三维点分割为障碍物;和/或若所述局部区域的二维法向量与所述可移动平台行驶平面的法向量的夹角大于或等于所述第三预设夹角,且小于或等于所述第四预设夹角,将所述三维点标记为未知属性的点;其中,所述第三预设夹角小于所述第四预设夹角。
在一些实施例中,所述第三预设夹角小于或等于20°,所述第四预设夹角大于或等于70°。
在一些实施例中,在基于所述法向量对所述三维点进行点云分割之后,所述处理器还用于:获取多帧所述三维点云;基于多帧所述三维点云对所述可移动平台的行驶路面进行拟合,得到所述行驶路面的拟合模型;
基于所述拟合模型对多帧所述三维点云中分割失败的三维点进行二次点云分割。
在一些实施例中,所述处理器用于:获取所述可移动平台采集多帧所述三维点云中的每帧三维点云时的位姿信息;基于所述可移动平台采集所述每帧三维点云对应的位姿信息,将所述每帧三维点云变换到预设坐标系下;基于变换后的多帧所述三维点云中位于所述可移动平台行驶平面上的点,对所述可移动平台的行驶路面进行拟合。
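将每帧点云按采集时的位姿变换到预设坐标系,本质上是刚体变换 p' = R·p + t;下面是一个纯 Python 的示意实现:

```python
def transform_frame(points, rotation, translation):
    """将一帧点云按位姿(旋转矩阵 rotation、平移 translation)变换到目标坐标系。

    points: 三维点列表;rotation: 3x3 嵌套列表;translation: 长度为 3 的列表。
    对每个点计算 p' = R·p + t。
    """
    out = []
    for p in points:
        out.append([sum(rotation[i][j] * p[j] for j in range(3)) + translation[i]
                    for i in range(3)])
    return out

# 用法示例:单位旋转、沿 x 方向平移 1
identity = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
moved = transform_frame([[0, 0, 0], [1, 2, 3]], identity, [1, 0, 0])
```

对累积的多帧点云逐帧做该变换后,各帧地面点即可在同一坐标系下参与行驶路面的拟合。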
在一些实施例中,所述处理器用于:基于所述分割失败的三维点与所述拟合模型的距离,对所述分割失败的三维点进行二次点云分割。
在一些实施例中,所述处理器用于:若所述分割失败的三维点与所述拟合模型的距离小于或等于预设距离阈值,将所述分割失败的三维点分割为所述可移动平台行驶平面上的点;和/或若所述分割失败的三维点与所述拟合模型的距离大于预设距离阈值,将所述分割失败的三维点标记为未知属性的点。
在一些实施例中,在基于所述三维点的法向量对所述三维点进行点云分割之后,所述处理器还用于:基于点云分割结果,为所述三维点云中的各个三维点打标签。
在一些实施例中,在为所述三维点云中的各个三维点打标签之后,所述处理器还用于:输出带标签的所述三维点云;其中,带标签的所述三维点云的输出帧率与所述三维点云的输入帧率相等。
在一些实施例中,对所述三维点云进行点云分割得到的点云分割结果用于所述可移动平台上的规划单元对所述可移动平台的行驶状态进行规划。
在一些实施例中,所述激光雷达的扫描角度小于或等于45°。
本公开实施例的点云分割装置中处理器所执行的方法的具体实施例可参见前述方法实施例,此处不再赘述。
图8示出了本说明书实施例所提供的一种更为具体的数据处理装置硬件结构示意图,该设备可以包括:处理器801、存储器802、输入/输出接口803、通信接口804和总线805。其中处理器801、存储器802、输入/输出接口803和通信接口804通过总线805实现彼此之间在设备内部的通信连接。
处理器801可以采用通用的CPU(Central Processing Unit,中央处理器)、微处理器、应用专用集成电路(Application Specific Integrated Circuit,ASIC)、或者一个或多个集成电路等方式实现,用于执行相关程序,以实现本说明书实施例所提供的技术方案。
存储器802可以采用ROM(Read Only Memory,只读存储器)、RAM(Random Access Memory,随机存取存储器)、静态存储设备,动态存储设备等形式实现。存储器802可以存储操作系统和其他应用程序,在通过软件或者固件来实现本说明书实施例所提供的技术方案时,相关的程序代码保存在存储器802中,并由处理器801来调用执行。
输入/输出接口803用于连接输入/输出模块,以实现信息输入及输出。输入/输出模块可以作为组件配置在设备中(图中未示出),也可以外接于设备以提供相应功能。其中输入设备可以包括键盘、鼠标、触摸屏、麦克风、各类传感器等,输出设备可以包括显示器、扬声器、振动器、指示灯等。
通信接口804用于连接通信模块(图中未示出),以实现本设备与其他设备的通信交互。其中通信模块可以通过有线方式(例如USB、网线等)实现通信,也可以通过无线方式(例如移动网络、WIFI、蓝牙等)实现通信。
总线805包括一通路,在设备的各个组件(例如处理器801、存储器802、输入/输出接口803和通信接口804)之间传输信息。
需要说明的是,尽管上述设备仅示出了处理器801、存储器802、输入/输出接口803、通信接口804以及总线805,但是在具体实施过程中,该设备还可以包括实现正常运行所必需的其他组件。此外,本领域的技术人员可以理解的是,上述设备中也可以仅包含实现本说明书实施例方案所必需的组件,而不必包含图中所示的全部组件。
如图9所示,本公开实施例还提供一种可移动平台900,包括壳体901;点云采集装置902,设于所述壳体901上,用于采集三维点云;以及三维点云分割装置903,设于所述壳体901内,用于执行本公开任一实施例所述的方法。其中,所述可移动平台900可以是无人机、无人车、无人船、可移动机器人等设备,所述点云采集装置902可以是视觉传感器(例如双目视觉传感器、三目视觉传感器等)或者激光雷达。
本公开实施例还提供一种计算机可读存储介质,其上存储有计算机程序,该程序被处理器执行时实现前述任一实施例所述的方法的步骤。
计算机可读介质包括永久性和非永久性、可移动和非可移动媒体,可以由任何方法或技术来实现信息存储。信息可以是计算机可读指令、数据结构、程序的模块或其他数据。计算机的存储介质的例子包括,但不限于相变内存(PRAM)、静态随机存取存储器(SRAM)、动态随机存取存储器(DRAM)、其他类型的随机存取存储器(RAM)、只读存储器(ROM)、电可擦除可编程只读存储器(EEPROM)、快闪记忆体或其他内存技术、只读光盘只读存储器(CD-ROM)、数字多功能光盘(DVD)或其他光学存储、磁盒式磁带、磁带或磁盘存储或其他磁性存储设备,或任何其他非传输介质,可用于存储可以被计算设备访问的信息。按照本文中的界定,计算机可读介质不包括暂存电脑可读媒体(transitory media),如调制的数据信号和载波。
通过以上的实施方式的描述可知,本领域的技术人员可以清楚地了解到本说明书实施例可借助软件加必需的通用硬件平台的方式来实现。基于这样的理解,本说明书实施例的技术方案本质上或者说对现有技术做出贡献的部分可以以软件产品的形式体现出来,该计算机软件产品可以存储在存储介质中,如ROM/RAM、磁碟、光盘等,包括若干指令用以使得一台计算机设备(可以是个人计算机,服务器,或者网络设备等)执行本说明书实施例各个实施例或者实施例的某些部分所述的方法。
上述实施例阐明的系统、装置、模块或单元,具体可以由计算机芯片或实体实现,或者由具有某种功能的产品来实现。一种典型的实现设备为计算机,计算机的具体形式可以是个人计算机、膝上型计算机、蜂窝电话、相机电话、智能电话、个人数字助理、媒体播放器、导航设备、电子邮件收发设备、游戏控制台、平板计算机、可穿戴设备或者这些设备中的任意几种设备的组合。
以上实施例中的各种技术特征可以任意进行组合,只要特征之间的组合不存在冲突或矛盾即可;限于篇幅,未对各种组合进行一一描述,因此上述实施方式中各种技术特征的任意组合也属于本公开的范围。
本领域技术人员在考虑公开及实践这里公开的说明书后,将容易想到本公开的其它实施方案。本公开旨在涵盖本公开的任何变型、用途或者适应性变化,这些变型、用途或者适应性变化遵循本公开的一般性原理并包括本公开未公开的本技术领域中的公知常识或惯用技术手段。说明书和实施例仅被视为示例性的,本公开的真正范围和精神由下面的权利要求指出。
应当理解的是,本公开并不局限于上面已经描述并在附图中示出的精确结构,并且可以在不脱离其范围的情况下进行各种修改和改变。本公开的范围仅由所附的权利要求来限制。
以上所述仅为本公开的较佳实施例而已,并不用以限制本公开,凡在本公开的精神和原则之内,所做的任何修改、等同替换、改进等,均应包含在本公开保护的范围之内。

Claims (40)

  1. 一种三维点云分割方法,其特征在于,用于对可移动平台采集到的三维点云进行点云分割,所述方法包括:
    基于可移动平台上的激光雷达采集三维点云时的扫描角度和扫描时刻,确定与所述三维点云中的三维点在物理空间中邻近的邻近三维点;
    获取所述三维点与所述三维点的邻近三维点所在的局部区域的法向量;
    基于所述法向量对所述三维点进行点云分割。
  2. 根据权利要求1所述的方法,其特征在于,所述激光雷达对所述三维点的扫描角度与对所述三维点的邻近三维点的扫描角度之间的角度差小于预设角度差;和/或
    所述激光雷达对所述三维点的扫描时间与对所述三维点的邻近三维点的扫描时间之间的时间差小于预设时长。
  3. 根据权利要求2所述的方法,其特征在于,所述三维点云由单线激光雷达采集得到,所述三维点的邻近三维点包括所述单线激光雷达沿着第一方向移动时连续采集到的至少一个三维点,以及所述单线激光雷达沿着第二方向移动时连续采集到的至少一个三维点,所述第一方向不同于所述第二方向;或者
    所述三维点云由多线激光雷达采集得到,所述三维点的邻近三维点包括所述多线激光雷达中相邻的至少两线激光连续多次扫描得到的三维点;或者
    所述三维点云由阵列激光雷达采集得到,所述三维点的邻近三维点包括所述阵列激光雷达中的m×n的阵列块一次扫描得到的三维点,m和n均为大于1的整数。
  4. 根据权利要求1所述的方法,其特征在于,所述方法还包括:
    在获取所述局部区域的法向量之前,将所述三维点云中不可用的三维点标记为未知属性的点。
  5. 根据权利要求4所述的方法,其特征在于,所述不可用的三维点包括连续的多个无效的邻近三维点,以及不存在邻近三维点的孤立点。
  6. 根据权利要求5所述的方法,其特征在于,所述连续的多个无效的邻近三维点包括以下任意一种:
    多线激光雷达的相邻多线激光同一次扫描到的无效三维点;
    阵列激光雷达的同一列或者同一行的连续多个激光源一次扫描到的无效三维点。
  7. 根据权利要求1所述的方法,其特征在于,所述基于所述法向量对所述三维点进行点云分割,包括:
    若获取到所述局部区域的三维法向量,基于所述三维法向量对所述三维点进行点云分割;和/或
    若未获取到所述局部区域的三维法向量,基于所述局部区域的二维法向量对所述三维点进行点云分割。
  8. 根据权利要求7所述的方法,其特征在于,所述基于所述局部区域的三维法向量对所述三维点进行点云分割,包括:
    若所述局部区域的三维法向量与所述可移动平台行驶平面的法向量的夹角小于第一预设夹角,将所述三维点分割为所述可移动平台行驶平面上的点;和/或
    若所述局部区域的三维法向量与所述可移动平台行驶平面的法向量的夹角大于第二预设夹角,将所述三维点分割为障碍物;和/或
    若所述局部区域的三维法向量与所述可移动平台行驶平面的法向量的夹角大于或等于所述第一预设夹角,且小于或等于所述第二预设夹角,将所述三维点标记为未知属性的点;
    其中,所述第一预设夹角小于所述第二预设夹角。
  9. 根据权利要求8所述的方法,其特征在于,所述第一预设夹角小于或等于35°,所述第二预设夹角大于或等于65°。
  10. 根据权利要求7所述的方法,其特征在于,所述基于所述局部区域的二维法向量对所述三维点进行点云分割,包括:
    若所述局部区域的二维法向量与所述可移动平台行驶平面的法向量的夹角小于第三预设夹角,将所述三维点分割为所述可移动平台行驶平面上的点;和/或
    若所述局部区域的二维法向量与所述可移动平台行驶平面的法向量的夹角大于第四预设夹角,将所述三维点分割为障碍物;和/或
    若所述局部区域的二维法向量与所述可移动平台行驶平面的法向量的夹角大于或等于所述第三预设夹角,且小于或等于所述第四预设夹角,将所述三维点标记为未知属性的点;
    其中,所述第三预设夹角小于所述第四预设夹角。
  11. 根据权利要求10所述的方法,其特征在于,所述第三预设夹角小于或等于20°,所述第四预设夹角大于或等于70°。
  12. 根据权利要求1所述的方法,其特征在于,在基于所述法向量对所述三维点进行点云分割之后,所述方法还包括:
    获取多帧所述三维点云;
    基于多帧所述三维点云对所述可移动平台的行驶路面进行拟合,得到所述行驶路面的拟合模型;
    基于所述拟合模型对多帧所述三维点云中分割失败的三维点进行二次点云分割。
  13. 根据权利要求12所述的方法,其特征在于,所述基于多帧所述三维点云对所述可移动平台的行驶路面进行拟合,包括:
    获取所述可移动平台采集多帧所述三维点云中的每帧三维点云时的位姿信息;
    基于所述可移动平台采集所述每帧三维点云对应的位姿信息,将所述每帧三维点云变换到预设坐标系下;
    基于变换后的多帧所述三维点云中位于所述可移动平台行驶平面上的点,对所述可移动平台的行驶路面进行拟合。
  14. 根据权利要求12所述的方法,其特征在于,所述基于所述拟合模型对多帧所述三维点云中分割失败的三维点进行二次点云分割,包括:
    基于所述分割失败的三维点与所述拟合模型的距离,对所述分割失败的三维点进行二次点云分割。
  15. 根据权利要求14所述的方法,其特征在于,所述基于所述分割失败的三维点与所述拟合模型的距离,对所述分割失败的三维点进行二次点云分割,包括:
    若所述分割失败的三维点与所述拟合模型的距离小于或等于预设距离阈值,将所述分割失败的三维点分割为所述可移动平台行驶平面上的点;和/或
    若所述分割失败的三维点与所述拟合模型的距离大于预设距离阈值,将所述分割失败的三维点标记为未知属性的点。
  16. 根据权利要求1所述的方法,其特征在于,在基于所述三维点的法向量对所述三维点进行点云分割之后,所述方法还包括:
    基于点云分割结果,为所述三维点云中的各个三维点打标签。
  17. 根据权利要求16所述的方法,其特征在于,在为所述三维点云中的各个三维点打标签之后,所述方法还包括:
    输出带标签的所述三维点云;
    其中,带标签的所述三维点云的输出帧率与所述三维点云的输入帧率相等。
  18. 根据权利要求1所述的方法,其特征在于,对所述三维点云进行点云分割得到的点云分割结果用于所述可移动平台上的规划单元对所述可移动平台的行驶状态进行规划。
  19. 根据权利要求1所述的方法,其特征在于,所述激光雷达的扫描角度小于或等于45°。
  20. 一种三维点云分割装置,包括处理器,其特征在于,所述三维点云分割装置用于对可移动平台采集到的三维点云进行点云分割,所述处理器用于执行以下步骤:
    基于可移动平台上的激光雷达采集三维点云时的扫描角度和扫描时刻,确定与所述三维点云中的三维点在物理空间中邻近的邻近三维点;
    获取所述三维点与所述三维点的邻近三维点所在的局部区域的法向量;
    基于所述法向量对所述三维点进行点云分割。
  21. 根据权利要求20所述的装置,其特征在于,所述激光雷达对所述三维点的扫描角度与对所述三维点的邻近三维点的扫描角度之间的角度差小于预设角度差;和/或
    所述激光雷达对所述三维点的扫描时间与对所述三维点的邻近三维点的扫描时间之间的时间差小于预设时长。
  22. 根据权利要求21所述的装置,其特征在于,所述三维点云由单线激光雷达采集得到,所述三维点的邻近三维点包括所述单线激光雷达沿着第一方向移动时连续采集到的至少一个三维点,以及所述单线激光雷达沿着第二方向移动时连续采集到的至少一个三维点,所述第一方向不同于所述第二方向;或者
    所述三维点云由多线激光雷达采集得到,所述三维点的邻近三维点包括所述多线激光雷达中相邻的至少两线激光连续多次扫描得到的三维点;或者
    所述三维点云由阵列激光雷达采集得到,所述三维点的邻近三维点包括所述阵列激光雷达中的m×n的阵列块一次扫描得到的三维点,m和n均为大于1的整数。
  23. 根据权利要求20所述的装置,其特征在于,所述处理器还用于:
    在获取所述局部区域的法向量之前,将所述三维点云中不可用的三维点标记为未知属性的点。
  24. 根据权利要求23所述的装置,其特征在于,所述不可用的三维点包括连续的多个无效的邻近三维点,以及不存在邻近三维点的孤立点。
  25. 根据权利要求24所述的装置,其特征在于,所述连续的多个无效的邻近三维点包括以下任意一种:
    多线激光雷达的相邻多线激光同一次扫描到的无效三维点;
    阵列激光雷达的同一列或者同一行的连续多个激光源一次扫描到的无效三维点。
  26. 根据权利要求20所述的装置,其特征在于,所述处理器用于:
    若获取到所述局部区域的三维法向量,基于所述三维法向量对所述三维点进行点云分割;和/或
    若未获取到所述局部区域的三维法向量,基于所述局部区域的二维法向量对所述三维点进行点云分割。
  27. 根据权利要求26所述的装置,其特征在于,所述处理器用于:
    若所述局部区域的三维法向量与所述可移动平台行驶平面的法向量的夹角小于第一预设夹角,将所述三维点分割为所述可移动平台行驶平面上的点;和/或
    若所述局部区域的三维法向量与所述可移动平台行驶平面的法向量的夹角大于第二预设夹角,将所述三维点分割为障碍物;和/或
    若所述局部区域的三维法向量与所述可移动平台行驶平面的法向量的夹角大于或等于所述第一预设夹角,且小于或等于所述第二预设夹角,将所述三维点标记为未知属性的点;
    其中,所述第一预设夹角小于所述第二预设夹角。
  28. 根据权利要求27所述的装置,其特征在于,所述第一预设夹角小于或等于35°,所述第二预设夹角大于或等于65°。
  29. 根据权利要求26所述的装置,其特征在于,所述处理器用于:
    若所述局部区域的二维法向量与所述可移动平台行驶平面的法向量的夹角小于第三预设夹角,将所述三维点分割为所述可移动平台行驶平面上的点;和/或
    若所述局部区域的二维法向量与所述可移动平台行驶平面的法向量的夹角大于第四预设夹角,将所述三维点分割为障碍物;和/或
    若所述局部区域的二维法向量与所述可移动平台行驶平面的法向量的夹角大于或等于所述第三预设夹角,且小于或等于所述第四预设夹角,将所述三维点标记为未知属性的点;
    其中,所述第三预设夹角小于所述第四预设夹角。
  30. 根据权利要求29所述的装置,其特征在于,所述第三预设夹角小于或等于20°,所述第四预设夹角大于或等于70°。
  31. 根据权利要求20所述的装置,其特征在于,在基于所述法向量对所述三维点进行点云分割之后,所述处理器还用于:
    获取多帧所述三维点云;
    基于多帧所述三维点云对所述可移动平台的行驶路面进行拟合,得到所述行驶路面的拟合模型;
    基于所述拟合模型对多帧所述三维点云中分割失败的三维点进行二次点云分割。
  32. 根据权利要求31所述的装置,其特征在于,所述处理器用于:
    获取所述可移动平台采集多帧所述三维点云中的每帧三维点云时的位姿信息;
    基于所述可移动平台采集所述每帧三维点云对应的位姿信息,将所述每帧三维点云变换到预设坐标系下;
    基于变换后的多帧所述三维点云中位于所述可移动平台行驶平面上的点,对所述可移动平台的行驶路面进行拟合。
  33. 根据权利要求31所述的装置,其特征在于,所述处理器用于:
    基于所述分割失败的三维点与所述拟合模型的距离,对所述分割失败的三维点进行二次点云分割。
  34. 根据权利要求33所述的装置,其特征在于,所述处理器用于:
    若所述分割失败的三维点与所述拟合模型的距离小于或等于预设距离阈值,将所述分割失败的三维点分割为所述可移动平台行驶平面上的点;和/或
    若所述分割失败的三维点与所述拟合模型的距离大于预设距离阈值,将所述分割失败的三维点标记为未知属性的点。
  35. 根据权利要求20所述的装置,其特征在于,在基于所述三维点的法向量对所述三维点进行点云分割之后,所述处理器还用于:
    基于点云分割结果,为所述三维点云中的各个三维点打标签。
  36. 根据权利要求35所述的装置,其特征在于,在为所述三维点云中的各个三维点打标签之后,所述处理器还用于:
    输出带标签的所述三维点云;
    其中,带标签的所述三维点云的输出帧率与所述三维点云的输入帧率相等。
  37. 根据权利要求20所述的装置,其特征在于,对所述三维点云进行点云分割得到的点云分割结果用于所述可移动平台上的规划单元对所述可移动平台的行驶状态进行规划。
  38. 根据权利要求20所述的装置,其特征在于,所述激光雷达的扫描角度小于或等于45°。
  39. 一种可移动平台,其特征在于,包括:
    壳体;
    点云采集装置,设于所述壳体上,用于采集三维点云;以及
    三维点云分割装置,设于所述壳体内,用于执行权利要求1至19任意一项所述的方法。
  40. 一种计算机可读存储介质,其特征在于,其上存储有计算机指令,该指令被处理器执行时实现权利要求1至19任意一项所述的方法。
PCT/CN2020/141076 2020-12-29 2020-12-29 三维点云分割方法和装置、可移动平台 WO2022141116A1 (zh)
