WO2022099620A1 - Three-dimensional point cloud segmentation method and apparatus, and mobile platform - Google Patents


Info

Publication number
WO2022099620A1
Authority
WO
WIPO (PCT)
Prior art date
Application number
PCT/CN2020/128711
Other languages
French (fr)
Chinese (zh)
Inventor
李星河
韩路新
于亦奇
Original Assignee
深圳市大疆创新科技有限公司
Priority date
Filing date
Publication date
Application filed by 深圳市大疆创新科技有限公司
Priority to CN202080071116.8A (published as CN114631124A)
Priority to PCT/CN2020/128711
Publication of WO2022099620A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition

Definitions

  • the present disclosure relates to the technical field of computer vision, and in particular, to a three-dimensional point cloud segmentation method and device, and a movable platform.
  • A path planning module on the movable platform can perform decision planning on the traveling state (e.g., pose and speed) of the movable platform.
  • To this end, the point cloud acquisition device on the movable platform needs to collect a 3D point cloud of the surrounding environment and perform point cloud segmentation to distinguish the ground and obstacles in the 3D point cloud, and to further distinguish dynamic and static objects among the obstacles. Therefore, point cloud segmentation is an important part of decision planning for the driving state of the movable platform.
  • the embodiments of the present disclosure propose a three-dimensional point cloud segmentation method and device, and a movable platform, so as to accurately perform point cloud segmentation on the three-dimensional point cloud collected by the movable platform.
  • A method for segmenting a 3D point cloud is provided, which is used to segment a 3D point cloud collected by a movable platform, the method comprising: acquiring multiple candidate points in the 3D point cloud; searching the multiple candidate points on the v-disparity plane, and determining target candidate points located on the driving road surface of the movable platform among the multiple candidate points; and fitting a model of the driving road surface based on the target candidate points, and performing a second point cloud segmentation on the three-dimensional point cloud based on the model of the driving road surface on the u-disparity plane to obtain a point cloud segmentation result.
  • A three-dimensional point cloud segmentation device is provided, including a processor, the three-dimensional point cloud segmentation device being configured to perform point cloud segmentation on a three-dimensional point cloud collected by a movable platform, and the processor being configured to perform the following steps: acquiring multiple candidate points in the three-dimensional point cloud; searching the multiple candidate points on the v-disparity plane, and determining target candidate points located on the driving road surface of the movable platform among the multiple candidate points; and fitting a model of the driving road surface based on the target candidate points, and performing a second point cloud segmentation on the three-dimensional point cloud based on the model of the driving road surface on the u-disparity plane.
  • A movable platform is provided, which is characterized by comprising: a casing; a point cloud collecting device, disposed on the casing, for collecting a three-dimensional point cloud; and a three-dimensional point cloud segmentation device, disposed in the casing, for executing the method described in any embodiment of the present disclosure.
  • a computer-readable storage medium on which a computer program is stored, and when the program is executed by a processor, implements the method described in any of the embodiments of the present disclosure.
  • In the embodiments of the present disclosure, the candidate points are searched on the v-disparity plane, the target candidate points located on the driving road surface of the movable platform among the plurality of candidate points are determined, and the model of the driving road surface is then fitted based on the target candidate points. This model is used as the benchmark for point cloud segmentation, and the second point cloud segmentation is performed on the three-dimensional point cloud based on the model of the driving road surface on the u-disparity plane, which improves the accuracy of point cloud segmentation and makes it possible to accurately segment areas that are otherwise difficult to segment, such as slopes and distant regions in the 3D point cloud.
  • Figure 1 is a schematic diagram of a point cloud segmentation process of some embodiments.
  • FIG. 2 is a schematic diagram of a decision-making planning process during travel of a mobile platform according to some embodiments.
  • FIG. 3 is a flowchart of a point cloud segmentation method according to an embodiment of the present disclosure.
  • FIGS. 4A and 4B are schematic diagrams of the uvd coordinate system according to an embodiment of the present disclosure.
  • FIG. 5 is a schematic diagram of a projection process of a u-disparity plane according to an embodiment of the present disclosure.
  • FIG. 6 is a schematic diagram of the relationship between parallax and depth according to an embodiment of the present disclosure.
  • FIG. 7 is a schematic diagram of a point cloud segmentation apparatus according to an embodiment of the present disclosure.
  • FIG. 8 is a schematic diagram of a movable platform according to an embodiment of the present disclosure.
  • Although the terms first, second, third, etc. may be used in this disclosure to describe various pieces of information, such information should not be limited by these terms. These terms are only used to distinguish the same type of information from each other.
  • For example, the first information may also be referred to as the second information, and similarly, the second information may also be referred to as the first information, without departing from the scope of the present disclosure.
  • The word "if" as used herein can be interpreted as "at the time of" or "when" or "in response to determining."
  • a path planning module on the movable platform can be used to make decision planning on the traveling state of the movable platform.
  • Point cloud segmentation is an important part of decision planning for the driving state of the movable platform.
  • FIG. 1 is a schematic diagram of a point cloud segmentation process in some embodiments.
  • A 3D point cloud can be collected by a point cloud collection device on the movable platform; then, in step 102, for a movable platform (such as an unmanned vehicle) running on the ground, the collected 3D point cloud can be divided into ground points and non-ground points.
  • More generally, the collected 3D point cloud can be segmented so that the 3D points in the 3D point cloud are divided into points on the road surface on which the movable platform is driving and points not on that road surface.
  • the following description will be made by taking the driving road as the ground.
  • In step 103, if a 3D point is a ground point, step 104 is performed to add a ground point label to the 3D point; otherwise, step 105 is performed to carry out dynamic-static segmentation on the 3D point, that is, to segment the 3D point into a stationary static point or a dynamic point in motion.
  • In step 106, if a 3D point is a static point, step 107 is performed to add a static point label to the 3D point; otherwise, step 108 is performed to add a dynamic point label to the 3D point. In step 109, the labelled 3D point cloud is output to downstream modules.
  • All or part of the three-dimensional points in the three-dimensional point cloud can be labelled in this way.
  • The label may include at least one of a first label used to characterize whether the 3D point is a ground point and a second label used to characterize whether the 3D point is a static point, and may also include labels used to characterize other information about the 3D point.
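The labelling flow of steps 103 to 109 can be sketched in Python. The predicate functions `is_ground_point` and `is_static_point` are hypothetical stand-ins for the segmentation steps described above, not part of the disclosure:

```python
def label_point(point, is_ground_point, is_static_point):
    """Label one 3D point following FIG. 1: ground test (step 103), then
    dynamic-static segmentation (steps 105-106) for non-ground points.
    The two predicates are hypothetical stand-ins for the segmentation."""
    if is_ground_point(point):
        return "ground"          # step 104
    if is_static_point(point):
        return "static"          # step 107
    return "dynamic"             # step 108

def label_cloud(points, is_ground_point, is_static_point):
    """Step 109: return the labelled cloud for downstream modules."""
    return [(p, label_point(p, is_ground_point, is_static_point))
            for p in points]
```

A downstream planning module would then branch on these labels as in FIG. 2.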
  • the downstream module may be a planning module on a movable platform, such as an electronic control unit (Electronic Control Unit, ECU), a central processing unit (Central Processing Unit, CPU) and the like.
  • The planning module can perform decision planning on the driving state of the movable platform based on the labels of the 3D points.
  • the driving state may include at least one of a pose and a speed of the movable platform.
  • FIG. 2 is a schematic diagram of the decision planning process of some embodiments.
  • the planning module can receive the 3D point cloud and read the tags carried in the 3D point cloud.
  • In step 203, it may be determined, based on the label, whether a three-dimensional point in the three-dimensional point cloud is a point on the road (e.g., ground) on which the movable platform travels.
  • If so, step 204 is performed: identify the three-dimensional points belonging to the lane line among the ground points, and determine the pose of the movable platform according to the direction of the lane line, so that the movable platform drives along the direction of the lane line.
  • Otherwise, step 205 is executed to determine whether the non-ground point is a static point. If so, step 206 is executed to determine the pose of the movable platform according to the orientation of the static point.
  • If not, step 207 is executed to determine at least one of the pose and speed of the movable platform according to the orientation and speed of the dynamic point. For example, if the dynamic point is on the pre-planned travel path of the movable platform, and the moving speed of the dynamic point is less than or equal to the moving speed of the movable platform, the movable platform is controlled to slow down, or the pose of the movable platform is adjusted so that the movable platform bypasses the dynamic point.
  • the movable platform can be controlled to travel at the same speed as the dynamic point.
  • point cloud segmentation is an important part of decision-making and planning for the driving state of the mobile platform, and accurate point cloud segmentation is helpful for accurate decision-making and planning of the driving state of the mobile platform.
  • Current point cloud segmentation methods are mainly based on local features. Specifically, the three-dimensional point cloud is transformed into xyz space, and rasterization or a proximity search is performed in xyz space to find the neighbouring points around a candidate point; the probability that the candidate point belongs to the ground is then determined according to the thickness, height and normal vector of the neighbouring point cloud. This approach degrades noticeably when dealing with distant parts of the 3D point cloud, its segmentation accuracy is low, it is difficult to establish a global ground model, and correct judgments cannot be made for non-planar road surfaces.
  • the present disclosure provides a three-dimensional point cloud segmentation method, which is used to perform point cloud segmentation on a three-dimensional point cloud collected by a movable platform. As shown in FIG. 3 , the method includes:
  • Step 301: Obtain multiple candidate points in the 3D point cloud.
  • Step 302: Search the multiple candidate points on the v-disparity plane, and determine target candidate points located on the driving road surface of the movable platform among the multiple candidate points.
  • Step 303: Fit a model of the driving road surface based on the target candidate points, and perform a second point cloud segmentation on the three-dimensional point cloud based on the model of the driving road surface on the u-disparity plane to obtain a point cloud segmentation result.
  • a three-dimensional point cloud may be collected by a point cloud collection device (eg, lidar, vision sensor, etc.) on the movable platform.
  • the movable platform may be an unmanned vehicle, an unmanned aerial vehicle, an unmanned ship, a movable robot, and the like.
  • the candidate points may be some or all of the points in the three-dimensional point cloud.
  • semantic segmentation may be performed on the three-dimensional point cloud, and multiple candidate points in the three-dimensional point cloud may be acquired based on the semantic segmentation result.
  • the categories of multiple 3D points in the 3D point cloud can be obtained, such as vehicle category, traffic light category, pedestrian category, lane line category, etc.
  • candidate points can be determined based on the categories of the three-dimensional points. For example, three-dimensional points of the lane line category are determined as candidate points.
  • the 3D point cloud may also be pre-segmented on the u-disparity plane, and candidate points are determined based on the pre-segmentation result.
  • Specifically, multiple candidate points can be obtained based on the projection density of the 3D point cloud on the u-disparity plane and the first reference projection density, on the u-disparity plane, of a plane model of the driving road.
  • the traveling road surface of the movable platform is assumed to be a plane, a first reference projection density is determined based on the plane, and then pre-segmentation is performed based on the first reference projection density to determine candidate points on the traveling surface of the movable platform.
  • the pre-segmentation on the u-disparity plane can improve the signal-to-noise ratio of the candidate points, so that the candidate points can be selected in long-distance regions (with less signal amount), and the distance and accuracy of the search on the v-disparity plane can be improved.
  • each coordinate axis in the uvd space can be determined based on the direction of the driving road surface of the movable platform.
  • As shown in FIG. 4A, when the movable platform 401 is driving on the horizontal ground 402, the u-axis, v-axis and d-axis (i.e., the disparity axis) can be a coordinate axis on the ground perpendicular to the traveling direction of the movable platform, a coordinate axis parallel to the traveling direction of the movable platform, and a coordinate axis in the height direction of the movable platform (i.e., vertically upward).
  • As shown in FIG. 4B, when the movable platform 404 is a glass-cleaning robot traveling on the vertical glass plane 403, the u-axis, v-axis and d-axis may likewise be a coordinate axis on the glass plane perpendicular to the traveling direction of the movable platform, a coordinate axis parallel to the traveling direction, and a coordinate axis in the height direction of the movable platform.
  • each coordinate axis of the uvd space may also point in other directions, and the specific direction may be set according to actual needs, which is not limited in the present disclosure.
  • the u-disparity plane may be pre-divided into multiple grids.
  • For a first grid of the plurality of grids, if the ratio of the projection density to the first reference projection density is greater than or equal to a first preset ratio, the points of the three-dimensional point cloud that are projected into the first grid are determined as candidate points.
  • the first preset ratio is greater than 1.
  • the u-disparity plane can be pre-divided into multiple grids, and each grid can be of the same size in order to compare projection densities.
  • Each black point represents a projection point of a 3D point in the 3D point cloud on the u-disparity plane, and the number of projected points in a grid is equal to the number of 3D points in the 3D point cloud projected to the grid.
  • the projected density within a grid can be determined as the ratio of the number of projected points within the grid to the area of the grid.
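The per-grid projection density described above can be sketched as follows (illustrative Python with NumPy; the grid sizes `u_bin` and `d_bin` are assumed values, chosen only so that all cells have equal area, as the text requires):

```python
import numpy as np

def u_disparity_density(u, d, u_max, d_max, u_bin=8.0, d_bin=1.0):
    """Count projection points per equal-sized cell of the u-disparity grid
    and divide by the cell area to obtain the projection density.
    u_bin and d_bin are assumed grid sizes, not values from the disclosure."""
    u_edges = np.arange(0.0, u_max + u_bin, u_bin)
    d_edges = np.arange(0.0, d_max + d_bin, d_bin)
    counts, _, _ = np.histogram2d(u, d, bins=[u_edges, d_edges])
    return counts / (u_bin * d_bin)
```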
  • In the area closer to the vehicle (referred to as the first area), the three-dimensional points lie on a plane parallel to the driving direction of the vehicle (that is, the direction of the disparity coordinate axis) or at a small included angle to it; these three-dimensional points extend along the disparity coordinate axis, and their parallax variation range is large, that is, the density of three-dimensional points within the first area is low.
  • There is an obstacle in the area d3 to d4 away from the vehicle (referred to as the second area); the plane where the obstacle is located is generally perpendicular to the driving direction of the vehicle or at a large included angle to it, and will hinder the vehicle from moving forward.
  • The parallax variation range of the three-dimensional points in the second area is small, and the density of the three-dimensional points in the second area is large. Therefore, as long as the first reference density of the travel plane of the movable platform is known, it can be roughly inferred whether a three-dimensional point is a point on the travel plane or a point outside the travel plane (e.g., on an obstacle).
  • The first preset ratio λ (that is, the redundancy of the segmentation) is set to a value greater than 1 here, so as to provide a certain amount of redundancy and reduce the selection error of candidate points.
  • λ can be fixedly set according to the model of the vision sensor, or dynamically set according to the actual application scenario. When the reliability of the vision sensor is low, λ can be set to a larger value; otherwise, λ can be set to a smaller value. For example, when the focal length of the vision sensor is long, or the surrounding light is dim, λ can be set to a larger value.
  • the first reference projection density of the plane model on the u-disparity plane is proportional to disparity values of points on the plane model.
  • Specifically, the first reference projection density may be determined according to the baseline length of the vision sensor, the intercept of the plane model on the first coordinate axis in the coordinate system of the vision sensor, and the disparity values of points on the plane model. Assume that the plane model of the driving road is y = k·z + c, where z is the depth, k is the slope of the driving road surface, and c is the intercept on the first coordinate axis.
  • The ratio of the intercept to the baseline length b of the vision sensor may be calculated, and the product of this ratio and the disparity value of a point on the plane model may be determined as the first reference projection density. Then the first reference projection density of the points on the plane model on the u-disparity plane can be recorded as ρ1 = (c/b)·d.
  • the first coordinate axis is the coordinate axis in the height direction of the movable platform.
  • the first coordinate axis may be a vertically upward coordinate axis.
  • the first coordinate axis may be a coordinate axis in the horizontal direction.
  • It can be seen that the first reference projection density is related only to the intercept, the baseline length and the disparity value, and has nothing to do with the distance (z value) of the ground. Therefore, the driving road of the movable platform is first assumed to be a plane, and the first reference projection density determined based on this plane model is used to segment on the u-disparity plane and determine candidate points.
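The pre-segmentation rule can then be sketched by comparing each cell's density against the first reference projection density ρ1 = (c/b)·d (illustrative Python; the intercept `c`, baseline `b` and redundancy ratio `lam` are hypothetical values, since the disclosure leaves them to the implementation):

```python
import numpy as np

def select_candidate_cells(density, d_centers, c=1.2, b=0.12, lam=1.5):
    """Mark u-disparity cells whose projection density reaches lam times the
    first reference density rho1(d) = (c / b) * d. density has shape
    (n_u, n_d); d_centers holds the disparity value of each column.
    c, b and lam are assumed example values."""
    rho_ref = (c / b) * np.asarray(d_centers, dtype=float)
    return density >= lam * rho_ref   # broadcasts over the u axis
```

Points projected into the marked cells would then be taken as candidate points for the v-disparity search.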
  • On the one hand, this reduces the amount of calculation; on the other hand, it improves the signal-to-noise ratio of the candidate points, so that candidate points can be selected even at long distances (where the signal amount is small), which improves the subsequent search distance and accuracy on the v-disparity plane. Areas that are otherwise difficult to segment can also be accurately cut out.
  • the point cloud acquisition device does not acquire a complete point cloud frame due to a clock reset or other reasons. Therefore, the acquired 3D point cloud may include both valid points and invalid points.
  • multiple candidate points may be acquired only from valid points in the three-dimensional point cloud. Among them, invalid points can be set as invalid to avoid invalid points being selected as candidate points.
  • outlier points may also be filtered out of the three-dimensional points, and multiple candidate points in the filtered three-dimensional point cloud may be obtained.
  • Outliers are points whose values fall outside the valid range; they can be removed from the 3D points by filtering.
  • Before the projection, preset scale transformation parameters can be obtained; based on the scale transformation parameters, scale transformation is performed on the u-coordinate value of each 3D point in the 3D point cloud, and the scaled 3D point cloud is projected onto the u-disparity plane.
  • the scale transformation parameter scale of a three-dimensional point is used to enlarge or reduce the u-coordinate value of the three-dimensional point.
  • If the scale transformation parameter scale is greater than 1, the u-coordinate value of the three-dimensional point is enlarged, that is, one row of projection points on the u-disparity plane is mapped to scale rows after transformation.
  • If the scale transformation parameter scale is less than 1, the u-coordinate value of the three-dimensional point is reduced, that is, multiple rows of projection points on the u-disparity plane are mapped to one row after transformation.
  • the value of the scale transformation parameter of a three-dimensional point corresponds to the u-coordinate value of the three-dimensional point. For example, if the u-coordinate value of the first three-dimensional point in the three-dimensional point cloud is smaller than the first preset coordinate value, the scaling parameter of the first three-dimensional point is greater than 1. For another example, if the u-coordinate value of the first three-dimensional point in the three-dimensional point cloud is greater than or equal to the second preset coordinate value, the scaling parameter of the first three-dimensional point is less than 1. Wherein, the first preset coordinate value may be less than or equal to the second preset coordinate value.
  • the first preset coordinate value is less than the second preset coordinate value, then when the u coordinate value of the first three-dimensional point in the three-dimensional point cloud is greater than or equal to the first preset coordinate value, and When it is less than the second preset coordinate value, the scale transformation parameter of the first three-dimensional point is equal to 1.
  • multiple scaling parameters greater than 1 and/or multiple scaling parameters less than 1 may be set. For example, if the u-coordinate value of the first three-dimensional point is less than the third preset coordinate value, set the scale transformation parameter of the first three-dimensional point to the first parameter value; if the u-coordinate value of the first three-dimensional point is greater than or equal to the The third preset coordinate value is smaller than the first preset coordinate value, and the scale transformation parameter of the first three-dimensional point is set as the second parameter value. Wherein, both the first parameter value and the second parameter value are greater than 1, and the first parameter value is greater than the second parameter value, and the third preset coordinate value is less than the first preset coordinate value.
  • Similarly, if the u-coordinate value of the first three-dimensional point is greater than or equal to the second preset coordinate value and less than a fourth preset coordinate value, the scale transformation parameter of the first three-dimensional point is set to a third parameter value; if the u-coordinate value of the first three-dimensional point is greater than or equal to the fourth preset coordinate value, the scale transformation parameter of the first three-dimensional point is set to a fourth parameter value.
  • the fourth preset coordinate value is greater than the second preset coordinate value, and the fourth parameter value is less than the third parameter value.
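The piecewise choice of the scale transformation parameter can be sketched as follows (illustrative Python; the concrete thresholds and parameter values are hypothetical placeholders that satisfy the ordering constraints above):

```python
def scale_parameter(u, thresholds=(40.0, 80.0, 120.0, 160.0),
                    values=(4.0, 2.0, 1.0, 0.5, 0.25)):
    """Piecewise scale transformation parameter as a function of the
    u-coordinate. thresholds correspond to the (third, first, second,
    fourth) preset coordinate values; values to the (first, second, unity,
    third, fourth) parameter values. All numbers are illustrative."""
    for t, v in zip(thresholds, values):
        if u < t:
            return v
    return values[-1]
```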
  • Table 1 shows the corresponding relationship between the scale parameter in some embodiments and the number of rows before and after transformation. Those skilled in the art can understand that Table 1 is only an exemplary description and is not used to limit the present disclosure.
  • For example, when the scale transformation parameter scale is 4.0, the projection points of one row before transformation are mapped to four rows after transformation: the projection points of the 3rd row before transformation are mapped to the 0th to 3rd rows after transformation, the projection points of the 4th row before transformation are mapped to the 4th to 7th rows after transformation, and so on.
  • The starting row number of the projection points before transformation is 3 because the error corresponding to points with too small a disparity value is relatively large; therefore, only points with a disparity value greater than or equal to 3 are processed here. Those skilled in the art can understand that, without considering this error, rows before the 3rd row can also be used; in other cases, the starting row number before transformation can also be set to a value greater than 3.
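The row mapping in the example above (integer magnifying scale, starting row 3) can be sketched as:

```python
def transformed_rows(row, scale=4.0, start_row=3):
    """Map one pre-transform row index to its post-transform row indices for
    an integer magnifying scale (scale > 1). Rows before start_row are
    discarded because their disparity values are too small to be reliable."""
    if row < start_row:
        return []
    s = int(scale)
    base = (row - start_row) * s
    return list(range(base, base + s))
```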
  • The relationship between depth and disparity is z = f·b/d, where f is the focal length of the vision sensor used to collect the three-dimensional point cloud, b is the baseline length of the vision sensor, d is the disparity value, and z is the depth. It can be seen that z is inversely proportional to d, as shown in FIG. 6.
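The inverse-proportional relationship between depth and disparity can be sketched as follows (the values of `f` and `b` are illustrative sensor parameters, not values from the disclosure):

```python
def disparity_to_depth(d, f=400.0, b=0.12):
    """Depth from disparity: z = f * b / d. f (focal length in pixels) and
    b (baseline in metres) are illustrative values."""
    return f * b / d

def depth_to_disparity(z, f=400.0, b=0.12):
    """Inverse mapping: d = f * b / z."""
    return f * b / z
```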
  • The u-coordinate value is scaled by the scale transformation parameter in order to counteract the nonlinear variation of depth with disparity: the near, high-resolution area is compressed, and the sub-pixel accuracy at long range is fully exploited, thereby improving point cloud segmentation accuracy.
  • At the same time, the number of rows of the three-dimensional points after transformation is kept close to the number of rows before transformation, which avoids the number of rows growing too large due to the scale transformation (and the computing load increasing greatly), achieving a balance between computing power and point cloud segmentation accuracy.
  • the candidate points selected in step 301 can be projected onto the v-disparity plane.
  • Optionally, the v-disparity plane uses the same scale transformation, that is, the number of rows of projection points on the v-disparity plane matches the number of rows after transformation on the u-disparity plane.
  • the plurality of candidate points may be searched on the v-disparity plane to determine a target candidate point located on the road surface of the movable platform among the plurality of candidate points.
  • the following takes the dynamic programming method as an example to describe the process of determining the target candidate point. In practical applications, other methods may also be adopted to determine the target candidate point, which will not be described here.
  • For any candidate point, a search cost of the candidate point may be determined, and target candidate points may be determined from the candidate points based on their search costs. If the search cost of a candidate point is less than a preset cost, the candidate point is determined as a target candidate point.
  • Optionally, the search cost includes a first search cost and a second search cost, wherein the first search cost is used to characterize whether a target candidate point is observed at the candidate point, and the second search cost is used to characterize whether the transition between the candidate point and its neighbouring target candidate points is smooth.
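A minimal sketch of such a two-part search cost follows; the weight `alpha`, the preset cost, and the exact form of each term are assumptions for illustration, since the disclosure does not fix the formulas:

```python
def search_cost(observed, v, v_prev, alpha=1.0):
    """Two-part search cost. The first cost is 0 when a candidate point is
    observed at this position and 1 otherwise; the second cost penalises
    jumps in v between neighbouring disparity rows (non-smooth profiles).
    Both forms and the weight alpha are illustrative assumptions."""
    first = 0.0 if observed else 1.0
    second = alpha * abs(v - v_prev)
    return first + second

def target_candidates(track, preset_cost=2.0):
    """Keep the points of a search track whose cost is below preset_cost.
    track: list of (observed, v) pairs, one per disparity row."""
    targets = []
    v_prev = track[0][1]
    for observed, v in track:
        if search_cost(observed, v, v_prev) < preset_cost:
            targets.append((observed, v))
        v_prev = v
    return targets
```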
  • In some embodiments, the density cost can be calculated based on the number of projection points at a point p on the v-disparity image and a threshold th, where cost denotes the density cost of point p and th is related to the parameters of the vision sensor.
  • For example, when the driving surface of the movable platform is the ground and the width of a lane on the ground is about 3 meters, a 3-meter-wide area captured by the vision sensor generally spans about 5 pixels in one frame of the 3D point cloud image, so the value of th can be 5. In other cases, th can also be set to other values according to the actual situation.
  • a model of the driving road surface may be fitted based on the target candidate points.
  • polynomial fitting may be performed on the target candidate points based on the least squares method to obtain a polynomial model of the driving road surface.
  • The resulting model can be expressed as y = A·z⁴ + B·z³ + C·z² + D·z + E, where A, B, C, D and E are all constants, and z is the depth.
  • The above model is only an exemplary description; it can be adjusted to a cubic polynomial model or a quintic polynomial model, etc., according to the actual application scenario. Then, the slope of the model of the driving road surface may be obtained, a second reference projection density of the driving road surface on the u-disparity plane may be determined based on the slope, and the second point cloud segmentation may be performed on the three-dimensional point cloud based on the second reference projection density.
  • The manner of performing the second point cloud segmentation on the 3D point cloud based on the second reference projection density is similar to the foregoing point cloud segmentation based on the first reference projection density: for a second grid on the u-disparity plane, if the ratio of the projection density of the 3D point cloud on the u-disparity plane to the second reference projection density is greater than or equal to a second preset ratio, the points of the 3D point cloud that project into the second grid are divided into target points on the driving road surface.
  • The second preset ratio may be set to a value greater than or equal to 1, and the second preset ratio may be the same as or different from the first preset ratio.
  • Optionally, the second reference projection density may be determined based on the model of the driving road surface, the slope of the model, and the depth. For example, the product of the slope of the model and the depth may be calculated, the difference between the model of the driving road surface and this product may be calculated, and the ratio of the difference to the baseline length of the vision sensor may be determined as the second reference projection density, specifically: ρ2 = (y(z) − y′(z)·z)/b, where y(z) is the fitted model of the driving road surface, y′(z) is its slope at depth z, and b is the baseline length.
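The least-squares fitting and the local-intercept quantity y(z) − z·y′(z) described above can be sketched with NumPy (the baseline `b` is an illustrative value; note that for a perfectly planar road y = k·z + c the quantity reduces to the constant intercept c):

```python
import numpy as np
from numpy.polynomial import Polynomial

def fit_road_model(z, y, degree=4):
    """Least-squares polynomial fit of road height y against depth z;
    degree=4 corresponds to the A..E model above."""
    return Polynomial.fit(z, y, degree)

def second_reference_density(model, z, b=0.12):
    """rho2(z) = (y(z) - z * y'(z)) / b: the local intercept of the fitted
    model divided by the baseline length b (illustrative value)."""
    z = np.asarray(z, dtype=float)
    return (model(z) - z * model.deriv()(z)) / b
```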
  • the above process performs search and model fitting on the v-disparity plane, and performs point cloud segmentation on the u-disparity plane.
  • the two steps are iteratively performed.
  • The segmentation on the u-disparity plane can improve the signal-to-noise ratio of the ground candidate points, so that candidate points can be selected in long-distance regions (with less signal), improving the search distance and accuracy on the v-disparity plane; in turn, the fitted model provides the benchmark density for the u-disparity segmentation, so that areas that are difficult to segment, such as slopes and distant regions, can also be accurately cut out.
  • After the point cloud segmentation, each 3D point in the 3D point cloud may be labeled based on the point cloud segmentation result, and the label of a 3D point is used to represent the category of the 3D point.
  • the categories may include a first category and a second category, wherein the first category is used to represent that the three-dimensional point belongs to a point on the road where the movable platform travels, and the second category is used to represent that the three-dimensional point belongs to a point on an obstacle.
  • the categories may further include a third category, which is used to represent that the three-dimensional point is neither a point on the driving road nor a point on an obstacle.
  • the points of the third category may be reflection points, or points whose category cannot be determined, and the like.
  • each 3D point in the 3D point cloud may be tagged based on the point cloud segmentation result and the height of each 3D point in the 3D point cloud.
  • labeling the 3D points based on both the height of the 3D point and the point cloud segmentation result improves the accuracy of the labels. Specifically, if the height of a three-dimensional point is lower than the height of the driving road surface, the label of the three-dimensional point may be determined as a first label, where the first label is used to indicate that the three-dimensional point is a reflection point. If the height of a 3D point is not lower than the height of the driving road, the point cloud segmentation result may further be combined for labeling.
  • for each 3D point, a first confidence level that the 3D point is a point on the driving road may be determined based on the height of the 3D point; a second confidence level that the 3D point is a point on the driving road may be determined based on the point cloud segmentation result; and the 3D point is labeled based on its first confidence level and second confidence level.
  • the higher a three-dimensional point lies above the height of the driving road, the lower the first confidence level that the three-dimensional point is a point on the driving road; otherwise, the first confidence level is higher.
  • the larger the ratio of the projected density of a three-dimensional point to the second reference projection density, the lower the second confidence level that the three-dimensional point is a point on the driving road; otherwise, the second confidence level is higher.
  • if at least one of the first confidence level and the second confidence level of the 3D point is greater than a preset confidence level, the label of the 3D point may be determined as the first label, where the first label is used to indicate that the three-dimensional point is a point on the driving road surface.
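One way to combine the two confidence levels is sketched below. The scoring functions (reciprocal decay) and the threshold value are assumptions made for illustration; the patent only fixes the monotonic relationships and the threshold test:

```python
def label_point(height, road_height, density_ratio, conf_thresh=0.5):
    """Label one 3D point from its height and its u-disparity density ratio.

    Points below the road surface are treated as reflection points; otherwise
    two confidences in (0, 1] are computed and compared with a threshold.
    """
    if height < road_height:
        return "reflection"  # below the driving road surface
    # first confidence: decreases as the point rises above the road surface
    conf_height = 1.0 / (1.0 + max(height - road_height, 0.0))
    # second confidence: decreases as the density ratio grows, as described
    conf_density = 1.0 / (1.0 + max(density_ratio, 0.0))
    if conf_height > conf_thresh or conf_density > conf_thresh:
        return "road"
    return "obstacle"
```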
  • the labels of the three-dimensional points may also be determined based on other methods, which will not be listed one by one here.
  • the point cloud segmentation result can be used by the planning unit on the movable platform to plan the driving state of the movable platform.
  • the planning unit can determine whether there are obstacles on the driving path based on the labels obtained from the segmentation results of the point cloud, so as to decide whether to control the speed and attitude of the movable platform to avoid obstacles.
  • the writing order of the steps does not imply a strict execution order or constitute any limitation on the implementation process; the specific execution order of each step should be determined by its function and possible internal logic.
  • An embodiment of the present disclosure further provides a point cloud segmentation device, including a processor, where the processor is configured to perform the following steps:
  • acquiring multiple candidate points in the three-dimensional point cloud; searching the multiple candidate points on the v-disparity plane to determine target candidate points located on the road surface on which the movable platform travels; and fitting a model of the driving road surface based on the target candidate points, and performing a second point cloud segmentation on the three-dimensional point cloud based on the model of the driving road surface on the u-disparity plane.
  • the processor is configured to: determine all 3D points in the 3D point cloud as candidate points; or perform semantic segmentation on the 3D point cloud and obtain multiple candidate points in the 3D point cloud based on the semantic segmentation result; or obtain multiple candidate points in the 3D point cloud based on the projection density of the 3D point cloud on the u-disparity plane and the first reference projection density of the plane model of the driving road on the u-disparity plane.
  • the processor is configured to: in the first pixel grid on the u-disparity plane, if the ratio of the projected density to the first reference projected density is greater than or equal to a first preset ratio, determine the points in the three-dimensional point cloud that project into the first pixel grid as candidate points, where the first preset ratio is greater than 1.
  • the first reference projection density of the plane model on the u-disparity plane is proportional to disparity values of points on the plane model.
  • the three-dimensional point cloud is acquired by a vision sensor on the movable platform; the processor is configured to: acquire the intercept of the plane model on a first coordinate axis of the vision sensor's coordinate system, where the first coordinate axis is the coordinate axis in the height direction of the movable platform; and determine the first reference projection density based on the intercept, the baseline length of the vision sensor, and the disparity values of points on the plane model.
  • the processor is configured to: calculate the ratio of the intercept to the baseline length of the vision sensor; and determine the product of this ratio and the disparity value of a point on the plane model as the first reference projection density.
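This computation is a one-liner; as stated above (and consistent with the density being proportional to disparity), the reference density is (intercept / baseline) * disparity. The parameter names are assumptions:

```python
def first_reference_density(intercept, baseline, disparity):
    """First reference projection density of the road plane model on the
    u-disparity plane. It grows linearly with the disparity of points on
    the plane model, scaled by intercept / baseline."""
    return (intercept / baseline) * disparity
```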
  • the three-dimensional point cloud includes valid points and invalid points; the processor is configured to: obtain a plurality of candidate points from the valid points in the three-dimensional point cloud.
  • the processor is configured to: filter out outlier points from the three-dimensional points; and obtain a plurality of candidate points in the filtered three-dimensional point cloud.
  • the processor is further configured to: obtain preset scaling parameters; scale the u-coordinate values of each 3D point in the 3D point cloud based on the scaling parameters; and project the scaled three-dimensional point cloud onto the u-disparity plane.
  • the scaling parameter of a 3D point corresponds to the u-coordinate value of the 3D point.
  • if the u-coordinate value of a first three-dimensional point in the point cloud is less than a first preset coordinate value, the scaling parameter of the first three-dimensional point is greater than 1; and/or if the u-coordinate value of the first three-dimensional point is greater than a second preset coordinate value, the scaling parameter of the first three-dimensional point is less than 1.
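A per-point scaling rule with this shape could be sketched as follows. The concrete thresholds and factors are illustrative assumptions; the text only requires a parameter greater than 1 below one threshold and less than 1 above another:

```python
def u_scale_param(u, low=100.0, high=500.0, boost=2.0, shrink=0.5):
    """Scaling parameter for a point's u coordinate before projection
    onto the u-disparity plane."""
    if u < low:
        return boost   # small u -> scaling parameter greater than 1
    if u > high:
        return shrink  # large u -> scaling parameter less than 1
    return 1.0         # otherwise leave the u coordinate unchanged
```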
  • the processor is configured to: for each candidate point in the plurality of candidate points, determine a search cost of the candidate point; determine from the candidate points based on the search cost of the candidate point target candidate point.
  • the processor is configured to: if the search cost of the candidate point is less than a preset cost, determine the candidate point as the target candidate point.
  • the search cost includes a first search cost and a second search cost, where the first search cost is used to characterize whether a target candidate point is observed at the candidate point, and the second search cost is used to characterize whether the transition between the candidate point and its neighboring target candidate point is smooth.
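A minimal sketch of such a two-term cost is shown below; the penalty value and smoothness weight are assumptions, and the patent does not prescribe these particular functional forms:

```python
def search_cost(observed, candidate_d, neighbor_d,
                miss_penalty=10.0, smooth_weight=1.0):
    """Total search cost of one candidate on the v-disparity plane.

    observed    : whether a target candidate point is observed at this point
    candidate_d : this candidate's disparity
    neighbor_d  : disparity of the neighboring target candidate point
    """
    first_cost = 0.0 if observed else miss_penalty           # observation term
    second_cost = smooth_weight * abs(candidate_d - neighbor_d)  # smoothness term
    return first_cost + second_cost
```

A candidate is then accepted as a target candidate point when this total cost falls below the preset cost, as described above.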
  • the processor is configured to: perform polynomial fitting on the target candidate points based on a least squares method to obtain a polynomial model of the driving road surface.
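Least-squares polynomial fitting of the target candidate points on the v-disparity plane can be done with a standard routine; a sketch (the polynomial degree is an assumption):

```python
import numpy as np

def fit_road_model(v, d, degree=2):
    """Least-squares polynomial fit of disparity d against image row v,
    giving a polynomial model of the driving road on the v-disparity plane."""
    coeffs = np.polyfit(v, d, degree)  # minimizes the squared residuals
    return np.poly1d(coeffs)           # callable model: d = f(v)
```

For candidate points sampled exactly from a line d = 0.1 v + 1, a degree-1 fit recovers the line.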
  • the processor is configured to: obtain the slope of the model of the driving road surface; determine a second reference projection density of the driving road surface on the u-disparity plane based on the slope; and perform the second point cloud segmentation on the 3D point cloud based on the second reference projection density.
  • the processor is configured to: in the second pixel grid on the u-disparity plane, if the ratio of the projection density of the three-dimensional point cloud on the u-disparity plane to the second reference projection density is greater than or equal to a second preset ratio, divide the points in the three-dimensional point cloud that project into the second pixel grid into target points on the driving road, where the second preset ratio is greater than or equal to 1.
  • the processor is configured to: determine the depth of the driving surface; and determine the second reference projected density based on a model of the driving surface, a slope of the model, and the depth of the driving surface.
  • the three-dimensional point cloud is collected by a vision sensor on the movable platform; the processor is configured to: calculate the product of the slope of the model and the depth of the driving surface; calculate the difference between the model of the driving road surface and the product; and determine the second reference projection density based on the ratio of the difference to the baseline length of the vision sensor.
  • the processor is further configured to: label each 3D point in the 3D point cloud based on the point cloud segmentation result, where the label of a 3D point is used to represent the category of the 3D point.
  • the processor is configured to: label each 3D point in the 3D point cloud based on the point cloud segmentation result and the height of each 3D point in the 3D point cloud.
  • the processor is configured to: if the height of the three-dimensional point is lower than the height of the driving road surface, determine the label of the three-dimensional point as a first label, where the first label is used to indicate that the three-dimensional point is a reflection point.
  • the processor is configured to: for each 3D point in the 3D point cloud, determine a first confidence level that the 3D point is a point on the driving road based on the height of the 3D point; A second confidence level of the three-dimensional point as a point on the driving road is determined based on the point cloud segmentation result; the three-dimensional point is labeled based on the first confidence level and the second confidence level of the three-dimensional point.
  • the processor is configured to: if at least one of the first confidence level and the second confidence level of the 3D point is greater than a preset confidence level, determine the label of the 3D point as a first label, where the first label is used to indicate that the three-dimensional point is a point on the driving road surface.
  • the three-dimensional point cloud is acquired based on a vision sensor or lidar installed on the movable platform; and/or the point cloud segmentation result is used by the planning unit on the movable platform to plan the traveling state of the movable platform.
  • FIG. 7 shows a schematic diagram of a more specific hardware structure of a data processing apparatus provided by an embodiment of this specification.
  • the apparatus may include: a processor 701 , a memory 702 , an input/output interface 703 , a communication interface 704 and a bus 705 .
  • the processor 701 , the memory 702 , the input/output interface 703 and the communication interface 704 realize the communication connection among each other within the device through the bus 705 .
  • the processor 701 can be implemented by a general-purpose CPU (Central Processing Unit), a microprocessor, an application-specific integrated circuit (ASIC), or one or more integrated circuits, and executes relevant programs to implement the technical solutions provided by the embodiments of this specification.
  • the memory 702 can be implemented in the form of ROM (Read-Only Memory), RAM (Random Access Memory), a static storage device, a dynamic storage device, and the like.
  • the memory 702 may store an operating system and other application programs. When implementing the technical solutions provided by the embodiments of this specification through software or firmware, the relevant program codes are stored in the memory 702 and invoked by the processor 701 for execution.
  • the input/output interface 703 is used to connect the input/output module to realize the input and output of information.
  • the input/output module can be configured in the device as a component (not shown in the figure), or can be externally connected to the device to provide corresponding functions.
  • the input device may include a keyboard, a mouse, a touch screen, a microphone, various sensors, etc.
  • the output device may include a display, a speaker, a vibrator, an indicator light, and the like.
  • the communication interface 704 is used to connect a communication module (not shown in the figure), so as to realize the communication interaction between the device and other devices.
  • the communication module may implement communication through wired means (eg, USB, network cable, etc.), or may implement communication through wireless means (eg, mobile network, WIFI, Bluetooth, etc.).
  • Bus 705 includes a path to transfer information between the various components of the device (eg, processor 701, memory 702, input/output interface 703, and communication interface 704).
  • although the above-mentioned device only shows the processor 701, the memory 702, the input/output interface 703, the communication interface 704 and the bus 705, in a specific implementation the device may also include other components necessary for normal operation.
  • the above-mentioned device may only include components necessary to implement the solutions of the embodiments of the present specification, rather than all the components shown in the figures.
  • an embodiment of the present disclosure further provides a movable platform 800, which includes a housing 801; a point cloud collecting device 802, arranged on the housing 801, for collecting a three-dimensional point cloud; and a three-dimensional point cloud segmentation device 803, arranged in the housing 801, for executing the method described in any embodiment of the present disclosure.
  • the movable platform 800 may be an unmanned aerial vehicle, an unmanned vehicle, an unmanned ship, a mobile robot, etc.
  • the point cloud collection device 802 may be a vision sensor (eg, a binocular vision sensor, a trinocular vision sensor, etc.) or a lidar.
  • Embodiments of the present disclosure further provide a computer-readable storage medium, on which a computer program is stored, and when the program is executed by a processor, implements the steps executed by the second processing unit in the method described in any of the foregoing embodiments.
  • Computer-readable media include both persistent and non-persistent, removable and non-removable media, in which information storage may be implemented by any method or technology.
  • Information may be computer readable instructions, data structures, modules of programs, or other data.
  • Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read only memory (ROM), Electrically Erasable Programmable Read Only Memory (EEPROM), Flash Memory or other memory technology, Compact Disc Read Only Memory (CD-ROM), Digital Versatile Disc (DVD) or other optical storage, Magnetic tape cassettes, magnetic tape magnetic disk storage or other magnetic storage devices or any other non-transmission medium that can be used to store information that can be accessed by a computing device.
  • computer-readable media does not include transitory computer-readable media, such as modulated data signals and carrier waves.
  • a typical implementation device is a computer, which may be in the form of a personal computer, laptop computer, cellular phone, camera phone, smart phone, personal digital assistant, media player, navigation device, email sending and receiving device, game console, tablet, wearable device, or a combination of any of these devices.

Abstract

A three-dimensional point cloud segmentation method and apparatus, and a mobile platform, said method and apparatus being used for performing point cloud segmentation on a three-dimensional point cloud collected by a mobile platform. Said method comprises: acquiring a plurality of candidate points in the three-dimensional point cloud (301); searching the plurality of candidate points on a v-disparity plane to determine target candidate points, located on a driving road surface of the mobile platform, among the plurality of candidate points (302); and obtaining, by fitting, a model of the driving road surface on the basis of the target candidate points, and on the u-disparity plane, performing second point cloud segmentation on the three-dimensional point cloud on the basis of the model of the driving road surface, so as to obtain a point cloud segmentation result (303).

Description

Three-dimensional point cloud segmentation method and device, and movable platform

Technical Field

The present disclosure relates to the technical field of computer vision, and in particular, to a three-dimensional point cloud segmentation method and device, and a movable platform.

Background Art

During the traveling process of the movable platform, a path planning module on the movable platform can perform decision planning on the traveling state (for example, pose and speed) of the movable platform. In order for the planning module to complete this decision planning, a point cloud acquisition device on the movable platform needs to collect a 3D point cloud of the surrounding environment and perform point cloud segmentation, so as to distinguish the ground and obstacles in the 3D point cloud and further distinguish dynamic and static objects among the obstacles. Therefore, point cloud segmentation is an important part of decision planning for the driving state of the movable platform.

Traditional point cloud segmentation methods generally segment ground points and non-ground points based on the local features of the 3D point cloud. However, these methods degrade noticeably when processing 3D point clouds at longer distances, and their segmentation accuracy is low.
Summary of the Invention

In view of this, the embodiments of the present disclosure propose a three-dimensional point cloud segmentation method and device, and a movable platform, so as to accurately perform point cloud segmentation on the three-dimensional point cloud collected by the movable platform.
According to a first aspect of the embodiments of the present disclosure, there is provided a three-dimensional point cloud segmentation method for performing point cloud segmentation on a three-dimensional point cloud collected by a movable platform, the method comprising: acquiring multiple candidate points in the three-dimensional point cloud; searching the multiple candidate points on the v-disparity plane to determine, among the multiple candidate points, target candidate points located on the road surface on which the movable platform travels; and fitting a model of the driving road surface based on the target candidate points, and performing a second point cloud segmentation on the three-dimensional point cloud based on the model of the driving road surface on the u-disparity plane to obtain a point cloud segmentation result.
According to a second aspect of the embodiments of the present disclosure, there is provided a three-dimensional point cloud segmentation device, including a processor, the device being configured to perform point cloud segmentation on a three-dimensional point cloud collected by a movable platform, and the processor being configured to perform the following steps: acquiring multiple candidate points in the three-dimensional point cloud; searching the multiple candidate points on the v-disparity plane to determine, among the multiple candidate points, target candidate points located on the road surface on which the movable platform travels; and fitting a model of the driving road surface based on the target candidate points, and performing a second point cloud segmentation on the three-dimensional point cloud based on the model of the driving road surface on the u-disparity plane.
According to a third aspect of the embodiments of the present disclosure, there is provided a movable platform, comprising: a housing; a point cloud collecting device, disposed on the housing, for collecting a three-dimensional point cloud; and a three-dimensional point cloud segmentation device, disposed in the housing, for executing the method described in any embodiment of the present disclosure.

According to a fourth aspect of the embodiments of the present disclosure, there is provided a computer-readable storage medium on which a computer program is stored, and when the program is executed by a processor, it implements the method described in any of the embodiments of the present disclosure.
By applying the solutions of the embodiments of the present disclosure, the candidate points are first searched on the v-disparity plane to determine, among the multiple candidate points, the target candidate points located on the road surface on which the movable platform travels; a model of the driving road surface is then fitted based on the target candidate points and used as the benchmark for point cloud segmentation, and a second point cloud segmentation is performed on the three-dimensional point cloud based on the model of the driving road surface on the u-disparity plane. This improves the accuracy of point cloud segmentation, so that regions of the three-dimensional point cloud that are difficult to segment, such as slopes and distant areas, can also be accurately segmented.
Description of the Drawings

In order to more clearly illustrate the technical solutions in the embodiments of the present disclosure, the following briefly introduces the accompanying drawings used in the description of the embodiments. Obviously, the accompanying drawings in the following description are only some embodiments of the present disclosure; for those of ordinary skill in the art, other drawings can also be obtained from these drawings without creative effort.

FIG. 1 is a schematic diagram of a point cloud segmentation process of some embodiments.

FIG. 2 is a schematic diagram of a decision planning process during travel of a movable platform according to some embodiments.

FIG. 3 is a flowchart of a point cloud segmentation method according to an embodiment of the present disclosure.

FIG. 4A and FIG. 4B are schematic diagrams of the uvd coordinate system according to an embodiment of the present disclosure.

FIG. 5 is a schematic diagram of the projection process onto the u-disparity plane according to an embodiment of the present disclosure.

FIG. 6 is a schematic diagram of the relationship between disparity and depth according to an embodiment of the present disclosure.

FIG. 7 is a schematic diagram of a point cloud segmentation apparatus according to an embodiment of the present disclosure.

FIG. 8 is a schematic diagram of a movable platform according to an embodiment of the present disclosure.
Detailed Description
Exemplary embodiments will be described in detail herein, examples of which are illustrated in the accompanying drawings. Where the following description refers to the drawings, the same numerals in different drawings refer to the same or similar elements unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with this disclosure. Rather, they are merely examples of apparatus and methods consistent with some aspects of the present disclosure as recited in the appended claims.

The terminology used in the present disclosure is for the purpose of describing particular embodiments only and is not intended to limit the present disclosure. As used in this disclosure and the appended claims, the singular forms "a," "the," and "said" are intended to include the plural forms as well, unless the context clearly dictates otherwise. It should also be understood that the term "and/or" as used herein refers to and includes any and all possible combinations of one or more of the associated listed items.

It should be understood that although the terms first, second, third, etc. may be used in this disclosure to describe various pieces of information, such information should not be limited by these terms. These terms are only used to distinguish information of the same type from each other. For example, without departing from the scope of the present disclosure, the first information may also be referred to as the second information, and similarly, the second information may also be referred to as the first information. Depending on the context, the word "if" as used herein can be interpreted as "at the time of", "when", or "in response to determining".
During the traveling process of the movable platform, a path planning module on the movable platform can be used to perform decision planning on the traveling state of the movable platform, and point cloud segmentation is an important part of this decision planning. FIG. 1 is a schematic diagram of a point cloud segmentation process of some embodiments. In step 101, a 3D point cloud can be collected by a point cloud collection device on the movable platform. Then, in step 102, for a movable platform traveling on the ground (such as an unmanned vehicle), ground segmentation can be performed on the collected 3D point cloud, that is, the 3D points in the 3D point cloud are segmented into ground points and non-ground points. For other types of movable platforms (such as mobile robots), the collected 3D point cloud can be segmented so as to divide the 3D points into points on the road surface on which the movable platform travels and points not on that road surface. For convenience of description, the following description takes the driving road surface as the ground.

In step 103, if a 3D point is a ground point, step 104 is performed to add a ground point label to the 3D point; otherwise, step 105 is performed to carry out dynamic/static segmentation on the 3D point, that is, the 3D point is classified as either a static point that remains stationary or a dynamic point that is in motion. In step 106, if a 3D point is a static point, step 107 is performed to add a static point label to the 3D point; otherwise, step 108 is performed to add a dynamic point label to the 3D point, and in step 109 the labeled 3D point cloud is output to downstream modules. All or some of the 3D points in the 3D point cloud can be labeled. The labels may include at least one of a first label used to characterize whether a 3D point is a ground point and a second label used to characterize whether the 3D point is a static point, and may also include labels used to characterize other information of the 3D points.
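The labeling flow of FIG. 1 can be sketched as a small dispatch function. The classifier callables `is_ground` and `is_static` stand in for the ground segmentation and dynamic/static segmentation steps, which this sketch does not implement:

```python
def label_cloud(points, is_ground, is_static):
    """Tag each 3D point following the FIG. 1 flow: ground points receive a
    ground label, and the remaining points are split into static and
    dynamic points before being output to downstream modules."""
    labels = []
    for p in points:
        if is_ground(p):          # steps 103-104
            labels.append("ground")
        elif is_static(p):        # steps 105-107
            labels.append("static")
        else:                     # step 108
            labels.append("dynamic")
    return labels
```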
The downstream module may be a planning module on the movable platform, such as an electronic control unit (ECU) or a central processing unit (CPU). After receiving the labeled 3D point cloud, the planning module can perform decision planning on the driving state of the movable platform based on the labels of the 3D points. The driving state may include at least one of the pose and speed of the movable platform. FIG. 2 is a schematic diagram of the decision planning process of some embodiments. In steps 201 and 202, the planning module receives the 3D point cloud and reads the labels carried in it. In step 203, it may be determined based on the labels whether a 3D point in the point cloud is a point on the road surface (for example, the ground) on which the movable platform travels. Taking ground points as an example, if it is, step 204 is performed to identify the 3D points belonging to lane lines among the ground points, and the attitude of the movable platform is determined according to the direction of the lane lines, so that the movable platform travels along that direction.

If it is a non-ground point, step 205 is performed to determine whether the non-ground point is a static point. If so, step 206 is performed to determine the pose of the movable platform according to the position of the static point. For example, it is judged whether the static point is on the pre-planned travel path; if so, the path is re-planned so that the movable platform does not collide with the obstacle at the static point. If the non-ground point is a dynamic point, step 207 is performed to determine at least one of the attitude and speed of the movable platform according to the position and speed of the dynamic point. For example, if the dynamic point is on the pre-planned travel path of the movable platform, and the moving speed of the dynamic point is less than or equal to that of the movable platform, the movable platform is controlled to slow down, or its attitude is adjusted so that it bypasses the dynamic point. As another example, the movable platform can be controlled to travel at the same speed as the dynamic point.
It can be seen that point cloud segmentation is an important link in the decision planning of the traveling state of the movable platform, and accurate segmentation helps make accurate decisions about that state. Current point cloud segmentation methods are mainly based on local features: the three-dimensional point cloud is transformed into xyz space, where rasterization or nearest-neighbor search is performed; the neighbors of a candidate point are found, and the probability that the candidate point belongs to the ground is judged from features of the neighboring points such as thickness, height, and normal vector. Such methods degrade noticeably when processing distant three-dimensional points, their segmentation accuracy is low, it is difficult for them to build a global ground model, and they cannot make correct judgments about non-ground planes.
On this basis, the present disclosure provides a three-dimensional point cloud segmentation method for performing point cloud segmentation on a three-dimensional point cloud collected by a movable platform. As shown in FIG. 3, the method includes:
Step 301: acquiring multiple candidate points in the three-dimensional point cloud;
Step 302: searching the multiple candidate points on the v-disparity plane to determine, among the multiple candidate points, target candidate points located on the road surface on which the movable platform travels;
Step 303: fitting a model of the road surface based on the target candidate points, and performing a second point cloud segmentation on the three-dimensional point cloud on the u-disparity plane based on the model of the road surface to obtain a point cloud segmentation result.
In step 301, the three-dimensional point cloud may be collected by a point cloud collection device (for example, a lidar or a vision sensor) on the movable platform. The movable platform may be an unmanned vehicle, an unmanned aerial vehicle, an unmanned ship, a movable robot, or the like.
The candidate points may be some or all of the points in the three-dimensional point cloud. Optionally, semantic segmentation may be performed on the three-dimensional point cloud, and the multiple candidate points acquired based on the semantic segmentation result. Through semantic segmentation, the categories of three-dimensional points in the point cloud can be obtained, for example vehicle, traffic light, pedestrian, or lane line. Candidate points can then be determined based on these categories; for example, three-dimensional points of the lane-line category may be determined as candidate points.
Optionally, the three-dimensional point cloud may also be pre-segmented on the u-disparity plane, and the candidate points determined based on the pre-segmentation result. Specifically, the multiple candidate points may be acquired based on the projection density of the three-dimensional point cloud on the u-disparity plane and a first reference projection density, on the same plane, of a planar model of the road surface. In this embodiment, the road surface on which the movable platform travels is first assumed to be a plane, the first reference projection density is determined based on that plane, and pre-segmentation is then performed based on the first reference projection density to determine candidate points on the road surface. Pre-segmentation on the u-disparity plane improves the signal-to-noise ratio of the candidate points, so that candidate points can be selected even in distant regions (where the signal is weak), which increases the range and accuracy of the subsequent search on the v-disparity plane.
The coordinate axes of the uvd space may be determined based on the orientation of the road surface on which the movable platform travels. For example, in FIG. 4A, the movable platform 401 travels on horizontal ground 402; the u axis, v axis, and d axis (that is, the disparity axis) may respectively be the axis on the ground perpendicular to the traveling direction of the movable platform, the axis on the ground parallel to the traveling direction, and the axis in the height direction of the movable platform (that is, vertically upward). In FIG. 4B, the movable platform 404 is a glass-cleaning robot traveling on a vertical glass plane 403; the u axis, v axis, and d axis may respectively be the axis on the glass plane perpendicular to the traveling direction, the axis on the glass plane parallel to the traveling direction, and the axis in the height direction of the movable platform (that is, horizontal). The axes of the uvd space may also point in other directions, set according to actual needs; the present disclosure does not limit this.
In some embodiments, the u-disparity plane may be pre-divided into multiple grids. For a first grid among the multiple grids, if the ratio of the projection density to the first reference projection density is greater than or equal to a first preset ratio, the points of the three-dimensional point cloud projected into the first grid are determined as candidate points, where the first preset ratio is greater than 1.
As shown in FIG. 5, the u-disparity plane may be pre-divided into multiple grids; to facilitate comparison of projection densities, the grids may all be the same size. Each black dot represents the projection of one three-dimensional point onto the u-disparity plane, and the number of projected points in a grid equals the number of three-dimensional points projected into that grid. The projection density within a grid may be determined as the ratio of the number of projected points in the grid to the area of the grid. Taking a vehicle traveling on a road as an example: if there is no obstacle in the region at distances d1 to d2 from the vehicle (called the first region), every three-dimensional point in the first region lies on the road surface, whose plane is parallel to, or at a small angle to, the traveling direction of the vehicle (that is, the direction of the disparity axis). These points extend along the disparity axis, so their disparity varies over a wide range; in other words, the density of three-dimensional points in the first region is low. In contrast, if there is an obstacle in the region at distances d3 to d4 (called the second region), the plane of the obstacle is generally perpendicular to, or at a large angle to, the traveling direction of the vehicle and will block its way forward.
That is, the disparity of the three-dimensional points in the second region varies over a small range, so their density is high. Therefore, once the first reference density of the travel plane of the movable platform is known, it can be roughly inferred whether a three-dimensional point lies on the travel plane or outside it (for example, on an obstacle).
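The per-grid projection density described above can be sketched as follows. The (u, v, d) point format and the cell size are illustrative assumptions:

```python
# Minimal sketch of projecting 3D points onto the u-disparity plane and
# computing the per-grid projection density. Points are (u, v, d) tuples;
# the grid cell size is an illustrative assumption.
from collections import Counter

def grid_density(points_uvd, cell=4, area=None):
    """Count projected points per (u, d) grid cell; density = count / area."""
    area = area if area is not None else cell * cell
    counts = Counter((int(u) // cell, int(d) // cell) for u, _, d in points_uvd)
    return {cell_id: n / area for cell_id, n in counts.items()}
```

A grid whose density greatly exceeds the reference density of the assumed road plane would then be treated differently from one close to it, as described in the ratio test above.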
Further, since the actual road surface may not be planar, the preset ratio λ (that is, the redundancy of the segmentation) is set to a value greater than 1 to provide a margin and reduce the error in selecting candidate points. λ may be fixed according to the model of the vision sensor, or set dynamically according to the application scenario: when the reliability of the vision sensor is low, λ can be set to a larger value, and otherwise to a smaller value. For example, when the focal length of the vision sensor is long or the ambient light is dim, λ can be set to a larger value.
In some embodiments, the first reference projection density of the planar model on the u-disparity plane is proportional to the disparity values of points on the planar model. Specifically, the first reference projection density may be determined from the baseline length of the vision sensor, the intercept of the planar model on the first coordinate axis of the vision sensor's coordinate system, and the disparity values of points on the planar model. Assume that the planar model of the road surface is:
y = αz + h
where α is the slope of the road surface and h is its intercept. The ratio of the intercept to the baseline length of the vision sensor may be calculated, and the product of this ratio and the disparity value of a point on the planar model determined as the first reference projection density. The first reference projection density of points of the planar model on the u-disparity plane can then be written as:
Δv_g = (h / b) * Δd
where h is the intercept, b is the baseline length, Δd is the disparity extent of a grid on the u-disparity plane, and Δv_g is the first reference projection density within the corresponding grid. The first coordinate axis is the axis in the height direction of the movable platform; for example, when the movable platform travels on the ground shown in FIG. 4A, the first coordinate axis may be the vertically upward axis, and when it travels on the glass plane shown in FIG. 4B, the first coordinate axis may be the horizontal axis.
Using the above principle, when the road model is planar it can be determined which points lie on the road surface, so that those points can be extracted as candidate points. Note that the first reference projection density depends only on the intercept, the baseline length, and the disparity value, and not on the distance (z value) along the ground. Therefore, the road surface is first assumed to be a plane, and the first reference projection density determined from the planar model is used to segment on the u-disparity plane and select candidate points. On the one hand this reduces the amount of computation; on the other hand it improves the signal-to-noise ratio of the candidate points, so that they can be selected even at long range (where the signal is weak), increasing the range and accuracy of the subsequent search on the v-disparity plane, so that regions that are hard to segment, such as slopes and distant areas, can also be segmented accurately.
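Under the planar assumption, the pre-segmentation ratio test might look like the following sketch; all numeric values and the function names are illustrative assumptions:

```python
# Sketch of the u-disparity pre-segmentation under the planar road
# assumption y = a*z + h. The reference density is (h / b) * delta_d,
# and a grid passes the test when the observed density reaches at least
# lambda times the reference (lambda > 1 for redundancy). The h, b,
# delta_d and lam values are illustrative.

def first_reference_density(h, b, delta_d):
    """First reference projection density: (intercept / baseline) * disparity extent."""
    return (h / b) * delta_d

def is_candidate_grid(observed_density, h, b, delta_d, lam=1.2):
    """Apply the ratio test of the pre-segmentation step."""
    return observed_density / first_reference_density(h, b, delta_d) >= lam
```

The reference density is independent of the depth z, which is why the planar assumption makes this first pass cheap to evaluate per grid.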
In some embodiments, the point cloud collection device may fail to collect a complete point cloud frame, for example because of a clock reset, so the collected three-dimensional point cloud may contain both valid and invalid points. To improve the reliability of candidate point acquisition, the multiple candidate points may be acquired only from the valid points in the three-dimensional point cloud; the invalid points may be marked as invalid so that they cannot be selected as candidate points.
In some embodiments, outlier points may also be filtered out of the three-dimensional points, and the multiple candidate points acquired from the filtered point cloud. Outliers are points whose values fall outside the valid range; they can be removed from the three-dimensional points by filtering.
In some embodiments, before acquiring the multiple candidate points based on the projection density of the three-dimensional point cloud on the u-disparity plane and the first reference projection density of the planar road model on that plane, a preset scale transformation parameter may be obtained; based on the scale transformation parameter, the u coordinate value of each three-dimensional point is scale-transformed, and the scale-transformed point cloud is projected onto the u-disparity plane.
The scale transformation parameter scale of a three-dimensional point is used to enlarge or reduce the u coordinate value of that point. When scale is greater than 1, the u coordinate value is enlarged, that is, one row of projected points on the u-disparity plane is mapped to scale rows after transformation. When scale is less than 1, the u coordinate value is reduced, that is, multiple rows of projected points on the u-disparity plane are merged into a single transformed row (for example, with scale = 0.25, four rows map to one).
The value of the scale transformation parameter of a three-dimensional point corresponds to its u coordinate value. For example, if the u coordinate value of a first three-dimensional point in the point cloud is smaller than a first preset coordinate value, the scale transformation parameter of the first three-dimensional point is greater than 1. As another example, if the u coordinate value of the first three-dimensional point is greater than or equal to a second preset coordinate value, its scale transformation parameter is less than 1. The first preset coordinate value may be less than or equal to the second preset coordinate value. Further, if the first preset coordinate value is less than the second preset coordinate value, then when the u coordinate value of the first three-dimensional point is greater than or equal to the first preset coordinate value and less than the second preset coordinate value, its scale transformation parameter equals 1.
Further, multiple scale transformation parameters greater than 1 and/or multiple parameters less than 1 may be set. For example, if the u coordinate value of the first three-dimensional point is smaller than a third preset coordinate value, its scale transformation parameter is set to a first parameter value; if its u coordinate value is greater than or equal to the third preset coordinate value and smaller than the first preset coordinate value, its scale transformation parameter is set to a second parameter value. Here both the first and second parameter values are greater than 1, the first parameter value is greater than the second, and the third preset coordinate value is smaller than the first preset coordinate value.
As another example, if the u coordinate value of the first three-dimensional point is greater than or equal to a fourth preset coordinate value, its scale transformation parameter is set to a third parameter value; if its u coordinate value is smaller than the fourth preset coordinate value and greater than or equal to the second preset coordinate value, its scale transformation parameter is set to a fourth parameter value. Here the fourth preset coordinate value is greater than the second preset coordinate value, and the fourth parameter value is smaller than the third parameter value.
Table 1 shows the correspondence, in some embodiments, between the scale parameter and the row numbers before and after transformation. Those skilled in the art will understand that Table 1 is only an example and does not limit the present disclosure. For instance, with a scale transformation parameter of 4.0, one row of projected points before transformation is mapped to four rows after transformation: row 3 before transformation maps to rows 0 to 3 after transformation, row 4 maps to rows 4 to 7, and so on.
It should be noted that the starting row number of the projected points before transformation is 3 because points with too small a disparity value carry large errors, so only points with a disparity value greater than or equal to 3 are processed here. Those skilled in the art will understand that, if this error is not a concern, points before row 3 can also be used; in other cases, the starting row number before transformation may also be set to a value greater than 3.
Table 1. Correspondence between the scale parameter and the row ranges before and after transformation

scale:                             4.0       2.0        1.0        0.5        0.25
Transformed row range:             [0,108)   [108,128)  [128,148)  [148,162)  [162,172)
Row range before transformation:   [3,30)    [30,40)    [40,60)    [60,88)    [88,128)
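One possible reading of Table 1 maps each pre-transform row to its post-transform rows band by band. The within-band mapping rule below is an assumption inferred from the worked examples in the text (row 3 to rows 0-3, row 4 to rows 4-7), not an explicit part of the disclosure:

```python
# Illustrative interpretation of Table 1: each pre-transform row band has
# a scale factor and a post-transform starting row; pre-transform row d
# covers post-transform rows [post_lo + (d-lo)*scale, post_lo + (d-lo+1)*scale).
SCALE_BANDS = [  # (pre_lo, pre_hi, post_lo, scale), values taken from Table 1
    (3, 30, 0, 4.0),
    (30, 40, 108, 2.0),
    (40, 60, 128, 1.0),
    (60, 88, 148, 0.5),
    (88, 128, 162, 0.25),
]

def transformed_rows(d):
    """Post-transform row indices covered by pre-transform row d."""
    for lo, hi, post_lo, scale in SCALE_BANDS:
        if lo <= d < hi:
            start = post_lo + int((d - lo) * scale)
            end = post_lo + int((d - lo + 1) * scale)
            return list(range(start, max(end, start + 1)))
    return []  # rows below 3 (or >= 128) are not processed
```

Under this reading, band boundaries line up with the table: row 29 ends just below 108, row 127 maps within [162,172).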
Since the relationship between disparity and depth is:
z = b * f / d
where f is the focal length of the vision sensor used to collect the three-dimensional point cloud, b is the baseline length of the vision sensor, d is the disparity value, and z is the depth. It can be seen that z is inversely proportional to d, as shown in FIG. 6. The purpose of scale-transforming the u coordinate values with the scale transformation parameters is to counteract the nonlinear variation of depth with disparity: the nearby high-resolution region is compressed while the sub-pixel precision at long range is fully exploited, improving the accuracy of point cloud segmentation. In addition, with the above transformation the number of rows after transformation is close to the number before it, which avoids an excessive number of rows (and hence a large increase in computation) caused by the scale transformation and strikes a balance between computational cost and segmentation accuracy.
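A minimal numeric illustration of this inverse relation follows; the baseline and focal-length values are arbitrary, chosen only to show how equal disparity steps span increasingly large depth intervals as disparity shrinks:

```python
# z = b * f / d: depth is inversely proportional to disparity.
# b (baseline) and f (focal length) values below are arbitrary examples.

def disparity_to_depth(b, f, d):
    """Convert a disparity value to depth for a stereo pair."""
    return b * f / d

# Halving the disparity doubles the depth; small disparities map to far range.
depths = [disparity_to_depth(0.1, 500.0, d) for d in (100, 50, 10, 5)]
```

This is why a fixed disparity resolution oversamples nearby depth and undersamples distant depth, motivating the scale transformation above.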
In step 302, the candidate points selected in step 301 may be projected onto the v-disparity plane. Unlike the u-disparity plane, the v-disparity plane is of uniform scale: the rows of the projected points on the v-disparity plane correspond one-to-one with the rows of the disparity map, and the columns correspond one-to-one with the integer values of the valid disparity range.
The multiple candidate points may be searched on the v-disparity plane to determine, among them, the target candidate points located on the road surface on which the movable platform travels. The process of determining target candidate points is described below taking dynamic programming as an example; in practice, other methods may also be used and are not elaborated here.
For each of the multiple candidate points, a search cost of the candidate point may be determined, and the target candidate points determined from the candidate points based on their search costs. If the search cost of a candidate point is less than a preset cost, the candidate point is determined as a target candidate point.
In some embodiments, the search cost includes a first search cost and a second search cost, where the first search cost characterizes whether a target candidate point is observed at the candidate point, and the second search cost characterizes whether the candidate point is smooth with respect to the target candidate points in its neighborhood.
The density cost may be calculated as follows:
Figure PCTCN2020128711-appb-000002
where p is a point on the v-disparity image, cost is the density cost of that point, and th is related to the parameters of the vision sensor. For example, when the road surface on which the movable platform travels is the ground, a lane line on the ground is about 3 meters wide; at a depth of 100 meters, the 3-meter-wide region captured by the vision sensor generally covers 5 pixels in one frame of the three-dimensional point cloud image, so th may take the value 5. In other cases, th may also be set to other values according to the actual situation.
After the target candidate points are obtained, in step 303 a model of the road surface may be fitted based on them. For example, polynomial fitting may be performed on the target candidate points by least squares to obtain a polynomial model of the road surface. The resulting model can be expressed as:
y = A + B*z + C*z^2 + D*z^3 + E*z^4
The slope of the tangent to this model at a point is:
y' = B + 2*C*z + 3*D*z^2 + 4*E*z^3
where A, B, C, D, and E are constants and z is the depth. The above model is only an example; depending on the application scenario, it may be adjusted to, for example, a cubic or quintic polynomial model. The slope of the road model may then be obtained; a second reference projection density of the road surface on the u-disparity plane is determined based on the slope; and the second point cloud segmentation is performed on the three-dimensional point cloud based on the second reference projection density.
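A sketch of the least-squares fit and its slope, using NumPy's generic polynomial fitting; the synthetic sample points below stand in for target candidate points and are not from the disclosure:

```python
# Fit the quartic road model y = A + B*z + C*z^2 + D*z^3 + E*z^4 by least
# squares, then evaluate its slope y' = B + 2*C*z + 3*D*z^2 + 4*E*z^3.
# The (z, y) samples are synthetic near-planar "road" points.
import numpy as np

z = np.linspace(1.0, 50.0, 40)
y = 0.5 + 0.02 * z                 # synthetic near-planar road profile

coeffs = np.polyfit(z, y, deg=4)   # returned highest-degree first
model = np.poly1d(coeffs)
slope = model.deriv()              # tangent slope of the fitted model

y_at_10 = float(model(10.0))       # road height at depth z = 10
slope_at_10 = float(slope(10.0))   # tangent slope at depth z = 10
```

Because the synthetic data is planar, the fit should recover a near-zero contribution from the higher-order terms, matching the degenerate case of the planar model used in the pre-segmentation.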
The second point cloud segmentation based on the second reference projection density is similar to the segmentation based on the first reference projection density described above: for a second grid on the u-disparity plane, if the ratio of the projection density of the three-dimensional point cloud on the u-disparity plane to the second reference projection density is greater than or equal to a second preset ratio, the points projected into the second grid are segmented as target points on the road surface. The second preset ratio may be set to a value greater than or equal to 1, and may be the same as or different from the first preset ratio.
The second reference projection density may be determined based on the road model, the slope of the model, and the depth of the road surface. For example, the product of the slope of the model and the depth may be calculated, the difference between the road model and this product computed, and the second reference projection density determined as the ratio of this difference to the baseline length of the vision sensor, as follows:
Δv_g = (y - z * y') / b
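The second reference density can be computed directly from the polynomial coefficients; the coefficient and baseline values in the check below are illustrative only:

```python
# Second reference projection density delta_v = (y - z*y') / b computed
# from the fitted road model y = A + B*z + C*z^2 + D*z^3 + E*z^4.
# Coefficients and baseline used here are illustrative assumptions.

def second_reference_density(coeffs, z, b):
    """coeffs = (A, B, C, D, E); returns (y - z*y') / b at depth z."""
    A, B, C, D, E = coeffs
    y = A + B * z + C * z**2 + D * z**3 + E * z**4
    slope = B + 2 * C * z + 3 * D * z**2 + 4 * E * z**3
    return (y - z * slope) / b
```

Note the consistency check with the first pass: for a purely planar model y = αz + h, the expression y - z*y' collapses to the intercept h, so the second reference density reduces to h/b regardless of depth, matching the first reference density per unit disparity.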
In the above process, searching and model fitting are performed on the v-disparity plane while point cloud segmentation is performed on the u-disparity plane, and the two steps are iterated. Segmentation on the u-disparity plane improves the signal-to-noise ratio of the ground candidate points, so that they can be selected even in distant regions (where the signal is weak), increasing the search range and precision on the v-disparity plane; in turn, the fitted model provides the reference density, a key input for the u-disparity segmentation, so that regions that are hard to segment, such as slopes and distant areas, can also be segmented accurately.
In some embodiments, each three-dimensional point in the point cloud may also be labeled based on the point cloud segmentation result, the label of a point characterizing its category. The categories may include a first category and a second category, where the first category indicates that a point lies on the road surface on which the movable platform travels, and the second category indicates that a point lies on an obstacle. Further, the categories may include a third category, indicating that a point belongs neither to the road surface nor to an obstacle; points of the third category may be reflection points, points whose category cannot be determined, and the like.
In some embodiments, each three-dimensional point may be labeled based on both the point cloud segmentation result and its height; labeling on both signals improves label accuracy. Specifically, if the height of a three-dimensional point is lower than the height of the road surface, the label of that point may be determined as a first label indicating that it is a reflection point. If the height of a point is not lower than that of the road surface, the point cloud segmentation result may further be combined to label it.
For example, for each three-dimensional point in the point cloud, a first confidence that the point lies on the road surface may be determined based on its height; a second confidence that the point lies on the road surface may be determined based on the point cloud segmentation result; and the point is labeled based on its first and second confidences.
在一个三维点的高度不低于行驶路面的高度的情况下,如果一个三维点的高度越高,则该三维点为行驶路面上的点的第一置信度越低,反之第一置信度越高。若一个三维点的投影密度与第二基准投影密度的比值越大,该三维点为行驶路面上的点的第二置信度越低,反之第二置信度越高。In the case that the height of a three-dimensional point is not lower than the height of the driving road, if the height of a three-dimensional point is higher, the first confidence that the three-dimensional point is a point on the driving road is lower, otherwise the first confidence is lower. high. If the ratio of the projected density of a three-dimensional point to the second reference projected density is larger, the second confidence level that the three-dimensional point is a point on the driving road is lower, otherwise the second confidence level is higher.
Different labeling schemes may be chosen for different scenarios. For example, where the requirements on label reliability and accuracy are high, a 3D point may be assigned the first label, indicating that it lies on the driving road surface, only when both its first confidence and its second confidence are greater than their corresponding confidence thresholds. Where the requirements are lower, the point may be assigned the first label when at least one of the first confidence and the second confidence is greater than its corresponding threshold. The label of a 3D point may also be determined in other ways, which are not enumerated here.
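The two labeling policies above (requiring both confidences to exceed their thresholds versus requiring at least one) can be sketched as follows. The confidence inputs, threshold values, and label names are illustrative assumptions for this sketch, not values fixed by the disclosure:

```python
def label_point(conf_height, conf_segmentation,
                thr_height=0.5, thr_seg=0.5, strict=True):
    """Label a 3D point as road surface from two confidences.

    conf_height: confidence derived from the point's height.
    conf_segmentation: confidence derived from the point cloud
        segmentation result.
    strict=True  -> both confidences must exceed their thresholds
                    (high-reliability scenario).
    strict=False -> at least one confidence must exceed its
                    threshold (low-reliability scenario).
    Threshold values and label strings are illustrative only.
    """
    above = (conf_height > thr_height, conf_segmentation > thr_seg)
    is_road = all(above) if strict else any(above)
    return "road_surface" if is_road else "obstacle"
```

With the strict policy, a point whose segmentation-based confidence is low is rejected even if its height-based confidence is high; the relaxed policy accepts it.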
In practical applications, the point cloud segmentation result may be used by a planning unit on the movable platform to plan the platform's traveling state. For example, based on the labels derived from the segmentation result, the planning unit may determine whether obstacles lie on the traveling path, and thus decide whether the platform's speed and attitude need to be controlled to avoid them.
Those skilled in the art will understand that, in the methods of the specific embodiments above, the order in which the steps are written does not imply a strict execution order or impose any limitation on the implementation; the actual execution order of the steps should be determined by their functions and possible internal logic.
An embodiment of the present disclosure further provides a point cloud segmentation apparatus including a processor configured to perform the following steps:
acquiring a plurality of candidate points in the 3D point cloud;
searching the plurality of candidate points on the v-disparity plane to determine, among the plurality of candidate points, target candidate points located on the road surface on which the movable platform travels; and
fitting a model of the driving road surface based on the target candidate points, and performing a second point cloud segmentation on the 3D point cloud on the u-disparity plane based on the model of the driving road surface.
In some embodiments, the processor is configured to: determine all 3D points in the 3D point cloud as candidate points; or perform semantic segmentation on the 3D point cloud and acquire the plurality of candidate points based on the semantic segmentation result; or acquire the plurality of candidate points based on the projection density of the 3D point cloud on the u-disparity plane and a first reference projection density, on the u-disparity plane, of a plane model of the driving road surface.
In some embodiments, the processor is configured to: for a first pixel grid on the u-disparity plane, if the ratio of the projection density to the first reference projection density is greater than or equal to a first preset ratio, determine the 3D points projected into the first pixel grid as candidate points, the first preset ratio being greater than 1.
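The density-ratio check above can be sketched as follows. The per-grid cell representation, the density values, and the preset ratio are illustrative assumptions for this sketch:

```python
def select_candidates(cells, first_preset_ratio=1.5):
    """Pick candidate points from u-disparity pixel grids.

    cells: iterable of (projected_density, reference_density, points)
        tuples, one per pixel grid on the u-disparity plane.
    A grid contributes its projected points as candidates when the
    ratio of its projected density to the first reference projection
    density is greater than or equal to the preset ratio (> 1).
    """
    candidates = []
    for density, ref_density, points in cells:
        if ref_density > 0 and density / ref_density >= first_preset_ratio:
            candidates.extend(points)
    return candidates
```

Obstacles stack many points into the same u-disparity cell, so their grids exceed the road's reference density and survive this filter.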
In some embodiments, the first reference projection density of the plane model on the u-disparity plane is proportional to the disparity values of points on the plane model.
In some embodiments, the 3D point cloud is acquired by a vision sensor on the movable platform, and the processor is configured to: acquire the intercept of the plane model on a first coordinate axis of the vision sensor's coordinate system, the first coordinate axis being the coordinate axis along the height direction of the movable platform; and determine the first reference projection density based on the intercept, the baseline length of the vision sensor, and the disparity values of points on the plane model.
In some embodiments, the processor is configured to: compute the ratio of the intercept to the baseline length of the vision sensor; and determine the product of that ratio and the disparity value of a point on the plane model as the first reference projection density.
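Per the embodiment just described, the first reference projection density reduces to one division and one multiplication. The numeric values in the test are illustrative only:

```python
def first_reference_density(intercept, baseline, disparity):
    """First reference projection density of the road plane model
    on the u-disparity plane.

    intercept: the plane model's intercept on the height-direction
        coordinate axis of the vision sensor's coordinate system.
    baseline: the vision sensor's baseline length.
    disparity: disparity value of a point on the plane model.
    The density is the (intercept / baseline) ratio multiplied by
    the disparity, hence proportional to the disparity.
    """
    return (intercept / baseline) * disparity
```

This matches the earlier statement that the reference density grows linearly with the disparity of points on the plane model.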
In some embodiments, the 3D point cloud includes valid points and invalid points, and the processor is configured to acquire the plurality of candidate points from the valid points in the 3D point cloud.
In some embodiments, the processor is configured to: filter outlier points out of the 3D points; and acquire the plurality of candidate points from the filtered 3D point cloud.
In some embodiments, the processor is further configured to: acquire preset scale transformation parameters; scale the u coordinate of each 3D point in the 3D point cloud based on the scale transformation parameters; and project the scaled 3D point cloud onto the u-disparity plane.
In some embodiments, the scale transformation parameter of a 3D point corresponds to the point's u coordinate.
In some embodiments, if the u coordinate of a first 3D point in the 3D point cloud is smaller than a first preset coordinate value, the scale transformation parameter of the first 3D point is greater than 1; and/or if the u coordinate of the first 3D point is greater than a second preset coordinate value, the scale transformation parameter of the first 3D point is less than 1.
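The u-coordinate scaling rule above can be sketched as a piecewise function. The preset coordinate values and the concrete scale factors are illustrative assumptions; the disclosure only constrains them to be greater than 1 below the first preset value and less than 1 above the second:

```python
def scale_u(u, u_low=100.0, u_high=500.0, gain=2.0, shrink=0.5):
    """Scale a 3D point's u coordinate before projection onto the
    u-disparity plane.

    u_low:  first preset coordinate value (points below it get a
            scale parameter > 1, here `gain`).
    u_high: second preset coordinate value (points above it get a
            scale parameter < 1, here `shrink`).
    All four defaults are illustrative.
    """
    if u < u_low:
        return u * gain      # scale parameter greater than 1
    if u > u_high:
        return u * shrink    # scale parameter less than 1
    return u                 # unchanged in between
```

One plausible motivation is to rebalance the pixel-grid resolution across the image width before accumulating projection densities.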
In some embodiments, the processor is configured to: for each of the plurality of candidate points, determine a search cost of the candidate point; and determine the target candidate points from the candidate points based on their search costs.
In some embodiments, the processor is configured to: determine a candidate point as a target candidate point if its search cost is less than a preset cost.
In some embodiments, the search cost includes a first search cost and a second search cost, where the first search cost characterizes whether a target candidate point is observed at the candidate point, and the second search cost characterizes whether the candidate point is smooth with respect to the target candidate points in its neighborhood.
In some embodiments, the processor is configured to: perform polynomial fitting on the target candidate points based on the least squares method to obtain a polynomial model of the driving road surface.
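A minimal sketch of the least-squares polynomial fit, assuming the road model expresses disparity as a polynomial in the image row coordinate v (a common v-disparity convention; the degree and the sample data are illustrative):

```python
import numpy as np

def fit_road_model(v_coords, disparities, degree=1):
    """Least-squares polynomial fit of the driving road surface
    from target candidate points in the v-disparity plane.

    v_coords, disparities: coordinates of the target candidate
        points. A degree-1 polynomial corresponds to a planar
        road; higher degrees model curved road profiles.
    Returns a callable polynomial model.
    """
    coeffs = np.polyfit(v_coords, disparities, degree)
    return np.poly1d(coeffs)
```

For candidate points lying exactly on the line d = 2v + 1, the fitted model reproduces that line and extrapolates it to unseen rows.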
In some embodiments, the processor is configured to: acquire the slope of the model of the driving road surface; determine, based on the slope, a second reference projection density of the driving road surface on the u-disparity plane; and perform the second point cloud segmentation on the 3D point cloud based on the second reference projection density.
In some embodiments, the processor is configured to: for a second pixel grid on the u-disparity plane, if the ratio of the projection density of the 3D point cloud on the u-disparity plane to the second reference projection density is greater than or equal to a second preset ratio, segment the 3D points projected into the second pixel grid as target points on the driving road surface, the second preset ratio being greater than or equal to 1.
In some embodiments, the processor is configured to: determine the depth of the driving road surface; and determine the second reference projection density based on the model of the driving road surface, the slope of the model, and the depth of the driving road surface.
In some embodiments, the 3D point cloud is acquired by a vision sensor on the movable platform, and the processor is configured to: compute the product of the slope of the model and the depth of the driving road surface; compute the difference between the model of the driving road surface and that product; and determine the second reference projection density based on the ratio of the difference to the baseline length of the vision sensor.
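Following the embodiment just described, the second reference projection density can be sketched as follows, assuming `model_value` denotes the road model evaluated at the point of interest (the numeric values in the test are illustrative):

```python
def second_reference_density(model_value, slope, depth, baseline):
    """Second reference projection density of the driving road on
    the u-disparity plane.

    Steps per the embodiment above:
      1. product  = slope of the road model * road depth
      2. diff     = road model value - product
      3. density  = diff / vision sensor baseline length
    """
    product = slope * depth
    diff = model_value - product
    return diff / baseline
```

This reference density then serves as the denominator in the second pixel-grid ratio test that separates road points from obstacle points.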
In some embodiments, the processor is further configured to: label each 3D point in the 3D point cloud based on the point cloud segmentation result, where the label of a 3D point characterizes its category.
In some embodiments, the processor is configured to: label each 3D point in the 3D point cloud based on the point cloud segmentation result and the height of each 3D point.
In some embodiments, the processor is configured to: if the height of a 3D point is lower than the height of the driving road surface, determine the label of the point as a first label indicating that the point is a reflection point.
In some embodiments, the processor is configured to: for each 3D point in the 3D point cloud, determine, based on the point's height, a first confidence that the point lies on the driving road surface; determine, based on the point cloud segmentation result, a second confidence that the point lies on the driving road surface; and label the point based on its first and second confidences.
In some embodiments, the processor is configured to: if at least one of the first confidence and the second confidence of a 3D point is greater than a preset confidence, determine the label of the point as a first label indicating that the point lies on the driving road surface.
In some embodiments, the 3D point cloud is acquired by a vision sensor or a lidar mounted on the movable platform; and/or the point cloud segmentation result is used by a planning unit on the movable platform to plan the traveling state of the movable platform.
For specific implementations of the method executed by the processor of the point cloud segmentation apparatus in the embodiments of the present disclosure, reference may be made to the foregoing method embodiments, which are not repeated here.
FIG. 7 is a schematic diagram of a more specific hardware structure of a data processing apparatus provided by an embodiment of this specification. The apparatus may include a processor 701, a memory 702, an input/output interface 703, a communication interface 704, and a bus 705, where the processor 701, the memory 702, the input/output interface 703, and the communication interface 704 are communicatively connected to one another within the apparatus through the bus 705.
The processor 701 may be implemented as a general-purpose CPU (Central Processing Unit), a microprocessor, an application-specific integrated circuit (ASIC), or one or more integrated circuits, and is configured to execute related programs to implement the technical solutions provided by the embodiments of this specification.
The memory 702 may be implemented as a ROM (Read-Only Memory), a RAM (Random Access Memory), a static storage device, a dynamic storage device, or the like. The memory 702 may store an operating system and other application programs; when the technical solutions provided by the embodiments of this specification are implemented in software or firmware, the relevant program code is stored in the memory 702 and invoked for execution by the processor 701.
The input/output interface 703 connects input/output modules for information input and output. The input/output modules may be configured as components within the apparatus (not shown in the figure) or externally connected to it to provide the corresponding functions. Input devices may include a keyboard, a mouse, a touch screen, a microphone, and various sensors; output devices may include a display, a speaker, a vibrator, and indicator lights.
The communication interface 704 connects a communication module (not shown in the figure) to enable communication between this apparatus and other devices. The communication module may communicate by wired means (e.g., USB or network cable) or wireless means (e.g., mobile network, Wi-Fi, or Bluetooth).
The bus 705 includes a pathway that transfers information between the components of the apparatus (e.g., the processor 701, the memory 702, the input/output interface 703, and the communication interface 704).
It should be noted that although only the processor 701, the memory 702, the input/output interface 703, the communication interface 704, and the bus 705 are shown for the above apparatus, in specific implementations the apparatus may further include other components necessary for normal operation. Moreover, those skilled in the art will understand that the apparatus may also include only the components necessary to implement the solutions of the embodiments of this specification, rather than all the components shown in the figure.
As shown in FIG. 8, an embodiment of the present disclosure further provides a movable platform 800, including: a housing 801; a point cloud collection device 802, arranged on the housing 801 and configured to collect a 3D point cloud; and a 3D point cloud segmentation apparatus 803, arranged within the housing 801 and configured to execute the method of any embodiment of the present disclosure. The movable platform 800 may be an unmanned aerial vehicle, an unmanned vehicle, an unmanned ship, a mobile robot, or a similar device, and the point cloud collection device 802 may be a vision sensor (e.g., a binocular or trinocular vision sensor) or a lidar.
An embodiment of the present disclosure further provides a computer-readable storage medium on which a computer program is stored; when executed by a processor, the program implements the steps executed by the second processing unit in the method of any of the foregoing embodiments.
Computer-readable media include permanent and non-permanent, removable and non-removable media, and may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technologies, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape or magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media exclude transitory media such as modulated data signals and carrier waves.
From the description of the above embodiments, those skilled in the art can clearly understand that the embodiments of this specification may be implemented by software plus a necessary general-purpose hardware platform. Based on this understanding, the technical solutions of the embodiments of this specification, or the part thereof contributing to the prior art, may in essence be embodied in the form of a software product. The computer software product may be stored in a storage medium such as ROM/RAM, a magnetic disk, or an optical disc, and includes a number of instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to execute the methods described in the various embodiments of this specification or in certain parts thereof.
The systems, apparatuses, modules, or units set forth in the above embodiments may be implemented by computer chips or entities, or by products having certain functions. A typical implementation device is a computer, which may take the form of a personal computer, laptop computer, cellular phone, camera phone, smart phone, personal digital assistant, media player, navigation device, email device, game console, tablet computer, wearable device, or a combination of any of these devices.
The technical features of the above embodiments may be combined arbitrarily provided the combinations involve no conflict or contradiction; for brevity they are not described one by one, but any such combination also falls within the scope of the present disclosure.
Other embodiments of the present disclosure will readily occur to those skilled in the art upon consideration of the specification and practice of what is disclosed herein. The present disclosure is intended to cover any variations, uses, or adaptations that follow its general principles and include common general knowledge or customary technical means in the art not disclosed herein. The specification and examples are to be regarded as exemplary only, with the true scope and spirit of the disclosure indicated by the following claims.
It should be understood that the present disclosure is not limited to the precise structures described above and illustrated in the accompanying drawings, and that various modifications and changes may be made without departing from its scope. The scope of the present disclosure is limited only by the appended claims.
The above are merely preferred embodiments of the present disclosure and are not intended to limit it; any modification, equivalent replacement, improvement, or the like made within the spirit and principles of the present disclosure shall fall within its scope of protection.

Claims (52)

  1. A 3D point cloud segmentation method for performing point cloud segmentation on a 3D point cloud collected by a movable platform, the method comprising:
    acquiring a plurality of candidate points in the 3D point cloud;
    searching the plurality of candidate points on a v-disparity plane to determine, among the plurality of candidate points, target candidate points located on a road surface on which the movable platform travels; and
    fitting a model of the driving road surface based on the target candidate points, and performing a second point cloud segmentation on the 3D point cloud on a u-disparity plane based on the model of the driving road surface, to obtain a point cloud segmentation result.
  2. The method according to claim 1, wherein acquiring the plurality of candidate points in the 3D point cloud comprises:
    determining all 3D points in the 3D point cloud as candidate points; or
    performing semantic segmentation on the 3D point cloud and acquiring the plurality of candidate points based on a semantic segmentation result; or
    acquiring the plurality of candidate points based on a projection density of the 3D point cloud on the u-disparity plane and a first reference projection density, on the u-disparity plane, of a plane model of the driving road surface.
  3. The method according to claim 2, wherein acquiring the plurality of candidate points based on the projection density of the 3D point cloud on the u-disparity plane and the first reference projection density, on the u-disparity plane, of the plane model of the driving road surface comprises:
    in a first pixel grid on the u-disparity plane, if a ratio of the projection density to the first reference projection density is greater than or equal to a first preset ratio, determining the 3D points projected into the first pixel grid as candidate points, the first preset ratio being greater than 1.
  4. The method according to claim 2, wherein the first reference projection density of the plane model on the u-disparity plane is proportional to disparity values of points on the plane model.
  5. The method according to claim 4, wherein the 3D point cloud is acquired by a vision sensor on the movable platform, and the first reference projection density is determined by:
    acquiring an intercept of the plane model on a first coordinate axis of a coordinate system of the vision sensor, the first coordinate axis being a coordinate axis along a height direction of the movable platform; and
    determining the first reference projection density based on the intercept, a baseline length of the vision sensor, and the disparity values of points on the plane model.
  6. The method according to claim 5, wherein determining the first reference projection density based on the intercept, the baseline length of the vision sensor, and the disparity values of points on the plane model comprises:
    computing a ratio of the intercept to the baseline length of the vision sensor; and
    determining a product of the ratio and the disparity value of a point on the plane model as the first reference projection density.
  7. The method according to claim 1, wherein the 3D point cloud includes valid points and invalid points, and acquiring the plurality of candidate points in the 3D point cloud comprises:
    acquiring the plurality of candidate points from the valid points in the 3D point cloud.
  8. The method according to claim 1, wherein acquiring the plurality of candidate points in the 3D point cloud comprises:
    filtering outlier points out of the 3D points; and
    acquiring the plurality of candidate points in the filtered 3D point cloud.
  9. The method according to claim 2, wherein before acquiring the plurality of candidate points based on the first projection density of the 3D point cloud on the u-disparity plane and the first reference projection density, on the u-disparity plane, of the plane model of the driving road surface, the method further comprises:
    acquiring preset scale transformation parameters;
    scaling a u coordinate of each 3D point in the 3D point cloud based on the scale transformation parameters; and
    projecting the scaled 3D point cloud onto the u-disparity plane.
  10. The method according to claim 9, wherein the scale transformation parameter of a 3D point corresponds to the u coordinate of the 3D point.
  11. The method according to claim 10, wherein if the u coordinate of a first 3D point in the 3D point cloud is smaller than a first preset coordinate value, the scale transformation parameter of the first 3D point is greater than 1; and/or
    if the u coordinate of the first 3D point in the 3D point cloud is greater than or equal to a second preset coordinate value, the scale transformation parameter of the first 3D point is less than 1.
  12. The method according to claim 1, wherein searching the plurality of candidate points on the v-disparity plane comprises:
    for each of the plurality of candidate points, determining a search cost of the candidate point; and
    determining the target candidate points from the candidate points based on the search costs of the candidate points.
  13. The method according to claim 12, wherein determining the target candidate points from the candidate points based on the search costs of the candidate points comprises:
    if the search cost of a candidate point is less than a preset cost, determining the candidate point as a target candidate point.
  14. The method according to claim 12, wherein the search cost includes a first search cost and a second search cost, the first search cost characterizing whether a target candidate point is observed at the candidate point, and the second search cost characterizing whether the candidate point is smooth with respect to target candidate points in a neighborhood of the candidate point.
  15. The method according to claim 1, wherein fitting the model of the driving road surface based on the target candidate points comprises:
    performing polynomial fitting on the target candidate points based on the least squares method to obtain a polynomial model of the driving road surface.
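The least-squares fitting of claim 15 can be sketched as follows. A first-degree polynomial v = a*d + b in the v-disparity plane is used for brevity, although the claim admits higher degrees; the closed-form normal equations below are a standard technique, not specific to this method.

```python
def fit_line_lsq(points):
    """Least-squares fit of v = a*d + b to target candidate points,
    where d is disparity and v is the image row. Returns (a, b)."""
    n = len(points)
    sd = sum(d for d, _ in points)
    sv = sum(v for _, v in points)
    sdd = sum(d * d for d, _ in points)
    sdv = sum(d * v for d, v in points)
    # Normal equations for a degree-1 polynomial.
    a = (n * sdv - sd * sv) / (n * sdd - sd * sd)
    b = (sv - a * sd) / n
    return a, b
```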
  16. The method according to claim 1, wherein performing the second point cloud segmentation on the three-dimensional point cloud based on the model of the driving road surface comprises:
    obtaining a slope of the model of the driving road surface;
    determining, based on the slope, a second reference projection density of the driving road surface on the u-disparity plane; and
    performing the second point cloud segmentation on the three-dimensional point cloud based on the second reference projection density.
  17. The method according to claim 16, wherein performing the second point cloud segmentation on the three-dimensional point cloud based on the second reference projection density comprises:
    in a second pixel grid on the u-disparity plane, if the ratio of the projection density of the three-dimensional point cloud on the u-disparity plane to the second reference projection density is greater than or equal to a second preset ratio, segmenting the points of the three-dimensional point cloud projected into the second pixel grid as target points on the driving road surface, the second preset ratio being greater than or equal to 1.
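The per-cell test of claim 17 reduces to a ratio comparison; the function and variable names below are illustrative.

```python
def is_road_cell(density, ref_density, preset_ratio=1.0):
    """Claim 17's test for one pixel grid cell on the u-disparity plane:
    the points projected into the cell are segmented as road-surface
    target points when density / ref_density >= preset_ratio, with
    preset_ratio >= 1 as recited."""
    return density / ref_density >= preset_ratio
```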
  18. The method according to claim 16, wherein determining the second reference projection density of the driving road surface on the u-disparity plane based on the slope comprises:
    determining a depth of the driving road surface; and
    determining the second reference projection density based on the model of the driving road surface, the slope of the model, and the depth of the driving road surface.
  19. The method according to claim 18, wherein the three-dimensional point cloud is collected by a vision sensor on the movable platform, and determining the second reference projection density based on the model of the driving road surface, the slope of the model, and the depth of the driving road surface comprises:
    calculating the product of the slope of the model and the depth of the driving road surface;
    calculating the difference between the model of the driving road surface and the product; and
    determining the ratio of the difference to a baseline length of the vision sensor as the second reference projection density.
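The computation of claims 18 and 19 follows directly from the recited steps (product, difference, ratio). The variable names and units below are assumptions; `model_value` stands for the fitted road model evaluated at the disparity of interest.

```python
def second_reference_density(model_value, slope, depth, baseline):
    """Claim 19: product of slope and depth, difference from the model
    value, then the ratio of that difference to the stereo baseline."""
    product = slope * depth
    difference = model_value - product
    return difference / baseline
```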
  20. The method according to claim 1, further comprising:
    labeling, based on the point cloud segmentation result, each three-dimensional point in the three-dimensional point cloud, the label of a three-dimensional point characterizing the category of the three-dimensional point.
  21. The method according to claim 20, wherein labeling each three-dimensional point in the three-dimensional point cloud based on the point cloud segmentation result comprises:
    labeling each three-dimensional point in the three-dimensional point cloud based on the point cloud segmentation result and the height of each three-dimensional point in the three-dimensional point cloud.
  22. The method according to claim 21, wherein labeling each three-dimensional point in the three-dimensional point cloud based on the point cloud segmentation result and the height of each three-dimensional point comprises:
    if the height of a three-dimensional point is lower than the height of the driving road surface, determining the label of the three-dimensional point as a first label, the first label indicating that the three-dimensional point is a reflection point.
  23. The method according to claim 21, wherein labeling each three-dimensional point in the three-dimensional point cloud based on the point cloud segmentation result and the height of each three-dimensional point comprises:
    for each three-dimensional point in the three-dimensional point cloud, determining, based on the height of the three-dimensional point, a first confidence that the three-dimensional point is a point on the driving road surface;
    determining, based on the point cloud segmentation result, a second confidence that the three-dimensional point is a point on the driving road surface; and
    labeling the three-dimensional point based on the first confidence and the second confidence of the three-dimensional point.
  24. The method according to claim 23, wherein labeling the three-dimensional point based on the first confidence and the second confidence comprises:
    if at least one of the first confidence and the second confidence of the three-dimensional point is greater than a preset confidence, determining the label of the three-dimensional point as a first label, the first label indicating that the three-dimensional point is a point on the driving road surface.
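A sketch of the two-confidence labeling rule of claims 23 and 24. The label strings and the preset confidence value 0.8 are illustrative assumptions; only the "at least one confidence exceeds the preset" rule comes from the claims.

```python
def label_point(conf_height, conf_segmentation, preset_conf=0.8):
    """Claim 24: assign the road-surface label when at least one of the
    height-based and segmentation-based confidences exceeds the preset
    confidence; otherwise fall back to a generic label."""
    if conf_height > preset_conf or conf_segmentation > preset_conf:
        return "road_surface"
    return "other"
```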
  25. The method according to claim 1, wherein the three-dimensional point cloud is collected by a vision sensor or a lidar mounted on the movable platform; and/or
    the point cloud segmentation result is used by a planning unit on the movable platform to plan the driving state of the movable platform.
  26. A three-dimensional point cloud segmentation apparatus, comprising a processor, wherein the apparatus is configured to perform point cloud segmentation on a three-dimensional point cloud collected by a movable platform, and the processor is configured to perform the following steps:
    obtaining a plurality of candidate points in the three-dimensional point cloud;
    searching the plurality of candidate points on a v-disparity plane to determine, among the plurality of candidate points, target candidate points located on the driving road surface of the movable platform; and
    fitting a model of the driving road surface based on the target candidate points, and performing a second point cloud segmentation on the three-dimensional point cloud on a u-disparity plane based on the model of the driving road surface.
  27. The apparatus according to claim 26, wherein the processor is configured to:
    determine all three-dimensional points in the three-dimensional point cloud as candidate points; or
    perform semantic segmentation on the three-dimensional point cloud and obtain the plurality of candidate points in the three-dimensional point cloud based on the semantic segmentation result; or
    obtain the plurality of candidate points in the three-dimensional point cloud based on the projection density of the three-dimensional point cloud on the u-disparity plane and a first reference projection density, on the u-disparity plane, of a plane model of the driving road surface.
  28. The apparatus according to claim 27, wherein the processor is configured to:
    in a first pixel grid on the u-disparity plane, if the ratio of the projection density to the first reference projection density is greater than or equal to a first preset ratio, determine the points of the three-dimensional point cloud projected into the first pixel grid as candidate points, the first preset ratio being greater than 1.
  29. The apparatus according to claim 27, wherein the first reference projection density of the plane model on the u-disparity plane is proportional to the disparity value of points on the plane model.
  30. The apparatus according to claim 29, wherein the three-dimensional point cloud is collected by a vision sensor on the movable platform, and the processor is configured to:
    obtain the intercept of the plane model on a first coordinate axis in the coordinate system of the vision sensor, the first coordinate axis being the coordinate axis in the height direction of the movable platform; and
    determine the first reference projection density based on the intercept, a baseline length of the vision sensor, and the disparity value of points on the plane model.
  31. The apparatus according to claim 30, wherein the processor is configured to:
    calculate the ratio of the intercept to the baseline length of the vision sensor; and
    determine the product of the ratio and the disparity value of a point on the plane model as the first reference projection density.
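The first reference projection density of claims 29 to 31 follows directly from the recited ratio and product; the density is proportional to disparity, with the intercept-to-baseline ratio as the proportionality factor. Variable names are illustrative.

```python
def first_reference_density(intercept, baseline, disparity):
    """Claims 30-31: ratio of the height-axis intercept of the road plane
    model to the stereo baseline, multiplied by the disparity value of a
    point on the plane model."""
    ratio = intercept / baseline
    return ratio * disparity
```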
  32. The apparatus according to claim 26, wherein the three-dimensional point cloud includes valid points and invalid points, and the processor is configured to:
    obtain the plurality of candidate points from the valid points in the three-dimensional point cloud.
  33. The apparatus according to claim 26, wherein the processor is configured to:
    filter out outlier points from the three-dimensional points; and
    obtain the plurality of candidate points from the filtered three-dimensional point cloud.
  34. The apparatus according to claim 27, wherein the processor is further configured to:
    obtain preset scale transformation parameters;
    perform, based on the scale transformation parameters, scale transformation on the u-coordinate value of each three-dimensional point in the three-dimensional point cloud; and
    project the scale-transformed three-dimensional point cloud onto the u-disparity plane.
  35. The apparatus according to claim 34, wherein the scale transformation parameter of a three-dimensional point corresponds to the u-coordinate value of the three-dimensional point.
  36. The apparatus according to claim 35, wherein if the u-coordinate value of a first three-dimensional point in the three-dimensional point cloud is smaller than a first preset coordinate value, the scale transformation parameter of the first three-dimensional point is greater than 1; and/or
    if the u-coordinate value of the first three-dimensional point in the three-dimensional point cloud is greater than a second preset coordinate value, the scale transformation parameter of the first three-dimensional point is less than 1.
  37. The apparatus according to claim 26, wherein the processor is configured to:
    for each candidate point in the plurality of candidate points, determine a search cost of the candidate point; and
    determine a target candidate point from the candidate points based on the search costs of the candidate points.
  38. The apparatus according to claim 37, wherein the processor is configured to:
    if the search cost of a candidate point is less than a preset cost, determine the candidate point as a target candidate point.
  39. The apparatus according to claim 37, wherein the search cost comprises a first search cost and a second search cost, the first search cost characterizing whether a target candidate point is observed at the candidate point, and the second search cost characterizing whether the candidate point is smooth with respect to the target candidate points in its neighborhood.
  40. The apparatus according to claim 26, wherein the processor is configured to:
    perform polynomial fitting on the target candidate points based on the least squares method to obtain a polynomial model of the driving road surface.
  41. The apparatus according to claim 26, wherein the processor is configured to:
    obtain a slope of the model of the driving road surface;
    determine, based on the slope, a second reference projection density of the driving road surface on the u-disparity plane; and
    perform the second point cloud segmentation on the three-dimensional point cloud based on the second reference projection density.
  42. The apparatus according to claim 41, wherein the processor is configured to:
    in a second pixel grid on the u-disparity plane, if the ratio of the projection density of the three-dimensional point cloud on the u-disparity plane to the second reference projection density is greater than or equal to a second preset ratio, segment the points of the three-dimensional point cloud projected into the second pixel grid as target points on the driving road surface, the second preset ratio being greater than or equal to 1.
  43. The apparatus according to claim 41, wherein the processor is configured to:
    determine a depth of the driving road surface; and
    determine the second reference projection density based on the model of the driving road surface, the slope of the model, and the depth of the driving road surface.
  44. The apparatus according to claim 43, wherein the three-dimensional point cloud is collected by a vision sensor on the movable platform, and the processor is configured to:
    calculate the product of the slope of the model and the depth of the driving road surface;
    calculate the difference between the model of the driving road surface and the product; and
    determine the ratio of the difference to a baseline length of the vision sensor as the second reference projection density.
  45. The apparatus according to claim 26, wherein the processor is further configured to:
    label, based on the point cloud segmentation result, each three-dimensional point in the three-dimensional point cloud, the label of a three-dimensional point characterizing the category of the three-dimensional point.
  46. The apparatus according to claim 45, wherein the processor is configured to:
    label each three-dimensional point in the three-dimensional point cloud based on the point cloud segmentation result and the height of each three-dimensional point in the three-dimensional point cloud.
  47. The apparatus according to claim 46, wherein the processor is configured to:
    if the height of a three-dimensional point is lower than the height of the driving road surface, determine the label of the three-dimensional point as a first label, the first label indicating that the three-dimensional point is a reflection point.
  48. The apparatus according to claim 46, wherein the processor is configured to:
    for each three-dimensional point in the three-dimensional point cloud, determine, based on the height of the three-dimensional point, a first confidence that the three-dimensional point is a point on the driving road surface;
    determine, based on the point cloud segmentation result, a second confidence that the three-dimensional point is a point on the driving road surface; and
    label the three-dimensional point based on the first confidence and the second confidence of the three-dimensional point.
  49. The apparatus according to claim 48, wherein the processor is configured to:
    if at least one of the first confidence and the second confidence of the three-dimensional point is greater than a preset confidence, determine the label of the three-dimensional point as a first label, the first label indicating that the three-dimensional point is a point on the driving road surface.
  50. The apparatus according to claim 26, wherein the three-dimensional point cloud is collected by a vision sensor or a lidar mounted on the movable platform; and/or
    the point cloud segmentation result is used by a planning unit on the movable platform to plan the driving state of the movable platform.
  51. A movable platform, comprising:
    a housing;
    a point cloud collection device, disposed on the housing and configured to collect a three-dimensional point cloud; and
    a three-dimensional point cloud segmentation device, disposed in the housing and configured to perform the method according to any one of claims 1 to 25.
  52. A computer-readable storage medium, having computer instructions stored thereon, wherein when the instructions are executed by a processor, the method according to any one of claims 1 to 25 is implemented.
PCT/CN2020/128711 2020-11-13 2020-11-13 Three-dimensional point cloud segmentation method and apparatus, and mobile platform WO2022099620A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202080071116.8A CN114631124A (en) 2020-11-13 2020-11-13 Three-dimensional point cloud segmentation method and device and movable platform
PCT/CN2020/128711 WO2022099620A1 (en) 2020-11-13 2020-11-13 Three-dimensional point cloud segmentation method and apparatus, and mobile platform

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2020/128711 WO2022099620A1 (en) 2020-11-13 2020-11-13 Three-dimensional point cloud segmentation method and apparatus, and mobile platform

Publications (1)

Publication Number Publication Date
WO2022099620A1 true WO2022099620A1 (en) 2022-05-19

Family

ID=81602047

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/128711 WO2022099620A1 (en) 2020-11-13 2020-11-13 Three-dimensional point cloud segmentation method and apparatus, and mobile platform

Country Status (2)

Country Link
CN (1) CN114631124A (en)
WO (1) WO2022099620A1 (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116147653A (en) * 2023-04-14 2023-05-23 北京理工大学 Three-dimensional reference path planning method for unmanned vehicle
CN116147653B (en) * 2023-04-14 2023-08-22 北京理工大学 Three-dimensional reference path planning method for unmanned vehicle
CN116524472A (en) * 2023-06-30 2023-08-01 广汽埃安新能源汽车股份有限公司 Obstacle detection method, device, storage medium and equipment
CN116524472B (en) * 2023-06-30 2023-09-22 广汽埃安新能源汽车股份有限公司 Obstacle detection method, device, storage medium and equipment

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105740802A (en) * 2016-01-28 2016-07-06 北京中科慧眼科技有限公司 Disparity map-based obstacle detection method and device as well as automobile driving assistance system
CN107977654A (en) * 2017-12-25 2018-05-01 海信集团有限公司 A kind of road area detection method, device and terminal
CN110879991A (en) * 2019-11-26 2020-03-13 杭州光珀智能科技有限公司 Obstacle identification method and system

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
IMAD BENACER ET AL.: "A novel stereovision algorithm for obstacles detection based on U-V- disparity approach", INTERNATIONAL SYMPOSIUM ON CIRCUITS AND SYSTEMS (ISCAS, 27 May 2015 (2015-05-27), pages 369 - 372, XP033183134, DOI: 10.1109/ISCAS.2015.7168647 *
ZHENCHENG HU ET AL.: "U-V-disparity: an efficient algorithm for stereovision based scene analysis", PROCEEDINGS. INTELLIGENT VEHICLES SYMPOSIUM, 8 June 2005 (2005-06-08), pages 48 - 54, XP010833942, ISSN: 1931-0587, DOI: 10.1109/IVS.2005.1505076 *

Also Published As

Publication number Publication date
CN114631124A (en) 2022-06-14


Legal Events

Date Code Title Description
121 EP: the EPO has been informed by WIPO that EP was designated in this application (ref document number: 20961176; country of ref document: EP; kind code of ref document: A1)
NENP: non-entry into the national phase (ref country code: DE)
122 EP: PCT application non-entry in European phase (ref document number: 20961176; country of ref document: EP; kind code of ref document: A1)