WO2022099530A1 - Method and apparatus for motion segmentation of point cloud data, computer device and storage medium - Google Patents


Info

Publication number
WO2022099530A1
WO2022099530A1 · PCT/CN2020/128264 · CN2020128264W
Authority
WO
WIPO (PCT)
Prior art keywords
point cloud
cloud data
bounding box
ground
frame
Prior art date
Application number
PCT/CN2020/128264
Other languages
English (en)
Chinese (zh)
Inventor
吴伟
Original Assignee
深圳元戎启行科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳元戎启行科技有限公司 filed Critical 深圳元戎启行科技有限公司
Priority to CN202080092973.6A priority Critical patent/CN115066708A/zh
Priority to PCT/CN2020/128264 priority patent/WO2022099530A1/fr
Publication of WO2022099530A1 publication Critical patent/WO2022099530A1/fr

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 — Image analysis
    • G06T 7/20 — Analysis of motion
    • G06T 7/215 — Motion-based segmentation

Definitions

  • the present application relates to a method, device, computer equipment and storage medium for motion segmentation of point cloud data.
  • Lidar sensors can provide real-time and accurate 3D scene information, with a long detection range and high accuracy, and their measurements are not affected by ambient light.
  • the information collected by lidar sensors is usually presented in the form of a point cloud.
  • in unmanned driving, in order to ensure the safety of unmanned vehicles, it is necessary to monitor objects in the surrounding environment in real time, and especially to detect and track moving objects such as pedestrians and moving vehicles; to this end, the point cloud data is segmented to identify moving objects.
  • traditional non-deep-learning methods, such as those based on grid maps, count how the point cloud data changes in the grid cells over multiple consecutive frames, so as to identify the dynamic points and static points in the point cloud data.
  • however, the traditional method only considers the individual features of each point, and it is difficult to directly associate the point cloud data of consecutive frames, resulting in low accuracy of point cloud motion segmentation.
  • a point cloud data segmentation method, apparatus, computer equipment and storage medium capable of accurate point cloud motion segmentation are provided.
  • a motion segmentation method for point cloud data comprising:
  • the point cloud motion segmentation result is updated according to the non-ground point cloud data corresponding to the 3D bounding box, and the target point cloud motion segmentation result is obtained.
  • a point cloud data motion segmentation device comprising:
  • Communication module for acquiring multi-frame point cloud data
  • the point cloud motion segmentation module is used to perform point cloud motion segmentation on the point cloud data, and obtain the point cloud motion segmentation result;
  • the ground filtering module is used to perform ground filtering processing on each frame of point cloud data to obtain multiple frames of non-ground point cloud data;
  • the cluster analysis module is used to perform cluster analysis on the multi-frame non-ground point cloud data, and obtain the three-dimensional bounding box corresponding to each frame of the non-ground point cloud data;
  • an occlusion analysis module for performing occlusion analysis on the 3D bounding box to determine the motion state of the point cloud corresponding to the 3D bounding box;
  • the updating module is used to update the point cloud motion segmentation result according to the non-ground point cloud data corresponding to the three-dimensional bounding box when the point cloud motion state is dynamic, and obtain the target point cloud motion segmentation result.
  • a computer device includes a memory and one or more processors, the memory storing computer-readable instructions which, when executed by the one or more processors, cause the one or more processors to perform the following steps:
  • the point cloud motion segmentation result is updated according to the non-ground point cloud data corresponding to the 3D bounding box, and the target point cloud motion segmentation result is obtained.
  • One or more non-volatile computer-readable storage media storing computer-readable instructions that, when executed by one or more processors, cause the one or more processors to perform the following steps:
  • the point cloud motion segmentation result is updated according to the non-ground point cloud data corresponding to the 3D bounding box, and the target point cloud motion segmentation result is obtained.
  • FIG. 1 is an application environment diagram of a method for motion segmentation of point cloud data in one or more embodiments.
  • FIG. 2 is a schematic flowchart of a method for motion segmentation of point cloud data in one or more embodiments.
  • FIG. 3 is a schematic flowchart of steps of performing occlusion analysis on a three-dimensional bounding box to determine a point cloud motion state corresponding to the three-dimensional bounding box in one or more embodiments.
  • FIG. 4 is a block diagram of an apparatus for motion segmentation of point cloud data in one or more embodiments.
  • FIG. 5 is a block diagram of a computer device in one or more embodiments.
  • the point cloud data motion segmentation method provided in this application can be applied to the application environment shown in FIG. 1 .
  • the vehicle-mounted sensor 102 sends the collected multi-frame point cloud data to the computer device 104 .
  • the onboard sensor could be a lidar.
  • the computer device 104 performs point cloud motion segmentation on the point cloud data to obtain a point cloud motion segmentation result.
  • the computer device 104 can also perform ground filtering processing on each frame of point cloud data to obtain multiple frames of non-ground point cloud data, and perform cluster analysis on the multiple frames of non-ground point cloud data to obtain a three-dimensional bounding box corresponding to each frame of non-ground point cloud data.
  • the computer device 104 updates the point cloud motion segmentation result according to the non-ground point cloud data corresponding to the three-dimensional bounding box to obtain the target point cloud motion segmentation result.
  • as shown in FIG. 2, a method for motion segmentation of point cloud data is provided; the method is described using its application to the computer device in FIG. 1 as an example, and includes the following steps:
  • Step 202 acquiring multi-frame point cloud data.
  • the point cloud data can be the data recorded by the vehicle-mounted sensor in the form of point cloud by scanning the surrounding environment information.
  • the onboard sensor could be a lidar.
  • a single frame of point cloud data refers to the original point cloud collected by the vehicle-mounted sensor rotating one circle horizontally.
  • Multi-frame point cloud data refers to the original point cloud in which multiple consecutive frames exist in time sequence.
  • the original point cloud is distributed on multiple circular scan lines with different vertical angles, and the point cloud data on each circular scan line is composed of a circle of laser points.
  • the point cloud data may specifically include three-dimensional coordinates (x, y, z) of each point, laser reflection intensity (Intensity), color information (RGB), and the like.
  • Three-dimensional coordinates are used to represent the position information of the target object surface in the surrounding environment.
  • the three-dimensional coordinates may be the coordinates of the point in a Cartesian coordinate system, specifically the x, y, and z coordinates of the point.
  • the Cartesian coordinate system is a three-dimensional space coordinate system established with lidar as the origin.
  • the three-dimensional space coordinate system includes a longitudinal axis (x-axis), a lateral axis (y-axis) and a vertical axis (z-axis).
  • the three-dimensional space coordinate system established with lidar as the origin satisfies the right-hand rule.
  • the x-axis coordinate in the three-dimensional coordinates represents the longitudinal distance of the scanned target object surface relative to the lidar
  • the y-axis coordinate represents the lateral offset of the scanned target object surface relative to the lidar
  • the z-axis coordinate represents the height of the scanned target object surface relative to the lidar.
  • the surrounding environment can be scanned by the lidar installed on the vehicle to obtain the corresponding multi-frame point cloud data, and the collected multi-frame point cloud data can be transmitted to the computer equipment.
  • Step 204 Perform point cloud motion segmentation on the point cloud data to obtain a point cloud motion segmentation result.
  • point cloud motion segmentation refers to classifying the motion of points in the point cloud data and identifying whether each point is a dynamic point or a static point; after acquiring the multi-frame point cloud data transmitted by the vehicle-mounted sensor, the computer device can perform point cloud motion segmentation on the point cloud data.
  • the point cloud motion segmentation result includes the motion state corresponding to each point in the point cloud data, including dynamic points and static points.
  • one way to perform point cloud motion segmentation on the point cloud data is to first divide the data space corresponding to each frame of point cloud data into multiple grid cells according to a preset grid size to obtain a grid map; the computer device then counts, for each cell, the change in the number of points over multiple consecutive frames, determines the points in cells whose change value is greater than a threshold as dynamic points, and determines the points in cells whose change value is less than or equal to the threshold as static points.
  • the computer device may further establish a spatial grid containing the point cloud data, and set each grid to a preset size, such as 0.1 meters, so as to divide the spatial grid into a plurality of grids.
  • when one or more points of the point cloud data fall at the location corresponding to a grid cell, the cell can be marked as occupied; when no point falls at the corresponding location, the cell is marked as unoccupied, thereby marking every cell in the spatial grid.
  • by marking the grid cells, the cells containing a target object can be quickly determined, so as to perform point cloud motion segmentation.
  • the smaller the grid cell size, the higher the precision and resolution; that is, for the same spatial grid, the more cells there are, the higher the resolution of the spatial grid, and the higher the resolution, the higher the accuracy of the grid map.
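The grid-map counting scheme described above can be sketched as follows. This is an illustrative implementation, not the patent's code; the function name, the 0.1 m default cell size (taken from the example in the text), and the threshold of zero tolerated changes are assumptions.

```python
import math
from collections import defaultdict

def grid_motion_segmentation(frames, cell_size=0.1, change_threshold=0):
    """Label each point of the latest frame as dynamic (True) or static (False)
    by counting occupancy changes of its grid cell across consecutive frames.

    frames: list of point cloud frames in time order, each a list of (x, y, z).
    """
    def cell(p):
        # Project onto the horizontal plane and quantize to a grid cell index.
        return (math.floor(p[0] / cell_size), math.floor(p[1] / cell_size))

    # Occupancy of each frame: the set of occupied cell indices.
    occupancy = [set(cell(p) for p in pts) for pts in frames]

    # Count, per cell, how often its occupied/unoccupied state flips
    # between consecutive frames.
    change_count = defaultdict(int)
    for prev, curr in zip(occupancy, occupancy[1:]):
        for c in prev.symmetric_difference(curr):
            change_count[c] += 1

    # Points whose cell changed more than the threshold are dynamic.
    return [change_count[cell(p)] > change_threshold for p in frames[-1]]
```

A static object keeps its cell occupied in every frame (zero changes), while a moving object leaves a trail of cells that flip state, so its points are flagged.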
  • the point cloud motion segmentation method may be any one of a variety of point cloud motion segmentation methods, such as a point-level occlusion analysis method, a point cloud motion segmentation method based on a deep learning model, and the like.
  • performing point cloud motion segmentation on point cloud data, and obtaining a point cloud motion segmentation result includes: calling a pre-trained point cloud motion segmentation model, and inputting the point cloud data into the point cloud motion segmentation model;
  • the cloud motion segmentation model classifies the point cloud data, outputs the corresponding motion state of each point in the point cloud data, and generates the point cloud motion segmentation result.
  • the point cloud motion segmentation model is trained by a large number of labeled sample data.
  • the point cloud motion segmentation model can be any one of PointNet, PointNet++, and the like.
  • the computer device can input the point cloud data into the point cloud motion segmentation model, classify the point cloud data through the model, and determine the motion state of each point in the point cloud data, that is, output whether each point is a dynamic point or a static point.
  • performing motion classification of the point cloud data with a pre-trained point cloud motion segmentation model can improve the accuracy of point cloud motion segmentation.
  • the computer device may further preprocess the point cloud data before the point cloud motion segmentation.
  • the way of preprocessing can include downsampling and filtering processing operations.
  • the computer device can use voxel grid filtering to downsample the point cloud data, which removes noise points, reduces the amount of redundant data, improves point cloud processing speed, and limits the loss of detail in the depth direction; the computer device then performs pass-through filtering on the downsampled point cloud data.
  • a data range in the depth direction can be set for the point cloud data, for example 0.6–1.0 m, and points outside this range are removed as interference points.
  • the computer device applies statistical filtering to the point cloud data after pass-through filtering, performs a statistical analysis on the neighborhood of each point, and removes points outside the standard range, thereby reducing isolated redundant points.
  • the computer device can use the sparse outlier removal method: calculate the average distance from each point in the point cloud data to its neighboring points, determine points whose average distance falls outside the standard range as outliers, and remove them from the point cloud data.
  • the standard range is defined by the global distance mean and variance; removing outliers can improve the accuracy of point cloud motion segmentation.
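The three preprocessing stages just described can be sketched as follows. Function names and defaults (0.1 m voxel, the 0.6–1.0 m depth range from the example, k = 4 neighbors, one standard deviation) are illustrative assumptions, not the patent's parameters.

```python
import math
from collections import defaultdict

def voxel_downsample(points, voxel=0.1):
    """Voxel grid filter: replace all points in each voxel by their centroid."""
    buckets = defaultdict(list)
    for p in points:
        key = tuple(math.floor(c / voxel) for c in p)
        buckets[key].append(p)
    return [tuple(sum(c) / len(ps) for c in zip(*ps)) for ps in buckets.values()]

def passthrough(points, axis=2, lo=0.6, hi=1.0):
    """Pass-through filter: keep points whose coordinate on `axis` is in [lo, hi]."""
    return [p for p in points if lo <= p[axis] <= hi]

def remove_outliers(points, k=4, std_ratio=1.0):
    """Sparse outlier removal: drop points whose mean distance to their k
    nearest neighbors exceeds the global mean plus std_ratio deviations."""
    mean_knn = []
    for p in points:
        ds = sorted(math.dist(p, q) for q in points if q is not p)
        mean_knn.append(sum(ds[:k]) / k)
    mu = sum(mean_knn) / len(mean_knn)
    var = sum((d - mu) ** 2 for d in mean_knn) / len(mean_knn)
    limit = mu + std_ratio * math.sqrt(var)
    return [p for p, d in zip(points, mean_knn) if d <= limit]
```

The brute-force neighbor search keeps the sketch dependency-free; a real pipeline would use a k-d tree (e.g. via Open3D or SciPy) for the neighborhood queries.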
  • Step 206 Perform ground filtering processing on each frame of point cloud data to obtain multiple frames of non-ground point cloud data.
  • Step 208 Perform cluster analysis on multiple frames of non-ground point cloud data to obtain a three-dimensional bounding box corresponding to each frame of non-ground point cloud data.
  • in addition to performing point cloud motion segmentation on the point cloud data, the computer device performs 3D object-level occlusion analysis on the point cloud data to assist the traditional point cloud motion segmentation method and improve the accuracy of point cloud motion segmentation.
  • before performing occlusion analysis at the 3D target level, the computer device needs to perform ground filtering, cluster analysis and 3D bounding box identification on each frame of point cloud data.
  • Ground filtering refers to filtering out the ground points in the point cloud data, and the remaining points are non-ground points.
  • the computer equipment can identify the ground points in each frame of point cloud data by performing ground segmentation on each frame of point cloud data, thereby filtering the ground points in each frame of point cloud data to obtain each frame of non-ground point cloud data.
  • the computer device may first divide the point cloud area where the point cloud data is located into a plurality of sub-areas.
  • the point cloud area refers to the three-dimensional data space where the point cloud data is located.
  • the division method may be grid division of the point cloud area, that is, division of the horizontal plane formed by the point cloud area in the x-axis direction and the y-axis direction.
  • the grid division may be equal division or random division; for example, for a vehicle-mounted sensor with a visible range of 100 m whose scanning area covers a 100 m × 100 m horizontal plane, the point cloud area can be equally divided into 10 × 10 horizontal grid cells.
  • the computer device may use the least squares method to estimate the corresponding ground according to the preset plane equation, so as to obtain the corresponding ground of each sub-region.
  • the preset plane equation may be a ternary linear equation.
  • the ground corresponding to each sub-region is embodied in the form of a ternary linear equation.
  • the computer device substitutes the coordinates of the points in each sub-region into the equation of the corresponding ground, calculates the distance between each point and the corresponding ground, and determines a point as a ground point when the distance is less than a threshold and as a non-ground point when the distance is greater than or equal to the threshold; the threshold is the distance threshold used to judge whether a point is a ground point.
  • the computer equipment further filters the ground points to obtain non-ground points in each frame of point cloud data, thereby obtaining each frame of non-ground point cloud data.
  • the ground segmentation method may be any one of a depth image and geometric relationship-based method, a normal vector method, an absolute height method, an average height method, and the like.
  • the computer equipment performs cluster analysis on multiple frames of non-ground point cloud data, and obtains a three-dimensional bounding box corresponding to each target object in each frame of non-ground point cloud data.
  • Cluster analysis refers to clustering the points corresponding to the same target object together, and identifying the three-dimensional bounding box corresponding to each target object.
  • Cluster analysis can include two steps of clustering and identifying three-dimensional bounding boxes.
  • the clustering method may be any one of clustering algorithms such as connected-component (connected domain) clustering and the DBSCAN (Density-Based Spatial Clustering of Applications with Noise) algorithm.
  • the method for identifying the three-dimensional bounding box corresponding to each target object may be any one of the L-shape fitting method, the principal component analysis method, and the like.
  • the three-dimensional bounding box may include the center point coordinates, size, orientation, etc. of the target object.
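A minimal sketch of the two cluster-analysis steps: a connected-component clustering (one of the options the text names; a stand-in for DBSCAN) followed by an axis-aligned bounding box, which is the simplest variant of the box attributes listed above. The `eps` radius and function names are assumptions; the patent's oriented boxes (L-shape fitting, PCA) are not reproduced here.

```python
import math
from collections import deque

def euclidean_cluster(points, eps=0.5):
    """Connected-component clustering: points within `eps` of each other
    (transitively) form one cluster. Returns lists of point indices."""
    unvisited = set(range(len(points)))
    clusters = []
    while unvisited:
        seed = unvisited.pop()
        queue, cluster = deque([seed]), [seed]
        while queue:
            i = queue.popleft()
            near = [j for j in unvisited if math.dist(points[i], points[j]) <= eps]
            for j in near:
                unvisited.remove(j)
                queue.append(j)
                cluster.append(j)
        clusters.append(cluster)
    return clusters

def bounding_box(points, idx):
    """Axis-aligned 3D bounding box of one cluster: (center, size)."""
    xs, ys, zs = zip(*(points[i] for i in idx))
    lo = (min(xs), min(ys), min(zs))
    hi = (max(xs), max(ys), max(zs))
    center = tuple((a + b) / 2 for a, b in zip(lo, hi))
    size = tuple(b - a for a, b in zip(lo, hi))
    return center, size
```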
  • Step 210 Perform occlusion analysis on the three-dimensional bounding box to determine the motion state of the point cloud corresponding to the three-dimensional bounding box.
  • Occlusion analysis refers to identifying the motion state of point cloud data according to the characteristics of the laser beam.
  • the occlusion characteristic of the laser beam means that when multiple objects lie on the same laser scanning zone, the lidar directly measures only the target object closest to the vehicle; consequently, there is no other target object between the lidar and the measured target object.
  • the laser scanning zone can be the scanning range of the laser beam emitted by the lidar at a certain time; therefore, the computer device can use this characteristic of the laser beam to perform occlusion analysis on the bounding boxes of the target objects corresponding to the multiple frames of non-ground point cloud data.
  • specifically, the computer device can perform occlusion analysis on the lines connecting the vehicle to each 3D bounding box of the current frame and to each 3D bounding box of the historical frames; if a bounding box is determined to be dynamic, all points in that three-dimensional bounding box are determined as dynamic points.
  • Step 212 when the motion state of the point cloud is dynamic, update the point cloud motion segmentation result according to the non-ground point cloud data corresponding to the three-dimensional bounding box to obtain the target point cloud motion segmentation result.
  • the computer device can update the point cloud motion segmentation result according to the dynamic points obtained by the occlusion analysis, correcting points that were wrongly identified as static in the point cloud motion segmentation result, and obtain the target point cloud motion segmentation result.
  • the computer equipment performs point cloud motion segmentation on the acquired multi-frame point cloud data to obtain the point cloud motion segmentation result.
  • the computer device performs ground filtering processing on each frame of point cloud data to obtain multiple frames of non-ground point cloud data; cluster analysis of the multiple frames of non-ground point cloud data groups the points belonging to the same target object together and yields the three-dimensional bounding box corresponding to each target object.
  • the computer equipment performs occlusion analysis on the three-dimensional bounding box, and determines the points in the dynamic three-dimensional bounding box as dynamic points.
  • occlusion analysis at the level of the 3D bounding box avoids the situation in which points corresponding to the same target object are assigned different motion states, and yields a more accurate point cloud motion state and thus more accurate dynamic points; the computer device then updates the point cloud motion segmentation result according to these dynamic points, improving the accuracy of point cloud motion segmentation.
  • the occlusion analysis is performed on the three-dimensional bounding box, and the step of determining the motion state of the point cloud corresponding to the three-dimensional bounding box includes:
  • Step 302 in the three-dimensional bounding box, determine the current three-dimensional bounding box corresponding to the non-ground point cloud data of the current frame and the historical three-dimensional bounding box corresponding to the non-ground point cloud data of the historical frame.
  • Step 304 Perform occlusion analysis on the current 3D bounding box and the historical 3D bounding box, and identify the motion state of the point cloud corresponding to the current 3D bounding box.
  • the computer device may determine the current three-dimensional bounding box corresponding to the non-ground point cloud data of the current frame and the historical three-dimensional bounding box corresponding to the non-ground point cloud data of the historical frame in the three-dimensional bounding box obtained by the cluster analysis.
  • the historical frame may be a certain historical frame, or may be multiple historical frames within a preset time period.
  • the current 3D bounding box and the historical 3D bounding box may be one or multiple. Therefore, the computer device performs occlusion analysis on the current three-dimensional bounding box and the historical three-dimensional bounding box.
  • performing occlusion analysis on the current 3D bounding box and the historical 3D bounding boxes and identifying the motion state of the point cloud corresponding to the current 3D bounding box includes: acquiring the point cloud scan line corresponding to each historical 3D bounding box; identifying the positional relationship between the current 3D bounding box and each point cloud scan line; and identifying the motion state of the point cloud corresponding to the current 3D bounding box according to the positional relationship.
  • the computer device acquires the point cloud scan line corresponding to the historical three-dimensional bounding box of each frame in the historical frame.
  • the point cloud scan line refers to the line connecting the vehicle to a historical 3D bounding box along the laser scanning zone emitted by the lidar.
  • the point cloud scan line includes the scan angle of the lidar; the computer device therefore identifies the positional relationship between the current 3D bounding box and each point cloud scan line. Since there is no other target object on the line connecting a historical 3D bounding box and the vehicle, when the computer device finds a current 3D bounding box located in the middle of a point cloud scan line, that bounding box can be determined as a dynamic target, and the points in it are all dynamic points.
  • the computer device only needs to detect that a current three-dimensional bounding box lies in the middle of a point cloud scan line to determine it as a dynamic target and determine the points in it as dynamic points, so the motion state of the point cloud can be judged quickly.
  • for example, at time t-k the lidar scans the target object z(n, 1), indicating that there is no other target object on the line between the vehicle and z(n, 1); at time t, the lidar measures the target object z(m, 1) in the middle of the point cloud scan line formed from the vehicle to z(n, 1) at time t-k, indicating that z(m, 1) has moved relative to time t-k. Therefore, the computer device can determine the points in the three-dimensional bounding box corresponding to z(m, 1) as dynamic points.
  • the computer device performs occlusion analysis on the current 3D bounding box and the historical 3D bounding box, and can quickly and accurately determine the motion state of the point cloud by using the characteristics of the laser beam.
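The scan-line occlusion test above can be sketched in the horizontal plane: a current box is flagged dynamic when it lies on the same bearing from the sensor as a historical box but at a smaller range, i.e. in the middle of that scan line. The function names, the angular tolerance, and the use of box centers (rather than full box extents, and ignoring bearing wraparound at ±π) are simplifying assumptions.

```python
import math

def occludes(current_center, hist_center, angle_tol=0.05):
    """True if the current box sits in the middle of the scan line from the
    sensor (at the origin) to a historical box: same bearing, smaller range."""
    cx, cy = current_center[0], current_center[1]
    hx, hy = hist_center[0], hist_center[1]
    same_bearing = abs(math.atan2(cy, cx) - math.atan2(hy, hx)) < angle_tol
    closer = math.hypot(cx, cy) < math.hypot(hx, hy)
    return same_bearing and closer

def classify_boxes(current_boxes, history_boxes):
    """Motion state per current box: dynamic (True) if it occludes any
    historical box, otherwise left as-is (False)."""
    return [any(occludes(c, h) for h in history_boxes) for c in current_boxes]
```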
  • since the 3D bounding box corresponds to a real 3D target, occlusion analysis at the bounding-box level avoids the situation in which points corresponding to the same target object are assigned different motion states, and can obtain more accurate dynamic points.
  • updating the point cloud motion segmentation result according to the non-ground point cloud data corresponding to the 3D bounding box to obtain the target point cloud motion segmentation result includes: extracting the dynamic point cloud data corresponding to the 3D bounding box; determining, in the point cloud motion segmentation result, the segmentation result corresponding to each dynamic point in the dynamic point cloud data; and, when the determined segmentation result is a static point, replacing it with a dynamic point.
  • the computer device can update the point cloud motion segmentation result according to the dynamic point cloud data corresponding to a dynamic three-dimensional bounding box. Specifically, the computer device extracts the dynamic point cloud data, which includes a plurality of dynamic points, and determines the segmentation result corresponding to each dynamic point in the point cloud motion segmentation result. When the determined result is a static point, the computer device updates it to a dynamic point, thereby improving the recognition accuracy of dynamic points in the point cloud motion segmentation result and thus the accuracy of dynamic target detection.
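The update step reduces to overriding per-point labels with the box-level result. A minimal sketch, assuming string labels and a list of indices of points inside dynamic boxes (both representational choices are ours, not the patent's):

```python
def update_segmentation(labels, dynamic_indices):
    """Merge box-level occlusion results into the point-level segmentation:
    any point inside a dynamic 3D bounding box is forced to 'dynamic'."""
    updated = list(labels)
    for i in dynamic_indices:
        if updated[i] == "static":  # correct a mislabeled static point
            updated[i] = "dynamic"
    return updated
```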
  • performing ground filtering processing on each frame of point cloud data to obtain multiple frames of non-ground point cloud data includes: dividing the point cloud area corresponding to each frame of point cloud data into multiple grid cells; calculating, according to a preset plane equation, the ground corresponding to the point cloud data in each cell; calculating the distance value between each point in each cell and the corresponding ground; and filtering out the points whose distance value is less than a first threshold to obtain the multiple frames of non-ground point cloud data.
  • the point cloud area refers to the data space where the point cloud data of each frame is located.
  • the computer equipment may divide the point cloud area corresponding to each frame of point cloud data into grids to obtain multiple grids.
  • grid division refers to dividing the point cloud area in the x-axis direction and the y-axis direction, that is, dividing the point cloud area in the xy-plane.
  • the computer device may divide the data area corresponding to the extracted point cloud data in the x-axis direction and the y-axis direction according to preset parameters.
  • the preset parameter may be a parameter for grid division of the data region where the extracted point cloud data is located.
  • the preset parameter may be length*width, indicating the length and width of each grid obtained after grid division.
  • the length and width can be the same or different.
  • the preset parameter may also be equal division.
  • the height of multiple grids is the same.
  • the computer device may first divide the point cloud area corresponding to the extracted point cloud data in the x-axis direction according to preset parameters, and then divide the point cloud area corresponding to the extracted point cloud data in the y-axis direction according to the preset parameters.
  • the point cloud area corresponding to the extracted point cloud data may also be firstly divided in the y-axis direction according to preset parameters, and then the point cloud area corresponding to the extracted point cloud data may be divided in the x-axis direction according to the preset parameters.
  • the computer device obtains the preset plane equation, and calculates the ground corresponding to the point cloud data in each grid according to the preset plane equation and the least square method, so as to obtain the ground corresponding to each grid.
  • the ground corresponding to each grid is embodied in the form of a ternary linear equation.
  • the computer equipment traverses and inputs the point coordinates of each point in the corresponding grid in the equation corresponding to the ground, and calculates the distance value between each point and the corresponding ground.
  • a first threshold for judging the type of the point is pre-stored in the computer device.
  • Classes of points can include ground points as well as non-ground points.
  • the computer device compares the distance value with the first threshold; when the distance value is smaller than the first threshold, the point is a ground point, and when the distance value is greater than or equal to the first threshold, the point is a non-ground point.
  • the computer equipment filters the points whose distance value is less than the first threshold, and the remaining points constitute non-ground point cloud data.
  • the point cloud area corresponding to each frame of point cloud data is divided into multiple grid cells so that the ground corresponding to the point cloud data in each cell can be calculated, and the ground points are filtered by calculating the distance between each point and the corresponding ground. Since grid division only requires dividing the data area in the x-axis and y-axis directions, ground filtering can be performed quickly in the unmanned driving scenario, where the computing resources of the computer device are limited and real-time requirements are high.
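The per-cell ground estimate and distance filter can be sketched as follows: a least-squares fit of the "ternary linear equation" z = a·x + b·y + c to one cell's points, then a point-to-plane distance test. Function names and the 0.2 m first threshold are illustrative assumptions.

```python
import math

def fit_ground_plane(points):
    """Least-squares fit of z = a*x + b*y + c to one grid cell's points."""
    # Accumulate the normal equations A^T A w = A^T z for w = (a, b, c).
    sxx = sxy = sx = syy = sy = n = sxz = syz = sz = 0.0
    for x, y, z in points:
        sxx += x * x; sxy += x * y; sx += x
        syy += y * y; sy += y; n += 1
        sxz += x * z; syz += y * z; sz += z
    M = [[sxx, sxy, sx, sxz], [sxy, syy, sy, syz], [sx, sy, n, sz]]
    # Gauss-Jordan elimination on the 3x4 augmented matrix.
    for i in range(3):
        piv = max(range(i, 3), key=lambda r: abs(M[r][i]))
        M[i], M[piv] = M[piv], M[i]
        for r in range(3):
            if r != i and M[i][i] != 0:
                f = M[r][i] / M[i][i]
                M[r] = [mr - f * mi for mr, mi in zip(M[r], M[i])]
    a, b, c = (M[i][3] / M[i][i] for i in range(3))
    return a, b, c

def split_ground(points, plane, threshold=0.2):
    """Filter out points within `threshold` of the plane (ground points);
    the remaining points form the non-ground point cloud data."""
    a, b, c = plane
    norm = math.sqrt(a * a + b * b + 1.0)
    return [p for p in points
            if abs(a * p[0] + b * p[1] + c - p[2]) / norm >= threshold]
```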
  • calculating the ground corresponding to the point cloud data in each grid according to the preset plane equation includes: selecting the point with the smallest height value in the point cloud data corresponding to each grid; calculating the height difference between each point in the point cloud data corresponding to each grid and the point with the smallest height value; extracting the points whose height difference is less than a second threshold; and performing plane fitting on the extracted points according to the preset plane equation to obtain the ground corresponding to the point cloud data in each grid.
  • after the computer device divides the point cloud area corresponding to each frame of point cloud data and obtains multiple grids, it can select the point with the smallest height value in the point cloud data corresponding to each grid, and calculate the height difference between each point in the point cloud data of the corresponding grid and the point with the smallest height value.
  • a second threshold for judging whether it is a plane fitting point is pre-stored in the computer device. When the height difference is less than the second threshold, it indicates that the point corresponding to the height difference is a plane fitting point.
  • the computer device compares the height difference with the second threshold, selects the points whose height difference is less than the second threshold, and performs plane fitting on the selected points according to the preset plane equation to obtain the ground corresponding to the point cloud data in each grid.
  • the computer device selects the plane-fitting points by calculating the height difference between each point in the point cloud data of each grid and the point with the smallest height value, and comparing the height difference with the second threshold. Because the point with the smallest height value is the most likely to be a ground point, computing the height difference of each point relative to it determines the plane-fitting points more accurately, which improves ground-estimation efficiency and, in turn, ground-filtering efficiency.
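The seed-selection and fitting step for a single grid cell might look like the following. The function name, the 0.3 height threshold, and the least-squares fit are assumptions for illustration; the application does not fix these values:

```python
import numpy as np

def fit_cell_ground(cell_pts, height_threshold=0.3):
    """Pick candidate ground points by their height difference to the
    lowest point in the cell (the 'second threshold'), then fit the
    plane z = a*x + b*y + c to the candidates by least squares."""
    z_min = cell_pts[:, 2].min()  # the lowest point is most likely ground
    seeds = cell_pts[cell_pts[:, 2] - z_min < height_threshold]
    A = np.c_[seeds[:, :2], np.ones(len(seeds))]
    coeffs, *_ = np.linalg.lstsq(A, seeds[:, 2], rcond=None)
    return coeffs  # (a, b, c) of the estimated ground plane
```

Excluding high points before fitting keeps obstacles such as vehicles or pedestrians from pulling the estimated plane upward.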
  • a motion segmentation device for point cloud data, including: a communication module 402, a point cloud motion segmentation module 404, a ground filtering module 406, a cluster analysis module 408, an occlusion analysis module 410, and an update module 412, where:
  • the communication module 402 is used for acquiring multi-frame point cloud data.
  • the point cloud motion segmentation module 404 is configured to perform point cloud motion segmentation on the point cloud data to obtain a point cloud motion segmentation result.
  • the ground filtering module 406 is configured to perform ground filtering processing on each frame of point cloud data to obtain multiple frames of non-ground point cloud data.
  • the cluster analysis module 408 is configured to perform cluster analysis on multiple frames of non-ground point cloud data to obtain a three-dimensional bounding box corresponding to each frame of non-ground point cloud data.
  • the occlusion analysis module 410 is configured to perform occlusion analysis on the three-dimensional bounding box, and determine the motion state of the point cloud corresponding to the three-dimensional bounding box.
  • the updating module 412 is configured to update the point cloud motion segmentation result according to the non-ground point cloud data corresponding to the three-dimensional bounding box when the motion state of the point cloud is dynamic, so as to obtain the target point cloud motion segmentation result.
  • the occlusion analysis module 410 is further configured to determine, among the three-dimensional bounding boxes, the current three-dimensional bounding box corresponding to the non-ground point cloud data of the current frame and the historical three-dimensional bounding boxes corresponding to the non-ground point cloud data of historical frames, and to perform occlusion analysis on the current three-dimensional bounding box and the historical three-dimensional bounding boxes to identify the motion state of the point cloud corresponding to the current three-dimensional bounding box.
  • the occlusion analysis module 410 is further configured to obtain the point cloud scan lines corresponding to each historical three-dimensional bounding box; identify the positional relationship between the current three-dimensional bounding box and the point cloud scan lines; and identify, according to the positional relationship, the motion state of the point cloud corresponding to the current bounding box.
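One plausible geometric core for this scan-line check is a segment-versus-box intersection test: if a historical laser ray passed through the space now occupied by the current box, that space was free in the past, which suggests the object inside the box is dynamic. The slab method, the function names, and the vote threshold below are assumptions for illustration, not the patent's specified procedure:

```python
import numpy as np

def ray_hits_box(origin, endpoint, box_min, box_max):
    """Slab test: does the segment origin->endpoint pass through the
    axis-aligned box [box_min, box_max]?"""
    d = endpoint - origin
    t0, t1 = 0.0, 1.0  # parameter range of the segment
    for i in range(3):
        if abs(d[i]) < 1e-12:
            # Segment parallel to this slab: must start inside it.
            if origin[i] < box_min[i] or origin[i] > box_max[i]:
                return False
            continue
        ta = (box_min[i] - origin[i]) / d[i]
        tb = (box_max[i] - origin[i]) / d[i]
        ta, tb = min(ta, tb), max(ta, tb)
        t0, t1 = max(t0, ta), min(t1, tb)
        if t0 > t1:
            return False
    return True

def box_is_dynamic(box_min, box_max, sensor_origin, historical_points,
                   min_hits=3):
    """Vote: enough historical scan lines crossing the current box
    implies the space was previously free, hence a dynamic object."""
    hits = sum(ray_hits_box(sensor_origin, p, box_min, box_max)
               for p in historical_points)
    return hits >= min_hits
```

Requiring several crossing rays rather than a single one makes the decision robust to stray returns.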
  • the update module 412 is further configured to extract the dynamic point cloud data corresponding to the three-dimensional bounding box; determine, in the point cloud motion segmentation result, the segmentation result corresponding to each dynamic point in the dynamic point cloud data; and, when the determined segmentation result is a static point, replace it with a dynamic point.
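The replacement rule in this bullet reduces to a small loop. The string label encoding is an assumption; the patent does not specify how motion states are represented:

```python
def update_segmentation(labels, dynamic_indices):
    """For each point inside a box judged dynamic, promote a 'static'
    label to 'dynamic'; points already labeled dynamic are left alone."""
    for i in dynamic_indices:
        if labels[i] == "static":
            labels[i] = "dynamic"
    return labels
```

This is what lets the occlusion analysis correct points that the learned per-point segmentation missed.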
  • the ground filtering module 406 is further configured to divide the point cloud area corresponding to each frame of point cloud data into multiple grids; calculate the ground corresponding to the point cloud data in each grid according to a preset plane equation ; Calculate the distance value between each point in the point cloud data of each grid and the corresponding ground; filter the points whose distance value is less than the first threshold to obtain multiple frames of non-ground point cloud data.
  • the ground filtering module 406 is further configured to select the point with the smallest height value in the point cloud data corresponding to each grid; calculate the height difference between each point in the point cloud data corresponding to each grid and the point with the smallest height value; extract the points whose height difference is less than the second threshold; and perform plane fitting on the extracted points according to the preset plane equation to obtain the ground corresponding to the point cloud data in each grid.
  • the point cloud motion segmentation module 404 is further configured to call a pre-trained point cloud motion segmentation model, input the point cloud data into the point cloud motion segmentation model, classify the point cloud data through the model, output the motion state corresponding to each point in the point cloud data, and generate the point cloud motion segmentation result.
  • Each module in the above-mentioned point cloud data motion segmentation device can be implemented in whole or in part by software, hardware and combinations thereof.
  • the above modules can be embedded in or independent of the processor in the computer device in the form of hardware, or stored in the memory in the computer device in the form of software, so that the processor can call and execute the operations corresponding to the above modules.
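As a rough illustration of how modules 402 to 412 might be wired together in software, the flow described above (segment, ground-filter, cluster, occlusion-check, update) can be sketched as follows. All class and function names here are hypothetical, not the patent's API:

```python
from typing import Callable, List, Sequence

class MotionSegmentationPipeline:
    """Hypothetical orchestration of the modules; each dependency is
    injected as a plain callable so the data flow mirrors the text."""

    def __init__(self,
                 segment: Callable,        # module 404: per-point motion labels
                 filter_ground: Callable,  # module 406: drop ground points
                 cluster: Callable,        # module 408: frame -> bounding boxes
                 is_dynamic: Callable,     # module 410: occlusion analysis
                 update: Callable):        # module 412: overwrite static labels
        self.segment = segment
        self.filter_ground = filter_ground
        self.cluster = cluster
        self.is_dynamic = is_dynamic
        self.update = update

    def run(self, frames: Sequence) -> List:
        result = self.segment(frames)
        non_ground = [self.filter_ground(f) for f in frames]
        boxes_per_frame = [self.cluster(f) for f in non_ground]
        history = boxes_per_frame[:-1]
        for box in boxes_per_frame[-1]:        # boxes of the current frame
            if self.is_dynamic(box, history):  # dynamic: refine the labels
                result = self.update(result, box)
        return result
```

Injecting the stages as callables keeps each module independently testable, which matches the modular device structure described above.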
  • a computer device is provided, the internal structure of which can be shown in FIG. 5 .
  • the computer device includes a processor, memory, a communication interface, and a database connected by a system bus.
  • the processor of the computer device is used to provide computing and control capabilities.
  • the memory of the computer device includes a non-volatile storage medium and an internal memory.
  • the non-volatile storage medium stores an operating system, computer readable instructions and a database.
  • the internal memory provides an environment for the execution of the operating system and computer-readable instructions in the non-volatile storage medium.
  • the database of the computer device is used to store the point cloud motion segmentation results, the target point cloud motion segmentation results, and the like.
  • the communication interface of the computer device is used to connect and communicate with the vehicle sensor.
  • the computer readable instructions when executed by a processor, implement a method for motion segmentation of point cloud data.
  • FIG. 5 is only a block diagram of part of the structure related to the solution of the present application and does not constitute a limitation on the computer device to which the solution is applied; a specific computer device may include more or fewer components than shown in the figure, combine certain components, or have a different arrangement of components.
  • a computer device includes a memory and one or more processors, the memory stores computer-readable instructions, and when the computer-readable instructions are executed by the one or more processors, the one or more processors perform the following steps: acquiring multi-frame point cloud data; performing point cloud motion segmentation on the point cloud data to obtain a point cloud motion segmentation result; performing ground filtering processing on each frame of point cloud data to obtain multiple frames of non-ground point cloud data; performing cluster analysis on the multiple frames of non-ground point cloud data to obtain a three-dimensional bounding box corresponding to each frame of non-ground point cloud data; performing occlusion analysis on the three-dimensional bounding box to determine the motion state of the point cloud corresponding to the three-dimensional bounding box; and, when the motion state of the point cloud is dynamic, updating the point cloud motion segmentation result according to the non-ground point cloud data corresponding to the three-dimensional bounding box to obtain the target point cloud motion segmentation result.
  • the processor further implements the following steps when executing the computer-readable instructions: determining, among the three-dimensional bounding boxes, the current three-dimensional bounding box corresponding to the non-ground point cloud data of the current frame and the historical three-dimensional bounding boxes corresponding to the non-ground point cloud data of historical frames; and performing occlusion analysis on the current three-dimensional bounding box and the historical three-dimensional bounding boxes to identify the motion state of the point cloud corresponding to the current three-dimensional bounding box.
  • when the processor executes the computer-readable instructions, the following steps are further implemented: acquiring the point cloud scan lines corresponding to each historical three-dimensional bounding box; identifying the positional relationship between the current three-dimensional bounding box and the point cloud scan lines; and identifying, according to the positional relationship, the motion state of the point cloud corresponding to the current bounding box.
  • the processor further implements the following steps when executing the computer-readable instructions: extracting the dynamic point cloud data corresponding to the three-dimensional bounding box; determining, in the point cloud motion segmentation result, the segmentation result corresponding to each dynamic point in the dynamic point cloud data; and, when the determined segmentation result is a static point, replacing it with a dynamic point.
  • the processor further implements the following steps when executing the computer-readable instructions: dividing the point cloud area corresponding to each frame of point cloud data into a plurality of grids; calculating, according to a preset plane equation, the ground corresponding to the point cloud data in each grid; calculating the distance value between each point in the point cloud data of each grid and the corresponding ground; and filtering the points whose distance value is less than the first threshold to obtain multiple frames of non-ground point cloud data.
  • the processor further implements the following steps when executing the computer-readable instructions: selecting the point with the smallest height value in the point cloud data corresponding to each grid; calculating the height difference between each point in the point cloud data corresponding to each grid and the point with the smallest height value; extracting the points whose height difference is less than the second threshold; and performing plane fitting on the extracted points according to the preset plane equation to obtain the ground corresponding to the point cloud data in each grid.
  • the processor further implements the following steps when executing the computer-readable instructions: calling a pre-trained point cloud motion segmentation model and inputting the point cloud data into the point cloud motion segmentation model; classifying the point cloud data through the model; and outputting the motion state corresponding to each point in the point cloud data to generate the point cloud motion segmentation result.
  • One or more non-volatile computer-readable storage media storing computer-readable instructions that, when executed by one or more processors, cause the one or more processors to perform the following steps: acquiring multi-frame point cloud data; performing point cloud motion segmentation on the point cloud data to obtain a point cloud motion segmentation result; performing ground filtering processing on each frame of point cloud data to obtain multiple frames of non-ground point cloud data; performing cluster analysis on the multiple frames of non-ground point cloud data to obtain a three-dimensional bounding box corresponding to each frame of non-ground point cloud data; performing occlusion analysis on the three-dimensional bounding box to determine the motion state of the point cloud corresponding to the three-dimensional bounding box; and, when the motion state of the point cloud is dynamic, updating the point cloud motion segmentation result according to the non-ground point cloud data corresponding to the three-dimensional bounding box to obtain the target point cloud motion segmentation result.
  • the following steps are further implemented: determining, among the three-dimensional bounding boxes, the current three-dimensional bounding box corresponding to the non-ground point cloud data of the current frame and the historical three-dimensional bounding boxes corresponding to the non-ground point cloud data of historical frames; and performing occlusion analysis on the current three-dimensional bounding box and the historical three-dimensional bounding boxes to identify the motion state of the point cloud corresponding to the current three-dimensional bounding box.
  • the computer-readable instructions further implement the following steps when executed by the processor: acquiring point cloud scan lines corresponding to each historical 3D bounding box; identifying the positional relationship between the current 3D bounding box and the point cloud scan lines; Identify the motion state of the point cloud corresponding to the current bounding box according to the positional relationship.
  • the following steps are further implemented: extracting the dynamic point cloud data corresponding to the three-dimensional bounding box; determining, in the point cloud motion segmentation result, the segmentation result corresponding to each dynamic point in the dynamic point cloud data; and, when the determined segmentation result is a static point, replacing it with a dynamic point.
  • the following steps are further implemented: dividing the point cloud area corresponding to each frame of point cloud data into a plurality of grids; calculating, according to a preset plane equation, the ground corresponding to the point cloud data in each grid; calculating the distance value between each point in the point cloud data of each grid and the corresponding ground; and filtering the points whose distance value is less than the first threshold to obtain multiple frames of non-ground point cloud data.
  • the following steps are further implemented: selecting the point with the smallest height value in the point cloud data corresponding to each grid; calculating the height difference between each point in the point cloud data corresponding to each grid and the point with the smallest height value; extracting the points whose height difference is less than the second threshold; and performing plane fitting on the extracted points according to the preset plane equation to obtain the ground corresponding to the point cloud data in each grid.
  • the following steps are further implemented: calling a pre-trained point cloud motion segmentation model and inputting the point cloud data into the point cloud motion segmentation model; classifying the point cloud data through the model; and outputting the motion state corresponding to each point in the point cloud data to generate the point cloud motion segmentation result.
  • Nonvolatile memory may include read only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory.
  • Volatile memory may include random access memory (RAM) or external cache memory.
  • RAM is available in various forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Length Measuring Devices By Optical Means (AREA)
  • Traffic Control Systems (AREA)

Abstract

A motion segmentation method for point cloud data comprises the steps of: acquiring multiple frames of point cloud data; performing point cloud motion segmentation on the point cloud data to obtain a point cloud motion segmentation result; performing ground filtering processing on each frame of point cloud data to obtain multiple frames of non-ground point cloud data; performing cluster analysis on the multiple frames of non-ground point cloud data to obtain a 3D bounding box corresponding to each frame of non-ground point cloud data; performing occlusion analysis on the 3D bounding box to determine the motion state of the point cloud corresponding to the 3D bounding box; and, when the motion state of the point cloud is dynamic, updating the point cloud motion segmentation result according to the non-ground point cloud data corresponding to the 3D bounding box to obtain a target point cloud motion segmentation result.
PCT/CN2020/128264 2020-11-12 2020-11-12 Method and apparatus for motion segmentation of point cloud data, computer device and storage medium WO2022099530A1 (fr)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202080092973.6A CN115066708A (zh) 2020-11-12 2020-11-12 Point cloud data motion segmentation method and apparatus, computer device and storage medium
PCT/CN2020/128264 WO2022099530A1 (fr) 2020-11-12 2020-11-12 Method and apparatus for motion segmentation of point cloud data, computer device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2020/128264 WO2022099530A1 (fr) 2020-11-12 2020-11-12 Method and apparatus for motion segmentation of point cloud data, computer device and storage medium

Publications (1)

Publication Number Publication Date
WO2022099530A1 true WO2022099530A1 (fr) 2022-05-19

Family

ID=81601955

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/128264 WO2022099530A1 (fr) 2020-11-12 2020-11-12 Method and apparatus for motion segmentation of point cloud data, computer device and storage medium

Country Status (2)

Country Link
CN (1) CN115066708A (fr)
WO (1) WO2022099530A1 (fr)

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114782438A (zh) * 2022-06-20 2022-07-22 深圳市信润富联数字科技有限公司 物体点云修正方法、装置、电子设备和存储介质
CN114924289A (zh) * 2022-06-14 2022-08-19 燕山大学 一种激光雷达点云目标拟合算法
CN115187713A (zh) * 2022-09-08 2022-10-14 山东信通电子股份有限公司 一种用于加速点云选点操作的方法、设备及介质
CN115236637A (zh) * 2022-06-20 2022-10-25 重庆长安汽车股份有限公司 一种基于车辆数据的车载激光雷达点云地面提取方法
CN115719354A (zh) * 2022-11-17 2023-02-28 同济大学 基于激光点云提取立杆的方法与装置
CN116402956A (zh) * 2023-06-02 2023-07-07 深圳大学 智能驱动的三维物体可交互重建方法、装置、设备和介质
CN116797704A (zh) * 2023-08-24 2023-09-22 山东云海国创云计算装备产业创新中心有限公司 点云数据处理方法、系统、装置、电子设备及存储介质
CN117968682A (zh) * 2024-04-01 2024-05-03 山东大学 基于多线激光雷达和惯性测量单元的动态点云去除方法
CN118053153A (zh) * 2024-04-16 2024-05-17 之江实验室 一种点云数据的识别方法、装置、存储介质及电子设备

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115311457B (zh) * 2022-10-09 2023-03-24 广东汇天航空航天科技有限公司 点云数据处理方法、计算设备、飞行装置及存储介质

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150254499A1 (en) * 2014-03-07 2015-09-10 Chevron U.S.A. Inc. Multi-view 3d object recognition from a point cloud and change detection
CN110275153A (zh) * 2019-07-05 2019-09-24 上海大学 一种基于激光雷达的水面目标检测与跟踪方法
CN111260683A (zh) * 2020-01-09 2020-06-09 合肥工业大学 一种三维点云数据的目标检测与跟踪方法及其装置
CN111461245A (zh) * 2020-04-09 2020-07-28 武汉大学 一种融合点云和图像的轮式机器人语义建图方法及系统

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9043069B1 (en) * 2012-11-07 2015-05-26 Google Inc. Methods and systems for scan matching approaches for vehicle heading estimation
CN111402414B (zh) * 2020-03-10 2024-05-24 北京京东叁佰陆拾度电子商务有限公司 一种点云地图构建方法、装置、设备和存储介质

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150254499A1 (en) * 2014-03-07 2015-09-10 Chevron U.S.A. Inc. Multi-view 3d object recognition from a point cloud and change detection
CN110275153A (zh) * 2019-07-05 2019-09-24 上海大学 一种基于激光雷达的水面目标检测与跟踪方法
CN111260683A (zh) * 2020-01-09 2020-06-09 合肥工业大学 一种三维点云数据的目标检测与跟踪方法及其装置
CN111461245A (zh) * 2020-04-09 2020-07-28 武汉大学 一种融合点云和图像的轮式机器人语义建图方法及系统

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114924289A (zh) * 2022-06-14 2022-08-19 燕山大学 一种激光雷达点云目标拟合算法
CN114782438B (zh) * 2022-06-20 2022-09-16 深圳市信润富联数字科技有限公司 物体点云修正方法、装置、电子设备和存储介质
CN115236637A (zh) * 2022-06-20 2022-10-25 重庆长安汽车股份有限公司 一种基于车辆数据的车载激光雷达点云地面提取方法
CN114782438A (zh) * 2022-06-20 2022-07-22 深圳市信润富联数字科技有限公司 物体点云修正方法、装置、电子设备和存储介质
CN115187713A (zh) * 2022-09-08 2022-10-14 山东信通电子股份有限公司 一种用于加速点云选点操作的方法、设备及介质
CN115719354B (zh) * 2022-11-17 2024-03-22 同济大学 基于激光点云提取立杆的方法与装置
CN115719354A (zh) * 2022-11-17 2023-02-28 同济大学 基于激光点云提取立杆的方法与装置
CN116402956A (zh) * 2023-06-02 2023-07-07 深圳大学 智能驱动的三维物体可交互重建方法、装置、设备和介质
CN116402956B (zh) * 2023-06-02 2023-09-22 深圳大学 智能驱动的三维物体可交互重建方法、装置、设备和介质
CN116797704A (zh) * 2023-08-24 2023-09-22 山东云海国创云计算装备产业创新中心有限公司 点云数据处理方法、系统、装置、电子设备及存储介质
CN116797704B (zh) * 2023-08-24 2024-01-23 山东云海国创云计算装备产业创新中心有限公司 点云数据处理方法、系统、装置、电子设备及存储介质
CN117968682A (zh) * 2024-04-01 2024-05-03 山东大学 基于多线激光雷达和惯性测量单元的动态点云去除方法
CN118053153A (zh) * 2024-04-16 2024-05-17 之江实验室 一种点云数据的识别方法、装置、存储介质及电子设备

Also Published As

Publication number Publication date
CN115066708A (zh) 2022-09-16

Similar Documents

Publication Publication Date Title
WO2022099530A1 (fr) Procédé et appareil de segmentation de mouvement pour données de nuage de points, dispositif informatique et support de stockage
CN111160302B (zh) 基于自动驾驶环境的障碍物信息识别方法和装置
WO2022099511A1 (fr) Procédé et appareil de segmentation de sol basée sur des données de nuage de points, et dispositif informatique
CN110458854B (zh) 一种道路边缘检测方法和装置
JP2021523443A (ja) Lidarデータと画像データの関連付け
WO2021134441A1 (fr) Procédé et appareil de contrôle de vitesse de véhicule basé sur la conduite automatisée, et dispositif informatique
WO2021134296A1 (fr) Procédé et appareil de détection d'obstacles, et dispositif informatique et support de stockage
CN110286389B (zh) 一种用于障碍物识别的栅格管理方法
JP5822255B2 (ja) 対象物識別装置及びプログラム
WO2022188663A1 (fr) Procédé et appareil de détection de cible
JP2007527569A (ja) 立体視に基づく差し迫った衝突の検知
WO2022226831A1 (fr) Procédé et appareil de détection d'un obstacle de catégorie indéfinie et dispositif informatique
JP7091686B2 (ja) 立体物認識装置、撮像装置および車両
CN110674705A (zh) 基于多线激光雷达的小型障碍物检测方法及装置
WO2021134285A1 (fr) Procédé et appareil de traitement de suivi d'image, et dispositif informatique et support de stockage
CN113008296B (zh) 用于通过在点云平面上融合传感器数据来检测汽车环境的方法和汽车控制单元
WO2022133770A1 (fr) Procédé de génération de vecteur normal à nuage de points, appareil, dispositif informatique et support de stockage
CN111553946A (zh) 用于去除地面点云的方法及装置、障碍物检测方法及装置
CN116109601A (zh) 一种基于三维激光雷达点云的实时目标检测方法
US20220171975A1 (en) Method for Determining a Semantic Free Space
US11415698B2 (en) Point group data processing device, point group data processing method, point group data processing program, vehicle control device, and vehicle
CN114241448A (zh) 障碍物航向角的获取方法、装置、电子设备及车辆
Giosan et al. Superpixel-based obstacle segmentation from dense stereo urban traffic scenarios using intensity, depth and optical flow information
CN115601435B (zh) 车辆姿态检测方法、装置、车辆及存储介质
CN117148832A (zh) 一种基于多深度相机的移动机器人避障方法

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20961087

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20961087

Country of ref document: EP

Kind code of ref document: A1