WO2020043041A1 - Method and apparatus for segmenting point cloud data, storage medium, and electronic apparatus - Google Patents

Method and apparatus for segmenting point cloud data, storage medium, and electronic apparatus

Info

Publication number
WO2020043041A1
WO2020043041A1 (PCT/CN2019/102486, CN2019102486W)
Authority
WO
WIPO (PCT)
Prior art keywords
point cloud
cloud data
data
target
segmentation
Prior art date
Application number
PCT/CN2019/102486
Other languages
English (en)
French (fr)
Inventor
Zeng Chao (曾超)
Original Assignee
Tencent Technology (Shenzhen) Company Limited (腾讯科技(深圳)有限公司)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology (Shenzhen) Company Limited
Publication of WO2020043041A1
Priority to US 17/019,067 (granted as US11282210B2)

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/30 Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/33 Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/23 Clustering techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/23 Clustering techniques
    • G06F18/232 Non-hierarchical techniques
    • G06F18/2323 Non-hierarchical techniques based on graph theory, e.g. minimum spanning trees [MST] or graph cuts
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/11 Region-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/12 Edge-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/50 Depth or shape recovery
    • G06T7/521 Depth or shape recovery from laser ranging, e.g. using interferometry; from the projection of structured light
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/762 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using clustering, e.g. of similar faces in social networks
    • G06V10/7635 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using clustering, e.g. of similar faces in social networks based on graphs, e.g. graph cuts or spectral clustering
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/60 Type of objects
    • G06V20/64 Three-dimensional objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10028 Range image; Depth image; 3D point clouds
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30248 Vehicle exterior or interior
    • G06T2207/30252 Vehicle exterior; Vicinity of vehicle

Definitions

  • This application relates to the field of autonomous driving, and in particular, to segmentation technology of point cloud data.
  • Depth sensors and position/attitude sensors collect three-dimensional information about the surrounding environment from fixed stations or mobile platforms, and are widely used for their efficiency, real-time performance, and high precision. Because a scanned scene may contain different objects, such as the ground, buildings, trees, and vehicles, the point cloud data belonging to different objects must be separated from each other by point cloud segmentation before 3D reconstruction, so that point cloud modeling can be performed separately for each object.
  • Point cloud segmentation algorithms in the related art need to scan the point cloud data multiple times, which is computationally expensive, inefficient, and fails to meet real-time processing requirements.
  • The embodiments of the present application provide a method and device for segmenting point cloud data, a storage medium, and an electronic device, so as to at least solve the technical problem of low point cloud segmentation efficiency in the related art.
  • A method for segmenting point cloud data is provided, including: obtaining target point cloud data, where the target point cloud data is data obtained by scanning a target object around a vehicle with a laser beam; clustering the target point cloud data to obtain multiple first data sets, where the feature points represented by the point cloud data included in each first data set are fitted to the same segmentation line segment, and the feature points are points on the target object; and merging the multiple first data sets according to the distances between the multiple segmentation line segments to obtain a second data set, where the second data set includes at least one first data set.
  • A point cloud data segmentation device is provided, including: an obtaining unit configured to obtain target point cloud data, where the target point cloud data is data obtained by scanning a target object around a vehicle with a laser beam; a clustering unit configured to cluster the target point cloud data to obtain multiple first data sets, where the feature points represented by the point cloud data included in each first data set are fitted to the same segmentation line segment, and the feature points are points on the target object; and a merging unit configured to merge the multiple first data sets according to the distances between the multiple segmentation line segments to obtain a second data set, where the second data set includes at least one first data set.
  • a storage medium is also provided.
  • The storage medium includes a stored program that, when run, performs the above method.
  • An electronic device is also provided, including a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor executes the above method through the computer program.
  • A computer program product including instructions is also provided, which, when run on a computer, causes the computer to execute the above method.
  • In the embodiments of the present application, target point cloud data is obtained, where the target point cloud data is data obtained by scanning a target object around a vehicle with a laser beam; the target point cloud data is clustered to obtain multiple first data sets, where the feature points represented by the point cloud data included in each first data set are fitted to the same segmentation line segment, and the feature points are points on the target object; and the multiple first data sets are merged according to the distances between the multiple segmentation line segments to obtain a second data set, where the second data set includes at least one first data set.
  • Clustering the target point cloud data into multiple first data sets means that the point cloud segmentation is completed after traversing all the point cloud data only once, instead of iterating over the point cloud data multiple times; this solves the technical problem of low point cloud segmentation efficiency in the related art and thereby improves segmentation efficiency.
  • FIG. 1 is a schematic diagram of a hardware environment of a method for segmenting point cloud data according to an embodiment of the present application
  • FIG. 2 is a flowchart of an optional method for segmenting point cloud data according to an embodiment of the present application
  • FIG. 3 is a schematic diagram of an optional lidar according to an embodiment of the present application.
  • FIG. 4 is a schematic diagram of an optional point cloud data according to an embodiment of the present application.
  • FIG. 5 is a schematic diagram of an optional lidar scene according to an embodiment of the present application.
  • FIG. 6 is a flowchart of an optional method for segmenting point cloud data according to an embodiment of the present application.
  • FIG. 7 is a schematic diagram of an optional adaptive distance threshold according to an embodiment of the present application.
  • FIG. 8 is a schematic diagram of an optional segmentation of point cloud data according to an embodiment of the present application.
  • FIG. 9 is a schematic diagram of an optional device for segmenting point cloud data according to an embodiment of the present application.
  • FIG. 10 is a structural block diagram of a terminal according to an embodiment of the present application.
  • Autonomous vehicles, also known as self-driving cars, computer-driven cars, or wheeled mobile robots, are smart cars that achieve driverless operation through computer systems.
  • High-definition maps are maps with high precision and fine definition; decimeter-level accuracy is required to distinguish individual lanes.
  • The fine definition requires formatting and storing the various traffic elements in a traffic scene, including the road map data of traditional maps, lane network data, lane lines, and traffic signs.
  • a method embodiment of a method for segmenting point cloud data is provided.
  • The foregoing method for segmenting point cloud data may be applied to a hardware environment composed of a server 101 and/or a terminal 103, as shown in FIG. 1.
  • the server 101 is connected to the terminal 103 through the network, and can be used to provide services (such as game services, application services, map services, autonomous driving, etc.) to the terminal or the client installed on the terminal.
  • a database 105 is provided independently of the server 101 to provide data storage services for the server 101.
  • the above networks include, but are not limited to: a wide area network, a metropolitan area network, or a local area network.
  • The terminal 103 is a smart terminal that can be used in a vehicle, including but not limited to in-vehicle devices, mobile phones, tablets, etc.
  • FIG. 2 is a flowchart of an optional method for segmenting point cloud data according to the embodiment of the present application. As shown in FIG. 2, the method may include the following steps:
  • Step S202 The server obtains target point cloud data.
  • the target point cloud data is data obtained by scanning a target object around the vehicle with a laser beam.
  • the above-mentioned target point cloud data may be data obtained by scanning multiple laser beams of a lidar.
  • The lidar may measure the size and shape of an object using a scanning technique. The lidar may use a stable and precise rotating motor: when the laser beam hits the polygon mirror driven by the motor, a scanning beam is formed. Because the polygon mirror is located on the front focal plane of the scanning lens and the laser beam is rotated uniformly onto the mirror, the incident angle changes continuously, so the reflection angle also changes continuously, forming parallel and continuous top-to-bottom scan lines; the scan line data is the point cloud sequence formed by one scan of a single laser beam.
  • The lidar of the present application may be a low-beam lidar or a multi-beam lidar.
  • A low-beam lidar generates fewer scan lines per scan; low-beam lidars generally include 4-beam and 8-beam models and are mainly 2.5D lidars, whose vertical field of view generally does not exceed 10°. A multi-beam lidar (or 3D lidar) can generate multiple scan lines in one scan; multi-beam lidars generally include 16-beam, 32-beam, and 64-beam models. The biggest difference between 3D lidar and 2.5D lidar lies in the vertical field of view: the vertical field of view of a 3D lidar can reach 30° or even more than 40°.
  • ADAS (Advanced Driver Assistance System) uses sensors installed on the car to collect environmental data inside and outside the car in real time, in order to identify and detect static and dynamic objects. It is an active safety technology that lets drivers notice possible dangers as quickly as possible, drawing their attention and improving safety. The sensors used by ADAS are mainly cameras, lidars, etc. When the vehicle detects a potential danger, it issues an alert to remind the driver to pay attention to abnormal vehicle or road conditions.
  • The target object can be a static or dynamic object outside the vehicle, such as a building, pedestrian, other vehicle, animal, or traffic light, used to determine whether it poses a potential danger and to assist driving logic.
  • Step S204 The server clusters the target point cloud data to obtain a plurality of first data sets.
  • The feature points represented by the point cloud data included in each first data set are fitted to the same segmentation line segment; that is, the feature points represented by the point cloud data stored in a first data set are located on the segmentation line segment corresponding to that first data set, and the feature points are points on the target object. This is equivalent to a segmentation method based on each scan line: each scan line is segmented to obtain the segmentation line segments on that scan line, so that all point cloud data is initially segmented, for example into a data set for the boundary of the target object, a data set for the appearance of the target object, a ground data set, and so on.
  • step S206 the server merges the plurality of first data sets according to the distance between the plurality of divided line segments to obtain a second data set.
  • Each second data set includes at least one first data set; this is equivalent to using a plane sweep method to merge the scan-line segmentation line segments to obtain candidate target clustering sets. Feature extraction can then be performed on the candidate target clustering sets to eliminate noise and ground sets, yielding the final segmentation result, that is, the second data sets and the merged segmentation line segments.
  • The applicant realized that most point cloud segmentation methods in the related art deal with unordered, discrete point cloud data.
  • The clustering segmentation method has low complexity and is easy to implement, and can be used for the segmentation of spatial point clouds.
  • However, because ground point cloud data is present in large outdoor scenes, it is difficult to effectively segment ground and non-ground objects using clustering methods.
  • A clustering segmentation method based on a fixed threshold radius may be adopted, but the selection of the threshold greatly affects the segmentation result: if the threshold is too large, small objects separated by a short distance may fail to be separated and are treated as one object (under-segmentation); if the threshold is too small, objects spanning a large distance (such as buildings) may be divided into different objects (over-segmentation).
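The threshold sensitivity described above can be illustrated with a minimal sketch of fixed-radius clustering (not taken from the patent; the point coordinates, radius values, and O(n²) implementation are illustrative only):

```python
import math

def euclidean_cluster(points, radius):
    """Naive fixed-radius clustering of 2D points (the baseline criticized
    above): two points join the same cluster when their distance is below
    `radius`. Returns one cluster label per point."""
    n = len(points)
    labels = [-1] * n
    cluster_id = 0
    for i in range(n):
        if labels[i] != -1:
            continue
        labels[i] = cluster_id
        stack = [i]
        while stack:                       # flood-fill within the radius
            k = stack.pop()
            for j in range(n):
                if labels[j] == -1 and math.dist(points[k], points[j]) < radius:
                    labels[j] = cluster_id
                    stack.append(j)
        cluster_id += 1
    return labels

# Two small objects whose nearest points are 0.4 apart: a radius of 0.5
# under-segments them into one cluster, a radius of 0.1 over-segments each
# object into isolated points, and only 0.3 yields the intended two clusters.
pts = [(0.0, 0.0), (0.2, 0.0), (0.6, 0.0), (0.8, 0.0)]
```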
  • In the technical solution of the present application, the segmentation line segments (which may be referred to as segments) on each scan line are obtained first, and each segmentation line segment corresponds to a first data set, which enables a preliminary segmentation of objects. Then the plane sweep method is used to merge the segments of each scan line to obtain the candidate target clustering sets (that is, the second data sets), which is equivalent to merging the contour lines that belong to the same object.
  • In this process, the target point cloud data only needs to be traversed once, which solves the problems of low efficiency and the difficulty of effectively segmenting ground and non-ground objects when only clustering segmentation is used.
  • Moreover, the solution of this application only needs to determine the distance between segmentation line segments. Compared with determining the distance between point clouds, this removes the accidental factors in point-to-point distances (some points overlap in a certain plane dimension due to the viewing angle but do not actually overlap in space), and also avoids the under-segmentation or over-segmentation that occurs when the point cloud is directly segmented with a clustering method based on a fixed threshold radius. Thus, while reducing the complexity of the algorithm, the technical solution of the present application helps reduce over-segmentation and under-segmentation during segmentation and improves the perception robustness of autonomous driving.
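Because the merging criterion operates on distances between segmentation line segments rather than between individual points, a 2D segment-to-segment distance helper can be sketched as follows (an illustrative assumption; the patent does not specify how the distance between segments is computed):

```python
import math

def point_segment_dist(p, a, b):
    """Distance from point p to segment ab in 2D."""
    ax, ay = a; bx, by = b; px, py = p
    dx, dy = bx - ax, by - ay
    length_sq = dx * dx + dy * dy
    if length_sq == 0:                       # degenerate segment
        return math.hypot(px - ax, py - ay)
    # Project p onto the line ab and clamp to the segment.
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / length_sq))
    cx, cy = ax + t * dx, ay + t * dy
    return math.hypot(px - cx, py - cy)

def segment_distance(s1, s2):
    """Minimum distance between two 2D segments, each given as a pair of
    endpoints ((x1, y1), (x2, y2)). For non-intersecting segments the minimum
    is attained at an endpoint, which suffices for separated lidar contours."""
    (a, b), (c, d) = s1, s2
    return min(point_segment_dist(a, c, d), point_segment_dist(b, c, d),
               point_segment_dist(c, a, b), point_segment_dist(d, a, b))
```

Two first data sets would then be merged when `segment_distance` of their fitted segments falls below the third threshold.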
  • The foregoing embodiment describes the point cloud data segmentation method of the present application as executed by the server 101.
  • The method can also be executed by the terminal 103; the only difference from the above embodiment is that the execution subject changes from the server to the terminal.
  • The method may also be performed jointly by the server 101 and the terminal 103, with one or two of the steps (such as step S202) performed by the terminal 103 and the remaining steps (such as steps S204 to S206) performed by the server 101.
  • The method for segmenting point cloud data performed by the terminal 103 in the embodiment of the present application may also be performed by a client installed on the terminal.
  • In the embodiments of the present application, target point cloud data is obtained, where the target point cloud data is data obtained by scanning a target object around the vehicle with a laser beam; the target point cloud data is clustered to obtain multiple first data sets, where the feature points represented by the point cloud data included in each first data set are fitted to the same segmentation line segment, and the feature points are points on the target object; and the multiple first data sets are merged according to the distances between the multiple segmentation line segments to obtain a second data set, where the second data set includes at least one first data set.
  • Clustering the target point cloud data into multiple first data sets means that the point cloud segmentation is completed after traversing all the point cloud data only once, instead of iterating over the point cloud data multiple times; this solves the technical problem of low point cloud segmentation efficiency in the related art and thereby improves segmentation efficiency.
  • the technical solution of the present application may be applied to the field of autonomous driving.
  • A laser sensor (laser radar, i.e., lidar) is installed on the vehicle; the lidar can be a low-beam lidar. The laser sensors installed on the vehicle can be used to scan the target objects around the vehicle to obtain the target point cloud data.
  • the vehicle may be a vehicle having an automatic driving system.
  • the server clusters the target point cloud data to obtain multiple first data sets, and the feature points represented by the point cloud data included in each first data set are fitted to the same segmentation line segment.
  • the feature points are points on the target object.
  • each first data set may be created in the following manner (including steps 1 to 2):
  • Step 1: Find multiple first point cloud data in the target point cloud data, where the feature points represented by the multiple first point cloud data are adjacent.
  • Optionally, finding the multiple first point cloud data in the target point cloud data includes: taking the point cloud data whose represented feature points are separated by a distance not greater than a first threshold and whose included angle between represented feature points is not less than a second threshold as the multiple first point cloud data.
  • Step 11 Obtain the second point cloud data in the target point cloud data, and the second point cloud data is the point cloud data in the target point cloud data that is not clustered into any one of the first data sets.
  • The point cloud data is acquired sequentially according to acquisition time in the target point cloud data; it is feasible to acquire the point cloud data in order from earliest to latest (or latest to earliest) acquisition time.
  • The characteristic parts of a target object such as an obstacle (e.g., its edges, surfaces, and corners) are often adjacent in position, and the lidar scans the target object sequentially by position; in other words, point cloud data that is adjacent in acquisition time represents adjacent feature points. Each first data set therefore generally holds multiple point cloud data that are adjacent in acquisition time. In effect, the above solution divides a continuous point cloud into multiple segments and stores the point cloud data of each segment in one first data set.
  • Alternatively, the target point cloud data can be fitted to obtain lines (that is, segmentation line segments), where the distance between points on a line does not exceed a certain threshold; if the distance between two points exceeds the threshold, the two points become endpoints of different lines. Multiple lines can thus be determined, and the point cloud data corresponding to all points on each line is used as one first data set.
  • Step 12: Obtain point cloud data (denoted the third point cloud data) whose acquisition time is later than that of the second point cloud data.
  • The second point cloud data is equivalent to the starting point cloud data of a first data set; after it, the ending point cloud data of that first data set needs to be found.
  • The third point cloud data is point cloud data in the target point cloud data that has not been clustered into any first data set and whose acquisition time is later than that of the second point cloud data.
  • Step 13: Obtain the distance between the feature points represented by the third point cloud data and the fourth point cloud data (that is, point cloud data adjacent to, and collected later than, the third point cloud data), and the included angle formed between the first feature point A represented by the third point cloud data, the second feature point B represented by the fourth point cloud data, and the third feature point C represented by the fifth point cloud data, that is, the angle ∠ABC.
  • Step 14: If the distance between the feature point represented by the third point cloud data and the feature point represented by the fourth point cloud data is greater than the first threshold (the first threshold characterizes the maximum distance between adjacent feature points of the same feature of an obstacle, such as the maximum distance between points on the edge of a building), and the included angle formed between the represented feature points is less than the second threshold (the second threshold characterizes the maximum turn angle between adjacent feature points of the same feature of the obstacle), then the third point cloud data, together with the point cloud data whose acquisition time lies between the acquisition time of the second point cloud data and that of the third point cloud data, is regarded as the multiple first point cloud data.
  • In other words, the third point cloud data is equivalent to the ending point cloud data of the current first data set; the third and fourth point cloud data are adjacent in acquisition time, with the fourth collected later. The fourth point cloud data is equivalent to the starting point cloud data of the next first data set; the fourth and fifth point cloud data are adjacent in acquisition time, with the fifth collected later than the fourth.
  • Step 15: If the distance between the feature point represented by the third point cloud data and the feature point represented by the fourth point cloud data is greater than the first threshold, and the angle formed between the feature point represented by the third point cloud data, the feature point represented by the fourth point cloud data, and the feature point represented by the fifth point cloud data is less than the second threshold, the fourth point cloud data is saved to a first data set different from the one used to save the third point cloud data.
  • Step 2: Save the multiple first point cloud data to the same newly created first data set.
  • Subsequent point cloud data may be processed according to the foregoing steps 1 to 2 until no unprocessed point cloud data remains in the target point cloud data.
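Steps 1 to 2 above can be sketched, under assumptions, as a single pass over one scan line that closes the current first data set whenever both the distance condition (first threshold) and the angle condition (second threshold) signal a break; the function name, 2D points, and threshold values are illustrative and not taken from the patent:

```python
import math

def segment_scan_line(points, d_max=0.5, angle_min_deg=150.0):
    """Split one scan line (points ordered by acquisition time) into segments.

    A new segment ("first data set") starts at point i when the gap to the
    previous point exceeds d_max (the first threshold) AND the angle formed
    at point i with its neighbours drops below angle_min_deg (the second
    threshold), mirroring steps 14 and 15 above.
    """
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    def angle_deg(a, b, c):
        # Angle ABC at vertex b, in degrees (A = third point, B = fourth,
        # C = fifth, in the patent's naming).
        ab = (a[0] - b[0], a[1] - b[1])
        cb = (c[0] - b[0], c[1] - b[1])
        na, nc = math.hypot(*ab), math.hypot(*cb)
        if na == 0 or nc == 0:
            return 180.0
        cos = (ab[0] * cb[0] + ab[1] * cb[1]) / (na * nc)
        return math.degrees(math.acos(max(-1.0, min(1.0, cos))))

    segments, current = [], [points[0]]
    for i in range(1, len(points)):
        prev, cur = points[i - 1], points[i]
        too_far = dist(prev, cur) > d_max
        nxt = points[i + 1] if i + 1 < len(points) else None
        sharp = nxt is not None and angle_deg(prev, cur, nxt) < angle_min_deg
        if too_far and sharp:
            segments.append(current)   # close the current first data set
            current = [cur]            # cur starts the next first data set
        else:
            current.append(cur)
    segments.append(current)
    return segments
```

The whole scan line is traversed exactly once, matching the single-pass property the embodiments emphasize.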
  • step S204 a preliminary segmentation of the point cloud data can be achieved.
  • the server merges a plurality of first data sets according to the distance between the plurality of divided line segments to obtain a second data set, and the second data set includes at least one first data set.
  • When merging the multiple first data sets according to the distances between the multiple segmentation line segments to obtain a second data set, the first data sets whose segmentation line segments are separated by a distance less than a third threshold may be merged into one second data set.
  • An optional implementation may include the following steps 1 to 6:
  • Step 1: Create an event collection, in which the multiple segmentation line segment events corresponding to the multiple first data sets are stored according to the acquisition time of the point cloud data in the multiple first data sets. The events for a segmentation line segment include an insertion event corresponding to the starting feature point of the segment and a deletion event corresponding to the ending feature point of the segment.
  • Step 2 Iterate through each event in the event collection.
  • Step 3: When the traversed current event is an insertion event, save the first segmentation line segment corresponding to the current event among the multiple segmentation line segments into a line segment set.
  • Since the point cloud data on each segmentation line segment is continuous in acquisition time, each segmentation line segment corresponds to an acquisition period; therefore, in the event collection, the events of the segmentation line segments are stored according to the acquisition times of the segments, with the earliest-acquired segment at the head of the queue, followed by the next, and so on.
  • Step 4: If the current event is a deletion event and no second segmentation line segment exists in the line segment set, use the first data set corresponding to the first segmentation line segment as a third data set.
  • The second segmentation line segment is a segment in the line segment set whose distance from the first segmentation line segment is less than a third threshold (the third threshold is a parameter used to determine whether two segmentation line segments can be merged; it may be an empirical or experimental value set according to the environment at the time).
  • Step 5: If the current event is a deletion event and a second segmentation line segment exists in the line segment set, merge the first data set corresponding to the first segmentation line segment into the first data set corresponding to the second segmentation line segment to obtain a third data set.
  • In this way, two over-segmented first data sets are merged, and their corresponding segmentation line segments are merged into one.
  • Step 6 Determine the second data set according to the obtained multiple third data sets.
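Steps 1 to 6 can be sketched as a sweep over insertion and deletion events; in this illustrative version the sweep runs over the x coordinate instead of acquisition time, the segment gap is approximated by endpoint distances, and a union-find structure merges the candidate sets, all of which are simplifying assumptions rather than the patent's exact procedure:

```python
import math

def sweep_merge(segments, d_merge=1.0):
    """Merge scan-line segments into candidate target clusters via a sweep.

    Each segment is a list of (x, y) points. An insertion event is created at
    the segment's smallest x and a deletion event at its largest x. On a
    deletion event, the closing segment is merged with any still-active
    segment closer than d_merge (the third threshold).
    """
    def seg_gap(s1, s2):
        # Crude proxy for segment-to-segment distance: minimum distance
        # between the two segments' endpoints.
        return min(math.hypot(p[0] - q[0], p[1] - q[1])
                   for p in (s1[0], s1[-1]) for q in (s2[0], s2[-1]))

    parent = list(range(len(segments)))  # union-find over segment indices

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    events = []
    for i, seg in enumerate(segments):
        xs = [p[0] for p in seg]
        events.append((min(xs), 0, i))  # 0 = insertion event (segment opens)
        events.append((max(xs), 1, i))  # 1 = deletion event (segment closes)
    events.sort()

    active = set()
    for _, kind, i in events:
        if kind == 0:
            active.add(i)
        else:
            active.discard(i)
            for j in active:
                if seg_gap(segments[i], segments[j]) < d_merge:
                    parent[find(i)] = find(j)  # merge the two candidate sets
    clusters = {}
    for i, seg in enumerate(segments):
        clusters.setdefault(find(i), []).extend(seg)
    return list(clusters.values())
```

Two overlapping segments from different scan lines end up in one candidate target cluster, while a distant segment stays separate.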
  • Optionally, determining the second data set according to the obtained multiple third data sets includes: directly using the multiple third data sets as the multiple second data sets; or performing denoising on the multiple third data sets to obtain the second data sets, for example denoising each third data set separately and, if point cloud data still exists in a third data set after denoising, using it as a second data set.
  • denoising processing may be performed on each third data set as follows:
  • The number of point cloud data in the third data set is obtained; if the number is less than a fourth threshold, the third data set is deleted. That is, data sets whose point count is less than a minimum point threshold N min (the fourth threshold) are eliminated, removing third data sets whose point cloud data may belong to noise.
  • In addition, a third data set may be a set of ground points obtained by scanning the ground instead of a target object, and such a ground point set will interfere with the subsequent classification, recognition, and tracking of obstacles (target objects). Therefore, in a possible implementation, the distance between the laser sensor and the center of gravity of the feature points represented by the point cloud data in the third data set, as well as the number of scan lines of the point cloud data in the third data set, can be obtained; if the distance from the laser sensor is less than the fifth threshold and the number of scan lines is less than 2,
  • the third data set is deleted. In other words, if the distance between the center of gravity of a data set and the sensor origin is less than a given distance threshold D_min (that is, the fifth threshold), and the number of scan lines is less than 2 layers, the data set can be considered a set of ground points and removed.
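The two culling rules above can be sketched as follows. This is a minimal illustration, not the patent's implementation: the threshold values N_MIN and D_MIN are hypothetical (the text leaves N_min and D_min as tunable parameters), and the (x, y, ring) point representation is an assumption for the sake of the example.

```python
import math

# Hypothetical threshold values; the text leaves N_min (the fourth threshold)
# and D_min (the fifth threshold) as tunable parameters.
N_MIN = 5    # minimum number of points per set
D_MIN = 3.0  # minimum centroid-to-sensor distance, in metres

def denoise_sets(third_data_sets):
    """Cull noise sets and ground sets from a list of third data sets.

    Each set is a list of (x, y, ring) tuples; `ring` is the index of the
    scan line that produced the point, and the sensor sits at the origin.
    """
    kept = []
    for points in third_data_sets:
        # Rule 1 (noise): too few points to compute reliable features.
        if len(points) < N_MIN:
            continue
        cx = sum(p[0] for p in points) / len(points)
        cy = sum(p[1] for p in points) / len(points)
        rings = {p[2] for p in points}
        # Rule 2 (ground): centroid close to the sensor origin and the
        # points span fewer than 2 scan lines.
        if math.hypot(cx, cy) < D_MIN and len(rings) < 2:
            continue
        kept.append(points)
    return kept
```

A set surviving both rules becomes a second data set in the terminology above.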
  • Autonomous vehicles (i.e., Autonomous Vehicles or Self-piloting Automobiles), also known as driverless cars, computer-driven cars, or wheeled mobile robots, are intelligent cars that achieve driverless operation through computer systems.
  • Autonomous cars rely on the cooperation of artificial intelligence, visual computing, radar, monitoring devices, and global positioning systems, allowing computers to operate motor vehicles automatically and safely without any human intervention.
  • Each circle represents the point cloud data obtained by scanning with one laser beam.
  • The Level 3 autopilot system on the vehicle uses a 4-beam lidar, as shown in Figure 5.
  • The above methods mainly have the following disadvantages: the segmentation method based on a single scan line considers only the distance and direction changes between adjacent points, so it easily causes over-segmentation when the target is occluded and under-segmentation when target objects are too close to each other; in addition, merging segments between different scan lines requires multiple nested loops, which is inefficient; the segmentation method based directly on 3D point cloud data, which considers only the distance between points as a similarity measure, also suffers from over-segmentation and under-segmentation.
  • In view of this, the present application provides a fast segmentation method for low-beam laser point clouds.
  • This method first obtains the segmentation segments on each scan line based on a per-scan-line segmentation method; it then uses a plane scanning method to merge the segmentation segments of each scan line to obtain candidate target cluster sets; finally, feature extraction is performed on the candidate target cluster sets to remove the noise and ground sets, yielding the final segmentation result.
  • The basic flow of the fast segmentation method for the low-beam laser point cloud (that is, the target point cloud data) is shown in Figure 6 below; the method is described taking execution on a vehicle-mounted terminal as an example:
  • In step S602, the vehicle-mounted terminal segments the scan lines of the low-beam laser point cloud layer by layer.
  • A scan-line segmentation scheme considering distance and angular continuity may be implemented as follows:
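The extracted text does not include the listing itself. The following is a hedged sketch of what a distance-and-angle-continuity scan-line segmentation could look like; the thresholds DIST_TH and ANGLE_TH are placeholders for the first and second thresholds mentioned elsewhere in this document, and the exact break condition (gap exceeds the distance threshold AND the local angle falls below the angle threshold) follows one reading of the text:

```python
import math

# Placeholder values for the "first threshold" (distance) and
# "second threshold" (angle) described in the text.
DIST_TH = 0.5                  # metres
ANGLE_TH = math.radians(150)   # break confirmed below this turn angle

def angle_at(b, a, c):
    """Angle at point a formed by rays a->b and a->c, in radians."""
    v1 = (b[0] - a[0], b[1] - a[1])
    v2 = (c[0] - a[0], c[1] - a[1])
    n1, n2 = math.hypot(*v1), math.hypot(*v2)
    if n1 == 0 or n2 == 0:
        return math.pi
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    return math.acos(max(-1.0, min(1.0, dot / (n1 * n2))))

def segment_scan_line(points):
    """Split one scan line (points in acquisition order) into segments.

    A new segment starts at point i when the gap from point i-1 exceeds
    DIST_TH and the local angle at point i is below ANGLE_TH.
    """
    segments = [[points[0]]]
    for i in range(1, len(points)):
        prev, cur = points[i - 1], points[i]
        gap = math.hypot(cur[0] - prev[0], cur[1] - prev[1])
        nxt = points[i + 1] if i + 1 < len(points) else None
        turn = angle_at(prev, cur, nxt) if nxt else math.pi
        if gap > DIST_TH and turn < ANGLE_TH:
            segments.append([cur])
        else:
            segments[-1].append(cur)
    return segments
```

Each resulting segment corresponds to one first data set, whose points are then fitted to a segmentation line segment.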
  • In step S604, the vehicle-mounted terminal merges the segmented segments based on the plane scanning method.
  • The plane scanning algorithm is a basic algorithm in computational geometry, originally used to compute the intersection points of several line segments in the plane. Here, the plane scanning algorithm is adapted to merge the segmented segments of the scan lines.
  • The specific algorithm flow is as follows:
  • Define an event as the start or end endpoint of a segment, where the start point corresponds to the insertion event of the segment and the end point corresponds to the deletion event of the segment; define the event queue Q (equivalent to the event set) as the ordered set of all events; define the current segment set S.
  • The distance between two segmented segments may be calculated as the average distance over the overlapping area between them.
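A simplified sketch of the adapted plane sweep is given below. It is an illustration, not the patent's implementation: segments are assumed to be 2D pairs of endpoints, the minimum endpoint-to-endpoint distance stands in for the overlap-averaged distance described above, the merge threshold is a hypothetical value of the "third threshold", and a union-find structure (not mentioned in the text) tracks which data sets have been merged. As in the text, the merge check fires when a segment's deletion event is processed.

```python
import math

MERGE_TH = 1.0  # hypothetical value of the "third threshold" on segment distance

def seg_distance(a, b):
    # Simplification: minimum distance between the four endpoints, standing
    # in for the overlap-averaged distance described in the text.
    return min(math.dist(p, q) for p in a for q in b)

def sweep_merge(segments):
    """Label each segment ((x1, y1), (x2, y2)) with a merged-cluster id."""
    parent = list(range(len(segments)))

    def find(i):  # union-find with path halving
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    # Event queue Q: (x, 0=insertion/1=deletion, segment index), ordered by x.
    events = []
    for i, (p1, p2) in enumerate(segments):
        events.append((min(p1[0], p2[0]), 0, i))
        events.append((max(p1[0], p2[0]), 1, i))
    events.sort()

    active = set()  # the current segment set S
    for _, kind, i in events:
        if kind == 0:            # insertion event: add to S
            active.add(i)
        else:                    # deletion event: merge with a nearby segment
            active.discard(i)
            for j in active:
                if seg_distance(segments[i], segments[j]) < MERGE_TH:
                    parent[find(i)] = find(j)
                    break
    return [find(i) for i in range(len(segments))]
```

Segments sharing a label correspond to first data sets merged into one candidate target cluster set.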
  • In step S606, the vehicle-mounted terminal removes the noise and ground sets.
  • Noise point cloud set culling: sets whose number of point cloud data is less than the minimum point threshold N_min (equivalent to the fourth threshold) are removed, because if the number of point cloud data is less than the specified threshold, the data set most likely belongs to noise; in addition, too few points make it impossible to compute the relevant features.
  • Ground point cloud set culling: if the distance between the center of gravity of a point cloud set and the sensor origin is less than a given distance threshold D_min (equivalent to the fifth threshold), and the number of scan lines is less than 2 layers, the point cloud set can be considered a ground point set and removed.
  • In Figure 7, φ_{n-1} is the angle between the point cloud data P_{n-1} and the x-axis, φ_n is the angle between the point cloud data P_n and the x-axis, Δφ is the difference between φ_{n-1} and φ_n, D_max represents the radius of a circle centered at P_{n-1}, P'_n represents the intersection between the circle and the line passing through the origin and P_n, r_{n-1} represents the line segment from the origin to P_{n-1}, λ represents the angle between the line passing through the origin and P_{n-1} and the line connecting P'_n and P_{n-1}, and σ_r is a preset parameter, which can be an empirical value or an experimental value.
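This glossary matches the variables of the standard adaptive breakpoint detector for laser scan lines. Assuming the usual relationship D_max = r_{n-1}·sin(Δφ)/sin(λ − Δφ) + 3σ_r (an assumption — the formula itself is not visible in the extracted text), a sketch is:

```python
import math

def adaptive_breakpoint_threshold(r_prev, d_phi,
                                  lam=math.radians(10), sigma_r=0.03):
    """Adaptive distance threshold D_max for breakpoint detection.

    r_prev:  range of the previous point P_{n-1}, in metres
    d_phi:   angular step between consecutive beams, in radians
    lam:     worst-case incidence angle lambda, in radians (tuning parameter)
    sigma_r: range-noise parameter sigma_r, in metres (tuning parameter)
    """
    return r_prev * math.sin(d_phi) / math.sin(lam - d_phi) + 3.0 * sigma_r

def is_breakpoint(p_prev, p_cur, d_phi, **kw):
    """True if the gap between consecutive points exceeds the adaptive D_max."""
    r_prev = math.hypot(*p_prev)
    gap = math.dist(p_prev, p_cur)
    return gap > adaptive_breakpoint_threshold(r_prev, d_phi, **kw)
```

Because D_max grows with r_{n-1}, distant points tolerate larger gaps before a new segment is started, which is the point of making the threshold adaptive rather than fixed.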
  • The point cloud data belonging to different target objects in the segmentation result can be identified by different colors or by identification frames of different shapes.
  • the point cloud data belonging to the same target object is identified by a rectangular frame.
  • The segmentation method based on a single scan line considers only the distance and direction changes between adjacent points; it easily causes over-segmentation when the target object is occluded, and under-segmentation when the target objects are too close to each other.
  • Merging segmented line segments between different scan lines requires multiple nested loops, which is relatively inefficient.
  • The segmentation method based directly on 3D point cloud data, which considers only the distance between points as a similarity measure, also has the problem of easy over-segmentation and under-segmentation.
  • To this end, a fast segmentation method for low-beam laser point clouds is provided.
  • This method first obtains the segmentation segments on each scan line based on a per-scan-line segmentation method; it then uses the plane scanning method to merge the segmentation segments of each scan line to obtain the candidate target cluster sets; finally, feature extraction is performed on the candidate target cluster sets to remove the ground and noise sets, yielding the final segmentation result.
  • The methods according to the above embodiments can be implemented by means of software plus a necessary general-purpose hardware platform, and of course also by hardware, but in many cases the former is the better implementation.
  • Based on this understanding, the part of the technical solution of this application that is essential or that contributes to the existing technology can be embodied in the form of a software product, which is stored in a storage medium (such as a ROM/RAM, a magnetic disk, or an optical disc) and includes several instructions for causing a terminal device (which may be a mobile phone, a computer, a server, a network device, or the like) to execute the methods described in the embodiments of the present application.
  • FIG. 9 is a schematic diagram of an optional point cloud data segmentation device according to an embodiment of the present application. As shown in FIG. 9, the device may include:
  • the obtaining unit 901 is configured to obtain target point cloud data, where the target point cloud data is data obtained by scanning a target object around a vehicle through a laser beam.
  • the above-mentioned target point cloud data may be data obtained by scanning multiple laser beams of a lidar.
  • The lidar may measure the size and shape of an object by using a scanning technique. The lidar may adopt a rotating motor with good stability and precision; when the laser beam hits the polygonal mirror driven by the motor, it is reflected to form a scanning beam.
  • Because the polygonal mirror is located on the front focal plane of the scanning lens and rotates uniformly, the incidence angle of the laser beam on the mirror changes relatively continuously, so the reflection angle also changes continuously.
  • Through the action of the scanning lens, a set of parallel, continuous top-to-bottom scan lines is formed, producing the scan line data, that is, the point cloud sequence formed by one scan of a single laser beam.
  • the lidar of the present application may be a low-beam lidar or a multi-beam lidar.
  • A low-beam lidar generates fewer scan lines per scan; low-beam lidars generally include 4-beam and 8-beam models and are mainly 2.5D lidars, whose vertical field of view generally does not exceed 10°.
  • A multi-beam lidar (or 3D lidar) can generate multiple scan lines in one scan; multi-beam lidars generally include 16-beam, 32-beam, and 64-beam models.
  • The biggest difference between 3D lidar and 2.5D lidar lies in the range of the vertical field of view: the vertical field of view of a 3D lidar can reach 30° or even more than 40°.
  • An Advanced Driver Assistance System (ADAS) uses a variety of sensors installed on the car to collect environmental data inside and outside the car in real time, so as to identify, detect, and track static and dynamic objects.
  • It is an active safety technology that enables drivers to detect possible dangers as early as possible, drawing their attention and improving safety.
  • The sensors used by ADAS are mainly cameras, lidars, and the like.
  • When the vehicle detects a potential danger, it issues an alert to remind the driver to pay attention to abnormal vehicle or road conditions.
  • In this case, the target objects can be static and dynamic objects outside the vehicle, such as buildings, pedestrians, other vehicles, animals, and traffic lights, which are used to determine whether a potential danger exists and to assist driving logic.
  • The clustering unit 903 is configured to cluster the target point cloud data to obtain multiple first data sets, where the feature points represented by the point cloud data included in each first data set are fitted on the same segmentation line segment, and the feature points are points on the target object.
  • a merging unit 905 is configured to merge a plurality of first data sets according to a distance between a plurality of divided line segments to obtain a second data set, where the second data set includes at least one first data set.
  • The obtaining unit 901 in this embodiment may be used to execute step S202 in the embodiments of the present application, the clustering unit 903 may be used to execute step S204, and the merging unit 905 may be used to execute step S206.
  • The applicant analyzed the related technology and realized that most point cloud segmentation methods in the related technology deal with unordered and discrete point clouds.
  • The cluster segmentation method has low complexity and is easy to implement, and can be used for the segmentation of spatial point clouds.
  • A clustering segmentation method based on a fixed threshold radius may be adopted.
  • However, the selection of the threshold has a great influence on the segmentation result: if the threshold is too large, small objects that are close together may not be separated (that is, under-segmentation occurs); if the threshold is too small, objects of large extent (such as buildings) may be divided into multiple clusters (that is, over-segmentation occurs).
  • In this application, segmentation line segments (which may be referred to as segmented segments) on each scan line are obtained, and each segmentation line segment corresponds to a first data set; this implements the segmentation of an object's contour on each scan line.
  • Then the plane scanning method is used to merge the segmented segments of each scan line to obtain the candidate target cluster sets (that is, the second data sets), which is equivalent to merging the contour lines that belong to the same object.
  • In this process the point cloud data needs to be traversed only once, which can solve the problems of inefficiency and of the difficulty in effectively separating ground and non-ground objects when only clustering segmentation is used.
  • Moreover, the solution of this application only needs to determine the distance between segmentation line segments; compared with judging the distance between point clouds, this avoids accidental factors (some point clouds overlap in a certain plane dimension due to perspective but actually do not overlap in space), and can also avoid the under-segmentation or over-segmentation caused by directly segmenting the point cloud with a clustering method based on a fixed threshold radius.
  • It can be seen that, while reducing the complexity of the algorithm, the technical solution of the present application helps reduce over-segmentation and under-segmentation in the segmentation process and improves the perception robustness of autonomous driving.
  • target point cloud data is obtained.
  • The target point cloud data is data obtained by scanning target objects around the vehicle with a laser beam; the target point cloud data is clustered to obtain multiple first data sets, where the feature points represented by the point cloud data included in each first data set are fitted on the same segmentation line segment, and the feature points are points on the target object; the multiple first data sets are merged according to the distances between the multiple segmentation line segments to obtain a second data set, and the second data set includes at least one first data set.
  • In the segmentation process, clustering the target point cloud data to obtain multiple first data sets is equivalent to completing the point cloud data segmentation after traversing all the point cloud data only once, instead of traversing the point cloud data multiple times as in the related technology; this can solve the technical problem of the low efficiency of point cloud segmentation in the related technology and achieve the technical effect of improving segmentation efficiency.
  • The above clustering unit may include: a finding module, configured to find multiple first point cloud data in the target point cloud data, where the feature points represented by the multiple first point cloud data are adjacent; and a first saving module, configured to save the multiple first point cloud data to the same created first data set.
  • The above finding module may also be used to take, as the multiple first point cloud data, the point cloud data in the target point cloud data whose represented feature points are separated by a distance not greater than the first threshold and form included angles not less than the second threshold.
  • The finding module may include: an acquisition submodule, configured to acquire second point cloud data in the target point cloud data, where the second point cloud data is point cloud data that has not been clustered into any first data set; and a finding submodule, configured to, if the distance between the feature point represented by the third point cloud data and the feature point represented by the fourth point cloud data is greater than the first threshold, and the included angle formed between the feature point represented by the third point cloud data, the feature point represented by the fourth point cloud data, and the feature point represented by the fifth point cloud data is smaller than the second threshold, take the second point cloud data, the third point cloud data, and the point cloud data whose collection time lies between the collection time of the second point cloud data and the collection time of the third point cloud data as the multiple first point cloud data.
  • Here, the third point cloud data is point cloud data in the target point cloud data that has not been clustered into any first data set, and its collection time is later than that of the second point cloud data; the third point cloud data and the fourth point cloud data are adjacent in collection time, with the fourth point cloud data collected later than the third; the fourth point cloud data and the fifth point cloud data are adjacent in collection time, with the fifth point cloud data collected later than the fourth.
  • The clustering unit may further include: a second saving module, configured to, if the distance between the feature point represented by the third point cloud data and the feature point represented by the fourth point cloud data is greater than the first threshold, and the included angle formed between the feature points represented by the third, fourth, and fifth point cloud data is less than the second threshold, save the fourth point cloud data to a first data set other than the one used to save the third point cloud data.
  • The above merging unit may also be used to merge the first data sets whose fitted segmentation line segments are separated by a distance less than the third threshold, to obtain the second data set.
  • The merging unit may include: a creating module, configured to create an event set, where the event set stores the events of the multiple segmentation line segments corresponding to the multiple first data sets, ordered according to the collection time of the point cloud data in the multiple first data sets;
  • the events of a segmentation line segment include an insertion event corresponding to its starting feature point and a deletion event corresponding to its ending feature point. A merging module is configured to traverse each event in the event set:
  • if the current event is an insertion event, the first segmentation line segment corresponding to the current event among the multiple segmentation line segments is saved to the line segment set; if the current event is a deletion event and no second segmentation line segment exists in the line segment set,
  • the first data set corresponding to the first segmentation line segment is taken as a third data set; if the current event is a deletion event and a second segmentation line segment exists in the line segment set, the first data set corresponding to the first segmentation line segment is merged into the first data set corresponding to the second segmentation line segment to obtain a third data set.
  • Here, the second segmentation line segment is a segmentation line segment in the line segment set whose distance from the first segmentation line segment is less than the third threshold. A determining module is configured to determine the second data sets according to the multiple third data sets obtained.
  • The foregoing determining module may be further configured to: use the multiple third data sets as the second data sets; or perform denoising processing on the multiple third data sets to obtain the second data sets.
  • The above determining module may be further configured to: obtain the number of point cloud data in a third data set, and delete the third data set if the number is less than the fourth threshold; and obtain the distance between the laser sensor and the center of gravity of the feature points represented by the point cloud data in the third data set, as well as the number of scan lines of the point cloud data in the third data set, and delete the third data set if the distance is less than the fifth threshold and the number of scan lines is less than 2.
  • The obtaining unit may be further configured to scan the target object by using a laser sensor installed on a vehicle to obtain the target point cloud data, where the vehicle has an automatic driving system.
  • The segmentation method based on a single scan line considers only the distance and direction changes between adjacent points; it easily causes over-segmentation when the target object is occluded, and under-segmentation when the target objects are too close to each other.
  • Merging segmented line segments between different scan lines requires multiple nested loops, which is relatively inefficient.
  • The segmentation method based directly on 3D point cloud data, which considers only the distance between points as a similarity measure, also has the problem of easy over-segmentation and under-segmentation.
  • To this end, a fast segmentation method for low-beam laser point clouds is provided.
  • This method first obtains the segmentation segments on each scan line based on a per-scan-line segmentation method; it then uses the plane scanning method to merge the segmentation segments of each scan line to obtain the candidate target cluster sets; finally, feature extraction is performed on the candidate target cluster sets to remove the ground and noise sets, yielding the final segmentation result.
  • the above modules can be run in a hardware environment as shown in FIG. 1, and can be implemented by software or hardware, wherein the hardware environment includes a network environment.
  • a server or terminal for implementing the method for segmenting point cloud data is also provided.
  • FIG. 10 is a structural block diagram of a terminal according to an embodiment of the present application.
  • The terminal may include: one or more processors 1001 (only one is shown in FIG. 10), a memory 1003, and a transmission device 1005.
  • the terminal may further include an input / output device 1007.
  • The memory 1003 may be used to store software programs and modules, such as the program instructions/modules corresponding to the point cloud data segmentation method and device in the embodiments of the present application.
  • The processor 1001 runs the software programs and modules stored in the memory 1003, thereby performing various functional applications and data processing, that is, implementing the above segmentation method of point cloud data.
  • the memory 1003 may include a high-speed random access memory, and may further include a non-volatile memory, such as one or more magnetic storage devices, a flash memory, or other non-volatile solid-state memory.
  • the memory 1003 may further include a memory remotely set with respect to the processor 1001, and these remote memories may be connected to the terminal through a network. Examples of the above network include, but are not limited to, the Internet, an intranet, a local area network, a mobile communication network, and combinations thereof.
  • the transmission device 1005 is used to receive or send data through a network, and may also be used for data transmission between a processor and a memory. Specific examples of the foregoing network may include a wired network and a wireless network.
  • the transmission device 1005 includes a network adapter (Network Interface Controller, NIC), which can be connected to other network devices and routers through a network cable so as to communicate with the Internet or a local area network.
  • the transmission device 1005 is a radio frequency (RF) module, which is used to communicate with the Internet in a wireless manner.
  • the memory 1003 is used to store an application program.
  • In this embodiment, the processor 1001 may call the application program stored in the memory 1003 through the transmission device 1005 to perform the following steps:
  • obtaining target point cloud data, where the target point cloud data is data obtained by scanning a target object around a vehicle with a laser beam; clustering the target point cloud data to obtain multiple first data sets, where the feature points represented by the point cloud data included in each first data set are fitted on the same segmentation line segment;
  • and merging the multiple first data sets according to the distances between the multiple segmentation line segments to obtain a second data set, where each second data set includes at least one first data set.
  • With the above steps, target point cloud data is obtained, where the target point cloud data is data obtained by scanning target objects around the vehicle with a laser beam; the target point cloud data is clustered to obtain multiple first data sets, where the feature points represented by the point cloud data included in each first data set are fitted on the same segmentation line segment, and the feature points are points on the target object; the multiple first data sets are merged according to the distances between the multiple segmentation line segments to obtain a second data set, and the second data set includes at least one first data set.
  • In the segmentation process, clustering the target point cloud data to obtain multiple first data sets is equivalent to completing the point cloud data segmentation after traversing all the point cloud data only once, instead of traversing the point cloud data multiple times as in the related technology; this can solve the technical problem of the low efficiency of point cloud segmentation in the related technology and achieve the technical effect of improving segmentation efficiency.
  • Those of ordinary skill in the art can understand that the structure shown in FIG. 10 is only illustrative, and the terminal may be a smart phone (such as an Android phone or an iOS phone), a tablet computer, a palmtop computer, a mobile Internet device (MID), a PAD, or other terminal equipment.
  • FIG. 10 does not limit the structure of the electronic device.
  • For example, the terminal may also include more or fewer components (such as a network interface or a display device) than shown in FIG. 10, or have a configuration different from that shown in FIG. 10.
  • The program may be stored in a computer-readable storage medium, and the storage medium may include: a flash disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, and the like.
  • An embodiment of the present application further provides a storage medium.
  • In this embodiment, the foregoing storage medium may be used to store program code for performing the method of segmenting point cloud data.
  • the storage medium may be located on at least one network device among multiple network devices in the network shown in the foregoing embodiment.
  • In this embodiment, the storage medium is configured to store program code for performing the following steps:
  • obtaining target point cloud data, where the target point cloud data is data obtained by scanning target objects around the vehicle with a laser beam.
  • The foregoing storage medium may include, but is not limited to, various media that can store program code, such as a USB flash drive, a read-only memory (ROM), a random access memory (RAM), a removable hard disk, a magnetic disk, or an optical disc.
  • When the integrated unit in the foregoing embodiments is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in the above computer-readable storage medium.
  • Based on this understanding, the part of the technical solution of the present application that is essential or that contributes to the existing technology, or all or part of the technical solution, may be embodied in the form of a software product, which is stored in a storage medium.
  • Several instructions are included to enable one or more computer devices (which may be personal computers, servers, or network devices, etc.) to perform all or part of the steps of the method described in the embodiments of the present application.
  • the disclosed client can be implemented in other ways.
  • The device embodiments described above are only illustrative.
  • For example, the division of units is only a logical function division; in actual implementation there may be other division manners. For example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented.
  • the displayed or discussed mutual coupling or direct coupling or communication connection may be indirect coupling or communication connection through some interfaces, units or modules, and may be electrical or other forms.
  • the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, may be located in one place, or may be distributed on multiple network units. Some or all of the units may be selected according to actual needs to achieve the objective of the solution of this embodiment.
  • each functional unit in each embodiment of the present application may be integrated into one processing unit, or each of the units may exist separately physically, or two or more units may be integrated into one unit.
  • the above integrated unit may be implemented in the form of hardware or in the form of software functional unit.


Abstract

This application discloses a point cloud data segmentation method and device, a storage medium, and an electronic device. The method includes: obtaining target point cloud data, where the target point cloud data is data obtained by scanning target objects around a vehicle with a laser beam; clustering the target point cloud data to obtain multiple first data sets, where the feature points represented by the point cloud data included in each first data set are fitted on the same segmentation line segment, and the feature points are points on a target object; and merging the multiple first data sets according to the distances between the multiple segmentation line segments to obtain a second data set, where the second data set includes at least one first data set. This application solves the technical problem of the low efficiency of point cloud segmentation in the related technology.

Description

Point cloud data segmentation method and device, storage medium, and electronic device
This application claims priority to Chinese Patent Application No. 201810982858.0, filed with the China National Intellectual Property Administration on August 27, 2018 and entitled "Point cloud data segmentation method and device, storage medium, and electronic device", the entire contents of which are incorporated herein by reference.
Technical Field
This application relates to the field of autonomous driving, and in particular to point cloud data segmentation technology.
Background
Three-dimensional reconstruction of large scenes has received great attention because of its important applications in 3D city maps, road maintenance, urban planning, autonomous driving, and so on. Collecting 3D information of the surrounding environment with depth sensors and position-and-attitude sensors, based on fixed stations or mobile platforms, is widely adopted because of its efficiency, real-time capability, and high precision. Since a scanned scene may contain different objects, such as the ground, buildings, trees, and vehicles, before 3D reconstruction the point cloud data belonging to different objects needs to be separated from each other by point cloud segmentation, so that each object can be modeled separately.
Point cloud segmentation algorithms in the related technology need to scan the point cloud data multiple times; their computational cost is high and their efficiency is low, which does not meet real-time processing requirements.
No effective solution to the above problem has yet been proposed.
Summary
The embodiments of this application provide a point cloud data segmentation method and device, a storage medium, and an electronic device, so as to solve at least the technical problem of the low efficiency of point cloud segmentation in the related technology.
According to one aspect of the embodiments of this application, a point cloud data segmentation method is provided, including: obtaining target point cloud data, where the target point cloud data is data obtained by scanning target objects around a vehicle with a laser beam; clustering the target point cloud data to obtain multiple first data sets, where the feature points represented by the point cloud data included in each first data set are fitted on the same segmentation line segment, and the feature points are points on a target object; and merging the multiple first data sets according to the distances between the multiple segmentation line segments to obtain a second data set, where the second data set includes at least one first data set.
According to another aspect of the embodiments of this application, a point cloud data segmentation device is also provided, including: an obtaining unit, configured to obtain target point cloud data, where the target point cloud data is data obtained by scanning target objects around a vehicle with a laser beam; a clustering unit, configured to cluster the target point cloud data to obtain multiple first data sets, where the feature points represented by the point cloud data included in each first data set are fitted on the same segmentation line segment, and the feature points are points on the target object; and a merging unit, configured to merge the multiple first data sets according to the distances between the multiple segmentation line segments to obtain a second data set, where the second data set includes at least one first data set.
According to another aspect of the embodiments of this application, a storage medium is also provided; the storage medium includes a stored program, and the above method is executed when the program runs.
According to another aspect of the embodiments of this application, an electronic device is also provided, including a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor executes the above method through the computer program.
According to another aspect of the embodiments of this application, a computer program product is also provided, including instructions that, when run on a computer, cause the computer to execute the above method.
In the embodiments of this application, target point cloud data is obtained, where the target point cloud data is data obtained by scanning target objects around a vehicle with a laser beam; the target point cloud data is clustered to obtain multiple first data sets, where the feature points represented by the point cloud data included in each first data set are fitted on the same segmentation line segment, and the feature points are points on a target object; and the multiple first data sets are merged according to the distances between the multiple segmentation line segments to obtain a second data set, where the second data set includes at least one first data set. In the segmentation process, "clustering the target point cloud data to obtain multiple first data sets" is equivalent to completing the point cloud data segmentation after traversing all the point cloud data only once, rather than completing the segmentation by traversing the point cloud data multiple times as in the related technology; this can solve the technical problem of the low efficiency of point cloud segmentation in the related technology and achieve the technical effect of improving segmentation efficiency.
附图说明
此处所说明的附图用来提供对本申请的进一步理解,构成本申请的一部分,本申请的示意性实施例及其说明用于解释本申请,并不构成对本申请的不当限定。在附图中:
图1是根据本申请实施例的点云数据的分割方法的硬件环境的示意图;
图2是根据本申请实施例的一种可选的点云数据的分割方法的流程图;
图3是根据本申请实施例的一种可选的激光雷达的示意图;
图4是根据本申请实施例的一种可选的点云数据的示意图;
图5是根据本申请实施例的一种可选的激光雷达场景的示意图;
图6是根据本申请实施例的一种可选的点云数据的分割方法的流程图;
图7是根据本申请实施例的一种可选的自适应距离阈值的示意图;
图8是根据本申请实施例的一种可选的点云数据的分割的示意图;
图9是根据本申请实施例的一种可选的点云数据的分割装置的示意图;
图10是根据本申请实施例的一种终端的结构框图。
具体实施方式
为了使本技术领域的人员更好地理解本申请方案,下面将结合本申请实施例中的附图,对本申请实施例中的技术方案进行清楚、完整地描述,显然,所描述的实施例仅仅是本申请一部分的实施例,而不是全部的实施例。基于本申请中的实施例,本领域普通技术人员在没有做出创造性劳动前提下所获得的所有其他实施例,都应当属于本申请保护的范围。
需要说明的是,本申请的说明书和权利要求书及上述附图中的术语“第一”、“第二”等是用于区别类似的对象,而不必用于描述特定的顺序或先后次序。应该理解这样使用的数据在适当情况下可以互换,以便这里描述的本申请的实施例能够以除了在这里图示或描述的那些以外的顺序实施。此外,术语“包括”和“具有”以及他们的任何变形,意图在于覆盖不排他的包含,例如,包含了一系列步骤或单元的过程、方法、系统、产品或设备不必限于清楚地列出的那些步骤或单元,而是可包括没有清楚地列出的或对于这些过程、方法、产品或设备固有的其它步骤或单元。
首先,在对本申请实施例进行描述的过程中出现的部分名词或者术语适用于如下解释:
自动驾驶汽车(Autonomous vehicles;Self-piloting automobile)又称无人驾驶汽车、电脑驾驶汽车、或轮式移动机器人,是一种通过电脑系统实现无人驾驶的智能汽车。
高精细地图是指高精度、精细化定义的地图,其精度需要达到分米级才能够区分各个车道,如今随着定位技术的发展,高精度的定位已经成为可能。而精细化定义,则是需要格式化存储交通场景中的各种交通要素,包括传统地图的道路网数据、车道网络数据、车道线以及交通标志等数据。
根据本申请实施例的一方面,提供了一种点云数据的分割方法的方法实施例。
可选地,在本实施例中,上述点云数据的分割方法可以应用于如图1所示的由服务器101和/或终端103所构成的硬件环境中。如图1所示,服务器101通过网络与终端103进行连接,可用于为终端或终端上安装的客户端提供服务(如游戏服务、应用服务、地图服务、自动驾驶等),可在服务器101上或独立于服务器101设置数据库105,用于为服务器101提供数据存储服务,上述网络包括但不限于:广域网、城域网或局域网,终端103为可以在车辆上使用的智能终端,并不限定于车载设备、手机、平板电脑等。
本申请实施例的点云数据的分割方法可以由服务器101来执行,图2是根据本申请实施例的一种可选的点云数据的分割方法的流程图,如图2所示,该方法可以包括以下步骤:
步骤S202,服务器获取目标点云数据。
目标点云数据为通过激光线束对车辆周围的目标对象进行扫描得到的数据。在一种实现方式中,上述的目标点云数据可为激光雷达的多个激光线束扫描得到的数据,激光雷达可借着扫描技术来测量对象的尺寸及形状等,激光雷达可采用一个稳定度及精度良好的旋转马达,当激光线束打到由马达所带动的多面棱规反射而形成扫描光束,由于多面棱规位于扫描透镜的前焦面上,并均匀旋转使激光线束对反射镜而言,其入射角相对地连续性改变,因而反射角也作连续性改变,经由扫描透镜的作用,形成一平行且连续由上而下的扫描线,从而形成扫描线数据,即单线束激光扫描一次形成的点云序列。
本申请的激光雷达可为低线束激光雷达或多线束激光雷达,低线束激光雷达扫描一次可产生较少线束扫描线,低线束激光雷达一般包括4线束、8线束,主要为2.5D激光雷达,垂直视野范围一般不超过10°;多线束激光雷达(或称3D激光雷达)扫描一次可产生多条扫描线,多线束激光雷达一般包括16线束、32线束、64线束等,3D激光雷达与2.5D激光雷达最大的区别在于激光雷达垂直视野的范围,3D激光雷达的垂直视野范围可达到30°甚至40°以上。
先进驾驶辅助系统ADAS(Advanced Driver Assistance System)是利用安装于车上的各式各样的传感器,在第一时间收集车内外的环境数据,进行静、动态物体的辨识、侦测与追踪等技术上的处理,从而让驾驶者在最快的时间察觉可能发生的危险,以引起注意和提高安全性的主动安全技术。ADAS采用的传感器主要有摄像头、激光雷达等,当车辆检测到潜在危险时,会发出警报提醒驾车者注意异常的车辆或道路情况,在这种情况下,目标对象可以为车外的静、动态物体等用于判断是否为潜在危险、辅助驾驶逻辑判断的对象,如建筑、行人、其它车辆、动物、红绿灯等。
步骤S204,服务器对目标点云数据进行聚类得到多个第一数据集。
每个第一数据集包括的点云数据所表示的特征点被拟合在同一条分割线段上,也即第一数据集中保存的点云数据所表示的特征点位于与第一数据集对应的分割线段上,特征点为目标对象上的点。其相当于是基于各扫描线的分割方法,将每条扫描线进行分割得到各扫描线上的分割线段,实现将所有点云数据进行初步分割,如获取目标对象的边界的数据集、目标对象的外观的数据集、地面的数据集等。
步骤S206,服务器按照多个分割线段之间的距离对多个第一数据集进行合并,得到第二数据集。
每个第二数据集包括至少一个第一数据集,其相当于利用平面扫描方法,进行各扫描线分割线段的合并,得到候选目标聚类集合。在得到候选目标聚类集合后可对候选目标聚类集合进行特征提取,剔除噪声和地面集合,得到最终的分割结果,即第二数据集和合并后的分割线段。
申请人经过对相关技术进行分析,认识到相关技术中的大多数点云分割方法是对无序、离散的点云数据进行处理,点云分割方法中,聚类分割方法由于其方法复杂度低、易于实现,可用于空间点云的分割,但是由于室外大场景中地面点云数据的存在,使用聚类方法难以有效地对地面和非地面物体进行分割。
在一个可选的实施例中,在非地面点云聚类分割部分,可采用基于固定阈值半径的聚类分割方法,阈值的选取对分割的结果影响很大,如果阈值选取过大,间隔距离较近的小物体可能不会被分开而作为一个物体(即出现欠分割),如果阈值选取过小,间隔距离较大的物体(如建筑物)可能被分割成不同的物体(即出现过分割)。
而在本申请的实施例中,首先基于各扫描线的分割方法,得到各扫描线上的分割线段(可简称为分割段),每个分割线段对应于一个第一数据集合,可以实现对物体轮廓的初步分割;然后利用平面扫描方法,进行各扫描线分割段的合并,得到候选目标聚类集合(即第二数据集合),其相当于是将共属于一个对象物体的轮廓线条合并,由于仅需遍历一次目标点云数据,可以解决仅仅采用聚类分割时效率低下、难以有效地对地面和非地面物体进行分割的问题。另外,由于本申请的方案仅需判断分割线段间距,相对于点云间的距离,不会出现判断点云间的距离的偶然性因素(部分点云由于视角原因在某个平面维度重叠但是实际在空间中是不重叠的),还可避免采用基于固定阈值半径的聚类分割方法直接对点云进行分割时出现的欠分割或者过分割的问题。可见,本申请的技术方案在降低算法复杂度的同时,有利于减少分割过程中出现的过分割和欠分割现象,提升自动驾驶的感知鲁棒性。
上述实施例以本申请的点云数据的分割方法由服务器101来执行为例进行说明,本申请的点云数据的分割方法也可以由终端103来执行,其与上述实施例的区别在于执行主体由服务器变换为终端。本申请的点云数据的分割方法还可以是由服务器101和终端103共同执行,由终端103执行其中的一个或两个步骤(如步骤S202),服务器101执行剩余步骤(如步骤S204-步骤S206)。其中,终端103执行本申请实施例的点云数据的分割方法也可以是由安装在其上的客户端来执行。
通过上述步骤S202至步骤S206,获取目标点云数据,目标点云数据为通过激光线束对车辆周围的目标对象进行扫描得到的数据;对目标点云数据进行聚类得到多个第一数据集,每个第一数据集中包括的点云数据所表示的特征点被拟合在同一条分割线段上,特征点为目标对象上的点;按照多个分割线段之间的距离对多个第一数据集进行合并,得到第二数据集,第二数据集包括至少一个第一数据集。在分割的过程中,“对目标点云数据进行聚类得到多个第一数据集”相当于仅仅遍历了一次所有点云数据就完成了点云数据的分割,而不用像相关技术中通过多次遍历点云数据来完成分割,可以解决相关技术中进行点云分割的效率较低的技术问题,进而达到提高分割效率的技术效果。
随着低线束激光雷达的量产,基于低线束激光点云数据的精确分割,是实现障碍物检测和跟踪的基本前提,在无人驾驶感知技术中具有重要意义。本申请提出的基于低线束激光点云数据的快速分割方法(也适用于多线束激光雷达),在降低运算复杂度的同时,有利于减少分割过程中出现的过分割和欠分割现象,提升自动驾驶的感知鲁棒性,下面结合图2所示的步骤详述本申请的技术方案。
可选地,在步骤S202提供的技术方案中,本申请的技术方案可以应用于自动驾驶领域,为了使得服务器可以获取到目标点云数据,可以在车辆上安装激光传感器(或称激光雷达),为了降低成本,激光雷达可以为低线束激光雷达,这样,在获取目标点云数据时,可以通过安装在车辆上的激光传感器对车辆周围的目标对象进行扫描得到目标点云数据。其中,该车辆可以为具有自动驾驶系统的车辆。
在步骤S204提供的技术方案中,服务器对目标点云数据进行聚类得到多个第一数据集,每个第一数据集中包括的点云数据所表示的特征点被拟合在同一条分割线段上,特征点为目标对象上的点。
在上述实施例中,在对目标点云数据进行聚类得到多个第一数据集时,可以按照如下方式(包括步骤1-步骤2)创建每个第一数据集:
步骤1,查找目标点云数据中的多个第一点云数据,多个第一点云数据所表示的特征点相邻。
可选地,查找目标点云数据中的多个第一点云数据包括:将目标点云数据中所表示的特征点之间的距离不大于第一阈值、且所表示的特征点之间所形成的夹角不小于第二阈值的点云数据作为多个第一点云数据。一种可选的实施方式如下:
步骤11,获取目标点云数据中的第二点云数据,第二点云数据为目标点云数据中未被聚类至任意一个第一数据集的点云数据。
例如,按照目标点云数据中点云数据的采集时间依次获取点云数据,如按照采集时间从早到晚(或从晚到早)的顺序获取其中的点云数据,一般而言,能够成为一个目标对象(如障碍物)的特征的部分(如边缘、表面、棱角等),往往在位置上是相邻的,而激光雷达也是按照位置对目标对象依次进行扫描的,换言之,采集时间相邻的点云数据用于表示位置相邻的特征点,可见,每个第一数据集中一般保留的是采集时间相邻的多个第二点云数据,换言之,上述方案就是把连续的点云数据划分为多个段,每个段的点云数据被保存在一个第一数据集中。
可选地,也可以对上述的目标点云数据进行拟合,可以得到线条(即分割线段),可以定义线条之间的距离不会超过某个阈值,当某个位置两个点之间的距离超过了该阈值,那么这两个点就可以分别所在线条的端点,进而可以确定多个线条,将每个线条上的所有点对应的点云数据作为一个第一数据集。
步骤12,获取采集时间晚于第二点云数据的点云数据(记为第三点云数据)。
上述的第二点云数据相当于一个第一数据集中的起始点云数据,在此之后,需要寻找该第一数据集的结束点云数据。第三点云数据为目标点云数据中未被聚类至任意一个第一数据集且采集时间晚于第二点云数据的点云数据。
步骤13,获取第三点云数据与第四点云数据(即采集时间晚于第三点云数据且与之相邻的点云数据)之间的距离、第三点云数据所表示的第一特征点A、第四点云数据所表示的第二特征点B以及第五点云数据所表示的第三特征点C之间所形成的夹角(即∠ABC的角度大小)。
步骤14,若第三点云数据所表示的特征点与第四点云数据所表示的特征点之间的距离大于第一阈值(该第一阈值用于表征障碍物的同一特征的相邻特征点之间的最大相距距离, 如建筑边缘的像素点之间的最大距离),且第三点云数据所表示的特征点、第四点云数据所表示的特征点以及第五点云数据所表示的特征点之间所形成的夹角小于第二阈值(该第二阈值用于表征障碍物的同一特征的相邻特征点之间的最大拐角),将第二点云数据、第三点云数据以及采集时间位于第二点云数据的采集时间和第三点云数据的采集时间之间的点云数据作为多个第一点云数据。
换言之,第三点云数据相当于是该第一数据集的结束点云数据,第三点云数据和第四点云数据为采集时间相邻的点云数据且第四点云数据的采集时间晚于第三点云数据的采集时间,第四点云数据相当于下一个第一数据集的起始点云数据,第四点云数据和第五点云数据为采集时间相邻的点云数据且第五点云数据的采集时间晚于第四点云数据的采集时间。
步骤15,若第三点云数据所表示的特征点与第四点云数据所表示的特征点之间的距离大于第一阈值,且第三点云数据所表示的特征点、第四点云数据所表示的特征点以及第五点云数据所表示的特征点之间所形成的夹角小于第二阈值,将第四点云数据保存至不同于用于保存第三点云数据的另一个第一数据集中。
步骤2,将多个第一点云数据保存至创建的同一个第一数据集中。
对于后续的点云数据可以按照前述步骤1-步骤2进行处理,直至目标点云数据中不存在点云数据。
通过步骤S204提供的技术方案,可以实现对点云数据的初步分割。
在步骤S206提供的技术方案中,服务器按照多个分割线段之间的距离对多个第一数据集进行合并,得到第二数据集,第二数据集包括至少一个第一数据集。
在上述实施例中,按照多个分割线段之间的距离对多个第一数据集进行合并,得到第二数据集时,可以对多个第一数据集中所拟合得到的分割线段之间的距离小于第三阈值的第一数据集合进行合并,得到第二数据集合,一种可选的实施方式可包括如下步骤1-步骤2:
步骤1,创建事件集合。
事件集合中按照多个第一数据集中点云数据的采集时间保存有多个第一数据集对应的多个分割线段的事件,分割线段的事件包括与分割线段的起始特征点对应的插入事件和与分割线段的结束特征点对应的删除事件。
步骤2,遍历事件集合中的每个事件。
步骤3,在遍历到的当前事件为插入事件的情况下,将多个分割线段中与当前事件对应的第一分割线段保存至线段集合中。
可选地,由于每个分割线段上的点云数据是采集时间连续的点云数据,那么相当于每个分割线段上的点云数据其实可以对应于一段采集时间,因此,在事件集合中可以按照分割线段的采集时间来存放分割线段的事件,如对于不同的分割线段,采集时间在前的放置在队首,次之的排在队首之后,以此类推。
步骤4,在当前事件为删除事件、且线段集合中不存在第二分割线段的情况下,将与第一分割线段对应的第一数据集作为一个第三数据集。
第二分割线段为线段集合中与第一分割线段之间的距离小于第三阈值(第三阈值可为用于判断两个分割线段是否可合并的参数,可以为经验值或实验值,具体可以根据当时的环境确定)的分割线段。
步骤5,在当前事件为删除事件、且线段集合中存在第二分割线段的情况下,将与第一分割线段对应的第一数据集合并至与第二分割线段对应的第一数据集中,得到一个第三数据集。
换言之,前述步骤S204中可能存在过分割的情况,因此,对于过分割的两个第一数据集可以合并,相应的分割线段也可以合并为一个。
步骤6,根据得到的多个第三数据集确定第二数据集。
可选地,根据得到的多个第三数据集确定第二数据集包括:将多个第三数据集直接作为多个第二数据集。或,对多个第三数据集进行去噪处理,得到第二数据集,如分别对每个第三数据集进行去噪处理,若去噪处理之后的第三数据集中还存在点云数据则作为一个第二数据集。
在上述实施例中,可以按照如下方式对每个第三数据集进行去噪处理:
其一,如果集合中点云数据的个数少于指定阈值,那么,该集合中包括的点云数据有较大概率属于噪声而非目标对象。因此,在一种可能的实现方式中,可以获取第三数据集中点云数据的个数,若第三数据集中点云数据的个数小于第四阈值,删除第三数据集。剔除点云数据的个数少于最小点阈值N_min(即第四阈值)的集合,即剔除可能包括属于噪声的点云数据的第三数据集。
其二,在一些情况下,第三数据集可能是扫描地面点而非目标对象得到的地面点集合,地面点集合将对后续障碍物(目标对象)的分类、识别和跟踪带来麻烦。因此,在一种可能的实现方式中,可以获取第三数据集中点云数据所表示的特征点的重心与激光传感器之间的距离,以及第三数据集中点云数据的扫描线数,若重心与激光传感器之间的距离小于第五阈值且扫描线数小于2,删除第三数据集,即若某个数据集重心与传感器原点距离小于给定距离阈值D_min(即第五阈值),且扫描线数少于2层,则可以认为该数据集为地面点集合,并进行去除。
作为一种可选的实施例,下面以将本申请的技术方案应用于自动驾驶为例详述本申请的技术方案。
自动驾驶汽车(即Autonomous vehicles或Self-piloting automobile)又称无人驾驶汽车、电脑驾驶汽车、或轮式移动机器人,是一种通过电脑系统实现无人驾驶的智能汽车,自动驾驶汽车依靠人工智能、视觉计算、雷达、监控装置和全球定位系统协同合作,让电脑可以在没有任何人类主动的操作下,自动安全地操作机动车辆。
近年来,随着无人驾驶技术的兴起,如图3所示的多线束激光雷达得到了蓬勃的发展(64线束激光雷达和32线束激光雷达是其中的典型代表),图3中的每条黑色实线表示一条激光线束。与单线束激光雷达相比,多线束激光雷达一次可以扫描多根扫描线,能够快速得到周围环境的丰富三维信息,非常适合应用于无人驾驶系统的三维环境感知,如图4所示,每一个圈分别代表一束激光光束扫描得到的点云数据。
出于成本、应用的便捷性等目的的考虑,目前激光雷达的研发,正往小型化、低线束发展,特别是近年来ADAS应用的落地,低线束激光雷达发挥着越来越重要的作用,车辆搭载的Level3级别的自动驾驶系统,采用了4线束激光雷达,如图5所示。
由于单一传感器均具有各自的缺陷,往往需要进行多传感器融合,来实现鲁棒的环境感知,在无人车感知系统中,往往将视觉、毫米波和激光雷达的数据进行融合,相对于独立系统,这样可以做出更好、更安全的决策,恶劣天气条件或光照不足的情况不利于摄像头发挥作用,不过摄像头能够分辨颜色(可以识别红绿灯和路牌等信息),并且具有很高的分辨率;激光雷达可以准确测量周围障碍物的距离信息,且不受环境光线的影响,可以在夜晚正常工作。
针对单线束激光点云数据分割的方案比较多,但是针对低线束激光点云数据分割的方案比较少,相关技术中可以通过如下两种技术方案实现:一类是沿用单线束激光雷达分割算法,基于单根激光扫描线内点顺序排列的分割方法;另一类是忽略点的连续性,直接对3D点云数据进行分割的方法。
上述的方法主要存在以下缺点:基于单扫描线的分割方法仅考虑相邻点之间的距离和方向变化,当目标存在遮挡时容易造成过分割,而当目标对象之间距离过近时容易造成欠分割;此外,不同扫描线之间的分段合并需要进行多层循环,效率较低;直接基于3D点云数据的分割方法,仅考虑点与点之间的距离作为相似性度量,同样存在容易过分割和欠分割的问题。
为了克服以上问题,本申请提供了一种低线束激光点云的快速分割方法,该方法首先基于各扫描线的分割方法,得到各扫描线上的分割段;然后利用平面扫描方法,进行各扫描线分割段的合并,得到候选目标聚类集合;最后,对候选目标聚类集合进行特征提取,剔除噪声和地面集合,得到最终的分割结果。低线束激光点云(即目标点云数据)快速分割方法的基本流程如下图6所示,以该方法在车载终端上执行为例:
步骤S602,车载终端对获取到的低线束激光点云进行逐层扫描线分割。
参考图7,考虑距离和角度连续性的扫描线分割方案,具体实现流程如下:
1)初始化分割段集合M为空,初始化λ和A_max,并将第一个点P_1(相当于第二点云数据)加入到第一个分割段S_1中(相当于一个第一数据集);
2)从第二个点P_2开始,依次遍历扫描线上的每一个点P_i(相当于第四点云数据),如果||P_i-P_{i-1}||(即第三点云数据P_{i-1}与第四点云数据P_i之间的距离)大于D_max(第一阈值),且P_{i-1}、P_i和P_{i+1}(相当于第五点云数据)这3个点形成的角度值小于180°-A_max(相当于第二阈值),则将当前的分割段S_k保存到M中,并新建分割段S_{k+1}(相当于另一个第一数据集),并将当前点P_i插入到S_{k+1}中;否则,将当前点P_i插入到S_k中;
3)遍历完所有点后,如果当前分割段S_n非空,则将当前分割段S_n插入到集合M中。
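上述1)-3)的扫描线分割流程可以用如下示意代码概括。该代码仅为便于理解的简化示意(函数名、参数名均为本文假设,且仅以二维点为例),并非本申请的正式实现:

```python
import math

def segment_scan_line(points, d_max, a_max_deg):
    """按距离与角度连续性将单条扫描线上的点序列切分为分割段。
    points: [(x, y), ...] 按采集顺序排列;d_max: 距离阈值;a_max_deg: 角度阈值(度)。"""
    def dist(p, q):
        return math.hypot(p[0] - q[0], p[1] - q[1])

    def angle(a, b, c):
        # 计算三点形成的夹角 ∠ABC(度)
        v1 = (a[0] - b[0], a[1] - b[1])
        v2 = (c[0] - b[0], c[1] - b[1])
        n1, n2 = math.hypot(*v1), math.hypot(*v2)
        if n1 == 0 or n2 == 0:
            return 180.0
        cosv = max(-1.0, min(1.0, (v1[0] * v2[0] + v1[1] * v2[1]) / (n1 * n2)))
        return math.degrees(math.acos(cosv))

    segments, current = [], [points[0]]
    for i in range(1, len(points)):
        far = dist(points[i], points[i - 1]) > d_max
        # 角度判据需要 P_{i-1}、P_i、P_{i+1} 三个点
        sharp = (i + 1 < len(points) and
                 angle(points[i - 1], points[i], points[i + 1]) < 180.0 - a_max_deg)
        if far and sharp:
            segments.append(current)   # 当前分割段 S_k 存入集合 M
            current = [points[i]]      # 新建分割段 S_{k+1}
        else:
            current.append(points[i])
    if current:
        segments.append(current)       # 对应步骤3):非空的当前分割段并入 M
    return segments
```

打断条件"距离大于D_max且三点夹角小于180°-A_max"与上文步骤2)对应;遍历结束后将非空的当前分割段并入集合M,对应步骤3)。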
扫描线分割段的打断和合并:如果扫描线分割段呈现出非凸形状,则需要在转折点处进行打断;如果某两个分割段因被前景物体遮挡而被分为两部分,则可以进行合并。
步骤S604,车载终端基于平面扫描法的分割段合并。
平面扫描算法是计算几何中的基础算法,它用来计算平面上若干线段的交点,这里将平面扫描算法进行改进,以实现各扫描线分割段的合并。具体算法流程如下:
1)定义事件Event为分割段的起始或结束端点,其中起始点对应当前分割段的插入事件,结束点对应当前分割段的删除事件;定义事件队列Q(相当于事件集合)为所有事件的有序集合;定义当前分割段集合S;
2)初始化事件队列Q和当前分割段集合S(相当于线段集合)为空,并将所有扫描线分割段的端点插入到事件队列Q中,并进行从小到大排序;
3)依次遍历事件队列Q中的每一个事件:如果该事件为插入事件,则将其对应的分割段(第一分割线段)插入到当前分割段集合S中;如果该事件为删除事件,则先计算其对应的分割段与当前分割段集合S中其它分割段的距离,如果距离小于给定阈值D_thred(相当于第三阈值),则说明存在第二分割线段,将当前分割段合并到距离最近的分割段所在集合中;否则将当前分割段所在集合输出;最后将其对应的分割段从当前分割段集合S中删除。
分割段之间的距离计算方法,可以是计算两分割段之间重叠区域内的平均距离。
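上述基于平面扫描法的分割段合并流程可以用如下示意代码表达。为保持示例简洁,段间距离以两段点对的最小欧氏距离近似(原文为两分割段重叠区域内的平均距离),且合并到任一距离小于阈值的分割段所在集合而非严格最近者,合并关系用并查集记录;函数名与数据结构均为本文假设:

```python
import math

def merge_segments(segments, d_thred):
    """用平面扫描法合并分割段。segments: [[(x, y), ...], ...],返回合并后的段下标分组。"""
    def seg_dist(s1, s2):
        # 简化:取两段点对的最小欧氏距离作为段间距离
        return min(math.hypot(p[0] - q[0], p[1] - q[1]) for p in s1 for q in s2)

    # 事件:每个分割段的起点对应插入事件,终点对应删除事件,按 x 从小到大排序
    events = []
    for idx, seg in enumerate(segments):
        xs = [p[0] for p in seg]
        events.append((min(xs), 'insert', idx))
        events.append((max(xs), 'delete', idx))
    events.sort(key=lambda e: (e[0], e[1] == 'delete'))

    parent = list(range(len(segments)))  # 并查集记录合并关系
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    active = set()  # 当前分割段集合 S
    for _, kind, idx in events:
        if kind == 'insert':
            active.add(idx)
        else:
            # 删除事件:与集合 S 中其它分割段比较距离,小于阈值则合并
            for other in active:
                if other != idx and seg_dist(segments[idx], segments[other]) < d_thred:
                    parent[find(idx)] = find(other)
            active.discard(idx)

    clusters = {}
    for idx in range(len(segments)):
        clusters.setdefault(find(idx), []).append(idx)
    return list(clusters.values())
```

例如,两条在空间上相互贴近的分割段会被并入同一个候选目标聚类集合,而远离的分割段各自成为一个集合。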
步骤S606,车载终端剔除噪声和地面集合。
1)噪声点云集合剔除:剔除点云数据的个数少于最小点阈值N_min(相当于第四阈值)的集合,因为如果点云数据的个数少于指定阈值,则说明包括该点云数据的数据集有较大概率属于噪声;此外,点云数据的个数过少无法计算相关特征;
2)地面点云集合剔除:若点云集合重心与传感器原点的距离小于给定距离阈值D_min(相当于第五阈值),且扫描线数少于2层,则可以认为该点云集合为地面点集合,并进行去除。
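上述噪声点云集合与地面点云集合的剔除规则可以写成如下示意代码。其中聚类的数据结构、字段名(points、ring_count)以及传感器原点均为本文假设,仅为简化示意:

```python
import math

def filter_clusters(clusters, n_min, d_min, sensor_origin=(0.0, 0.0, 0.0)):
    """剔除噪声集合(点数少于 N_min)与疑似地面集合(重心距传感器小于 D_min 且扫描线数少于2)。
    clusters: [{'points': [(x, y, z), ...], 'ring_count': int}, ...]"""
    kept = []
    for c in clusters:
        pts = c['points']
        if len(pts) < n_min:          # 噪声:点云个数少于最小点阈值 N_min
            continue
        cx = sum(p[0] for p in pts) / len(pts)
        cy = sum(p[1] for p in pts) / len(pts)
        cz = sum(p[2] for p in pts) / len(pts)
        d = math.dist((cx, cy, cz), sensor_origin)
        if d < d_min and c['ring_count'] < 2:   # 地面:重心近且扫描线数少于2层
            continue
        kept.append(c)
    return kept
```

点数少于N_min的集合被视为噪声剔除;重心距传感器原点小于D_min且扫描线数少于2的集合被视为地面点集合剔除,其余集合保留为最终分割结果。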
自适应距离阈值计算示意图参见图7,自适应距离阈值可按下式计算:

D_max = r_{n-1}·sin(Δφ)/sin(λ−Δφ) + 3σ_r,其中 Δφ = φ_n − φ_{n-1}

其中,φ_{n-1}是点云数据P_{n-1}与x轴之间的夹角,φ_n是点云数据P_n与x轴之间的夹角,Δφ是φ_n与φ_{n-1}之间的差值,D_max表示以P_{n-1}为圆心的圆的半径,P′_n表示经过原点和P_n的线条与圆周之间的交点,r_{n-1}表示原点到P_{n-1}之间的线段的长度,λ表示经过原点和P_{n-1}的线条与经过P′_n和P_{n-1}的线条之间的夹角,σ_r是预先设定的参数,可以为经验值或者实验值。
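上文的自适应距离阈值,其一种常见计算形式为 D_max = r_{n-1}·sin(Δφ)/sin(λ−Δφ) + 3σ_r(该具体公式形式为本文基于常见扫描线自适应阈值方法的假设,并非原文逐字给出),对应的示意代码如下:

```python
import math

def adaptive_d_max(r_prev, delta_phi, lam, sigma_r):
    """计算自适应距离阈值 D_max(假设采用常见的扫描线自适应阈值形式)。
    r_prev: 原点到 P_{n-1} 的距离 r_{n-1};delta_phi: 相邻两束激光的角度差 Δφ(弧度);
    lam: 预设角度 λ(弧度);sigma_r: 预先设定的测距噪声参数 σ_r。"""
    return r_prev * math.sin(delta_phi) / math.sin(lam - delta_phi) + 3.0 * sigma_r

# 距离越远,允许的相邻点间距阈值越大
d1 = adaptive_d_max(r_prev=10.0, delta_phi=math.radians(0.5), lam=math.radians(10.0), sigma_r=0.01)
d2 = adaptive_d_max(r_prev=20.0, delta_phi=math.radians(0.5), lam=math.radians(10.0), sigma_r=0.01)
```

距离r越大、相邻激光束角度差Δφ越大时,D_max也相应增大,从而避免远处较稀疏的点被过分割。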
基于以上步骤,就可以实现低线束激光点云的快速分割,得到最终的分割结果,如图8中801所示。分割结果中属于不同目标对象上的点云数据可以用不同颜色标识,也可以用不同形状的标识框标识,801将属于同一个目标对象(例如栏杆)的点云数据采用矩形框标识。
针对相关技术中存在的如下问题:基于单扫描线的分割方法仅考虑相邻点之间的距离和方向变化,当目标对象存在遮挡时容易造成过分割,而当目标对象之间距离过近时容易造成欠分割;此外,不同扫描线之间的分割线段合并需要进行多层循环,效率较低;直接基于3D点云数据的分割方法,仅考虑点与点之间的距离作为相似性度量,同样存在容易过分割和欠分割的问题。在本申请的方案中,提供了一种低线束激光点云的快速分割方法。该方法首先基于各扫描线的分割方法,得到各扫描线上的分割段;然后利用平面扫描方法,进行各扫描线分割段的合并,得到候选目标聚类集合;最后,对候选目标聚类集合进行特征提取,剔除地面和噪声集合,得到最终的分割结果。
随着低线束激光雷达的量产,基于低线束激光点云数据的精确分割,是实现障碍物检测和跟踪的基本前提,在无人驾驶感知技术中具有重要意义。本申请提出的基于低线束激光点云数据的快速分割方法,在降低算法复杂度的同时,有利于减少分割过程中出现的过分割和欠分割现象,提升自动驾驶的感知鲁棒性。
需要说明的是,对于前述的各方法实施例,为了简单描述,故将其都表述为一系列的动作组合,但是本领域技术人员应该知悉,本申请并不受所描述的动作顺序的限制,因为依据本申请,某些步骤可以采用其他顺序或者同时进行。其次,本领域技术人员也应该知悉,说明书中所描述的实施例均属于优选实施例,所涉及的动作和模块并不一定是本申请所必须的。
通过以上的实施方式的描述,本领域的技术人员可以清楚地了解到根据上述实施例的方法可借助软件加必需的通用硬件平台的方式来实现,当然也可以通过硬件,但很多情况下前者是更佳的实施方式。基于这样的理解,本申请的技术方案本质上或者说对现有技术做出贡献的部分可以以软件产品的形式体现出来,该计算机软件产品存储在一个存储介质(如ROM/RAM、磁碟、光盘)中,包括若干指令用以使得一台终端设备(可以是手机,计算机,服务器,或者网络设备等)执行本申请各个实施例所述的方法。
根据本申请实施例的另一个方面,还提供了一种用于实施上述点云数据的分割方法的点云数据的分割装置。图9是根据本申请实施例的一种可选的点云数据的分割装置的示意图,如图9所示,该装置可以包括:
获取单元901,用于获取目标点云数据,其中,目标点云数据为通过激光线束对车辆周围的目标对象进行扫描得到的数据。
在一种实现方式中,上述的目标点云数据可为激光雷达的多个激光线束扫描得到的数据,激光雷达可借着扫描技术来测量对象的尺寸及形状等,激光雷达可采用一个稳定度及精度良好的旋转马达,当激光线束打到由马达所带动的多面棱规反射而形成扫描光束,由于多面棱规位于扫描透镜的前焦面上,并均匀旋转使激光线束对反射镜而言,其入射角相对地连续性改变,因而反射角也作连续性改变,经由扫描透镜的作用,形成一平行且连续由上而下的扫描线,从而形成扫描线数据,即单线束激光扫描一次形成的点云序列。
本申请的激光雷达可为低线束激光雷达或多线束激光雷达,低线束激光雷达扫描一次可产生较少线束扫描线,低线束激光雷达一般包括4线束、8线束,主要为2.5D激光雷达,垂直视野范围一般不超过10°;多线束激光雷达(或称3D激光雷达)扫描一次可产生多条扫描线,多线束激光雷达一般包括16线束、32线束、64线束等,3D激光雷达与2.5D激光雷达最大的区别在于激光雷达垂直视野的范围,3D激光雷达的垂直视野范围可达到30°甚至40°以上。
先进驾驶辅助系统ADAS(Advanced Driver Assistance System)是利用安装于车上的各式各样的传感器,在第一时间收集车内外的环境数据,进行静、动态物体的辨识、侦测与追踪等技术上的处理,从而让驾驶者在最快的时间察觉可能发生的危险,以引起注意和提高安全性的主动安全技术。ADAS采用的传感器主要有摄像头、激光雷达等,当车辆检测到潜在危险时,会发出警报提醒驾车者注意异常的车辆或道路情况,在这种情况下,目标对象可以为车外的静、动态物体等用于判断是否为潜在危险、辅助驾驶逻辑判断的对象,如建筑、行人、其它车辆、动物、红绿灯等。
聚类单元903,用于对目标点云数据进行聚类得到多个第一数据集,其中,每个第一数据集包括的点云数据所表示的特征点被拟合在同一条分割线段上,特征点为目标对象上的点。
合并单元905,用于按照多个分割线段之间的距离对多个第一数据集进行合并,得到第二数据集,其中,第二数据集包括至少一个第一数据集。
需要说明的是,该实施例中的获取单元901可以用于执行本申请实施例中的步骤S202,该实施例中的聚类单元903可以用于执行本申请实施例中的步骤S204,该实施例中的合并单元905可以用于执行本申请实施例中的步骤S206。
此处需要说明的是,上述模块与对应的步骤所实现的示例和应用场景相同,但不限于上述实施例所公开的内容。需要说明的是,上述模块作为装置的一部分可以运行在如图1所示的硬件环境中,可以通过软件实现,也可以通过硬件实现。
申请人经过对相关技术进行分析,认识到相关技术中的大多数点云分割方法是对无序、离散的点云进行处理,点云分割方法中,聚类分割方法由于其方法复杂度低、易于实现,可用于空间点云的分割,但是由于室外大场景中地面点云的存在,使用聚类方法难以有效地对地面和非地面物体进行分割。
在一个可选的实施例中,在非地面点云聚类分割部分,可采用基于固定阈值半径的聚类分割方法,阈值的选取对分割的结果影响很大,如果阈值选取过大,间隔距离较近的小物体可能不会被分开(即出现欠分割),如果阈值选取过小,间隔距离较大的物体(如建筑物)可能被分割成多个聚类(即出现过分割)。
而在本申请的实施例中,首先基于各扫描线的分割方法,得到各扫描线上的分割线段(可简称为分割段),每个分割线段对应于一个第一数据集合,可以实现对物体轮廓的初步分割;然后利用平面扫描方法,进行各扫描线分割段的合并,得到候选目标聚类集合(即第二数据集合),其相当于是将共属于一个对象物体的轮廓线条合并,由于仅需遍历一次点云数据,可以解决仅仅采用聚类分割时效率低下、难以有效地对地面和非地面物体进行分割的问题,由于本申请的方案仅需判断分割线段间距,相对于点云间的距离,不会出现判断点云间的距离的偶然性因素(部分点云由于视角原因在某个平面维度重叠但是实际在空间中是不重叠的),还可避免采用基于固定阈值半径的聚类分割方法直接对点云进行分割时出现的欠分割或者过分割的问题。可见,本申请的技术方案在降低算法复杂度的同时,有利于减少分割过程中出现的过分割和欠分割现象,提升自动驾驶的感知鲁棒性。
通过上述模块,获取目标点云数据,目标点云数据为通过激光线束对车辆周围的目标对象进行扫描得到的数据;对目标点云数据进行聚类得到多个第一数据集,每个第一数据集中包括的点云数据所表示的特征点被拟合在同一条分割线段上,特征点为目标对象上的点;按照多个分割线段之间的距离对多个第一数据集进行合并,得到第二数据集,第二数据集包括至少一个第一数据集。在分割的过程中,“对目标点云数据进行聚类得到多个第一数据集”相当于仅仅遍历了一次所有点云数据就完成了点云数据的分割,而不用像相关技术中通过多次遍历点云数据来完成分割,可以解决相关技术中进行点云分割的效率较低的技术问题,进而达到提高分割效率的技术效果。
可选地,上述聚类单元可包括:查找模块,用于查找目标点云数据中的多个第一点云数据,其中,多个第一点云数据所表示的特征点相邻;第一保存模块,用于将多个第一点云数据保存至创建的同一个第一数据集中。
上述查找模块还可用于将目标点云数据中所表示的特征点之间的距离不大于第一阈值、且所表示的特征点之间所形成的夹角不小于第二阈值的点云数据作为多个第一点云数据。
可选地,查找模块可包括:获取子模块,用于获取目标点云数据中的第二点云数据,其中,第二点云数据为目标点云数据中未被聚类至任意一个第一数据集的点云数据;查找子模块,用于若第三点云数据所表示的特征点与第四点云数据所表示的特征点之间的距离大于第一阈值,且第三点云数据所表示的特征点、第四点云数据所表示的特征点以及第五点云数据所表示的特征点之间所形成的夹角小于第二阈值,将第二点云数据、第三点云数据以及采集时间位于第二点云数据的采集时间和第三点云数据的采集时间之间的点云数据作为多个第一点云数据;其中,第三点云数据为目标点云数据中未被聚类至任意一个第一数据集的点云数据,且第三点云数据的采集时间晚于第二点云数据的采集时间,第三点云数据和第四点云数据为采集时间相邻的点云数据且第四点云数据的采集时间晚于第三点云数据的采集时间,第四点云数据和第五点云数据为采集时间相邻的点云数据且第五点云数据的采集时间晚于第四点云数据的采集时间。
可选地,聚类单元还可包括:第二保存模块,用于若第三点云数据所表示的特征点与第四点云数据所表示的特征点之间的距离大于第一阈值,且第三点云数据所表示的特征点、第四点云数据所表示的特征点以及第五点云数据所表示的特征点之间所形成的夹角小于第二阈值,将第四点云数据保存至不同于用于保存第三点云数据的另一个第一数据集中。
上述的合并单元还可用于对多个第一数据集中所拟合得到的分割线段之间的距离小于第三阈值的第一数据集合进行合并,得到第二数据集合。
可选地,合并单元可包括:创建模块,用于创建事件集合,其中,事件集合中按照多个第一数据集中点云数据的采集时间保存有多个第一数据集对应的多个分割线段的事件,分割线段的事件包括与分割线段的起始特征点对应的插入事件和与分割线段的结束特征点对应的删除事件;合并模块,用于遍历事件集合中的每个事件,在遍历到的当前事件为插入事件的情况下,将多个分割线段中与当前事件对应的第一分割线段保存至线段集合中;在当前事件为删除事件、且线段集合中不存在第二分割线段的情况下,将与第一分割线段对应的第一数据集作为一个第三数据集;在当前事件为删除事件、且线段集合中存在第二分割线段的情况下,将与第一分割线段对应的第一数据集合并至与第二分割线段对应的第一数据集中,得到一个第三数据集,第二分割线段为线段集合中与第一分割线段之间的距离小于第三阈值的分割线段;确定模块,用于根据得到的多个第三数据集确定第二数据集。
可选地,上述的确定模块还可用于:将多个第三数据集作为第二数据集;对多个第三数据集进行去噪处理,得到第二数据集。
上述的确定模块还可用于:获取第三数据集中点云数据的个数,若第三数据集中点云数据的个数小于第四阈值,删除第三数据集;获取第三数据集中点云数据所表示的特征点的重心与激光传感器之间的距离,以及第三数据集中点云数据的扫描线数,若该距离小于第五阈值且扫描线数小于2,删除第三数据集。
可选地,获取单元还可用于:通过安装在车辆上的激光传感器对目标对象进行扫描得到目标点云数据,该车辆具有自动驾驶系统。
针对相关技术中存在的如下问题:基于单扫描线的分割方法仅考虑相邻点之间的距离和方向变化,当目标对象存在遮挡时容易造成过分割,而当目标对象之间距离过近时容易造成欠分割;此外,不同扫描线之间的分割线段合并需要进行多层循环,效率较低;直接基于3D点云数据的分割方法,仅考虑点与点之间的距离作为相似性度量,同样存在容易过分割和欠分割的问题。在本申请的方案中,提供了一种低线束激光点云的快速分割方法。该方法首先基于各扫描线的分割方法,得到各扫描线上的分割段;然后利用平面扫描方法,进行各扫描线分割段的合并,得到候选目标聚类集合;最后,对候选目标聚类集合进行特征提取,剔除地面和噪声集合,得到最终的分割结果。
随着低线束激光雷达的量产,基于低线束激光点云数据的精确分割,是实现障碍物检测和跟踪的基本前提,在无人驾驶感知技术中具有重要意义。本申请提出的基于低线束激光点云数据的快速分割方法,在降低算法复杂度的同时,有利于减少分割过程中出现的过分割和欠分割现象,提升自动驾驶的感知鲁棒性。
此处需要说明的是,上述模块与对应的步骤所实现的示例和应用场景相同,但不限于上述实施例所公开的内容。需要说明的是,上述模块作为装置的一部分可以运行在如图1所示的硬件环境中,可以通过软件实现,也可以通过硬件实现,其中,硬件环境包括网络环境。
根据本申请实施例的另一个方面,还提供了一种用于实施上述点云数据的分割方法的服务器或终端。
图10是根据本申请实施例的一种终端的结构框图,如图10所示,该终端可以包括:一个或多个(图10中仅示出一个)处理器1001、存储器1003、以及传输装置1005,如图10所示,该终端还可以包括输入输出设备1007。
其中,存储器1003可用于存储软件程序以及模块,如本申请实施例中的点云数据的分割方法和装置对应的程序指令/模块,处理器1001通过运行存储在存储器1003内的软件程序以及模块,从而执行各种功能应用以及数据处理,即实现上述的点云数据的分割方法。存储器1003可包括高速随机存储器,还可以包括非易失性存储器,如一个或者多个磁性存储装置、闪存、或者其他非易失性固态存储器。在一些实例中,存储器1003可进一步包括相对于处理器1001远程设置的存储器,这些远程存储器可以通过网络连接至终端。上述网络的实例包括但不限于互联网、企业内部网、局域网、移动通信网及其组合。
上述的传输装置1005用于经由一个网络接收或者发送数据,还可以用于处理器与存储器之间的数据传输。上述的网络具体实例可包括有线网络及无线网络。在一个实例中,传输装置1005包括一个网络适配器(Network Interface Controller,NIC),其可通过网线与其他网络设备与路由器相连从而可与互联网或局域网进行通讯。在一个实例中,传输装置1005为射频(Radio Frequency,RF)模块,其用于通过无线方式与互联网进行通讯。
其中,具体地,存储器1003用于存储应用程序。
处理器1001可以通过传输装置1005调用存储器1003存储的应用程序,以执行下述步骤:
获取目标点云数据,其中,目标点云数据为通过激光线束对车辆周围的目标对象进行扫描得到的数据;
对目标点云数据进行聚类得到多个第一数据集,其中,每个第一数据集中包括的点云数据所表示的特征点被拟合在同一条分割线段上,特征点为目标对象上的点;
按照多个分割线段之间的距离对多个第一数据集进行合并,得到第二数据集,其中,每个第二数据集包括至少一个第一数据集。
采用本申请实施例,获取目标点云数据,目标点云数据为通过激光线束对车辆周围的目标对象进行扫描得到的数据;对目标点云数据进行聚类得到多个第一数据集,每个第一数据集中包括的点云数据所表示的特征点被拟合在同一条分割线段上,特征点为目标对象上的点;按照多个分割线段之间的距离对多个第一数据集进行合并,得到第二数据集,第二数据集包括至少一个第一数据集。在分割的过程中,“对目标点云数据进行聚类得到多个第一数据集”相当于仅仅遍历了一次所有点云数据就完成了点云数据的分割,而不用像相关技术中通过多次遍历点云数据来完成分割,可以解决相关技术中进行点云分割的效率较低的技术问题,进而达到提高分割效率的技术效果。
可选地,本实施例中的具体示例可以参考上述实施例中所描述的示例,本实施例在此不再赘述。
本领域普通技术人员可以理解,图10所示的结构仅为示意,终端可以是智能手机(如Android手机、iOS手机等)、平板电脑、掌上电脑以及移动互联网设备(Mobile Internet Devices,MID)、PAD等终端设备。图10并不对上述电子装置的结构造成限定。例如,终端还可包括比图10中所示更多或者更少的组件(如网络接口、显示装置等),或者具有与图10所示不同的配置。
本领域普通技术人员可以理解上述实施例的各种方法中的全部或部分步骤是可以通过程序来指令终端设备相关的硬件来完成,该程序可以存储于一计算机可读存储介质中,存储介质可以包括:闪存盘、只读存储器(Read-Only Memory,ROM)、随机存取器(Random Access Memory,RAM)、磁盘或光盘等。
本申请的实施例还提供了一种存储介质。可选地,在本实施例中,上述存储介质可以用于执行点云数据的分割方法的程序代码。
可选地,在本实施例中,上述存储介质可以位于上述实施例所示的网络中的多个网络设备中的至少一个网络设备上。
可选地,在本实施例中,存储介质被设置为存储用于执行以下步骤的程序代码:
S12,获取目标点云数据,其中,目标点云数据为通过激光线束对车辆周围的目标对象进行扫描得到的数据;
S14,对目标点云数据进行聚类得到多个第一数据集,其中,每个第一数据集中包括的点云数据所表示的特征点被拟合在同一条分割线段上,特征点为目标对象上的点;
S16,按照多个分割线段之间的距离对多个第一数据集进行合并,得到第二数据集,其中,第二数据集包括至少一个第一数据集。
可选地,本实施例中的具体示例可以参考上述实施例中所描述的示例,本实施例在此不再赘述。
可选地,在本实施例中,上述存储介质可以包括但不限于:U盘、只读存储器(ROM,Read-Only Memory)、随机存取存储器(RAM,Random Access Memory)、移动硬盘、磁碟或者光盘等各种可以存储程序代码的介质。
上述本申请实施例序号仅仅为了描述,不代表实施例的优劣。
上述实施例中的集成的单元如果以软件功能单元的形式实现并作为独立的产品销售或使用时,可以存储在上述计算机可读取的存储介质中。基于这样的理解,本申请的技术方案本质上或者说对现有技术做出贡献的部分或者该技术方案的全部或部分可以以软件产品的形式体现出来,该计算机软件产品存储在存储介质中,包括若干指令用以使得一台或多台计算机设备(可为个人计算机、服务器或者网络设备等)执行本申请各个实施例所述方法的全部或部分步骤。
在本申请的上述实施例中,对各个实施例的描述都各有侧重,某个实施例中没有详述的部分,可以参见其他实施例的相关描述。
在本申请所提供的几个实施例中,应该理解到,所揭露的客户端,可通过其它的方式实现。其中,以上所描述的装置实施例仅仅是示意性的,例如所述单元的划分,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式,例如多个单元或组件可以结合或者可以集成到另一个系统,或一些特征可以忽略,或不执行。另一点,所显示或讨论的相互之间的耦合或直接耦合或通信连接可以是通过一些接口,单元或模块的间接耦合或通信连接,可以是电性或其它的形式。
所述作为分离部件说明的单元可以是或者也可以不是物理上分开的,作为单元显示的部件可以是或者也可以不是物理单元,即可以位于一个地方,或者也可以分布到多个网络单元上。可以根据实际的需要选择其中的部分或者全部单元来实现本实施例方案的目的。
另外,在本申请各个实施例中的各功能单元可以集成在一个处理单元中,也可以是各个单元单独物理存在,也可以两个或两个以上单元集成在一个单元中。上述集成的单元既可以采用硬件的形式实现,也可以采用软件功能单元的形式实现。
以上所述仅是本申请的优选实施方式,应当指出,对于本技术领域的普通技术人员来说,在不脱离本申请原理的前提下,还可以做出若干改进和润饰,这些改进和润饰也应视为本申请的保护范围。

Claims (15)

  1. 一种点云数据的分割方法,所述方法应用于网络设备,包括:
    获取目标点云数据,其中,所述目标点云数据为通过激光线束对车辆周围的目标对象进行扫描得到的数据;
    对所述目标点云数据进行聚类得到多个第一数据集,其中,每个所述第一数据集中包括的点云数据所表示的特征点被拟合在同一条分割线段上,所述特征点为所述目标对象上的点;以及
    按照多个所述分割线段之间的距离对所述多个第一数据集进行合并,得到第二数据集,其中,所述第二数据集包括至少一个所述第一数据集。
  2. 根据权利要求1所述的方法,其中,所述对所述目标点云数据进行聚类得到多个第一数据集包括按照如下方式创建每个所述第一数据集:
    查找所述目标点云数据中的多个第一点云数据,其中,所述多个第一点云数据所表示的特征点相邻;以及
    将所述多个第一点云数据保存至创建的同一个所述第一数据集中。
  3. 根据权利要求2所述的方法,其中,查找所述目标点云数据中的多个第一点云数据包括:
    将所述目标点云数据中所表示的特征点之间的距离不大于第一阈值、且所表示的特征点之间所形成的夹角不小于第二阈值的点云数据作为所述多个第一点云数据。
  4. 根据权利要求3所述的方法,其中,将所述目标点云数据中所表示的特征点之间的距离不大于第一阈值、且所表示的特征点之间所形成的夹角不小于第二阈值的点云数据作为所述多个第一点云数据包括:
    获取所述目标点云数据中的第二点云数据,其中,所述第二点云数据为所述目标点云数据中未被聚类至任意一个所述第一数据集的点云数据;以及
    若第三点云数据所表示的特征点与第四点云数据所表示的特征点之间的距离大于所述第一阈值,且所述第三点云数据所表示的特征点、所述第四点云数据所表示的特征点以及第五点云数据所表示的特征点之间所形成的夹角小于所述第二阈值,将所述第二点云数据、所述第三点云数据以及采集时间位于所述第二点云数据的采集时间和所述第三点云数据的采集时间之间的点云数据作为所述多个第一点云数据;其中,所述第三点云数据为所述目标点云数据中未被聚类至任意一个所述第一数据集的点云数据,且所述第三点云数据的采集时间晚于所述第二点云数据的采集时间,所述第三点云数据和所述第四点云数据为采集时间相邻的点云数据且所述第四点云数据的采集时间晚于所述第三点云数据的采集时间, 所述第四点云数据和所述第五点云数据为采集时间相邻的点云数据且所述第五点云数据的采集时间晚于所述第四点云数据的采集时间。
  5. 根据权利要求4所述的方法,其中,所述方法还包括:
    若所述第三点云数据所表示的特征点与所述第四点云数据所表示的特征点之间的距离大于所述第一阈值,且所述第三点云数据所表示的特征点、所述第四点云数据所表示的特征点以及所述第五点云数据所表示的特征点之间所形成的夹角小于所述第二阈值,将所述第四点云数据保存至不同于用于保存所述第三点云数据的另一个所述第一数据集中。
  6. 根据权利要求1至5中任意一项所述的方法,其中,按照多个所述分割线段之间的距离对所述多个第一数据集进行合并,得到第二数据集包括:
    对所述多个第一数据集中所拟合得到的所述分割线段之间的距离小于第三阈值的所述第一数据集合进行合并,得到所述第二数据集合。
  7. 根据权利要求6所述的方法,其中,对所述多个第一数据集中所拟合得到的所述分割线段之间的距离小于第三阈值的所述第一数据集合进行合并,得到所述第二数据集合包括:
    创建事件集合,其中,所述事件集合中按照所述多个第一数据集中点云数据的采集时间保存有所述多个第一数据集对应的多个所述分割线段的事件,所述分割线段的事件包括与所述分割线段的起始特征点对应的插入事件和与所述分割线段的结束特征点对应的删除事件;
    遍历所述事件集合中的每个事件,在遍历到的当前事件为插入事件的情况下,将多个所述分割线段中与所述当前事件对应的第一分割线段保存至线段集合中;在所述当前事件为删除事件、且所述线段集合中不存在第二分割线段的情况下,将与所述第一分割线段对应的所述第一数据集作为一个第三数据集;在所述当前事件为删除事件、且所述线段集合中存在所述第二分割线段的情况下,将与所述第一分割线段对应的所述第一数据集合并至与所述第二分割线段对应的所述第一数据集中,得到一个所述第三数据集,所述第二分割线段为所述线段集合中与所述第一分割线段之间的距离小于第三阈值的分割线段;以及
    根据得到的多个所述第三数据集确定所述第二数据集。
  8. 根据权利要求7所述的方法,其中,根据得到的多个所述第三数据集确定所述第二数据集包括:
    将多个所述第三数据集作为所述第二数据集;或,
    对多个所述第三数据集进行去噪处理,得到所述第二数据集。
  9. 根据权利要求8所述的方法,其中,对多个所述第三数据集进行去噪处理包括:
    获取所述第三数据集中点云数据的个数;
    若所述第三数据集中点云数据的个数小于第四阈值,删除所述第三数据集;
    和/或,
    获取所述第三数据集中点云数据所表示的特征点的重心与激光传感器之间的距离,以及所述第三数据集中点云数据的扫描线数;以及
    若所述距离小于第五阈值且所述扫描线数小于2,删除所述第三数据集。
  10. 根据权利要求1至5中任意一项所述的方法,其中,获取目标点云数据包括:
    通过安装在所述车辆上的激光传感器对所述目标对象进行扫描得到所述目标点云数据;所述车辆具有自动驾驶系统。
  11. 一种点云数据的分割装置,应用于网络设备,包括:
    获取单元,用于获取目标点云数据,其中,所述目标点云数据为通过激光线束对车辆周围的目标对象进行扫描得到的数据;
    聚类单元,用于对所述目标点云数据进行聚类得到多个第一数据集,其中,每个所述第一数据集中包括的点云数据所表示的特征点被拟合在同一条分割线段上,所述特征点为所述目标对象上的点;以及
    合并单元,用于按照多个所述分割线段之间的距离对所述多个第一数据集进行合并,得到第二数据集,其中,所述第二数据集包括至少一个所述第一数据集。
  12. 根据权利要求11所述的装置,其中,所述聚类单元包括:
    查找模块,用于查找所述目标点云数据中的多个第一点云数据,其中,所述多个第一点云数据所表示的特征点相邻;以及
    第一保存模块,用于将所述多个第一点云数据保存至创建的同一个所述第一数据集中。
  13. 根据权利要求12所述的装置,其中,所述查找模块包括:
    获取子模块,用于获取所述目标点云数据中的第二点云数据,其中,所述第二点云数据为所述目标点云数据中未被聚类至任意一个所述第一数据集的点云数据;以及
    查找子模块,用于若第三点云数据所表示的特征点与第四点云数据所表示的特征点之间的距离大于所述第一阈值,且所述第三点云数据所表示的特征点、所述第四点云数据所表示的特征点以及第五点云数据所表示的特征点之间所形成的夹角小于所述第二阈值,将所述第二点云数据、所述第三点云数据以及采集时间位于所述第二点云数据的采集时间和所述第三点云数据的采集时间之间的点云数据作为所述多个第一点云数据;其中,所述第三点云数据为所述目标点云数据中未被聚类至任意一个所述第一数据集的点云数据,且所述第三点云数据的采集时间晚于所述第二点云数据的采集时间,所述第三点云数据和所述第四点云数据为采集时间相邻的点云数据且所述第四点云数据的采集时间晚于所述第三点云数据的采集时间,所述第四点云数据和所述第五点云数据为采集时间相邻的点云数据且所述第五点云数据的采集时间晚于所述第四点云数据的采集时间。
  14. 一种非易失性计算机可读存储介质,所述存储介质包括存储的程序,其中,所述程序运行时执行上述权利要求1至10任一项中所述的方法。
  15. 一种电子装置,包括存储器、处理器及存储在所述存储器上并可在所述处理器上运行的计算机程序,所述处理器通过所述计算机程序执行上述权利要求1至10任一项中所述的方法。
PCT/CN2019/102486 2018-08-27 2019-08-26 点云数据的分割方法和装置、存储介质、电子装置 WO2020043041A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US17/019,067 US11282210B2 (en) 2018-08-27 2020-09-11 Method and apparatus for segmenting point cloud data, storage medium, and electronic device

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201810982858.0 2018-08-27
CN201810982858.0A CN110148144B (zh) 2018-08-27 2018-08-27 点云数据的分割方法和装置、存储介质、电子装置

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US17/019,067 Continuation US11282210B2 (en) 2018-08-27 2020-09-11 Method and apparatus for segmenting point cloud data, storage medium, and electronic device

Publications (1)

Publication Number Publication Date
WO2020043041A1 true WO2020043041A1 (zh) 2020-03-05

Family

ID=67589379

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/102486 WO2020043041A1 (zh) 2018-08-27 2019-08-26 点云数据的分割方法和装置、存储介质、电子装置

Country Status (3)

Country Link
US (1) US11282210B2 (zh)
CN (1) CN110148144B (zh)
WO (1) WO2020043041A1 (zh)


Families Citing this family (24)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110148144B (zh) 2018-08-27 2024-02-13 腾讯大地通途(北京)科技有限公司 点云数据的分割方法和装置、存储介质、电子装置
CN110471086B (zh) * 2019-09-06 2021-12-03 北京云迹科技有限公司 一种雷达测障系统及方法
CN112634181A (zh) * 2019-09-24 2021-04-09 北京百度网讯科技有限公司 用于检测地面点云点的方法和装置
CN110794392B (zh) * 2019-10-15 2024-03-19 上海创昂智能技术有限公司 车辆定位方法、装置、车辆及存储介质
CN110749895B (zh) * 2019-12-23 2020-05-05 广州赛特智能科技有限公司 一种基于激光雷达点云数据的定位方法
CN113496160B (zh) * 2020-03-20 2023-07-11 百度在线网络技术(北京)有限公司 三维物体检测方法、装置、电子设备和存储介质
CN112639822B (zh) * 2020-03-27 2021-11-30 华为技术有限公司 一种数据处理方法及装置
CN111308500B (zh) * 2020-04-07 2022-02-11 三一机器人科技有限公司 基于单线激光雷达的障碍物感知方法、装置和计算机终端
CN111583318B (zh) * 2020-05-09 2020-12-15 南京航空航天大学 一种基于翼身实测数据虚拟对接的整流蒙皮修配方法
CN111860493B (zh) * 2020-06-12 2024-02-09 北京图森智途科技有限公司 一种基于点云数据的目标检测方法及装置
CN111830526B (zh) * 2020-09-17 2020-12-29 上海驭矩信息科技有限公司 一种基于多线激光数据融合的集装箱定位方法及装置
CN112630793B (zh) * 2020-11-30 2024-05-17 深圳集智数字科技有限公司 一种确定平面异常点的方法和相关装置
CN112785596B (zh) * 2021-02-01 2022-06-10 中国铁建电气化局集团有限公司 基于dbscan聚类的点云图螺栓分割和高度测量方法
US20220292290A1 (en) * 2021-03-09 2022-09-15 Pony Ai Inc. Distributed computing network to perform simultaneous localization and mapping
CN112946612B (zh) * 2021-03-29 2024-05-17 上海商汤临港智能科技有限公司 外参标定方法、装置、电子设备及存储介质
CN113436223B (zh) * 2021-07-14 2022-05-24 北京市测绘设计研究院 点云数据的分割方法、装置、计算机设备和存储介质
CN113744323B (zh) * 2021-08-11 2023-12-19 深圳蓝因机器人科技有限公司 点云数据处理方法和装置
US11751578B2 (en) 2021-12-31 2023-09-12 Ocean Research Center Of Zhoushan, Zhejiang University Intelligent methods and devices for cutting squid white slices
CN114372227B (zh) * 2021-12-31 2023-04-14 浙江大学舟山海洋研究中心 鱿鱼白片智能切割计算方法、装置、设备及存储介质
CN115376365B (zh) * 2022-10-21 2023-01-13 北京德风新征程科技有限公司 车辆控制方法、装置、电子设备和计算机可读介质
CN115439484B (zh) * 2022-11-10 2023-03-21 苏州挚途科技有限公司 基于4d点云的检测方法、装置、存储介质及处理器
CN115953604B (zh) * 2023-03-13 2023-05-30 泰安市金土地测绘整理有限公司 一种不动产地理信息测绘数据采集方法
CN116110046B (zh) * 2023-04-11 2023-06-23 北京五一视界数字孪生科技股份有限公司 一种数据流形实例的确定方法、装置及设备
CN117894015B (zh) * 2024-03-15 2024-05-24 浙江华是科技股份有限公司 点云标注数据优选方法及系统

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101702200A (zh) * 2009-11-03 2010-05-05 武汉大学 一种机载激光雷达点云数据的自动分类方法
CN105046710A (zh) * 2015-07-23 2015-11-11 北京林业大学 基于深度图分割与代理几何体的虚实碰撞交互方法及装置
WO2016068869A1 (en) * 2014-10-28 2016-05-06 Hewlett-Packard Development Company, L.P. Three dimensional object recognition
CN108010116A (zh) * 2017-11-30 2018-05-08 西南科技大学 点云特征点检测方法和点云特征提取方法
CN110148144A (zh) * 2018-08-27 2019-08-20 腾讯大地通途(北京)科技有限公司 点云数据的分割方法和装置、存储介质、电子装置

Family Cites Families (23)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8050491B2 (en) * 2003-12-17 2011-11-01 United Technologies Corporation CAD modeling system and method
JP5199992B2 (ja) * 2009-12-28 2013-05-15 シャープ株式会社 画像処理装置
CN103098100B (zh) * 2010-12-03 2016-01-20 中国科学院自动化研究所 基于感知信息的三维模型形状分析方法
CN103186704A (zh) * 2011-12-29 2013-07-03 鸿富锦精密工业(深圳)有限公司 寻线过滤系统及方法
US9025861B2 (en) * 2013-04-09 2015-05-05 Google Inc. System and method for floorplan reconstruction and three-dimensional modeling
US9811714B2 (en) * 2013-08-28 2017-11-07 Autodesk, Inc. Building datum extraction from laser scanning data
US9704055B2 (en) * 2013-11-07 2017-07-11 Autodesk, Inc. Occlusion render mechanism for point clouds
US9436987B2 (en) * 2014-04-30 2016-09-06 Seiko Epson Corporation Geodesic distance based primitive segmentation and fitting for 3D modeling of non-rigid objects from 2D images
CN104143194B (zh) * 2014-08-20 2017-09-08 清华大学 一种点云分割方法及装置
GB2532948B (en) * 2014-12-02 2021-04-14 Vivo Mobile Communication Co Ltd Object Recognition in a 3D scene
CN105260988B (zh) * 2015-09-09 2019-04-05 百度在线网络技术(北京)有限公司 一种高精地图数据的处理方法和装置
CN106996795B (zh) * 2016-01-22 2019-08-09 腾讯科技(深圳)有限公司 一种车载激光外参标定方法和装置
CN105701478B (zh) * 2016-02-24 2019-03-26 腾讯科技(深圳)有限公司 杆状地物提取的方法和装置
KR101818189B1 (ko) * 2016-06-30 2018-01-15 성균관대학교산학협력단 영상 처리 장치 및 영상 처리 방법
CN106291506A (zh) * 2016-08-16 2017-01-04 长春理工大学 基于单线点云数据机器学习的车辆目标识别方法及装置
US10031231B2 (en) * 2016-09-12 2018-07-24 Delphi Technologies, Inc. Lidar object detection system for automated vehicles
CN106548479B (zh) * 2016-12-06 2019-01-18 武汉大学 一种多层次激光点云建筑物边界规则化方法
US10565787B1 (en) * 2017-01-27 2020-02-18 NHIAE Group, LLC Systems and methods for enhanced 3D modeling of a complex object
US10528851B2 (en) * 2017-11-27 2020-01-07 TuSimple System and method for drivable road surface representation generation using multimodal sensor data
CN108226894A (zh) * 2017-11-29 2018-06-29 北京数字绿土科技有限公司 一种点云数据处理方法及装置
CN108389250B (zh) * 2018-03-08 2020-05-22 武汉大学 基于点云数据快速生成建筑物断面图的方法
US10345447B1 (en) * 2018-06-27 2019-07-09 Luminar Technologies, Inc. Dynamic vision sensor to direct lidar scanning
CN111322985B (zh) * 2020-03-25 2021-04-09 南京航空航天大学 基于激光点云的隧道限界分析方法、装置和系统

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101702200A (zh) * 2009-11-03 2010-05-05 武汉大学 一种机载激光雷达点云数据的自动分类方法
WO2016068869A1 (en) * 2014-10-28 2016-05-06 Hewlett-Packard Development Company, L.P. Three dimensional object recognition
CN105046710A (zh) * 2015-07-23 2015-11-11 北京林业大学 基于深度图分割与代理几何体的虚实碰撞交互方法及装置
CN108010116A (zh) * 2017-11-30 2018-05-08 西南科技大学 点云特征点检测方法和点云特征提取方法
CN110148144A (zh) * 2018-08-27 2019-08-20 腾讯大地通途(北京)科技有限公司 点云数据的分割方法和装置、存储介质、电子装置

Cited By (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113496491A (zh) * 2020-03-19 2021-10-12 广州汽车集团股份有限公司 一种基于多线激光雷达的路面分割方法及装置
CN113496491B (zh) * 2020-03-19 2023-12-15 广州汽车集团股份有限公司 一种基于多线激光雷达的路面分割方法及装置
KR102548282B1 (ko) * 2020-06-28 2023-06-26 베이징 바이두 넷컴 사이언스 테크놀로지 컴퍼니 리미티드 고정밀도 맵 작성 방법 및 장치
US20210405200A1 (en) * 2020-06-28 2021-12-30 Beijing Baidu Netcome Science Technology Co. Ltd. High-Precision Mapping Method And Device
KR20210043518A (ko) * 2020-06-28 2021-04-21 베이징 바이두 넷컴 사이언스 테크놀로지 컴퍼니 리미티드 고정밀도 맵 작성 방법 및 장치
US11668831B2 (en) * 2020-06-28 2023-06-06 Beijing Baidu Netcom Science Technology Co., Ltd. High-precision mapping method and device
CN111832536A (zh) * 2020-07-27 2020-10-27 北京经纬恒润科技有限公司 一种车道线检测方法及装置
CN111832536B (zh) * 2020-07-27 2024-03-12 北京经纬恒润科技股份有限公司 一种车道线检测方法及装置
US11430224B2 (en) 2020-10-23 2022-08-30 Argo AI, LLC Systems and methods for camera-LiDAR fused object detection with segment filtering
US11885886B2 (en) 2020-10-23 2024-01-30 Ford Global Technologies, Llc Systems and methods for camera-LiDAR fused object detection with LiDAR-to-image detection matching
CN112767512A (zh) * 2020-12-31 2021-05-07 广州小鹏自动驾驶科技有限公司 一种环境线状元素生成方法、装置、电子设备及存储介质
CN112767512B (zh) * 2020-12-31 2024-04-19 广州小鹏自动驾驶科技有限公司 一种环境线状元素生成方法、装置、电子设备及存储介质
CN113762310A (zh) * 2021-01-26 2021-12-07 北京京东乾石科技有限公司 一种点云数据分类方法、装置、计算机存储介质及系统
CN113345025A (zh) * 2021-04-26 2021-09-03 香港理工大学深圳研究院 一种基于背包式激光雷达系统的建图和地面分割方法
CN115079126A (zh) * 2022-05-12 2022-09-20 探维科技(北京)有限公司 点云处理方法、装置、设备及存储介质
CN115079126B (zh) * 2022-05-12 2024-05-14 探维科技(北京)有限公司 点云处理方法、装置、设备及存储介质
CN114782469A (zh) * 2022-06-16 2022-07-22 西南交通大学 公共交通的拥挤度识别方法、装置、电子设备及存储介质
CN114782469B (zh) * 2022-06-16 2022-08-19 西南交通大学 公共交通的拥挤度识别方法、装置、电子设备及存储介质

Also Published As

Publication number Publication date
US11282210B2 (en) 2022-03-22
US20200410690A1 (en) 2020-12-31
CN110148144A (zh) 2019-08-20
CN110148144B (zh) 2024-02-13

Similar Documents

Publication Publication Date Title
WO2020043041A1 (zh) 点云数据的分割方法和装置、存储介质、电子装置
US10846874B2 (en) Method and apparatus for processing point cloud data and storage medium
WO2020083024A1 (zh) 障碍物的识别方法和装置、存储介质、电子装置
KR102062680B1 (ko) 레이저 포인트 클라우드 기반의 도시 도로 인식 방법, 장치, 저장 매체 및 기기
WO2021097618A1 (zh) 点云分割方法、系统及计算机存储介质
WO2020052530A1 (zh) 一种图像处理方法、装置以及相关设备
WO2022188663A1 (zh) 一种目标检测方法及装置
JP6442834B2 (ja) 路面高度形状推定方法とシステム
WO2020154990A1 (zh) 目标物体运动状态检测方法、设备及存储介质
CN111308500B (zh) 基于单线激光雷达的障碍物感知方法、装置和计算机终端
CN112753038B (zh) 识别车辆变道趋势的方法和装置
CN111640323A (zh) 一种路况信息获取方法
JP2019106034A (ja) 点群から対象を特定する対象識別装置、プログラム及び方法
Goga et al. Fusing semantic labeled camera images and 3D LiDAR data for the detection of urban curbs
CN114037966A (zh) 高精地图特征提取方法、装置、介质及电子设备
CN114841910A (zh) 车载镜头遮挡识别方法及装置
Quach et al. Real-time lane marker detection using template matching with RGB-D camera
CN116071729A (zh) 可行驶区域和路沿的检测方法、装置及相关设备
Vajak et al. A rethinking of real-time computer vision-based lane detection
WO2023216555A1 (zh) 基于双目视觉的避障方法、装置、机器人及介质
WO2020248118A1 (zh) 点云处理方法、系统、设备及存储介质
Zhao et al. Omni-Directional Obstacle Detection for Vehicles Based on Depth Camera
Schomerus et al. Camera-based lane border detection in arbitrarily structured environments
CN115359332A (zh) 基于车路协同的数据融合方法、装置、电子设备及系统
CN112513876B (zh) 一种用于地图的路面提取方法及装置

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19855833

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19855833

Country of ref document: EP

Kind code of ref document: A1