WO2022078197A1 - Point cloud segmentation method, apparatus, device and storage medium - Google Patents

Point cloud segmentation method, apparatus, device and storage medium

Info

Publication number
WO2022078197A1
WO2022078197A1 (PCT/CN2021/120919)
Authority
WO
WIPO (PCT)
Prior art keywords
point cloud
point
neural network
target grid
sample
Prior art date
Application number
PCT/CN2021/120919
Other languages
English (en)
French (fr)
Inventor
孔涛
储瑞航
李磊
Original Assignee
北京字节跳动网络技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 北京字节跳动网络技术有限公司
Priority to US18/249,205 priority Critical patent/US20230394669A1/en
Priority to JP2023521505A priority patent/JP2023545423A/ja
Publication of WO2022078197A1 publication Critical patent/WO2022078197A1/zh

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/44Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/80Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
    • G06V10/806Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level of extracted features
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/56Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/58Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10028Range image; Depth image; 3D point clouds
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30248Vehicle exterior or interior
    • G06T2207/30252Vehicle exterior; Vicinity of vehicle
    • G06T2207/30261Obstacle

Definitions

  • the present application relates to the field of computer technology, for example, to a point cloud segmentation method, apparatus, device and storage medium.
  • When performing point cloud segmentation such as instance segmentation, a "top-down" segmentation method is usually used.
  • the "top-down" approach first predicts multiple 3D bounding boxes to represent different instances, and then finds the points belonging to the corresponding instance within each bounding box through binary classification.
  • however, this method needs to remove a large number of redundant bounding boxes, resulting in low point cloud segmentation efficiency; at the same time, the classification quality of the point cloud also depends on the accuracy of the bounding-box prediction in the previous stage.
  • the present application provides a point cloud segmentation method, device, equipment and storage medium to solve the technical problems of low point cloud segmentation efficiency and low point cloud segmentation accuracy.
  • a point cloud segmentation method including:
  • acquiring the point cloud to be processed; obtaining, through a pre-trained neural network, the target grid corresponding to each point in the point cloud within the gridded scene space to which the point cloud belongs, wherein the pre-trained neural network is trained using a sample point cloud and the sample target grids corresponding to the sample point cloud in a sample gridded scene space;
  • outputting the point cloud corresponding to each instance according to the instance category corresponding to the target grid, where points classified into the same target grid share the same instance category.
  • a point cloud segmentation device comprising:
  • the first acquisition module is configured to acquire the point cloud to be processed;
  • the prediction module is configured to obtain, through a pre-trained neural network, the target grid corresponding to each point in the point cloud within the gridded scene space to which the point cloud belongs, wherein the pre-trained neural network is trained using a sample point cloud and the sample target grids corresponding to the sample point cloud in a sample gridded scene space;
  • the output module is configured to output the point cloud corresponding to each instance according to the instance category corresponding to the target grid, where points classified into the same target grid share the same instance category.
  • An electronic device including a memory and a processor, wherein the memory stores a computer program, and when the processor executes the computer program, the above-mentioned point cloud segmentation method provided by the embodiment of the present application is implemented.
  • a computer-readable storage medium storing a computer program which, when executed by a processor, implements the above-mentioned point cloud segmentation method provided by the embodiments of the present application.
  • FIG. 1 is an application environment diagram of a point cloud segmentation method provided by an embodiment of the present application
  • FIG. 2 is a schematic flowchart of a point cloud segmentation method provided by an embodiment of the present application
  • FIG. 3 is a schematic flowchart of another point cloud segmentation method provided by an embodiment of the present application.
  • FIG. 4 is a schematic diagram of the principle of a point cloud segmentation process provided by an embodiment of the present application.
  • FIG. 5 is a schematic diagram of the principle of another point cloud segmentation process provided by an embodiment of the present application.
  • FIG. 6 is a schematic diagram of a comparison of feature space distance distributions provided by an embodiment of the present application.
  • FIG. 7 is a schematic structural diagram of a point cloud segmentation device according to an embodiment of the present application.
  • FIG. 8 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
  • method embodiments of the present disclosure may be performed in different orders and/or in parallel. Furthermore, method embodiments may include additional steps and/or omit performing the illustrated steps. The scope of the present disclosure is not limited in this regard.
  • the term "including" and variations thereof are open-ended inclusions, i.e., "including but not limited to".
  • the term “based on” is “based at least in part on.”
  • the term “one embodiment” means “at least one embodiment”; the term “another embodiment” means “at least one additional embodiment”; the term “some embodiments” means “at least some embodiments”. Relevant definitions of other terms will be given in the description below.
  • FIG. 1 is an application environment diagram of a point cloud segmentation method provided by an embodiment of the present application.
  • the point cloud segmentation method is applied to a point cloud segmentation system.
  • the point cloud segmentation system may include a terminal 101 and a server 102 .
  • the terminal 101 and the server 102 are connected through a wireless network or a wired network.
  • the terminal 101 may be a desktop terminal or a mobile terminal; optionally, the mobile terminal may be at least one of a personal digital assistant (PDA), a tablet computer (Portable Android Device, PAD), a portable multimedia player (PMP), a vehicle-mounted terminal (such as a vehicle-mounted navigation terminal) and a mobile phone.
  • the server 102 may be an independent server or a server cluster composed of multiple servers.
  • the point cloud segmentation method in this embodiment of the present application may be independently executed by the terminal 101 or the server 102, or may be executed jointly by the terminal 101 and the server 102.
  • the following method embodiments are described by taking the execution subject being an electronic device (the electronic device being the terminal 101 and/or the server 102 ) as an example.
  • An instance is a specific object of a class; a specific object can be considered as an instance, and an instance category refers to the category of a specific object.
  • Instance segmentation divides a point cloud into one or more non-overlapping point groups, each belonging to a specific object, with one point group corresponding to one instance. For example, in a two-dimensional image or a three-dimensional scene, it distinguishes which points belong to one specific object and which points belong to another.
  • FIG. 2 is a schematic flowchart of a point cloud segmentation method provided by an embodiment of the present application. This embodiment relates to a process of how an electronic device implements point cloud segmentation through single-stage point-by-point classification. As shown in Figure 2, the method may include the following steps.
  • the point cloud to be processed refers to the point cloud that needs instance segmentation.
  • the point cloud in this embodiment of the present application may include a two-dimensional point cloud or a three-dimensional point cloud, or the like.
  • the two-dimensional point cloud may be a collection of multiple pixel points in a two-dimensional image
  • the three-dimensional point cloud may be a collection of multiple three-dimensional points in a three-dimensional scene.
  • the point cloud segmentation method provided by the embodiments of the present application has broad application prospects, and has great potential in the fields of automatic driving, robot control, and augmented reality.
  • Electronic devices can acquire point clouds to be processed from front-end scanning devices in various application fields.
  • the front-end scanning device can also upload the scanned point cloud to the cloud, and the electronic device downloads the point cloud that needs instance segmentation from the cloud.
  • the pre-trained neural network is obtained by training the sample point cloud and the sample target grid corresponding to the sample point cloud in the sample gridded scene space.
  • the scene space to which the point cloud belongs can be meshed in advance, with different grids defined to correspond to different instance categories.
  • the scene space to which the point cloud belongs can be divided into multiple grids in advance (e.g., into N_s × N_s × N_s grids), where each grid occupies a specific position and represents an instance category. If the center point of an object instance (the coordinates of the center point are the average of the coordinates of all points in the object instance) is located inside a grid, then all points belonging to the object instance are classified into that grid, that is, into the instance category corresponding to the grid.
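The gridding rule above (instance center decides the grid, all of the instance's points follow it) can be sketched as follows. This is a minimal numpy sketch: the cubic scene bounds, the value of N_s, and the function name are illustrative assumptions, not taken from the application.

```python
import numpy as np

def target_grid_index(points, scene_min, scene_max, n_s=16):
    """Return the flat grid index for the instance formed by `points`.

    The scene space is divided into n_s * n_s * n_s grids; the instance's
    center (mean of its point coordinates) decides which grid all of its
    points are assigned to. n_s = 16 is an assumed value.
    """
    center = points.mean(axis=0)                        # instance center point
    cell = (scene_max - scene_min) / n_s                # grid cell size per axis
    idx = np.floor((center - scene_min) / cell).astype(int)
    idx = np.clip(idx, 0, n_s - 1)                      # keep inside the scene
    # flatten (ix, iy, iz) into one of the n_s**3 instance categories
    return int(idx[0] * n_s * n_s + idx[1] * n_s + idx[2])
```

Every point of the instance then receives this single index as its classification target.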
  • the point-by-point classification process of the above point cloud can be achieved by a pre-trained neural network; therefore, a large amount of training data needs to be used to train this neural network.
  • training can be performed through a large number of sample point clouds and sample target grids corresponding to the sample point clouds in the sample gridded scene space.
  • the sample scene space can be gridded to obtain the sample gridded scene space, and at the same time, the instance category corresponding to each grid in the sample gridded scene space is defined, and different grids correspond to different instance categories.
  • collect the sample point cloud and mark the sample target grid corresponding to each point of the sample point cloud in the gridded scene space, so as to obtain the training data of the pre-trained neural network (that is, the sample point cloud and the sample target grids corresponding to the sample point cloud in the sample gridded scene space), and use this training data to train the neural network.
  • the electronic device can input the point cloud to be processed into the pre-trained neural network, and use the pre-trained neural network to predict where each point in the point cloud belongs to Meshes the corresponding target mesh in scene space.
  • the electronic device can also combine the coordinate features and channel features of each point in the point cloud to obtain the initial features, and then extract the local features of each point from the initial features through a feature extraction network.
  • the coordinate feature of the point may be the coordinate position of the point.
  • the channel feature of a point can be a channel value of the point, such as a color channel value (a Red-Green-Blue (RGB) value). Combining the coordinate feature and the channel feature can be done by splicing each coordinate position and each channel value to obtain the initial feature of each point.
  • the initial feature here can be in matrix form, that is, the initial feature matrix.
  • the electronic device inputs the initial feature matrix to the feature extraction network to extract the local features of each point.
  • the local feature here can be a high-dimensional expression of the characteristics of the point cloud, and the local feature covers the information of the entire point cloud. Local features can also be in matrix form, i.e., a local feature matrix.
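The splicing step described above is a point-wise concatenation of coordinates and channel values; a minimal sketch, with the function name assumed:

```python
import numpy as np

def initial_features(xyz, rgb):
    """Splice coordinate features and channel features point by point.

    xyz: (N_p, 3) coordinate positions; rgb: (N_p, 3) color channel values.
    Returns the (N_p, 6) initial feature matrix that a feature extraction
    network (e.g. PointNet) would then lift to an (N_p, N_l) local
    feature matrix.
    """
    return np.concatenate([xyz, rgb], axis=1)
```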
  • the electronic device inputs the local feature matrix to the pre-trained neural network, projects the local feature matrix to the feature space for instance classification, and outputs the target grid corresponding to each point in the gridded scene space to which it belongs.
  • the electronic device may select a deep learning network as the feature extraction network.
  • the feature extraction network can be the first half of the PointNet network, the first half of the PointNet++ network, the first half of PointConv, or another network structure.
  • in this way, the target grid corresponding to each point in the point cloud in the gridded scene space to which it belongs is obtained.
  • the electronic device can divide each point in the point cloud into a corresponding object instance based on the instance category corresponding to the target grid, and output the point cloud corresponding to each object instance.
  • after acquiring the point cloud to be processed, the electronic device obtains, through a pre-trained neural network, the target grid corresponding to each point in the point cloud in the gridded scene space to which it belongs, and then outputs the point cloud corresponding to each instance according to the instance category corresponding to the target grid.
  • the single-stage point-by-point classification method directly classifies each point in the point cloud into a clear and specific instance category, which avoids the accumulation of errors caused by multiple stages and improves the accuracy of the segmentation results. At the same time, it also avoids the computational loss caused by removing a large number of redundant bounding boxes in the first stage, and improves the efficiency of point cloud segmentation.
  • the method may further include:
  • a confidence level can be calculated for each grid: the higher the confidence level of a grid, the greater the probability that the grid corresponds to a real object instance; conversely, the lower the confidence level, the smaller that probability.
  • the electronic device can separately obtain the credibility of each target grid.
  • the electronic device may obtain the credibility of each target grid in the multiple target grids through the following process, and the above S301 may include:
  • the features of the multiple relevant points closest to the center point of the target grid can characterize the target grid; therefore, the electronic device can select the multiple relevant points closest to the center point of the target grid to calculate the credibility of the corresponding target grid. In practical applications, the number of selected relevant points can be set according to actual requirements; optionally, the 32 points closest to the center point can be selected to participate in the credibility calculation.
  • aggregating the features of the multiple related points can be done by mean pooling, that is, adding the feature data of the multiple related points and computing the mean, thereby obtaining the aggregated feature corresponding to the target grid.
  • the electronic device can use the sigmoid activation function to activate the aggregated features, so as to obtain the credibility of each target grid.
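The credibility computation described above (mean-pool the features of the k points nearest the grid center, then squash with sigmoid) can be sketched as follows. The final scalar reduction here is a plain mean standing in for the network's learned scoring layer, and all function and parameter names are assumptions:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def grid_credibility(point_xyz, point_feats, grid_center, k=32):
    """Estimate one grid's credibility from the k points nearest its center.

    point_xyz: (N_p, 3) coordinates; point_feats: (N_p, N_l) local features.
    Mean-pools (aggregates) the features of the k nearest related points,
    reduces the pooled vector to a scalar, and activates it with sigmoid.
    """
    d = np.linalg.norm(point_xyz - grid_center, axis=1)
    nearest = np.argsort(d)[:k]                  # k closest related points
    pooled = point_feats[nearest].mean(axis=0)   # mean pooling of features
    score = pooled.mean()                        # stand-in for a learned scorer
    return float(sigmoid(score))
```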
  • the electronic device may add a branch to the above-mentioned pre-trained neural network to predict the reliability of each grid in the gridded scene space to which the point cloud belongs.
  • the electronic device can directly obtain the reliability of each target grid in the multiple target grids from the above-mentioned pre-trained neural network.
  • when training the above-mentioned pre-trained neural network, the training data also needs to include the actual credibility of each grid obtained from the sample point cloud, and the network is trained in combination with the actual credibility of each grid.
  • the electronic device may use the target grid with the highest reliability as the final target grid corresponding to the target point.
  • when there are multiple target grids sharing the highest credibility, one target grid can be selected arbitrarily as the final target grid corresponding to the target point.
  • the electronic device may determine the final target grid corresponding to the target point based on the credibility of each target grid. Moreover, when determining the credibility of each target grid, the features of the multiple related points closest to the center point of the target grid are combined, which improves the accuracy of the credibility calculation. Based on accurate credibility results, the target points can be correctly classified into the corresponding target grids, and hence into the corresponding instance categories, improving the accuracy of the point cloud segmentation results.
  • the output channels of the pre-trained neural network correspond one-to-one with the grids in the gridded scene space.
  • the output channels of the pre-trained neural network can be pre-matched to the grids included in the gridded scene space.
  • the pre-trained neural network may be implemented by two layers of perceptrons, that is, the pre-trained neural network includes two layers of perceptrons.
  • the above S202 may be: obtaining a target grid corresponding to each point in the point cloud in the gridded scene space to which it belongs by using a two-layer perceptron in a pre-trained neural network.
  • the point cloud P includes N_p points, and the scene space to which the point cloud belongs is divided into N_s × N_s × N_s grids in advance; each grid corresponds to a different instance category, and in this embodiment the grids correspond one-to-one with the output channels of the pre-trained neural network.
  • the electronic device combines the coordinate feature and channel feature of each point in the point cloud to obtain the initial features of the corresponding points, and extracts the local features of the corresponding points from the initial features through a feature extraction network (such as PointNet) to obtain the local feature matrix F_l.
  • the electronic device inputs the local feature matrix F_l into the above-mentioned pre-trained neural network, projects F_l into the feature space for classification through a multi-layer perceptron (for example, a two-layer perceptron whose output dimension equals the number of grids, e.g. of shape (32, N_s³)), outputs a feature matrix F, and uses the sigmoid activation function to scale the element values in F to the interval (0, 1), obtaining the prediction matrix F′ of the point cloud.
  • the target grid corresponding to each point is obtained.
  • N_p is the number of points in the point cloud
  • N_l is the local feature dimension
  • N_c is the categorical feature dimension
  • R denotes the real number space in which the matrices are expressed.
  • the row dimension of the prediction matrix F' represents multiple points in the point cloud
  • the column dimension represents multiple grids in the gridded scene to which the point cloud belongs.
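The classification head just described can be sketched as a two-layer perceptron that maps the local feature matrix to one sigmoid-activated output channel per grid, with the per-point target grid read off as the arg-max over channels. The ReLU hidden activation and all names are assumptions:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def classify_points(local_feats, w1, b1, w2, b2):
    """Two-layer perceptron head for single-stage point-wise classification.

    local_feats: (N_p, N_l) local feature matrix F_l;
    w1: (N_l, 32), w2: (32, N_s**3) weight matrices (shapes assumed).
    Returns the prediction matrix F' with values in (0, 1), whose rows are
    points and columns are grids, plus the arg-max target grid per point.
    """
    h = np.maximum(local_feats @ w1 + b1, 0.0)   # hidden layer (ReLU assumed)
    f = h @ w2 + b2                              # feature matrix F
    f_prime = sigmoid(f)                         # scale elements into (0, 1)
    return f_prime, f_prime.argmax(axis=1)       # target grid per point
```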
  • the feature vector of the output space of the pre-trained neural network (an N_s³-dimensional vector) is too sparse.
  • another pre-trained neural network is also provided.
  • the pretrained neural network includes output channels in the x-axis, y-axis, and z-axis directions.
  • the pre-trained neural network can be implemented by three independent three-layer perceptrons, that is, the pre-trained neural network includes three-layer perceptrons corresponding to the x-axis, y-axis and z-axis output channels.
  • the above S202 may include:
  • the electronic device can obtain the projected positions of each point in the point cloud in the x-axis, y-axis and z-axis directions through the three-layer perceptrons corresponding to the x-axis, y-axis and z-axis output channels in the pre-trained neural network, respectively.
  • the electronic device determines the corresponding target grid of each point in the gridded scene space to which it belongs by predicting the orthogonal projections of each point in the point cloud in the three directions of the x-axis, the y-axis and the z-axis.
  • the predicted orthogonal projections on the x-axis, y-axis and z-axis are denoted a_x, a_y and a_z, respectively.
  • the point cloud P includes N_p points, and the scene space to which the point cloud belongs is divided into N_s × N_s × N_s grids in advance, with each grid corresponding to a different instance category;
  • the output channel of the pre-trained neural network in the x-axis direction corresponds to the projection of the grid in the x-axis direction;
  • the output channel of the pre-trained neural network in the y-axis direction corresponds to the projection of the grid in the y-axis direction;
  • the output channel of the pre-trained neural network in the z-axis direction corresponds to the projection of the grid in the z-axis direction.
  • the electronic device combines the coordinate feature and channel feature of each point in the point cloud to obtain the initial features of the corresponding points, and extracts the local features of the corresponding points from the initial features through a feature extraction network (such as PointNet) to obtain the local feature matrix F_l.
  • the electronic device inputs the local feature matrix F_l into the pre-trained neural network and uses three independent multi-layer perceptrons (for example, three-layer perceptrons of shape (32, 32, N_s)) to project F_l into the feature space for classification; the sigmoid activation function is then used for activation processing to obtain the prediction matrix F_x of the point cloud in the x-axis direction, the prediction matrix F_y in the y-axis direction, and the prediction matrix F_z in the z-axis direction.
  • the electronic device obtains the target grid corresponding to each point in the point cloud based on the prediction matrices F_x, F_y and F_z.
  • the dimension of the output space of the pre-trained neural network is 3N_s; compared with the N_s³-dimensional output space of the previous embodiment, the amount of calculation in the point-by-point classification process is reduced, and memory consumption is reduced.
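One way the three per-axis prediction matrices could be recombined into a single grid label per point is sketched below. The arg-max recombination is an illustrative assumption; the application states only that the target grid is obtained from F_x, F_y and F_z:

```python
import numpy as np

def combine_axis_predictions(f_x, f_y, f_z, n_s):
    """Combine per-axis prediction matrices into one grid index per point.

    f_x, f_y, f_z: (N_p, N_s) sigmoid-activated scores over the grid's
    projections along each axis. Taking the arg-max per axis and
    re-composing the three indices recovers an N_s**3 grid labelling
    while the network itself only needs 3 * N_s output channels.
    """
    ix = f_x.argmax(axis=1)
    iy = f_y.argmax(axis=1)
    iz = f_z.argmax(axis=1)
    return ix * n_s * n_s + iy * n_s + iz    # flat grid index per point
```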
  • the electronic device can use pre-trained neural networks with different network architectures to predict the target grid corresponding to each point in the point cloud in the gridded scene space to which it belongs, which diversifies the available point cloud segmentation methods.
  • the pre-trained neural network can include output channels in the x-axis, y-axis and z-axis directions; using a pre-trained neural network with output channels in three directions to predict the target grid corresponding to each point can reduce the amount of calculation in the point-by-point classification process and reduce memory consumption.
  • an acquisition process of the above-mentioned pre-trained neural network is also provided.
  • the acquisition process of the pre-trained neural network may be as follows: the sample point cloud is used as the first input of the pre-trained neural network, the sample target grids corresponding to the sample point cloud in the sample gridded scene space are used as the first expected output corresponding to the first input, the actual credibility of each grid in the sample gridded scene space is used as the second expected output corresponding to the first input, and the pre-trained neural network is trained by using a cross-entropy loss function.
  • the sample scene space can be gridded to obtain the sample gridded scene space, and at the same time, the instance category corresponding to each grid in the sample gridded scene space is defined, and different grids correspond to different instance categories.
  • the sample point cloud is obtained, and the sample target grid corresponding to each point in the sample point cloud in the gridded scene space is marked to obtain the sample target grid corresponding to the sample point cloud.
  • the actual reliability of each grid can also be calculated based on the sample point cloud and the location information of each grid.
  • the above-mentioned sample point cloud, the sample target grid corresponding to the sample point cloud, and the actual reliability of each grid are used as training data to train the pre-trained neural network.
  • the above process of calculating the actual credibility of each grid can be: for each grid, select the multiple sample-related points in the sample point cloud that are closest to the center point of the grid, aggregate the features of these sample-related points to obtain the aggregated feature of the grid, and activate the aggregated feature to obtain the actual credibility of the corresponding grid.
  • 32 points closest to the center point can be selected to participate in the calculation process of the credibility.
  • the electronic device uses the sample point cloud as the first input of the pre-trained neural network, takes the sample target grids corresponding to the sample point cloud as the first expected output corresponding to the first input, and calculates the first loss value of the loss function; it takes the actual credibility of each grid as the second expected output corresponding to the first input and calculates the second loss value; based on the weighted sum of the first loss value and the second loss value, the parameters of the pre-trained neural network are adjusted until the convergence condition of the loss function is reached, thereby obtaining the trained neural network.
  • the calculation formula of the above-mentioned first loss value L_cate may be L_cate = (1/N_pos) · Σ_j 𝟙_j · D(F_:,j, G_:,j), where N_pos is the number of credible grids (the credible grids here can be understood as grids whose credibility is greater than the preset threshold); 𝟙_j is the indicator corresponding to the j-th column of the matrix, taking the value 1 if the grid corresponding to the j-th column is a positive sample and 0 otherwise; F_ij represents the element in the i-th row and j-th column of the prediction matrix F, and G_ij represents the element in the i-th row and j-th column of the sample matrix G (the sample matrix G represents the correspondence between the sample point cloud and the sample target grids); the Dice Loss is used to compute the distance D(·,·) between matrix columns.
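Under the formula above, the first loss value can be sketched as a Dice distance averaged over the positive (credible) grid columns. The exact Dice variant, the epsilon term, and the function names are assumptions:

```python
import numpy as np

def dice_distance(f_col, g_col, eps=1e-6):
    """Dice distance D(.) between a predicted column and its target column."""
    inter = (f_col * g_col).sum()
    return 1.0 - 2.0 * inter / ((f_col ** 2).sum() + (g_col ** 2).sum() + eps)

def l_cate(f, g, positive):
    """First loss value: mean Dice distance over the positive grid columns.

    f, g: (N_p, N_grid) prediction matrix F and sample matrix G;
    `positive` is the per-column indicator of whether that grid is a
    positive sample.
    """
    n_pos = max(int(positive.sum()), 1)
    total = sum(dice_distance(f[:, j], g[:, j])
                for j in range(f.shape[1]) if positive[j])
    return total / n_pos
```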
  • the electronic device may use a large number of sample point clouds, the sample target grids corresponding to the sample point clouds in the sample gridded scene space, and the actual credibility of each grid as training data, and use cross entropy as the loss function to train the pre-trained neural network, so that the trained network is more accurate. Based on an accurate pre-trained neural network, the target grid corresponding to each point of the point cloud to be processed in the gridded scene space to which it belongs can be predicted directly, which improves the accuracy of the prediction results and of the point cloud segmentation results.
  • the technical solutions provided by the embodiments of the present application can be widely applied to fields such as automatic driving, robot control, augmented reality, and video instance segmentation.
  • for example, in indoor robot control, segmenting the scanned point cloud enables the robot to accurately perceive each object, empowering the robot's navigation and control.
  • the technical solutions provided by the embodiments of the present application are compared with point cloud segmentation methods (eg, ASIS).
  • referring to FIG. 6, it can be seen that the point cloud segmentation method provided by the embodiment of the present application makes the overlap between the feature distances within the same instance and the feature distances between different instances smaller, so that different instances are distinguished to a higher degree.
  • FIG. 7 is a schematic structural diagram of a point cloud segmentation device according to an embodiment of the present application.
  • the apparatus may include: a first acquisition module 701 , a prediction module 702 and an output module 703 .
  • the first acquisition module 701 is configured to acquire the point cloud to be processed
  • the prediction module 702 is configured to obtain, through a pre-trained neural network, the target grid corresponding to each point in the point cloud in the gridded scene space to which it belongs, wherein the pre-trained neural network is trained with sample point clouds and the sample target grids corresponding to the sample point clouds in the sample gridded scene space;
  • the output module 703 is configured to output the point cloud corresponding to each instance according to the instance category corresponding to the target grid, where the same target grid has the same instance category.
  • after acquiring the point cloud to be processed, the electronic device obtains, through a pre-trained neural network, the target grid corresponding to each point in the point cloud in the gridded scene space to which it belongs, and then outputs the point cloud corresponding to each instance according to the instance category corresponding to the target grid.
  • the single-stage point-by-point classification directly classifies each point in the point cloud into a clear and specific instance category, which avoids the error accumulation caused by multiple stages and improves the accuracy of the segmentation results. At the same time, it avoids the computational overhead of removing a large number of redundant bounding boxes in a first stage, improving the efficiency of point cloud segmentation.
  • the apparatus may further include: a second acquiring module and a determining module.
  • the second acquisition module is configured to acquire, before the output module 703 outputs the point cloud corresponding to each instance according to the instance category corresponding to the target grid, the credibility of each target grid in the multiple target grids;
  • the determining module is configured to determine the final target grid corresponding to the target point according to the reliability.
  • the second acquisition module is configured to: for each target grid, select from the point cloud a plurality of relevant points closest to the center point of the target grid; aggregate the features of the plurality of relevant points to obtain an aggregated feature of each target grid; and apply activation processing to the aggregated feature to obtain the credibility of each target grid.
  • the output channels of the pre-trained neural network correspond one-to-one with the grids in the gridded scene space.
  • the pre-trained neural network includes output channels in the directions of the x-axis, the y-axis, and the z-axis;
  • the prediction module 702 is configured to obtain, through the pre-trained neural network, the projection position of each point in the point cloud in the x-axis, y-axis, and z-axis directions, and to determine, according to the projection positions in the three directions, the target grid corresponding to each point in the gridded scene space to which it belongs.
  • the apparatus may further include: a network training module;
  • the network training module is configured to take the sample point cloud as the first input of the pre-trained neural network, take the sample target grid corresponding to the sample point cloud in the sample gridded scene space as the first expected output corresponding to the first input, take the actual credibility of each grid in the sample gridded scene space as the second expected output corresponding to the first input, and train the pre-trained neural network with a cross-entropy loss function.
  • the apparatus may further include: a combination module and a feature extraction module;
  • the combining module is configured to combine, after the first acquisition module 701 acquires the point cloud to be processed, the coordinate feature and the channel feature of each point in the point cloud to obtain the initial feature of each point;
  • the feature extraction module is configured to extract local features of each point from the initial features of each point through a feature extraction network.
  • FIG. 8 shows a schematic structural diagram of an electronic device (e.g., the terminal or server in FIG. 1) 800 suitable for implementing an embodiment of the present disclosure.
  • the electronic device shown in FIG. 8 is only an example, and should not impose any limitation on the function and scope of use of the embodiments of the present disclosure.
  • the electronic device 800 may include a processing device (such as a central processing unit or a graphics processor) 801, which can perform various appropriate actions and processes according to a program stored in a read-only memory (Read-Only Memory, ROM) 802 or a program loaded from a storage device 808 into a random access memory (RAM) 803. The RAM 803 also stores various programs and data required for the operation of the electronic device 800.
  • the processing device 801, the ROM 802, and the RAM 803 are connected to each other through a bus 804.
  • An Input/Output (I/O) interface 805 is also connected to the bus 804.
  • the following devices can be connected to the I/O interface 805: an input device 806 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; an output device 807 including, for example, a Liquid Crystal Display (LCD), speaker, vibrator, etc.; a storage device 808 including, for example, magnetic tape, hard disk, etc.; and a communication device 809.
  • Communication means 809 may allow electronic device 800 to communicate wirelessly or by wire with other devices to exchange data.
  • FIG. 8 shows an electronic device 800 having various means, it is not required to implement or have all of the illustrated means. More or fewer devices may alternatively be implemented or provided.
  • embodiments of the present disclosure include a computer program product comprising a computer program carried on a non-transitory computer readable medium, the computer program containing program code for performing the method illustrated in the flowchart.
  • the computer program may be downloaded and installed from the network via the communication device 809, or from the storage device 808, or from the ROM 802.
  • when the computer program is executed by the processing device 801, the above-mentioned functions defined in the methods of the embodiments of the present disclosure are executed.
  • the computer-readable medium described above in the present disclosure may be a computer-readable signal medium or a computer-readable storage medium, or any combination of the two.
  • the storage medium may be a non-transitory storage medium.
  • the computer-readable storage medium can be, for example, but not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus or device, or a combination of any of the above.
  • Computer readable storage media may include, but are not limited to: electrical connections with one or more wires, portable computer disks, hard disks, RAM, ROM, Erasable Programmable Read-Only Memory (EPROM), Flash memory, optical fiber, portable Compact Disc Read-Only Memory (CD-ROM), optical storage device, magnetic storage device, or any suitable combination of the above.
  • a computer-readable storage medium may be any tangible medium that contains or stores a program that can be used by or in conjunction with an instruction execution system, apparatus, or device.
  • a computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave with computer-readable program code embodied thereon. Such propagated data signals may take a variety of forms, including but not limited to electromagnetic signals, optical signals, or any suitable combination of the foregoing.
  • a computer-readable signal medium can also be any computer-readable medium other than a computer-readable storage medium that can transmit, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device.
  • the program code embodied on the computer-readable medium may be transmitted by any suitable medium, including but not limited to: electric wire, optical fiber cable, radio frequency (RF), etc., or any suitable combination of the above.
  • clients and servers can communicate using any currently known or future developed network protocol, such as HyperText Transfer Protocol (HTTP), and can be interconnected with digital data communication (e.g., a communication network) in any form or medium.
  • Examples of communication networks include Local Area Networks (LANs), Wide Area Networks (WANs), internetworks (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future developed network.
  • the above-mentioned computer-readable medium may be included in the above-mentioned electronic device; or may exist alone without being assembled into the electronic device.
  • the above-mentioned computer-readable medium carries one or more programs, and when the one or more programs are executed by the electronic device, the electronic device: acquires the point cloud to be processed; obtains, through a pre-trained neural network, the target grid corresponding to each point in the point cloud in the gridded scene space to which it belongs, wherein the pre-trained neural network is trained with sample point clouds and the sample target grids corresponding to the sample point clouds in the sample gridded scene space; and outputs, according to the instance category corresponding to the target grid, the point cloud corresponding to each instance, where the same target grid has the same instance category.
  • Computer program code for performing operations of the present disclosure may be written in one or more programming languages or combinations thereof, including but not limited to object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages.
  • the program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server.
  • the remote computer may be connected to the user computer through any kind of network, including a LAN or WAN, or may be connected to an external computer (eg, using an Internet service provider to connect through the Internet).
  • each block in the flowchart or block diagrams may represent a module, program segment, or portion of code that contains one or more executable instructions for implementing the specified logical functions.
  • the functions noted in the blocks may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved.
  • each block of the block diagrams and/or flowchart illustrations, and combinations of blocks in the block diagrams and/or flowchart illustrations, can be implemented in dedicated hardware-based systems that perform the specified functions or operations, or in a combination of dedicated hardware and computer instructions.
  • the units involved in the embodiments of the present disclosure may be implemented in a software manner, and may also be implemented in a hardware manner.
  • in some cases, the name of a unit does not constitute a limitation of the unit itself; for example, the first acquisition unit may also be described as "a unit that acquires at least two Internet Protocol addresses".
  • exemplary types of hardware logic components that may be used include: Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), Systems on Chip (SOCs), Complex Programmable Logic Devices (CPLDs), and so on.
  • a machine-readable medium may be a tangible medium that may contain or store a program for use by or in connection with the instruction execution system, apparatus or device.
  • the machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium.
  • Machine-readable media may include, but are not limited to, electronic, magnetic, optical, electromagnetic, infrared, or semiconductor systems, apparatuses, or devices, or any suitable combination of the foregoing.
  • Machine-readable storage media include electrical connections based on one or more wires, portable computer disks, hard disks, RAM, ROM, EPROM, flash memory, optical fibers, portable CD-ROMs, optical storage devices, magnetic storage devices, or any suitable combination of the above.
  • an electronic device is provided, comprising a memory and a processor, the memory storing a computer program, and the processor implementing the following steps when executing the computer program: acquiring the point cloud to be processed; obtaining, through a pre-trained neural network, the target grid corresponding to each point in the point cloud in the gridded scene space to which it belongs, wherein the pre-trained neural network is trained with sample point clouds and the sample target grids corresponding to the sample point clouds in the sample gridded scene space; and outputting, according to the instance category corresponding to the target grid, the point cloud corresponding to each instance, where the same target grid has the same instance category.
  • after acquiring the point cloud to be processed, the electronic device obtains, through a pre-trained neural network, the target grid corresponding to each point in the point cloud in the gridded scene space to which it belongs, and then outputs the point cloud corresponding to each instance according to the instance category corresponding to the target grid.
  • the single-stage point-by-point classification directly classifies each point in the point cloud into a clear and specific instance category, which avoids the error accumulation caused by multiple stages and improves the accuracy of the segmentation results. At the same time, it avoids the computational overhead of removing a large number of redundant bounding boxes in a first stage, improving the efficiency of point cloud segmentation.
  • in one embodiment, when the same target point corresponds to multiple target grids, the processor further implements the following steps when executing the computer program: acquiring the credibility of each target grid in the multiple target grids; and determining, according to the credibility, the final target grid corresponding to the target point.
  • the processor further implements the following steps when executing the computer program: for each target grid, selecting from the point cloud a plurality of relevant points closest to the center point of the target grid; aggregating the features of the plurality of relevant points to obtain an aggregated feature of each target grid; and applying activation processing to the aggregated feature to obtain the credibility of each target grid.
  • the output channels of the pre-trained neural network correspond one-to-one with the grids in the gridded scene space.
  • the pre-trained neural network includes a two-layer perceptron; the processor further implements the following step when executing the computer program: obtaining, through the two-layer perceptron in the pre-trained neural network, the target grid corresponding to each point in the point cloud in the gridded scene space to which it belongs.
  • the pre-trained neural network includes output channels in the x-axis, y-axis, and z-axis directions; the processor further implements the following steps when executing the computer program: obtaining, through the pre-trained neural network, the projection position of each point in the point cloud in the x-axis, y-axis, and z-axis directions; and determining, according to the projection positions in the three directions, the target grid corresponding to each point in the gridded scene space to which it belongs.
  • the pre-trained neural network includes three-layer perceptrons corresponding to the x-axis, y-axis, and z-axis output channels; the processor further implements the following step when executing the computer program: obtaining, through the three-layer perceptrons in the pre-trained neural network, the projection positions of each point in the point cloud in the x-axis, y-axis, and z-axis directions, respectively.
  • the processor further implements the following steps when executing the computer program: taking the sample point cloud as the first input of the pre-trained neural network, taking the sample target grid corresponding to the sample point cloud in the sample gridded scene space as the first expected output corresponding to the first input, taking the actual credibility of each grid in the sample gridded scene space as the second expected output corresponding to the first input, and training the pre-trained neural network with a cross-entropy loss function.
  • the processor further implements the following steps when executing the computer program: combining the coordinate feature and the channel feature of each point in the point cloud to obtain the initial feature of the corresponding point; and extracting, through a feature extraction network, the local feature of the corresponding point from the initial feature of each point.
  • the point cloud segmentation apparatus, device, and storage medium provided in the above embodiments can execute the point cloud segmentation method provided in any embodiment of the present application, and have corresponding functional modules and effects for executing the method.
  • according to one or more embodiments of the present disclosure, a point cloud segmentation method is provided, including: acquiring the point cloud to be processed; obtaining, through a pre-trained neural network, the target grid corresponding to each point in the point cloud in the gridded scene space to which it belongs, wherein the pre-trained neural network is trained with sample point clouds and the sample target grids corresponding to the sample point clouds in the sample gridded scene space; and outputting, according to the instance category corresponding to the target grid, the point cloud corresponding to each instance, where the same target grid has the same instance category.
  • according to one or more embodiments of the present disclosure, the above point cloud segmentation method is provided, further comprising: when the same target point corresponds to multiple target grids, acquiring the credibility of each target grid in the multiple target grids; and determining, according to the credibility, the final target grid corresponding to the target point.
  • according to one or more embodiments of the present disclosure, the above point cloud segmentation method is provided, further comprising: for each target grid, selecting from the point cloud a plurality of relevant points closest to the center point of the target grid; aggregating the features of the plurality of relevant points to obtain an aggregated feature of each target grid; and applying activation processing to the aggregated feature to obtain the credibility of each target grid.
  • the output channels of the pre-trained neural network correspond one-to-one with the grids in the gridded scene space.
  • the pre-trained neural network includes a two-layer perceptron; according to one or more embodiments of the present disclosure, the above point cloud segmentation method is provided, further comprising: obtaining, through the two-layer perceptron in the pre-trained neural network, the target grid corresponding to each point in the point cloud in the gridded scene space to which it belongs.
  • the pre-trained neural network includes output channels in the x-axis, y-axis, and z-axis directions; according to one or more embodiments of the present disclosure, the above point cloud segmentation method is provided, further comprising: obtaining, through the pre-trained neural network, the projection position of each point in the point cloud in the x-axis, y-axis, and z-axis directions; and determining, according to the projection positions in the three directions, the target grid corresponding to each point in the gridded scene space to which it belongs.
  • the pre-trained neural network includes three-layer perceptrons corresponding to the x-axis, y-axis, and z-axis output channels; according to one or more embodiments of the present disclosure, the above point cloud segmentation method is provided, further comprising: obtaining, through the three-layer perceptrons in the pre-trained neural network, the projection positions of each point in the point cloud in the x-axis, y-axis, and z-axis directions, respectively.
  • according to one or more embodiments of the present disclosure, the above point cloud segmentation method is provided, further comprising: taking the sample point cloud as the first input of the pre-trained neural network, taking the sample target grid corresponding to the sample point cloud in the sample gridded scene space as the first expected output corresponding to the first input, taking the actual credibility of each grid in the sample gridded scene space as the second expected output corresponding to the first input, and training the pre-trained neural network with a cross-entropy loss function.
  • according to one or more embodiments of the present disclosure, the above point cloud segmentation method is provided, further comprising: combining the coordinate feature and the channel feature of each point in the point cloud to obtain the initial feature of the corresponding point; and extracting, through a feature extraction network, the local feature of the corresponding point from the initial feature of each point.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Multimedia (AREA)
  • Artificial Intelligence (AREA)
  • General Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Medical Informatics (AREA)
  • Databases & Information Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Molecular Biology (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Image Analysis (AREA)

Abstract

A point cloud segmentation method, apparatus, device, and storage medium. The point cloud segmentation method includes: acquiring a point cloud to be processed (S201); obtaining, through a pre-trained neural network, a target grid corresponding to each point in the point cloud in the gridded scene space to which it belongs (S202), wherein the pre-trained neural network is trained with sample point clouds and the sample target grids corresponding to the sample point clouds in a sample gridded scene space; and outputting, according to the instance category corresponding to the target grid, the point cloud corresponding to each instance, where the same target grid has the same instance category (S203).

Description

Point cloud segmentation method, apparatus, device, and storage medium
This application claims priority to Chinese Patent Application No. 202011112395.6, filed with the Chinese Patent Office on October 16, 2020, the entire contents of which are incorporated herein by reference.
Technical Field
This application relates to the field of computer technology, and, for example, to a point cloud segmentation method, apparatus, device, and storage medium.
Background
With the development of computer technology, the number of digital images grows day by day, and correctly recognizing digital images is very important for technical fields such as automatic driving and robot control. A digital image can be represented by point cloud data; therefore, point cloud segmentation is also an important branch of digital image processing technology.
When segmenting a point cloud, for example for instance segmentation, a "top-down" segmentation method is usually adopted. The "top-down" method represents different instances by predicting multiple three-dimensional bounding boxes, and then finds the points belonging to each instance within each bounding box through binary classification. However, this approach needs to remove a large number of redundant bounding boxes, resulting in low point cloud segmentation efficiency; meanwhile, the classification quality of the point cloud also depends on the accuracy of the bounding box prediction in the previous stage.
Summary
This application provides a point cloud segmentation method, apparatus, device, and storage medium to solve the technical problems of low point cloud segmentation efficiency and low point cloud segmentation accuracy.
A point cloud segmentation method is provided, including:
acquiring a point cloud to be processed;
obtaining, through a pre-trained neural network, a target grid corresponding to each point in the point cloud in the gridded scene space to which it belongs, wherein the pre-trained neural network is trained with sample point clouds and the sample target grids corresponding to the sample point clouds in a sample gridded scene space; and
outputting, according to the instance category corresponding to the target grid, the point cloud corresponding to each instance, where the same target grid has the same instance category.
A point cloud segmentation apparatus is also provided, including:
a first acquisition module, configured to acquire a point cloud to be processed;
a prediction module, configured to obtain, through a pre-trained neural network, a target grid corresponding to each point in the point cloud in the gridded scene space to which it belongs, wherein the pre-trained neural network is trained with sample point clouds and the sample target grids corresponding to the sample point clouds in a sample gridded scene space; and
an output module, configured to output, according to the instance category corresponding to the target grid, the point cloud corresponding to each instance, where the same target grid has the same instance category.
An electronic device is also provided, including a memory and a processor, the memory storing a computer program, and the processor implementing the above point cloud segmentation method provided by the embodiments of this application when executing the computer program.
A computer-readable storage medium is also provided, storing a computer program which, when executed by a processor, implements the above point cloud segmentation method provided by the embodiments of this application.
Brief Description of the Drawings
FIG. 1 is an application environment diagram of the point cloud segmentation method provided by an embodiment of this application;
FIG. 2 is a schematic flowchart of a point cloud segmentation method provided by an embodiment of this application;
FIG. 3 is a schematic flowchart of another point cloud segmentation method provided by an embodiment of this application;
FIG. 4 is a schematic diagram of the principle of a point cloud segmentation process provided by an embodiment of this application;
FIG. 5 is a schematic diagram of the principle of another point cloud segmentation process provided by an embodiment of this application;
FIG. 6 is a schematic comparison diagram of feature space distance distributions provided by an embodiment of this application;
FIG. 7 is a schematic structural diagram of a point cloud segmentation apparatus provided by an embodiment of this application;
FIG. 8 is a schematic structural diagram of an electronic device provided by an embodiment of this application.
Detailed Description
Embodiments of the present disclosure will be described below with reference to the accompanying drawings. Although some embodiments of the present disclosure are shown in the drawings, the present disclosure can be implemented in various forms and should not be construed as limited to the embodiments set forth here; rather, these embodiments are provided for understanding the present disclosure. The drawings and embodiments of the present disclosure are for exemplary purposes only and are not intended to limit the protection scope of the present disclosure.
The multiple steps described in the method embodiments of the present disclosure may be executed in different orders and/or in parallel. In addition, the method embodiments may include additional steps and/or omit the illustrated steps. The scope of the present disclosure is not limited in this respect.
The term "including" and its variants as used herein are open-ended, i.e., "including but not limited to". The term "based on" means "at least partially based on". The term "one embodiment" means "at least one embodiment"; the term "another embodiment" means "at least one additional embodiment"; the term "some embodiments" means "at least some embodiments". Relevant definitions of other terms will be given in the following description.
Concepts such as "first" and "second" mentioned in the present disclosure are only used to distinguish different apparatuses, modules, or units, and are not used to limit the order or interdependence of the functions performed by these apparatuses, modules, or units.
The modifiers "one" and "multiple" mentioned in the present disclosure are illustrative rather than restrictive, and should be understood as "one or more" unless the context indicates otherwise.
The names of messages or information exchanged between multiple apparatuses in the embodiments of the present disclosure are for illustrative purposes only and are not used to limit the scope of these messages or information.
FIG. 1 is an application environment diagram of the point cloud segmentation method provided by an embodiment of this application. Referring to FIG. 1, the point cloud segmentation method is applied to a point cloud segmentation system. The point cloud segmentation system may include a terminal 101 and a server 102, connected through a wireless or wired network. The terminal 101 may be a desktop terminal or a mobile terminal; optionally, the mobile terminal may be at least one of a Personal Digital Assistant (PDA), a tablet computer (Portable Android Device, PAD), a Personal Multimedia Player (PMP), a vehicle-mounted terminal (e.g., a vehicle navigation terminal), a mobile phone, and the like. The server 102 may be an independent server or a server cluster composed of multiple servers.
The point cloud segmentation method in the embodiments of this application may be executed by the terminal 101 or the server 102 alone, or jointly by the terminal 101 and the server 102. The following method embodiments are described with an electronic device (the terminal 101 and/or the server 102) as the execution subject.
Some concepts involved in the embodiments of this application are briefly introduced below:
An instance is a concrete object of a class; a concrete object can be regarded as an instance, and an instance category refers to the category of the concrete object, for example, object 1, object 2, object 3, and so on. Instance segmentation may divide a point cloud into one or more non-overlapping point groups each belonging to one concrete object, with one point group corresponding to one instance. For example, in a two-dimensional image or a three-dimensional scene, it distinguishes which points belong to one concrete object and which points belong to another.
FIG. 2 is a schematic flowchart of a point cloud segmentation method provided by an embodiment of this application. This embodiment concerns how an electronic device implements point cloud segmentation through single-stage point-by-point classification. As shown in FIG. 2, the method may include the following steps.
S201: Acquire a point cloud to be processed.
The point cloud to be processed refers to a point cloud requiring instance segmentation. The point cloud in the embodiments of this application may include a two-dimensional point cloud, a three-dimensional point cloud, or the like. A two-dimensional point cloud may be a set of multiple pixels in a two-dimensional image, and a three-dimensional point cloud may be a set of multiple three-dimensional points in a three-dimensional scene.
The point cloud segmentation method provided by the embodiments of this application has broad application prospects and great potential in fields such as automatic driving, robot control, and augmented reality. The electronic device can acquire the point cloud to be processed from front-end scanning devices in multiple application fields. The front-end scanning device may also upload the scanned point cloud to the cloud, from which the electronic device downloads the point cloud requiring instance segmentation.
S202: Obtain, through a pre-trained neural network, the target grid corresponding to each point in the point cloud in the gridded scene space to which it belongs.
The pre-trained neural network is trained with sample point clouds and the sample target grids corresponding to the sample point clouds in a sample gridded scene space.
In practical applications, considering that objects in a scene space do not overlap each other and that different object instances occupy different positions in the scene space, the scene space to which the point cloud belongs can be gridded in advance, with different grids defined to correspond to different instance categories. For example, the scene space to which the point cloud belongs can be divided in advance into multiple grids (e.g., N_s*N_s*N_s grids), each of which occupies a specific position and represents one instance category. If the center point of an object instance (the coordinates of the center point being the average of the coordinates of all points in the object instance) is located inside a grid, all points belonging to that object instance are classified into that grid, i.e., into the instance category corresponding to that grid.
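The gridding rule above can be sketched as follows (an illustration only, not the patent's implementation; the axis-aligned unit-cube scene bounds and the function name are assumptions): the center point of an instance is the mean of its point coordinates, and the grid cell containing that center determines the instance category of all the instance's points.

```python
import numpy as np

def assign_instance_to_grid(instance_points, scene_min, scene_size, n_s):
    # Center point of the instance: mean of the coordinates of all its points.
    center = instance_points.mean(axis=0)
    # Per-axis index (out of n_s cells per axis) of the cell containing the center.
    cell = np.floor((center - scene_min) / scene_size * n_s).astype(int)
    # All points of the instance are classified into this single cell.
    return tuple(np.clip(cell, 0, n_s - 1))
```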
On this basis, the above point-by-point classification of the point cloud can be implemented through a pre-trained neural network, which therefore needs to be trained with a large amount of training data. During training, a large number of sample point clouds and the sample target grids corresponding to the sample point clouds in the sample gridded scene space can be used. The sample scene space can be gridded to obtain the sample gridded scene space, and the instance category corresponding to each grid in the sample gridded scene space is defined, with different grids corresponding to different instance categories. Sample point clouds are acquired, and the sample target grid corresponding to each point of the sample point cloud in the gridded scene space is labeled, thereby obtaining the training data for the pre-trained neural network (i.e., the sample point clouds and the sample target grids corresponding to the sample point clouds in the sample gridded scene space). The pre-trained neural network is trained with this training data.
In this way, after the pre-trained neural network for the gridded scene space is obtained, the electronic device can input the point cloud to be processed into the pre-trained neural network and predict, through the pre-trained neural network, the target grid corresponding to each point in the point cloud in the gridded scene space to which it belongs.
To facilitate the processing of the point cloud by the pre-trained neural network, optionally, after acquiring the point cloud to be processed, the electronic device may also combine the coordinate feature and the channel feature of each point in the point cloud to obtain the initial feature of each point, and extract, through a feature extraction network, the local feature of each point from the initial feature of each point.
The coordinate feature of a point may be its coordinate position. The channel feature of a point may be its channel values, such as color channel values (Red Green Blue (RGB) values). Combining the coordinate feature and the channel feature may mean splicing each coordinate position with each channel value to obtain the initial feature of each point. The initial features here may take matrix form, i.e., an initial feature matrix. The electronic device inputs the initial feature matrix into the feature extraction network to extract the local feature of each point. The local feature here may be a high-dimensional expression of the characteristics of the point cloud, covering the information of the entire point cloud. The local features may also take matrix form, i.e., a local feature matrix. The electronic device inputs the local feature matrix into the pre-trained neural network, which projects the local feature matrix into the feature space used for instance classification and outputs the target grid corresponding to each point in the gridded scene space to which it belongs.
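As a small illustration of the feature combination just described (the function name and the choice of RGB as the channel feature are assumptions; the text only requires splicing coordinate and channel features):

```python
import numpy as np

def initial_features(coords, rgb):
    # coords: (N_p, 3) coordinate features; rgb: (N_p, 3) RGB channel features.
    # Splicing them point by point yields the (N_p, 6) initial feature matrix,
    # which is then fed to the feature extraction network (e.g., PointNet).
    return np.concatenate([coords, rgb], axis=1)
```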
In an optional embodiment, the electronic device may choose a deep learning network as the feature extraction network. The feature extraction network may be the first half of a PointNet network, the first half of a PointNet++ network, the first half of PointConv, or another network structure.
S203: Output, according to the instance category corresponding to the target grid, the point cloud corresponding to each instance, where the same target grid has the same instance category.
Since the instance category corresponding to each grid in the gridded scene space to which the point cloud belongs is defined in advance, and different grids correspond to different instance categories, after obtaining the target grid corresponding to each point in the point cloud in the gridded scene space to which it belongs, the electronic device can assign each point in the point cloud to the corresponding object instance based on the instance category corresponding to the target grid, and output the point cloud corresponding to each object instance.
With the point cloud segmentation method provided by the embodiments of this application, after acquiring the point cloud to be processed, the electronic device obtains through a pre-trained neural network the target grid corresponding to each point in the point cloud in the gridded scene space to which it belongs, and then outputs, according to the instance category corresponding to the target grid, the point cloud corresponding to each instance. In the point cloud segmentation process, because the electronic device can directly predict through the pre-trained neural network the target grid corresponding to each point in the gridded scene space to which it belongs, and the same target grid has the same instance category, each point of the point cloud is directly classified into a clear and specific instance category in a single-stage point-by-point manner, which avoids the error accumulation caused by multiple stages and improves the accuracy of the segmentation results. At the same time, it avoids the computational overhead of removing a large number of redundant bounding boxes in a first stage, improving the efficiency of point cloud segmentation.
In practical applications, there may be a case where the same target point in the predicted point cloud corresponds to multiple target grids; in this case, the target point needs to be unambiguously classified into a corresponding target grid. To this end, processing may follow the process described in the following embodiment. Optionally, on the basis of the above embodiment, as shown in FIG. 3, when the same target point corresponds to multiple target grids, before the above S203, the method may further include:
S301: Acquire the credibility of each target grid in the multiple target grids.
After the scene space to which the point cloud belongs is gridded, most grids in the gridded scene space do not correspond to real object instances; only a few grids do. Therefore, a credibility can be computed for each grid: the higher the credibility of a grid, the greater the probability that the grid corresponds to a real object instance, and vice versa.
In this way, when the pre-trained neural network yields multiple target grids for the same target point, the electronic device can acquire the credibility of each target grid separately. As an optional implementation, the electronic device can acquire the credibility of each of the multiple target grids through the following process, and the above S301 may include:
S3011: For each target grid, select from the point cloud a plurality of relevant points closest to the center point of the target grid.
The features of the relevant points closest to the center point of a target grid can characterize the properties of the target grid; therefore, the electronic device can select the relevant points closest to the center point of the target grid to compute the credibility of the corresponding target grid. In practical applications, the number of selected relevant points can be set according to actual needs. Optionally, the 32 points closest to the center point can be selected to participate in the credibility computation.
S3012: Aggregate the features of the plurality of relevant points to obtain the aggregated feature of each target grid.
Aggregating the features of the relevant points may mean applying mean pooling to them, i.e., summing the feature data of the relevant points and taking the average, thereby obtaining the aggregated feature of the corresponding target grid.
S3013: Apply activation processing to the aggregated feature to obtain the credibility of each target grid.
After obtaining the aggregated feature of each target grid, the electronic device can apply a sigmoid activation function to the aggregated feature, thereby obtaining the credibility of each target grid.
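Steps S3011 to S3013 can be sketched as follows (illustrative only; k = 32 follows the optional choice in the text, while the scalar per-point feature and the function names are assumptions):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def grid_credibility(points, features, grid_center, k=32):
    # S3011: select the k points closest to the grid's center point.
    dists = np.linalg.norm(points - grid_center, axis=1)
    nearest = np.argsort(dists)[:min(k, len(points))]
    # S3012: aggregate by mean pooling (sum the features, then average).
    pooled = features[nearest].mean(axis=0)
    # S3013: activation processing yields a credibility in (0, 1).
    return float(sigmoid(pooled))
```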
As another optional implementation, the electronic device may add a branch to the above pre-trained neural network to predict the credibility of each grid in the gridded scene space to which the point cloud belongs. When the pre-trained neural network yields multiple target grids for the same target point, the electronic device can obtain the credibility of each of the multiple target grids directly from the pre-trained neural network. To achieve this, when training the pre-trained neural network, the training data also needs to include the actual credibility of each grid obtained from the sample point clouds, and the pre-trained neural network is trained in combination with the actual credibility of each grid.
S302: Determine, according to the credibility, the final target grid corresponding to the target point.
The electronic device can take the target grid with the highest credibility as the final target grid corresponding to the target point. When multiple target grids share the same credibility, any one of them can be selected as the final target grid corresponding to the target point.
In this embodiment, when the same target point in the point cloud corresponds to multiple target grids, the electronic device can determine the final target grid corresponding to the target point based on the credibility of each target grid. Moreover, when determining the credibility of each target grid, the features of the relevant points closest to the center point of the target grid are combined, which improves the computed credibility of each target grid. Based on accurate credibility results, the target point can subsequently be classified accurately into the corresponding target grid, and hence into the corresponding instance category, improving the accuracy of the point cloud segmentation results.
In one embodiment, in order to implement the point-by-point classification of the point cloud through the pre-trained neural network, optionally, the output channels of the pre-trained neural network correspond one-to-one with the grids in the gridded scene space.
The output channels of the pre-trained neural network can be put in one-to-one correspondence in advance with the grids included in the gridded scene space: when the output value of one output channel of the pre-trained neural network is 1, the point belongs to the grid corresponding to that output channel; otherwise, the output value of that output channel is 0. Optionally, the pre-trained neural network can be implemented with a two-layer perceptron, i.e., the pre-trained neural network includes a two-layer perceptron. Accordingly, the above S202 may be: obtaining, through the two-layer perceptron in the pre-trained neural network, the target grid corresponding to each point in the point cloud in the gridded scene space to which it belongs.
Referring to FIG. 4, suppose the point cloud P includes N_p points, the scene space to which the point cloud belongs is divided in advance into N_s*N_s*N_s grids, each grid corresponds to a different instance category, and in this embodiment the grids correspond one-to-one with the output channels of the pre-trained neural network. The electronic device combines the coordinate feature and the channel feature of each point in the point cloud to obtain the initial features N_p' of the corresponding points, and extracts through a feature extraction network (e.g., PointNet) the local features of the corresponding points from the initial features N_p', obtaining a local feature matrix F_l ∈ R^(N_p×N_l). The electronic device inputs the local feature matrix F_l into the above pre-trained neural network, projects F_l into the feature space used for classification through a multi-layer perceptron (e.g., a two-layer perceptron of shape (32, N_s^3)), and outputs a feature matrix F ∈ R^(N_p×N_c); a sigmoid activation function scales the element values of F into the interval (0, 1), yielding the prediction matrix F' of the point cloud and thereby the target grid corresponding to each point. Here, N_p is the number of points in the point cloud, N_l is the local feature dimension, N_c denotes the classification feature dimension, and R denotes a space. The row dimension of the prediction matrix F' represents the multiple points of the point cloud, and the column dimension represents the multiple grids in the gridded scene to which the point cloud belongs.
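A minimal sketch of this classification head (an assumption-laden illustration: small random weights stand in for trained parameters, the ReLU between layers is an assumption, and plain numpy replaces the actual network): a two-layer perceptron maps the N_l-dimensional local feature of each point to N_s^3 output channels, one per grid, and a sigmoid scales the outputs into (0, 1).

```python
import numpy as np

def two_layer_head(F_l, n_s, hidden=32, rng=np.random.default_rng(0)):
    # F_l: (N_p, N_l) local feature matrix; output: (N_p, N_s^3) prediction
    # matrix F', one output channel per grid of the gridded scene space.
    n_l = F_l.shape[1]
    W1 = rng.normal(size=(n_l, hidden)) * 0.1      # layer 1: N_l -> 32
    W2 = rng.normal(size=(hidden, n_s ** 3)) * 0.1  # layer 2: 32 -> N_s^3
    h = np.maximum(F_l @ W1, 0.0)                   # ReLU between the layers
    logits = h @ W2
    return 1.0 / (1.0 + np.exp(-logits))            # sigmoid into (0, 1)
```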
Usually, only a few grids in a scene space correspond to real object instances, so the feature vectors of the output space of the pre-trained neural network (which are N_s^3-dimensional) are too sparse. To reduce the consumption of computing resources, in one embodiment, another pre-trained neural network is also provided. This pre-trained neural network includes output channels in the x-axis, y-axis, and z-axis directions. Under this network architecture, the pre-trained neural network can be implemented by three independent three-layer perceptrons, i.e., it includes three-layer perceptrons corresponding to the x-axis, y-axis, and z-axis output channels.
The above S202 may include:
S2021: Obtain, through the pre-trained neural network, the projection position of each point in the point cloud in the x-axis, y-axis, and z-axis directions.
Optionally, the electronic device can obtain, through the three-layer perceptrons corresponding to the x-axis, y-axis, and z-axis output channels in the pre-trained neural network, the projection position of each point in the point cloud in the x-axis, y-axis, and z-axis directions, respectively.
S2022: Determine, according to the projection positions in the x-axis, y-axis, and z-axis directions, the target grid corresponding to each point in the gridded scene space to which it belongs.
The electronic device determines the target grid corresponding to each point in the gridded scene space to which it belongs by predicting the orthogonal projections of each point in the point cloud onto the three directions of the x-axis, y-axis, and z-axis. Suppose that, for one point in the point cloud, the predicted orthogonal projections onto the x-axis, y-axis, and z-axis are a_x, a_y, and a_z, respectively; based on the projection positions of the point in the three directions, the target grid corresponding to the point in the gridded scene space to which it belongs can be determined as (a_x, a_y, a_z).
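The determination in S2022 can be sketched as follows (the text does not specify how the per-axis predictions are decoded; taking the most confident projection position per axis is one plausible reading, and the names are assumptions):

```python
import numpy as np

def decode_target_grids(Fx, Fy, Fz):
    # Fx, Fy, Fz: (N_p, N_s) per-axis prediction matrices with values in (0, 1).
    # For each point, the triple of per-axis projection positions
    # (a_x, a_y, a_z) identifies its target grid among N_s^3 cells.
    ax = np.argmax(Fx, axis=1)
    ay = np.argmax(Fy, axis=1)
    az = np.argmax(Fz, axis=1)
    return np.stack([ax, ay, az], axis=1)
```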
Referring to FIG. 5, assume that the point cloud P includes N_p points and that the scene space to which the point cloud belongs has been divided in advance into N_s*N_s*N_s grids, each grid corresponding to a different instance category; the output channels of the pre-trained neural network in the x-axis direction correspond to the projections of the grids on the x-axis, the output channels in the y-axis direction correspond to the projections of the grids on the y-axis, and the output channels in the z-axis direction correspond to the projections of the grids on the z-axis. The electronic device combines the coordinate features and channel features of each point in the point cloud to obtain the initial features N_p′ of the corresponding points, and extracts the local features of the corresponding points from the initial features N_p′ through a feature extraction network (e.g., PointNet), obtaining the local feature matrix F_l ∈ R^(N_p×N_l). The electronic device feeds the local feature matrix F_l into this pre-trained neural network, projects F_l into the feature space used for classification through three independent multilayer perceptrons (e.g., three-layer perceptrons of shape (32, 32, N_s)), and applies a sigmoid activation function, obtaining the prediction matrix F_x ∈ R^(N_p×N_s) of the point cloud in the x-axis direction, the prediction matrix F_y ∈ R^(N_p×N_s) in the y-axis direction, and the prediction matrix F_z ∈ R^(N_p×N_s) in the z-axis direction. Based on the prediction matrices F_x, F_y, and F_z, the electronic device obtains the target grid corresponding to each point of the point cloud.
Under this architecture, the dimension of the output space of the pre-trained neural network is 3N_s, which, compared with an output space of dimension N_s³, reduces the computation of the point-wise classification process and the memory consumption.
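The saving can be checked with a few lines of arithmetic (illustrative, not from the original disclosure) comparing the joint output dimension N_s³ against the per-axis dimension 3N_s:

```python
# Output dimension of the joint formulation (one channel per grid, N_s^3)
# versus the per-axis formulation (N_s channels for each of the three axes).
for N_s in (16, 32, 64):
    joint, per_axis = N_s ** 3, 3 * N_s
    print(N_s, joint, per_axis, joint // per_axis)
```

For N_s = 32, the joint formulation needs 32768 output channels per point while the per-axis formulation needs only 96, a reduction of over 300x.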
In this embodiment, the electronic device can adopt pre-trained neural networks with different architectures to predict the target grid corresponding to each point of the point cloud in the gridded scene space, diversifying the ways in which point clouds can be segmented. Moreover, the pre-trained neural network can include output channels in the x-axis, y-axis, and z-axis directions; predicting each point's target grid with a pre-trained neural network that has output channels in the three directions reduces the computation of the point-wise classification process and the memory consumption.
In one embodiment, a process of obtaining the above pre-trained neural network is also provided. On the basis of the above embodiments, optionally, the process of obtaining the pre-trained neural network may be: taking the sample point cloud as a first input of the pre-trained neural network, taking the sample target grids corresponding to the sample point cloud in the sample gridded scene space as a first expected output corresponding to the first input, taking the actual confidence of each grid in the sample gridded scene space as a second expected output corresponding to the first input, and training the pre-trained neural network with a cross-entropy loss function.
A sample scene space can be gridded to obtain the sample gridded scene space, and the instance category corresponding to each grid in the sample gridded scene space is defined, with different grids corresponding to different instance categories. A sample point cloud is acquired, and the sample target grid of each point of the sample point cloud in the gridded scene space is labeled, yielding the sample target grids corresponding to the sample point cloud. Meanwhile, the actual confidence of each grid can also be computed from the sample point cloud and the position information of each grid. The above sample point cloud, the sample target grids corresponding to the sample point cloud, and the actual confidence of each grid are used as training data to train the pre-trained neural network. The process of computing the actual confidence of each grid may be: for each grid, selecting from the sample point cloud the multiple sample related points closest to the center of the grid, aggregating the features of the multiple sample related points to obtain the aggregated feature of each grid, and applying an activation to the aggregated feature to obtain the actual confidence of the corresponding grid. Optionally, the 32 points closest to the center can be selected to participate in the confidence computation.
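The described confidence computation — select the 32 points nearest a grid's center, aggregate their features, apply an activation — can be sketched as follows (illustrative, not from the original disclosure); mean pooling and the random projection to a scalar are assumptions, since the text fixes neither the aggregation operator nor the activation head:

```python
import numpy as np

rng = np.random.default_rng(1)

points = rng.uniform(size=(100, 3))   # sample point cloud coordinates
feats = rng.normal(size=(100, 8))     # per-point features (hypothetical)
center = np.array([0.5, 0.5, 0.5])    # center of one grid
k = 32                                # number of related points, per the text

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# 1. Pick the k points nearest the grid center.
nearest = np.argsort(np.linalg.norm(points - center, axis=1))[:k]
# 2. Aggregate their features; mean pooling is an assumption.
aggregated = feats[nearest].mean(axis=0)
# 3. Activation -> confidence; the random projection to a scalar stands in
#    for the network's learned confidence head (assumption).
w = rng.normal(size=8)
confidence = sigmoid(aggregated @ w)
print(nearest.shape, 0.0 < confidence < 1.0)  # (32,) True
```

The sigmoid keeps the confidence in (0, 1), matching the role of a score that is later compared against a preset threshold.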
After the training data are obtained, the electronic device takes the sample point cloud as the first input of the pre-trained neural network and the sample target grids corresponding to the sample point cloud as the first expected output corresponding to the first input, and computes a first loss value of the cross-entropy loss function; it takes the actual confidence of each grid as the second expected output corresponding to the first input and computes a second loss value of the cross-entropy loss function; based on the weighted sum of the first loss value and the second loss value, it adjusts the parameters of the pre-trained neural network until the convergence condition of the loss function is met, thereby obtaining the trained pre-trained neural network.
The first loss value L_cate can be computed as

L_cate = (1/N_pos) · Σ_j 1_j · D(F_·j, G_·j)

where N_pos is the number of trusted grids (a trusted grid here being understood as a grid whose confidence is greater than a preset threshold); 1_j is the indicator for the j-th column of the corresponding matrix, taking the value 1 if the grid corresponding to column j is a positive sample and 0 otherwise; F_ij denotes the element in row i and column j of the prediction matrix F, and G_ij denotes the element in row i and column j of the sample matrix G (the sample matrix G can represent the correspondence between the sample point cloud and the sample target grids); Dice Loss is used here to compute the distance D(·) between matrices.
In this embodiment, the electronic device can use, as training data, a large number of sample point clouds, the sample target grids corresponding to the sample point clouds in the sample gridded scene space, and the actual confidence of each grid, and train the pre-trained neural network with a cross-entropy loss function, making the trained pre-trained neural network more accurate. With an accurate pre-trained neural network, the target grid corresponding to each point of a point cloud to be processed in the gridded scene space can be predicted directly, improving the accuracy of the prediction result and of the point cloud segmentation result.
The technical solutions provided by the embodiments of the present application can be widely applied in fields such as autonomous driving, robot control, augmented reality, and video instance segmentation. For example, in the control of an indoor robot, if the scanned point cloud can be segmented, the robot can precisely perceive every object, empowering the robot's navigation and control.
To verify the technical solution provided by the embodiments of the present application, it was compared with a point cloud segmentation method (e.g., ASIS). Referring to FIG. 6, it can be seen from FIG. 6 that the point cloud segmentation method provided by the embodiments of the present application makes the overlap between the feature distances within the same instance and the feature distances between different instances smaller, so that different instances are more distinguishable.
FIG. 7 is a schematic structural diagram of a point cloud segmentation apparatus provided by an embodiment of the present application. As shown in FIG. 7, the apparatus may include: a first acquisition module 701, a prediction module 702, and an output module 703.
The first acquisition module 701 is configured to acquire a point cloud to be processed;
the prediction module 702 is configured to obtain, through a pre-trained neural network, the target grid corresponding to each point of the point cloud in the gridded scene space to which the point cloud belongs, where the pre-trained neural network is trained with a sample point cloud and the sample target grids corresponding to the sample point cloud in a sample gridded scene space;
the output module 703 is configured to output, according to the instance category corresponding to the target grid, the point cloud corresponding to each instance, points in the same target grid having the same instance category.
With the point cloud segmentation apparatus provided by the embodiments of the present application, after acquiring the point cloud to be processed, the electronic device obtains through the pre-trained neural network the target grid corresponding to each point of the point cloud in the gridded scene space to which the point cloud belongs, and then outputs the point cloud corresponding to each instance according to the instance category of the target grid. During point cloud segmentation, since the electronic device can directly predict each point's target grid through the pre-trained neural network, and points in the same target grid share the same instance category, each point of the point cloud is classified directly into a definite, specific instance category in a single-stage point-wise manner, which avoids the error accumulation caused by multi-stage methods and improves the accuracy of the segmentation result. It also avoids the computational cost of removing a large number of redundant bounding boxes in a first stage, improving the efficiency of point cloud segmentation.
On the basis of the above embodiments, optionally, when the same target point corresponds to multiple target grids, the apparatus may further include: a second acquisition module and a determination module.
The second acquisition module is configured to acquire the confidence of each of the multiple target grids before the output module 703 outputs the point cloud corresponding to each instance according to the instance category of the target grid;
the determination module is configured to determine, according to the confidence, the final target grid corresponding to the target point.
On the basis of the above embodiments, optionally, the second acquisition module is configured to: for each target grid, select from the point cloud the multiple related points closest to the center of the target grid; aggregate the features of the multiple related points to obtain the aggregated feature of each target grid; and apply an activation to the aggregated feature to obtain the confidence of each target grid.
Optionally, the output channels of the pre-trained neural network correspond one-to-one to the grids in the gridded scene space.
On the basis of the above embodiments, optionally, the pre-trained neural network includes output channels in the x-axis, y-axis, and z-axis directions;
the prediction module 702 is configured to obtain, through the pre-trained neural network, the projected position of each point of the point cloud in the x-axis, y-axis, and z-axis directions, and to determine, according to the projected positions in the x-axis, y-axis, and z-axis directions, the target grid corresponding to each point in the gridded scene space to which the point cloud belongs.
On the basis of the above embodiments, optionally, the apparatus may further include a network training module;
the network training module is configured to take the sample point cloud as a first input of the pre-trained neural network, take the sample target grids corresponding to the sample point cloud in the sample gridded scene space as a first expected output corresponding to the first input, take the actual confidence of each grid in the sample gridded scene space as a second expected output corresponding to the first input, and train the pre-trained neural network with a cross-entropy loss function.
On the basis of the above embodiments, optionally, the apparatus may further include a combination module and a feature extraction module;
the combination module is configured to combine, after the first acquisition module 701 acquires the point cloud to be processed, the coordinate features and channel features of each point in the point cloud to obtain the initial features of each point;
the feature extraction module is configured to extract, through a feature extraction network, the local features of each point from the initial features of each point.
Referring now to FIG. 8, FIG. 8 shows a schematic structural diagram of an electronic device (e.g., the terminal or server in FIG. 1) 800 suitable for implementing the embodiments of the present disclosure. The electronic device shown in FIG. 8 is merely an example and should not impose any limitation on the functions and scope of use of the embodiments of the present disclosure.
As shown in FIG. 8, the electronic device 800 may include a processing apparatus (e.g., a central processing unit, a graphics processing unit, etc.) 801, which can perform various appropriate actions and processes according to a program stored in a read-only memory (ROM) 802 or a program loaded from a storage apparatus 808 into a random access memory (RAM) 803. The RAM 803 also stores various programs and data required for the operation of the electronic device 800. The processing apparatus 801, the ROM 802, and the RAM 803 are connected to one another through a bus 804. An input/output (I/O) interface 805 is also connected to the bus 804.
Generally, the following apparatuses can be connected to the I/O interface 805: an input apparatus 806 including, for example, a touchscreen, a touchpad, a keyboard, a mouse, a camera, a microphone, an accelerometer, a gyroscope, etc.; an output apparatus 807 including, for example, a liquid crystal display (LCD), a speaker, a vibrator, etc.; a storage apparatus 808 including, for example, a magnetic tape, a hard disk, etc.; and a communication apparatus 809. The communication apparatus 809 may allow the electronic device 800 to communicate wirelessly or by wire with other devices to exchange data. Although FIG. 8 shows the electronic device 800 with various apparatuses, it is not required to implement or possess all of the apparatuses shown; more or fewer apparatuses may alternatively be implemented or possessed.
According to embodiments of the present disclosure, the process described above with reference to the flowcharts can be implemented as a computer software program. For example, embodiments of the present disclosure include a computer program product comprising a computer program carried on a non-transitory computer-readable medium, the computer program containing program code for executing the method shown in the flowcharts. In such an embodiment, the computer program may be downloaded and installed from a network through the communication apparatus 809, or installed from the storage apparatus 808, or installed from the ROM 802. When the computer program is executed by the processing apparatus 801, the above functions defined in the methods of the embodiments of the present disclosure are performed.
The computer-readable medium described above in the present disclosure may be a computer-readable signal medium or a computer-readable storage medium, or any combination of the two. The storage medium may be a non-transitory storage medium. The computer-readable storage medium may be, for example but not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the above. The computer-readable storage medium may include, but is not limited to: an electrical connection with one or more wires, a portable computer diskette, a hard disk, a RAM, a ROM, an erasable programmable read-only memory (EPROM), a flash memory, an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above. In the present disclosure, a computer-readable storage medium may be any tangible medium containing or storing a program that can be used by or in combination with an instruction execution system, apparatus, or device. In the present disclosure, a computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, in which computer-readable program code is carried. Such a propagated data signal may take many forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination of the above. A computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium; the computer-readable signal medium can send, propagate, or transmit a program for use by or in combination with an instruction execution system, apparatus, or device. The program code contained on the computer-readable medium may be transmitted with any suitable medium, including but not limited to: a wire, an optical cable, radio frequency (RF), etc., or any suitable combination of the above.
In some embodiments, the client and the server can communicate using any currently known or future-developed network protocol such as the HyperText Transfer Protocol (HTTP), and can be interconnected with digital data communication (e.g., a communication network) in any form or medium. Examples of communication networks include a local area network (LAN), a wide area network (WAN), an internetwork (e.g., the Internet), and a peer-to-peer network (e.g., an ad hoc peer-to-peer network), as well as any currently known or future-developed network.
The above computer-readable medium may be contained in the above electronic device, or it may exist alone without being assembled into the electronic device.
The above computer-readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: acquire a point cloud to be processed; obtain, through a pre-trained neural network, the target grid corresponding to each point of the point cloud in the gridded scene space to which the point cloud belongs, where the pre-trained neural network is trained with a sample point cloud and the sample target grids corresponding to the sample point cloud in a sample gridded scene space; and output, according to the instance category corresponding to the target grid, the point cloud corresponding to each instance, points in the same target grid having the same instance category.
Computer program code for performing the operations of the present disclosure may be written in one or more programming languages or combinations thereof, including but not limited to object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In cases involving a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a LAN or a WAN, or may be connected to an external computer (for example, through the Internet using an Internet service provider).
The flowcharts and block diagrams in the accompanying drawings illustrate the possible architectures, functions, and operations of systems, methods, and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowcharts or block diagrams may represent a module, program segment, or part of code that contains one or more executable instructions for implementing specified logical functions. It should also be noted that in some alternative implementations, the functions noted in the blocks may occur in an order different from that noted in the drawings. For example, two blocks shown in succession may in fact be executed substantially in parallel, or sometimes in the reverse order, depending on the functions involved. It should also be noted that each block of the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, can be implemented with dedicated hardware-based systems that perform the specified functions or operations, or with combinations of dedicated hardware and computer instructions.
The units involved in the embodiments of the present disclosure may be implemented by software or by hardware, where the name of a unit does not in one case constitute a limitation on the unit itself; for example, the first acquisition unit may also be described as "a unit for acquiring at least two Internet Protocol addresses".
The functions described herein above may be performed at least in part by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that can be used include: field programmable gate arrays (FPGAs), application-specific integrated circuits (ASICs), application-specific standard products (ASSPs), systems on chip (SOCs), complex programmable logic devices (CPLDs), and so on.
In the context of the present disclosure, a machine-readable medium may be a tangible medium that can contain or store a program for use by or in combination with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the above. Machine-readable storage media include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a RAM, a ROM, an EPROM, a flash memory, an optical fiber, a portable CD-ROM, an optical storage device, a magnetic storage device, or any suitable combination of the above.
In one embodiment, an electronic device is provided, including a memory and a processor, the memory storing a computer program, the processor implementing the following steps when executing the computer program:
acquiring a point cloud to be processed;
obtaining, through a pre-trained neural network, the target grid corresponding to each point of the point cloud in the gridded scene space to which the point cloud belongs, where the pre-trained neural network is trained with a sample point cloud and the sample target grids corresponding to the sample point cloud in a sample gridded scene space;
outputting, according to the instance category corresponding to the target grid, the point cloud corresponding to each instance, points in the same target grid having the same instance category.
With the point cloud segmentation device provided by the embodiments of the present application, after acquiring the point cloud to be processed, the electronic device obtains through the pre-trained neural network the target grid corresponding to each point of the point cloud in the gridded scene space to which the point cloud belongs, and then outputs the point cloud corresponding to each instance according to the instance category of the target grid. During point cloud segmentation, since the electronic device can directly predict each point's target grid through the pre-trained neural network, and points in the same target grid share the same instance category, each point of the point cloud is classified directly into a definite, specific instance category in a single-stage point-wise manner, which avoids the error accumulation caused by multi-stage methods and improves the accuracy of the segmentation result. It also avoids the computational cost of removing a large number of redundant bounding boxes in a first stage, improving the efficiency of point cloud segmentation.
When the same target point corresponds to multiple target grids, in one embodiment, the processor further implements the following steps when executing the computer program: acquiring the confidence of each of the multiple target grids; determining, according to the confidence, the final target grid corresponding to the target point.
In one embodiment, the processor further implements the following steps when executing the computer program: for each target grid, selecting from the point cloud the multiple related points closest to the center of the target grid; aggregating the features of the multiple related points to obtain the aggregated feature of each target grid; applying an activation to the aggregated feature to obtain the confidence of each target grid.
Optionally, the output channels of the pre-trained neural network correspond one-to-one to the grids in the gridded scene space.
In one embodiment, the pre-trained neural network includes a two-layer perceptron; the processor further implements the following step when executing the computer program: obtaining, through the two-layer perceptron in the pre-trained neural network, the target grid corresponding to each point of the point cloud in the gridded scene space to which the point cloud belongs.
In one embodiment, the pre-trained neural network includes output channels in the x-axis, y-axis, and z-axis directions; the processor further implements the following steps when executing the computer program: obtaining, through the pre-trained neural network, the projected position of each point of the point cloud in the x-axis, y-axis, and z-axis directions; determining, according to the projected positions in the x-axis, y-axis, and z-axis directions, the target grid corresponding to each point in the gridded scene space to which the point cloud belongs.
In one embodiment, the pre-trained neural network includes three-layer perceptrons corresponding to the x-axis, y-axis, and z-axis output channels; the processor further implements the following step when executing the computer program: obtaining, through the three-layer perceptrons in the pre-trained neural network, the projected position of each point of the point cloud in the x-axis, y-axis, and z-axis directions, respectively.
In one embodiment, the processor further implements the following steps when executing the computer program: taking the sample point cloud as a first input of the pre-trained neural network, taking the sample target grids corresponding to the sample point cloud in the sample gridded scene space as a first expected output corresponding to the first input, taking the actual confidence of each grid in the sample gridded scene space as a second expected output corresponding to the first input, and training the pre-trained neural network with a cross-entropy loss function.
In one embodiment, the processor further implements the following steps when executing the computer program: combining the coordinate features and channel features of each point in the point cloud to obtain the initial features of the corresponding points; extracting, through a feature extraction network, the local features of the corresponding points from the initial features of each point.
The point cloud segmentation apparatus, device, and storage medium provided in the above embodiments can execute the point cloud segmentation method provided by any embodiment of the present application, and possess the functional modules and effects corresponding to executing the method. For technical details not exhaustively described in the above embodiments, reference may be made to the point cloud segmentation method provided by any embodiment of the present application.
According to one or more embodiments of the present disclosure, a point cloud segmentation method is provided, including:
acquiring a point cloud to be processed;
obtaining, through a pre-trained neural network, the target grid corresponding to each point of the point cloud in the gridded scene space to which the point cloud belongs, where the pre-trained neural network is trained with a sample point cloud and the sample target grids corresponding to the sample point cloud in a sample gridded scene space;
outputting, according to the instance category corresponding to the target grid, the point cloud corresponding to each instance, points in the same target grid having the same instance category.
When the same target point corresponds to multiple target grids, according to one or more embodiments of the present disclosure, the above point cloud segmentation method is provided, further including: acquiring the confidence of each of the multiple target grids; determining, according to the confidence, the final target grid corresponding to the target point.
According to one or more embodiments of the present disclosure, the above point cloud segmentation method is provided, further including: for each target grid, selecting from the point cloud the multiple related points closest to the center of the target grid; aggregating the features of the multiple related points to obtain the aggregated feature of each target grid; applying an activation to the aggregated feature to obtain the confidence of each target grid.
Optionally, the output channels of the pre-trained neural network correspond one-to-one to the grids in the gridded scene space.
Optionally, the pre-trained neural network includes a two-layer perceptron; according to one or more embodiments of the present disclosure, the above point cloud segmentation method is provided, further including: obtaining, through the two-layer perceptron in the pre-trained neural network, the target grid corresponding to each point of the point cloud in the gridded scene space to which the point cloud belongs.
Optionally, the pre-trained neural network includes output channels in the x-axis, y-axis, and z-axis directions; according to one or more embodiments of the present disclosure, the above point cloud segmentation method is provided, further including: obtaining, through the pre-trained neural network, the projected position of each point of the point cloud in the x-axis, y-axis, and z-axis directions; determining, according to the projected positions in the x-axis, y-axis, and z-axis directions, the target grid corresponding to each point in the gridded scene space to which the point cloud belongs.
Optionally, the pre-trained neural network includes three-layer perceptrons corresponding to the x-axis, y-axis, and z-axis output channels; according to one or more embodiments of the present disclosure, the above point cloud segmentation method is provided, further including: obtaining, through the three-layer perceptrons in the pre-trained neural network, the projected position of each point of the point cloud in the x-axis, y-axis, and z-axis directions, respectively.
According to one or more embodiments of the present disclosure, the above point cloud segmentation method is provided, further including: taking the sample point cloud as a first input of the pre-trained neural network, taking the sample target grids corresponding to the sample point cloud in the sample gridded scene space as a first expected output corresponding to the first input, taking the actual confidence of each grid in the sample gridded scene space as a second expected output corresponding to the first input, and training the pre-trained neural network with a cross-entropy loss function.
According to one or more embodiments of the present disclosure, the above point cloud segmentation method is provided, further including: combining the coordinate features and channel features of each point in the point cloud to obtain the initial features of the corresponding points; extracting, through a feature extraction network, the local features of the corresponding points from the initial features of each point.
In addition, although each operation is depicted in a particular order, this should not be understood as requiring that the operations be performed in the particular order shown or in sequential order. In certain circumstances, multitasking and parallel processing may be advantageous. Likewise, although the above discussion contains multiple implementation details, these should not be construed as limitations on the scope of the present disclosure. Some features described in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable sub-combination.

Claims (12)

  1. A point cloud segmentation method, comprising:
    acquiring a point cloud to be processed;
    obtaining, through a pre-trained neural network, a target grid corresponding to each point of the point cloud in a gridded scene space to which the point cloud belongs, wherein the pre-trained neural network is trained with a sample point cloud and sample target grids corresponding to the sample point cloud in a sample gridded scene space;
    outputting, according to an instance category corresponding to the target grid, a point cloud corresponding to each instance, points in the same target grid having the same instance category.
  2. The method according to claim 1, wherein, in a case where a same target point corresponds to multiple target grids, before the outputting, according to the instance category corresponding to the target grid, of the point cloud corresponding to each instance, the method further comprises:
    acquiring a confidence of each of the multiple target grids;
    determining, according to the confidence, a final target grid corresponding to the target point.
  3. The method according to claim 2, wherein the acquiring the confidence of each of the multiple target grids comprises:
    for each target grid, selecting from the point cloud multiple related points closest to a center of the target grid;
    aggregating features of the multiple related points to obtain an aggregated feature of each target grid;
    applying an activation to the aggregated feature to obtain the confidence of each target grid.
  4. The method according to any one of claims 1 to 3, wherein output channels of the pre-trained neural network correspond one-to-one to grids in the gridded scene space.
  5. The method according to claim 4, wherein the pre-trained neural network comprises a two-layer perceptron; and
    the obtaining, through the pre-trained neural network, of the target grid corresponding to each point of the point cloud in the gridded scene space to which the point cloud belongs comprises:
    obtaining, through the two-layer perceptron in the pre-trained neural network, the target grid corresponding to each point of the point cloud in the gridded scene space to which the point cloud belongs.
  6. The method according to any one of claims 1 to 3, wherein the pre-trained neural network comprises output channels in x-axis, y-axis, and z-axis directions; and
    the obtaining, through the pre-trained neural network, of the target grid corresponding to each point of the point cloud in the gridded scene space to which the point cloud belongs comprises:
    obtaining, through the pre-trained neural network, a projected position of each point of the point cloud in the x-axis, y-axis, and z-axis directions;
    determining, according to the projected positions in the x-axis, y-axis, and z-axis directions, the target grid corresponding to each point in the gridded scene space to which the point cloud belongs.
  7. The method according to claim 6, wherein the pre-trained neural network comprises three-layer perceptrons corresponding to the output channels in the x-axis, y-axis, and z-axis directions; and
    the obtaining, through the pre-trained neural network, of the projected position of each point of the point cloud in the x-axis, y-axis, and z-axis directions comprises: obtaining, through the three-layer perceptrons in the pre-trained neural network, the projected position of each point of the point cloud in the x-axis, y-axis, and z-axis directions, respectively.
  8. The method according to any one of claims 1 to 3, wherein a process of obtaining the pre-trained neural network comprises:
    taking the sample point cloud as a first input of the pre-trained neural network, taking the sample target grids corresponding to the sample point cloud in the sample gridded scene space as a first expected output corresponding to the first input, taking an actual confidence of each grid in the sample gridded scene space as a second expected output corresponding to the first input, and training the pre-trained neural network with a cross-entropy loss function.
  9. The method according to any one of claims 1 to 3, further comprising, after the acquiring of the point cloud to be processed:
    combining a coordinate feature and a channel feature of each point in the point cloud to obtain an initial feature of each point;
    extracting, through a feature extraction network, a local feature of each point from the initial feature of each point.
  10. A point cloud segmentation apparatus, comprising:
    a first acquisition module, configured to acquire a point cloud to be processed;
    a prediction module, configured to obtain, through a pre-trained neural network, a target grid corresponding to each point of the point cloud in a gridded scene space to which the point cloud belongs, wherein the pre-trained neural network is trained with a sample point cloud and sample target grids corresponding to the sample point cloud in a sample gridded scene space;
    an output module, configured to output, according to an instance category corresponding to the target grid, a point cloud corresponding to each instance, points in the same target grid having the same instance category.
  11. An electronic device, comprising a memory and a processor, the memory storing a computer program, wherein the processor implements the method according to any one of claims 1 to 9 when executing the computer program.
  12. A computer-readable storage medium storing a computer program, wherein the computer program, when executed by a processor, implements the method according to any one of claims 1 to 9.
PCT/CN2021/120919 2020-10-16 2021-09-27 Point cloud segmentation method, apparatus, device and storage medium WO2022078197A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
US18/249,205 US20230394669A1 (en) 2020-10-16 2021-09-27 Point cloud segmentation method and apparatus, device, and storage medium
JP2023521505A JP2023545423A (ja) 2020-10-16 2021-09-27 Point cloud segmentation method, apparatus, device and storage medium

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202011112395.6A CN112258512B (zh) 2020-10-16 2020-10-16 Point cloud segmentation method, apparatus, device and storage medium
CN202011112395.6 2020-10-16

Publications (1)

Publication Number Publication Date
WO2022078197A1 true WO2022078197A1 (zh) 2022-04-21

Family

ID=74245552

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/120919 WO2022078197A1 (zh) 2020-10-16 2021-09-27 Point cloud segmentation method, apparatus, device and storage medium

Country Status (4)

Country Link
US (1) US20230394669A1 (zh)
JP (1) JP2023545423A (zh)
CN (1) CN112258512B (zh)
WO (1) WO2022078197A1 (zh)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116681767A (zh) * 2023-08-03 2023-09-01 长沙智能驾驶研究院有限公司 Point cloud search method and apparatus, and terminal device
CN116721399A (zh) * 2023-07-26 2023-09-08 之江实验室 Point cloud object detection method and apparatus with quantization-aware training
CN117291845A (zh) * 2023-11-27 2023-12-26 成都理工大学 Point cloud ground filtering method and system, electronic device, and storage medium

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112258512B (zh) 2020-10-16 2023-05-26 抖音视界有限公司 Point cloud segmentation method, apparatus, device and storage medium
CN113204998B (zh) * 2021-04-01 2022-03-15 武汉大学 Airborne point cloud forest ecological estimation method and system based on individual-tree scale
CN112767424B (zh) * 2021-04-08 2021-07-13 深圳大学 Automatic partitioning method for indoor three-dimensional point cloud space
CN113392841B (zh) * 2021-06-03 2022-11-18 电子科技大学 Three-dimensional point cloud semantic segmentation method based on multi-feature information enhanced encoding
CN113420846A (zh) * 2021-08-24 2021-09-21 天津云圣智能科技有限责任公司 Point cloud segmentation method and apparatus, and terminal device
CN114882046A (zh) * 2022-03-29 2022-08-09 驭势科技(北京)有限公司 Panoptic segmentation method, apparatus, device and medium for three-dimensional point cloud data

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109741329A (zh) * 2018-11-27 2019-05-10 广东工业大学 Point cloud segmentation method for power corridor scenes
US20200151557A1 (en) * 2018-11-14 2020-05-14 Ehsan Nezhadarya Method and system for deep neural networks using dynamically selected feature-relevant points from a point cloud
CN111310765A (zh) * 2020-02-14 2020-06-19 北京经纬恒润科技有限公司 Laser point cloud semantic segmentation method and apparatus
CN112258512A (zh) * 2020-10-16 2021-01-22 北京字节跳动网络技术有限公司 Point cloud segmentation method, apparatus, device and storage medium

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105590335A (zh) * 2014-10-23 2016-05-18 富泰华工业(深圳)有限公司 Point cloud mesh refinement system and method
CN106408011B (zh) * 2016-09-09 2020-04-17 厦门大学 Deep-learning-based automatic tree classification method for laser-scanned three-dimensional point clouds
CN111339876B (zh) * 2020-02-19 2023-09-01 北京百度网讯科技有限公司 Method and apparatus for identifying region types in a scene



Also Published As

Publication number Publication date
CN112258512A (zh) 2021-01-22
CN112258512B (zh) 2023-05-26
JP2023545423A (ja) 2023-10-30
US20230394669A1 (en) 2023-12-07


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21879243

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2023521505

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21879243

Country of ref document: EP

Kind code of ref document: A1