WO2024007616A1 - Point cloud completion method, apparatus, device, and medium - Google Patents

Point cloud completion method, apparatus, device, and medium

Info

Publication number
WO2024007616A1
Authority
WO
WIPO (PCT)
Prior art keywords
point cloud
target
features
point
missing
Prior art date
Application number
PCT/CN2023/081451
Other languages
English (en)
French (fr)
Inventor
卢丽华
魏辉
李茹杨
赵雅倩
李仁刚
Original Assignee
山东海量信息技术研究院
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 山东海量信息技术研究院
Publication of WO2024007616A1

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 19/00: Manipulating 3D models or images for computer graphics
    • G06T 19/20: Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts

Definitions

  • the present application relates to the field of three-dimensional vision, and in particular to a point cloud completion method, a point cloud completion device, a corresponding electronic device, and a corresponding computer non-volatile readable storage medium.
  • 3D objects can be represented by 3D point clouds.
  • This representation method has been widely used in 3D computer vision tasks such as intelligent driving, robots, virtual reality, and augmented reality.
  • due to problems such as object occlusion and specular reflection, the original 3D point cloud collected by 3D scanning equipment such as depth cameras and radars inevitably has missing regions.
  • the point cloud completion task is dedicated to predicting the point cloud of the missing part of the object based on the collected point cloud, thereby obtaining a complete three-dimensional point cloud of the three-dimensional object.
  • point cloud completion methods use an encoder-decoder structure.
  • the encoder is designed to extract global shape features from known point clouds, and then the decoder is used to decode the global shape features to obtain a complete point cloud.
  • Some methods have proposed hierarchical decoders to decode point clouds at multiple granularities from sparse to dense, and gradually obtain dense and complete point clouds of objects.
  • existing point cloud completion methods ignore the essential relationship between the known point cloud and the missing point cloud of the object, and cannot accurately predict the missing point cloud from the known point cloud.
  • the purpose of this application is to provide a point cloud completion method, device, equipment and medium that can improve the accuracy of predicting missing point clouds.
  • the specific plan is as follows:
  • the embodiments of the present application disclose a point cloud completion method, which is applied to a preset point cloud completion network, including:
  • the first target point cloud is filtered out from all known point clouds of the target object to obtain a target point cloud set containing multi-scale features of the target, including:
  • the first feature and the second feature are added to obtain the multi-scale features to be enhanced, and feature enhancement is performed on the multi-scale features to be enhanced to obtain a target point cloud set containing the target multi-scale features.
  • selecting the first neighborhood and the second neighborhood corresponding to the first target point cloud includes:
  • feature enhancement is performed on the multi-scale features to be enhanced, including:
  • a self-attention mechanism is used to perform feature enhancement on the multi-scale features to be enhanced.
  • global features corresponding to all known point clouds are obtained based on target multi-scale features, including:
  • Maximum pooling is performed on multi-scale features of the target to obtain global features corresponding to all known point clouds.
  • maximum pooling is performed on the multi-scale features of the target to obtain global features corresponding to all known point clouds, including:
  • obtaining point cloud features of the initial missing point cloud includes:
  • the global features are fused with the three-dimensional coordinates of each initial missing point cloud to obtain the fused features, and the point cloud features of each initial missing point cloud are obtained based on the fused features.
  • target multi-scale features are used to optimize the point cloud features of the initial missing point cloud to obtain optimized point cloud features, including:
  • the point cloud features of the initial missing point cloud are used as the current point cloud features, and the target point cloud set with the smallest number of point clouds is selected from the target point cloud set as the current point cloud set;
  • the target point cloud features are used as the current point cloud features, the target point cloud set with the smallest number of point clouds is selected from the unselected target point cloud sets as the current point cloud set, and the method then jumps back to the step of using the target multi-scale features in the current point cloud set to optimize the current point cloud features to obtain the target point cloud features, until there is no unselected target point cloud set, so as to obtain the optimized point cloud features.
  • the target multi-scale features in the current point cloud set are used to optimize the current point cloud features to obtain the target point cloud features, including:
  • the method further includes:
  • the first loss and the second loss are added to obtain the target loss, and the target loss is used to train the preset point cloud completion network.
  • target loss is used to train a preset point cloud completion network, including:
  • the preset condition is that the target loss is not greater than the first preset threshold or the number of modifications is not less than the second preset threshold.
  • predicting the optimized missing point cloud of the target object based on the optimized point cloud features includes:
  • a multi-layer perceptron is used to predict the optimized missing point cloud of the target object based on the optimized point cloud features.
  • the embodiments of the present application disclose a point cloud completion device, which is applied to a preset point cloud completion network, including:
  • a screening module used to filter out the first target point cloud from all known point clouds of the target object to obtain a target point cloud set containing multi-scale features of the target;
  • the first prediction module is used to obtain the global features corresponding to all known point clouds based on the multi-scale features of the target, predict the initial missing point cloud of the target object based on the global features, and obtain the point cloud features of the initial missing point cloud;
  • the second prediction module is used to optimize the point cloud features of the initial missing point cloud using multi-scale features of the target to obtain optimized point cloud features, and predict the optimized missing point cloud of the target object based on the optimized point cloud features;
  • the complete point cloud acquisition module is used to select the second target point cloud from the known point cloud, fuse the second target point cloud and the optimized missing point cloud to obtain a sparse complete point cloud, and then obtain a dense complete point cloud of the target object based on the sparse complete point cloud.
  • the embodiments of the present application disclose an electronic device, including a processor and a memory; wherein, when the processor executes a computer program stored in the memory, the point cloud completion method disclosed above is implemented.
  • the embodiments of the present application disclose a computer non-volatile readable storage medium for storing a computer program; when the computer program is executed by a processor, the point cloud completion method disclosed above is implemented.
  • the embodiments of the present application select the first target point cloud from all known point clouds of the target object to obtain a target point cloud set containing the target multi-scale features, and obtain the global features corresponding to all known point clouds based on the target multi-scale features.
  • the embodiments of the present application predict the initial missing point cloud of the target object based on the global features, and use the target multi-scale features to optimize the point cloud features of the initial missing point cloud to obtain the optimized point cloud features. This optimization process establishes the essential relationship between the known point cloud and the missing point cloud, improving the accuracy of point cloud prediction.
  • Figure 1 is a flow chart of a point cloud completion method in some embodiments of the present application.
  • Figure 2 is a flow chart of a specific point cloud completion method in some embodiments of the present application.
  • Figure 3 is a schematic structural diagram of a feature extraction and improvement module in some embodiments of the present application.
  • Figure 4 is a flow chart of a specific point cloud completion method in some embodiments of the present application.
  • Figure 5 is a schematic flow chart of a point cloud completion method in some embodiments of the present application.
  • Figure 6 is a schematic flowchart of a specific point cloud completion method in some embodiments of the present application.
  • Figure 7 is a schematic diagram of test results of a point cloud completion network in some embodiments of the present application.
  • Figure 8 is a schematic structural diagram of a point cloud completion device in some embodiments of the present application.
  • Figure 9 is a schematic structural diagram of an electronic device in some embodiments of the present application.
  • Figure 10 is a schematic structural diagram of a computer non-volatile readable storage medium in some embodiments of the present application.
  • the current existing point cloud completion methods ignore the essential relationship between the known point cloud and the missing point cloud of the object, and cannot accurately predict the missing point cloud from the known point cloud.
  • some embodiments of the present application provide a point cloud completion solution that can improve the accuracy of predicting missing point clouds.
  • some embodiments of the present application disclose a point cloud completion method, which is applied to a preset point cloud completion network.
  • the method includes:
  • Step S11 Filter out the first target point cloud from all known point clouds of the target object to obtain a target point cloud set containing multi-scale features of the target.
  • the point cloud representation of the scanned object may be missing due to occlusion, poor lighting, etc.
  • existing point cloud completion methods only use global features and do not extract local features that contain more detailed information, which causes detailed shapes to be lost when the point cloud is decoded.
  • they also ignore the essential relationship between the known point cloud and the missing point cloud of the object, so the missing point cloud cannot be accurately predicted from the known point cloud; therefore, some embodiments of the present application propose a new point cloud completion method.
  • the first target point cloud is filtered out from all known point clouds of the target object to obtain a target point cloud set containing the target multi-scale features. It should be noted that all known point clouds of the target object can be screened with different numbers of retained points to obtain different target point cloud sets containing multi-scale features. For example, if three screenings are performed, three groups of first target point clouds can be obtained through three screenings of different degrees, yielding three different target point cloud sets; alternatively, the screening can be carried out sequentially: a first point cloud set is obtained by screening, a second point cloud set is then obtained by screening on the basis of the first point cloud set, and a third point cloud set is obtained by screening on the basis of the second point cloud set.
  • when choosing which target point cloud sets to use, not all of them need to be used. For example, when there are a first point cloud set, a second point cloud set, and a third point cloud set, all three point cloud sets may be used, the first point cloud set may be used together with the second point cloud set, or the third point cloud set may be used alone. It should be pointed out that when selecting target point cloud sets, point cloud sets with smaller numbers of points among the obtained point cloud sets are preferred; the specific number of target point cloud sets is not limited here.
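The sequential screening described above can be implemented in several ways; the sketch below is a minimal illustration that assumes farthest point sampling (FPS, which later embodiments use for seed selection) as the screening rule, with illustrative nested set sizes. Neither choice is fixed by the text above.

```python
import torch

def farthest_point_sampling(xyz: torch.Tensor, n_samples: int) -> torch.Tensor:
    """xyz: (N, 3) known point cloud. Returns indices of n_samples well-spread points."""
    n = xyz.shape[0]
    selected = torch.zeros(n_samples, dtype=torch.long)
    dist = torch.full((n,), float("inf"))
    farthest = torch.randint(0, n, (1,)).item()
    for i in range(n_samples):
        selected[i] = farthest
        d = ((xyz - xyz[farthest]) ** 2).sum(dim=1)   # squared Euclidean distance to the new point
        dist = torch.minimum(dist, d)                  # distance to the nearest already-selected point
        farthest = torch.argmax(dist).item()
    return selected

def sequential_screening(known_xyz, sizes=(512, 256, 128)):
    """Screen nested point cloud sets: each set is sampled from the previous one."""
    sets, current = [], known_xyz
    for s in sizes:
        idx = farthest_point_sampling(current, s)
        current = current[idx]
        sets.append(current)            # first, second, third point cloud set
    return sets

sets = sequential_screening(torch.randn(2048, 3))
print([tuple(s.shape) for s in sets])   # [(512, 3), (256, 3), (128, 3)]
```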
  • Step S12 Obtain the global features corresponding to all known point clouds based on the multi-scale features of the target, predict the initial missing point cloud of the target object based on the global features, and obtain the point cloud features of the initial missing point cloud.
  • max pooling is performed on the target multi-scale features to obtain the global features corresponding to all known point clouds. It should be pointed out that if there are several different target point cloud sets, the target multi-scale features of the different target point cloud sets are fused to obtain fused multi-scale features, and the fused multi-scale features are max-pooled to obtain the global features corresponding to all known point clouds.
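A minimal sketch of this step follows. The fusion operator for the multi-scale features of the different target point cloud sets is not specified above, so simple concatenation along the point axis is assumed here.

```python
import torch

def global_feature(per_set_features):
    """per_set_features: list of (Ni, C) target multi-scale feature tensors, one per
    target point cloud set. Fusion here is concatenation along the point axis (an
    assumption); max pooling over all points then yields the global feature."""
    fused = torch.cat(per_set_features, dim=0)   # (N1 + N2 + ..., C) fused multi-scale features
    return fused.max(dim=0).values               # (C,) global feature of all known point clouds

f1, f2 = torch.randn(256, 128), torch.randn(128, 128)
print(global_feature([f1, f2]).shape)            # torch.Size([128])
```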
  • a multi-layer perceptron is used to predict the optimized missing point cloud of the target object based on the optimized point cloud features, and the point cloud features of the initial missing point cloud are obtained as follows: the global features are fused with the three-dimensional coordinates of each point of the initial missing point cloud to obtain fused features, and the point cloud features of each point of the initial missing point cloud are obtained based on the fused features.
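The sketch below illustrates one way to obtain the per-point features of the initial missing point cloud as described: the global feature is copied once per point, concatenated with the point's three-dimensional coordinates, and passed through an MLP. The layer widths and feature dimensions are illustrative assumptions, not values from the patent.

```python
import torch
import torch.nn as nn

class InitialMissingFeatures(nn.Module):
    """Fuse the global feature with each point's 3-D coordinates and map the result
    through an MLP to the initial per-point features of the missing point cloud P0."""
    def __init__(self, global_dim=128, feat_dim=128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(global_dim + 3, 256), nn.ReLU(),
            nn.Linear(256, feat_dim),
        )

    def forward(self, global_feat, p0_xyz):
        # global_feat: (C,), p0_xyz: (M, 3)
        tiled = global_feat.unsqueeze(0).expand(p0_xyz.shape[0], -1)   # copy the global feature M times
        fused = torch.cat([tiled, p0_xyz], dim=1)                      # fused feature per point
        return self.mlp(fused)                                         # (M, feat_dim)

feats = InitialMissingFeatures()(torch.randn(128), torch.randn(64, 3))
print(feats.shape)   # torch.Size([64, 128])
```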
  • Step S13 Use the multi-scale features of the target to optimize the point cloud features of the initial missing point cloud to obtain the optimized point cloud features, and predict the optimized missing point cloud of the target object based on the optimized point cloud features.
  • the target multi-scale features in the target point cloud set are used to optimize the point cloud features of the initial missing point cloud to obtain optimized point cloud features. It should be pointed out that if there are several different target point cloud sets, the target multi-scale features in different target point cloud sets will be optimized to obtain the corresponding different sets of optimized point cloud features. The optimized point cloud features are then used to predict the optimized missing point cloud of the target object.
  • Step S14 Select the second target point cloud from the known point cloud, fuse the second target point cloud and the optimized missing point cloud to obtain a sparse complete point cloud, and obtain the dense complete points of the target object based on the sparse complete point cloud. cloud.
  • the second target point cloud is selected from the known point cloud, and the second target point cloud and the optimized missing point cloud are fused to obtain a sparse complete point cloud; a folding decoding network is then used to obtain the dense complete point cloud of the target object from the sparse complete point cloud. Specifically, the points of the sparse complete point cloud are used as center points and decoded to obtain the dense complete point cloud. It should be pointed out that the folding decoding network is implemented using the existing FoldNet.
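The patent relies on the existing FoldNet for this step. The sketch below is only a generic folding-style decoder that deforms a small 2-D grid around each sparse center point to illustrate how a dense point cloud can be decoded from center points; it is not the FoldNet architecture itself, and all dimensions are assumptions.

```python
import torch
import torch.nn as nn

class FoldingDecoder(nn.Module):
    """Minimal folding-style decoder: each sparse complete point acts as a patch center,
    and a fixed 2-D grid is deformed by an MLP into a local patch of 3-D points."""
    def __init__(self, feat_dim=128, grid_size=4):
        super().__init__()
        g = torch.linspace(-0.05, 0.05, grid_size)
        self.grid = torch.stack(torch.meshgrid(g, g, indexing="ij"), dim=-1).reshape(-1, 2)
        self.fold = nn.Sequential(
            nn.Linear(feat_dim + 3 + 2, 256), nn.ReLU(),
            nn.Linear(256, 3),
        )

    def forward(self, sparse_xyz, sparse_feat):
        # sparse_xyz: (S, 3) center points, sparse_feat: (S, C) per-center features
        S, K = sparse_xyz.shape[0], self.grid.shape[0]
        grid = self.grid.unsqueeze(0).expand(S, K, 2)
        centers = sparse_xyz.unsqueeze(1).expand(S, K, 3)
        feats = sparse_feat.unsqueeze(1).expand(S, K, sparse_feat.shape[1])
        offsets = self.fold(torch.cat([feats, centers, grid], dim=-1))   # per-grid-point offsets
        return (centers + offsets).reshape(-1, 3)                        # dense complete point cloud

dense = FoldingDecoder()(torch.randn(64, 3), torch.randn(64, 128))
print(dense.shape)   # torch.Size([1024, 3]), i.e. 64 centers x 16 grid points
```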
  • after the dense complete point cloud of the target object is obtained based on the sparse complete point cloud, the preset point cloud completion network needs to be trained to refine the network. Specifically: the real complete point cloud of the target object is determined, the first loss between the real complete point cloud and the sparse complete point cloud and the second loss between the real complete point cloud and the dense complete point cloud are calculated, the first loss and the second loss are added to obtain the target loss, and the target loss is used to train the preset point cloud completion network.
  • CD (Chamfer Distance) is used as the loss function to calculate the first loss and the second loss.
  • Ls and Lc are the first loss and the second loss respectively, L is the target loss, Pt represents the real complete point cloud, Ps represents the sparse complete point cloud, Pc represents the dense complete point cloud, g represents the three-dimensional coordinates of any point of the sparse complete point cloud or of the dense complete point cloud, and y represents the three-dimensional coordinates of a point of the real complete point cloud.
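The explicit formulas for Ls and Lc are not reproduced in this text. The sketch below uses the commonly used squared, mean-reduced Chamfer Distance, which is consistent with the variable definitions above; the exact normalization used in the patent is an assumption.

```python
import torch

def chamfer_distance(p: torch.Tensor, q: torch.Tensor) -> torch.Tensor:
    """Bidirectional Chamfer Distance between point sets p (N, 3) and q (M, 3).
    The squared-distance, mean-reduced form shown here is a common choice; the
    patent only names CD without reproducing the exact formula."""
    d = torch.cdist(p, q)                                            # (N, M) pairwise distances
    return (d.min(dim=1).values ** 2).mean() + (d.min(dim=0).values ** 2).mean()

# Target loss L = Ls + Lc, with both losses computed against the real complete cloud Pt.
Pt, Ps, Pc = torch.randn(1024, 3), torch.randn(256, 3), torch.randn(2048, 3)
L = chamfer_distance(Ps, Pt) + chamfer_distance(Pc, Pt)
```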
  • the target loss is used to train the preset point cloud completion network, specifically: the target loss is used and each network parameter in the point cloud completion network is modified based on the gradient descent algorithm; the modified point cloud completion network is then used to calculate a new loss, which is taken as the target loss; the method jumps back to the step of using the target loss and modifying each network parameter in the point cloud completion network based on the gradient descent algorithm until the preset condition is met, and the finally modified point cloud completion network is then saved.
  • the preset condition is that the target loss is not greater than the first preset threshold or the number of modifications is not less than the second preset threshold.
  • this application uses gradient descent to minimize the target loss and train the point cloud completion network end-to-end to predict the real complete point cloud of the three-dimensional object.
  • when the training error of the network reaches a specified small value or the number of iterations reaches the specified maximum value, training ends; the network and its parameters are saved for testing.
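A minimal training-loop sketch matching the stated stopping rule is shown below. The network, data loader, loss function, optimizer, and learning rate are placeholders, not details taken from the patent.

```python
import torch

def train(completion_net, loader, loss_fn, max_iters=100_000, loss_threshold=1e-3):
    """Hypothetical end-to-end training loop: gradient descent on the target loss
    until the loss is no greater than a threshold or the number of parameter
    updates reaches the specified maximum."""
    opt = torch.optim.Adam(completion_net.parameters(), lr=1e-4)   # optimizer choice assumed
    it = 0
    for known_pc, real_complete_pc in loader:                      # loader yields (input, ground truth)
        sparse_pc, dense_pc = completion_net(known_pc)             # assumed to return both outputs
        loss = loss_fn(sparse_pc, real_complete_pc) + loss_fn(dense_pc, real_complete_pc)
        opt.zero_grad()
        loss.backward()                                            # gradient descent on the target loss
        opt.step()
        it += 1
        if loss.item() <= loss_threshold or it >= max_iters:
            break
    torch.save(completion_net.state_dict(), "completion_net.pt")   # save the network for testing
```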
  • after the finally modified point cloud completion network is obtained, the network can be further tested. Specifically, taking the categories "table", "ship", and "airplane" as examples, the incomplete known point clouds of the ShapeNet test set are input into the finally modified point cloud completion network as the test set, and the output test results are the complete point clouds of the objects.
  • some embodiments of the present application filter out the first target point cloud from all known point clouds of the target object to obtain a target point cloud set containing the target multi-scale features; obtain the global features corresponding to all known point clouds based on the target multi-scale features, predict the initial missing point cloud of the target object based on the global features, and obtain the point cloud features of the initial missing point cloud; use the target multi-scale features to optimize the point cloud features of the initial missing point cloud to obtain the optimized point cloud features, and predict the optimized missing point cloud of the target object based on the optimized point cloud features; select the second target point cloud from the known point cloud, fuse the second target point cloud and the optimized missing point cloud to obtain a sparse complete point cloud, and then obtain a dense complete point cloud of the target object based on the sparse complete point cloud.
  • some embodiments of the present application screen out the first target point cloud from known point clouds.
  • using the first target point cloud for subsequent operations takes the local detailed features of the target object into account, improving the accuracy of point cloud prediction; some embodiments of the present application predict the initial missing point cloud of the target object based on the global features, and use the target multi-scale features to optimize the point cloud features of the initial missing point cloud to obtain the optimized point cloud features. This optimization process establishes the essential relationship between the known point cloud and the missing point cloud, improving the accuracy of point cloud prediction; training the network with the target loss helps refine the network and obtain a more accurate point cloud completion network.
  • some embodiments of the present application disclose a specific point cloud completion method, which is applied to a preset point cloud completion network.
  • the method includes:
  • Step S21 Filter out the first target point cloud from all known point clouds of the target object; select the first neighborhood and the second neighborhood corresponding to the first target point cloud, and fuse the original features of each first target point cloud in the first neighborhood and in the second neighborhood, respectively, to obtain the first features and the second features.
  • the specific process of selecting the first neighborhood and the second neighborhood corresponding to the first target point cloud is: calculating the Euclidean distance corresponding to the point cloud coordinates of the first target point cloud, and based on the Euclidean distance Select the first neighborhood and the second neighborhood corresponding to the first target point cloud.
  • Step S22 Add the first feature and the second feature to obtain the multi-scale features to be enhanced, and perform feature enhancement on the multi-scale features to be enhanced to obtain a target point cloud set containing the target multi-scale features.
  • the first features and the second features are added to obtain the multi-scale features to be enhanced, and a self-attention mechanism is used to perform feature enhancement on them to obtain a target point cloud set containing the target multi-scale features.
  • the point cloud feature extraction module is implemented by improving the existing EdgeConv. Specifically, for each point in the point cloud, two neighborhoods of different sizes, K1 and K2, are selected based on the Euclidean distance of its coordinates; the existing EdgeConv algorithm is used in each neighborhood to fuse the features of the points within that neighborhood, and the fused features from the two neighborhoods are added to obtain the multi-scale feature of the current point. The point cloud feature enhancement module is then implemented using a self-attention mechanism: for the obtained multi-scale point features, the existing self-attention mechanism is used to establish the relationships between points in the known point cloud and fuse the contextual features of the points, improving the semantic and geometric expressiveness of the multi-scale point cloud features.
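The following sketch illustrates the described two-neighborhood EdgeConv feature extraction followed by self-attention enhancement. The EdgeConv and attention layers are simplified single-layer versions, and the neighborhood sizes K1, K2 and feature widths are illustrative assumptions.

```python
import torch
import torch.nn as nn

def knn(xyz, k):
    """Indices of the k nearest neighbors of each point by Euclidean distance."""
    d = torch.cdist(xyz, xyz)                        # (N, N)
    return d.topk(k, dim=1, largest=False).indices   # (N, k)

class EdgeConv(nn.Module):
    """Simplified EdgeConv: a per-edge MLP on [x_i, x_j - x_i], then max over neighbors."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(2 * in_dim, out_dim), nn.ReLU())

    def forward(self, feats, idx):
        center = feats.unsqueeze(1).expand(-1, idx.shape[1], -1)   # (N, k, C)
        neighbor = feats[idx]                                      # (N, k, C)
        edge = torch.cat([center, neighbor - center], dim=-1)
        return self.mlp(edge).max(dim=1).values                    # (N, out_dim)

class MultiScaleFeature(nn.Module):
    """Two neighborhoods of sizes K1 and K2; the two EdgeConv outputs are added, and a
    single-head self-attention layer then enhances the multi-scale features."""
    def __init__(self, in_dim=3, dim=128, k1=8, k2=16):
        super().__init__()
        self.k1, self.k2 = k1, k2
        self.conv1 = EdgeConv(in_dim, dim)
        self.conv2 = EdgeConv(in_dim, dim)
        self.attn = nn.MultiheadAttention(dim, num_heads=1, batch_first=True)

    def forward(self, xyz):
        f1 = self.conv1(xyz, knn(xyz, self.k1))
        f2 = self.conv2(xyz, knn(xyz, self.k2))
        f = (f1 + f2).unsqueeze(0)                                 # add the two scales -> (1, N, dim)
        enhanced, _ = self.attn(f, f, f)                           # fuse contextual features
        return enhanced.squeeze(0)                                 # (N, dim) target multi-scale features

feats = MultiScaleFeature()(torch.randn(256, 3))
print(feats.shape)   # torch.Size([256, 128])
```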
  • the feature extraction and enhancement module uses the attention mechanism to learn multi-scale features for the seed points; it can learn a seed point set from the known point cloud and learn multi-scale features for the seed points, providing the basis for optimizing the features of the object's missing point cloud; in addition, the feature extraction and enhancement module fully extracts the local and global features of the known point cloud, which is beneficial for learning the features of the missing point cloud.
  • Step S23 Obtain the global features corresponding to all known point clouds based on the multi-scale features of the target, predict the initial missing point cloud of the target object based on the global features, and obtain the point cloud features of the initial missing point cloud.
  • Step S24 Use the multi-scale features of the target to optimize the point cloud features of the initial missing point cloud to obtain the optimized point cloud features, and predict the optimized missing point cloud of the target object based on the optimized point cloud features.
  • the target multi-scale features in the target point cloud set are used to optimize the point cloud features of the initial missing point cloud to obtain optimized point cloud features. It should be pointed out that if there are several different target point cloud sets, the target multi-scale features in different target point cloud sets will be optimized to obtain the corresponding different sets of optimized point cloud features. The optimized point cloud features are then used to predict the optimized missing point cloud of the target object.
  • Step S25 Select the second target point cloud from the known point cloud, fuse the second target point cloud and the optimized missing point cloud to obtain a sparse complete point cloud, and obtain the dense complete points of the target object based on the sparse complete point cloud. cloud.
  • step S25 regarding the specific process of the above-mentioned step S25, reference may be made to the corresponding content disclosed in the foregoing embodiments, which will not be described again here.
  • some embodiments of the present application filter out the first target point cloud from all known point clouds of the target object; select the first neighborhood and the second neighborhood corresponding to the first target point cloud, and fuse the original features of each first target point cloud in the first neighborhood and the second neighborhood, respectively, to obtain the first features and the second features; add the first features and the second features to obtain the multi-scale features to be enhanced, and perform feature enhancement on them to obtain a target point cloud set containing the target multi-scale features; obtain the global features corresponding to all known point clouds based on the target multi-scale features, predict the initial missing point cloud of the target object based on the global features, and then obtain the point cloud features of the initial missing point cloud; use the target multi-scale features to optimize the point cloud features of the initial missing point cloud to obtain the optimized point cloud features, and predict the optimized missing point cloud of the target object based on the optimized point cloud features; select the second target point cloud from the known point cloud, fuse the second target point cloud and the optimized missing point cloud to obtain a sparse complete point cloud, and then obtain a dense complete point cloud of the target object based on the sparse complete point cloud.
  • the embodiments of this application use the feature extraction and enhancement module, which facilitates learning the target multi-scale features and provides a basis for optimizing the features of the object's missing point cloud; the embodiments of this application predict the initial missing point cloud of the target object based on the global features and use the target multi-scale features to optimize the point cloud features of the initial missing point cloud to obtain the optimized point cloud features. This optimization process establishes the essential relationship between the known point cloud and the missing point cloud, improving the accuracy of point cloud prediction.
  • some embodiments of the present application disclose a specific point cloud completion method, which is applied to a preset point cloud completion network.
  • the method includes:
  • Step S31 Filter out the first target point cloud from all known point clouds of the target object to obtain a target point cloud set containing multi-scale features of the target.
  • step S31 regarding the specific process of step S31, reference may be made to the corresponding content disclosed in the foregoing embodiments, which will not be described again here.
  • Step S32 Obtain global features corresponding to all known point clouds based on multi-scale features of the target, predict the initial missing point cloud of the target object based on the global features, and obtain point cloud features of the initial missing point cloud.
  • step S32 regarding the specific process of the above-mentioned step S32, reference may be made to the corresponding content disclosed in the foregoing embodiments, which will not be described again here.
  • Step S33 If there are several different target point cloud sets, use the point cloud feature of the initial missing point cloud as the current point cloud feature, and select the target point cloud set with the smallest number of point clouds from the target point cloud set as the current point cloud set.
  • step S33 regarding the specific process of step S33, reference may be made to the corresponding content disclosed in the foregoing embodiments, which will not be described again here.
  • Step S34 Use the target multi-scale features in the current point cloud set to optimize the current point cloud features to obtain the target point cloud features.
  • a third target point cloud is selected from the current point cloud set, and the third target point cloud is used to perform point cloud voting to obtain the feature offset of the initial missing point cloud corresponding to the current point cloud feature; Add the current point cloud feature and the corresponding feature offset to optimize the current point cloud feature to obtain the target point cloud feature of the initial missing point cloud.
  • a third target point cloud is selected from the current point cloud set, and the third target point cloud is used to perform point cloud voting to obtain the feature offset of the initial missing point cloud corresponding to the current point cloud feature, It should be noted that the third target point cloud and the initial missing point cloud may be the same or different, and are not specifically limited here.
  • the third target point cloud in the target point cloud set is directly used for point cloud voting to obtain the feature offsets corresponding to the initial missing point clouds.
  • the third target point cloud is selected from the target point cloud set, and the third target point cloud is used to perform point cloud voting to obtain the feature offset of the initial missing point cloud corresponding to the point cloud features of the initial missing point cloud;
  • the point cloud features of the missing point cloud and the corresponding feature offsets are added to optimize the point cloud features of the initial missing point cloud to obtain the optimized point cloud features of the initial missing point cloud.
  • the deep Hough voting mechanism is mainly used for the point cloud voting, and the feature offsets of the initial missing point cloud of the target object are obtained by voting. It should be pointed out that the point cloud voting process is completed by the point cloud voting module, which makes it possible to predict the point cloud of the missing part of the object directly from the known point cloud of the object.
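A minimal sketch of such a voting module is shown below: the features of M sampled seed points are mapped by an MLP to feature offsets, which are added to the current features of the M initial missing points. The one-to-one pairing of seeds and missing points and the MLP widths are assumptions for illustration.

```python
import torch
import torch.nn as nn

class PointCloudVoting(nn.Module):
    """Sketch of a point cloud voting module: M seed points (e.g. selected by FPS from
    the current target point cloud set) each vote a feature offset through an MLP;
    the offsets are added to the current features of the M initial missing points."""
    def __init__(self, feat_dim=128):
        super().__init__()
        self.vote_mlp = nn.Sequential(
            nn.Linear(feat_dim, 256), nn.ReLU(),
            nn.Linear(256, feat_dim),
        )

    def forward(self, current_feats, seed_feats):
        # current_feats: (M, C) features of the initial missing point cloud
        # seed_feats:    (M, C) features of M seeds sampled from the current set
        offsets = self.vote_mlp(seed_feats)        # feature offsets obtained by voting
        return current_feats + offsets             # optimized (target) point cloud features

out = PointCloudVoting()(torch.randn(64, 128), torch.randn(64, 128))
print(out.shape)   # torch.Size([64, 128])
```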
  • Step S35 Use the target point cloud features as the current point cloud features, select the target point cloud set with the smallest number of point clouds from the unselected target point cloud sets as the current point cloud set, and jump back to the step of using the target multi-scale features in the current point cloud set to optimize the current point cloud features to obtain the target point cloud features, until there is no unselected target point cloud set, so as to obtain the optimized point cloud features.
  • all target point cloud sets participate in the point cloud voting process to improve accuracy and obtain optimized point cloud features.
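The hierarchical use of all target point cloud sets could be driven as in the sketch below, which visits the sets from smallest to largest and applies one voting module per level; the seed-sampling helper and the data layout are assumptions for illustration.

```python
# Hypothetical driver for the hierarchical refinement: target point cloud sets are
# visited from the smallest to the largest, and each level's voting module refines
# the current features of the initial missing point cloud.
def hierarchical_voting(initial_feats, point_cloud_sets, voting_modules, sample_seeds):
    """point_cloud_sets: list of (set_feats, set_xyz) tuples; sample_seeds(pc_set, M)
    returns M seed features (e.g. via FPS). Both helpers are assumed for this sketch."""
    current = initial_feats
    ordered = sorted(point_cloud_sets, key=lambda s: s[0].shape[0])   # smallest set first
    for level, pc_set in enumerate(ordered):
        seeds = sample_seeds(pc_set, current.shape[0])
        current = voting_modules[level](current, seeds)               # optimize current features
    return current                                                    # optimized point cloud features
```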
  • Step S36 Predict the optimized missing point cloud of the target object based on the optimized point cloud features.
  • step S36 regarding the specific process of the above-mentioned step S36, reference may be made to the corresponding content disclosed in the foregoing embodiments, which will not be described again here.
  • Step S37 Select the second target point cloud from the known point cloud, fuse the second target point cloud and the optimized missing point cloud to obtain a sparse complete point cloud, and obtain the dense complete points of the target object based on the sparse complete point cloud. cloud.
  • some embodiments of the present application filter out the first target point cloud from all known point clouds of the target object to obtain a target point cloud set containing the target multi-scale features; obtain the global features corresponding to all known point clouds based on the target multi-scale features, predict the initial missing point cloud of the target object based on the global features, and then obtain the point cloud features of the initial missing point cloud; if there are several different target point cloud sets, use the point cloud features of the initial missing point cloud as the current point cloud features and select the target point cloud set with the smallest number of point clouds from the target point cloud sets as the current point cloud set; use the target multi-scale features in the current point cloud set to optimize the current point cloud features to obtain the target point cloud features; use the target point cloud features as the current point cloud features, select the target point cloud set with the smallest number of point clouds from the unselected target point cloud sets as the current point cloud set, and then jump back to the step of using the target multi-scale features in the current point cloud set to optimize the current point cloud features, until there is no unselected target point cloud set, so as to obtain the optimized point cloud features; predict the optimized missing point cloud of the target object based on the optimized point cloud features; select the second target point cloud from the known point cloud, fuse the second target point cloud and the optimized missing point cloud to obtain a sparse complete point cloud, and then obtain a dense complete point cloud of the target object based on the sparse complete point cloud.
  • the embodiments of this application use the deep Hough voting mechanism to perform point cloud voting to optimize the initial missing point cloud. This optimization process establishes the essential relationship between the known point cloud and the missing point cloud and improves the accuracy of point cloud prediction; in addition, all target point cloud sets participate in the point cloud voting, which improves the accuracy of the voting.
  • the known point cloud of the object is processed by the pyramid seed point learning network and the hierarchical missing point cloud voting network to obtain the missing point cloud of the object, and the complete point cloud of the object is then obtained from the known point cloud and the missing point cloud of the object.
  • the known point cloud of the object is input into the network, and pyramid seed point learning is used to learn multiple levels of seed point sets, and multi-scale features are assigned to each seed point.
  • the corresponding seed point set is used to vote to obtain the missing point cloud.
  • the known point clouds are fused to obtain a complete point cloud of the object.
  • some embodiments of this application design a voting-based point cloud completion method, which selects seed points from the known point cloud and uses the deep Hough voting mechanism to vote to obtain the missing point cloud of the object; by fusing it with the known point cloud, the complete point cloud of the object can be obtained;
  • a feature extraction and enhancement module is designed in the pyramid seed point learning network: the seed points are selected from the known point cloud, and the feature extraction and enhancement module then uses an attention mechanism to learn multi-scale features for the seed points.
  • multiple feature extraction and enhancement modules form the pyramid seed point generation part, which can learn multiple seed point sets at multiple levels; a point cloud voting module is designed in the hierarchical missing point cloud voting network, which uses seed point voting to predict the missing point cloud of the object directly from the object's known point cloud. Multiple point cloud voting modules form the hierarchical missing point cloud voting part, which can generate votes at multiple levels using the seed point set of the corresponding level and predict the missing point cloud; the predicted point cloud is used as center points, and a folding-based decoder decodes them to obtain the dense point cloud of the missing part of the object, thereby obtaining the complete point cloud of the object.
  • Figure 6 is a schematic flowchart of a specific point cloud completion method disclosed in some embodiments of the present application, which specifically includes the following steps. Step one: construct a pyramid seed point learning network and learn multiple levels of seed point sets from the known point cloud of the object. Step two: construct a hierarchical missing point cloud voting network to predict the missing point cloud of the object and obtain a sparse complete point cloud. Step three: construct a folding decoding network to predict the dense complete point cloud of the object. In addition, there is step four: set the loss function and train the voting-based point cloud completion network.
  • the data set ShapeNet is used as an example to further explain this application.
  • the ShapeNet data set contains 8 categories such as chairs, lamps, and boats, with a total of 30,974 three-dimensional point cloud models; 100 and 150 models are selected as the validation set and the test set, respectively, and the remaining models serve as the training set. For each 3D point cloud model, incomplete 3D point clouds are randomly sampled from 8 viewing angles as the incomplete known point clouds of the object. Taking the category "chair" as an example, the point cloud completion method is explained in detail.
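The exact procedure for sampling incomplete point clouds from 8 viewing angles is not given here; the sketch below shows one common way such partial inputs are synthesized (dropping the points on the side facing a randomly chosen viewpoint) and is illustrative only.

```python
import torch

def make_partial(complete_xyz: torch.Tensor, n_drop: int) -> torch.Tensor:
    """Illustrative only: synthesize an incomplete known point cloud by removing the
    n_drop points closest to a random viewpoint direction. The patent states that
    incomplete clouds are sampled from 8 viewing angles but does not give the exact
    procedure, so this is one common approximation."""
    view = torch.nn.functional.normalize(torch.randn(3), dim=0)   # random viewpoint direction
    proj = complete_xyz @ view                                     # projection along the view direction
    keep = proj.argsort()[: complete_xyz.shape[0] - n_drop]        # drop the points nearest the viewer
    return complete_xyz[keep]

partial = make_partial(torch.randn(2048, 3), n_drop=512)
print(partial.shape)   # torch.Size([1536, 3])
```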
  • step one is specifically: (1) construct a point cloud feature extraction and enhancement module to learn multi-scale features for the seed points. For each point in the point cloud, two neighborhoods of different sizes, K1 and K2, are selected based on the Euclidean distance of its coordinates; the existing EdgeConv algorithm is used in each neighborhood to fuse the features of the points within that neighborhood, and the fused features from the two neighborhoods are added to obtain the multi-scale feature of the current point.
  • the point cloud feature enhancement module is implemented using a self-attention mechanism: for the obtained multi-scale point features, the existing self-attention mechanism is used to establish the relationships between points in the known point cloud and fuse the contextual features of the points, improving the semantic and geometric expressiveness of the multi-scale point cloud features.
  • step two uses the global features of the known point cloud to predict the initial missing point cloud P0 of the object. First, the features of the seed point sets S1 and S2 are fused and max-pooled to obtain the global feature Fg of the known point cloud; an MLP (multi-layer perceptron) is then used to predict the initial missing point cloud P0 of the object, which contains M points. The global feature is copied M times and fused with the three-dimensional coordinates of the points in P0, and the initial feature of each point in P0 is then obtained through an MLP. (2) Construct a point cloud voting module to optimize the initial features of the points in P0; the point cloud voting module is implemented using MLPs.
  • MLP: multi-layer perceptron.
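A minimal sketch of the coarse prediction of P0 from the global feature Fg is shown below: an MLP maps the global feature directly to M x 3 coordinates. M and the layer widths are illustrative assumptions, not values from the patent.

```python
import torch
import torch.nn as nn

class CoarseMissingPointPredictor(nn.Module):
    """Predict the initial missing point cloud P0 from the global feature: an MLP maps
    the global feature to M x 3 coordinates, reshaped into M three-dimensional points."""
    def __init__(self, global_dim=128, m_points=64):
        super().__init__()
        self.m = m_points
        self.mlp = nn.Sequential(
            nn.Linear(global_dim, 512), nn.ReLU(),
            nn.Linear(512, 3 * m_points),
        )

    def forward(self, global_feat):                        # global_feat: (C,)
        return self.mlp(global_feat).reshape(self.m, 3)    # P0: (M, 3)

p0 = CoarseMissingPointPredictor()(torch.randn(128))
print(p0.shape)   # torch.Size([64, 3])
```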
  • based on the seed point sets obtained in step one, FPS is first used to obtain M seed points, and the feature of each seed point is used to vote to obtain the feature offsets of the points in P0; these offsets are added to the initial features of the points in P0 to obtain the optimized features of the points in P0. (3) Construct a hierarchical missing point cloud voting network to predict the sparse complete point cloud of the object. Two point cloud voting modules are used to construct the hierarchical missing point cloud voting network; the seed point sets S1 and S2 obtained in step one are input into the two point cloud voting modules respectively, the features of the points in P0 are optimized at two levels in turn, and the features of the missing point cloud are finally obtained.
  • an MLP is used to predict the corrected missing point cloud P1 of the object, which contains M points.
  • FPS is used to select M points from the known point cloud of the object and fuse them with the predicted missing point cloud P1 to obtain the sparse complete point cloud Ps of the object.
  • the third specific step is: construct a folding decoding network to predict the dense and complete point cloud of the object.
  • the folding decoding network is implemented using the existing FoldNet. Taking the sparse complete point cloud Ps of the object as the center points and decoding it through the folding decoding network, the dense complete point cloud Pc of the object is obtained.
  • the specific steps of step four are: (1) Set the loss function and train the point cloud completion network.
  • CD is used as the loss function to calculate the distance between the predicted point cloud and the true value of the point cloud.
  • the loss is calculated on the predicted sparse complete point cloud Ps and the dense complete point cloud Pc respectively.
  • the sum of the two parts of the loss is used as the final loss of the network.
  • gradient descent is used to minimize the final loss, and the point cloud completion network is trained end to end to predict the real complete point cloud of the three-dimensional object.
  • when the training error of the network reaches a specified small value or the number of iterations reaches the specified maximum value, training ends; the network and its parameters are saved for testing. After that, a test set can also be used to test the network.
  • some embodiments of this application propose a voting-based point cloud completion method, which can use a simple voting mechanism to predict the complete point cloud of an object from coarse to fine.
  • a feature extraction and improvement module is proposed, which can learn a seed point set from known point clouds and learn multi-scale features for the seed points, providing a basis for optimizing the missing point cloud features of objects.
  • a point cloud voting module is proposed that can learn the essential relationship between the known point cloud and the missing point cloud of the object, thereby using the seed point features to directly optimize the characteristics of the missing point cloud.
  • Multiple point cloud voting modules can generate votes at multiple levels using seed point sets of corresponding levels to predict and optimize missing point clouds of objects from coarse to fine.
  • the known point cloud is fused to obtain the complete point cloud of the object.
  • This method allows the algorithm to focus on predicting the missing point cloud of the object without losing the known point cloud.
  • the voting-based point cloud completion algorithm can algorithmically make up for the shortcomings of three-dimensional scanning equipment such as depth cameras and obtain high-quality three-dimensional point clouds representing three-dimensional objects, providing a basis for the development of technologies such as virtual reality and the metaverse; in addition, the voting-based point cloud completion method proposed in some embodiments of this application is not only suitable for completing incomplete object point clouds obtained by three-dimensional scanning equipment, but is also suitable for optimizing incomplete reconstruction results obtained by point-cloud-based three-dimensional reconstruction algorithms.
  • some embodiments of the present application disclose a point cloud completion device, which is applied to a preset point cloud completion network, including:
  • the screening module 11 is used to screen out the first target point cloud from all known point clouds of the target object to obtain a target point cloud set containing multi-scale features of the target;
  • the first prediction module 12 is used to obtain global features corresponding to all known point clouds based on multi-scale features of the target, predict the initial missing point cloud of the target object based on the global features, and obtain point cloud features of the initial missing point cloud;
  • the second prediction module 13 is used to optimize the point cloud features of the initial missing point cloud using multi-scale features of the target to obtain optimized point cloud features, and predict the optimized missing point cloud of the target object based on the optimized point cloud features;
  • the complete point cloud acquisition module 14 is used to select the second target point cloud from the known point cloud, fuse the second target point cloud and the optimized missing point cloud to obtain a sparse complete point cloud, and obtain a dense complete point cloud of the target object based on the sparse complete point cloud.
  • some embodiments of the present application filter out the first target point cloud from all known point clouds of the target object to obtain a target point cloud set containing the target multi-scale features; obtain the global features corresponding to all known point clouds based on the target multi-scale features, predict the initial missing point cloud of the target object based on the global features, and obtain the point cloud features of the initial missing point cloud; use the target multi-scale features to optimize the point cloud features of the initial missing point cloud to obtain the optimized point cloud features, and predict the optimized missing point cloud of the target object based on the optimized point cloud features; select the second target point cloud from the known point cloud, fuse the second target point cloud and the optimized missing point cloud to obtain a sparse complete point cloud, and then obtain a dense complete point cloud of the target object based on the sparse complete point cloud.
  • some embodiments of the present application screen out the first target point cloud from known point clouds.
  • Using the first target point cloud for subsequent operations takes into account the local detailed features of the target object, improving the accuracy of point cloud prediction;
  • some embodiments of the present application predict the initial missing point cloud of the target object based on the global features, and use the target multi-scale features to optimize the point cloud features of the initial missing point cloud to obtain the optimized point cloud features. This optimization process establishes the essential relationship between the known point cloud and the missing point cloud, improving the accuracy of point cloud prediction.
  • Figure 9 is a structural diagram of an electronic device 20 according to an exemplary embodiment; the content in the figure should not be regarded as any limitation on the scope of application of the present application.
  • FIG. 9 is a schematic structural diagram of an electronic device 20 provided by an embodiment of the present application.
  • the electronic device 20 may specifically include: at least one processor 21, at least one memory 22, a power supply 23, an input and output interface 24, a communication interface 25 and a communication bus 26.
  • the memory 22 is used to store a computer program, and the computer program is loaded and executed by the processor 21 to implement the relevant steps of the point cloud completion method disclosed in any of the foregoing embodiments.
  • the power supply 23 is used to provide working voltage for each hardware device on the electronic device 20;
  • the communication interface 25 can create a data transmission channel between the electronic device 20 and external devices, and the communication protocol it follows may be any communication protocol that can be applied to the technical solution of this application, which is not specifically limited here;
  • the input and output interface 24 is used to obtain external input data or to output data to the outside, and its specific interface type can be selected according to specific application needs, which is not specifically limited here.
  • the memory 22, as a carrier for resource storage can be a read-only memory, a random access memory, a magnetic disk or an optical disk, etc.
  • the memory 22 can include a random access memory as a running memory and a non-volatile memory for external memory storage.
  • the storage resources on the memory include operating system 221, computer program 222, etc., and the storage method can be short-term storage or permanent storage.
  • the operating system 221 is used to manage and control each hardware device and computer program 222 on the electronic device 20 on the source host.
  • the operating system 221 can be Windows, Unix, Linux, etc.
  • the computer program 222 may further include computer programs that can be used to complete other specific tasks.
  • the input and output interface 24 may specifically include, but is not limited to, a USB interface, a hard disk reading interface, a serial interface, a voice input interface, a fingerprint input interface, etc.
  • some embodiments of the present application also disclose a computer non-volatile readable storage medium.
  • the computer non-volatile readable storage medium 10 is used to store a computer program 110; when the computer program 110 is executed by a processor, the point cloud completion method disclosed above is implemented.
  • the computer non-volatile readable storage media mentioned here include random access memory (RAM), memory, read-only memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, hard disks, removable disks, CD-ROMs, or any other form of storage medium known in the technical field.

Landscapes

  • Engineering & Computer Science (AREA)
  • Architecture (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Generation (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The present application discloses a point cloud completion method, apparatus, device, and medium, relating to the field of three-dimensional vision. The method includes: filtering out a first target point cloud from all known point clouds of a target object to obtain a target point cloud set containing target multi-scale features; determining global features corresponding to all known point clouds based on the target multi-scale features, predicting an initial missing point cloud of the target object based on the global features, and obtaining point cloud features of the initial missing point cloud; optimizing the point cloud features of the initial missing point cloud using the target multi-scale features to obtain optimized point cloud features, and predicting an optimized missing point cloud of the target object based on the optimized point cloud features; selecting a second target point cloud from the known point clouds, fusing the second target point cloud and the optimized missing point cloud to obtain a sparse complete point cloud, and obtaining a dense complete point cloud of the target object based on the sparse complete point cloud. The optimization process establishes the essential relationship between the known point cloud and the missing point cloud, improving the accuracy of point cloud prediction.

Description

Point cloud completion method, apparatus, device, and medium
CROSS-REFERENCE TO RELATED APPLICATIONS
This application claims priority to the Chinese patent application filed with the Chinese Patent Office on July 6, 2022, with application number 202210785669.0 and entitled "Point cloud completion method, apparatus, device, and medium", the entire contents of which are incorporated herein by reference.
TECHNICAL FIELD
The present application relates to the field of three-dimensional vision, and in particular to a point cloud completion method, a point cloud completion apparatus, a corresponding electronic device, and a corresponding computer non-volatile readable storage medium.
BACKGROUND
At present, in three-dimensional vision, a three-dimensional object can be represented by a three-dimensional point cloud, and this representation has been widely used in 3D computer vision tasks such as intelligent driving, robotics, virtual reality, and augmented reality. However, due to problems such as object occlusion and specular reflection, the original three-dimensional point clouds collected by 3D scanning devices such as depth cameras and radars inevitably have missing regions. The point cloud completion task is dedicated to predicting the point cloud of the missing part of an object from the collected point cloud, so as to obtain a complete three-dimensional point cloud of the three-dimensional object.
Most existing point cloud completion methods adopt an encoder-decoder structure: an encoder is designed to extract global shape features from the known point cloud, and a decoder is then used to decode the global shape features into a complete point cloud. Recently, some methods have proposed hierarchical decoders that decode point clouds at multiple granularities from sparse to dense, gradually obtaining a dense and complete point cloud of the object. However, existing point cloud completion methods ignore the essential relationship between the known point cloud and the missing point cloud of an object and cannot accurately predict the missing point cloud from the known point cloud.
SUMMARY
In view of this, the purpose of the present application is to provide a point cloud completion method, apparatus, device, and medium that can improve the accuracy of predicting missing point clouds. The specific solution is as follows:
In some embodiments of the present application, a point cloud completion method is disclosed, applied to a preset point cloud completion network, including:
filtering out a first target point cloud from all known point clouds of a target object to obtain a target point cloud set containing target multi-scale features;
obtaining global features corresponding to all known point clouds based on the target multi-scale features, predicting an initial missing point cloud of the target object based on the global features, and obtaining point cloud features of the initial missing point cloud;
optimizing the point cloud features of the initial missing point cloud using the target multi-scale features to obtain optimized point cloud features, and predicting an optimized missing point cloud of the target object based on the optimized point cloud features;
selecting a second target point cloud from the known point clouds, fusing the second target point cloud and the optimized missing point cloud to obtain a sparse complete point cloud, and then obtaining a dense complete point cloud of the target object based on the sparse complete point cloud.
In some embodiments of the present application, filtering out the first target point cloud from all known point clouds of the target object to obtain the target point cloud set containing the target multi-scale features includes:
filtering out the first target point cloud from all known point clouds of the target object;
selecting a first neighborhood and a second neighborhood corresponding to the first target point cloud, and fusing the original features of each first target point cloud in the first neighborhood and in the second neighborhood, respectively, to obtain first features and second features;
adding the first features and the second features to obtain multi-scale features to be enhanced, and performing feature enhancement on the multi-scale features to be enhanced to obtain the target point cloud set containing the target multi-scale features.
In some embodiments of the present application, selecting the first neighborhood and the second neighborhood corresponding to the first target point cloud includes:
calculating Euclidean distances corresponding to the point cloud coordinates of the first target point cloud, and selecting the first neighborhood and the second neighborhood corresponding to the first target point cloud based on the Euclidean distances.
In some embodiments of the present application, performing feature enhancement on the multi-scale features to be enhanced includes:
performing feature enhancement on the multi-scale features to be enhanced using a self-attention mechanism.
In some embodiments of the present application, obtaining the global features corresponding to all known point clouds based on the target multi-scale features includes:
performing max pooling on the target multi-scale features to obtain the global features corresponding to all known point clouds.
In some embodiments of the present application, performing max pooling on the target multi-scale features to obtain the global features corresponding to all known point clouds includes:
if there are several different target point cloud sets, fusing the target multi-scale features of the different target point cloud sets to obtain fused multi-scale features, and performing max pooling on the fused multi-scale features to obtain the global features corresponding to all known point clouds.
In some embodiments of the present application, obtaining the point cloud features of the initial missing point cloud includes:
fusing the global features with the three-dimensional coordinates of each point of the initial missing point cloud to obtain fused features, and obtaining the point cloud features of each point of the initial missing point cloud based on the fused features.
In some embodiments of the present application, optimizing the point cloud features of the initial missing point cloud using the target multi-scale features to obtain the optimized point cloud features includes:
if there are several different target point cloud sets, using the point cloud features of the initial missing point cloud as current point cloud features, and selecting the target point cloud set with the smallest number of points from the target point cloud sets as a current point cloud set;
optimizing the current point cloud features using the target multi-scale features in the current point cloud set to obtain target point cloud features;
using the target point cloud features as the current point cloud features, selecting the target point cloud set with the smallest number of points from the unselected target point cloud sets as the current point cloud set, and then jumping back to the step of optimizing the current point cloud features using the target multi-scale features in the current point cloud set to obtain the target point cloud features, until there is no unselected target point cloud set, so as to obtain the optimized point cloud features.
In some embodiments of the present application, optimizing the current point cloud features using the target multi-scale features in the current point cloud set to obtain the target point cloud features includes:
selecting a third target point cloud from the current point cloud set, and performing point cloud voting using the third target point cloud to obtain feature offsets of the initial missing point cloud corresponding to the current point cloud features;
adding the current point cloud features and the corresponding feature offsets to optimize the current point cloud features and obtain the target point cloud features of the initial missing point cloud.
In some embodiments of the present application, after obtaining the dense complete point cloud of the target object based on the sparse complete point cloud, the method further includes:
determining a real complete point cloud of the target object, and calculating a first loss between the real complete point cloud and the sparse complete point cloud and a second loss between the real complete point cloud and the dense complete point cloud;
adding the first loss and the second loss to obtain a target loss, and training the preset point cloud completion network with the target loss.
In some embodiments of the present application, training the preset point cloud completion network with the target loss includes:
using the target loss and modifying each network parameter in the point cloud completion network based on a gradient descent algorithm, then calculating a new loss with the modified point cloud completion network and taking the new loss as the target loss;
jumping back to the step of using the target loss and modifying each network parameter in the point cloud completion network based on the gradient descent algorithm until a preset condition is met, and then saving the finally modified point cloud completion network.
In some embodiments of the present application, the preset condition is that the target loss is not greater than a first preset threshold or the number of modifications is not less than a second preset threshold.
In some embodiments of the present application, predicting the optimized missing point cloud of the target object based on the optimized point cloud features includes:
predicting the optimized missing point cloud of the target object based on the optimized point cloud features using a multi-layer perceptron.
In some embodiments of the present application, a point cloud completion apparatus is disclosed, applied to a preset point cloud completion network, including:
a screening module, configured to filter out a first target point cloud from all known point clouds of a target object to obtain a target point cloud set containing target multi-scale features;
a first prediction module, configured to obtain global features corresponding to all known point clouds based on the target multi-scale features, predict an initial missing point cloud of the target object based on the global features, and obtain point cloud features of the initial missing point cloud;
a second prediction module, configured to optimize the point cloud features of the initial missing point cloud using the target multi-scale features to obtain optimized point cloud features, and predict an optimized missing point cloud of the target object based on the optimized point cloud features;
a complete point cloud acquisition module, configured to select a second target point cloud from the known point clouds, fuse the second target point cloud and the optimized missing point cloud to obtain a sparse complete point cloud, and then obtain a dense complete point cloud of the target object based on the sparse complete point cloud.
In some embodiments of the present application, an electronic device is disclosed, including a processor and a memory; when the processor executes a computer program stored in the memory, the point cloud completion method disclosed above is implemented.
In some embodiments of the present application, a computer non-volatile readable storage medium for storing a computer program is disclosed; when the computer program is executed by a processor, the point cloud completion method disclosed above is implemented.
It can be seen that the embodiments of the present application filter out the first target point cloud from all known point clouds of the target object to obtain a target point cloud set containing target multi-scale features; obtain global features corresponding to all known point clouds based on the target multi-scale features, predict an initial missing point cloud of the target object based on the global features, and obtain point cloud features of the initial missing point cloud; optimize the point cloud features of the initial missing point cloud using the target multi-scale features to obtain optimized point cloud features, and predict an optimized missing point cloud of the target object based on the optimized point cloud features; select a second target point cloud from the known point clouds, fuse the second target point cloud and the optimized missing point cloud to obtain a sparse complete point cloud, and then obtain a dense complete point cloud of the target object based on the sparse complete point cloud. It can thus be seen that the present application screens the first target point cloud from the known point clouds; using the first target point cloud for subsequent operations takes the local detailed features of the target object into account, improving the accuracy of point cloud prediction. The embodiments of the present application predict the initial missing point cloud of the target object based on the global features and use the target multi-scale features to optimize the point cloud features of the initial missing point cloud to obtain the optimized point cloud features; this optimization process establishes the essential relationship between the known point cloud and the missing point cloud, improving the accuracy of point cloud prediction.
BRIEF DESCRIPTION OF THE DRAWINGS
In order to explain the technical solutions in some embodiments of the present application or in the prior art more clearly, the drawings required for describing the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below are only embodiments of the present application, and other drawings can be obtained from the provided drawings by those of ordinary skill in the art without creative effort.
Figure 1 is a flow chart of a point cloud completion method in some embodiments of the present application;
Figure 2 is a flow chart of a specific point cloud completion method in some embodiments of the present application;
Figure 3 is a schematic structural diagram of a feature extraction and enhancement module in some embodiments of the present application;
Figure 4 is a flow chart of a specific point cloud completion method in some embodiments of the present application;
Figure 5 is a schematic flow chart of a point cloud completion method in some embodiments of the present application;
Figure 6 is a schematic flow chart of a specific point cloud completion method in some embodiments of the present application;
Figure 7 is a schematic diagram of test results of a point cloud completion network in some embodiments of the present application;
Figure 8 is a schematic structural diagram of a point cloud completion apparatus in some embodiments of the present application;
Figure 9 is a schematic structural diagram of an electronic device in some embodiments of the present application;
Figure 10 is a schematic structural diagram of a computer non-volatile readable storage medium in some embodiments of the present application.
DETAILED DESCRIPTION
The technical solutions in some embodiments of the present application will be described clearly and completely below with reference to the drawings in some embodiments of the present application. Obviously, the described embodiments are only part of the embodiments of the present application, not all of them. Based on the embodiments in the present application, all other embodiments obtained by those of ordinary skill in the art without creative effort fall within the protection scope of the present application.
Current existing point cloud completion methods ignore the essential relationship between the known point cloud and the missing point cloud of an object and cannot accurately predict the missing point cloud from the known point cloud.
To overcome the above problem, some embodiments of the present application provide a point cloud completion solution that can improve the accuracy of predicting missing point clouds.
Referring to Figure 1, some embodiments of the present application disclose a point cloud completion method, applied to a preset point cloud completion network, and the method includes:
Step S11: Filter out a first target point cloud from all known point clouds of a target object to obtain a target point cloud set containing target multi-scale features.
In some embodiments of the present application, when devices such as depth cameras and radars scan a real three-dimensional scene, the scanned point cloud representation of an object may have missing regions due to occlusion, poor lighting, and other reasons. Existing point cloud completion methods only use global features and do not extract local features containing more detailed information, which causes detailed shapes to be lost when decoding the point cloud; in addition, they ignore the essential relationship between the known point cloud and the missing point cloud of the object and cannot accurately predict the missing point cloud from the known point cloud. Therefore, some embodiments of the present application propose a new point cloud completion method.
In some embodiments of the present application, the first target point cloud is filtered out from all known point clouds of the target object to obtain the target point cloud set containing the target multi-scale features. It should be noted that all known point clouds of the target object can be screened with different numbers of retained points to obtain different target point cloud sets containing multi-scale features. For example, if three screenings are performed, three groups of first target point clouds can be obtained through three screenings of different degrees, yielding three different target point cloud sets; alternatively, the screening can be carried out sequentially: a first point cloud set is obtained by screening, a second point cloud set is then obtained by screening on the basis of the first point cloud set, and a third point cloud set is obtained by screening on the basis of the second point cloud set. It should be pointed out that when target point cloud sets are chosen for use, not all of them need to be used; for example, when there are a first point cloud set, a second point cloud set, and a third point cloud set, all three point cloud sets may be used, the first point cloud set may be used together with the second point cloud set, or the third point cloud set may be used alone. It should be pointed out that when selecting target point cloud sets, point cloud sets with smaller numbers of points among the obtained point cloud sets are preferred; the specific number of target point cloud sets is not limited here.
步骤S12:基于目标多尺度特征得到所有已知点云对应的全局特征,并基于全局特征预测目标对象的初始缺失点云,获取初始缺失点云的点云特征。
本申请的一些实施例中,对目标多尺度特征进行最大池化以得到所有已知点云对应的全局特征。需要指出的是,若存在不同的若干目标点云集,则将不同目标点云集之间的目标多尺度特征进行融合以得到融合后多尺度特征,并对融合后多尺度特征进行最大池化以得到所有已知点云对应的全局特征。
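作为一种可能的实现示意(假设每个目标点云集的目标多尺度特征为 N_i×C 的张量,"融合"以在点维度上拼接的方式体现,维度取值仅为示例,并非本申请的权威实现),全局特征的获取可以草拟如下:

```python
import torch

def global_feature(multi_scale_feats):
    """multi_scale_feats: 若干个形状为 (N_i, C) 的目标多尺度特征张量的列表。
    将不同目标点云集的特征在点维度上拼接(融合),再逐通道最大池化得到全局特征 (C,)。"""
    fused = torch.cat(multi_scale_feats, dim=0)   # (N_1 + N_2 + ..., C)
    return fused.max(dim=0).values                # 最大池化

f1 = torch.randn(256, 128)        # 某一目标点云集(如种子点集S1)的多尺度特征,示例
f2 = torch.randn(128, 128)        # 另一目标点云集(如种子点集S2)的多尺度特征,示例
Fg = global_feature([f1, f2])     # 所有已知点云对应的全局特征,形状 (128,)
```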
本申请的一些实施例中,利用多层感知机,并基于全局特征预测目标对象的初始缺失点云,然后获取初始缺失点云的点云特征,具体为:将全局特征分别与每个初始缺失点云的三维坐标进行融合得到融合后特征,并基于融合后特征获取每个初始缺失点云的点云特征。
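下面的草图示意"由全局特征预测初始缺失点云,并为其中每个点获取点云特征"的一种可能实现(假设使用PyTorch,点数M、特征维度C与网络结构均为示例性设定,并非本申请的权威实现):

```python
import torch
import torch.nn as nn

M, C = 256, 128  # 初始缺失点云的点数与特征维度(示例设定)

predict_p0 = nn.Sequential(            # 由全局特征预测 M 个三维坐标的多层感知机
    nn.Linear(C, 512), nn.ReLU(),
    nn.Linear(512, M * 3),
)
point_feat_mlp = nn.Sequential(        # 由"全局特征 + 三维坐标"得到每个点的点云特征
    nn.Linear(C + 3, 256), nn.ReLU(),
    nn.Linear(256, C),
)

Fg = torch.randn(C)                                # 已知点云的全局特征(示例)
P0 = predict_p0(Fg).reshape(M, 3)                  # 初始缺失点云的三维坐标,(M, 3)
fused = torch.cat([Fg.expand(M, C), P0], dim=1)    # 将全局特征复制 M 次并与坐标融合
F0 = point_feat_mlp(fused)                         # 初始缺失点云的点云特征,(M, C)
```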
步骤S13:利用目标多尺度特征对初始缺失点云的点云特征进行优化以得到优化后点云特征,并基于优化后点云特征预测目标对象的优化后缺失点云。
本申请的一些实施例中,利用目标点云集中的目标多尺度特征对初始缺失点云的点云特征进行优化以得到优化后点云特征。需要指出的是,若存在若干不同的目标点云集,则分别对不同目标点云集中的目标多尺度特征对初始缺失点云的点云特征进行优化以得到相应的不同组优化后点云特征,然后用优化后点云特征预测目标对象的优化后缺失点云。
步骤S14:从已知点云中选取第二目标点云,并将第二目标点云和优化后缺失点云进行融合以得到稀疏完整点云,基于稀疏完整点云获取目标对象的稠密完整点云。
本申请的一些实施例中,从已知点云中选取第二目标点云,并将第二目标点云和优化后缺失点云进行融合以得到稀疏完整点云,然后利用折叠解码网络基于稀疏完整点云获取目标对象的稠密完整点云,具体的,以稀疏完整点云为中心点进行解码以得到稠密完整点云。需要指出的是,折叠解码网络采用现有的FoldNet实现。
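折叠解码的基本思想可以用如下简化草图说明(注意:这只是按"以稀疏完整点云为中心点、将二维网格折叠为局部曲面"的思路自拟的示意实现,并非FoldNet的官方实现,网格大小与网络结构均为示例):

```python
import torch
import torch.nn as nn

class SimpleFoldingDecoder(nn.Module):
    """折叠解码的简化草图:以稀疏完整点云中的每个点为中心,
    将一个小的二维网格通过MLP"折叠"为局部曲面片,从而得到稠密完整点云。"""
    def __init__(self, grid_size: int = 4):
        super().__init__()
        u = torch.linspace(-0.05, 0.05, grid_size)
        grid = torch.stack(torch.meshgrid(u, u, indexing="ij"), dim=-1).reshape(-1, 2)
        self.register_buffer("grid", grid)              # (G, 2) 的二维网格
        self.fold = nn.Sequential(
            nn.Linear(3 + 2, 128), nn.ReLU(),
            nn.Linear(128, 3),
        )

    def forward(self, sparse: torch.Tensor) -> torch.Tensor:
        # sparse: (S, 3) 稀疏完整点云
        S, G = sparse.shape[0], self.grid.shape[0]
        centers = sparse.unsqueeze(1).expand(S, G, 3)   # 每个中心点复制 G 次
        grids = self.grid.unsqueeze(0).expand(S, G, 2)
        offsets = self.fold(torch.cat([centers, grids], dim=-1))  # 预测局部偏移
        return (centers + offsets).reshape(-1, 3)       # (S*G, 3) 稠密完整点云

dense = SimpleFoldingDecoder()(torch.rand(512, 3))      # 示例:512 个中心点 -> 8192 个点
```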
本申请的一些实施例中,在基于稀疏完整点云获取目标对象的稠密完整点云之后,需要对预先设置的点云补全网络进行训练以完善网络,具体的:确定目标对象的真实完整点云,计算真实完整点云与稀疏完整点云之间的第一损失,以及真实完整点云与稠密完整点云之间的第二损失;将第一损失和第二损失相加得到目标损失,并利用目标损失训练预先设置的点云补全网络。
需要指出的是,采用CD(Chamfer Distance,倒角距离)作为损失函数,计算第一损失和第二损失,并将第一损失和第二损失相加得到目标损失,具体计算公式为:
$$L_s=\frac{1}{|P_s|}\sum_{g\in P_s}\min_{y\in P_t}\left\|g-y\right\|_2+\frac{1}{|P_t|}\sum_{y\in P_t}\min_{g\in P_s}\left\|g-y\right\|_2$$
$$L_c=\frac{1}{|P_c|}\sum_{g\in P_c}\min_{y\in P_t}\left\|g-y\right\|_2+\frac{1}{|P_t|}\sum_{y\in P_t}\min_{g\in P_c}\left\|g-y\right\|_2$$
$$L=L_s+L_c$$
式中,Ls和Lc分别为第一损失和第二损失,L为目标损失,Pt表示真实完整点云,Ps表示稀疏完整点云,Pc表示稠密完整点云,g表示任一稀疏完整点云的三维坐标或任一稠密完整点云的三维坐标,y表示真实完整点云的三维坐标。
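按上述公式,倒角距离损失的计算可以草拟如下(假设点云以 (N, 3) 张量表示,未做批处理与效率优化,仅为示意):

```python
import torch

def chamfer_distance(p: torch.Tensor, q: torch.Tensor) -> torch.Tensor:
    """倒角距离:p、q 分别为 (Np, 3)、(Nq, 3) 的点云,返回标量损失。"""
    d = torch.cdist(p, q)                      # (Np, Nq) 两两欧氏距离
    return d.min(dim=1).values.mean() + d.min(dim=0).values.mean()

Pt = torch.rand(2048, 3)        # 真实完整点云(示例)
Ps = torch.rand(512, 3)         # 预测的稀疏完整点云(示例)
Pc = torch.rand(8192, 3)        # 预测的稠密完整点云(示例)

Ls = chamfer_distance(Ps, Pt)   # 第一损失
Lc = chamfer_distance(Pc, Pt)   # 第二损失
L = Ls + Lc                     # 目标损失
```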
本申请的一些实施例中,利用目标损失训练预先设置的点云补全网络,具体为:利用目标损失,并基于梯度下降算法修改点云补全网络中的各网络参数,然后利用修改后的点云补全网络计算新损失,并将新损失作为目标损失;跳转至利用目标损失,并基于梯度下降算法修改点云补全网络中的各网络参数的步骤,直至满足预设条件,然后保存最终修改后的点云补全网络。需要指出的是,预设条件为目标损失不大于第一预设阈值或修改次数不小于第二预设阈值。具体的,本申请利用梯度下降最小化目标损失,端对端地训练点云补全网络,以预测三维物体的真实完整点云。当网络的训练误差达到一个指定的较小值或者迭代次数达到指定的最大值时,训练结束。保存网络以及网络参数,用以测试。
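训练过程的一个简化草图如下(假设 model 为点云补全网络、data_loader 提供成对的已知点云与真实完整点云、compute_loss 按上文计算目标损失,这些名称、阈值与超参数均为自拟示例):

```python
import torch

def train(model, data_loader, compute_loss, lr=1e-3,
          loss_threshold=1e-3, max_iters=100000):
    """利用目标损失并基于梯度下降训练点云补全网络:
    当目标损失不大于第一预设阈值或修改次数不小于第二预设阈值时停止。"""
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    iters = 0
    while True:
        for known, gt in data_loader:                # 已知点云与真实完整点云
            sparse, dense = model(known)             # 正向:预测稀疏/稠密完整点云
            loss = compute_loss(sparse, dense, gt)   # 目标损失 = 第一损失 + 第二损失
            optimizer.zero_grad()
            loss.backward()                          # 反向传播
            optimizer.step()                         # 基于梯度下降修改各网络参数
            iters += 1
            if loss.item() <= loss_threshold or iters >= max_iters:
                torch.save(model.state_dict(), "completion_net.pth")  # 保存最终网络
                return model
```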
需要指出的是,若获取稠密完整点云的过程为正向,则反向对点云补全网络进行训练,并在一次训练完成后,继续正向获取完整点云,然后再反向对点云补全网络进行训练,直至满足预设条件。需要指出的是,一次正向和一次反向的过程为一次迭代。
本申请的一些实施例中,获得最终修改后的点云补全网络之后,还可以进一步测试网络,具体,以类别“桌子”、“船”、“飞机”为例,将ShapeNet测试集中不完整的已知点云作为测试集输入到最终修改后的点云补全网络中,输出的测试结果为物体的完整点云。
可见,本申请的一些实施例从目标对象的所有已知点云中筛选出第一目标点云,以获取包含目标多尺度特征的目标点云集;基于目标多尺度特征得到所有已知点云对应的全局特征,并基于全局特征预测目标对象的初始缺失点云,并获取初始缺失点云的点云特征;利用目标多尺度特征对初始缺失点云的点云特征进行优化以得到优化后点云特征,并基于优化后点云特征预测目标对象的优化后缺失点云;从已知点云中选取第二目标点云,并将第二目标点云和优化后缺失点云进行融合以得到稀疏完整点云,然后基于稀疏完整点云获取目标对象的稠密完整点云。由此可见,本申请的一些实施例从已知点云中筛选得到第一目标点云。利用第一目标点云进行后续操作,考虑到了目标对象的局部的细节特征,提高了点云预测的准确性;本申请一些实施例基于全局特征预测目标对象的初始缺失点云,并利用目标多尺度特征对初始缺失点云的点云特征进行优化以得到优化后点云特征,此优化过程建立了已知点云与缺失点云之间的本质关系,提高了点云预测的准确性;利用目标损失对网络进行训练有利于完善网络,得到更准确的点云补全网络。
参见图2所示,本申请的一些实施例公开了一种具体的点云补全方法,应用于预先设置的点云补全网络,该方法包括:
步骤S21:从目标对象的所有已知点云中筛选出第一目标点云;选取第一目标点云对应的第一邻域和第二邻域,并分别融合第一邻域和第二邻域中各个第一目标点云的原始特征得到第一特征和第二特征。
本申请的一些实施例中,选取第一目标点云对应的第一邻域和第二邻域的具体过程为:计算第一目标点云的点云坐标对应的欧式距离,并基于欧氏距离选取第一目标点云对应的第一邻域和第二邻域。
步骤S22:将第一特征和第二特征相加得到待提升多尺度特征,并对待提升多尺度特征进行特征提升以获取包含目标多尺度特征的目标点云集。
本申请的一些实施例中,将第一特征和第二特征相加得到待提升多尺度特征,并利用自注意力机制对待提升多尺度特征进行特征提升,以获取包含目标多尺度特征的目标点云集。
本申请的一些实施例中,如图3所示,为特征提取和提升模块结构示意图,点云特征的提取模块,通过改进已有的EdgeConv而实现,具体地,对于点云中的每个点,根据其坐标的欧氏距离选取两个不同大小的K1和K2邻域,在两个邻域上分别利用已有的EdgeConv算法融合邻域中点的特征,将两个邻域上融合后的特征相加,得到当前点的多尺度特征。然后,点云特征提升模块采用自注意力机制实现,对于已获得的点的多尺度特征,利用已有的自注意力机制,建立已知点云中点与点之间的关系,融合点的上下文特征,提升点云多尺度特征的语义和几何表达能力。
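为便于理解特征提取和提升模块,下面给出一个简化的示意实现(假设使用PyTorch;kNN、EdgeConv分支与自注意力均为按上文思路自拟的草图,K1、K2与特征维度的取值仅为示例,并非本申请的权威实现):

```python
import torch
import torch.nn as nn

def knn(points: torch.Tensor, k: int) -> torch.Tensor:
    """按坐标的欧氏距离为每个点选取 k 近邻,返回索引 (N, k)。points: (N, 3)。"""
    return torch.cdist(points, points).topk(k, largest=False).indices

class EdgeConvBranch(nn.Module):
    """单一尺度的 EdgeConv 分支:融合邻域中各点的特征。"""
    def __init__(self, in_dim, out_dim, k):
        super().__init__()
        self.k = k
        self.mlp = nn.Sequential(nn.Linear(2 * in_dim, out_dim), nn.ReLU())

    def forward(self, points, feats):
        idx = knn(points, self.k)                          # (N, k)
        neighbor = feats[idx]                              # (N, k, C)
        center = feats.unsqueeze(1).expand_as(neighbor)
        edge = torch.cat([center, neighbor - center], -1)  # 边特征
        return self.mlp(edge).max(dim=1).values            # 邻域内最大池化,(N, out_dim)

class FeatureExtractLift(nn.Module):
    """特征提取和提升模块:两个尺度(K1、K2邻域)的特征相加得到待提升多尺度特征,
    再利用自注意力进行特征提升(结构为示例性设定)。"""
    def __init__(self, in_dim=3, out_dim=128, k1=8, k2=16):
        super().__init__()
        self.branch1 = EdgeConvBranch(in_dim, out_dim, k1)
        self.branch2 = EdgeConvBranch(in_dim, out_dim, k2)
        self.attn = nn.MultiheadAttention(out_dim, num_heads=4, batch_first=True)

    def forward(self, points):
        feats = points                                     # 以坐标作为原始特征
        multi = self.branch1(points, feats) + self.branch2(points, feats)
        lifted, _ = self.attn(multi.unsqueeze(0), multi.unsqueeze(0), multi.unsqueeze(0))
        return lifted.squeeze(0)                           # 目标多尺度特征,(N, out_dim)

feats = FeatureExtractLift()(torch.rand(512, 3))
```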
需要指出的是,特征提取和提升模块,利用注意力机制,为种子点学习多尺度特征;可以从已知点云中学习得到种子点集,并为种子点学习多尺度特征,为优化物体缺失点云特征提供基础;另外,特征提取和提升模块充分提取已知点云的局部和全局特征,有利于学习缺失点云的特征。
步骤S23:基于目标多尺度特征得到所有已知点云对应的全局特征,并基于全局特征预测目标对象的初始缺失点云,获取初始缺失点云的点云特征。
在本申请的一些实施例中,关于上述步骤S23的具体过程,可以参考前述实施例中公开的相应内容,在此不再进行赘述。
步骤S24:利用目标多尺度特征对初始缺失点云的点云特征进行优化以得到优化后点云特征,并基于优化后点云特征预测目标对象的优化后缺失点云。
在本申请的一些实施例中,利用目标点云集中的目标多尺度特征对初始缺失点云的点云特征进行优化以得到优化后点云特征。需要指出的是,若存在若干不同的目标点云集,则分别对不同目标点云集中的目标多尺度特征对初始缺失点云的点云特征进行优化以得到相应的不同组优化后点云特征,然后用优化后点云特征预测目标对象的优化后缺失点云。
步骤S25:从已知点云中选取第二目标点云,并将第二目标点云和优化后缺失点云进行融合以得到稀疏完整点云,基于稀疏完整点云获取目标对象的稠密完整点云。
在本申请的一些实施例中,关于上述步骤S25的具体过程,可以参考前述实施例中公开的相应内容,在此不再进行赘述。
可见,本申请的一些实施例从目标对象的所有已知点云中筛选出第一目标点云;选取第一目标点云对应的第一邻域和第二邻域,并分别融合第一邻域和第二邻域中各个第一目标点云的原始特征得到第一特征和第二特征;将第一特征和第二特征相加得到待提升多尺度特征,并对待提升多尺度特征进行特征提升以获取包含目标多尺度特征的目标点云集;基于目标多尺度特征得到所有已知点云对应的全局特征,并基于全局特征预测目标对象的初始缺失点云,然后获取初始缺失点云的点云特征;利用目标多尺度特征对初始缺失点云的点云特征进行优化以得到优化后点云特征,并基于优化后点云特征预测目标对象的优化后缺失点云;从已知点云中选取第二目标点云,并将第二目标点云和优化后缺失点云进行融合以得到稀疏完整点云,然后基于稀疏完整点云获取目标对象的稠密完整点云。由此可见,本申请实施例利用特征提取和提升模块,有利于目标多尺度特征的学习,为优化物体缺失点云特征提供基础;本申请实施例基于全局特征预测目标对象的初始缺失点云,并利用目标多尺度特征对初始缺失点云的点云特征进行优化以得到优化后点云特征,此优化过程建立了已知点云与缺失点云之间的本质关系,提高了点云预测的准确性。
参见图4所示,本申请的一些实施例公开了一种具体的点云补全方法,应用于预先设置的点云补全网络,该方法包括:
步骤S31:从目标对象的所有已知点云中筛选出第一目标点云,以获取包含目标多尺度特征的目标点云集。
在本申请的一些实施例中,关于上述步骤S31的具体过程,可以参考前述实施例中公开的相应内容,在此不再进行赘述。
步骤S32:基于目标多尺度特征得到所有已知点云对应的全局特征,并基于全局特征预测目标对象的初始缺失点云,获取初始缺失点云的点云特征。
在本申请的一些实施例中,关于上述步骤S32的具体过程,可以参考前述实施例中公开的相应内容,在此不再进行赘述。
步骤S33:若存在不同的若干目标点云集,则将初始缺失点云的点云特征作为当前点云特征,并从目标点云集中选取点云数目最少的目标点云集作为当前点云集。
在本申请的一些实施例中,关于上述步骤S33的具体过程,可以参考前述实施例中公开的相应内容,在此不再进行赘述。
步骤S34:利用当前点云集中的目标多尺度特征对当前点云特征进行优化得到目标点云特征。
在本申请的一些实施例中,从当前点云集中选取出第三目标点云,利用第三目标点云进行点云投票以获取当前点云特征对应的初始缺失点云的特征偏移量;将当前点云特征和相应的特征偏移量相加,以对当前点云特征进行优化得到初始缺失点云的目标点云特征。
在本申请的一些实施例中,从当前点云集中选取出第三目标点云,利用第三目标点云进行点云投票以获取当前点云特征对应的初始缺失点云的特征偏移量,需要指出的是,第三目标点云与初始缺失点云可相同或不同,在此不做具体限定。
本申请的一些实施例中,若只存在一个目标点云集,则直接利用目标点云集中的第三目标点云进行点云投票,得到初始缺失点云分别对应的特征偏移量。具体的,从目标点云集中选取出第三目标点云,利用第三目标点云进行点云投票以获取初始缺失点云的点云特征对应的初始缺失点云的特征偏移量;将初始缺失点云的点云特征和相应的特征偏移量相加,以对初始缺失点云的点云特征进行优化得到初始缺失点云的优化后点云特征。
在本申请的一些实施例中,进行点云投票主要使用的是深度霍夫点云投票机制,投票得到目标对象的初始缺失点云的特征偏移量。需要指出的是,点云投票过程由点云投票模块完成,有利于从物体的已知点云直接预测物体缺失部分的点云。
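点云投票模块的一种可能实现示意如下(假设作为第三目标点云的种子点数目已与缺失点云的点数M对齐,投票由MLP实现,结构与维度均为示例性设定,并非本申请的权威实现):

```python
import torch
import torch.nn as nn

class VotingModule(nn.Module):
    """利用种子点特征进行投票,预测缺失点云特征的偏移量,并与当前特征相加。"""
    def __init__(self, feat_dim: int = 128):
        super().__init__()
        self.vote = nn.Sequential(              # 由种子点特征回归特征偏移量的 MLP
            nn.Linear(feat_dim, feat_dim), nn.ReLU(),
            nn.Linear(feat_dim, feat_dim),
        )

    def forward(self, current_feats: torch.Tensor, seed_feats: torch.Tensor) -> torch.Tensor:
        # current_feats: (M, C) 当前缺失点云特征;seed_feats: (M, C) 种子点(第三目标点云)特征
        offsets = self.vote(seed_feats)         # 投票得到的特征偏移量 (M, C)
        return current_feats + offsets          # 优化后的目标点云特征

vm = VotingModule()
refined = vm(torch.randn(256, 128), torch.randn(256, 128))
```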
步骤S35:将目标点云特征作为当前点云特征,并从未选取的目标点云集中选取点云数目最少的目标点云集作为当前点云集,跳转至利用当前点云集中的目标多尺度特征对当前点云特征进行优化得到目标点云特征的步骤,直至不存在未选取的目标点云集,以得到优化后点云特征。
在本申请的一些实施例中,所有的目标点云集都要参与点云投票的过程以提高准确率,得到优化后点云特征。
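在此基础上,"所有目标点云集按点云数目从少到多依次参与投票"的层次优化流程可以草拟如下(假设已为每个目标点云集准备好与缺失点云点数对齐的种子点特征,投票模块在此以线性层示意,名称与维度均为自拟示例):

```python
import torch

def hierarchical_refine(init_feats, seed_feat_sets, voting_modules):
    """init_feats: (M, C) 初始缺失点云的点云特征;
    seed_feat_sets: 各目标点云集对应的 (M, C) 种子点特征,已按点云数目从少到多排列;
    voting_modules: 与各目标点云集一一对应的投票模块(示意)。"""
    current = init_feats
    for seed_feats, vote in zip(seed_feat_sets, voting_modules):
        offsets = vote(seed_feats)          # 投票得到特征偏移量
        current = current + offsets         # 与当前点云特征相加,完成一个层次的优化
    return current                          # 优化后点云特征

M, C = 256, 128
votes = [torch.nn.Linear(C, C) for _ in range(2)]       # 两个示意性的投票模块
refined = hierarchical_refine(torch.randn(M, C),
                              [torch.randn(M, C), torch.randn(M, C)], votes)
```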
步骤S36:基于优化后点云特征预测目标对象的优化后缺失点云。
在本申请的一些实施例中,关于上述步骤S36的具体过程,可以参考前述实施例中公开的相应内容,在此不再进行赘述。
步骤S37:从已知点云中选取第二目标点云,并将第二目标点云和优化后缺失点云进行融合以得到稀疏完整点云,基于稀疏完整点云获取目标对象的稠密完整点云。
在本申请的一些实施例中,关于上述步骤S37的具体过程,可以参考前述实施例中公开的相应内容,在此不再进行赘述。
可见,本申请的一些实施例从目标对象的所有已知点云中筛选出第一目标点云,以获取包含目标多尺度特征的目标点云集;基于目标多尺度特征得到所有已知点云对应的全局特征,并基于全局特征预测目标对象的初始缺失点云,然后获取初始缺失点云的点云特征;若存在不同的若干目标点云集,则将初始缺失点云的点云特征作为当前点云特征,并从目标点云集中选取点云数目最少的目标点云集作为当前点云集;利用当前点云集中的目标多尺度特征对当前点云特征进行优化得到目标点云特征;将目标点云特征作为当前点云特征,并从未选取的目标点云集中选取点云数目最少的目标点云集作为当前点云集,然后跳转至利用当前点云集中的目标多尺度特征对当前点云特征进行优化得到目标点云特征的步骤,直至不存在未选取的目标点云集,以得到优化后点云特征;基于优化后点云特征预测目标对象的优化后缺失点云;从已知点云中选取第二目标点云,并将第二目标点云和优化后缺失点云进行融合以得到稀疏完整点云,然后基于稀疏完整点云获取目标对象的稠密完整点云。由此可见,本申请实施例利用深度霍夫点云投票机制对点云进行投票,以对初始缺失点云进行优化,此优化过程建立了已知点云与缺失点云之间的本质关系,提高了点云预测的准确性;另外,所有目标点云集参与点云投票,提高了点云投票的准确性。
参见图5所示,为本申请的一些实施例公开的点云补全方法流程示意图,物体已知点云通过金字塔种子点学习网络和层次缺失点云投票网络得到物体缺失点云,然后进一步由物体已知点云和物体缺失点云得到物体完整点云。具体的,将物体的已知点云输入网络,利用金字塔种子点学习,学习得到多个层次的种子点集,并为每个种子点赋予多尺度特征。然后,利用层次缺失点云投票,在多个层次上,利用相应的种子点集投票得到缺失的点云。最后,融合已知点云,得到物体的完整点云。
需要指出的是,本申请的一些实施例设计了一种基于投票的点云补全方法,从已知点云中选取种子点,并利用深度霍夫投票机制,投票得到物体的缺失点云,融合已知点云,可以得到物体的完整点云;金字塔种子点学习网络中设计了一种特征提取和提升模块,首先从已知点云中选取种子点,然后利用特征提取和提升网络,利用注意力机制,为种子点学习多尺度特征。多个特征提取和提升模块组成金字塔种子点生成部分,可以在多个层次上学习得到多个种子点集;层次缺失点云投票网络中设计了一种点云投票模块,利用种子点投票,从物体的已知点云直接预测得到物体的缺失点云。多个点云投票模块组成层次缺失点云投票部分,可以在多个层次上,利用相应层次的种子点集生成投票,预测缺失点云,将预测的点云作为中心点,利用基于折叠的解码器,可以解码得到物体缺失部分的稠密点云,从而得到物体的完整点云。
参见图6所示,为本申请的一些实施例公开的具体的点云补全方法流程示意图,具体包括如下步骤:步骤一:构建金字塔种子点学习网络,从物体的已知点云中,学习得到多个层次的种子点集;步骤二:构建层次缺失点云投票网络,预测物体的缺失点云,并获得稀疏完整点云;步骤三:构建折叠解码网络,预测物体的稠密完整点云;另外,还有步骤四:设置损失函数,训练基于投票的点云补全网络。需要指出的是,本示意图中,以数据集ShapeNet为例对本申请进一步详细说明,ShapeNet数据集包含椅子、台灯、船等8个类别,一共有30974个三维点云模型,分别选取100和150个模型作为验证集和测试集,剩下的模型作为训练集。对于每个三维点云模型,随机从8个视角采样得到不完整的三维点云,作为物体不完整的已知点云。以类别“椅子”为例,具体说明点云补全方法。
需要指出的是,步骤一具体为:(1)构建点云特征提取和提升模块,为种子点学习多尺度特征。对于点云中的每个点,根据其坐标的欧氏距离选取两个不同大小的K1和K2邻域,在两个邻域上分别利用已有的EdgeConv算法融合邻域中点的特征,将两个邻域上融合后的特征相加,得到当前点的多尺度特征。点云特征提升模块采用自注意力机制实现,对于已获得的点的多尺度特征,利用已有的自注意力机制,建立已知点云中点与点之间的关系,融合点的上下文特征,提升点云多尺度特征的语义和几何表达能力。(2)构建金字塔种子点学习网络,学习多个层次的种子点集,用以投票。采用多个层次的特征提取,图6中以三个层次为例说明,采用三个层次的特征提取,可以从已知点云中采样得到三个层次的种子点集(点集的大小依次减小),该步骤可利用FPS(farthest point sampling,最远点采样)等采样方法。对于需要的层次,对相应层次的种子点集进行特征提升,得到种子点集中每个点的多尺度特征,图6中利用了金字塔种子点学习网络的最后两个层次,可以得到两个层次的特征提升后种子点集S1和S2。

步骤二的具体步骤为:(1)利用已知点云的全局特征,预测物体的初始缺失点云P0。首先,融合种子点集S1和S2的特征,并进行最大池化,得到已知点云的全局特征Fg,然后利用MLP(multilayer perceptron,多层感知机),预测得到物体的初始缺失点云P0,包含有M个点。将全局特征复制M次,并与P0中点的三维坐标融合,然后通过MLP得到P0中每个点的初始特征。(2)构建点云投票模块,优化初始缺失点云P0中点的初始特征。点云投票模块采用MLP实现,基于步骤一中得到的种子点集,先利用FPS得到M个种子点,利用每个种子点的特征进行投票,得到P0中点的特征偏移量,与P0中点的初始特征相加,得到P0中点的优化特征。(3)构建层次缺失点云投票网络,预测物体的稀疏完整点云。采用两个点云投票模块,构建层次缺失点云投票网络。将步骤一中得到的种子点集S1和S2分别输入到两个点云投票模块,依次在两个层次上优化P0中点的特征,最终得到缺失点云的特征。基于缺失点云的特征,利用MLP预测物体的修正缺失点云P1,包含M个点。从物体的已知点云中利用FPS选取M个点,与预测得到的缺失点云P1融合,得到物体的稀疏完整点云Ps。

步骤三的具体步骤为:构建折叠解码网络,预测物体的稠密完整点云。折叠解码网络采用现有的FoldNet实现。将物体的稀疏完整点云Ps作为中心点,通过折叠解码网络进行解码,得到物体的稠密完整点云Pc。

步骤四的具体步骤为:(1)设置损失函数,训练点云补全网络。给定物体完整的三维点云真值Pt,采用CD作为损失函数,计算预测点云与点云真值之间的距离。分别在预测的稀疏完整点云Ps和稠密完整点云Pc上计算损失,以两部分的损失和作为网络最终损失,利用梯度下降最小化最终损失,端对端地训练点云补全网络,以预测三维物体的真实完整点云。当网络的训练误差达到一个指定的较小值或者迭代次数达到指定的最大值时,训练结束。保存网络以及网络参数,用以测试。之后,还可以获得测试集用于对网络进行测试,以类别"桌子"、"船"、"飞机"为例,将ShapeNet测试集中不完整的已知点云输入到步骤四中保存的已训练好的网络中,输出的测试结果为物体的完整点云;具体结果如图7所示。
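其中"从物体的已知点云中选取M个点,与预测得到的缺失点云融合,得到稀疏完整点云"这一步可以草拟如下(为保持示例精简,此处以随机采样代替FPS,点数均为示例性设定):

```python
import torch

def fuse_sparse_complete(known: torch.Tensor, missing: torch.Tensor, m: int) -> torch.Tensor:
    """从已知点云中选取 m 个点(此处用随机采样代替FPS以保持示例精简),
    与预测得到的缺失点云融合,得到稀疏完整点云。known: (N, 3),missing: (M, 3)。"""
    idx = torch.randperm(known.shape[0])[:m]
    return torch.cat([known[idx], missing], dim=0)    # (m + M, 3) 稀疏完整点云

Ps = fuse_sparse_complete(torch.rand(2048, 3), torch.rand(256, 3), 256)  # 形状 (512, 3)
```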
需要指出的是,本申请的一些实施例提出了一种基于投票的点云补全方法,可以利用简单的投票机制,由粗到细地预测物体的完整点云。提出了一种特征提取和提升模块,可以从已知点云中学习得到种子点集,并为种子点学习多尺度特征,为优化物体缺失点云特征提供基础。提出了一种点云投票模块,可以学习物体已知点云与缺失点云之间的本质关系,从而利用种子点特征直接优化缺失点云的特征。多个点云投票模块,可以在多个层次上,利用相应层次的种子点集生成投票,由粗到细地预测并优化物体的缺失点云。此外,融合已知点云,得到物体的完整点云,这种方式可以使算法在保证不丢失已知点云的前提下,致力于预测物体的缺失点云;另外,在实际应用中,基于投票的点云补全算法,可以从算法上弥补深度相机等三维扫描设备的不足,得到表示三维物体的高质量三维点云,为虚拟现实、元宇宙等技术的发展提供基础;另外,本申请的一些实施例中提出的基于投票的点云补全方法,不仅可以适用于补全三维扫描设备得到的不完整物体点云,还适用于优化基于点云的三维重建算法得到的不完整点云重建结果。
参见图8所示,本申请的一些实施例公开了一种点云补全装置,应用于预先设置的点云补全网络,包括:
筛选模块11,用于从目标对象的所有已知点云中筛选出第一目标点云,以获取包含目标多尺度特征的目标点云集;
第一预测模块12,用于基于目标多尺度特征得到所有已知点云对应的全局特征,并基于全局特征预测目标对象的初始缺失点云,获取初始缺失点云的点云特征;
第二预测模块13,用于利用目标多尺度特征对初始缺失点云的点云特征进行优化以得到优化后点云特征,并基于优化后点云特征预测目标对象的优化后缺失点云;
完整点云获取模块14,用于从已知点云中选取第二目标点云,并将第二目标点云和优化后缺失点云进行融合以得到稀疏完整点云,基于稀疏完整点云获取目标对象的稠密完整点云。
其中,关于上述各个模块更加具体的工作过程可以参考前述实施例中公开的相应内容,在此不再进行赘述。
可见,本申请的一些实施例从目标对象的所有已知点云中筛选出第一目标点云,以获取包含目标多尺度特征的目标点云集;基于目标多尺度特征得到所有已知点云对应的全局特征,并基于全局特征预测目标对象的初始缺失点云,并获取初始缺失点云的点云特征;利用目标多尺度特征对初始缺失点云的点云特征进行优化以得到优化后点云特征,并基于优化后点云特征预测目标对象的优化后缺失点云;从已知点云中选取第二目标点云,并将第二目标点云和优化后缺失点云进行融合以得到稀疏完整点云,然后基于稀疏完整点云获取目标对象的稠密完整点云。由此可见,本申请的一些实施例从已知点云中筛选得到第一目标点云。利用第一目标点云进行后续操作,考虑到了目标对象的局部的细节特征,提高了点云预测的准确性;本申请的一些实施例基于全局特征预测目标对象的初始缺失点云,并利用目标多尺度特征对初始缺失点云的点云特征进行优化以得到优化后点云特征,此优化过程建立了已知点云与缺失点云之间的本质关系,提高了点云预测的准确性。
本申请的一些实施例还提供了一种电子设备,图9是根据一示例性实施例示出的电子设备20结构图,图中的内容不能认为是对本申请的使用范围的任何限制。
图9为本申请实施例提供的一种电子设备20的结构示意图。该电子设备20,具体可以包括:至少一个处理器21、至少一个存储器22、电源23、输入输出接口24、通信接口25和通信总线26。其中,存储器22用于存储计算机程序,计算机程序由处理器21加载并执行,以实现前述任意实施例公开的点云补全方法的相关步骤。
本申请的一些实施例中,电源23用于为电子设备20上的各硬件设备提供工作电压;通信接口25能够为电子设备20创建与外界设备之间的数据传输通道,其所遵循的通信协议是能够适用于本申请技术方案的任意通信协议,在此不对其进行具体限定;输入输出接口24,用于获取外界输入数据或向外界输出数据,其具体的接口类型可以根据具体应用需要进行选取,在此不进行具体限定。
另外,存储器22作为资源存储的载体,可以是只读存储器、随机存储器、磁盘或者光盘等;存储器22可以包括作为运行内存的随机存取存储器和用于外部存储的非易失性存储器,其上的存储资源包括操作系统221、计算机程序222等,存储方式可以是短暂存储或者永久存储。
其中,操作系统221用于管理与控制电子设备20上的各硬件设备以及计算机程序222,操作系统221可以是Windows、Unix、Linux等。计算机程序222除了包括能够用于完成前述任一实施例公开的由电子设备20执行的点云补全方法的计算机程序之外,还可以进一步包括能够用于完成其他特定工作的计算机程序。
本申请的一些实施例中,输入输出接口24具体可以包括但不限于USB接口、硬盘读取接口、串行接口、语音输入接口、指纹输入接口等。
进一步的,本申请的一些实施例还公开了一种计算机非易失性可读存储介质,如图10所示,计算机非易失性可读存储介质10用于存储计算机程序110;其中,计算机程序110被处理器执行时实现前述公开的点云补全方法。
关于该方法的具体步骤可以参考前述实施例中公开的相应内容,在此不再进行赘述。
这里所说的计算机非易失性可读存储介质包括随机存取存储器(Random Access Memory,RAM)、内存、只读存储器(Read-Only Memory,ROM)、电可编程ROM、电可擦除可编程ROM、寄存器、硬盘、磁碟或者光盘或技术领域内所公知的任意其他形式的存储介质。其中,计算机程序被处理器执行时实现前述点云补全方法。关于该方法的具体步骤可以参考前述实施例中公开的相应内容,在此不再进行赘述。
本说明书中各个实施例采用递进的方式描述,每个实施例重点说明的都是与其它实施例的不同之处,各个实施例之间相同或相似部分互相参见即可。对于实施例公开的装置而言,由于其与实施例公开的点云补全方法相对应,所以描述的比较简单,相关之处参见方法部分说明即可。
专业人员还可以进一步意识到,结合本文中所公开的实施例描述的各示例的单元及算法步骤,能够以电子硬件、计算机软件或者二者的结合来实现,为了清楚地说明硬件和软件的可互换性,在上述说明中已经按照功能一般性地描述了各示例的组成及步骤。这些功能究竟以硬件还是软件方式来执行,取决于技术方案的特定应用和设计约束条件。专业技术人员可以对每个特定的应用来使用不同方法来实现所描述的功能,但是这种实现不应认为超出本申请的范围。
结合本文中所公开的实施例描述算法的步骤可以直接用硬件、处理器执行的软件模块,或者二者的结合来实施。软件模块可以置于随机存储器(RAM)、内存、只读存储器(ROM)、电可编程ROM、电可擦除可编程ROM、寄存器、硬盘、可移动磁盘、CD-ROM、或技术领域内所公知的任意其它形式的存储介质中。
最后,还需要说明的是,在本文中,诸如第一和第二等之类的关系术语仅仅用来将一个实体或者操作与另一个实体或操作区分开来,而不一定要求或者暗示这些实体或操作之间存在任何这种实际的关系或者顺序。而且,术语“包括”、“包含”或者其任何其他变体意在涵盖非排他性的包含,从而使得包括一系列要素的过程、方法、物品或者设备不仅包括那些要素,而且还包括没有明确列出的其他要素,或者是还包括为这种过程、方法、物品或者设备所固有的要素。在没有更多限制的情况下,由语句“包括一个……”限定的要素,并不排除在包括要素的过程、方法、物品或者设备中还存在另外的相同要素。
以上对本申请实施例所提供的一种点云补全方法、装置、设备及介质进行了详细介绍,本文中应用了具体个例对本申请实施例的原理及实施方式进行了阐述,以上实施例的说明只是用于帮助理解本申请实施例的方法及其核心思想;同时,对于本领域的一般技术人员,依据本申请实施例的思想,在具体实施方式及应用范围上均会有改变之处,综上,本说明书内容不应理解为对本申请实施例的限制。

Claims (20)

  1. 一种点云补全方法,其特征在于,应用于预先设置的点云补全网络,包括:
    从目标对象的所有已知点云中筛选出第一目标点云,以获取包含目标多尺度特征的目标点云集;
    基于所述目标多尺度特征得到所有所述已知点云对应的全局特征,并基于所述全局特征预测所述目标对象的初始缺失点云,获取所述初始缺失点云的点云特征;
    利用所述目标多尺度特征对所述初始缺失点云的点云特征进行优化以得到优化后点云特征,并基于所述优化后点云特征预测所述目标对象的优化后缺失点云;
    从所述已知点云中选取第二目标点云,并将所述第二目标点云和所述优化后缺失点云进行融合以得到稀疏完整点云,基于所述稀疏完整点云获取所述目标对象的稠密完整点云。
  2. 根据权利要求1所述的点云补全方法,其特征在于,所述从目标对象的所有已知点云中筛选出第一目标点云,以获取包含目标多尺度特征的目标点云集,包括:
    从目标对象的所有已知点云中筛选出第一目标点云;
    选取所述第一目标点云对应的第一邻域和第二邻域,并分别融合所述第一邻域和所述第二邻域中各个所述第一目标点云的原始特征得到第一特征和第二特征;
    将所述第一特征和所述第二特征相加得到待提升多尺度特征,并对所述待提升多尺度特征进行特征提升以获取包含所述目标多尺度特征的目标点云集。
  3. 根据权利要求2所述的点云补全方法,其特征在于,所述从目标对象的所有已知点云中筛选出第一目标点云,包括:
    对目标对象的所有已知点云进行不同点云数量的筛选得到第一目标点云,以获取包含目标多尺度特征的目标点云集;
    所述包含目标多尺度特征的目标点云集包括多个目标点云集;所述获取包含目标多尺度特征的目标点云集,包括:
    对目标对象的所有已知点云进行多次不同程度的筛选,得到相应的多组第一目标点云,并得到与所述多组第一目标点云相应的不同的目标点云集;
    和/或,对目标对象的所有已知点云依次在前一点云集的基础上筛选得到下一点云集,获得多个不同的目标点云集。
  4. 根据权利要求2所述的点云补全方法,其特征在于,所述选取所述第一目标点云对应的第一邻域和第二邻域,包括:
    计算所述第一目标点云的点云坐标对应的欧式距离,并基于所述欧氏距离选取所述第一目标点云对应的第一邻域和第二邻域。
  5. 根据权利要求2所述的点云补全方法,其特征在于,所述对所述待提升多尺度特征进行特征提升,包括:
    利用自注意力机制对所述待提升多尺度特征进行特征提升。
  6. 根据权利要求1所述的点云补全方法,其特征在于,所述基于所述目标多尺度特征得到所有所述已知点云对应的全局特征,包括:
    对所述目标多尺度特征进行最大池化以得到所有所述已知点云对应的全局特征。
  7. 根据权利要求6所述的点云补全方法,其特征在于,所述对所述目标多尺度特征进行最大池化以得到所有所述已知点云对应的全局特征,包括:
    若存在不同的若干所述目标点云集,则将不同所述目标点云集之间的所述目标多尺度特征进行融合以得到融合后多尺度特征,并对所述融合后多尺度特征进行最大池化以得到所有所述已知点云对应的全局特征。
  8. 根据权利要求6所述的点云补全方法,其特征在于,所述获取所述初始缺失点云的点云特征,包括:
    将所述全局特征分别与每个所述初始缺失点云的三维坐标进行融合得到融合后特征,并基于所述融合后特征获取每个所述初始缺失点云的点云特征。
  9. 根据权利要求1所述的点云补全方法,其特征在于,所述利用所述目标多尺度特征对所述初始缺失点云的点云特征进行优化以得到优化后点云特征,包括:
    若存在不同的若干所述目标点云集,则将所述初始缺失点云的点云特征作为当前点云特征,并从所述目标点云集中选取点云数目最少的所述目标点云集作为当前点云集;
    利用所述当前点云集中的所述目标多尺度特征对所述当前点云特征进行优化得到目标点云特征;
    将所述目标点云特征作为所述当前点云特征,并从未选取的所述目标点云集中选取点云数目最少的所述目标点云集作为所述当前点云集,跳转至所述利用所述当前点云集中的所述目标多尺度特征对所述当前点云特征进行优化得到目标点云特征的步骤,直至不存在未选取的所述目标点云集,以得到优化后点云特征。
  10. 根据权利要求9所述的点云补全方法,其特征在于,所述利用所述当前点云集中的所述目标多尺度特征对所述当前点云特征进行优化得到目标点云特征,包括:
    从所述当前点云集中选取出第三目标点云,利用所述第三目标点云进行点云投票以获取所述当前点云特征对应的所述初始缺失点云的特征偏移量;
    将所述当前点云特征和相应的所述特征偏移量相加,以对所述当前点云特征进行优化得到所述初始缺失点云的目标点云特征。
  11. 根据权利要求1所述的点云补全方法,其特征在于,所述基于所述稀疏完整点云获取所述目标对象的稠密完整点云,包括:
    以所述稀疏完整点云为中心点进行解码,得到所述目标对象的稠密完整点云。
  12. 根据权利要求1所述的点云补全方法,其特征在于,所述基于所述稀疏完整点云获取所述目标对象的稠密完整点云之后,还包括:
    确定所述目标对象的真实完整点云,计算所述真实完整点云与所述稀疏完整点云之间的第一损失,以及所述真实完整点云与所述稠密完整点云之间的第二损失;
    将所述第一损失和所述第二损失相加得到目标损失,并利用所述目标损失训练预先设置的所述点云补全网络。
  13. 根据权利要求12所述的点云补全方法,其特征在于,所述第一损失和所述第二损失基于倒角距离作为损失函数进行计算。
  14. 根据权利要求12所述的点云补全方法,其特征在于,所述利用所述目标损失训练预先设置的所述点云补全网络,包括:
    利用所述目标损失,并基于梯度下降算法修改所述点云补全网络中的各网络参数,利用修改后的所述点云补全网络计算新损失,并将所述新损失作为所述目标损失;
    跳转至所述利用所述目标损失,并基于梯度下降算法修改所述点云补全网络中的各网络参数的步骤,直至满足预设条件,保存最终修改后的所述点云补全网络。
  15. 根据权利要求14所述的点云补全方法,其特征在于,所述预设条件为所述目标损失不大于第一预设阈值或修改次数不小于第二预设阈值。
  16. 根据权利要求1至15任一项所述的点云补全方法,其特征在于,所述基于所述优化后点云特征预测所述目标对象的优化后缺失点云,包括:
    利用多层感知机,并基于所述优化后点云特征预测所述目标对象的优化后缺失点云。
  17. 根据权利要求1至15任一项所述的点云补全方法,其特征在于,所述目标对象包括物体,所述预先设置的点云补全网络为基于投票的点云补全网络,包括金字塔种子点学习网络、层次缺失点云投票网络、折叠解码网络,所述金字塔种子点学习网络用于从物体的已知点云中学习得到多个层次的种子点集;所述层次缺失点云投票网络用于预测物体的缺失点云,并获得稀疏完整点云;所述折叠解码网络用于预测物体的稠密完整点云;其中,所述基于投票的点云补全网络基于所设置的损失函数进行训练得到。
  18. 一种点云补全装置,其特征在于,应用于预先设置的点云补全网络,包括:
    筛选模块,用于从目标对象的所有已知点云中筛选出第一目标点云,以获取包含目标多尺度特征的目标点云集;
    第一预测模块,用于基于所述目标多尺度特征得到所有所述已知点云对应的全局特征,并基于所述全局特征预测所述目标对象的初始缺失点云,获取所述初始缺失点云的点云特征;
    第二预测模块,用于利用所述目标多尺度特征对所述初始缺失点云的点云特征进行优化以得到优化后点云特征,并基于所述优化后点云特征预测所述目标对象的优化后缺失点云;
    完整点云获取模块,用于从所述已知点云中选取第二目标点云,并将所述第二目标点云和所述优化后缺失点云进行融合以得到稀疏完整点云,基于所述稀疏完整点云获取所述目标对象的稠密完整点云。
  19. 一种电子设备,其特征在于,包括处理器和存储器;其中,所述处理器执行所述存储器中保存的计算机程序时实现如权利要求1至17任一项所述的点云补全方法。
  20. 一种计算机非易失性可读存储介质,其特征在于,用于存储计算机程序;其中,所述计算机程序被处理器执行时实现如权利要求1至17任一项所述的点云补全方法。