CN115908697A - Generation model based on point cloud probability distribution learning and method thereof - Google Patents

Generation model based on point cloud probability distribution learning and method thereof Download PDF

Info

Publication number
CN115908697A
CN115908697A
Authority
CN
China
Prior art keywords
point cloud
layer
probability distribution
network
module
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211280045.XA
Other languages
Chinese (zh)
Inventor
沈洋
许振楠
张海博
卢诚波
包艳霞
许浩
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Lishui University
Original Assignee
Lishui University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Lishui University filed Critical Lishui University
Priority to CN202211280045.XA priority Critical patent/CN115908697A/en
Publication of CN115908697A publication Critical patent/CN115908697A/en
Pending legal-status Critical Current

Landscapes

  • Image Analysis (AREA)

Abstract

The invention discloses a generation method based on point cloud probability distribution learning, which comprises the following steps: adding an improved mapping network to the generator to learn the probability distribution of the point cloud; changing the probability distribution of the point cloud through style transfer, thereby changing the probability distribution of the generated point positions; and constructing a complete classifier and discriminator for three-dimensional point cloud generation based on a local neighborhood Transformer layer. By changing the probability distribution of point positions through style transfer, the method makes the generated point cloud more uniform, and the added regularity improves model training efficiency. A novel local neighborhood point cloud Transformer for point cloud learning is used in the classifier and discriminator: local self-attention is computed after neighborhoods are obtained by farthest point sampling and the K-nearest-neighbor algorithm, and a pyramid network structure is designed on this basis to better capture context information in the point cloud.

Description

Generation model based on point cloud probability distribution learning and method thereof
Technical Field
The invention relates to the technical field of data generation, in particular to a generation model based on point cloud probability distribution learning and a method thereof.
Background
Research on three-dimensional point cloud generation models is one of the hot topics in the computer vision field. Two-dimensional images cannot adequately represent depth information between multiple objects in the real world and are unsuitable for scenes requiring depth and localization information, while applications from robotic navigation and autonomous vehicles to augmented reality and healthcare increasingly rely on 3D data sets. Among the various data modalities, the raw point cloud is becoming popular as a compact, homogeneous representation able to capture the complex details of an environment. A three-dimensional point cloud can be regarded as a set of unordered, irregular points collected from the surface of an object, each point consisting of Cartesian coordinates and possibly additional information such as surface normal estimates and RGB color values.
In recent years, point cloud generation models based on the generative adversarial network (GAN), a generative model proposed by Goodfellow et al. in 2014, have been a popular research direction in the artificial intelligence community. Unlike two-dimensional images, whose pixels are arranged in a regular grid, three-dimensional shapes are represented by points in a continuous three-dimensional space with no common structure. Three-dimensional GANs therefore often produce point clouds with significant non-uniformity, where points are unevenly distributed over the shape surface. Given that the number of points in each point cloud is fixed (typically 2048 in prior-art GANs), this non-uniformity concentrates points in one area, resulting in sparseness or even holes elsewhere.
Without proper regularization, points tend to cluster at the geometric center of the object or at the junctions of different semantic parts, yielding a highly non-uniform shape. Point cloud data is an unordered set of points containing three-dimensional coordinate information and is insensitive to ordering: the same point cloud model may be stored in many different orders. In addition, the efficiency of traditional three-dimensional GANs in training to generate point clouds is low.
Disclosure of Invention
In order to solve the problems in the prior art, the embodiment of the invention provides a generating model based on point cloud probability distribution learning and a method thereof. The technical scheme is as follows:
in one aspect, a point cloud probability distribution learning-based generation method is provided, and includes:
step 1): adding an improved mapping network in a generator to learn the probability distribution of the point cloud;
step 2): changing the probability distribution of the point cloud through style transfer, thereby changing the probability distribution of the generated point positions;
and step 3): and constructing a complete classifier and a complete discriminator for generating the three-dimensional point cloud based on the local neighborhood Transformer layer.
Further, the generator includes a mapping network and a tree structure generating network; the mapping network is used for training the probability distribution of the learning point cloud; the tree structure generation network is used for training basic features of the learning point cloud.
Further, the tree structure generation network is composed of 7 feature tree modules, and each module is provided with an adaptive instance normalization module, a branching module and a graph convolution module.
Further, the step 1) specifically comprises:
improving the mapping network so that the outputs of the first 4 fully connected layers, each of dimension 96, are fed to the first layer of the tree structure generation network, for training to learn the probability distribution of the point cloud;
training the feature vector through the fully connected layers so that its probability distribution becomes more uniform, then mapping the feature vector into each feature tree module through an affine transformation module, and combining it via the adaptive instance normalization module so that the point cloud distribution generated by each layer of the tree structure generation network is affected by the probability distribution of the feature vector; the affine transformation module performs nonlinear up-sampling, mapping the feature vector dimension to the corresponding dimension of each layer's feature tree module.
Further, the tree structure generation network performs branching and graph convolution operations at each layer to generate the points of the next layer; all points generated by the previous layer are stored and attached to the tree of the current layer, the tree is split into child nodes through branching operations starting from the root node, and the values of the nodes are modified through graph convolution operations; the branching module is used to increase the total number of points in the process of generating the point cloud, similar to up-sampling in two-dimensional convolution.
further, in the generator, 2048 points are obtained through the branching operation of the last layer using different branching degrees for each layer.
Further, the step 2) specifically comprises:
combining the feature vector y obtained by mapping network learning with the basis vector x in the tree structure generation network using an AdaIN module, so as to align the mean and variance of the three-dimensional point position features to those of the feature vector, thereby changing the generated point cloud distribution, according to the formula:
AdaIN(x, y) = σ(y) · (x − μ(x)) / σ(x) + μ(y)
where σ(x), σ(y) are the standard deviations of the basis vector and the feature vector, respectively, and μ(x), μ(y) are their means. StyleGAN generates two-dimensional images: local features of the image are changed by aligning the mean and variance of the generated image features to those of the latent code passed through the mapping network, via the adaptive instance normalization module. Two-dimensional image features are controlled by the image's grayscale and color information, whereas the features of a three-dimensional point cloud are controlled by the position distribution of its points.
Further, the step 3) specifically comprises:
local neighborhood Transformer layer: after a local neighborhood is obtained by farthest point sampling and the K-nearest-neighbor algorithm, self-attention is computed over the points in the neighborhood; a reasonable value of K is set so that different neighborhoods overlap, further strengthening the relationships between neighborhoods; the position encoding δ is added both in the branch that computes the attention weights and to the transformed features W_v:

y_i = Σ_{x_j ∈ X(i)} ρ( γ( W_q x_i − W_k x_j + δ ) ) ⊙ ( W_v x_j + δ )

where X(i) is the set of points in the local neighborhood (K nearest neighbors) of the center point x_i sampled by farthest point sampling, W_q and W_k are the query and key transformations, γ is the weight-computation branch, ρ is a normalization function such as softmax, and ⊙ denotes elementwise multiplication. Self-attention is thus applied within the local neighborhood around each farthest-point-sampled center point;
an encoder: the encoder feeds the input coordinates into three stacked local neighborhood Transformer layers, learning a rich semantic feature representation of each point within its local neighborhood, followed by a global Transformer layer; no neighborhood division is performed in that layer, so the value of K is set to 1, and the final output features are generated. In the local neighborhood Transformer network framework, with 2048 input points, the number of down-sampled target points per layer in the encoding stage is [512, 128, 64, 1], and the value of K per layer is set to [16, 16, 32, 1]; computing attention within local neighborhoods greatly reduces the model's computational cost, reaching O(n).
classification: the point cloud data is classified into N_c object classes; the feature vector output by the encoder is fed into two feedforward LBRD networks, a final linear layer predicts the classification scores, and the class with the highest score is determined as the class label.
In another aspect, a generative model based on point cloud probability distribution learning is provided, comprising a generator; the generator comprises a mapping network and a tree structure generating network; the mapping network is used for training the probability distribution of the learning point cloud; the tree structure generation network is used for training basic features of the learning point cloud.
Further, the tree structure generation network is composed of 7 feature tree modules, and each module is provided with an adaptive instance standardization module, a branching module and a graph convolution module.
The technical scheme provided by the embodiment of the invention has the following beneficial effects:
(1) Latent codes sampled from a Gaussian distribution are trained through the mapping network to learn the probability distribution of point positions, and the point cloud distribution is aligned to the probability distribution of the latent codes through style transfer; the generated point cloud is more structured, which can improve the training efficiency of the model.
(2) The probability distribution learned by training through the fully connected layers of the mapping network is more uniform, which can solve the problem of non-uniform point cloud generation.
(3) The invention designs a novel local neighborhood self-attention Transformer layer for three-dimensional point cloud processing. The layer is invariant to the ordering of the point cloud and addresses its non-uniformity, so it is inherently suitable for point cloud processing; its local-attention design also gives the model lower space complexity.
(4) On the basis of the local neighborhood self-attention layer, a high-performance local neighborhood Transformer classifier and a high-performance local neighborhood Transformer discriminator are constructed and applied to the point cloud generation network.
Drawings
In order to illustrate the technical solutions in the embodiments of the present invention more clearly, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present invention; those skilled in the art can obtain other drawings from them without creative effort.
FIG. 1 is a flowchart of a method for generating a point cloud probability distribution learning according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of a F-TreeGAN network structure in an embodiment of the present invention;
FIG. 3 is a diagram of a local neighborhood Transformer layer in an embodiment of the present invention;
FIG. 4 is a schematic diagram of a classifier model in an embodiment of the invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, embodiments of the present invention will be described in detail with reference to the accompanying drawings.
The invention provides a point cloud probability distribution learning-based generation method, which comprises the following steps of:
s100: adding an improved mapping network in a generator to learn the probability distribution of the point cloud;
s200: the probability distribution of the point cloud is changed by combining with the style migration, so that the effect of changing the probability distribution of the generated point cloud position is achieved;
s300: and constructing a complete classifier and a complete discriminator for generating the three-dimensional point cloud based on the local neighborhood Transformer layer.
Specifically, the latent code is trained in the mapping network to learn the probability distribution of the point cloud, and the network outputs a feature vector with a uniform probability distribution. In each layer of the tree structure generation network, the point cloud distribution is aligned to the feature vector distribution through style transfer, changing the position distribution of the generated point cloud. The generated point cloud is thus more structured and the problem of non-uniform distribution is solved; real point clouds are generated more efficiently than with prior-art three-dimensional GAN methods, and the local features of the generated point cloud can be changed by changing the initial latent code.
In this embodiment, the details are as follows:
tree structure generation network based on point cloud probability distribution learning
This embodiment provides a TreeGAN-based tree-structured deep network, Feature-TreeGAN (F-TreeGAN for short). Inspired by StyleGAN, a mapping network is added to the generator to learn the probability distribution of point positions, the position distribution of the point cloud is changed using a style transfer method, and the local features of the point cloud can be changed by changing the initial latent code fed to the mapping network.
The generator mainly comprises a mapping network and a tree structure generation network. In this embodiment the mapping network is improved: the outputs of the first 4 fully connected layers, each of dimension 96, are fed to the first layer of the tree structure generation network and used for training to learn the probability distribution of the point cloud. The tree structure generation network is composed of 7 feature tree modules (FT-Blocks), each comprising three parts — adaptive instance normalization (AdaIN), branching (Branching) and graph convolution (GraphConv) — used for training to learn the basic features of the point cloud. The feature vector is trained through the fully connected layers so that its probability distribution becomes more uniform, and is mapped into each feature tree module (FT-Block) through an affine transformation (Affine); combined via the AdaIN module, the point cloud distribution generated by each layer of the tree structure generation network is affected by the probability distribution of the feature vector. The Affine module performs nonlinear up-sampling, mapping the feature vector dimension to the corresponding dimension of each layer's feature tree module. The network structure is shown in figure 2.
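As an illustrative sketch (not the trained model), the data flow of the mapping network and the per-layer affine transform described above can be written as follows; the LeakyReLU activation and the random weights are assumptions standing in for the learned parameters:

```python
import numpy as np

rng = np.random.default_rng(0)

def mapping_network(z, n_layers=4, dim=96):
    # 4 fully connected layers of width 96, as in the text; LeakyReLU(0.2)
    # and random weights are stand-ins for the trained network.
    w = z
    for _ in range(n_layers):
        h = (0.1 * rng.standard_normal((dim, w.shape[0]))) @ w
        w = np.maximum(0.2 * h, h)  # LeakyReLU
    return w

def affine(w, out_dim):
    # Affine transform: maps the 96-d feature vector to the channel
    # dimension of one FT-Block (a learned linear layer in the real model).
    A = 0.1 * rng.standard_normal((out_dim, w.shape[0]))
    return A @ w

z = rng.standard_normal(96)   # latent code sampled from a Gaussian
w = mapping_network(z)        # feature vector with learned distribution
y = affine(w, 256)            # style input for a hypothetical 256-channel FT-Block
```

Each FT-Block would receive its own affine projection of w as the style input y of its AdaIN module.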
At each layer, the tree structure generation network performs branching (Branching) and graph convolution (GraphConv) operations to generate the points of the next layer. All points generated by the previous layer are stored and appended to the tree of the current layer; starting from the root node, the tree is split into child nodes by the branching operation, and the values of the nodes are modified by the graph convolution operation. The branching module is used to increase the total number of points in the process of generating the point cloud, similar to up-sampling in two-dimensional convolution. In the generator, a different branching degree is used for each layer, and 2048 points are obtained after the branching operation of the last layer. Unlike conventional graph convolution, which updates a node's value from the values of adjacent points, the tree graph convolution (TreeGCN) proposed by TreeGAN updates each vertex's value from the values of its ancestors, introducing a tree structure into the GCN. Since TreeGCN updates the value of the current node from the values of its ancestor nodes, it can use ancestor information to improve the representational power of the features.
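A minimal sketch of the branching operation: each point of the current layer is split into `degree` children (small random offsets stand in for the learned per-branch transforms), so seven layers of branching grow one root point into 2048 points. The per-layer degrees below are illustrative, chosen only so that their product is 2048:

```python
import numpy as np

def branching(points, degree, rng):
    # Split every point into `degree` child points: N points -> N * degree
    # points, the point cloud analogue of up-sampling in 2-D convolution.
    n, c = points.shape
    offsets = 0.05 * rng.standard_normal((n, degree, c))
    return (points[:, None, :] + offsets).reshape(n * degree, c)

rng = np.random.default_rng(0)
pts = np.zeros((1, 3))                  # root node
for degree in [1, 2, 2, 2, 2, 2, 64]:   # illustrative degrees, product = 2048
    pts = branching(pts, degree, rng)
```

In the real network a graph convolution then updates the value of every new node from its ancestors before the next branching step.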
Changing probability distribution of point clouds in conjunction with style migration
To solve the non-uniformity caused by the tree network, we introduce a method from style transfer. The main method in style transfer is adaptive instance normalization (AdaIN), which aligns the mean and variance of the content image's features to those of the style image to realize the transfer. In this embodiment, an AdaIN module is used to combine the feature vector y learned through the mapping network with the basis vector x in the generation network, so that the mean and variance of the three-dimensional point position features are aligned to those of the feature vector, thereby changing the distribution of the generated point cloud, according to the formula:
AdaIN(x, y) = σ(y) · (x − μ(x)) / σ(x) + μ(y)
where σ(x), σ(y) are the standard deviations of the basis vector and the feature vector, respectively, and μ(x), μ(y) are their means. StyleGAN generates two-dimensional images: local features of the image are changed by aligning the mean and variance of the generated image features to those of the latent code passed through the mapping network, via the AdaIN module. Two-dimensional image features are controlled by the image's grayscale and color information, whereas three-dimensional point cloud features are controlled by the position distribution of the points. The latent code in StyleGAN learns the grayscale and color information of images, while the latent code in our mapping network learns the probability distribution of the point cloud. Since the probability distribution of the feature vector obtained by training through the fully connected layers of the mapping network is more uniform, the point cloud distribution becomes more uniform after being aligned to it, solving the non-uniformity that arises because points generated by the tree structure depend on their ancestor points. During generation, an AdaIN module is added to each layer of the tree generation network, making the generated point cloud more structured and improving the training efficiency of the network model.
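The AdaIN step reduces to a few lines: the per-channel mean and standard deviation of the point positions are replaced by style statistics that, in the model, come from the affine transform of the mapping-network feature vector (here supplied directly as `sigma_y` and `mu_y` for simplicity):

```python
import numpy as np

def adain(x, sigma_y, mu_y, eps=1e-5):
    # AdaIN(x, y) = sigma(y) * (x - mu(x)) / sigma(x) + mu(y),
    # with statistics taken over the N points, per channel.
    mu_x = x.mean(axis=0)
    sigma_x = x.std(axis=0) + eps
    return sigma_y * (x - mu_x) / sigma_x + mu_y

rng = np.random.default_rng(0)
x = 5.0 * rng.standard_normal((2048, 3)) + 2.0   # badly scaled point features
out = adain(x, sigma_y=np.ones(3), mu_y=np.zeros(3))
```

After the call, `out` has per-channel mean close to 0 and standard deviation close to 1, i.e. the point distribution has been aligned to the style statistics.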
Local neighborhood Transformer-based classifier
Self-attention is well suited to feature extraction from point clouds, since a point cloud is essentially a set of discrete position information in three-dimensional space. Inspired by the idea of convolution in images, after a local neighborhood is obtained by farthest point sampling and the K-nearest-neighbor algorithm, self-attention is computed over the points in the neighborhood. We need to set a reasonable value of K so that different neighborhoods overlap, further strengthening the relationships between neighborhoods. The position encoding δ is added both in the branch that computes the attention weights and to the transformed features W_v:

y_i = Σ_{x_j ∈ X(i)} ρ( γ( W_q x_i − W_k x_j + δ ) ) ⊙ ( W_v x_j + δ )

where X(i) is the set of points in the local neighborhood (K nearest neighbors) of the center point x_i sampled by farthest point sampling, W_q and W_k are the query and key transformations, γ is the weight-computation branch, ρ is a normalization function such as softmax, and ⊙ denotes elementwise multiplication. Self-attention is therefore applied in the local neighborhood around each farthest-point-sampled center point. The structure of the local neighborhood Transformer layer is shown in fig. 3.
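The neighborhood construction used by this layer — farthest point sampling for the centers, K nearest neighbors for X(i) — can be sketched directly (a plain NumPy version for clarity, not an optimized implementation):

```python
import numpy as np

def farthest_point_sampling(points, m):
    # Greedy FPS: start from point 0, then repeatedly pick the point
    # farthest from the set already selected.
    chosen = [0]
    dist = np.linalg.norm(points - points[0], axis=1)
    for _ in range(m - 1):
        nxt = int(dist.argmax())
        chosen.append(nxt)
        dist = np.minimum(dist, np.linalg.norm(points - points[nxt], axis=1))
    return np.asarray(chosen)

def knn_neighborhoods(points, center_idx, k):
    # Indices of the k nearest points to each sampled center; with k large
    # enough, neighborhoods of nearby centers overlap, which is what lets
    # information flow between neighborhoods.
    d = np.linalg.norm(points[center_idx][:, None, :] - points[None, :, :], axis=2)
    return np.argsort(d, axis=1)[:, :k]

rng = np.random.default_rng(0)
pts = rng.standard_normal((2048, 3))
centers = farthest_point_sampling(pts, 512)
neigh = knn_neighborhoods(pts, centers, 16)   # X(i): 16 nearest neighbors per center
```

Each center is its own nearest neighbor (distance 0), so every neighborhood X(i) contains its center point, as the formula assumes.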
The input is a set of feature vectors x and the associated three-dimensional coordinates p. The local neighborhood Transformer layer promotes information exchange between feature vectors within each local neighborhood, while the overlap between neighborhoods promotes information exchange across neighborhoods; finally, a new feature vector is produced for every data point as output. We use a multi-head attention mechanism [2], which allows the model to attend to information from different representation subspaces simultaneously. In general, to make the Transformer suitable for processing point clouds, we adapt it and construct a local neighborhood self-attention layer with the Transformer at its core. This layer integrates a self-attention layer, a linear projection and a residual connection. It can reduce the number of points and increase the feature dimension, laying the groundwork for the pyramid network structure. A max-pooling operator is added to the MLP before the output, which addresses point cloud permutation invariance to a certain extent.
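The neighborhood attention itself, with the position encoding δ entering both the weight branch and the value branch, can be sketched as follows. In this toy version the learned transformations W_q, W_k, W_v and the weight MLP γ are replaced by identity maps, so the feature dimension must equal the coordinate dimension (3):

```python
import numpy as np

def softmax(a, axis):
    e = np.exp(a - a.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def local_vector_attention(x, p, centers, neigh):
    # Vector self-attention inside each K-point neighborhood: delta enters
    # both the branch producing the weights and the value features.
    q = x[centers][:, None, :]                 # (M, 1, C)  queries
    k = x[neigh]                               # (M, K, C)  keys
    v = x[neigh]                               # (M, K, C)  values
    delta = p[centers][:, None, :] - p[neigh]  # (M, K, 3)  relative positions
    w = softmax(q - k + delta, axis=1)         # per-channel vector weights
    return (w * (v + delta)).sum(axis=1)       # (M, C) aggregated center features

rng = np.random.default_rng(0)
pts = rng.standard_normal((64, 3))
centers = np.arange(8)
neigh = rng.integers(0, 64, size=(8, 16))      # stand-in for true KNN indices
out = local_vector_attention(pts, pts, centers, neigh)
```

The real layer would wrap this in linear projections, a residual connection and multiple heads, as described above.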
Based on the local neighborhood Transformer layer, we construct a complete classifier for three-dimensional point cloud generation, as shown in fig. 4.
An encoder: the local neighborhood Transformer layer serves two purposes: encoding the input points as new high-dimensional feature vectors, and reducing the cardinality of the point set. We use it as the building block of the encoder, which is constructed in a pyramid structure. The encoder feeds the input coordinates into three stacked local neighborhood Transformer layers, learning a rich semantic feature representation of each point within its local neighborhood, followed by a global Transformer layer; that layer performs no neighborhood division, so the value of K is set to 1, and the final output features are generated. In the local neighborhood Transformer network framework, with 2048 input points for example, the number of down-sampled target points per layer in the encoding stage is [512, 128, 64, 1], and the value of K per layer is set to [16, 16, 32, 1]. Computing attention within local neighborhoods greatly reduces the model's computational cost, reaching O(n).
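The savings can be checked with a back-of-the-envelope count of attention pairs per stage (stage sizes and K values from the text; the comparison baseline of full attention over all remaining points at each stage is our own, for illustration):

```python
# Attention pair counts for the pyramid encoder (2048 input points).
targets = [512, 128, 64, 1]       # down-sampled center counts per stage
ks = [16, 16, 32, 1]              # neighborhood size K per stage
sources = [2048, 512, 128, 64]    # points each stage would attend over globally

local_pairs = sum(m * k for m, k in zip(targets, ks))
global_pairs = sum(m * n for m, n in zip(targets, sources))

print(local_pairs, global_pairs)  # 12289 1122368
```

Local neighborhood attention touches roughly two orders of magnitude fewer pairs than stage-wise global attention would, which is the intuition behind the O(n) cost.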
Classification: the details of the classification network are shown in fig. 4. The point cloud data is classified into N_c object categories (e.g., airplane, table, chair). We feed the feature vector output by the encoder into two feedforward LBRD networks (a module combining a linear layer, a batch normalization layer and a Dropout layer, with the Dropout rate set to 0.5), and a final linear layer predicts the classification scores; the class with the highest score is determined as the class label.
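A sketch of this classification head on a small batch. The text names the linear, batch normalization and Dropout layers of LBRD; the ReLU between them is an assumption suggested by the acronym, and the random weights stand in for trained ones:

```python
import numpy as np

def lbrd(x, w, b, rng, p=0.5):
    # One LBRD block: Linear -> BatchNorm -> ReLU (assumed) -> Dropout(0.5).
    h = x @ w + b
    h = (h - h.mean(axis=0)) / (h.std(axis=0) + 1e-5)   # batch norm, no affine
    h = np.maximum(h, 0.0)                              # ReLU
    mask = (rng.random(h.shape) >= p).astype(h.dtype)
    return h * mask / (1.0 - p)                         # inverted dropout

rng = np.random.default_rng(0)
feats = rng.standard_normal((8, 64))                    # encoder outputs, batch of 8
h = lbrd(feats, 0.1 * rng.standard_normal((64, 32)), np.zeros(32), rng)
h = lbrd(h, 0.1 * rng.standard_normal((32, 16)), np.zeros(16), rng)
scores = h @ (0.1 * rng.standard_normal((16, 4)))       # N_c = 4 classes here
labels = scores.argmax(axis=1)                          # highest score -> class label
```

At inference time the dropout mask would be disabled; it is kept here to show the full training-time block.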
In this embodiment, latent codes sampled from a Gaussian distribution are trained through the mapping network to learn the probability distribution of point positions, and the point cloud distribution is aligned to the probability distribution of the latent codes through style transfer; the generated point cloud is more structured, which can improve the training efficiency of the model.
The probability distribution learned by training through the fully connected layers of the mapping network is more uniform, which can solve the problem of non-uniform point cloud generation.
The invention designs a novel local neighborhood self-attention Transformer layer for three-dimensional point cloud processing. The layer is invariant to the ordering of the point cloud and addresses its non-uniformity, so it is inherently suitable for point cloud processing; its local-attention design also gives the model lower space complexity.
On the basis of the local neighborhood self-attention layer, a high-performance local neighborhood Transformer classifier is constructed and applied to the point cloud generation network.
The present invention is not limited to the above preferred embodiments; any modifications, equivalent replacements and improvements made within the spirit and principles of the present invention shall be included in its protection scope.

Claims (10)

1. A point cloud probability distribution learning-based generation method is characterized by comprising the following steps:
step 1): adding an improved mapping network in a generator to learn the probability distribution of the point cloud;
step 2): changing the probability distribution of the point cloud through style transfer, thereby changing the probability distribution of the generated point positions;
step 3): and constructing a complete classifier and a complete discriminator for generating the three-dimensional point cloud based on the local neighborhood Transformer layer.
2. The method of claim 1, wherein the generator comprises a mapping network and a tree structure generating network; the mapping network is used for training probability distribution of learning point cloud; the tree structure generation network is used for training basic features of the learning point cloud.
3. The method of claim 2, wherein the tree structure generation network is composed of 7 feature tree modules, each module provided with an adaptive instance normalization module, a branching module, and a graph convolution module.
4. The method according to claim 3, wherein the step 1) is specifically:
improving the mapping network so that the outputs of the first 4 fully connected layers, each of dimension 96, are fed to the first layer of the tree structure generation network, for training to learn the probability distribution of the point cloud;
training the feature vector through the fully connected layers so that its probability distribution becomes more uniform, then mapping the feature vector into each feature tree module through an affine transformation module, and combining it via the adaptive instance normalization module so that the point cloud distribution generated by each layer of the tree structure generation network is affected by the probability distribution of the feature vector; the affine transformation module performs nonlinear up-sampling, mapping the feature vector dimension to the corresponding dimension of each layer's feature tree module.
5. The method of claim 4, wherein the tree structure generation network performs branching and graph convolution operations at each level to generate points at a next level; all the points generated by the previous layer are stored and attached to the tree of the current layer, the tree is split into child nodes through branch operation from the root node, and the values of the nodes are modified through graph convolution operation; the branching module is used to increase the total number of points in the process of generating the point cloud, similar to the up-sampling in the two-dimensional convolution.
6. The method of claim 5, wherein in the generator, a different branching degree is used for each layer, and 2048 points are obtained through the branching operation of the last layer.
7. The method according to claim 6, wherein the step 2) is specifically:
an AdaIN module is used to combine the feature vector y obtained by mapping network learning with the basis vector x in the tree structure generation network, so that the mean and variance of the three-dimensional point position features are aligned to those of the feature vector, thereby changing the generated point cloud distribution, according to the formula:
AdaIN(x, y) = σ(y) · (x − μ(x)) / σ(x) + μ(y)
wherein σ(x), σ(y) are the standard deviations of the basis vector and the feature vector, respectively, and μ(x), μ(y) are their means; StyleGAN generates two-dimensional images, in which local features are changed by aligning the mean and variance of the generated image features to those of the latent code passed through the mapping network, via the adaptive instance normalization module; two-dimensional image features are controlled by the image's grayscale and color information, whereas the features of the three-dimensional point cloud are controlled by the position distribution of the points.
8. The method according to claim 7, wherein the step 3) is specifically:
local area transform layer: after a local domain is obtained by using a farthest point sampling and K nearest classification algorithm, self-attention calculation is carried out on points in the neighborhood; a reasonable K value is set, so that different neighborhoods can have mutually overlapped parts, and the relationship between the neighborhoods is further strengthened; calculation of features W after calculation of weighted branches and transformations v Upper addition position coding δ:
y_i = Σ_{x_j ∈ X(i)} ρ( γ( W_q · x_i − W_k · x_j + δ ) ) ⊙ ( W_v · x_j + δ )

where X(i) is the set of points in the local neighborhood (the K nearest neighbors) of the center point x_i sampled by farthest point sampling, ρ is a normalization function (softmax), γ is the attention-weight mapping, and W_q, W_k, W_v are the query, key, and value transformations. Self-attention is thus applied within the local neighborhood around each farthest-point-sampled center point;
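The neighborhood construction of claim 8 — farthest point sampling followed by K nearest neighbors — can be sketched as follows (a minimal NumPy sketch; the attention computation itself is omitted, and the point counts are illustrative only):

```python
import numpy as np

def farthest_point_sample(points: np.ndarray, m: int) -> np.ndarray:
    """Greedy farthest point sampling: indices of m center points."""
    n = points.shape[0]
    centers = np.zeros(m, dtype=int)
    min_dist = np.full(n, np.inf)
    for i in range(m):
        centers[i] = int(np.argmax(min_dist))   # farthest from chosen set
        d = np.linalg.norm(points - points[centers[i]], axis=1)
        min_dist = np.minimum(min_dist, d)
    return centers

def knn(points: np.ndarray, centers_idx: np.ndarray, k: int) -> np.ndarray:
    """K nearest neighbors of each center (neighborhoods may overlap)."""
    d = np.linalg.norm(points[centers_idx][:, None, :] - points[None, :, :],
                       axis=-1)
    return np.argsort(d, axis=1)[:, :k]

pts = np.random.default_rng(1).random((128, 3))
centers = farthest_point_sample(pts, 16)
neigh = knn(pts, centers, k=8)          # self-attention runs inside each row
assert neigh.shape == (16, 8)
```

With 128 points, 16 centers, and K = 8, adjacent neighborhoods share points, giving the overlap the claim relies on to link neighborhoods together.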
an encoder: the encoder feeds the input coordinates into three stacked local neighborhood Transformer layers, which learn a rich semantic feature representation of each point within its local neighborhood, followed by a global Transformer layer; no neighborhood partitioning is performed in that layer, so its K value is set to 1, and it produces the final output features; in the local neighborhood Transformer network framework, 2048 points are taken as input, the down-sampling target point counts of the encoding-stage layers are [512, 128, 64, 1], and the K values of the layers are set to [16, 16, 32, 1]; by computing attention only within local neighborhoods, the computational cost of the model is greatly reduced, reaching O(n).
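On one reading of the complexity remark, restricting attention to K-point neighborhoods makes the per-stage pairwise-interaction count proportional to n·K rather than n². With the pyramid configuration quoted in the claim:

```python
# Target point counts per encoder stage and their K values, as in claim 8.
TARGETS = [512, 128, 64, 1]
KS = [16, 16, 32, 1]

# Each stage attends over K neighbors per sampled center point, so the
# pairwise-interaction count is sum(m * k), far below the sum(m * m)
# that global self-attention over the same stages would require.
local_cost = sum(m * k for m, k in zip(TARGETS, KS))
global_cost = sum(m * m for m in TARGETS)
assert local_cost < global_cost   # 12289 vs 282625
```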
classification: the point cloud data are classified into N_c object classes; the feature vector output by the encoder is fed into two feedforward LBRD (Linear, BatchNorm, ReLU, Dropout) blocks, a final linear layer predicts the classification scores, and the class with the highest score is taken as the class label.
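The classification head can be sketched as follows. The hidden widths (256, 128), the class count N_c = 40, and the inference-time simplifications (BatchNorm folded into the linear weights, Dropout disabled) are assumptions for illustration, not taken from the claim:

```python
import numpy as np

rng = np.random.default_rng(0)
N_C = 40                      # hypothetical number of object classes
feat = rng.normal(size=512)   # global feature vector from the encoder

def lbrd(x: np.ndarray, w: np.ndarray, b: np.ndarray) -> np.ndarray:
    """One LBRD block at inference time: Linear -> BatchNorm (folded
    into w and b here) -> ReLU; Dropout is disabled at test time."""
    return np.maximum(w @ x + b, 0.0)

w1, b1 = rng.normal(size=(256, 512)), np.zeros(256)
w2, b2 = rng.normal(size=(128, 256)), np.zeros(128)
w3, b3 = rng.normal(size=(N_C, 128)), np.zeros(N_C)   # final linear layer

scores = w3 @ lbrd(lbrd(feat, w1, b1), w2, b2) + b3
label = int(np.argmax(scores))    # class with the highest score
assert 0 <= label < N_C
```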
9. A generative model based on point cloud probability distribution learning, characterized by comprising a generator; the generator comprises a mapping network and a tree structure generation network; the mapping network is trained to learn the probability distribution of the point cloud; the tree structure generation network is trained to learn the basic features of the point cloud.
10. The model of claim 9, wherein the tree structure generation network is composed of 7 feature tree modules, each of which is provided with an adaptive instance normalization module, a branching module, and a graph convolution module.
CN202211280045.XA 2022-10-19 2022-10-19 Generation model based on point cloud probability distribution learning and method thereof Pending CN115908697A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211280045.XA CN115908697A (en) 2022-10-19 2022-10-19 Generation model based on point cloud probability distribution learning and method thereof

Publications (1)

Publication Number Publication Date
CN115908697A true CN115908697A (en) 2023-04-04

Family

ID=86496440

Country Status (1)

Country Link
CN (1) CN115908697A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117786467A (en) * 2024-02-28 2024-03-29 上海交通大学四川研究院 Classification model construction method for aircraft landing risk prediction based on self-adaptive dotting
CN117786467B (en) * 2024-02-28 2024-04-30 上海交通大学四川研究院 Classification model construction method for aircraft landing risk prediction based on self-adaptive dotting


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination