CN113313831B - Three-dimensional model feature extraction method based on polar coordinate graph convolution neural network - Google Patents

Three-dimensional model feature extraction method based on polar coordinate graph convolution neural network

Info

Publication number
CN113313831B
Authority
CN
China
Prior art keywords
point cloud
formula
patch
vertex
polar coordinate
Prior art date
Legal status
Active
Application number
CN202110565190.1A
Other languages
Chinese (zh)
Other versions
CN113313831A (en)
Inventor
Zhou Yan
Xu Xuemiao
Current Assignee
South China University of Technology SCUT
Original Assignee
South China University of Technology SCUT
Priority date: 2021-05-24
Filing date: 2021-05-24
Publication date: 2022-12-16
Application filed by South China University of Technology (SCUT)
Priority to CN202110565190.1A
Publication of CN113313831A
Application granted
Publication of CN113313831B

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/20Finite element generation, e.g. wire-frame surface description, tesselation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/60Analysis of geometric attributes
    • G06T7/62Analysis of geometric attributes of area, perimeter, diameter or volume
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/60Analysis of geometric attributes
    • G06T7/66Analysis of geometric attributes of image moments or centre of gravity
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10004Still image; Photographic image
    • G06T2207/10012Stereo images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Geometry (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Software Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Biophysics (AREA)
  • Evolutionary Computation (AREA)
  • Computational Linguistics (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Mathematical Physics (AREA)
  • Biomedical Technology (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Graphics (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention discloses a three-dimensional model feature extraction method based on a polar coordinate graph convolutional neural network. First, a point cloud is uniformly generated and sampled from three-dimensional mesh model data using an improved point cloud generation method; second, the point cloud model is normalized and aligned using a computed volume-weighted centroid; third, a polar coordinate representation and a three-dimensional rectangular coordinate representation of the point cloud are constructed and combined into a composite representation; finally, a graph convolutional neural network models the composite representation, capturing local neighborhood and global information, and extracts the features of the three-dimensional model. The method extracts shape content features of the three-dimensional model that are transformation-invariant and highly discriminative, laying a foundation for subsequent tasks such as classification, recognition and retrieval.

Description

Three-dimensional model feature extraction method based on polar coordinate graph convolutional neural network
Technical Field
The invention relates to the technical field of three-dimensional model classification identification and retrieval, in particular to a three-dimensional model feature extraction method based on a polar coordinate graph convolutional neural network.
Background
Effectively extracting low-dimensional, highly discriminative shape content features of a three-dimensional model facilitates its classification, retrieval and related tasks, so new feature extraction methods for three-dimensional models are an important research topic in current three-dimensional computer vision. The prior art has several shortcomings. First, traditional point-cloud-based methods mostly feed a single three-dimensional rectangular coordinate representation to the network and lack auxiliary coding information. Second, traditional methods that sample point clouds from meshes mostly use piecewise interpolation without a sampling strategy and easily ignore the actual size of each patch, so the collected point set is not uniform enough. Third, models are generally affected by rotation, translation and scale transformations. Finally, using a traditional multi-layer perceptron (MLP) as the network feature extractor cannot effectively model non-Euclidean geometric data such as point clouds, makes it difficult to capture useful information in a model's local neighborhood, and limits performance gains.
Disclosure of Invention
The invention aims to overcome the defects of the prior art, and provides a three-dimensional model feature extraction method based on a polar coordinate graph convolutional neural network, which can extract the shape content features of a three-dimensional model with transformation invariance and high discriminative power and lay a foundation for subsequent tasks such as classification recognition, retrieval and the like.
To achieve this purpose, the technical scheme provided by the invention is as follows. The three-dimensional model feature extraction method based on the polar coordinate graph convolutional neural network comprises the following steps:
S1, obtaining a plurality of three-dimensional mesh model data, each including a vertex set and a patch set;
S2, based on an improved point cloud generation method, for each three-dimensional mesh model, generating a point cloud according to a threshold-based judgment to obtain a corresponding first point cloud;
S3, for each three-dimensional mesh model, obtaining the corresponding volume-weighted centroid;
S4, based on the first point cloud and the volume-weighted centroid, constructing a unit Gaussian sphere centered at the volume-weighted centroid that wraps the first point cloud through translation, scaling and rotation transformations, thereby converting the first point cloud into a standard unified coordinate space and obtaining a normalized and aligned second point cloud;
S5, projecting the second point cloud to a polar coordinate system to obtain its polar coordinate representation, then splicing the polar coordinate representation with the three-dimensional rectangular coordinate representation of the second point cloud to obtain a second point cloud with a composite representation;
S6, based on the polar coordinate graph convolutional neural network model, obtaining the corresponding deep learning features of each second point cloud with a composite representation.
Further, in step S1, the three-dimensional mesh model data is read to obtain the vertex set V = {v_i | i = 1, 2, ..., n} and the patch set F = {f_j | j = 1, 2, ..., m} of the three-dimensional mesh model, where v_i denotes the i-th vertex element, v_i = (v_i^1, v_i^2, v_i^3) is the three-dimensional rectangular coordinate representation of the vertex element, n is the number of vertex elements in the vertex set, f_j denotes the j-th patch element, and m is the number of patch elements in the patch set; the patch set stores each patch element by the index information of its vertices.
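For illustration, the vertex set and patch set of step S1 can be loaded from a Wavefront OBJ file in a few lines of Python. This is a minimal sketch under the assumption of a triangulated OBJ mesh; the function name and file handling are illustrative, not part of the invention:

```python
import numpy as np

def read_obj(path):
    """Read a triangle mesh: vertex set V of shape (n, 3) and patch set F of
    shape (m, 3). F stores each patch element by the indices of its three
    vertices, matching the index-based patch representation of step S1."""
    vertices, faces = [], []
    with open(path) as fh:
        for line in fh:
            parts = line.split()
            if not parts:
                continue
            if parts[0] == "v":                # vertex element v_i
                vertices.append([float(x) for x in parts[1:4]])
            elif parts[0] == "f":              # patch element f_j (OBJ is 1-based)
                faces.append([int(p.split("/")[0]) - 1 for p in parts[1:4]])
    return np.asarray(vertices, float), np.asarray(faces, np.int64)

# V, F = read_obj("model.obj")   # hypothetical file name
```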
Further, the step S2 includes the steps of:
S201, based on the patch set F = {f_j | j = 1, 2, ..., m}, where m is the number of patch elements in the patch set, the area of each patch element is calculated by formula (1), which follows Heron's formula:

$$S(f_j) = \sqrt{p_j\,(p_j - a_j)(p_j - b_j)(p_j - c_j)} \quad (1)$$

where

$$p_j = \frac{a_j + b_j + c_j}{2},\qquad a_j = \|v_{j1} - v_{j2}\|_2,\quad b_j = \|v_{j1} - v_{j3}\|_2,\quad c_j = \|v_{j2} - v_{j3}\|_2$$

In the formula, S(f_j) denotes the area of the j-th patch element f_j in the patch set; v_{j1}, v_{j2}, v_{j3} are the three vertices of f_j; a_j, b_j and c_j are the two-norms of the vectors formed by the vertex pairs (v_{j1}, v_{j2}), (v_{j1}, v_{j3}) and (v_{j2}, v_{j3}) respectively; and p_j is an intermediate process variable (the semi-perimeter) computed from a_j, b_j and c_j;
S202, based on the area S(f_j) of each patch element in the patch set, the mean area of all patch elements is calculated by formula (2) and taken as the threshold:

$$\bar{S} = \frac{1}{m}\sum_{j=1}^{m} S(f_j) \quad (2)$$
S203, the original point cloud generation method performs the point cloud generation operation directly, without considering the distribution of patch element areas in the three-dimensional mesh model; the improved point cloud generation method adds a condition judgment and selectively performs the point cloud generation operation on patch elements according to the threshold. The point cloud generation operation performs linear interpolation based on the information of the patch elements in the patch set and calculates a new vertex set, specifically as follows:

based on the patch set F = {f_j | j = 1, 2, ..., m}, according to formula (3) and formula (4), the point cloud generation operation is performed on patch elements whose area is greater than the threshold S̄, while patch elements whose area is less than the threshold S̄ are skipped, yielding the point cloud generated from the corresponding three-dimensional mesh model:

$$\mathrm{set}(f_j) = \left\{\, \omega_1 v_{j1} + \omega_2 v_{j2} + (1 - \omega_1 - \omega_2)\, v_{j3} \;\middle|\; \omega_1 = \tfrac{o}{q_1},\ \omega_2 = (1 - \omega_1)\tfrac{p}{q_2},\ o = 0, \dots, q_1,\ p = 0, \dots, q_2 \right\} \quad (3)$$

$$V_{\mathrm{gen}} = \bigcup_{S(f_j) > \bar{S}} \mathrm{set}(f_j) \quad (4)$$

In the above formulas, set(f_j) is the vertex set interpolated on the j-th patch element f_j; q_1 and q_2 denote the numbers of divisions of the interval [0,1]; ω_1, ω_2, o and p are intermediate process variables; and V_gen is the point cloud generated from the corresponding three-dimensional mesh model;
S204, based on the point cloud V_gen generated from the corresponding three-dimensional mesh model, a point cloud with a fixed number of vertices is collected by a farthest point sampling algorithm or a random sampling method, giving the first point cloud corresponding to the three-dimensional mesh model:

$$V' = \mathrm{Sample\_Function}(V_{\mathrm{gen}}, n') = \{\, v'_k \mid k = 1, 2, \dots, n' \,\} \quad (5)$$

where Sample_Function is the farthest point sampling function or the random sampling function, V' is the first point cloud corresponding to the three-dimensional mesh model, n' is the number of vertex elements sampled into the first point cloud, and v'_k is the k-th element of the first point cloud.
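The following Python sketch ties steps S201-S204 together. It assumes the barycentric reading of formulas (3)-(4) given above and keeps the original vertices alongside the interpolated points; both choices are assumptions, not prescribed by the text:

```python
import numpy as np

def farthest_point_sampling(pts, n):
    """Greedy farthest-point sampling of n points from pts (N, 3)."""
    sel = np.empty(n, dtype=np.int64)
    sel[0] = np.random.randint(len(pts))
    dist = np.full(len(pts), np.inf)
    for i in range(1, n):
        dist = np.minimum(dist, np.linalg.norm(pts - pts[sel[i - 1]], axis=1))
        sel[i] = int(np.argmax(dist))
    return pts[sel]

def generate_point_cloud(V, F, q1=4, q2=4, n_prime=1024):
    """Improved point cloud generation of step S2 (sketch). Areas come from
    Heron's formula (1); the mean area (2) is the threshold; patches larger
    than the threshold are interpolated barycentrically, formulas (3)-(4);
    formula (5) then samples a fixed number of points."""
    v1, v2, v3 = V[F[:, 0]], V[F[:, 1]], V[F[:, 2]]
    a = np.linalg.norm(v1 - v2, axis=1)
    b = np.linalg.norm(v1 - v3, axis=1)
    c = np.linalg.norm(v2 - v3, axis=1)
    p = (a + b + c) / 2.0                                      # semi-perimeter p_j
    area = np.sqrt(np.clip(p * (p - a) * (p - b) * (p - c), 0.0, None))

    points = [V]          # original vertices are kept as well (an assumption)
    for j in np.nonzero(area > area.mean())[0]:                # large patches only
        for o in range(q1 + 1):
            w1 = o / q1
            for pp in range(q2 + 1):
                w2 = (1.0 - w1) * pp / q2
                points.append(w1 * v1[j] + w2 * v2[j] + (1.0 - w1 - w2) * v3[j])
    v_gen = np.vstack([np.atleast_2d(q) for q in points])
    return farthest_point_sampling(v_gen, n_prime)             # first point cloud V'
```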
Further, the step S3 includes the steps of:
S301, for each three-dimensional mesh model, based on the vertex set V = {v_i | i = 1, 2, ..., n}, the corresponding centroid is calculated by formula (6):

$$\bar{v} = \frac{1}{n}\sum_{i=1}^{n} v_i \quad (6)$$

where v̄ denotes the centroid of the three-dimensional mesh model, v_i is the i-th vertex element in the vertex set, and n is the number of vertex elements in the vertex set;
S302, based on the patch set F = {f_j | j = 1, 2, ..., m}, with m the number of patch elements in the patch set, formula (7) calculates the volume of the tetrahedron formed by the centroid v̄ and each patch element f_j, and formula (8) gives the center of gravity of each patch element:

$$\mathrm{Vol}_j = \frac{1}{6}\left|\,\left((v_{j1} - \bar{v}) \times (v_{j2} - \bar{v})\right) \cdot (v_{j3} - \bar{v})\,\right| \quad (7)$$

$$g_j = \frac{v_{j1} + v_{j2} + v_{j3}}{3} \quad (8)$$

where Vol_j denotes the volume of the tetrahedron formed by the centroid v̄ and the j-th patch element f_j, v_{j1}, v_{j2}, v_{j3} are the three vertices of the patch element f_j, and g_j denotes the center of gravity of the j-th patch element f_j;
S303, based on the tetrahedron volumes Vol_j and the patch element centers of gravity g_j, the volume-weighted centroid corresponding to the three-dimensional mesh model is calculated by formula (9):

$$\bar{v}_w = \frac{\sum_{j=1}^{m} \mathrm{Vol}_j \, g_j}{\sum_{j=1}^{m} \mathrm{Vol}_j} \quad (9)$$

where v̄_w denotes the volume-weighted centroid corresponding to the three-dimensional mesh model.
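A compact sketch of formulas (6)-(9), with the unsigned tetrahedron volume of formula (7) written as a scalar triple product; reading the center of gravity g_j as the mean of the three patch vertices is an assumption consistent with the wording around formula (8):

```python
import numpy as np

def volume_weighted_centroid(V, F):
    """Volume-weighted centroid of step S3 (formulas (6)-(9))."""
    vbar = V.mean(axis=0)                                  # centroid, formula (6)
    v1, v2, v3 = V[F[:, 0]], V[F[:, 1]], V[F[:, 2]]
    # formula (7): |((v_j1 - vbar) x (v_j2 - vbar)) . (v_j3 - vbar)| / 6
    vol = np.abs(np.einsum("ij,ij->i",
                           np.cross(v1 - vbar, v2 - vbar), v3 - vbar)) / 6.0
    g = (v1 + v2 + v3) / 3.0                               # formula (8)
    return (vol[:, None] * g).sum(axis=0) / vol.sum()      # formula (9)
```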
Further, the step S4 includes the steps of:
S401, based on the first point cloud V' = {v'_k | k = 1, 2, ..., n'} and the volume-weighted centroid v̄_w, where v'_k is the k-th element of the first point cloud and n' is the number of vertex elements in the first point cloud, the first point cloud is translated to the volume-weighted centroid according to formula (10) and formula (11), giving the translated first point cloud:

$$v''_k = v'_k - \bar{v}_w \quad (10)$$

$$V'' = \{\, v''_k \mid k = 1, 2, \dots, n' \,\} \quad (11)$$

where v''_k is the k-th element of the translated first point cloud, and V'' is the translated first point cloud;
S402, based on the translated first point cloud V'' = {v''_k | k = 1, 2, ..., n'}, a scale factor is calculated according to formula (12), and the scaled model is then computed according to formula (13) and formula (14), giving the first point cloud after translation and scaling:

$$s = \frac{1}{\max_{k}\|v''_k\|_2} \quad (12)$$

$$v'''_k = s \cdot v''_k, \quad v''_k \in V'' \quad (13)$$

$$V''' = \{\, v'''_k \mid k = 1, 2, \dots, n' \,\} \quad (14)$$

where s is the scale factor and v'''_k is the k-th element of the first point cloud V''' after translation and scaling;
S403, based on the first point cloud V''' = {v'''_k | k = 1, 2, ..., n'} after translation and scaling, a rotation matrix R is calculated according to formula (15); the steps of obtaining the rotation matrix are:

S4031, based on the first point cloud V''' after translation and scaling, construct the covariance matrix (V''')ᵀV''';

S4032, perform eigendecomposition on the covariance matrix to obtain the eigenvector matrix Vector_{3×3} corresponding to the three largest eigenvalues;

S4033, construct the rotation matrix R according to formula (15):

$$R = \mathrm{Vector}_{3\times 3} \cdot I \quad (15)$$

where I is the 3 × 3 identity matrix;
S404, based on the first point cloud V''' = {v'''_k | k = 1, 2, ..., n'} after translation and scaling and the rotation matrix R, V''' is further rotated according to formula (16) and formula (17) to obtain the second point cloud after translation, scaling and rotation:

$$\hat{v}_k = v'''_k \cdot R \quad (16)$$

$$\hat{V} = \{\, \hat{v}_k \mid k = 1, 2, \dots, n' \,\} \quad (17)$$

where v̂_k is the k-th element of the second point cloud after translation, scaling and rotation, and V̂ is the second point cloud after translation, scaling and rotation.
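Steps S401-S404 in one sketch. Using np.linalg.eigh on the covariance matrix and sorting eigenvalues in descending order are implementation choices assumed here:

```python
import numpy as np

def normalize_and_align(P, centroid):
    """Step S4: translate to the volume-weighted centroid (10)-(11), scale
    into the unit sphere (12)-(14), rotate by principal axes (15)-(17)."""
    P = P - centroid                               # translation
    P = P / np.linalg.norm(P, axis=1).max()        # scale factor s
    eigvals, eigvecs = np.linalg.eigh(P.T @ P)     # covariance (V''')^T V'''
    R = eigvecs[:, np.argsort(eigvals)[::-1]]      # eigenvectors, largest first
    return P @ R                                   # rotation, formula (16)
```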
Further, the step S5 includes the steps of:
S501, based on the second point cloud V̂ = {v̂_k | k = 1, 2, ..., n'}, where v̂_k = (x̂_k, ŷ_k, ẑ_k) is the k-th element of the second point cloud after translation, scaling and rotation expressed in the three-dimensional rectangular coordinate system and n' is the number of vertex elements in the second point cloud, the second point cloud is projected to a polar (spherical) coordinate system according to formula (18), giving the polar coordinate representation of the second point cloud:

$$(\theta_k, \phi_k, r_k) = f_{\mathrm{sph}}(\hat{v}_k): \quad r_k = \|\hat{v}_k\|_2, \quad \theta_k = \arccos\!\left(\frac{\hat{z}_k}{r_k}\right), \quad \phi_k = \arctan\!\left(\frac{\hat{y}_k}{\hat{x}_k}\right) \quad (18)$$

where (θ_k, φ_k, r_k) is the polar coordinate representation of the second point cloud and f_sph is the polar coordinate projection operator;

S502, the polar coordinate representation (θ_k, φ_k, r_k) of the second point cloud is spliced with its three-dimensional rectangular coordinate representation (x̂_k, ŷ_k, ẑ_k) to obtain the second point cloud with the composite representation (x̂_k, ŷ_k, ẑ_k, θ_k, φ_k, r_k).
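A sketch of step S5; the spherical convention (polar angle from arccos, azimuth from arctan2) is one common reading of formula (18) and is an assumption:

```python
import numpy as np

def composite_representation(P, eps=1e-12):
    """Append the polar triple (theta, phi, r) to the rectangular coordinates,
    giving the (n', 6) composite representation of step S5."""
    x, y, z = P[:, 0], P[:, 1], P[:, 2]
    r = np.linalg.norm(P, axis=1)
    theta = np.arccos(np.clip(z / (r + eps), -1.0, 1.0))   # polar angle
    phi = np.arctan2(y, x)                                 # azimuth
    return np.concatenate([P, np.stack([theta, phi, r], 1)], axis=1)
```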
Further, the step S6 includes the steps of:
S601, the corresponding deep learning features are obtained based on the second point cloud with the composite representation (x̂, ŷ, ẑ, θ, φ, r) and the polar coordinate graph convolutional neural network model; the steps of obtaining the polar coordinate graph convolutional neural network model are:

S6011, designing the network structure of the polar coordinate graph convolutional neural network model in the graph convolutional network manner; the input of the model is the second point cloud with the composite representation, and the output is the corresponding deep learning features. The structure of the polar coordinate graph convolutional neural network model comprises a graph-building module, residual dynamic graph convolution blocks, a fusion module and a prediction module. The graph-building module uses a dilated k-nearest-neighbor algorithm to construct, from the second point cloud with the composite representation, the dilated k-nearest-neighbor graph representations of 3 corresponding branches as input. Each residual dynamic graph convolution block comprises an EdgeConv edge convolution layer based on dynamic graph convolution with a residual graph connection, with an SE (squeeze-and-excitation) attention block embedded. The fusion module comprises a 1 × 1 convolution layer followed by a batch normalization function and a LeakyReLU activation function, and two pooling layers using max pooling and average pooling respectively. The prediction module comprises two fully-connected layers, the first of which is followed by a batch normalization function, a LeakyReLU activation function and Dropout random deactivation; a sketch of this architecture is given below.
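A minimal PyTorch sketch of this architecture follows. The channel widths, k = 20, the dilation rates (1, 2, 4) of the three branches, the SE reduction ratio and the class count are illustrative assumptions; only the overall layout (dilated k-NN graph building, residual EdgeConv with SE attention, 1 × 1-conv fusion with max and average pooling, two-layer prediction head) follows the description:

```python
import torch
import torch.nn as nn

def dilated_knn(x, k=20, d=1):
    """Dilated k-NN: take the k*d nearest neighbours, keep every d-th.
    x: (B, N, C) point features; returns (B, N, k) neighbour indices."""
    idx = torch.cdist(x, x).topk(k * d + 1, largest=False).indices
    return idx[:, :, 1::d][:, :, :k]               # drop self, stride by d

class EdgeConv(nn.Module):
    """EdgeConv layer on a dynamically rebuilt dilated k-NN graph, with an
    SE attention block and a residual connection."""
    def __init__(self, cin, cout, k=20, d=1):
        super().__init__()
        self.k, self.d = k, d
        self.mlp = nn.Sequential(nn.Conv2d(2 * cin, cout, 1, bias=False),
                                 nn.BatchNorm2d(cout), nn.LeakyReLU(0.2))
        self.se = nn.Sequential(nn.Linear(cout, cout // 4), nn.ReLU(),
                                nn.Linear(cout // 4, cout), nn.Sigmoid())
        self.skip = nn.Conv1d(cin, cout, 1) if cin != cout else nn.Identity()

    def forward(self, x):                          # x: (B, N, C)
        B, N, C = x.shape
        idx = dilated_knn(x, self.k, self.d)       # graph rebuilt dynamically
        nbr = x.unsqueeze(1).expand(B, N, N, C).gather(
            2, idx.unsqueeze(-1).expand(B, N, self.k, C))       # (B, N, k, C)
        ctr = x.unsqueeze(2).expand_as(nbr)
        e = torch.cat([ctr, nbr - ctr], -1).permute(0, 3, 1, 2)  # edge features
        f = self.mlp(e).max(-1).values             # (B, cout, N)
        f = f * self.se(f.mean(-1)).unsqueeze(-1)  # channel attention
        return (f + self.skip(x.transpose(1, 2))).transpose(1, 2)

class PolarGCNN(nn.Module):
    """Three dilated-kNN branches, 1x1-conv fusion with max+avg pooling,
    and a two-layer prediction head."""
    def __init__(self, num_classes=40):
        super().__init__()
        self.branches = nn.ModuleList(EdgeConv(6, 64, d=d) for d in (1, 2, 4))
        self.fuse = nn.Sequential(nn.Conv1d(192, 512, 1, bias=False),
                                  nn.BatchNorm1d(512), nn.LeakyReLU(0.2))
        self.head = nn.Sequential(nn.Linear(1024, 256), nn.BatchNorm1d(256),
                                  nn.LeakyReLU(0.2), nn.Dropout(0.5),
                                  nn.Linear(256, num_classes))

    def forward(self, x):                          # x: (B, N, 6) composite input
        f = torch.cat([b(x) for b in self.branches], -1)      # (B, N, 192)
        f = self.fuse(f.transpose(1, 2))                      # (B, 512, N)
        feat = torch.cat([f.max(-1).values, f.mean(-1)], -1)  # (B, 1024)
        return feat, self.head(feat)               # feature vector + class logits
```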
S6012, based on the second point clouds with composite representations, a database for training the polar coordinate graph convolutional neural network is built and split into a training set (80%) and a verification set (20%) whose intersection is empty, with each second point cloud labeled with its true class. On the training set, the second point clouds with composite representations are input into the polar coordinate graph convolutional neural network model to obtain output feature vectors and classification probabilities; the gap between the classification probabilities and the true class labels is calculated and back-propagated to adjust the parameter values of the model. On the verification set, the second point clouds with composite representations are input into the model to obtain output feature vectors and classification probabilities, and the gap between the classification probabilities and the true class labels is calculated to evaluate the performance of the model. When training finishes, the output feature vector is used as the feature representing the three-dimensional model; a training-loop sketch follows.
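A training-loop sketch for S6012; the optimizer (Adam), batch size, learning rate and epoch count are assumptions, and `clouds`/`labels` are hypothetical tensors holding the composite point clouds and their true class labels:

```python
import torch
import torch.nn.functional as nnF
from torch.utils.data import DataLoader, TensorDataset

def train(model, clouds, labels, epochs=100, lr=1e-3, device="cpu"):
    n_train = int(0.8 * len(clouds))               # 80/20 split, S6012
    train_dl = DataLoader(TensorDataset(clouds[:n_train], labels[:n_train]),
                          batch_size=32, shuffle=True)
    val_dl = DataLoader(TensorDataset(clouds[n_train:], labels[n_train:]),
                        batch_size=32)
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    model.to(device)
    for epoch in range(epochs):
        model.train()
        for x, y in train_dl:
            x, y = x.to(device), y.to(device)
            _, logits = model(x)
            loss = nnF.cross_entropy(logits, y)    # gap to the true class labels
            opt.zero_grad(); loss.backward(); opt.step()
        model.eval(); correct = 0
        with torch.no_grad():                      # evaluate on the verification set
            for x, y in val_dl:
                _, logits = model(x.to(device))
                correct += (logits.argmax(1).cpu() == y).sum().item()
        print(f"epoch {epoch}: val acc {correct / (len(clouds) - n_train):.3f}")
```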
S602, the second point cloud with the composite representation is input into the polar coordinate graph convolutional neural network model, and the corresponding deep learning features are extracted.
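Chaining the earlier sketches, feature extraction for one model then reads as follows (file name hypothetical, trained weights assumed loaded):

```python
import torch

V, F = read_obj("query.obj")                          # step S1
P = generate_point_cloud(V, F, n_prime=1024)          # step S2
P = normalize_and_align(P, volume_weighted_centroid(V, F))  # steps S3-S4
X = composite_representation(P)                       # step S5
model = PolarGCNN()
model.eval()
with torch.no_grad():                                 # step S6
    feature, _ = model(torch.as_tensor(X, dtype=torch.float32).unsqueeze(0))
# `feature` is the deep-learning descriptor used for classification/retrieval
```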
Compared with the prior art, the invention has the following advantages and beneficial effects:
the invention samples and generates the three-dimensional grid model into a uniformly distributed point cloud set by an improved point cloud generation method. By calculating volume weighted centroid coordinates on the original three-dimensional mesh model, the model can be made to take into account volume set information when performing normalization and alignment operations. By constructing the graph of the k neighbor graph representation of the 3 branch cavities, the subsequent graph convolution neural network can obtain a larger receptive field. By combining an attention mechanism, the polar coordinate graph convolutional neural network model better considers information on the dimension of the characteristic channel, and can better model the local and global characteristic information of the model. The technical process of the invention can reduce the calculated amount in the sampling process and avoid the influence on feature extraction due to translation, proportion, rotation and other transformations, thereby laying a foundation for the subsequent graph convolution neural network modeling.
Drawings
FIG. 1 is a schematic flow diagram of the process of the present invention.
Fig. 2 is a schematic diagram of a polar plot convolutional neural network model acquisition process.
FIG. 3 is a schematic diagram of an application flow of a polar coordinate graph convolutional neural network model in three-dimensional model feature extraction.
Detailed Description
The present invention will be described in further detail with reference to examples and drawings, but the present invention is not limited thereto.
Referring to fig. 1, the method for extracting three-dimensional model features based on a polar coordinate graph convolutional neural network provided in this embodiment includes the following steps:
s1, obtaining a plurality of three-dimensional mesh model data including a vertex set and a patch set, and specifically comprising the following steps:
Reading the three-dimensional mesh model data, the vertex set V = {v_i | i = 1, 2, ..., n} and the patch set F = {f_j | j = 1, 2, ..., m} of the three-dimensional mesh model are obtained, where v_i denotes the i-th vertex element, v_i = (v_i^1, v_i^2, v_i^3) is the three-dimensional rectangular coordinate representation of the vertex element, n is the number of vertex elements in the vertex set, f_j denotes the j-th patch element, and m is the number of patch elements in the patch set; the patch set stores each patch element by the index information of its vertices.
S2, based on the improved point cloud generation method, for each three-dimensional mesh model, a point cloud is generated according to a threshold-based judgment and the corresponding first point cloud is obtained, as follows:
S201, based on the patch set F = {f_j | j = 1, 2, ..., m}, where m is the number of patch elements in the patch set, the area of each patch element is calculated by formula (1):

$$S(f_j) = \sqrt{p_j\,(p_j - a_j)(p_j - b_j)(p_j - c_j)} \quad (1)$$

where

$$p_j = \frac{a_j + b_j + c_j}{2},\qquad a_j = \|v_{j1} - v_{j2}\|_2,\quad b_j = \|v_{j1} - v_{j3}\|_2,\quad c_j = \|v_{j2} - v_{j3}\|_2$$

In the formula, S(f_j) denotes the area of the j-th patch element f_j in the patch set; v_{j1}, v_{j2}, v_{j3} are the three vertices of f_j; a_j, b_j and c_j are the two-norms of the vectors formed by the vertex pairs (v_{j1}, v_{j2}), (v_{j1}, v_{j3}) and (v_{j2}, v_{j3}) respectively; and p_j is an intermediate process variable computed from a_j, b_j and c_j;
S202, based on the area S(f_j) of each patch element in the patch set, the mean area of all patch elements is calculated by formula (2) and taken as the threshold:

$$\bar{S} = \frac{1}{m}\sum_{j=1}^{m} S(f_j) \quad (2)$$
S203, the original point cloud generation method performs the point cloud generation operation directly, without considering the distribution of patch element areas in the three-dimensional mesh model. The improved point cloud generation method adds a condition judgment and selectively performs the point cloud generation operation on patch elements according to the threshold; the point cloud generation operation performs linear interpolation based on the information of the patch elements in the patch set and calculates a new vertex set, specifically as follows:

based on the patch set F = {f_j | j = 1, 2, ..., m}, according to formula (3) and formula (4), the point cloud generation operation is performed on patch elements whose area is greater than the threshold S̄, while patch elements whose area is less than the threshold S̄ are skipped, yielding the point cloud generated from the corresponding three-dimensional mesh model:

$$\mathrm{set}(f_j) = \left\{\, \omega_1 v_{j1} + \omega_2 v_{j2} + (1 - \omega_1 - \omega_2)\, v_{j3} \;\middle|\; \omega_1 = \tfrac{o}{q_1},\ \omega_2 = (1 - \omega_1)\tfrac{p}{q_2},\ o = 0, \dots, q_1,\ p = 0, \dots, q_2 \right\} \quad (3)$$

$$V_{\mathrm{gen}} = \bigcup_{S(f_j) > \bar{S}} \mathrm{set}(f_j) \quad (4)$$

In the above formulas, set(f_j) is the vertex set interpolated on the j-th patch element f_j; q_1 and q_2 denote the numbers of divisions of the interval [0,1]; ω_1, ω_2, o and p are intermediate process variables; and V_gen is the point cloud generated from the corresponding three-dimensional mesh model;
S204, based on the point cloud V_gen generated from the corresponding three-dimensional mesh model, a point cloud with a fixed number of vertices is collected by a farthest point sampling algorithm or a random sampling method, giving the first point cloud corresponding to the three-dimensional mesh model:

$$V' = \mathrm{Sample\_Function}(V_{\mathrm{gen}}, n') = \{\, v'_k \mid k = 1, 2, \dots, n' \,\} \quad (5)$$

where Sample_Function is the farthest point sampling function or the random sampling function, V' is the first point cloud corresponding to the three-dimensional mesh model, n' is the number of vertex elements sampled into the first point cloud, and v'_k is the k-th element of the first point cloud.
S3, for each three-dimensional mesh model, the corresponding volume-weighted centroid is obtained, as follows:
S301, for each three-dimensional mesh model, based on the vertex set V = {v_i | i = 1, 2, ..., n}, the corresponding centroid is calculated by formula (6):

$$\bar{v} = \frac{1}{n}\sum_{i=1}^{n} v_i \quad (6)$$

where v̄ denotes the centroid of the three-dimensional mesh model, v_i is the i-th vertex element in the vertex set, and n is the number of vertex elements in the vertex set;
S302, based on the patch set F = {f_j | j = 1, 2, ..., m}, with m the number of patch elements in the patch set, formula (7) calculates the volume of the tetrahedron formed by the centroid v̄ and each patch element f_j, and formula (8) gives the center of gravity of each patch element:

$$\mathrm{Vol}_j = \frac{1}{6}\left|\,\left((v_{j1} - \bar{v}) \times (v_{j2} - \bar{v})\right) \cdot (v_{j3} - \bar{v})\,\right| \quad (7)$$

$$g_j = \frac{v_{j1} + v_{j2} + v_{j3}}{3} \quad (8)$$

where Vol_j denotes the volume of the tetrahedron formed by the centroid v̄ and the j-th patch element f_j, v_{j1}, v_{j2}, v_{j3} are the three vertices of the patch element f_j, and g_j denotes the center of gravity of the j-th patch element f_j;
S303, based on the tetrahedron volumes Vol_j and the patch element centers of gravity g_j, the volume-weighted centroid corresponding to the three-dimensional mesh model is calculated by formula (9):

$$\bar{v}_w = \frac{\sum_{j=1}^{m} \mathrm{Vol}_j \, g_j}{\sum_{j=1}^{m} \mathrm{Vol}_j} \quad (9)$$

where v̄_w denotes the volume-weighted centroid corresponding to the three-dimensional mesh model.
S4, based on the first point cloud and the volume-weighted centroid, a unit Gaussian sphere centered at the volume-weighted centroid that wraps the first point cloud is constructed through translation, scaling and rotation transformations, converting the first point cloud into a standard unified coordinate space and obtaining the normalized and aligned second point cloud, as follows:
S401, based on the first point cloud V' = {v'_k | k = 1, 2, ..., n'} and the volume-weighted centroid v̄_w, where v'_k is the k-th element of the first point cloud and n' is the number of vertex elements in the first point cloud, the first point cloud is translated to the volume-weighted centroid according to formula (10) and formula (11), giving the translated first point cloud:

$$v''_k = v'_k - \bar{v}_w \quad (10)$$

$$V'' = \{\, v''_k \mid k = 1, 2, \dots, n' \,\} \quad (11)$$

where v''_k is the k-th element of the translated first point cloud, and V'' is the translated first point cloud;
S402, based on the translated first point cloud V'' = {v''_k | k = 1, 2, ..., n'}, a scale factor is calculated according to formula (12), and the scaled model is then computed according to formula (13) and formula (14), giving the first point cloud after translation and scaling:

$$s = \frac{1}{\max_{k}\|v''_k\|_2} \quad (12)$$

$$v'''_k = s \cdot v''_k, \quad v''_k \in V'' \quad (13)$$

$$V''' = \{\, v'''_k \mid k = 1, 2, \dots, n' \,\} \quad (14)$$

where s is the scale factor and v'''_k is the k-th element of the first point cloud V''' after translation and scaling;
S403, based on the first point cloud V''' = {v'''_k | k = 1, 2, ..., n'} after translation and scaling, a rotation matrix R is calculated according to formula (15); the steps of obtaining the rotation matrix are:

S4031, based on the first point cloud V''' after translation and scaling, construct the covariance matrix (V''')ᵀV''';

S4032, perform eigendecomposition on the covariance matrix to obtain the eigenvector matrix Vector_{3×3} corresponding to the three largest eigenvalues;

S4033, construct the rotation matrix R according to formula (15):

$$R = \mathrm{Vector}_{3\times 3} \cdot I \quad (15)$$

where I is the 3 × 3 identity matrix;
S404, based on the first point cloud V''' = {v'''_k | k = 1, 2, ..., n'} after translation and scaling and the rotation matrix R, V''' is further rotated according to formula (16) and formula (17) to obtain the second point cloud after translation, scaling and rotation:

$$\hat{v}_k = v'''_k \cdot R \quad (16)$$

$$\hat{V} = \{\, \hat{v}_k \mid k = 1, 2, \dots, n' \,\} \quad (17)$$

where v̂_k is the k-th element of the second point cloud after translation, scaling and rotation, and V̂ is the second point cloud after translation, scaling and rotation.
S5, based on the second point cloud, the second point cloud is projected to a polar coordinate system to obtain its polar coordinate representation, and the polar coordinate representation is spliced with the three-dimensional rectangular coordinate representation of the second point cloud to obtain the second point cloud with a composite representation, as follows:
S501, based on the second point cloud V̂ = {v̂_k | k = 1, 2, ..., n'}, where v̂_k = (x̂_k, ŷ_k, ẑ_k) is the k-th element of the second point cloud after translation, scaling and rotation expressed in the three-dimensional rectangular coordinate system and n' is the number of vertex elements in the second point cloud, the second point cloud is projected to a polar (spherical) coordinate system according to formula (18), giving the polar coordinate representation of the second point cloud:

$$(\theta_k, \phi_k, r_k) = f_{\mathrm{sph}}(\hat{v}_k): \quad r_k = \|\hat{v}_k\|_2, \quad \theta_k = \arccos\!\left(\frac{\hat{z}_k}{r_k}\right), \quad \phi_k = \arctan\!\left(\frac{\hat{y}_k}{\hat{x}_k}\right) \quad (18)$$

where (θ_k, φ_k, r_k) is the polar coordinate representation of the second point cloud and f_sph is the polar coordinate projection operator;

S502, the polar coordinate representation (θ_k, φ_k, r_k) of the second point cloud is spliced with its three-dimensional rectangular coordinate representation (x̂_k, ŷ_k, ẑ_k) to obtain the second point cloud with the composite representation (x̂_k, ŷ_k, ẑ_k, θ_k, φ_k, r_k).
S6, referring to fig. 2, the corresponding deep learning features of each second point cloud with a composite representation are obtained based on the polar coordinate graph convolutional neural network model, as follows:

S601, the corresponding deep learning features are obtained based on the second point cloud with the composite representation (x̂, ŷ, ẑ, θ, φ, r) and the polar coordinate graph convolutional neural network model; the steps of obtaining the polar coordinate graph convolutional neural network model are:

S6011, designing the network structure of the polar coordinate graph convolutional neural network model in the graph convolutional network manner; the input of the model is the second point cloud with the composite representation, and the output is the corresponding deep learning features. The structure of the polar coordinate graph convolutional neural network model comprises a graph-building module, residual dynamic graph convolution blocks, a fusion module and a prediction module. The graph-building module uses a dilated k-nearest-neighbor algorithm to construct, from the second point cloud with the composite representation, the dilated k-nearest-neighbor graph representations of 3 corresponding branches as input. Each residual dynamic graph convolution block comprises an EdgeConv edge convolution layer based on dynamic graph convolution with a residual graph connection, with an SE attention block embedded. The fusion module comprises a 1 × 1 convolution layer followed by a batch normalization function and a LeakyReLU activation function, and two pooling layers using max pooling and average pooling respectively. The prediction module comprises two fully-connected layers, the first of which is followed by a batch normalization function, a LeakyReLU activation function and Dropout random deactivation;
S6012, based on the second point clouds with composite representations, a database for training the polar coordinate graph convolutional neural network is built and split into a training set (80%) and a verification set (20%) whose intersection is empty, with each second point cloud labeled with its true class. On the training set, the second point clouds with composite representations are input into the polar coordinate graph convolutional neural network model to obtain output feature vectors and classification probabilities; the gap between the classification probabilities and the true class labels is calculated and back-propagated to adjust the parameter values of the model. On the verification set, the second point clouds with composite representations are input into the model to obtain output feature vectors and classification probabilities, and the gap between the classification probabilities and the true class labels is calculated to evaluate the performance of the model. When training finishes, the output feature vector is used as the feature representing the three-dimensional model;
S602, the second point cloud with the composite representation is input into the polar coordinate graph convolutional neural network model, and the corresponding deep learning features are extracted.
Referring to fig. 3, an application process of the above polar coordinate graph convolutional neural network model in three-dimensional model feature extraction in this embodiment includes:

Step 1: reading the three-dimensional mesh model data and obtaining the vertex set and patch set of the three-dimensional mesh model, the patch set storing each patch element by the index information of its vertices;

Step 2: based on the improved point cloud generation method, for each three-dimensional mesh model, generating a point cloud according to a threshold-based judgment and obtaining the corresponding first point cloud, the threshold being the mean area of all patch elements in the patch set;

Step 3: for each three-dimensional mesh model, obtaining the corresponding volume-weighted centroid;

Step 4: based on the first point cloud and the volume-weighted centroid, constructing a unit Gaussian sphere centered at the volume-weighted centroid that wraps the point cloud through translation, scaling and rotation transformations, converting the first point cloud into a standard unified coordinate space and obtaining the aligned second point cloud;

Step 5: projecting the second point cloud to a polar coordinate system to obtain its polar coordinate representation, and splicing the polar coordinate representation with the three-dimensional rectangular coordinate representation of the second point cloud to obtain the second point cloud with a composite representation;

Step 6: obtaining the corresponding deep learning features based on the second point cloud with the composite representation and the polar coordinate graph convolutional neural network model, the model being constructed from the graph-building module, residual dynamic graph convolution blocks, the fusion module and the prediction module.
The above embodiments are preferred embodiments of the present invention, but the present invention is not limited to the above embodiments, and any other changes, modifications, substitutions, combinations, and simplifications which do not depart from the spirit and principle of the present invention should be construed as equivalents thereof, and all such modifications are intended to be included in the scope of the present invention.

Claims (4)

1. A three-dimensional model feature extraction method based on a polar coordinate graph convolutional neural network, characterized by comprising the following steps:

S1, obtaining a plurality of three-dimensional mesh model data, each including a vertex set and a patch set;

S2, based on an improved point cloud generation method, for each three-dimensional mesh model, generating a point cloud according to a threshold-based judgment to obtain a corresponding first point cloud;
S3, for each three-dimensional mesh model, obtaining the corresponding volume-weighted centroid, comprising the following steps:

S301, for each three-dimensional mesh model, based on the vertex set V = {v_i | i = 1, 2, ..., n}, calculating the corresponding centroid by formula (6):

$$\bar{v} = \frac{1}{n}\sum_{i=1}^{n} v_i \quad (6)$$

where v̄ denotes the centroid of the three-dimensional mesh model, v_i is the i-th vertex element in the vertex set, and n is the number of vertex elements in the vertex set;

S302, based on the patch set F = {f_j | j = 1, 2, ..., m}, with m the number of patch elements in the patch set, calculating by formula (7) the volume of the tetrahedron formed by the centroid v̄ and each patch element f_j, and obtaining the center of gravity of each patch element according to formula (8):

$$\mathrm{Vol}_j = \frac{1}{6}\left|\,\left((v_{j1} - \bar{v}) \times (v_{j2} - \bar{v})\right) \cdot (v_{j3} - \bar{v})\,\right| \quad (7)$$

$$g_j = \frac{v_{j1} + v_{j2} + v_{j3}}{3} \quad (8)$$

where Vol_j denotes the volume of the tetrahedron formed by the centroid v̄ and the j-th patch element f_j, v_{j1}, v_{j2}, v_{j3} are the three vertices of the patch element f_j, and g_j denotes the center of gravity of the j-th patch element f_j;

S303, based on the tetrahedron volumes Vol_j and the patch element centers of gravity g_j, calculating by formula (9) the volume-weighted centroid corresponding to the three-dimensional mesh model:

$$\bar{v}_w = \frac{\sum_{j=1}^{m} \mathrm{Vol}_j \, g_j}{\sum_{j=1}^{m} \mathrm{Vol}_j} \quad (9)$$

where v̄_w denotes the volume-weighted centroid corresponding to the three-dimensional mesh model;
S4, based on the first point cloud and the volume-weighted centroid, constructing a unit Gaussian sphere centered at the volume-weighted centroid that wraps the first point cloud through translation, scaling and rotation transformations, thereby converting the first point cloud into a standard unified coordinate space and obtaining a normalized and aligned second point cloud;
S5, based on the second point cloud, projecting it to a polar coordinate system to obtain the polar coordinate representation of the second point cloud, and splicing the polar coordinate representation with the three-dimensional rectangular coordinate representation of the second point cloud to obtain the second point cloud with a composite representation, comprising the following steps:

S501, based on the second point cloud V̂ = {v̂_k | k = 1, 2, ..., n'}, where v̂_k = (x̂_k, ŷ_k, ẑ_k) is the k-th element of the second point cloud after translation, scaling and rotation expressed in the three-dimensional rectangular coordinate system and n' is the number of vertex elements in the second point cloud, projecting the second point cloud to a polar coordinate system according to formula (18) to obtain the polar coordinate representation of the second point cloud:

$$(\theta_k, \phi_k, r_k) = f_{\mathrm{sph}}(\hat{v}_k): \quad r_k = \|\hat{v}_k\|_2, \quad \theta_k = \arccos\!\left(\frac{\hat{z}_k}{r_k}\right), \quad \phi_k = \arctan\!\left(\frac{\hat{y}_k}{\hat{x}_k}\right) \quad (18)$$

where (θ_k, φ_k, r_k) is the polar coordinate representation of the second point cloud and f_sph is the polar coordinate projection operator;

S502, splicing the polar coordinate representation (θ_k, φ_k, r_k) of the second point cloud with its three-dimensional rectangular coordinate representation (x̂_k, ŷ_k, ẑ_k) to obtain the second point cloud with the composite representation (x̂_k, ŷ_k, ẑ_k, θ_k, φ_k, r_k);
S6, based on the polar coordinate graph convolutional neural network model, obtaining the corresponding deep learning features of each second point cloud with a composite representation, comprising the following steps:

S601, obtaining the corresponding deep learning features based on the second point cloud with the composite representation (x̂, ŷ, ẑ, θ, φ, r) and the polar coordinate graph convolutional neural network model, the steps of obtaining the polar coordinate graph convolutional neural network model comprising:

S6011, designing the network structure of the polar coordinate graph convolutional neural network model in the graph convolutional network manner, wherein the input of the model is the second point cloud with the composite representation and the output is the corresponding deep learning features; the structure of the polar coordinate graph convolutional neural network model comprises a graph-building module, residual dynamic graph convolution blocks, a fusion module and a prediction module; the graph-building module uses a dilated k-nearest-neighbor algorithm to construct, from the second point cloud with the composite representation, the dilated k-nearest-neighbor graph representations of 3 corresponding branches as input; each residual dynamic graph convolution block comprises an EdgeConv edge convolution layer based on dynamic graph convolution with a residual graph connection, with an SE attention block embedded; the fusion module comprises a 1 × 1 convolution layer followed by a batch normalization function and a LeakyReLU activation function, and two pooling layers using max pooling and average pooling respectively; the prediction module comprises two fully-connected layers, the first of which is followed by a batch normalization function, a LeakyReLU activation function and Dropout random deactivation;

S6012, based on the second point clouds with composite representations, building a database for training the polar coordinate graph convolutional neural network and splitting it into a training set (80%) and a verification set (20%) whose intersection is empty, each second point cloud being labeled with its true class; on the training set, inputting the second point clouds with composite representations into the polar coordinate graph convolutional neural network model to obtain output feature vectors and classification probabilities, calculating the gap between the classification probabilities and the true class labels, and back-propagating to adjust the parameter values of the model; on the verification set, inputting the second point clouds with composite representations into the model to obtain output feature vectors and classification probabilities, and calculating the gap between the classification probabilities and the true class labels to evaluate the performance of the model; when training finishes, using the output feature vector as the feature representing the three-dimensional model;

S602, inputting the second point cloud with the composite representation into the polar coordinate graph convolutional neural network model and extracting the corresponding deep learning features.
2. The three-dimensional model feature extraction method based on the polar coordinate graph convolutional neural network of claim 1, wherein in step S1, the three-dimensional mesh model data is read to obtain the vertex set V = {v_i | i = 1, 2, ..., n} and the patch set F = {f_j | j = 1, 2, ..., m} of the three-dimensional mesh model, where v_i denotes the i-th vertex element, v_i = (v_i^1, v_i^2, v_i^3) is the three-dimensional rectangular coordinate representation of the vertex element, n is the number of vertex elements in the vertex set, f_j denotes the j-th patch element, and m is the number of patch elements in the patch set; the patch set stores each patch element by the index information of its vertices.
3. The three-dimensional model feature extraction method based on the polar coordinate graph convolutional neural network of claim 1, wherein the step S2 comprises the following steps:

S201, based on the patch set F = {f_j | j = 1, 2, ..., m}, where m is the number of patch elements in the patch set, calculating the area of each patch element by formula (1):

$$S(f_j) = \sqrt{p_j\,(p_j - a_j)(p_j - b_j)(p_j - c_j)} \quad (1)$$

where

$$p_j = \frac{a_j + b_j + c_j}{2},\qquad a_j = \|v_{j1} - v_{j2}\|_2,\quad b_j = \|v_{j1} - v_{j3}\|_2,\quad c_j = \|v_{j2} - v_{j3}\|_2$$

in the formula, S(f_j) denotes the area of the j-th patch element f_j in the patch set, v_{j1}, v_{j2}, v_{j3} are the three vertices of f_j, a_j, b_j and c_j are the two-norms of the vectors formed by the vertex pairs (v_{j1}, v_{j2}), (v_{j1}, v_{j3}) and (v_{j2}, v_{j3}) respectively, and p_j is an intermediate process variable computed from a_j, b_j and c_j;

S202, based on the area S(f_j) of each patch element in the patch set, calculating the mean area S̄ of all patch elements by formula (2) and taking the mean as the threshold:

$$\bar{S} = \frac{1}{m}\sum_{j=1}^{m} S(f_j) \quad (2)$$

S203, the original point cloud generation method performs the point cloud generation operation directly, without considering the distribution of patch element areas in the three-dimensional mesh model; the improved point cloud generation method adds a condition judgment and selectively performs the point cloud generation operation on patch elements according to the threshold, the point cloud generation operation performing linear interpolation based on the information of the patch elements in the patch set to calculate a new vertex set, specifically as follows:

based on the patch set F = {f_j | j = 1, 2, ..., m}, according to formula (3) and formula (4), performing the point cloud generation operation on patch elements whose area is greater than the threshold S̄ while skipping patch elements whose area is less than the threshold S̄, thereby obtaining the point cloud generated from the corresponding three-dimensional mesh model:

$$\mathrm{set}(f_j) = \left\{\, \omega_1 v_{j1} + \omega_2 v_{j2} + (1 - \omega_1 - \omega_2)\, v_{j3} \;\middle|\; \omega_1 = \tfrac{o}{q_1},\ \omega_2 = (1 - \omega_1)\tfrac{p}{q_2},\ o = 0, \dots, q_1,\ p = 0, \dots, q_2 \right\} \quad (3)$$

$$V_{\mathrm{gen}} = \bigcup_{S(f_j) > \bar{S}} \mathrm{set}(f_j) \quad (4)$$

in the above formulas, set(f_j) is the vertex set interpolated on the j-th patch element f_j, q_1 and q_2 denote the numbers of divisions of the interval [0,1], ω_1, ω_2, o and p are intermediate process variables, and V_gen is the point cloud generated from the corresponding three-dimensional mesh model;

S204, based on the point cloud V_gen generated from the corresponding three-dimensional mesh model, collecting a point cloud with a fixed number of vertices by a farthest point sampling algorithm or a random sampling method to obtain the first point cloud corresponding to the three-dimensional mesh model:

$$V' = \mathrm{Sample\_Function}(V_{\mathrm{gen}}, n') = \{\, v'_k \mid k = 1, 2, \dots, n' \,\} \quad (5)$$

where Sample_Function is the farthest point sampling function or the random sampling function, V' is the first point cloud corresponding to the three-dimensional mesh model, n' is the number of vertex elements sampled into the first point cloud, and v'_k is the k-th element of the first point cloud.
4. The method for extracting features of a three-dimensional model based on a polar graph convolutional neural network as claimed in claim 1, wherein the step S4 comprises the steps of:
s401, based on the first point cloud V' = { V = { (V) k '| k =1, 2.,. N' } and a volume weighted centroid
Figure FDA0003829126530000063
v′ k For the kth element in the first point cloud, n' represents the number of vertex elements in the first point cloud, the first point cloud is translated to the volume weighted centroid according to a formula (10) and a formula (11), and the first point cloud after translation transformation is obtained, wherein the formula (10) and the formula (11) are as follows:
Figure FDA0003829126530000064
V″={v″ k |k=1,2,...n′} (11);
in the formula, v ″) k For the k-th element of the first point cloud after translation transformationV' is the first point cloud after translation transformation;
s402, based on the first point cloud V '= { V ″', which is subjected to translation transformation k I k =1,2,.. N' }, calculating a scale transformation factor according to formula (12), formula (13) and formula (14), further calculating a transformed model, and obtaining a first point cloud after translation transformation and scale transformation, wherein formula (12), formula (13) and formula (14) are as follows:
s = 1 / max_{k = 1, ..., n′} ‖v″_k‖₂ (12)

v‴_k = s·v″_k, v″_k ∈ V″ (13)

V‴ = { v‴_k | k = 1, 2, ..., n′ } (14)

wherein s is the scale transformation factor, and v‴_k is the kth element of the first point cloud V‴ after translation transformation and scale transformation;
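A sketch of the scale transformation of S402; since formula (12) survives only as an image in the source, the max-norm scale factor below (normalizing the point cloud into the unit sphere) is an assumption:

```python
import numpy as np

def scale_normalize(v_double_prime):
    """v_double_prime: (n', 3) translated point cloud V''; returns V''' per (13) and (14)."""
    s = 1.0 / np.linalg.norm(v_double_prime, axis=1).max()  # assumed form of factor s in (12)
    return s * v_double_prime
```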
S403, based on the first point cloud V‴ = { v‴_k | k = 1, 2, ..., n′ } after translation transformation and scale transformation, a rotation matrix R is calculated according to formula (15), wherein the step of obtaining the rotation matrix comprises:

S4031, based on the first point cloud V‴ after translation transformation and scale transformation, constructing the covariance matrix (V‴)ᵀ·V‴;

S4032, performing eigendecomposition on the covariance matrix (V‴)ᵀ·V‴ to obtain the eigenvectors Vector_{3×3} corresponding to the three largest eigenvalues;

S4033, constructing the rotation matrix R according to formula (15), where formula (15) is as follows:

R = Vector_{3×3} · I (15)

in the formula, I is the identity matrix of size 3 × 3;
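Steps S4031 to S4033 amount to a PCA-style alignment; a sketch assuming the covariance matrix (V‴)ᵀ·V‴ and NumPy's symmetric eigendecomposition:

```python
import numpy as np

def rotation_matrix(v_triple_prime):
    """v_triple_prime: (n', 3) point cloud V'''; returns the 3x3 rotation matrix R."""
    cov = v_triple_prime.T @ v_triple_prime  # covariance matrix (V''')^T V''', shape (3, 3)
    eigvals, eigvecs = np.linalg.eigh(cov)   # eigh returns eigenvalues in ascending order
    vec3 = eigvecs[:, ::-1]                  # reorder columns: largest eigenvalue first
    return vec3 @ np.eye(3)                  # R = Vector_{3x3} * I, as in formula (15)
```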
S404, based on the first point cloud V‴ = { v‴_k | k = 1, 2, ..., n′ } after translation transformation and scale transformation and the rotation matrix R, a rotation transformation is further performed on V‴ to obtain the second point cloud after translation transformation, scale transformation and rotation transformation, where formula (16) and formula (17) are as follows:

ṽ_k = R·v‴_k, v‴_k ∈ V‴ (16)

Ṽ = { ṽ_k | k = 1, 2, ..., n′ } (17)

in the formulas, ṽ_k is the kth element of the second point cloud after translation transformation, scale transformation and rotation transformation, and Ṽ is the second point cloud after translation transformation, scale transformation and rotation transformation.
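Chaining the sketches gives the whole normalization of claim 4; the helper names are the illustrative ones defined above, and right-multiplying by R.T applies R to each row vector as in formula (16):

```python
def normalize_point_cloud(v_prime):
    """v_prime: (n', 3) first point cloud V'; returns the second point cloud of S404."""
    v2 = translate_to_centroid(v_prime)  # S401, formulas (10)-(11)
    v3 = scale_normalize(v2)             # S402, formulas (12)-(14)
    R = rotation_matrix(v3)              # S403, formula (15)
    return v3 @ R.T                      # S404, formulas (16)-(17)
```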
CN202110565190.1A 2021-05-24 2021-05-24 Three-dimensional model feature extraction method based on polar coordinate graph convolution neural network Active CN113313831B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110565190.1A CN113313831B (en) 2021-05-24 2021-05-24 Three-dimensional model feature extraction method based on polar coordinate graph convolution neural network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110565190.1A CN113313831B (en) 2021-05-24 2021-05-24 Three-dimensional model feature extraction method based on polar coordinate graph convolution neural network

Publications (2)

Publication Number Publication Date
CN113313831A CN113313831A (en) 2021-08-27
CN113313831B true CN113313831B (en) 2022-12-16

Family

ID=77374195

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110565190.1A Active CN113313831B (en) 2021-05-24 2021-05-24 Three-dimensional model feature extraction method based on polar coordinate graph convolution neural network

Country Status (1)

Country Link
CN (1) CN113313831B (en)

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5590261A (en) * 1993-05-07 1996-12-31 Massachusetts Institute Of Technology Finite-element method for image alignment and morphing
US10013653B2 (en) * 2016-01-26 2018-07-03 Università della Svizzera italiana System and a method for learning features on geometric domains
JP6799169B2 (en) * 2017-03-17 2020-12-09 本田技研工業株式会社 Combining 3D object detection and orientation estimation by multimodal fusion
US11556745B2 (en) * 2019-03-22 2023-01-17 Huawei Technologies Co., Ltd. System and method for ordered representation and feature extraction for point clouds obtained by detection and ranging sensor
CN110232438B (en) * 2019-06-06 2021-07-20 北京致远慧图科技有限公司 Image processing method and device of convolutional neural network under polar coordinate system
CN110942110A (en) * 2019-12-31 2020-03-31 新奥数能科技有限公司 Feature extraction method and device of three-dimensional model
CN111461063B (en) * 2020-04-24 2022-05-17 武汉大学 Behavior identification method based on graph convolution and capsule neural network
CN112488210A (en) * 2020-12-02 2021-03-12 北京工业大学 Three-dimensional point cloud automatic classification method based on graph convolution neural network

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9286538B1 (en) * 2014-05-01 2016-03-15 Hrl Laboratories, Llc Adaptive 3D to 2D projection for different height slices and extraction of robust morphological features for 3D object recognition
CN107092859A (en) * 2017-03-14 2017-08-25 佛山科学技术学院 A kind of depth characteristic extracting method of threedimensional model
DE102018128531A1 (en) * 2018-11-14 2020-05-14 Valeo Schalter Und Sensoren Gmbh System and method for analyzing a three-dimensional environment represented by a point cloud through deep learning

Also Published As

Publication number Publication date
CN113313831A (en) 2021-08-27

Similar Documents

Publication Publication Date Title
CN111062282B (en) Substation pointer instrument identification method based on improved YOLOV3 model
CN114092832B (en) High-resolution remote sensing image classification method based on parallel hybrid convolutional network
CN108304357A (en) A kind of Chinese word library automatic generation method based on font manifold
CN112949407B (en) Remote sensing image building vectorization method based on deep learning and point set optimization
CN110675421B (en) Depth image collaborative segmentation method based on few labeling frames
CN114612660A (en) Three-dimensional modeling method based on multi-feature fusion point cloud segmentation
CN111126389A (en) Text detection method and device, electronic equipment and storage medium
CN113313830B (en) Encoding point cloud feature extraction method based on multi-branch graph convolutional neural network
CN113627093A (en) Underwater mechanism cross-scale flow field characteristic prediction method based on improved Unet network
CN111460193A (en) Three-dimensional model classification method based on multi-mode information fusion
CN114187506A (en) Remote sensing image scene classification method of viewpoint-aware dynamic routing capsule network
CN117593666B (en) Geomagnetic station data prediction method and system for aurora image
CN116310339A (en) Remote sensing image segmentation method based on matrix decomposition enhanced global features
CN112668662B (en) Outdoor mountain forest environment target detection method based on improved YOLOv3 network
CN114519293A (en) Cable body fault identification method based on hand sample machine learning model
CN113569814A (en) Unsupervised pedestrian re-identification method based on feature consistency
CN113313831B (en) Three-dimensional model feature extraction method based on polar coordinate graph convolution neural network
CN117788810A (en) Learning system for unsupervised semantic segmentation
Cao et al. Label-efficient deep learning-based semantic segmentation of building point clouds at LOD3 level
CN116704596A (en) Human behavior recognition method based on skeleton sequence
CN115849202A (en) Intelligent crane operation target identification method based on digital twin technology
CN116386042A (en) Point cloud semantic segmentation model based on three-dimensional pooling spatial attention mechanism
Li et al. Few-shot meta-learning on point cloud for semantic segmentation
CN116778161A (en) Point cloud semantic segmentation method based on joint transform and sparse convolution
CN113033669B (en) Visual scene recognition method based on learnable feature map filtering and graph annotation meaning network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant