CN113313831A - Three-dimensional model feature extraction method based on polar coordinate graph convolutional neural network - Google Patents

Info

Publication number: CN113313831A (application CN202110565190.1A); granted as CN113313831B
Authority: CN (China)
Other languages: Chinese (zh)
Prior art keywords: point cloud, patch, formula, vertex, model
Inventors: 周燕 (Zhou Yan), 徐雪妙 (Xu Xuemiao)
Assignee: South China University of Technology (SCUT)
Priority and filing date: 2021-05-24
Publication dates: 2021-08-27 (CN113313831A); 2022-12-16 (grant publication, CN113313831B)
Legal status: Active, granted (status as listed by Google Patents; not a legal conclusion)

Classifications

    • G06T 17/00, 17/20: Three-dimensional [3D] modelling; finite element generation, e.g. wire-frame surface description, tessellation
    • G06N 3/02, 3/04, 3/045: Computing arrangements based on biological models; neural networks; architectures; combinations of networks
    • G06T 7/60, 7/62: Image analysis; analysis of geometric attributes of area, perimeter, diameter or volume
    • G06T 7/66: Analysis of geometric attributes of image moments or centre of gravity
    • G06T 2207/10004, 2207/10012: Image acquisition modality; still image, photographic image, stereo images
    • G06T 2207/20081: Special algorithmic details; training, learning
    • G06T 2207/20084: Special algorithmic details; artificial neural networks [ANN]


Abstract

The invention discloses a three-dimensional model feature extraction method based on a polar coordinate graph convolutional neural network, which comprises the following steps: first, point clouds are uniformly generated and sampled from three-dimensional mesh model data using an improved point cloud generation method; second, the point cloud model is standardized and aligned using the calculated volume weighted centroid; third, a polar coordinate representation and a three-dimensional rectangular coordinate representation of the point cloud are constructed and combined into a composite representation; finally, a graph convolutional neural network models the composite representation, capturing local neighborhood and global information and extracting the features of the three-dimensional model. The method can extract shape content features of the three-dimensional model with transformation invariance and high discrimination, laying the foundation for subsequent tasks such as classification, recognition and retrieval.

Description

Three-dimensional model feature extraction method based on polar coordinate graph convolutional neural network
Technical Field
The invention relates to the technical field of three-dimensional model classification identification and retrieval, in particular to a three-dimensional model feature extraction method based on a polar coordinate graph convolutional neural network.
Background
At the present stage, effectively extracting low-dimensional, highly discriminative shape content features of three-dimensional models facilitates their classification, retrieval and related tasks, so new three-dimensional model feature extraction methods are an important research topic in the current field of three-dimensional computer vision. Existing approaches have several shortcomings. First, traditional point-cloud-based methods mostly adopt a single three-dimensional rectangular coordinate system as the network input and lack auxiliary encoding information. Second, traditional methods that sample point clouds from meshes mostly use piecewise interpolation without a sampling strategy and easily ignore the actual sizes of the patches, so the collected point sets are not uniform enough. Third, models are generally affected by rotation, translation, scale and other transformations. Finally, using a traditional multi-layer perceptron (MLP) as the network feature extractor cannot effectively model non-Euclidean geometric data such as point clouds and has difficulty capturing effective information from a model's local neighborhoods, which limits performance improvement.
Disclosure of Invention
The invention aims to overcome the defects of the prior art and provides a three-dimensional model feature extraction method based on a polar coordinate graph convolutional neural network, which can extract shape content features of a three-dimensional model with transformation invariance and high discrimination and lay the foundation for subsequent tasks such as classification, recognition and retrieval.
To achieve this purpose, the technical scheme provided by the invention is as follows: a three-dimensional model feature extraction method based on a polar coordinate graph convolutional neural network, comprising the following steps:
S1, acquiring a plurality of three-dimensional mesh model data, including a vertex set and a patch set;
S2, based on the improved point cloud generation method, performing point cloud generation for each three-dimensional mesh model data according to a threshold judgment, and acquiring the corresponding first point cloud;
S3, acquiring the corresponding volume weighted centroid for each three-dimensional mesh model data;
S4, based on the first point cloud and the volume weighted centroid, constructing a unit Gaussian sphere centered on the volume weighted centroid that wraps the first point cloud through translation, scale and rotation transformations, converting the first point cloud into a standard unified coordinate space, and acquiring the standardized and aligned second point cloud;
S5, based on the second point cloud, projecting it to the polar coordinate system to obtain its polar coordinate representation, and concatenating the polar coordinate representation of the second point cloud with the three-dimensional rectangular coordinate representation of the second point cloud to obtain the second point cloud with composite representation;
S6, based on the polar coordinate graph convolutional neural network model, acquiring the corresponding deep learning features for each second point cloud with composite representation.
Further, in step S1, the three-dimensional mesh model data is read, and the vertex set V = {v_i | i = 1, 2, ..., n} and the patch set F = {f_j | j = 1, 2, ..., m} of the three-dimensional mesh model are acquired; where v_i denotes the i-th vertex element, v_i = (v_i^1, v_i^2, v_i^3) is the three-dimensional rectangular coordinate representation of the vertex element, n is the number of vertex elements in the vertex set, f_j denotes the j-th patch element, m is the number of patch elements in the patch set, and the patch set stores patch element information as vertex indices on the patch elements.
Further, the step S2 includes the steps of:
S201, based on the patch set F = {f_j | j = 1, 2, ..., m}, where m is the number of patch elements in the patch set, the area of each patch element in the patch set is calculated by Heron's formula:

S(f_j) = \sqrt{ p_j (p_j - a_j)(p_j - b_j)(p_j - c_j) }

where

p_j = \frac{a_j + b_j + c_j}{2}, \quad a_j = \| v_{j1} - v_{j2} \|_2, \quad b_j = \| v_{j1} - v_{j3} \|_2, \quad c_j = \| v_{j2} - v_{j3} \|_2

In the formula, S(f_j) denotes the area of the j-th patch element f_j in the patch set, v_{j1}, v_{j2}, v_{j3} are the three vertices of the patch element f_j, a_j, b_j and c_j are the two-norms of the vectors formed by the vertex pairs (v_{j1}, v_{j2}), (v_{j1}, v_{j3}) and (v_{j2}, v_{j3}) respectively, and p_j is the semi-perimeter, an intermediate process variable calculated from a_j, b_j and c_j;
S202, based on the area S(f_j) of each patch element in the patch set, the mean area of all patch elements in the patch set is calculated and taken as the threshold \bar{S}:

\bar{S} = \frac{1}{m} \sum_{j=1}^{m} S(f_j)
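For concreteness, a minimal NumPy sketch of steps S201 and S202 follows; the function name patch_areas and the arguments vertices (an n × 3 array) and faces (an m × 3 array of vertex indices) are illustrative names, not from the patent.

```python
import numpy as np

def patch_areas(vertices: np.ndarray, faces: np.ndarray) -> np.ndarray:
    """Step S201: area of each triangular patch element via Heron's formula."""
    v1, v2, v3 = vertices[faces[:, 0]], vertices[faces[:, 1]], vertices[faces[:, 2]]
    a = np.linalg.norm(v1 - v2, axis=1)  # a_j = ||v_j1 - v_j2||_2
    b = np.linalg.norm(v1 - v3, axis=1)  # b_j = ||v_j1 - v_j3||_2
    c = np.linalg.norm(v2 - v3, axis=1)  # c_j = ||v_j2 - v_j3||_2
    p = (a + b + c) / 2.0                # semi-perimeter p_j
    # Clamp at zero to guard against tiny negative values from round-off.
    return np.sqrt(np.clip(p * (p - a) * (p - b) * (p - c), 0.0, None))

# Step S202: the mean patch area serves as the threshold, e.g.
# areas = patch_areas(V, F); S_bar = areas.mean()
```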
S203, the original point cloud generation method performs the point cloud generation operation directly, without considering the distribution of patch element areas within the three-dimensional mesh model; the improved point cloud generation method adds a condition judgment and performs the point cloud generation operation on patch elements selectively, according to the threshold. The point cloud generation operation performs linear interpolation based on the information of the patch elements in the patch set and calculates a new vertex set, as follows:
Based on the patch set F = {f_j | j = 1, 2, ..., m}, the point cloud generation operation is performed on patch elements whose area is larger than the threshold \bar{S}, and is not performed on patch elements whose area is smaller than the threshold \bar{S}, so as to obtain the point cloud generated by the corresponding three-dimensional mesh model:

Set(f_j) = \{ v'_j \mid v'_j = o + \omega_2 (p - o) \}

o = (1 - \omega_1) v_{j1} + \omega_1 v_{j2}, \quad p = (1 - \omega_1) v_{j1} + \omega_1 v_{j3}

\tilde{V} = V \cup \bigcup_{S(f_j) > \bar{S}} Set(f_j)

In the above formulas, Set(f_j) is the set of vertices interpolated on the j-th patch element f_j of the patch set, v'_j denotes a vertex of that set, q_1 and q_2 are the numbers of divisions of the interval [0, 1] over which the interpolation weights \omega_1 and \omega_2 are stepped, \omega_1, \omega_2, o and p are intermediate process variables, and \tilde{V} is the point cloud generated by the corresponding three-dimensional mesh model;
S204, based on the point cloud \tilde{V} generated by the corresponding three-dimensional mesh model, a point cloud with a fixed number of vertices is collected according to the farthest point sampling algorithm or a random sampling method, giving the first point cloud corresponding to the three-dimensional mesh model:

V' = Sample\_Function(\tilde{V}) = \{ v'_k \mid k = 1, 2, ..., n' \}

where Sample_Function is the farthest point sampling algorithm function or the random sampling algorithm function, V' is the first point cloud corresponding to the three-dimensional mesh model, n' denotes the number of vertex elements to be sampled into the first point cloud, and v'_k is the k-th element of the first point cloud.
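Steps S203 and S204 can be sketched as follows, assuming the linear interpolation scheme reconstructed above and a greedy farthest point sampler; densify reuses patch_areas from the previous sketch, and all names are illustrative.

```python
import numpy as np

def densify(vertices, faces, q1=4, q2=4):
    """Step S203: interpolate extra points on patches larger than the mean area."""
    areas = patch_areas(vertices, faces)      # patch_areas from the sketch above
    pts = [vertices]
    for f in faces[areas > areas.mean()]:
        v1, v2, v3 = vertices[f[0]], vertices[f[1]], vertices[f[2]]
        for t1 in range(q1 + 1):
            w1 = t1 / q1
            o = (1 - w1) * v1 + w1 * v2       # slide along edge v_j1 - v_j2
            p = (1 - w1) * v1 + w1 * v3       # slide along edge v_j1 - v_j3
            for t2 in range(q2 + 1):
                w2 = t2 / q2
                pts.append((o + w2 * (p - o))[None, :])
    return np.concatenate(pts, axis=0)

def farthest_point_sample(points, n):
    """Step S204: greedily pick n points, each farthest from those already picked."""
    chosen = np.zeros(n, dtype=np.int64)      # first pick defaults to index 0
    dist = np.full(len(points), np.inf)
    for i in range(1, n):
        dist = np.minimum(dist, np.linalg.norm(points - points[chosen[i - 1]], axis=1))
        chosen[i] = int(dist.argmax())
    return points[chosen]
```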
Further, the step S3 includes the steps of:
S301, for each three-dimensional mesh model, based on the vertex set V = {v_i | i = 1, 2, ..., n}, the corresponding centroid is calculated as:

\bar{v} = \frac{1}{n} \sum_{i=1}^{n} v_i

where \bar{v} denotes the centroid of the three-dimensional mesh model, v_i is the i-th vertex element in the vertex set, and n is the number of vertex elements in the vertex set.
S302, based on the patch set F = {f_j | j = 1, 2, ..., m}, where m is the number of patch elements in the patch set, the volume of the tetrahedron formed by the centroid \bar{v} and each patch element f_j of the patch set is calculated, and the center of gravity of each patch element is obtained:

Vol_j = \frac{1}{6} \left| (v_{j1} - \bar{v}) \cdot \big( (v_{j2} - \bar{v}) \times (v_{j3} - \bar{v}) \big) \right|

g_j = \frac{1}{3} (v_{j1} + v_{j2} + v_{j3})

where Vol_j denotes the volume of the tetrahedron formed by the centroid \bar{v} and the j-th patch element f_j, v_{j1}, v_{j2}, v_{j3} are the three vertices on the patch element f_j, and g_j denotes the center of gravity of the j-th patch element f_j;
S303, based on the tetrahedron volumes Vol_j and the centers of gravity g_j of the patch elements in the patch set, the volume weighted centroid corresponding to the three-dimensional mesh model is calculated as:

c_w = \frac{\sum_{j=1}^{m} Vol_j \, g_j}{\sum_{j=1}^{m} Vol_j}

where c_w denotes the volume weighted centroid corresponding to the three-dimensional mesh model.
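A NumPy sketch of steps S301 to S303 under the formulas above (taking the triangle barycenter for g_j is an assumption):

```python
import numpy as np

def volume_weighted_centroid(vertices, faces):
    """Steps S301-S303: centroid, per-patch tetrahedron volumes, weighted centroid."""
    v_bar = vertices.mean(axis=0)                     # S301: plain vertex centroid
    v1, v2, v3 = (vertices[faces[:, i]] for i in range(3))
    # S302: |triple product| / 6 = volume of tetrahedron (v_bar, v_j1, v_j2, v_j3).
    vols = np.abs(np.einsum('ij,ij->i', v1 - v_bar,
                            np.cross(v2 - v_bar, v3 - v_bar))) / 6.0
    g = (v1 + v2 + v3) / 3.0                          # per-patch center of gravity
    # S303: volume-weighted average of the patch centers of gravity.
    return (vols[:, None] * g).sum(axis=0) / vols.sum()
```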
Further, the step S4 includes the steps of:
S401, based on the first point cloud V' = {v'_k | k = 1, 2, ..., n'} and the volume weighted centroid c_w, where v'_k is the k-th element of the first point cloud and n' denotes the number of vertex elements in the first point cloud, the first point cloud is translated to the volume weighted centroid, giving the first point cloud after translation transformation:

v''_k = v'_k - c_w

V'' = \{ v''_k \mid k = 1, 2, ..., n' \}

where v''_k denotes the k-th element of the first point cloud after translation transformation, and V'' is the first point cloud after translation transformation;
S402, based on the first point cloud V'' = {v''_k | k = 1, 2, ..., n'} after translation transformation, a scale transformation factor is calculated and the transformed model is computed, giving the first point cloud after translation and scale transformation:

s = \frac{1}{\max_k \| v''_k \|_2}

v'''_k = s \cdot v''_k, \quad v''_k \in V''

V''' = \{ v'''_k \mid k = 1, 2, ..., n' \}

where s is the scale transformation factor (chosen so that the transformed point cloud fits inside the unit Gaussian sphere) and v'''_k is the k-th element of the first point cloud V''' after translation and scale transformation;
S403, based on the first point cloud V''' = {v'''_k | k = 1, 2, ..., n'} after translation and scale transformation, the rotation matrix R is calculated; the steps of obtaining the rotation matrix comprise:

S4031, based on the first point cloud V''' after translation and scale transformation, constructing the covariance matrix (V''')^T \cdot V''';

S4032, performing eigendecomposition of the covariance matrix (V''')^T \cdot V''' to obtain the eigenvectors corresponding to the three largest eigenvalues, Vector_{3×3};

S4033, constructing the rotation matrix R by the formula:

R = Vector_{3×3} \cdot I

where I is the identity matrix of size 3 × 3;
S404, based on the first point cloud V''' = {v'''_k | k = 1, 2, ..., n'} after translation and scale transformation and the rotation matrix R, V''' is further rotated, giving the second point cloud after translation, scale and rotation transformation:

\hat{v}_k = R \cdot v'''_k

\hat{V} = \{ \hat{v}_k \mid k = 1, 2, ..., n' \}

where \hat{v}_k is the k-th element of the second point cloud after translation, scale and rotation transformation, and \hat{V} is the second point cloud after translation, scale and rotation transformation.
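Step S4 as a whole can be sketched as follows; choosing the largest point norm as the scale factor and NumPy's eigh for the eigendecomposition are assumptions consistent with wrapping the cloud in the unit Gaussian sphere:

```python
import numpy as np

def normalize_and_align(points, c_w):
    """Step S4: translate to the volume weighted centroid, scale into the
    unit sphere, and rotate onto the principal axes."""
    p = points - c_w                              # S401: translation
    p = p / np.linalg.norm(p, axis=1).max()       # S402: scale factor s
    cov = p.T @ p                                 # S4031: 3x3 covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)        # S4032: eigendecomposition
    R = eigvecs[:, np.argsort(eigvals)[::-1]]     # eigenvectors, largest first
    return p @ R                                  # S404: rotation transformation
```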
Further, the step S5 includes the steps of:
S501, based on the second point cloud \hat{V} = \{ \hat{v}_k \mid k = 1, 2, ..., n' \}, where \hat{v}_k = (\hat{v}_k^1, \hat{v}_k^2, \hat{v}_k^3) is the k-th element of the second point cloud after translation, scale and rotation transformation, expressed in the three-dimensional rectangular coordinate system, and n' denotes the number of vertex elements in the second point cloud, the second point cloud is projected to the polar coordinate system, giving the polar coordinate representation of the second point cloud:

(\theta_k, \varphi_k, r_k) = f_{sph}(\hat{v}_k)

where (\theta_k, \varphi_k, r_k) is the polar coordinate representation of the second point cloud and f_{sph} is the polar coordinate system projection operator:

r_k = \| \hat{v}_k \|_2, \quad \theta_k = \arccos( \hat{v}_k^3 / r_k ), \quad \varphi_k = \arctan( \hat{v}_k^2 / \hat{v}_k^1 )
S502, the polar coordinate representation (\theta_k, \varphi_k, r_k) of the second point cloud is concatenated with the three-dimensional rectangular coordinate representation (\hat{v}_k^1, \hat{v}_k^2, \hat{v}_k^3) of the second point cloud, giving the second point cloud with composite representation \hat{C} = \{ \hat{c}_k \mid k = 1, 2, ..., n' \}, where \hat{c}_k = (\hat{v}_k^1, \hat{v}_k^2, \hat{v}_k^3, \theta_k, \varphi_k, r_k).
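A sketch of step S5 with the conventional spherical-coordinate formulas given above (the exact angle conventions of f_sph are an assumption):

```python
import numpy as np

def composite_representation(points):
    """Step S5: append the polar (spherical) coordinates to each point,
    producing an n' x 6 composite representation."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.linalg.norm(points, axis=1)
    theta = np.arccos(np.clip(z / np.maximum(r, 1e-12), -1.0, 1.0))  # polar angle
    phi = np.arctan2(y, x)                                           # azimuth angle
    return np.concatenate([points, np.stack([theta, phi, r], axis=1)], axis=1)
```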
Further, the step S6 includes the steps of:
S601, obtaining the corresponding deep learning features based on the second point cloud with composite representation \hat{C} and the polar coordinate graph convolutional neural network model; the steps of obtaining the polar coordinate graph convolutional neural network model comprise:
S6011, designing the network structure of the polar coordinate graph convolutional neural network model in the graph convolutional network manner; the input of the polar coordinate graph convolutional neural network model is the second point cloud with composite representation \hat{C}, and the output is the corresponding deep learning feature. The structure of the polar coordinate graph convolutional neural network model comprises a graph building module, residual dynamic graph convolution blocks, a fusion module and a prediction module. The graph building module uses a hole (dilated) k-nearest-neighbor algorithm on the second point cloud with composite representation \hat{C} to construct the hole k-nearest-neighbor graph representations of 3 corresponding branches as input; the residual dynamic graph convolution block comprises an EdgeConv edge convolution layer based on dynamic graph construction with a residual graph connection, and embeds an SE attention block; the fusion module comprises a 1 × 1 convolution layer, followed by a Batch Normalization function and a LeakyReLU activation function, and two pooling layers using max pooling and average pooling respectively; the prediction module comprises two fully connected layers, the first of which is followed by a Batch Normalization function, a LeakyReLU activation function and Dropout random deactivation;
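As an illustration of S6011's two distinctive ingredients, the following PyTorch sketch shows a hole (dilated) k-nearest-neighbor graph builder and a residual EdgeConv block with an embedded SE attention block; channel widths, k, the dilation d and the aggregation details are assumptions for illustration, not the patent's exact architecture.

```python
import torch
import torch.nn as nn

def dilated_knn_graph(x, k=20, d=2):
    """Hole (dilated) k-NN graph: from the k*d nearest neighbours keep
    every d-th one, which enlarges the receptive field for the same k."""
    dist = torch.cdist(x, x)                                      # (B, N, N)
    idx = dist.topk(k * d + 1, largest=False).indices[:, :, 1:]   # drop self
    return idx[:, :, ::d]                                         # (B, N, k)

class SEBlock(nn.Module):
    """Squeeze-and-excitation attention over the feature channels."""
    def __init__(self, c, r=4):
        super().__init__()
        self.fc = nn.Sequential(nn.Linear(c, c // r), nn.ReLU(),
                                nn.Linear(c // r, c), nn.Sigmoid())

    def forward(self, x):                    # x: (B, N, C)
        w = self.fc(x.mean(dim=1))           # squeeze over the point dimension
        return x * w.unsqueeze(1)            # channel re-weighting

class ResidualEdgeConv(nn.Module):
    """EdgeConv on a dynamically rebuilt dilated k-NN graph, with an
    embedded SE block and a residual connection."""
    def __init__(self, c_in, c_out, k=20, d=2):
        super().__init__()
        self.k, self.d = k, d
        self.mlp = nn.Sequential(nn.Linear(2 * c_in, c_out), nn.LeakyReLU(0.2))
        self.se = SEBlock(c_out)
        self.skip = nn.Linear(c_in, c_out) if c_in != c_out else nn.Identity()

    def forward(self, x):                    # x: (B, N, C_in)
        B, N, C = x.shape
        idx = dilated_knn_graph(x, self.k, self.d)
        nbrs = torch.gather(x.unsqueeze(1).expand(B, N, N, C), 2,
                            idx.unsqueeze(-1).expand(B, N, self.k, C))
        centre = x.unsqueeze(2).expand_as(nbrs)
        edge = torch.cat([centre, nbrs - centre], dim=-1)   # edge features
        out = self.mlp(edge).max(dim=2).values              # neighbourhood max
        return self.se(out) + self.skip(x)                  # residual + SE
```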
S6012, based on the second point clouds with composite representation \hat{C}, a database for network training is built and divided into 80% training set and 20% verification set, where the intersection of the training set and the verification set is empty and each second point cloud is paired with its labeled real class label. On the training set, the second point cloud with composite representation \hat{C} is input into the polar coordinate graph convolutional neural network model to obtain the output feature vector and the classification probability; the difference between the classification probability and the real class label is calculated and used to reversely adjust the parameter values of the polar coordinate graph convolutional neural network model. On the verification set, the second point cloud with composite representation \hat{C} is input into the polar coordinate graph convolutional neural network model to obtain the output feature vector and the classification probability; the difference between the classification probability and the real class label is calculated to evaluate the performance of the polar coordinate graph convolutional neural network model. When training is finished, the output feature vector is used as the feature representing the three-dimensional model;
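A minimal sketch of one training step of S6012; model (assumed to return classification logits plus the feature vector), train_loader and the use of cross-entropy as the measure of the difference between classification probability and real class label are placeholder assumptions.

```python
import torch
import torch.nn.functional as F

# Assumed: `model` maps a (B, n', 6) composite point cloud to
# (classification logits, feature vector); `train_loader` yields (points, labels).
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
for points, labels in train_loader:
    logits, features = model(points)
    loss = F.cross_entropy(logits, labels)   # difference to the real class labels
    optimizer.zero_grad()
    loss.backward()                          # reversely adjust the parameter values
    optimizer.step()
# After training, `features` serves as the feature representing the 3D model.
```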
S602, the second point cloud with composite representation \hat{C} is input into the polar coordinate graph convolutional neural network model, and the corresponding deep learning features are extracted.
Compared with the prior art, the invention has the following advantages and beneficial effects:
the invention samples and generates the three-dimensional grid model into a uniformly distributed point cloud set by an improved point cloud generation method. By calculating volume weighted centroid coordinates in the original three-dimensional mesh model, the model can be made to take into account volume set information when performing normalization and alignment operations. By constructing the graph of the k neighbor graph representation of the 3 branch cavities, the subsequent graph convolution neural network can obtain a larger receptive field. By combining an attention mechanism, the polar coordinate graph convolutional neural network model better considers information on the dimension of the characteristic channel, and can better model the local and global characteristic information of the model. The technical process of the invention can reduce the calculated amount in the sampling process and avoid the influence on feature extraction caused by translation, proportion, rotation and other transformations, thereby laying a foundation for the subsequent graph convolution neural network modeling.
Drawings
FIG. 1 is a schematic flow diagram of the process of the present invention.
Fig. 2 is a schematic diagram of a polar plot convolutional neural network model acquisition process.
FIG. 3 is a schematic diagram of an application process of a polar coordinate graph convolutional neural network model in three-dimensional model feature extraction.
Detailed Description
The present invention will be described in further detail with reference to examples and drawings, but the present invention is not limited thereto.
Referring to fig. 1, the method for extracting three-dimensional model features based on a polar coordinate graph convolutional neural network provided in this embodiment includes the following steps:
s1, obtaining a plurality of three-dimensional mesh model data, including a vertex set and a patch set, as follows:
Reading the three-dimensional mesh model data, the vertex set V = {v_i | i = 1, 2, ..., n} and the patch set F = {f_j | j = 1, 2, ..., m} of the three-dimensional mesh model are acquired; where v_i denotes the i-th vertex element, v_i = (v_i^1, v_i^2, v_i^3) is the three-dimensional rectangular coordinate representation of the vertex element, n is the number of vertex elements in the vertex set, f_j denotes the j-th patch element, m is the number of patch elements in the patch set, and the patch set stores patch element information as vertex indices on the patch elements.
S2, based on the improved point cloud generating method, for each three-dimensional grid model data, according to threshold setting judgment, point cloud generation is carried out, and a first point cloud corresponding to the point cloud generation is obtained, wherein the specific process is as follows:
S201, based on the patch set F = {f_j | j = 1, 2, ..., m}, where m is the number of patch elements in the patch set, the area of each patch element in the patch set is calculated by Heron's formula:

S(f_j) = \sqrt{ p_j (p_j - a_j)(p_j - b_j)(p_j - c_j) }

where

p_j = \frac{a_j + b_j + c_j}{2}, \quad a_j = \| v_{j1} - v_{j2} \|_2, \quad b_j = \| v_{j1} - v_{j3} \|_2, \quad c_j = \| v_{j2} - v_{j3} \|_2

In the formula, S(f_j) denotes the area of the j-th patch element f_j in the patch set, v_{j1}, v_{j2}, v_{j3} are the three vertices of the patch element f_j, a_j, b_j and c_j are the two-norms of the vectors formed by the vertex pairs (v_{j1}, v_{j2}), (v_{j1}, v_{j3}) and (v_{j2}, v_{j3}) respectively, and p_j is the semi-perimeter, an intermediate process variable calculated from a_j, b_j and c_j;
S202, based on the area S(f_j) of each patch element in the patch set, the mean area of all patch elements in the patch set is calculated and taken as the threshold \bar{S}:

\bar{S} = \frac{1}{m} \sum_{j=1}^{m} S(f_j)
S203, the original point cloud generation method performs the point cloud generation operation directly, without considering the distribution of patch element areas within the three-dimensional mesh model; the improved point cloud generation method adds a condition judgment and performs the point cloud generation operation on patch elements selectively, according to the threshold. The point cloud generation operation performs linear interpolation based on the information of the patch elements in the patch set and calculates a new vertex set, as follows:
Based on the patch set F = {f_j | j = 1, 2, ..., m}, the point cloud generation operation is performed on patch elements whose area is larger than the threshold \bar{S}, and is not performed on patch elements whose area is smaller than the threshold \bar{S}, so as to obtain the point cloud generated by the corresponding three-dimensional mesh model:

Set(f_j) = \{ v'_j \mid v'_j = o + \omega_2 (p - o) \}

o = (1 - \omega_1) v_{j1} + \omega_1 v_{j2}, \quad p = (1 - \omega_1) v_{j1} + \omega_1 v_{j3}

\tilde{V} = V \cup \bigcup_{S(f_j) > \bar{S}} Set(f_j)

In the above formulas, Set(f_j) is the set of vertices interpolated on the j-th patch element f_j of the patch set, v'_j denotes a vertex of that set, q_1 and q_2 are the numbers of divisions of the interval [0, 1] over which the interpolation weights \omega_1 and \omega_2 are stepped, \omega_1, \omega_2, o and p are intermediate process variables, and \tilde{V} is the point cloud generated by the corresponding three-dimensional mesh model;
S204, based on the point cloud \tilde{V} generated by the corresponding three-dimensional mesh model, a point cloud with a fixed number of vertices is collected according to the farthest point sampling algorithm or a random sampling method, giving the first point cloud corresponding to the three-dimensional mesh model:

V' = Sample\_Function(\tilde{V}) = \{ v'_k \mid k = 1, 2, ..., n' \}

where Sample_Function is the farthest point sampling algorithm function or the random sampling algorithm function, V' is the first point cloud corresponding to the three-dimensional mesh model, n' denotes the number of vertex elements to be sampled into the first point cloud, and v'_k is the k-th element of the first point cloud.
S3, for each three-dimensional grid model data, obtaining the corresponding volume weighted centroid, and the specific process is as follows:
S301, for each three-dimensional mesh model, based on the vertex set V = {v_i | i = 1, 2, ..., n}, the corresponding centroid is calculated as:

\bar{v} = \frac{1}{n} \sum_{i=1}^{n} v_i

where \bar{v} denotes the centroid of the three-dimensional mesh model, v_i is the i-th vertex element in the vertex set, and n is the number of vertex elements in the vertex set.
S302, based on the patch set F = {f_j | j = 1, 2, ..., m}, where m is the number of patch elements in the patch set, the volume of the tetrahedron formed by the centroid \bar{v} and each patch element f_j of the patch set is calculated, and the center of gravity of each patch element is obtained:

Vol_j = \frac{1}{6} \left| (v_{j1} - \bar{v}) \cdot \big( (v_{j2} - \bar{v}) \times (v_{j3} - \bar{v}) \big) \right|

g_j = \frac{1}{3} (v_{j1} + v_{j2} + v_{j3})

where Vol_j denotes the volume of the tetrahedron formed by the centroid \bar{v} and the j-th patch element f_j, v_{j1}, v_{j2}, v_{j3} are the three vertices on the patch element f_j, and g_j denotes the center of gravity of the j-th patch element f_j;
S303, based on the tetrahedron volumes Vol_j and the centers of gravity g_j of the patch elements in the patch set, the volume weighted centroid corresponding to the three-dimensional mesh model is calculated as:

c_w = \frac{\sum_{j=1}^{m} Vol_j \, g_j}{\sum_{j=1}^{m} Vol_j}

where c_w denotes the volume weighted centroid corresponding to the three-dimensional mesh model.
S4, based on the first point cloud and the volume weighted centroid, through translation, proportion and rotation transformation, constructing a unit Gaussian sphere with the volume weighted centroid as a sphere center to wrap the first point cloud, realizing the operation of converting the first point cloud into a standard unified coordinate space, and acquiring a standardized and aligned second point cloud, wherein the specific process is as follows:
S401, based on the first point cloud V' = {v'_k | k = 1, 2, ..., n'} and the volume weighted centroid c_w, where v'_k is the k-th element of the first point cloud and n' denotes the number of vertex elements in the first point cloud, the first point cloud is translated to the volume weighted centroid, giving the first point cloud after translation transformation:

v''_k = v'_k - c_w

V'' = \{ v''_k \mid k = 1, 2, ..., n' \}

where v''_k denotes the k-th element of the first point cloud after translation transformation, and V'' is the first point cloud after translation transformation;
S402, based on the first point cloud V'' = {v''_k | k = 1, 2, ..., n'} after translation transformation, a scale transformation factor is calculated and the transformed model is computed, giving the first point cloud after translation and scale transformation:

s = \frac{1}{\max_k \| v''_k \|_2}

v'''_k = s \cdot v''_k, \quad v''_k \in V''

V''' = \{ v'''_k \mid k = 1, 2, ..., n' \}

where s is the scale transformation factor (chosen so that the transformed point cloud fits inside the unit Gaussian sphere) and v'''_k is the k-th element of the first point cloud V''' after translation and scale transformation;
S403, based on the first point cloud V''' = {v'''_k | k = 1, 2, ..., n'} after translation and scale transformation, the rotation matrix R is calculated; the steps of obtaining the rotation matrix comprise:
S4031, based on the first point cloud V''' after translation and scale transformation, constructing the covariance matrix (V''')^T \cdot V''';
S4032, performing eigendecomposition of the covariance matrix (V''')^T \cdot V''' to obtain the eigenvectors corresponding to the three largest eigenvalues, Vector_{3×3};
S4033, constructing the rotation matrix R by the formula:

R = Vector_{3×3} \cdot I

where I is the identity matrix of size 3 × 3;
S404, based on the first point cloud V''' = {v'''_k | k = 1, 2, ..., n'} after translation and scale transformation and the rotation matrix R, V''' is further rotated, giving the second point cloud after translation, scale and rotation transformation:

\hat{v}_k = R \cdot v'''_k

\hat{V} = \{ \hat{v}_k \mid k = 1, 2, ..., n' \}

where \hat{v}_k is the k-th element of the second point cloud after translation, scale and rotation transformation, and \hat{V} is the second point cloud after translation, scale and rotation transformation.
S5, projecting the second point cloud to a polar coordinate system based on the second point cloud, obtaining polar coordinate representation of the second point cloud, splicing the polar coordinate representation of the second point cloud and three-dimensional space rectangular coordinate representation of the second point cloud, and obtaining the second point cloud with composite representation, wherein the specific process is as follows:
S501, based on the second point cloud \hat{V} = \{ \hat{v}_k \mid k = 1, 2, ..., n' \}, where \hat{v}_k = (\hat{v}_k^1, \hat{v}_k^2, \hat{v}_k^3) is the k-th element of the second point cloud after translation, scale and rotation transformation, expressed in the three-dimensional rectangular coordinate system, and n' denotes the number of vertex elements in the second point cloud, the second point cloud is projected to the polar coordinate system, giving the polar coordinate representation of the second point cloud:

(\theta_k, \varphi_k, r_k) = f_{sph}(\hat{v}_k)

where (\theta_k, \varphi_k, r_k) is the polar coordinate representation of the second point cloud and f_{sph} is the polar coordinate system projection operator:

r_k = \| \hat{v}_k \|_2, \quad \theta_k = \arccos( \hat{v}_k^3 / r_k ), \quad \varphi_k = \arctan( \hat{v}_k^2 / \hat{v}_k^1 )
S502, the polar coordinate representation (\theta_k, \varphi_k, r_k) of the second point cloud is concatenated with the three-dimensional rectangular coordinate representation (\hat{v}_k^1, \hat{v}_k^2, \hat{v}_k^3) of the second point cloud, giving the second point cloud with composite representation \hat{C} = \{ \hat{c}_k \mid k = 1, 2, ..., n' \}, where \hat{c}_k = (\hat{v}_k^1, \hat{v}_k^2, \hat{v}_k^3, \theta_k, \varphi_k, r_k).
S6, referring to fig. 2, based on the polar-coordinate graph convolutional neural network model, for each second point cloud having a composite representation, obtaining a corresponding deep learning feature, where the specific process is as follows:
S601, obtaining the corresponding deep learning features based on the second point cloud with composite representation \hat{C} and the polar coordinate graph convolutional neural network model; the steps of obtaining the polar coordinate graph convolutional neural network model comprise:
S6011, designing the network structure of the polar coordinate graph convolutional neural network model in the graph convolutional network manner; the input of the polar coordinate graph convolutional neural network model is the second point cloud with composite representation \hat{C}, and the output is the corresponding deep learning feature. The structure of the polar coordinate graph convolutional neural network model comprises a graph building module, residual dynamic graph convolution blocks, a fusion module and a prediction module. The graph building module uses a hole (dilated) k-nearest-neighbor algorithm on the second point cloud with composite representation \hat{C} to construct the hole k-nearest-neighbor graph representations of 3 corresponding branches as input; the residual dynamic graph convolution block comprises an EdgeConv edge convolution layer based on dynamic graph construction with a residual graph connection, and embeds an SE attention block; the fusion module comprises a 1 × 1 convolution layer, followed by a Batch Normalization function and a LeakyReLU activation function, and two pooling layers using max pooling and average pooling respectively; the prediction module comprises two fully connected layers, the first of which is followed by a Batch Normalization function, a LeakyReLU activation function and Dropout random deactivation;
S6012, based on the second point clouds with composite representation \hat{C}, a database for network training is built and divided into 80% training set and 20% verification set, where the intersection of the training set and the verification set is empty and each second point cloud is paired with its labeled real class label. On the training set, the second point cloud with composite representation \hat{C} is input into the polar coordinate graph convolutional neural network model to obtain the output feature vector and the classification probability; the difference between the classification probability and the real class label is calculated and used to reversely adjust the parameter values of the polar coordinate graph convolutional neural network model. On the verification set, the second point cloud with composite representation \hat{C} is input into the polar coordinate graph convolutional neural network model to obtain the output feature vector and the classification probability; the difference between the classification probability and the real class label is calculated to evaluate the performance of the polar coordinate graph convolutional neural network model. When training is finished, the output feature vector is used as the feature representing the three-dimensional model;
S602, the second point cloud with composite representation \hat{C} is input into the polar coordinate graph convolutional neural network model, and the corresponding deep learning features are extracted.
Referring to fig. 3, an application process of the above-mentioned polar-coordinate-diagram convolutional neural network model in three-dimensional model feature extraction in this embodiment includes:
step 1: reading three-dimensional mesh model data, and acquiring a vertex set and a patch set of the three-dimensional mesh model; the patch set stores patch element information by using the vertex index information on patch elements;
step 2: based on the improved point cloud generation method, performing point cloud generation for each three-dimensional mesh model according to a threshold judgment, and acquiring the corresponding first point cloud; wherein the threshold is the mean area of all patch elements in the patch set;
step 3: for each three-dimensional mesh model, acquiring the corresponding volume weighted centroid;
step 4: based on the first point cloud and the volume weighted centroid, constructing a unit Gaussian sphere centered on the volume weighted centroid that wraps the point cloud through translation, scale and rotation transformations, converting the first point cloud into a standard unified coordinate space, and acquiring the standardized and aligned second point cloud;
step 5: based on the second point cloud, projecting it to the polar coordinate system to obtain its polar coordinate representation, and concatenating the polar coordinate representation with the three-dimensional rectangular coordinate representation of the second point cloud to obtain the second point cloud with composite representation;
step 6: acquiring the corresponding deep learning features based on the second point cloud with composite representation and the polar coordinate graph convolutional neural network model; the polar coordinate graph convolutional neural network model is constructed from a graph building module, residual dynamic graph convolution blocks, a fusion module and a prediction module.
The above embodiments are preferred embodiments of the present invention, but the present invention is not limited to them; any other change, modification, substitution, combination or simplification that does not depart from the spirit and principle of the present invention should be construed as an equivalent replacement and is included within the protection scope of the present invention.

Claims (7)

1. A three-dimensional model feature extraction method based on a polar coordinate graph convolutional neural network, characterized by comprising the following steps:
S1, acquiring a plurality of three-dimensional mesh model data, including a vertex set and a patch set;
S2, based on the improved point cloud generation method, performing point cloud generation for each three-dimensional mesh model data according to a threshold judgment, and acquiring the corresponding first point cloud;
S3, acquiring the corresponding volume weighted centroid for each three-dimensional mesh model data;
S4, based on the first point cloud and the volume weighted centroid, constructing a unit Gaussian sphere centered on the volume weighted centroid that wraps the first point cloud through translation, scale and rotation transformations, converting the first point cloud into a standard unified coordinate space, and acquiring the standardized and aligned second point cloud;
S5, based on the second point cloud, projecting it to the polar coordinate system to obtain its polar coordinate representation, and concatenating the polar coordinate representation of the second point cloud with the three-dimensional rectangular coordinate representation of the second point cloud to obtain the second point cloud with composite representation;
S6, based on the polar coordinate graph convolutional neural network model, acquiring the corresponding deep learning features for each second point cloud with composite representation.
2. The method of claim 1, wherein in step S1 the three-dimensional mesh model data is read to obtain the vertex set V = {v_i | i = 1, 2, ..., n} and the patch set F = {f_j | j = 1, 2, ..., m} of the three-dimensional mesh model; where v_i denotes the i-th vertex element, v_i = (v_i^1, v_i^2, v_i^3) is the three-dimensional rectangular coordinate representation of the vertex element, n is the number of vertex elements in the vertex set, f_j denotes the j-th patch element, m is the number of patch elements in the patch set, and the patch set stores patch element information as vertex indices on the patch elements.
3. The method for extracting features from a three-dimensional model based on a polar graph convolutional neural network as claimed in claim 1, wherein the step S2 comprises the steps of:
S201, based on the patch set F = {f_j | j = 1, 2, ..., m}, where m is the number of patch elements in the patch set, the area of each patch element in the patch set is calculated by Heron's formula:

S(f_j) = \sqrt{ p_j (p_j - a_j)(p_j - b_j)(p_j - c_j) }

where

p_j = \frac{a_j + b_j + c_j}{2}, \quad a_j = \| v_{j1} - v_{j2} \|_2, \quad b_j = \| v_{j1} - v_{j3} \|_2, \quad c_j = \| v_{j2} - v_{j3} \|_2

In the formula, S(f_j) denotes the area of the j-th patch element f_j in the patch set, v_{j1}, v_{j2}, v_{j3} are the three vertices of the patch element f_j, a_j, b_j and c_j are the two-norms of the vectors formed by the vertex pairs (v_{j1}, v_{j2}), (v_{j1}, v_{j3}) and (v_{j2}, v_{j3}) respectively, and p_j is the semi-perimeter, an intermediate process variable calculated from a_j, b_j and c_j;
S202, based on the area S(f_j) of each patch element in the patch set, the mean area of all patch elements in the patch set is calculated and taken as the threshold \bar{S}:

\bar{S} = \frac{1}{m} \sum_{j=1}^{m} S(f_j)
S203, the original point cloud generation method performs the point cloud generation operation directly, without considering the distribution of patch element areas within the three-dimensional mesh model; the improved point cloud generation method adds a condition judgment and performs the point cloud generation operation on patch elements selectively, according to the threshold. The point cloud generation operation performs linear interpolation based on the information of the patch elements in the patch set and calculates a new vertex set, as follows:
Based on the patch set F = {f_j | j = 1, 2, ..., m}, the point cloud generation operation is performed on patch elements whose area is larger than the threshold \bar{S}, and is not performed on patch elements whose area is smaller than the threshold \bar{S}, so as to obtain the point cloud generated by the corresponding three-dimensional mesh model:

Set(f_j) = \{ v'_j \mid v'_j = o + \omega_2 (p - o) \}

o = (1 - \omega_1) v_{j1} + \omega_1 v_{j2}, \quad p = (1 - \omega_1) v_{j1} + \omega_1 v_{j3}

\tilde{V} = V \cup \bigcup_{S(f_j) > \bar{S}} Set(f_j)

In the above formulas, Set(f_j) is the set of vertices interpolated on the j-th patch element f_j of the patch set, v'_j denotes a vertex of that set, q_1 and q_2 are the numbers of divisions of the interval [0, 1] over which the interpolation weights \omega_1 and \omega_2 are stepped, \omega_1, \omega_2, o and p are intermediate process variables, and \tilde{V} is the point cloud generated by the corresponding three-dimensional mesh model;
S204, based on the point cloud \tilde{V} generated by the corresponding three-dimensional mesh model, a point cloud with a fixed number of vertices is collected according to the farthest point sampling algorithm or a random sampling method, giving the first point cloud corresponding to the three-dimensional mesh model:

V' = Sample\_Function(\tilde{V}) = \{ v'_k \mid k = 1, 2, ..., n' \}

where Sample_Function is the farthest point sampling algorithm function or the random sampling algorithm function, V' is the first point cloud corresponding to the three-dimensional mesh model, n' denotes the number of vertex elements to be sampled into the first point cloud, and v'_k is the k-th element of the first point cloud.
4. The method for extracting features from a three-dimensional model based on a polar graph convolutional neural network as claimed in claim 1, wherein the step S3 comprises the steps of:
S301, for each three-dimensional mesh model, based on the vertex set V = {v_i | i = 1, 2, ..., n}, the corresponding centroid is calculated as:

\bar{v} = \frac{1}{n} \sum_{i=1}^{n} v_i

where \bar{v} denotes the centroid of the three-dimensional mesh model, v_i is the i-th vertex element in the vertex set, and n is the number of vertex elements in the vertex set.
S302, based on the patch set F = {f_j | j = 1, 2, ..., m}, where m is the number of patch elements in the patch set, the volume of the tetrahedron formed by the centroid \bar{v} and each patch element f_j of the patch set is calculated, and the center of gravity of each patch element is obtained:

Vol_j = \frac{1}{6} \left| (v_{j1} - \bar{v}) \cdot \big( (v_{j2} - \bar{v}) \times (v_{j3} - \bar{v}) \big) \right|

g_j = \frac{1}{3} (v_{j1} + v_{j2} + v_{j3})

where Vol_j denotes the volume of the tetrahedron formed by the centroid \bar{v} and the j-th patch element f_j, v_{j1}, v_{j2}, v_{j3} are the three vertices on the patch element f_j, and g_j denotes the center of gravity of the j-th patch element f_j;
S303, based on the tetrahedron volumes Vol_j and the centers of gravity g_j of the patch elements in the patch set, the volume weighted centroid corresponding to the three-dimensional mesh model is calculated as:

c_w = \frac{\sum_{j=1}^{m} Vol_j \, g_j}{\sum_{j=1}^{m} Vol_j}

where c_w denotes the volume weighted centroid corresponding to the three-dimensional mesh model.
5. The method for extracting features from a three-dimensional model based on a polar graph convolutional neural network as claimed in claim 1, wherein the step S4 comprises the steps of:
S401, based on the first point cloud V' = {v'_k | k = 1, 2, ..., n'} and the volume weighted centroid c_w, where v'_k is the k-th element of the first point cloud and n' denotes the number of vertex elements in the first point cloud, the first point cloud is translated to the volume weighted centroid, giving the first point cloud after translation transformation:

v''_k = v'_k - c_w

V'' = \{ v''_k \mid k = 1, 2, ..., n' \}

where v''_k denotes the k-th element of the first point cloud after translation transformation, and V'' is the first point cloud after translation transformation;
S402, based on the first point cloud V'' = {v''_k | k = 1, 2, ..., n'} after translation transformation, a scale transformation factor is calculated and the transformed model is computed, giving the first point cloud after translation and scale transformation:

s = \frac{1}{\max_k \| v''_k \|_2}

v'''_k = s \cdot v''_k, \quad v''_k \in V''

V''' = \{ v'''_k \mid k = 1, 2, ..., n' \}

where s is the scale transformation factor (chosen so that the transformed point cloud fits inside the unit Gaussian sphere) and v'''_k is the k-th element of the first point cloud V''' after translation and scale transformation;
S403, based on the first point cloud V''' = {v'''_k | k = 1, 2, ..., n'} after translation and scale transformation, the rotation matrix R is calculated; the steps of obtaining the rotation matrix comprise:
S4031, based on the first point cloud V''' after translation and scale transformation, constructing the covariance matrix (V''')^T \cdot V''';
S4032, performing eigendecomposition of the covariance matrix (V''')^T \cdot V''' to obtain the eigenvectors corresponding to the three largest eigenvalues, Vector_{3×3};
S4033, constructing the rotation matrix R by the formula:

R = Vector_{3×3} \cdot I

where I is the identity matrix of size 3 × 3;
S404, based on the first point cloud V''' = {v'''_k | k = 1, 2, ..., n'} after translation and scale transformation and the rotation matrix R, V''' is further rotated, giving the second point cloud after translation, scale and rotation transformation:

\hat{v}_k = R \cdot v'''_k

\hat{V} = \{ \hat{v}_k \mid k = 1, 2, ..., n' \}

where \hat{v}_k is the k-th element of the second point cloud after translation, scale and rotation transformation, and \hat{V} is the second point cloud after translation, scale and rotation transformation.
6. The method for extracting features from a three-dimensional model based on a polar graph convolutional neural network as claimed in claim 1, wherein the step S5 comprises the steps of:
S501, based on the second point cloud \hat{V} = \{ \hat{v}_k \mid k = 1, 2, ..., n' \}, where \hat{v}_k = (\hat{v}_k^1, \hat{v}_k^2, \hat{v}_k^3) is the k-th element of the second point cloud after translation, scale and rotation transformation, expressed in the three-dimensional rectangular coordinate system, and n' denotes the number of vertex elements in the second point cloud, the second point cloud is projected to the polar coordinate system, giving the polar coordinate representation of the second point cloud:

(\theta_k, \varphi_k, r_k) = f_{sph}(\hat{v}_k)

where (\theta_k, \varphi_k, r_k) is the polar coordinate representation of the second point cloud and f_{sph} is the polar coordinate system projection operator:

r_k = \| \hat{v}_k \|_2, \quad \theta_k = \arccos( \hat{v}_k^3 / r_k ), \quad \varphi_k = \arctan( \hat{v}_k^2 / \hat{v}_k^1 )
S502, the polar coordinate representation (\theta_k, \varphi_k, r_k) of the second point cloud is concatenated with the three-dimensional rectangular coordinate representation (\hat{v}_k^1, \hat{v}_k^2, \hat{v}_k^3) of the second point cloud, giving the second point cloud with composite representation \hat{C} = \{ \hat{c}_k \mid k = 1, 2, ..., n' \}, where \hat{c}_k = (\hat{v}_k^1, \hat{v}_k^2, \hat{v}_k^3, \theta_k, \varphi_k, r_k).
7. The method for extracting features from a three-dimensional model based on a polar graph convolutional neural network as claimed in claim 1, wherein the step S6 comprises the steps of:
S601, obtaining the corresponding deep learning features based on the second point cloud with composite representation \hat{C} and the polar coordinate graph convolutional neural network model; the steps of obtaining the polar coordinate graph convolutional neural network model comprise:
S6011, designing the network structure of the polar coordinate graph convolutional neural network model in the graph convolutional network manner; the input of the polar coordinate graph convolutional neural network model is the second point cloud with composite representation \hat{C}, and the output is the corresponding deep learning feature. The structure of the polar coordinate graph convolutional neural network model comprises a graph building module, residual dynamic graph convolution blocks, a fusion module and a prediction module. The graph building module uses a hole (dilated) k-nearest-neighbor algorithm on the second point cloud with composite representation \hat{C} to construct the hole k-nearest-neighbor graph representations of 3 corresponding branches as input; the residual dynamic graph convolution block comprises an EdgeConv edge convolution layer based on dynamic graph construction with a residual graph connection, and embeds an SE attention block; the fusion module comprises a 1 × 1 convolution layer, followed by a Batch Normalization function and a LeakyReLU activation function, and two pooling layers using max pooling and average pooling respectively; the prediction module comprises two fully connected layers, the first of which is followed by a Batch Normalization function, a LeakyReLU activation function and Dropout random deactivation;
s6012, building a database for network training from the second point clouds with the composite representation, each second point cloud paired with its labeled real class label; dividing 80% of the database into a training set and 20% into a verification set, the intersection of the training set and the verification set being empty; on the training set, inputting the second point cloud with the composite representation into the polar coordinate graph convolutional neural network model to obtain an output feature vector and a classification probability, calculating the difference between the classification probability and the real class label, and reversely adjusting the parameter values of the model; on the verification set, inputting the second point cloud with the composite representation into the model to obtain an output feature vector and a classification probability, calculating the difference between the classification probability and the real class label, and evaluating the performance of the model; when training is finished, using the output feature vector as the feature representing the three-dimensional model;
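A hedged sketch of the training and verification loop of step S6012 follows, reusing PolarGraphNet from the sketch above. The random tensors stand in for a labeled database; the optimizer, learning rate, batch size and epoch count are illustrative assumptions, and cross-entropy is used as one common reading of "the difference between the classification probability and the real class label".

    import torch
    import torch.nn.functional as F
    from torch.utils.data import DataLoader, TensorDataset, random_split

    # Stand-in database: 200 clouds of 1024 points with the 6-D composite representation
    clouds = torch.randn(200, 1024, 6)
    labels = torch.randint(0, 40, (200,))
    dataset = TensorDataset(clouds, labels)
    n_train = int(0.8 * len(dataset))      # 80% training / 20% verification, disjoint
    train_set, val_set = random_split(dataset, [n_train, len(dataset) - n_train])

    model = PolarGraphNet(num_classes=40)  # from the sketch above
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)

    for epoch in range(10):
        model.train()
        for x, y in DataLoader(train_set, batch_size=8, shuffle=True):
            _, logits = model(x)
            loss = F.cross_entropy(logits, y)  # gap to the real class label
            opt.zero_grad()
            loss.backward()                    # reversely adjust the parameter values
            opt.step()
        model.eval()                           # evaluate on the verification set
        correct = 0
        with torch.no_grad():
            for x, y in DataLoader(val_set, batch_size=8):
                _, logits = model(x)
                correct += (logits.argmax(dim=1) == y).sum().item()
        print(f"epoch {epoch}: verification accuracy {correct / len(val_set):.3f}")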
s602, inputting the second point cloud with the composite representation (θ_k, φ_k, r_k, x_k, y_k, z_k) into the trained polar coordinate graph convolutional neural network model, and extracting the corresponding deep learning features.
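Once trained, step S602 reduces to a forward pass; a minimal usage sketch with the names carried over from the sketches above:

    model.eval()
    with torch.no_grad():
        one_cloud = torch.randn(1, 1024, 6)  # one second point cloud, composite representation
        feature, _ = model(one_cloud)        # (1, 512) deep learning feature vector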
CN202110565190.1A 2021-05-24 2021-05-24 Three-dimensional model feature extraction method based on polar coordinate graph convolution neural network Active CN113313831B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110565190.1A CN113313831B (en) 2021-05-24 2021-05-24 Three-dimensional model feature extraction method based on polar coordinate graph convolution neural network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110565190.1A CN113313831B (en) 2021-05-24 2021-05-24 Three-dimensional model feature extraction method based on polar coordinate graph convolution neural network

Publications (2)

Publication Number Publication Date
CN113313831A true CN113313831A (en) 2021-08-27
CN113313831B CN113313831B (en) 2022-12-16

Family

ID=77374195

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110565190.1A Active CN113313831B (en) 2021-05-24 2021-05-24 Three-dimensional model feature extraction method based on polar coordinate graph convolution neural network

Country Status (1)

Country Link
CN (1) CN113313831B (en)

Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5590261A (en) * 1993-05-07 1996-12-31 Massachusetts Institute Of Technology Finite-element method for image alignment and morphing
US9286538B1 (en) * 2014-05-01 2016-03-15 Hrl Laboratories, Llc Adaptive 3D to 2D projection for different height slices and extraction of robust morphological features for 3D object recognition
US20170213381A1 (en) * 2016-01-26 2017-07-27 Università della Svizzera italiana System and a method for learning features on geometric domains
CN107092859A (en) * 2017-03-14 2017-08-25 佛山科学技术学院 A kind of depth characteristic extracting method of threedimensional model
US20190188541A1 (en) * 2017-03-17 2019-06-20 Chien-Yi WANG Joint 3d object detection and orientation estimation via multimodal fusion
DE102018128531A1 (en) * 2018-11-14 2020-05-14 Valeo Schalter Und Sensoren Gmbh System and method for analyzing a three-dimensional environment represented by a point cloud through deep learning
US20200302237A1 (en) * 2019-03-22 2020-09-24 Pablo Horacio Hennings Yeomans System and method for ordered representation and feature extraction for point clouds obtained by detection and ranging sensor
CN110232438A (en) * 2019-06-06 2019-09-13 北京致远慧图科技有限公司 The image processing method and device of convolutional neural networks under a kind of polar coordinate system
CN110942110A (en) * 2019-12-31 2020-03-31 新奥数能科技有限公司 Feature extraction method and device of three-dimensional model
CN111461063A (en) * 2020-04-24 2020-07-28 武汉大学 Behavior identification method based on graph convolution and capsule neural network
CN112488210A (en) * 2020-12-02 2021-03-12 北京工业大学 Three-dimensional point cloud automatic classification method based on graph convolution neural network

Non-Patent Citations (8)

* Cited by examiner, † Cited by third party
Title
HUA LIN ET AL.: "PointSpherical: deep shape context for point cloud learning in spherical coordinates", 2020 25th International Conference on Pattern Recognition (ICPR), 15 January 2021, pages 10266-10273, XP033910118, DOI: 10.1109/ICPR48806.2021.9412978 *
YING, LONGZHENG ET AL.: "Rolling normal filtering for point clouds", Computer Aided Geometric Design, 30 May 2018, pages 16-28 *
ZENG, FANZHI ET AL.: "Improved Three-Dimensional Model Feature of Non-rigid Based on HKS", Smart Computing and Communication, SmartCom 2017, 12 December 2017, pages 427-437 *
LIU BIN ET AL.: "Image registration method based on non-separable wavelet decomposition", Computer Engineering, vol. 40, no. 10, 15 October 2014, pages 252-257 *
ZHOU YAN ET AL.: "Three-dimensional model retrieval algorithm based on multi-feature fusion", Computer Science, vol. 43, no. 7, 15 July 2016, pages 47-58 *
ZHOU YAN ET AL.: "Three-dimensional shape feature extraction method based on deep learning", Computer Science, vol. 46, no. 9, 15 September 2019, pages 303-309 *
TANG LEI ET AL.: "Efficient three-dimensional model retrieval method based on convolutional neural network", Acta Electronica Sinica, vol. 49, no. 1, 15 March 2021, pages 64-71 *
BAI JING ET AL.: "MSP-Net: multi-scale point cloud classification network", Journal of Computer-Aided Design & Computer Graphics, vol. 31, no. 11, 15 November 2019, pages 1917-1924 *

Also Published As

Publication number Publication date
CN113313831B (en) 2022-12-16

Similar Documents

Publication Publication Date Title
CN112927357A (en) 3D object reconstruction method based on dynamic graph network
CN110675421B (en) Depth image collaborative segmentation method based on few labeling frames
CN114612660A (en) Three-dimensional modeling method based on multi-feature fusion point cloud segmentation
WO2024060395A1 (en) Deep learning-based high-precision point cloud completion method and apparatus
CN116229079A (en) Three-dimensional point cloud semantic segmentation method and system based on visual assistance and feature enhancement
CN113313830B (en) Encoding point cloud feature extraction method based on multi-branch graph convolutional neural network
CN111460193A (en) Three-dimensional model classification method based on multi-mode information fusion
CN115527036A (en) Power grid scene point cloud semantic segmentation method and device, computer equipment and medium
CN115223017B (en) Multi-scale feature fusion bridge detection method based on depth separable convolution
CN117593666B (en) Geomagnetic station data prediction method and system for aurora image
CN114187506A (en) Remote sensing image scene classification method of viewpoint-aware dynamic routing capsule network
CN112668662B (en) Outdoor mountain forest environment target detection method based on improved YOLOv3 network
CN112967296B (en) Point cloud dynamic region graph convolution method, classification method and segmentation method
CN117788810A (en) Learning system for unsupervised semantic segmentation
CN113628329A (en) Zero-sample sketch three-dimensional point cloud retrieval method
CN111612046B (en) Feature pyramid graph convolution neural network and application thereof in 3D point cloud classification
CN113408651A (en) Unsupervised three-dimensional object classification method based on local discriminability enhancement
CN113313831B (en) Three-dimensional model feature extraction method based on polar coordinate graph convolution neural network
Li et al. LPCCNet: A lightweight network for point cloud classification
CN116958958A (en) Self-adaptive class-level object attitude estimation method based on graph convolution double-flow shape prior
Cao et al. Label-efficient deep learning-based semantic segmentation of building point clouds at LOD3 level
Weng et al. Image inpainting technique based on smart terminal: A case study in CPS ancient image data
CN112396089B (en) Image matching method based on LFGC network and compression excitation module
CN117523548B (en) Three-dimensional model object extraction and recognition method based on neural network
CN118135405B (en) Optical remote sensing image road extraction method and system based on self-attention mechanism

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant