CN117523548B - Three-dimensional model object extraction and recognition method based on neural network - Google Patents

Three-dimensional model object extraction and recognition method based on neural network

Info

Publication number
CN117523548B
Authority
CN
China
Prior art keywords
dimensional model
model object
feature
point
representing
Prior art date
Legal status
Active
Application number
CN202410008847.8A
Other languages
Chinese (zh)
Other versions
CN117523548A (en)
Inventor
闫宗宝
王晓龙
王瑞琪
毕习远
林芝
Current Assignee
Shanghai Zhentu Information Technology Co ltd
Qingdao Zhentu Information Technology Co ltd
Original Assignee
Shanghai Zhentu Information Technology Co ltd
Qingdao Zhentu Information Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Shanghai Zhentu Information Technology Co ltd, Qingdao Zhentu Information Technology Co ltd filed Critical Shanghai Zhentu Information Technology Co ltd
Priority to CN202410008847.8A
Publication of CN117523548A
Application granted
Publication of CN117523548B
Legal status: Active


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/60 Type of objects
    • G06V20/64 Three-dimensional objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G06N3/09 Supervised learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77 Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/80 Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
    • G06V10/806 Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level of extracted features
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks

Abstract

The invention relates to the technical field of three-dimensional model processing, and in particular to a three-dimensional model object extraction and recognition method based on a neural network.

Description

Three-dimensional model object extraction and recognition method based on neural network
Technical Field
The invention relates to the technical field of three-dimensional model processing, and in particular to a three-dimensional model object extraction and recognition method based on a neural network.
Background
With the rapid development of computer technology and three-dimensional modeling software, the multimedia data people use has transitioned from traditional two-dimensional images to three-dimensional digital geometry. Because three-dimensional modeling techniques are widely applied in many fields, a large number of three-dimensional models have emerged, and organizing them effectively requires classifying them. Classifying three-dimensional models purely by hand consumes a great deal of manpower and material resources, so a fast and effective recognition algorithm is needed to determine the class information of a three-dimensional model automatically. Because three-dimensional data can be expressed in diverse forms, applying deep learning methods presents different difficulties under each representation. For three-dimensional data in point-cloud form, the spatial points in the structure are unordered. For three-dimensional data in voxel form, the computational complexity of applying three-dimensional convolution grows exponentially. For three-dimensional data in mesh form, the topology is difficult to process with conventional convolutional neural networks. On the task of extracting and recognizing three-dimensional target features, current related methods suffer from slow speed, low precision, and demanding requirements on the three-dimensional model.
For example, patent publication No. CN110942110A discloses a feature extraction method and apparatus for a three-dimensional model. The feature extraction method comprises: preprocessing the three-dimensional model to obtain original point cloud data; taking the original point cloud data as the input of a pre-built neural network and acquiring global features fused with local features of the three-dimensional model, specifically, processing the original point cloud data in the pre-built neural network with a differential symmetric function and a pose transformation network to acquire a first local feature and a second local feature respectively; and integrating the first local feature and the second local feature to obtain the global feature of the three-dimensional model. The method is oriented to three-dimensional model feature extraction and addresses problems such as low precision and slow speed in three-dimensional data recognition, retrieval and segmentation tasks.
For example, patent publication No. CN116778470A provides a method, apparatus, device and medium for object recognition and object recognition model training. A sample three-dimensional image and its object labels, a plurality of sample three-dimensional voxel maps, and an object recognition model to be trained are obtained, where the object recognition model includes a two-dimensional global feature extraction network, a two-dimensional feature processing network, a three-dimensional encoding/decoding network and a multi-dimensional feature fusion network; the sample three-dimensional image and the sample three-dimensional voxel maps are input into the object recognition model to obtain a first object prediction result for the sample three-dimensional image and a second prediction result for the sample three-dimensional voxel maps; and the parameters of the object recognition model are adjusted according to the first object prediction result, the second prediction result and the object annotations, so as to obtain the trained object recognition model. Objects in a three-dimensional image to be recognized are then identified based on the trained object recognition model, improving the accuracy of object recognition.
The above patents share the problems identified in the background art: they process global features while lacking local feature information, and their single-feature extraction captures too little of the three-dimensional model's information, which hurts recognition accuracy. To solve these problems, this application designs a three-dimensional model object extraction and recognition method based on a neural network.
Disclosure of Invention
Aiming at the defects of the prior art, the invention provides a three-dimensional model object extraction and recognition method based on a neural network. First, the three-dimensional model object is displayed as a mesh and three-dimensional model object features are extracted, the features comprising exterior geometric features, internal topological features and surface texture features; second, the three-dimensional model object features are fused, and optimal fusion features and optimal feature weights are determined; finally, a three-dimensional model object recognition network is constructed, the optimal fusion features and optimal feature weights are taken as its input parameters, and the network is trained to output the recognition result of the three-dimensional model object.
In order to achieve the above purpose, the present invention provides the following technical solutions:
a neural network-based three-dimensional model object extraction and recognition method, the method comprising:
a1: gridding and displaying the three-dimensional model object, and extracting three-dimensional model object characteristics, wherein the three-dimensional model object characteristics comprise appearance geometric characteristics, internal topological characteristics and surface texture characteristics;
a2: fusing the object features of the three-dimensional model according to an optimal feature fusion strategy, and determining optimal fusion features and optimal feature weights;
a3: constructing a three-dimensional model object recognition network, taking the optimal fusion characteristics and the optimal characteristic weights as input parameters of the three-dimensional model object recognition network, training the three-dimensional model object recognition network, and outputting recognition results of the three-dimensional model object;
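For orientation, the following Python sketch mirrors the a1 to a3 flow end to end; it is a minimal sketch in which every function is a hypothetical stand-in for the corresponding steps (s2 to s5), not code from the patent:

import numpy as np

def exterior_geometric_features(mesh):
    # stand-in for steps s2.1-s2.4 (curved-surface projection features -> CNN)
    return np.zeros(64)

def internal_topological_features(mesh):
    # stand-in for steps s3.1-s3.5 (skeletonization -> 3D CNN)
    return np.zeros(64)

def surface_texture_features(mesh):
    # stand-in for steps s4.1-s4.4 (twelve-view rendering -> voting)
    return np.zeros(64)

def fuse_features(features):
    # a2: stand-in for the optimal feature fusion strategy (s5.1-s5.5)
    fused = np.concatenate(features)
    weights = np.ones_like(fused) / fused.size
    return fused, weights

def recognize(fused, weights):
    # a3: stand-in for the recognition network; returns a class label
    return "class_0" if float(fused @ weights) >= 0.0 else "class_1"

mesh = None  # placeholder for a three-dimensional model object after meshing
features = (exterior_geometric_features(mesh),
            internal_topological_features(mesh),
            surface_texture_features(mesh))
fused, weights = fuse_features(features)
print(recognize(fused, weights))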
the feature extraction of the appearance geometric features comprises the following specific steps:
s2.1: marking surface vertices of the three-dimensional model object after gridding display and, taking each surface vertex as the center, calculating one by one the curved-surface projection features of all points within a sphere of radius r around the vertex, wherein r satisfies a preset radius constraint; the calculation formula of the curved-surface projection feature of a point is as follows:
wherein F(p) denotes the curved-surface projection feature of the point, e denotes the exponential function, Δ denotes the discretized Laplace-Beltrami operator, S denotes the area of the triangle enclosed by the projection of the point on the surface, cot(·) denotes the cotangent function, α_xy denotes the included angle between point x, point y and the surface vertex in the three-dimensional meshed model, α_xz denotes the included angle between point x, point z and the surface vertex, α_zy denotes the included angle between point z, point y and the surface vertex, d_max denotes the Euclidean distance between the surface vertex and the point farthest from it, and d denotes the Euclidean distance between the point and the surface vertex;
s2.2: taking the largest curved surface projection characteristic corresponding point in the radius area as a shape sampling point, confirming a shape sampling point neighborhood according to a local shape distribution function, calculating the Euclidean distance between the point in the shape sampling point neighborhood and the shape sampling point, and constructing a local shape distribution histogram of the shape sampling point;
s2.3: corresponding weights are distributed to each component in the local shape distribution histogram through a Gaussian window function, a local shape distribution matrix is obtained, local shape area characteristics are calculated, the local shape area characteristics of each surface vertex are reorganized, multi-scale area characteristics are obtained, and a calculation formula of the local shape area characteristics is as follows:
wherein R denotes the local shape region feature, i denotes a single point in the shape sampling point neighborhood, N denotes the total number of points in the neighborhood, G denotes the local shape distribution function, F_i denotes the curved-surface projection feature of the i-th point, H denotes the local shape distribution matrix, H̄ denotes the matrix of average curved-surface projection features over all points in the neighborhood, × denotes matrix multiplication, and T denotes matrix transposition;
s2.4: constructing an exterior geometric feature extraction network, taking the multi-scale region features as input parameters of the exterior geometric feature extraction network, and outputting the exterior geometric features by training the exterior geometric feature extraction network;
the specific steps of feature extraction of the internal topological feature are as follows:
s3.1: preprocessing the three-dimensional model object after gridding display to obtain a three-dimensional skeleton model;
s3.2: creating a geodesic distance table, initializing the height function value g(v) of each surface vertex of the three-dimensional skeleton model, setting the height function value of the center point of the three-dimensional skeleton model according to the model height, and filling the height function values of the center point and all surface vertices into the geodesic distance table, wherein height denotes the height value of the three-dimensional skeleton model, g(·) denotes the height function, and v denotes a surface vertex of the three-dimensional skeleton model;
s3.3: sequentially calculating the Euclidean distance from each surface vertex to the center point, calculating the geodesic distance value of the surface vertex from that Euclidean distance, and comparing the geodesic distance value with the height function value of the center point: if the geodesic distance value is greater than the height function value of the center point, the height function value of the surface vertex in the geodesic distance table is modified to the geodesic distance value; if it is less than or equal to the height function value of the center point, the surface vertex is deleted from the geodesic distance table; the geodesic distance value of a surface vertex is calculated by the following formula:
wherein g_d(v) denotes the geodesic distance value of the surface vertex, d(v,c) denotes the Euclidean distance from the surface vertex to the center point, c denotes the center point, d̄ denotes the mean of the Euclidean distances from the surface vertices to the center point, d_max denotes the maximum of those Euclidean distances, and d_min denotes the minimum of those Euclidean distances;
s3.4: normalizing the geodesic distance values of the surface vertices according to the rest surface vertices in the geodesic distance table, and aggregating the surface vertices belonging to the same interval to obtain skeleton joints, and sequentially connecting the skeleton joints according to the topological characteristics of the skeleton joints to obtain a skeleton map;
s3.5: constructing an internal topological feature extraction network, taking the skeleton diagram as an input parameter of the internal topological feature extraction network, training through the internal topological feature extraction network, and outputting the internal topological feature;
the specific steps of feature extraction of the surface texture feature are as follows:
s4.1: carrying out coordinate normalization processing on the three-dimensional model object after gridding display, wherein the coordinate normalization processing comprises translation of a coordinate system, proportional conversion of the coordinate system and rotation of the coordinate system;
s4.2: taking the normalized three-dimensional model object as a center, placing a first virtual camera according to an included angle of 30 degrees between the camera and the center plane of the model, and placing a next virtual camera every 30 degrees by taking a Z axis as a rotating shaft to obtain twelve views of the three-dimensional model object, wherein the virtual cameras meet the condition that the shooting direction and the center of the model are on the same straight line;
s4.3: constructing a view feature classification network, taking the twelve views as input parameters of the view feature classification network, training the view feature classification network, and outputting the classification probability of each view P = {p_1, p_2, …, p_M}, wherein P denotes the view feature classification probability, M denotes the total number of view feature classes, and p_i denotes the classification probability of the view for the i-th view feature class;
s4.4: voting the view feature class according to the view feature classification probability by each view, and taking the view feature class with the largest statistics vote number as the surface texture feature of the three-dimensional model object;
the optimal feature fusion strategy specifically comprises the following steps:
s5.1: inquiring the three-dimensional model object features, acquiring the inputs of the three-dimensional model object features, and mapping the inputs into a tuple sequence {(a_m, b_m, c_m)}, wherein a_m denotes the corresponding input of the m-th feature among the appearance geometric features, b_m denotes the corresponding input of the m-th feature among the internal topological features, and c_m denotes the corresponding input of the m-th feature among the surface texture features;
s5.2: carrying out dot product on the corresponding input feature in each candidate sequence in the tuple sequence, calculating the correlation score after the dot product of the candidate sequence through the pearson correlation coefficient, compressing the correlation score according to the sigmoid function, comparing with a correlation threshold, if the correlation score is larger than the correlation threshold, reserving the candidate sequence, and if the correlation score is smaller than or equal to the correlation threshold, filtering the candidate sequence;
s5.3: collecting local features of the candidate sequences reserved in the tuple sequence, enhancing, splicing the enhanced candidate sequences along the head sequence of the tuple sequence, and fusing the local features through residual connection;
s5.4: pooling the feature vectors after residual connection fusion to obtain optimal fusion features;
s5.5: continuously optimizing the optimal fusion characteristics according to a continuous particle swarm algorithm, and calculating the optimal characteristic weight of the optimal fusion characteristics;
constructing a three-dimensional model object recognition network, the three-dimensional model object recognition network comprising:
the input layer calculates spatial features, structural features and mapping features according to input parameters, and performs feature learning on the three features through three parallel network modules;
the hidden layer inputs the learning results of the three features into the full-connection layer sharing the weight through cyclic iteration, obtains sparse codes through maximum pooling operation, and splices the sparse codes to obtain a base vector matrix;
and the output layer carries out loss learning on the basis vector matrix according to the cost function, judges whether the basis vector matrix meets the recognition learning threshold, returns to the hidden layer for retraining if the basis vector matrix does not meet the recognition learning threshold, and outputs the visual characteristics of the basis vector matrix if the basis vector matrix meets the recognition learning threshold, and takes the visual characteristics as the recognition result of the three-dimensional model object.
The three-dimensional model object extraction and recognition system based on the neural network comprises a three-dimensional model object feature extraction module, a three-dimensional model object feature fusion module and a three-dimensional model object recognition module;
the three-dimensional model object feature extraction module is used for extracting three-dimensional model object features;
the three-dimensional model object feature fusion module is used for fusing the three-dimensional model object features to obtain optimal fusion features and optimal feature weights;
the three-dimensional model object recognition module is used for recognizing the three-dimensional model object according to the optimal fusion characteristics and the optimal characteristic weights;
the three-dimensional model object feature extraction module comprises:
the exterior geometric feature extraction unit is used for extracting exterior geometric features of the gridding three-dimensional model object;
an internal topological feature extraction unit for extracting internal topological features of the gridding three-dimensional model object;
and the surface texture feature extraction unit is used for extracting the surface texture features of the meshed three-dimensional model object.
A storage medium having instructions stored therein, which when read by a computer, cause the computer to perform the neural network-based three-dimensional model object extraction and recognition method described above.
An electronic device comprising a processor and a storage medium as described above, the processor executing instructions in the storage medium.
Compared with the prior art, the invention has the beneficial effects that:
1. the invention applies three feature extraction methods to the three-dimensional model object, fully characterizing its geometric features, topological features and texture features, which avoids the lack of feature information that arises when a single feature describes the three-dimensional model object; meanwhile, extracting features locally and then splicing the local features improves the efficiency of three-dimensional model feature extraction;
2. the invention provides an optimal feature fusion strategy that sequentially completes the search for the optimal feature combination and the calculation of the optimal feature weights in a supervised learning manner; it can serve different three-dimensional model recognition application scenarios and improves the effect of multi-feature fusion in three-dimensional model recognition;
3. the invention applies the neural network to feature extraction and recognition of the three-dimensional model object, improving their accuracy through continuous training and learning optimization; nonlinear features are converted into linear features through sparse coding, which avoids overfitting during neural network training and improves recognition speed.
Drawings
Other features, objects and advantages of the present invention will become more apparent upon reading of the detailed description of non-limiting embodiments, made with reference to the following drawings in which:
FIG. 1 is a flow chart of a three-dimensional model object extraction and recognition method based on a neural network according to embodiment 1 of the present invention;
FIG. 2 is a schematic view of a projection characteristic of a curved surface of a point in a vertex region of a three-dimensional model object according to embodiment 1 of the present invention;
FIG. 3 is a schematic view of the angles of projection of curved surfaces of points in the vertex region of the surface of the three-dimensional model object according to embodiment 1 of the present invention;
FIG. 4 is a schematic diagram of the external geometry extraction network according to embodiment 1 of the present invention;
FIG. 5 is a schematic diagram of a pretreatment flow of a three-dimensional gridding model according to embodiment 1 of the present invention;
FIG. 6 is a diagram showing the internal topology feature extraction network according to embodiment 1 of the present invention;
FIG. 7 is a schematic diagram of a virtual camera layout according to embodiment 1 of the present invention;
FIG. 8 is a diagram of a view feature classification network according to embodiment 1 of the present invention;
FIG. 9 is a flow chart of the optimal feature fusion according to embodiment 1 of the present invention;
FIG. 10 is a block diagram of a three-dimensional model object extraction and recognition system based on a neural network according to embodiment 2 of the present invention;
fig. 11 is a diagram of a three-dimensional model object extraction and recognition electronic device based on a neural network according to embodiment 5 of the present invention.
Detailed Description
The embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings; it is apparent that the described embodiments are only some, rather than all, of the embodiments of the present invention.
Example 1:
referring to fig. 1, an embodiment of the present invention is provided: a neural network-based three-dimensional model object extraction and recognition method, the method comprising:
a1: gridding and displaying the three-dimensional model object, and extracting three-dimensional model object characteristics, wherein the three-dimensional model object characteristics comprise appearance geometric characteristics, internal topological characteristics and surface texture characteristics;
a2: fusing the object features of the three-dimensional model according to an optimal feature fusion strategy, and determining optimal fusion features and optimal feature weights;
a3: constructing a three-dimensional model object recognition network, taking the optimal fusion features and the optimal feature weights as input parameters of the three-dimensional model object recognition network, training the three-dimensional model object recognition network, and outputting the recognition result of the three-dimensional model object;
referring to fig. 2 and fig. 3, which show the curved-surface projection features of points in the surface-vertex region of a three-dimensional model object in the embodiment of the present invention: the dotted-line portion represents the curved-surface projection feature region of radius r around the surface vertex, the connecting lines between the point's x, y and z points and the surface vertex enclose the projected triangle, and the symmetry and positivity of the point on the continuous curved surface are expressed by calculating the cotangent weights of the included angles;
the feature extraction of the appearance geometric features comprises the following specific steps:
s2.1: marking surface vertices of the three-dimensional model object after gridding display and, taking each surface vertex as the center, calculating one by one the curved-surface projection features of all points within a sphere of radius r around the vertex, wherein r satisfies a preset radius constraint; the calculation formula of the curved-surface projection feature of a point is as follows:
wherein F(p) denotes the curved-surface projection feature of the point, e denotes the exponential function, Δ denotes the discretized Laplace-Beltrami operator, S denotes the area of the triangle enclosed by the projection of the point on the surface, cot(·) denotes the cotangent function, α_xy denotes the included angle between point x, point y and the surface vertex in the three-dimensional meshed model, α_xz denotes the included angle between point x, point z and the surface vertex, α_zy denotes the included angle between point z, point y and the surface vertex, d_max denotes the Euclidean distance between the surface vertex and the point farthest from it, and d denotes the Euclidean distance between the point and the surface vertex;
s2.2: taking the largest curved surface projection characteristic corresponding point in the radius area as a shape sampling point, confirming a shape sampling point neighborhood according to a local shape distribution function, calculating the Euclidean distance between the point in the shape sampling point neighborhood and the shape sampling point, and constructing a local shape distribution histogram of the shape sampling point;
s2.3: corresponding weights are distributed to each component in the local shape distribution histogram through a Gaussian window function, a local shape distribution matrix is obtained, local shape area characteristics are calculated, the local shape area characteristics of each surface vertex are reorganized, multi-scale area characteristics are obtained, and a calculation formula of the local shape area characteristics is as follows:
wherein R denotes the local shape region feature, i denotes a single point in the shape sampling point neighborhood, N denotes the total number of points in the neighborhood, G denotes the local shape distribution function, F_i denotes the curved-surface projection feature of the i-th point, H denotes the local shape distribution matrix, H̄ denotes the matrix of average curved-surface projection features over all points in the neighborhood, × denotes matrix multiplication, and T denotes matrix transposition (a toy sketch of steps s2.1 to s2.3 is given after step s2.4);
s2.4: constructing an exterior geometric feature extraction network, taking the multi-scale region features as input parameters of the exterior geometric feature extraction network, and outputting the exterior geometric features by training the exterior geometric feature extraction network;
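As a numerical illustration of steps s2.1 to s2.3, the toy Python sketch below combines the quantities defined above (cotangent weights of the included angles, projected triangle area, normalized vertex distance) into a projection feature and builds a Gaussian-windowed local shape distribution histogram. Because the patent's formula images are not reproduced in this text, the exact combining form in projection_feature is an assumption, and all names are illustrative:

import numpy as np

def cot(angle):
    return np.cos(angle) / np.sin(angle)

def projection_feature(a_xy, a_xz, a_zy, area, d, d_max):
    # assumed form: cotangent weights of the three included angles, scaled by
    # the projected triangle area and an exponential of the normalized distance
    w = cot(a_xy) + cot(a_xz) + cot(a_zy)
    return np.exp(-d / d_max) * w / (2.0 * area)

def local_shape_histogram(distances, n_bins=16):
    # s2.2: histogram of Euclidean distances within the sampling-point
    # neighborhood; s2.3: Gaussian window weights applied per component
    hist, edges = np.histogram(distances, bins=n_bins, density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    gauss = np.exp(-0.5 * ((centers - centers.mean()) / centers.std()) ** 2)
    return hist * gauss

print(projection_feature(0.5, 0.6, 0.7, area=0.1, d=0.3, d_max=1.0))
rng = np.random.default_rng(0)
print(local_shape_histogram(rng.uniform(0.1, 1.0, 200)).shape)  # (16,)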
referring to fig. 4, an embodiment of the present invention is a schematic diagram of an external geometric feature extraction network, where the external geometric feature extraction network includes an input layer, a convolution pooling layer, and an output layer, and specifically includes:
the input layer takes the multi-scale region features as input parameters, normalizes the input parameters and quantizes the matrix, converting the input into a feature matrix of size 64×64;
the convolution pooling layer convolves the input 64×64 feature matrix with a convolution kernel of size 3×3 and obtains 2×2 mapping features through a ReLU activation function and a maximum pooling operation, wherein the maximum pooling operation uses pooling kernels of size 2×2;
the output layer flattens the mapping features into a compact feature vector to obtain a shape descriptor, and converts the shape descriptor into the exterior geometric feature through a loss function, wherein O denotes the exterior geometric feature, M denotes the shape descriptor, and L(·) denotes the loss function;
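A minimal PyTorch sketch of this exterior geometric feature extraction network follows (64×64 input feature matrix, 3×3 convolutions, ReLU, 2×2 max pooling down to 2×2 mapping features, then flattening into a compact shape descriptor). The channel width, the number of conv/pool stages needed to reach 2×2, and the descriptor dimension are assumptions not fixed by the text:

import torch
import torch.nn as nn

class ExteriorGeometryNet(nn.Module):
    def __init__(self, channels=32, descriptor_dim=128):
        super().__init__()
        blocks = []
        in_ch = 1
        for _ in range(5):  # 64 -> 32 -> 16 -> 8 -> 4 -> 2
            blocks += [nn.Conv2d(in_ch, channels, 3, padding=1),
                       nn.ReLU(inplace=True),
                       nn.MaxPool2d(2)]
            in_ch = channels
        self.features = nn.Sequential(*blocks)
        self.head = nn.Linear(channels * 2 * 2, descriptor_dim)

    def forward(self, x):           # x: (B, 1, 64, 64) multi-scale region matrix
        maps = self.features(x)     # (B, C, 2, 2) mapping features
        return self.head(maps.flatten(1))  # compact shape descriptor

net = ExteriorGeometryNet()
print(net(torch.randn(2, 1, 64, 64)).shape)  # torch.Size([2, 128])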
the specific steps of feature extraction of the internal topological feature are as follows:
s3.1: preprocessing the three-dimensional model object after gridding display to obtain a three-dimensional skeleton model;
s3.2: creating a geodesic distance table, initializing the height function value g(v) of each surface vertex of the three-dimensional skeleton model, setting the height function value of the center point of the three-dimensional skeleton model according to the model height, and filling the height function values of the center point and all surface vertices into the geodesic distance table, wherein height denotes the height value of the three-dimensional skeleton model, g(·) denotes the height function, and v denotes a surface vertex of the three-dimensional skeleton model;
s3.3: sequentially calculating the Euclidean distance from each surface vertex to the center point, calculating the geodesic distance value of the surface vertex from that Euclidean distance, and comparing the geodesic distance value with the height function value of the center point: if the geodesic distance value is greater than the height function value of the center point, the height function value of the surface vertex in the geodesic distance table is modified to the geodesic distance value; if it is less than or equal to the height function value of the center point, the surface vertex is deleted from the geodesic distance table; the geodesic distance value of a surface vertex is calculated by the following formula:
wherein g_d(v) denotes the geodesic distance value of the surface vertex, d(v,c) denotes the Euclidean distance from the surface vertex to the center point, c denotes the center point, d̄ denotes the mean of the Euclidean distances from the surface vertices to the center point, d_max denotes the maximum of those Euclidean distances, and d_min denotes the minimum of those Euclidean distances (a toy sketch of steps s3.2 to s3.4 is given after step s3.5);
s3.4: normalizing the geodesic distance values of the surface vertices according to the rest surface vertices in the geodesic distance table, and aggregating the surface vertices belonging to the same interval to obtain skeleton joints, and sequentially connecting the skeleton joints according to the topological characteristics of the skeleton joints to obtain a skeleton map;
s3.5: constructing an internal topological feature extraction network, taking the skeleton diagram as an input parameter of the internal topological feature extraction network, training through the internal topological feature extraction network, and outputting the internal topological feature;
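A hedged sketch of the geodesic distance table of steps s3.2 to s3.4 follows. The exact geodesic-value formula is not reproduced in the source text, so a simple min-max normalization of the Euclidean distances is assumed, and the interval count is an illustrative parameter:

import numpy as np

def build_skeleton_joints(vertices, center, n_intervals=8):
    d = np.linalg.norm(vertices - center, axis=1)        # Euclidean distances (s3.3)
    geo = (d - d.min()) / max(d.max() - d.min(), 1e-9)   # assumed normalization (s3.4)
    joints = {}
    for idx, g in enumerate(geo):
        bin_id = min(int(g * n_intervals), n_intervals - 1)
        joints.setdefault(bin_id, []).append(idx)        # same interval -> same joint
    # each skeleton joint is represented by the centroid of its aggregated vertices
    return {k: vertices[v].mean(axis=0) for k, v in sorted(joints.items())}

verts = np.random.default_rng(1).normal(size=(500, 3))
joints = build_skeleton_joints(verts, verts.mean(axis=0))
print(len(joints))  # number of skeleton joints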
referring to fig. 5, a schematic diagram of a preprocessing flow of a three-dimensional gridding model according to an embodiment of the present invention includes voxelization of the three-dimensional gridding model and extraction of a voxel model skeleton, and the specific steps are as follows:
s3.1.1: first, the center point coordinates and the scaling scale of the meshed three-dimensional model are calculated, and the meshed three-dimensional model coordinates are normalized according to the center point coordinates and the scaling scale, wherein p_max denotes the coordinates of the point on the boundary of the meshed three-dimensional model farthest from the coordinate origin, and p_min denotes the coordinates of the boundary point closest to the coordinate origin;
s3.1.2: voxel processing is carried out on the meshed three-dimensional model according to a triangular patch distance method, all triangles in the meshed three-dimensional model are traversed, the distance from a point in the triangle to a central point is calculated, whether the triangle is covered or not is judged according to a distance threshold, and if the triangle is covered, coordinates of the point are reserved, so that a three-dimensional voxel model is obtained;
s3.1.3: 3×3 pixel regions are segmented on the three-dimensional voxel model, each pixel region containing 9 pixel points; a pixel judgment is performed on each pixel point: if the judgment condition is not met, the pixel point is marked as a skeleton point, and if it is met, the pixel point is deleted (see the sketch after these steps); the calculation formula of the pixel judgment is as follows:
wherein p_1 denotes the selected pixel point, B(p_1) denotes the number of non-zero-valued pixel points near the selected pixel point, p_2, p_4, p_6 and p_8 respectively denote the pixel values of the 2nd, 4th, 6th and 8th pixel points near the selected pixel point, and v(p_1) denotes the pixel value of the selected pixel point;
s3.1.4: traversing all pixel areas to finish skeleton point marking, performing connectivity analysis on marked skeleton points, and connecting the skeleton points subjected to the connectivity analysis to obtain a three-dimensional skeleton model;
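The quantities that survive in the pixel judgment of step s3.1.3 (the count of non-zero neighbours and the products of the 2nd/4th/6th/8th neighbours) match the classic Zhang-Suen thinning test, so a simplified variant of that test (omitting the connectivity-count condition) is assumed in the 2D sketch below; applying it slice-wise to the voxel model is left implicit:

import numpy as np

def pixel_is_deletable(patch):
    # patch: 3x3 binary array centered on the selected pixel point
    p2, p3, p4, p5, p6, p7, p8, p9 = (patch[0, 1], patch[0, 2], patch[1, 2],
                                      patch[2, 2], patch[2, 1], patch[2, 0],
                                      patch[1, 0], patch[0, 0])
    b = p2 + p3 + p4 + p5 + p6 + p7 + p8 + p9   # non-zero neighbour count B(p1)
    return bool(patch[1, 1] == 1 and 2 <= b <= 6
                and p2 * p4 * p6 == 0 and p4 * p6 * p8 == 0)

patch = np.array([[0, 1, 0],
                  [0, 1, 1],
                  [0, 0, 0]])
print(pixel_is_deletable(patch))  # True: this pixel is not kept as a skeleton point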
referring to fig. 6, an internal topology feature extraction network structure diagram of an embodiment of the present invention includes an input layer, two 3D convolution layers, a pooling layer, and an output layer;
the input layer is used for performing matrix transposition on the input skeleton map, converting it into a skeleton tensor of size 32×32×32;
the 3D convolution layers, operating on the skeleton tensor of 32×32×32 resolution, are set with convolution kernels of sizes 5×5×5 and 3×3×3 respectively, converting the skeleton tensor into 32×8×8×8 feature data;
the pooling layer is used for reducing the dimension of the characteristic data according to the maximum pooling operation;
the output layer is used for outputting internal topological characteristics through full connection operation;
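A PyTorch sketch of this internal topological feature extraction network follows (32×32×32 skeleton tensor, two 3D convolutions whose strides are chosen here so that the second convolution yields the 32×8×8×8 feature data, max pooling, and a fully connected output). Channel counts, strides and the output feature width are assumptions:

import torch
import torch.nn as nn

class TopologyNet(nn.Module):
    def __init__(self, feat_dim=128):
        super().__init__()
        self.conv1 = nn.Conv3d(1, 16, kernel_size=5, stride=2, padding=2)   # 32 -> 16
        self.conv2 = nn.Conv3d(16, 32, kernel_size=3, stride=2, padding=1)  # 16 -> 8
        self.pool = nn.MaxPool3d(2)                                         # 8 -> 4
        self.fc = nn.Linear(32 * 4 * 4 * 4, feat_dim)

    def forward(self, x):                 # x: (B, 1, 32, 32, 32) skeleton tensor
        x = torch.relu(self.conv1(x))     # (B, 16, 16, 16, 16)
        x = torch.relu(self.conv2(x))     # (B, 32, 8, 8, 8) feature data
        x = self.pool(x)                  # (B, 32, 4, 4, 4)
        return self.fc(x.flatten(1))      # internal topological feature

net = TopologyNet()
print(net(torch.randn(2, 1, 32, 32, 32)).shape)  # torch.Size([2, 128])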
the specific steps of feature extraction of the surface texture feature are as follows:
s4.1: carrying out coordinate normalization processing on the three-dimensional model object after gridding display, wherein the coordinate normalization processing comprises translation of a coordinate system, proportional conversion of the coordinate system and rotation of the coordinate system;
s4.2: referring to fig. 7, a virtual camera layout schematic diagram of an embodiment of the present invention: taking the normalized three-dimensional model object as the center, a first virtual camera is placed at a 30° angle between the camera and the model's center plane, and with the Z axis as the rotation axis a further virtual camera is placed every 30°, obtaining twelve views of the three-dimensional model object; each virtual camera satisfies that its shooting direction and the model center are on the same straight line (a camera-placement sketch is given after this list);
s4.3: constructing a view feature classification network, taking the twelve views as input parameters of the view feature classification network, training the view feature classification network, and outputting the classification probability of each view P = {p_1, p_2, …, p_M}, wherein P denotes the view feature classification probability, M denotes the total number of view feature classes, and p_i denotes the classification probability of the view for the i-th view feature class;
s4.4: voting the view feature class according to the view feature classification probability by each view, and taking the view feature class with the largest statistics vote number as the surface texture feature of the three-dimensional model object;
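The camera layout of step s4.2 can be made concrete with a few lines of Python; the camera distance below is an arbitrary assumption, since the text only fixes the 30° elevation, the 30° azimuth step and the aim-at-centre constraint:

import numpy as np

def camera_positions(radius=2.0, elevation_deg=30.0, n_views=12):
    elev = np.radians(elevation_deg)
    cams = []
    for k in range(n_views):
        azim = np.radians(30.0 * k)      # one camera every 30 degrees about Z
        cams.append((radius * np.cos(elev) * np.cos(azim),
                     radius * np.cos(elev) * np.sin(azim),
                     radius * np.sin(elev)))
    return np.array(cams)                # each camera is aimed at the origin

print(camera_positions().shape)  # (12, 3)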
referring to fig. 8, a view feature classification network structure diagram according to an embodiment of the present invention includes a view characterization layer, a view classification layer, and a view feature voting layer;
the view characterization layer is used for extracting view shallow layer characteristics of the input parameters through the VGG-11 network according to the input parameters of the view characteristic classification network;
the view classification layer takes view shallow features as input and outputs classification probability of each view shallow feature through an n-gram feature learning unit;
and the view feature voting layer is used for voting the view feature category according to the view feature classification probability, the view feature category with the largest statistical vote number is used as the surface texture feature of the three-dimensional model object, if the condition that the vote number is the same occurs, the voting probability value is increased to be used as the additional weight to recalculate the vote number, and the concrete voting method is as follows:
finding the maximum classification probability of a view, voting for the category to which that maximum classification probability belongs, and at the same time re-sorting the remaining classification probabilities of the view; the largest of the remaining classification probabilities is compared with the maximum classification probability, and if the difference between the two probabilities is smaller than a comparison threshold, the category corresponding to the largest of the remaining classification probabilities also receives a vote, wherein the comparison threshold is determined by a person skilled in the art according to a large number of experiments;
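A sketch of the voting rule with this tie-break follows: each view votes for its top class, and when a view's top two probabilities differ by less than the comparison threshold the runner-up class also receives a vote. The threshold value here is a placeholder, to be tuned experimentally as stated above:

import numpy as np

def vote_texture_class(view_probs, threshold=0.05):
    votes = np.zeros(view_probs.shape[1])
    for p in view_probs:                     # p: class probabilities of one view
        order = np.argsort(p)[::-1]
        votes[order[0]] += 1.0
        if p[order[0]] - p[order[1]] < threshold:
            votes[order[1]] += 1.0           # near-tie: runner-up also votes
    return int(np.argmax(votes))             # class with the most votes

probs = np.random.default_rng(2).dirichlet(np.ones(5), size=12)  # 12 views
print(vote_texture_class(probs))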
referring to fig. 9, an optimal feature fusion flowchart of an embodiment of the present invention includes the following specific steps:
s5.1: inquiring the three-dimensional model object features, acquiring the inputs of the three-dimensional model object features, and mapping the inputs into a tuple sequence {(a_m, b_m, c_m)}, wherein a_m denotes the corresponding input of the m-th feature among the appearance geometric features, b_m denotes the corresponding input of the m-th feature among the internal topological features, and c_m denotes the corresponding input of the m-th feature among the surface texture features;
s5.2: carrying out a dot product on the corresponding input features in each candidate sequence in the tuple sequence, calculating the correlation score of the candidate sequence after the dot product through the Pearson correlation coefficient, compressing the correlation score with a sigmoid function and comparing it with a correlation threshold: if the correlation score is greater than the correlation threshold, the candidate sequence is retained, and if the correlation score is less than or equal to the correlation threshold, the candidate sequence is filtered out, wherein the correlation threshold is determined by a person skilled in the art according to a large number of experiments;
s5.3: collecting local features of the candidate sequences reserved in the tuple sequence, enhancing, splicing the enhanced candidate sequences along the head sequence of the tuple sequence, and fusing the local features through residual connection;
s5.4: pooling the feature vectors after residual connection fusion to obtain optimal fusion features;
s5.5: continuously optimizing the optimal fusion characteristics according to a continuous particle swarm algorithm, and calculating the optimal characteristic weight of the optimal fusion characteristics;
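A hedged sketch of steps s5.1 to s5.4 follows: each candidate tuple is scored with a Pearson correlation after a dot product, gated through a sigmoid against the correlation threshold, and the survivors are spliced and fused with a residual-style combination before pooling. The specific pairing inside the Pearson score is one plausible reading of the text, and step s5.5 (particle-swarm weight optimization) is omitted:

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def pearson(a, b):
    a, b = a - a.mean(), b - b.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

def fuse(tuple_sequence, corr_threshold=0.5):
    kept = []
    for geo, topo, tex in tuple_sequence:               # s5.1 tuple sequence
        score = sigmoid(pearson(geo * topo, tex))       # s5.2 dot product + Pearson
        if score > corr_threshold:
            kept.append(np.concatenate([geo, topo, tex]))  # s5.3 splicing
    if not kept:
        return None
    stacked = np.stack(kept)
    fused = stacked + stacked.mean(axis=0)              # residual-style fusion
    return fused.max(axis=0)                            # s5.4 pooling

rng = np.random.default_rng(3)
seq = [tuple(rng.normal(size=8) for _ in range(3)) for _ in range(4)]
print(fuse(seq, corr_threshold=0.0).shape)  # permissive threshold for the demo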
constructing a three-dimensional model object recognition network, the three-dimensional model object recognition network comprising:
the input layer calculates spatial features, structural features and mapping features according to input parameters, and performs feature learning on the three features through three parallel network modules;
the hidden layer inputs the learning results of the three features into the full-connection layer sharing the weight through cyclic iteration, obtains sparse codes through maximum pooling operation, and splices the sparse codes to obtain a base vector matrix;
and the output layer carries out loss learning on the basis vector matrix according to the cost function, judges whether the basis vector matrix meets the recognition learning threshold, returns to the hidden layer for retraining if the basis vector matrix does not meet the recognition learning threshold, and outputs the visual characteristics of the basis vector matrix if the basis vector matrix meets the recognition learning threshold, and takes the visual characteristics as the recognition result of the three-dimensional model object.
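A PyTorch sketch of this recognition network follows: three parallel branches for the spatial, structural and mapping features, a weight-shared fully connected layer, max pooling into sparse codes, concatenation into a basis-vector matrix, and a classification head standing in for the cost-function learning of the output layer. All layer widths are assumptions:

import torch
import torch.nn as nn

class RecognitionNet(nn.Module):
    def __init__(self, in_dim=128, hidden=64, n_classes=10):
        super().__init__()
        # three parallel modules for spatial / structural / mapping features
        self.branches = nn.ModuleList(
            [nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU()) for _ in range(3)])
        self.shared_fc = nn.Linear(hidden, hidden)  # weight-shared FC layer
        self.pool = nn.MaxPool1d(2)                 # max pooling -> sparse codes
        self.head = nn.Linear(3 * (hidden // 2), n_classes)

    def forward(self, spatial, structural, mapping):
        codes = []
        for branch, feat in zip(self.branches, (spatial, structural, mapping)):
            h = torch.relu(self.shared_fc(branch(feat)))
            codes.append(self.pool(h.unsqueeze(1)).squeeze(1))  # (B, hidden//2)
        basis = torch.cat(codes, dim=1)             # spliced basis-vector matrix
        return self.head(basis)                     # recognition scores

net = RecognitionNet()
feats = [torch.randn(2, 128) for _ in range(3)]
print(net(*feats).shape)  # torch.Size([2, 10])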
Example 2:
referring to fig. 10, the present invention provides an embodiment: the three-dimensional model object extraction and recognition system based on the neural network comprises a three-dimensional model object feature extraction module, a three-dimensional model object feature fusion module and a three-dimensional model object recognition module;
the three-dimensional model object feature extraction module is used for extracting three-dimensional model object features;
the three-dimensional model object feature fusion module is used for fusing the three-dimensional model object features to obtain optimal fusion features and optimal feature weights;
the three-dimensional model object recognition module is used for recognizing the three-dimensional model object according to the optimal fusion characteristics and the optimal characteristic weights;
the three-dimensional model object feature extraction module comprises:
the exterior geometric feature extraction unit is used for extracting exterior geometric features of the gridding three-dimensional model object;
an internal topological feature extraction unit for extracting internal topological features of the gridding three-dimensional model object;
and the surface texture feature extraction unit is used for extracting the surface texture features of the meshed three-dimensional model object.
Example 3:
the storage medium of the embodiment of the invention stores instructions, and when the instructions are read by a computer, the computer is caused to execute the three-dimensional model object extraction and identification method based on the neural network.
Example 4:
referring to fig. 11, an electronic device according to an embodiment of the present invention includes a three-dimensional model object acquisition component 410, a processor 420, a storage medium 430 and a three-dimensional model object recognition panel 440; the electronic device may be a computer, a mobile phone, or the like.
The three-dimensional model object acquisition component 410 is used to acquire the three-dimensional model object, the processor 420 may be electrically connected with the elements in the electronic device and executes the instructions in the storage medium 430, and the three-dimensional model object recognition panel 440 is used to display the acquired three-dimensional model object recognition result.
Those skilled in the art will appreciate that the present invention may be implemented as a system, method, or computer program product.
While embodiments of the present invention have been shown and described above, it will be understood that the above embodiments are illustrative and are not to be construed as limiting the invention; variations, modifications, substitutions and alterations may be made to the above embodiments by one of ordinary skill in the art within the scope of the invention.

Claims (6)

1. A three-dimensional model object extraction and recognition method based on a neural network, characterized by comprising the following steps:
a1: gridding and displaying the three-dimensional model object, and extracting three-dimensional model object characteristics, wherein the three-dimensional model object characteristics comprise appearance geometric characteristics, internal topological characteristics and surface texture characteristics;
a2: fusing the object features of the three-dimensional model according to an optimal feature fusion strategy, and determining optimal fusion features and optimal feature weights;
a3: constructing a three-dimensional model object recognition network, taking the optimal fusion characteristics and the optimal characteristic weights as input parameters of the three-dimensional model object recognition network, training the three-dimensional model object recognition network, and outputting recognition results of the three-dimensional model object;
the feature extraction of the appearance geometric features comprises the following specific steps:
s2.1: marking surface vertices of the three-dimensional model object after gridding display and, taking each surface vertex as the center, calculating one by one the curved-surface projection features of all points within a sphere of radius r around the vertex, wherein r satisfies a preset radius constraint; the calculation formula of the curved-surface projection feature of a point is as follows:
wherein F(p) denotes the curved-surface projection feature of the point, e denotes the exponential function, Δ denotes the discretized Laplace-Beltrami operator, S denotes the area of the triangle enclosed by the projection of the point on the surface, cot(·) denotes the cotangent function, α_xy denotes the included angle between point x, point y and the surface vertex in the three-dimensional meshed model, α_xz denotes the included angle between point x, point z and the surface vertex, α_zy denotes the included angle between point z, point y and the surface vertex, d_max denotes the Euclidean distance between the surface vertex and the point farthest from it, and d denotes the Euclidean distance between the point and the surface vertex;
s2.2: taking the largest curved surface projection characteristic corresponding point in the radius area as a shape sampling point, confirming a shape sampling point neighborhood according to a local shape distribution function, calculating the Euclidean distance between the point in the shape sampling point neighborhood and the shape sampling point, and constructing a local shape distribution histogram of the shape sampling point;
s2.3: corresponding weights are distributed to each component in the local shape distribution histogram through a Gaussian window function, a local shape distribution matrix is obtained, local shape area characteristics are calculated, the local shape area characteristics of each surface vertex are reorganized, multi-scale area characteristics are obtained, and a calculation formula of the local shape area characteristics is as follows:
wherein R denotes the local shape region feature, i denotes a single point in the shape sampling point neighborhood, N denotes the total number of points in the neighborhood, G denotes the local shape distribution function, F_i denotes the curved-surface projection feature of the i-th point, H denotes the local shape distribution matrix, H̄ denotes the matrix of average curved-surface projection features over all points in the neighborhood, × denotes matrix multiplication, and T denotes matrix transposition;
s2.4: constructing an exterior geometric feature extraction network, taking the multi-scale region features as input parameters of the exterior geometric feature extraction network, and outputting the exterior geometric features by training the exterior geometric feature extraction network;
the specific steps of feature extraction of the internal topological feature are as follows:
s3.1: preprocessing the three-dimensional model object after gridding display to obtain a three-dimensional skeleton model;
s3.2: creating a geodesic distance table, initializing the height function value g(v) of each surface vertex of the three-dimensional skeleton model, setting the height function value of the center point of the three-dimensional skeleton model according to the model height, and filling the height function values of the center point and all surface vertices into the geodesic distance table, wherein height denotes the height value of the three-dimensional skeleton model, g(·) denotes the height function, and v denotes a surface vertex of the three-dimensional skeleton model;
s3.3: sequentially calculating the Euclidean distance from each surface vertex to the center point, calculating the geodesic distance value of the surface vertex from that Euclidean distance, and comparing the geodesic distance value with the height function value of the center point: if the geodesic distance value is greater than the height function value of the center point, the height function value of the surface vertex in the geodesic distance table is modified to the geodesic distance value; if it is less than or equal to the height function value of the center point, the surface vertex is deleted from the geodesic distance table; the geodesic distance value of a surface vertex is calculated by the following formula:
wherein g_d(v) denotes the geodesic distance value of the surface vertex, d(v,c) denotes the Euclidean distance from the surface vertex to the center point, c denotes the center point, d̄ denotes the mean of the Euclidean distances from the surface vertices to the center point, d_max denotes the maximum of those Euclidean distances, and d_min denotes the minimum of those Euclidean distances;
s3.4: normalizing the geodesic distance values of the surface vertices according to the rest surface vertices in the geodesic distance table, and aggregating the surface vertices belonging to the same interval to obtain skeleton joints, and sequentially connecting the skeleton joints according to the topological characteristics of the skeleton joints to obtain a skeleton map;
s3.5: constructing an internal topological feature extraction network, taking the skeleton diagram as an input parameter of the internal topological feature extraction network, training through the internal topological feature extraction network, and outputting the internal topological feature;
the specific steps of feature extraction of the surface texture feature are as follows:
s4.1: carrying out coordinate normalization processing on the three-dimensional model object after gridding display, wherein the coordinate normalization processing comprises translation of a coordinate system, proportional conversion of the coordinate system and rotation of the coordinate system;
s4.2: taking the normalized three-dimensional model object as a center, placing a first virtual camera according to an included angle of 30 degrees between the camera and the center plane of the model, and placing a next virtual camera every 30 degrees by taking a Z axis as a rotating shaft to obtain twelve views of the three-dimensional model object, wherein the virtual cameras meet the condition that the shooting direction and the center of the model are on the same straight line;
s4.3: constructing a view feature classification network, taking the twelve views as input parameters of the view feature classification network, training the view feature classification network, and outputting the classification probability of each view P = {p_1, p_2, …, p_M}, wherein P denotes the view feature classification probability, M denotes the total number of view feature classes, and p_i denotes the classification probability of the view for the i-th view feature class;
s4.4: voting the view feature class according to the view feature classification probability by each view, and taking the view feature class with the largest statistics vote number as the surface texture feature of the three-dimensional model object;
the optimal feature fusion strategy specifically comprises the following steps:
s5.1: inquiring the three-dimensional model object features, acquiring the inputs of the three-dimensional model object features, and mapping the inputs into a tuple sequence {(a_m, b_m, c_m)}, wherein a_m denotes the corresponding input of the m-th feature among the appearance geometric features, b_m denotes the corresponding input of the m-th feature among the internal topological features, and c_m denotes the corresponding input of the m-th feature among the surface texture features;
s5.2: taking the dot product of the corresponding input features within each candidate tuple of the tuple sequence, computing the correlation score of the dot-product result through the Pearson correlation coefficient, compressing the correlation score with a sigmoid function and comparing it with a correlation threshold: if the correlation score is larger than the correlation threshold, the candidate tuple is retained; if it is smaller than or equal to the correlation threshold, the candidate tuple is filtered out (see the sketch after step s5.5);
s5.3: collecting and enhancing the local features of the candidate tuples retained in the tuple sequence, splicing the enhanced candidates along the head of the tuple sequence, and fusing the local features through residual connection;
s5.4: pooling the feature vectors after residual-connection fusion to obtain the optimal fusion feature;
s5.5: continuously optimizing the optimal fusion feature with a continuous particle swarm algorithm and calculating the optimal feature weight of the optimal fusion feature.
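The sketch below illustrates steps s5.2 and s5.5 under stated assumptions: the element-wise product stands in for the claimed dot-product interaction, the three pairwise Pearson correlations are averaged into a single score, and fitness is a caller-supplied objective (e.g. validation accuracy) to maximize; the threshold, swarm size and PSO coefficients are illustrative.

import numpy as np

def _pearson(x: np.ndarray, y: np.ndarray) -> float:
    x, y = x - x.mean(), y - y.mean()
    return float(x @ y) / (np.linalg.norm(x) * np.linalg.norm(y) + 1e-12)

def filter_candidates(tuples, threshold: float = 0.5):
    # s5.2: sigmoid-compressed mean pairwise Pearson correlation per tuple
    kept = []
    for a, b, c in tuples:
        score = np.mean([_pearson(a * b, b * c),
                         _pearson(a * b, a * c),
                         _pearson(b * c, a * c)])
        if 1.0 / (1.0 + np.exp(-score)) > threshold:   # sigmoid compression
            kept.append((a, b, c))
    return kept

def pso_feature_weights(fused, fitness, n_particles=20, n_iters=50,
                        w=0.7, c1=1.5, c2=1.5, seed=0):
    # s5.5: continuous particle swarm search over per-dimension feature weights
    rng = np.random.default_rng(seed)
    pos = rng.uniform(0.0, 1.0, (n_particles, fused.shape[-1]))
    vel = np.zeros_like(pos)
    pbest = pos.copy()
    pbest_val = np.array([fitness(fused * p) for p in pos])
    gbest = pbest[pbest_val.argmax()].copy()
    for _ in range(n_iters):
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, 0.0, 1.0)
        vals = np.array([fitness(fused * p) for p in pos])
        better = vals > pbest_val
        pbest[better], pbest_val[better] = pos[better], vals[better]
        gbest = pbest[pbest_val.argmax()].copy()
    return gbest   # optimal feature weights for the optimal fusion feature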
2. The neural network-based three-dimensional model object extraction and recognition method according to claim 1, wherein a three-dimensional model object recognition network is constructed, the three-dimensional model object recognition network comprising:
the input layer calculates spatial features, structural features and mapping features from the input parameters and performs feature learning on the three features through three parallel network modules;
the hidden layer feeds the learning results of the three features into a weight-sharing fully connected layer through cyclic iteration, obtains sparse codes through a maximum pooling operation, and splices the sparse codes to obtain a basis vector matrix;
and the output layer performs loss learning on the basis vector matrix according to the cost function and judges whether the basis vector matrix meets the recognition learning threshold; if not, it returns to the hidden layer for retraining; if so, it outputs the visual features of the basis vector matrix and takes the visual features as the recognition result of the three-dimensional model object.
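A minimal PyTorch-style sketch of the recognition network of claim 2, assuming each of the three features arrives as an (N, 128) tensor; the layer widths, the ReLU choice and the use of torch.amax for the maximum pooling are assumptions, not taken from the patent.

import torch
import torch.nn as nn

class RecognitionNet(nn.Module):
    def __init__(self, in_dim: int = 128, hidden: int = 256, code: int = 64):
        super().__init__()
        # three parallel modules for spatial, structural and mapping features
        self.branches = nn.ModuleList(
            nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU()) for _ in range(3))
        # one fully connected layer whose weights are shared by all branches
        self.shared_fc = nn.Linear(hidden, code)

    def forward(self, spatial, structural, mapping):
        codes = []
        for branch, x in zip(self.branches, (spatial, structural, mapping)):
            h = self.shared_fc(branch(x))        # shared-weight projection
            codes.append(torch.amax(h, dim=0))   # max pooling -> sparse code
        return torch.stack(codes)                # spliced basis vector matrix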
3. A neural network-based three-dimensional model object extraction and recognition system, implemented on the basis of the neural network-based three-dimensional model object extraction and recognition method according to any one of claims 1-2, characterized in that the system comprises a three-dimensional model object feature extraction module, a three-dimensional model object feature fusion module and a three-dimensional model object recognition module;
the three-dimensional model object feature extraction module is used for extracting three-dimensional model object features;
the three-dimensional model object feature fusion module is used for fusing the three-dimensional model object features to obtain optimal fusion features and optimal feature weights;
and the three-dimensional model object recognition module is used for recognizing the three-dimensional model object according to the optimal fusion characteristic and the optimal characteristic weight.
4. A neural network based three-dimensional model object extraction and recognition system according to claim 3, wherein the three-dimensional model object feature extraction module comprises:
the appearance geometric feature extraction unit is used for extracting the appearance geometric features of the meshed three-dimensional model object;
the internal topological feature extraction unit is used for extracting the internal topological features of the meshed three-dimensional model object;
and the surface texture feature extraction unit is used for extracting the surface texture features of the meshed three-dimensional model object.
5. A storage medium having instructions stored therein, which when read by a computer, cause the computer to perform the neural network-based three-dimensional model object extraction and recognition method of any one of claims 1-2.
6. An electronic device comprising a processor and the storage medium of claim 5, the processor executing instructions in the storage medium.
CN202410008847.8A 2024-01-04 2024-01-04 Three-dimensional model object extraction and recognition method based on neural network Active CN117523548B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410008847.8A CN117523548B (en) 2024-01-04 2024-01-04 Three-dimensional model object extraction and recognition method based on neural network

Publications (2)

Publication Number Publication Date
CN117523548A (en) 2024-02-06
CN117523548B (en) 2024-03-26

Family

ID=89753428

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410008847.8A Active CN117523548B (en) 2024-01-04 2024-01-04 Three-dimensional model object extraction and recognition method based on neural network

Country Status (1)

Country Link
CN (1) CN117523548B (en)

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2021042277A1 (en) * 2019-09-03 2021-03-11 浙江大学 Method for acquiring normal vector, geometry and material of three-dimensional object employing neural network

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108520513A (en) * 2018-03-30 2018-09-11 中国科学院计算技术研究所 A kind of threedimensional model local deformation component extraction method and system
CN109308486A (en) * 2018-08-03 2019-02-05 天津大学 Multi-source image fusion and feature extraction algorithm based on deep learning
CN110443885A (en) * 2019-07-18 2019-11-12 西北工业大学 Three-dimensional number of people face model reconstruction method based on random facial image
CN113538689A (en) * 2021-06-16 2021-10-22 杭州电子科技大学 Three-dimensional model mesh simplification method based on feature fusion of neural network
CN114267060A (en) * 2021-11-19 2022-04-01 哈尔滨工业大学(深圳) Face age identification method and system based on uncertain suppression network model
CN116778470A (en) * 2023-06-30 2023-09-19 京东方科技集团股份有限公司 Object recognition and object recognition model training method, device, equipment and medium
CN117274756A (en) * 2023-08-30 2023-12-22 国网山东省电力公司电力科学研究院 Fusion method and device of two-dimensional image and point cloud based on multi-dimensional feature registration

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Multi-Feature Fusion Based on Multi-View Feature and 3D Shape Feature for Non-Rigid 3D Model Retrieval; Hui Zeng et al.; IEEE Access; 2019-03-26; Full text *
Rotational Projection Statistics for 3D Local Surface Description and Object Recognition; Yulan Guo et al.; International Journal of Computer Vision; 2013-04-24; Full text *
Three-dimensional object recognition and model segmentation method based on point cloud data; Niu Chengeng; Liu Yujie; Li Zongmin; Li Hua; Journal of Graphics; 2019-04-15 (02); Full text *

Also Published As

Publication number Publication date
CN117523548A (en) 2024-02-06

Similar Documents

Publication Publication Date Title
Zhang et al. A review of deep learning-based semantic segmentation for point cloud
CN109544677B (en) Indoor scene main structure reconstruction method and system based on depth image key frame
WO2024077812A1 (en) Single building three-dimensional reconstruction method based on point cloud semantic segmentation and structure fitting
CN100559398C (en) Automatic deepness image registration method
CN108875813B (en) Three-dimensional grid model retrieval method based on geometric image
CN108052942B (en) Visual image recognition method for aircraft flight attitude
Xu et al. Reconstruction of scaffolds from a photogrammetric point cloud of construction sites using a novel 3D local feature descriptor
CN111899172A (en) Vehicle target detection method oriented to remote sensing application scene
CN111368759B (en) Monocular vision-based mobile robot semantic map construction system
Han et al. Urban scene LOD vectorized modeling from photogrammetry meshes
CN115861619A (en) Airborne LiDAR (light detection and ranging) urban point cloud semantic segmentation method and system of recursive residual double-attention kernel point convolution network
CN114998890B (en) Three-dimensional point cloud target detection algorithm based on graph neural network
CN110490915B (en) Point cloud registration method based on convolution-limited Boltzmann machine
CN112330825A (en) Three-dimensional model retrieval method based on two-dimensional image information
Guo et al. Line-based 3d building abstraction and polygonal surface reconstruction from images
Yuan et al. 3D point cloud recognition of substation equipment based on plane detection
Lei et al. What's the Situation With Intelligent Mesh Generation: A Survey and Perspectives
Li et al. Deep-learning-based 3D reconstruction: a review and applications
CN117351078A (en) Target size and 6D gesture estimation method based on shape priori
CN114120095A (en) Mobile robot autonomous positioning system and method based on aerial three-dimensional model
CN117523548B (en) Three-dimensional model object extraction and recognition method based on neural network
CN113408651B (en) Unsupervised three-dimensional object classification method based on local discriminant enhancement
Shui et al. Automatic planar shape segmentation from indoor point clouds
CN110163091B (en) Three-dimensional model retrieval method based on LSTM network multi-mode information fusion
CN111414802B (en) Protein data characteristic extraction method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant