CN107092859A - A depth feature extraction method for three-dimensional models

A depth feature extraction method for three-dimensional models

Info

Publication number
CN107092859A
CN107092859A, CN201710148547.XA
Authority
CN
China
Prior art keywords
layer
neural networks
convolutional neural networks
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201710148547.XA
Other languages
Chinese (zh)
Inventor
周燕 (Zhou Yan)
曾凡智 (Zeng Fanzhi)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Foshan University
Original Assignee
Foshan University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Foshan University filed Critical Foshan University
Priority to CN201710148547.XA priority Critical patent/CN107092859A/en
Publication of CN107092859A publication Critical patent/CN107092859A/en
Pending legal-status Critical Current

Links

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/60 Type of objects
    • G06V20/64 Three-dimensional objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)

Abstract

The present invention provides a depth feature extraction method for three-dimensional models. First, the polar view of a three-dimensional model is extracted and used as the training input of a deep convolutional neural network. Second, the deep convolutional neural network is constructed for training on polar views. Third, the polar views are fed into the deep convolutional neural network and the network is trained until it converges; when training is complete, the internal weights of the network are determined. Finally, the polar view of the three-dimensional model whose features are to be extracted is input into the trained deep convolutional neural network, and the feature vector of the fully connected layer is computed and taken as the depth feature of that model. The invention builds a deep convolutional neural network and reduces the residual error by iteratively revising the weights so that the network converges; after training, the fully connected layer of the network is extracted as the depth feature of the polar view of the three-dimensional model.

Description

A depth feature extraction method for three-dimensional models
Technical field
The present invention relates to the technical field of three-dimensional model processing, and more specifically to a depth feature extraction method for three-dimensional models.
Background technology
With the rapid development of three-dimensional model processing technology, computer hardware and software, and the popularization of multimedia and Internet technologies, large numbers of three-dimensional models are applied in many fields, and the demand for three-dimensional model applications keeps growing. Three-dimensional models play an important role in e-commerce, architectural design, industrial design, advertising, film and television, 3D games, and other fields. Large-scale collections of three-dimensional models require design reuse and model retrieval in many aspects of social production and daily life. How to quickly and accurately retrieve a target model from the existing collections of three-dimensional models of all types has therefore become a key problem to be solved urgently.
In recent years, three-dimensional model analysis based on deep learning has become a research hotspot. It combines research in computer vision, artificial intelligence and intelligent computing, and can address visual tasks on three-dimensional models such as feature extraction, classification, recognition, detection and prediction. Deep learning can automatically learn the latent features of three-dimensional models and can be trained on large-scale data sets, strengthening the generalization ability of the learned model.
Current deep-learning-based feature extraction methods have the following problems: the extracted features cannot fully express the information of a three-dimensional model; deep network layers bring high computational complexity and over-fitting; and training times and memory requirements are large. As deep learning technology matures and the demand for expressive three-dimensional model features grows stronger, using deep learning to extract features will bring new breakthroughs to the classification, retrieval, detection and recognition of three-dimensional models.
The content of the invention
The object of the present invention is to overcome the shortcomings and deficiencies of the prior art by providing a depth feature extraction method for three-dimensional models. The method builds a deep convolutional neural network, trains it on polar views of three-dimensional models, and reduces the residual error by iteratively revising the weights so that the network converges. After training, the fully connected layer of the convolutional neural network is extracted as the depth feature of the polar view of a three-dimensional model, so that the depth feature can be used for visual tasks such as classification, retrieval and recognition of three-dimensional models. The deep convolutional neural network built by the method has rich layers, which accelerates network training and improves the accuracy of the fitted deep network.
In order to achieve the above object, the technical scheme of the present invention is a depth feature extraction method for three-dimensional models, characterized in that:
First, the polar view of a three-dimensional model is extracted and used as the training input data of a deep convolutional neural network;
Second, the deep convolutional neural network is constructed and trained on the polar views. The deep convolutional neural network comprises an input layer that takes the polar view as training input data, convolutional layers that learn features of the polar view and produce two-dimensional feature maps, a pooling layer that aggregates the feature maps over different positions and reduces the feature dimension, fully connected layers that arrange and link the feature maps into a one-dimensional vector, and an output layer that outputs the classification prediction;
Third, the polar views are input into the deep convolutional neural network for training until the network converges; when training is complete, the internal weights of the deep convolutional neural network are determined;
Finally, the polar view of the three-dimensional model whose features are to be extracted is input into the trained deep convolutional neural network, and the feature vector of the fully connected layer of the network is computed and taken as the depth feature of that three-dimensional model.
In the above scheme, the depth feature extraction method of the present invention trains the deep convolutional neural network on polar views of three-dimensional models and reduces the residual error by iteratively revising the weights so that the network converges. After training, the fully connected layer of the convolutional neural network is extracted as the depth feature of the polar view. The polar view is a global representation of the spatial aggregation structure of a three-dimensional model, and using it simplifies the deep convolutional neural network and reduces the computation required for training. The deep convolutional neural network constructed by the present invention has rich layers, which accelerates training and improves the accuracy of the fitted network. Here, a polar view is the two-dimensional sample image formed by emitting a group of sampling rays outward from the centroid of the three-dimensional model and arranging the distances from the intersection points of the rays with the model to the centroid.
Specifically, this method comprises the following steps:
Step s101: extract the polar views of the three-dimensional models as the training input data of the deep convolutional neural network, where a training input is denoted x^(i) ∈ χ and χ is the set of polar views of N three-dimensional models; the class label of the i-th model is y^(i) ∈ {1, 2, ..., K}, where K is the number of model categories;
Step s102: build the deep convolutional neural network. The deep convolutional neural network comprises: an input layer Ι taking the polar view x^(i) as training input data, 4 convolutional layers C(t), t = 1, 2, 3, 4, one pooling layer P, two fully connected layers FC(1) and FC(2), and an output layer Ο. In each of the 4 convolutional layers and each of the two fully connected layers, the rectified linear activation function f(a) = max(0, a) is used in place of the sigmoid function f(a) = 1/(1+e^(-a)) of a conventional deep convolutional neural network;
Step s103: set the parameters of the deep convolutional neural network, i.e. initialize the weights of each layer:
Input layer Ι: the input data is a polar view x^(i) of size (32 × 32);
Convolutional layer C: the 4 convolutional layers learn the features of the polar view in sequence; the numbers of two-dimensional feature maps produced by the convolutional layers are F^t = (6, 8, 10, 12), and the feature maps of each convolutional layer are obtained according to the following formula:
x_q^t = f( Σ_{p∈M} x_p^{t-1} * k_{pq}^t + bias_q^t )
where x_q^t is the q-th two-dimensional feature map of C(t), M is the set of feature maps of layer t-1 (when t-1 = 0 the feature map is the input polar view), and k_{pq}^t is the convolution kernel from the p-th feature map of layer t-1 to the q-th feature map of convolutional layer t; a (5 × 5) matrix of random numbers in [-1, 1] is used as the initial convolution kernel. bias is the bias, initialized to 0. (*) denotes the convolution operation and f(·) is the rectified linear activation function. From the above formula:
Convolutional layer C(1) computes 6 two-dimensional feature maps of size (28 × 28);
Convolutional layer C(2) computes 8 two-dimensional feature maps of size (24 × 24);
Convolutional layer C(3) computes 10 two-dimensional feature maps of size (20 × 20);
Convolutional layer C(4) computes 12 two-dimensional feature maps of size (16 × 16);
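As an illustration of the convolutional-layer formula above, the following numpy/scipy sketch computes the feature maps of one layer by valid convolution with 5 × 5 kernels, adds the bias and applies the rectified linear activation; the library choice and all names are assumptions for clarity, not part of the patent.

    import numpy as np
    from scipy.signal import convolve2d

    def conv_layer(prev_maps, kernels, biases):
        """Compute the feature maps of one convolutional layer.

        prev_maps : list of 2-D arrays (feature maps of layer t-1)
        kernels   : kernels[p][q] is the 5x5 kernel from input map p to output map q
        biases    : biases[q] is the scalar bias of output map q
        """
        n_out = len(biases)
        out_maps = []
        for q in range(n_out):
            acc = sum(convolve2d(x_p, kernels[p][q], mode="valid")
                      for p, x_p in enumerate(prev_maps))
            out_maps.append(np.maximum(0.0, acc + biases[q]))   # rectified linear activation
        return out_maps

    # Example: the input layer holds one 32x32 polar view; C(1) has 6 maps of size 28x28.
    rng = np.random.default_rng(0)
    polar_view = rng.random((32, 32))
    kernels = [[rng.uniform(-1, 1, (5, 5)) for _ in range(6)]]   # 1 input map -> 6 output maps
    biases = np.zeros(6)
    maps_c1 = conv_layer([polar_view], kernels, biases)
    print(maps_c1[0].shape)   # (28, 28)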
Pooling layer P: maximum pooling is applied to the two-dimensional feature maps computed by convolutional layer C(4) according to the following formula, giving 12 feature matrices of size (8 × 8):
PM_p(u_0, v_0) = max( x_p^4(2u_0-1, 2v_0-1), x_p^4(2u_0-1, 2v_0), x_p^4(2u_0, 2v_0-1), x_p^4(2u_0, 2v_0) ), p = 1, 2, ..., F^4;
where PM_p(u_0, v_0) is the entry of the max-pooled feature matrix at coordinate (u_0, v_0), x_p^4 is a two-dimensional feature map computed by convolutional layer C(4), and max(·) takes the maximum of the matrix elements;
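As a small illustration of the maximum pooling above, the following numpy sketch (names assumed) reduces one (16 × 16) feature map of C(4) to an (8 × 8) matrix by taking the maximum over non-overlapping 2 × 2 blocks:

    import numpy as np

    def max_pool_2x2(feature_map):
        """Max-pool a 2-D feature map over non-overlapping 2x2 blocks."""
        h, w = feature_map.shape
        blocks = feature_map.reshape(h // 2, 2, w // 2, 2)
        return blocks.max(axis=(1, 3))

    x_c4 = np.random.rand(16, 16)        # one feature map of convolutional layer C(4)
    print(max_pool_2x2(x_c4).shape)      # (8, 8)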
Fully connected layer FC(1): each PM_p is arranged into a column vector (64 × 1); fully linking the column vectors of all matrices gives a one-dimensional vector L_0 (768 × 1), which is the input vector of FC(1);
Fully connected layer FC(2): FC(1) has 512 neurons and FC(2) has 128 neurons. The output vector L_1 of FC(1), which is the input vector of FC(2), and the output vector L_2 of FC(2), which is the input vector of the output layer O, are computed by the fully connected propagation formula
L_l = f(W_l · L_{l-1} + b_{l-1})
where W_l is the network weight matrix between FC(l-1) and FC(l): a matrix of random numbers in [-1, 1] of size (512 × 768) is used as the initial weight W_1, and a (128 × 512) matrix as the initial weight W_2; b_{l-1} is the bias of FC(l-1), initialized to 0; f(·) is the rectified linear activation function; l takes the values 1 and 2;
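As a small illustration of the fully connected propagation formula L_l = f(W_l · L_{l-1} + b_{l-1}), the following numpy sketch (names assumed) computes L_1 and L_2 from the flattened pooling output:

    import numpy as np

    def fc_layer(prev_vec, weights, bias):
        """Fully connected layer with rectified linear activation."""
        return np.maximum(0.0, weights @ prev_vec + bias)

    L0 = np.random.rand(768)                     # flattened pooling output (12 x 8 x 8)
    W1 = np.random.uniform(-1, 1, (512, 768))    # FC(1) initial weights
    W2 = np.random.uniform(-1, 1, (128, 512))    # FC(2) initial weights
    L1 = fc_layer(L0, W1, np.zeros(512))         # FC(1) output, 512 neurons
    L2 = fc_layer(L1, W2, np.zeros(128))         # FC(2) output, 128 neurons (the depth feature)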
Output layer O: the output layer has K neurons and is computed as
y' = f(W_3 · L_2 + b_2);
where y'^(i) is the final output of the deep convolutional neural network for the i-th model; W_3 is initialized with a (K × 128) matrix of random numbers in [-1, 1]; L_2 is the output vector of FC(2);
Step s104: train on the polar-view data set χ with learning rate η = 1 using stochastic gradient descent. The error between the prediction y'^(i) output by the deep convolutional neural network and the true class label y^(i) ∈ {1, 2, ..., K} is back-propagated; the algorithm converges after about 20 iterations, and the internal weights of the deep convolutional neural network are determined when training is complete;
Step s105: input the polar view x^(i) of the three-dimensional model whose features are to be extracted into the trained deep convolutional neural network and compute the feature vector L_2 output by the second fully connected layer FC(2); this is the depth feature of the polar view of that three-dimensional model.
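Taken together, steps s102 to s105 can be sketched as the following PyTorch-style module. The patent does not prescribe any framework, so the library choice, the class name and the cross-entropy training details are assumptions; the layer sizes (four 5 × 5 convolutions producing 6, 8, 10 and 12 maps, one 2 × 2 max pooling, fully connected layers of 512 and 128 neurons, K output neurons, rectified linear activations, stochastic gradient descent with learning rate 1) follow the description above.

    import torch
    import torch.nn as nn

    class PolarViewCNN(nn.Module):
        """Deep CNN over 32x32 polar views; the FC(2) output is the 128-D depth feature."""
        def __init__(self, num_classes):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 6, 5),   nn.ReLU(),   # C(1): 6 maps, 28x28
                nn.Conv2d(6, 8, 5),   nn.ReLU(),   # C(2): 8 maps, 24x24
                nn.Conv2d(8, 10, 5),  nn.ReLU(),   # C(3): 10 maps, 20x20
                nn.Conv2d(10, 12, 5), nn.ReLU(),   # C(4): 12 maps, 16x16
                nn.MaxPool2d(2),                   # P: 12 maps, 8x8
            )
            self.fc1 = nn.Sequential(nn.Linear(12 * 8 * 8, 512), nn.ReLU())  # FC(1)
            self.fc2 = nn.Sequential(nn.Linear(512, 128), nn.ReLU())         # FC(2)
            self.out = nn.Linear(128, num_classes)                           # output layer

        def forward(self, x, return_feature=False):
            l1 = self.fc1(self.features(x).flatten(1))
            l2 = self.fc2(l1)                      # depth feature of the polar view
            return l2 if return_feature else self.out(l2)

    # Training sketch: stochastic gradient descent on (polar view, class label) pairs until
    # convergence; afterwards model(polar_view, return_feature=True) gives the depth feature.
    model = PolarViewCNN(num_classes=10)
    optimizer = torch.optim.SGD(model.parameters(), lr=1.0)
    criterion = nn.CrossEntropyLoss()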
Extracting the polar view of a three-dimensional model means:
First, the three-dimensional point cloud model is pre-processed: its centroid and scale are computed, and the model is translated into the rectangular coordinate system and scaled, so that the point cloud model is normalized in the rectangular coordinate system;
Second, the scaled point cloud model is transformed from the rectangular coordinate system into spherical coordinates, giving the direction and distance of each point of the model;
Third, the spherical coordinates of the point set are mapped onto the pixel positions of the polar view, and the maximum distance of the set of sampled distances at each pixel is computed as the ray sampling value for that direction interval;
Finally, the maximum distances of the sampled distance sets of all pixels are arranged into a two-dimensional sample image, which is the extracted polar view.
Extracting the polar view of a three-dimensional model specifically comprises the following steps:
Step s201: input the three-dimensional point cloud model P = {p_i(x_i, y_i, z_i) | i = 1, 2, ..., N};
Step s202: compute the centroid g(g_x, g_y, g_z) of the point cloud model according to the following formula:
g_x = (1/N) Σ_{i=1}^{N} x_i,  g_y = (1/N) Σ_{i=1}^{N} y_i,  g_z = (1/N) Σ_{i=1}^{N} z_i;
using the centroid g(g_x, g_y, g_z), the point cloud model is translated in the rectangular coordinate system, so that the translated model is p'_i = p_i - g, i = 1, 2, ..., N, and its centroid lies at the origin of the rectangular coordinate system;
Step s203: compute the zoom factor s = max_i( ||p'_i||_2^2 ) of the point cloud model and scale the point cloud model by s to unit scale;
Step s204: transform the scaled point cloud model from the rectangular coordinate system into spherical coordinates Q; the conversion formula is as follows:
where θ ∈ [0, π], and the polar angle is 0 on the negative Z semi-axis;
Step s205: map the spherical coordinates Q onto the pixel positions (u, v) of the polar view, and compute the value of the point cloud model at each pixel position (u, v) of the polar view according to the mapping relation below; a pixel position (u, v) may correspond to one spherical coordinate point, to several spherical coordinate points, or to no spherical coordinate point;
where n_u and n_v are the width and height of the polar view, respectively;
Step s206: form the set of distances sampled at each pixel (u, v); the maximum of each pixel's sampled distance set is taken according to the following formula as that pixel's sampling value in the polar view, and the sampling values are arranged into a two-dimensional sample image, which is the polar view I.
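Below is a minimal numpy sketch of the polar-view extraction in steps s201 to s206, written under stated assumptions: the normalization divides by the maximum point norm, the polar angle is taken as arccos(z/r) with azimuth atan2(y, x), and θ and φ are binned uniformly over the pixel grid, since the patent's exact conversion and mapping formulas are not reproduced in this text; all function and variable names are illustrative.

    import numpy as np

    def extract_polar_view(points, n_u=32, n_v=32):
        """Map a 3-D point cloud (N, 3) to an n_v x n_u polar view of maximum radial distances."""
        # s201-s203: translate to the centroid and scale to unit size
        p = points - points.mean(axis=0)
        p = p / np.max(np.linalg.norm(p, axis=1))
        # s204: spherical coordinates (assumed convention: theta measured from +Z, phi = atan2)
        r = np.linalg.norm(p, axis=1)
        theta = np.arccos(np.clip(p[:, 2] / np.maximum(r, 1e-12), -1.0, 1.0))  # [0, pi]
        phi = np.arctan2(p[:, 1], p[:, 0])                                     # [-pi, pi]
        # s205: map the angles to pixel positions (u, v)
        u = np.minimum((phi + np.pi) / (2 * np.pi) * n_u, n_u - 1).astype(int)
        v = np.minimum(theta / np.pi * n_v, n_v - 1).astype(int)
        # s206: keep the maximum sampled distance per pixel
        view = np.zeros((n_v, n_u))
        np.maximum.at(view, (v, u), r)
        return view

    cloud = np.random.randn(5000, 3)          # stand-in point cloud model
    polar_view = extract_polar_view(cloud)    # 32x32 input image for the network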
The above depth feature extraction method extracts features directly from the polar view of a three-dimensional model; the convolution operations perceive the information of every position of the two-dimensional polar view. From the feature maps obtained through the multi-layer convolution processing, highly discriminative depth features are extracted in the fully connected layers.
When extracting the polar view, the pre-processing translates and scales the three-dimensional model, ensuring that the model is normalized and standardized to a standard scale. Converting the model point cloud to the spherical coordinate system makes it easy to map spherical coordinate points to the corresponding pixel positions of the two-dimensional polar view; through this mapping, the maximum of the distance set of the points at each pixel position is taken, and the maximum sampling values form the two-dimensional sample image, which is the new polar view of the three-dimensional model.
Compared with the prior art, the present invention has the following advantages and beneficial effects: the depth feature extraction method builds a deep convolutional neural network, trains it on polar views of three-dimensional models, and reduces the residual error by iteratively revising the weights so that the network converges. After training, the fully connected layer of the convolutional neural network is extracted as the depth feature of the polar view, so that the depth feature can be used for visual tasks such as classification, retrieval and recognition of three-dimensional models. The deep convolutional neural network built by the method has rich layers, which accelerates training and improves the accuracy of the fitted network.
Brief description of the drawings
Fig. 1 is a flow chart of the depth feature extraction method for three-dimensional models of the present invention;
Fig. 2 is a schematic diagram of the deep convolutional neural network in the depth feature extraction method of the present invention;
Fig. 3 is a flow chart of extracting the polar view of a three-dimensional model in the method of the present invention;
Fig. 4 is a schematic diagram of the polar view extracted from a three-dimensional model in the method of the present invention;
Detailed description of the embodiments
The present invention is described in further detail below with reference to the accompanying drawings and an embodiment.
Embodiment
As shown in Figures 1 to 4, the depth feature extraction method for three-dimensional models of this embodiment is carried out exactly as described above: the polar view of each three-dimensional model is extracted according to steps s201 to s206; the deep convolutional neural network (input layer, convolutional layers C(1) to C(4), pooling layer P, fully connected layers FC(1) and FC(2), and output layer O) is built and initialized according to steps s101 to s103; the network is trained on the polar-view data set with stochastic gradient descent until it converges according to step s104; and, for a three-dimensional model whose features are to be extracted, its polar view is input into the trained network and the output vector L_2 of the second fully connected layer FC(2) is taken as its depth feature according to step s105.
The above embodiment is a preferred embodiment of the present invention, but embodiments of the present invention are not limited by the above embodiment. Any other change, modification, substitution, combination or simplification made without departing from the spirit and principle of the present invention shall be regarded as an equivalent replacement and falls within the scope of protection of the present invention.

Claims (4)

1. A depth feature extraction method for three-dimensional models, characterized in that:
First, the polar view of a three-dimensional model is extracted and used as the training input data of a deep convolutional neural network;
Second, the deep convolutional neural network is constructed and trained on the polar views; the deep convolutional neural network comprises an input layer that takes the polar view as training input data, convolutional layers that learn features of the polar view and produce two-dimensional feature maps, a pooling layer that aggregates the feature maps over different positions and reduces the feature dimension, fully connected layers that arrange and link the feature maps into a one-dimensional vector, and an output layer that outputs the classification prediction;
Third, the polar views are input into the deep convolutional neural network for training until the network converges; when training is complete, the internal weights of the deep convolutional neural network are determined;
Finally, the polar view of the three-dimensional model whose features are to be extracted is input into the trained deep convolutional neural network, and the feature vector of the fully connected layer of the network is computed and taken as the depth feature of that three-dimensional model.
2. The depth feature extraction method for three-dimensional models according to claim 1, characterized by comprising the following steps:
Step s101: extract the polar views of the three-dimensional models as the training input data of the deep convolutional neural network, where a training input is denoted x^(i) ∈ χ and χ is the set of polar views of N three-dimensional models; the class label of the i-th model is y^(i) ∈ {1, 2, ..., K}, where K is the number of model categories;
Step s102: build the deep convolutional neural network, which comprises: an input layer Ι taking the polar view x^(i) as training input data, 4 convolutional layers C(t), t = 1, 2, 3, 4, one pooling layer P, two fully connected layers FC(1) and FC(2), and an output layer Ο; in each of the 4 convolutional layers and each of the two fully connected layers, the rectified linear activation function f(a) = max(0, a) is used in place of the sigmoid function f(a) = 1/(1+e^(-a)) of a conventional deep convolutional neural network;
Step s103: set the parameters of the deep convolutional neural network, i.e. initialize the weights of each layer:
Input layer Ι: the input data is a polar view x^(i) of size (32 × 32);
Convolutional layer C: the 4 convolutional layers learn the features of the polar view in sequence; the numbers of two-dimensional feature maps produced by the convolutional layers are F^t = (6, 8, 10, 12), and the feature maps of each convolutional layer are obtained according to the following formula:
x_q^t = f( Σ_{p∈M} x_p^{t-1} * k_{pq}^t + bias_q^t )
where x_q^t is the q-th two-dimensional feature map of C(t), M is the set of feature maps of layer t-1 (when t-1 = 0 the feature map is the input polar view), and k_{pq}^t is the convolution kernel from the p-th feature map of layer t-1 to the q-th feature map of convolutional layer t; a (5 × 5) matrix of random numbers in [-1, 1] is used as the initial convolution kernel; bias is the bias, initialized to 0; (*) denotes the convolution operation and f(·) is the rectified linear activation function; from the above formula:
Convolutional layer C(1) computes 6 two-dimensional feature maps of size (28 × 28);
Convolutional layer C(2) computes 8 two-dimensional feature maps of size (24 × 24);
Convolutional layer C(3) computes 10 two-dimensional feature maps of size (20 × 20);
Convolutional layer C(4) computes 12 two-dimensional feature maps of size (16 × 16);
Pooling layer P: maximum pooling is applied to the two-dimensional feature maps computed by convolutional layer C(4) according to the following formula, giving 12 feature matrices of size (8 × 8):
PM_p(u_0, v_0) = max( x_p^4(2u_0-1, 2v_0-1), x_p^4(2u_0-1, 2v_0), x_p^4(2u_0, 2v_0-1), x_p^4(2u_0, 2v_0) ), p = 1, 2, ..., F^4;
where PM_p(u_0, v_0) is the entry of the max-pooled feature matrix at coordinate (u_0, v_0), x_p^4 is a two-dimensional feature map computed by convolutional layer C(4), and max(·) takes the maximum of the matrix elements;
Fully connected layer FC(1): each PM_p is arranged into a column vector (64 × 1); fully linking the column vectors of all matrices gives a one-dimensional vector L_0 (768 × 1), which is the input vector of FC(1);
Fully connected layer FC(2): FC(1) has 512 neurons and FC(2) has 128 neurons; the output vector L_1 of FC(1), which is the input vector of FC(2), and the output vector L_2 of FC(2), which is the input vector of the output layer O, are computed by the fully connected propagation formula
L_l = f(W_l · L_{l-1} + b_{l-1})
where W_l is the network weight matrix between FC(l-1) and FC(l): a matrix of random numbers in [-1, 1] of size (512 × 768) is used as the initial weight W_1, and a (128 × 512) matrix as the initial weight W_2; b_{l-1} is the bias of FC(l-1), initialized to 0; f(·) is the rectified linear activation function; l takes the values 1 and 2;
Output layer O: the output layer has K neurons and is computed as
y' = f(W_3 · L_2 + b_2);
where y'^(i) is the final output of the deep convolutional neural network for the i-th model; W_3 is initialized with a (K × 128) matrix of random numbers in [-1, 1]; L_2 is the output vector of FC(2);
Step s104: train on the polar-view data set χ with learning rate η = 1 using stochastic gradient descent; the error between the prediction y'^(i) output by the deep convolutional neural network and the true class label y^(i) ∈ {1, 2, ..., K} is back-propagated; the algorithm converges after about 20 iterations, and the internal weights of the deep convolutional neural network are determined when training is complete;
Step s105: input the polar view x^(i) of the three-dimensional model whose features are to be extracted into the trained deep convolutional neural network and compute the feature vector L_2 output by the second fully connected layer FC(2), which is the depth feature of the polar view of that three-dimensional model.
3. The depth feature extraction method for three-dimensional models according to claim 1, characterized in that extracting the polar view of a three-dimensional model means:
First, the three-dimensional point cloud model is pre-processed: its centroid and scale are computed, and the model is translated into the rectangular coordinate system and scaled, so that the point cloud model is normalized in the rectangular coordinate system;
Second, the scaled point cloud model is transformed from the rectangular coordinate system into spherical coordinates, giving the direction and distance of each point of the model;
Third, the spherical coordinates of the point set are mapped onto the pixel positions of the polar view, and the maximum distance of the set of sampled distances at each pixel is computed as the ray sampling value for that direction interval;
Finally, the maximum distances of the sampled distance sets of all pixels are arranged into a two-dimensional sample image, which is the extracted polar view.
4. The depth feature extraction method for three-dimensional models according to claim 3, characterized in that extracting the polar view of a three-dimensional model specifically comprises the following steps:
Step s201: input the three-dimensional point cloud model P = {p_i(x_i, y_i, z_i) | i = 1, 2, ..., N};
Step s202: compute the centroid g(g_x, g_y, g_z) of the point cloud model according to the following formula:
g_x = (1/N) Σ_{i=1}^{N} x_i,  g_y = (1/N) Σ_{i=1}^{N} y_i,  g_z = (1/N) Σ_{i=1}^{N} z_i;
using the centroid g(g_x, g_y, g_z), the point cloud model is translated in the rectangular coordinate system, so that the translated model is p'_i = p_i - g, i = 1, 2, ..., N, and its centroid lies at the origin of the rectangular coordinate system;
Step s203: compute the zoom factor s of the point cloud model and scale the point cloud model by s to unit scale, where the zoom factor is
s = max_i( ||p'_i||_2^2 );
Step s204: transform the scaled point cloud model from the rectangular coordinate system into spherical coordinates Q; the conversion formula is as follows:
where θ ∈ [0, π], and the polar angle is 0 on the negative Z semi-axis;
Step s205: map the spherical coordinates Q onto the pixel positions (u, v) of the polar view, and compute the value of the point cloud model at each pixel position (u, v) of the polar view according to the mapping relation below; a pixel position (u, v) may correspond to one spherical coordinate point, to several spherical coordinate points, or to no spherical coordinate point;
where n_u and n_v are the width and height of the polar view, respectively;
Step s206: form the set of distances sampled at each pixel (u, v); the maximum of each pixel's sampled distance set is taken according to the following formula as that pixel's sampling value in the polar view, and the sampling values are arranged into a two-dimensional sample image, which is the polar view I.
CN201710148547.XA 2017-03-14 2017-03-14 A kind of depth characteristic extracting method of threedimensional model Pending CN107092859A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710148547.XA CN107092859A (en) 2017-03-14 2017-03-14 A kind of depth characteristic extracting method of threedimensional model

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710148547.XA CN107092859A (en) 2017-03-14 2017-03-14 A kind of depth characteristic extracting method of threedimensional model

Publications (1)

Publication Number Publication Date
CN107092859A true CN107092859A (en) 2017-08-25

Family

ID=59648576

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710148547.XA Pending CN107092859A (en) 2017-03-14 2017-03-14 A kind of depth characteristic extracting method of threedimensional model

Country Status (1)

Country Link
CN (1) CN107092859A (en)

Cited By (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108171217A (en) * 2018-01-29 2018-06-15 深圳市唯特视科技有限公司 A kind of three-dimension object detection method based on converged network
CN108345831A (en) * 2017-12-28 2018-07-31 新智数字科技有限公司 The method, apparatus and electronic equipment of Road image segmentation based on point cloud data
CN108427958A (en) * 2018-02-02 2018-08-21 哈尔滨工程大学 Adaptive weight convolutional neural networks underwater sonar image classification method based on deep learning
CN108986159A (en) * 2018-04-25 2018-12-11 浙江森马服饰股份有限公司 A kind of method and apparatus that three-dimensional (3 D) manikin is rebuild and measured
CN109063753A (en) * 2018-07-18 2018-12-21 北方民族大学 A kind of three-dimensional point cloud model classification method based on convolutional neural networks
CN109064549A (en) * 2018-07-16 2018-12-21 中南大学 Index point detection model generation method and mark point detecting method
CN109291657A (en) * 2018-09-11 2019-02-01 东华大学 Laser Jet system is identified based on convolutional neural networks space structure part industry Internet of Things
CN109410321A (en) * 2018-10-17 2019-03-01 大连理工大学 Three-dimensional rebuilding method based on convolutional neural networks
CN109685848A (en) * 2018-12-14 2019-04-26 上海交通大学 A kind of neural network coordinate transformation method of three-dimensional point cloud and three-dimension sensor
CN110059608A (en) * 2019-04-11 2019-07-26 腾讯科技(深圳)有限公司 A kind of object detecting method, device, electronic equipment and storage medium
CN110097077A (en) * 2019-03-26 2019-08-06 深圳市速腾聚创科技有限公司 Point cloud data classification method, device, computer equipment and storage medium
CN110321910A (en) * 2018-03-29 2019-10-11 中国科学院深圳先进技术研究院 Feature extracting method, device and equipment towards cloud
CN111709983A (en) * 2020-06-16 2020-09-25 天津工业大学 Bubble flow field three-dimensional reconstruction method based on convolutional neural network and light field image
CN111782879A (en) * 2020-07-06 2020-10-16 Oppo(重庆)智能科技有限公司 Model training method and device
CN113313830A (en) * 2021-05-24 2021-08-27 华南理工大学 Encoding point cloud feature extraction method based on multi-branch graph convolutional neural network
CN113313831A (en) * 2021-05-24 2021-08-27 华南理工大学 Three-dimensional model feature extraction method based on polar coordinate graph convolutional neural network
CN113643336A (en) * 2021-07-26 2021-11-12 之江实验室 Three-dimensional image rigid matching method based on spherical polar coordinate system deep neural network
CN113744237A (en) * 2021-08-31 2021-12-03 华中科技大学 Deep learning-based automatic detection method and system for muck fluidity
CN113777571A (en) * 2021-08-04 2021-12-10 中山大学 Unmanned aerial vehicle cluster dynamic directional diagram synthesis method based on deep learning
US11967873B2 (en) 2019-09-23 2024-04-23 Canoo Technologies Inc. Fractional slot electric motors with coil elements having rectangular cross-sections

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105868706A (en) * 2016-03-28 2016-08-17 天津大学 Method for identifying 3D model based on sparse coding

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105868706A (en) * 2016-03-28 2016-08-17 天津大学 Method for identifying 3D model based on sparse coding

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
BAOGUANG SHI et al.: "DeepPano: Deep Panoramic Representation for 3-D Shape Recognition", IEEE Signal Processing Letters *
冯毅攀 (Feng Yipan): "Research on view-based 3D model retrieval technology", China Master's Theses Full-text Database, Information Science and Technology *
曾向阳 (Zeng Xiangyang): "Intelligent Underwater Target Recognition", 31 March 2016, National Defense Industry Press *
杜卓明 (Du Zhuoming) et al.: "Weighted feature-point curvature spherical harmonic representation of 3D models", Computer Engineering and Applications *
陈雯柏 (Chen Wenbai): "Principles and Practice of Artificial Neural Networks", 31 January 2016, Xidian University Press *

Cited By (31)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108345831A (en) * 2017-12-28 2018-07-31 新智数字科技有限公司 The method, apparatus and electronic equipment of Road image segmentation based on point cloud data
CN108171217A (en) * 2018-01-29 2018-06-15 深圳市唯特视科技有限公司 A kind of three-dimension object detection method based on converged network
CN108427958A (en) * 2018-02-02 2018-08-21 哈尔滨工程大学 Adaptive weight convolutional neural networks underwater sonar image classification method based on deep learning
CN110321910B (en) * 2018-03-29 2021-05-28 中国科学院深圳先进技术研究院 Point cloud-oriented feature extraction method, device and equipment
CN110321910A (en) * 2018-03-29 2019-10-11 中国科学院深圳先进技术研究院 Feature extracting method, device and equipment towards cloud
CN108986159A (en) * 2018-04-25 2018-12-11 浙江森马服饰股份有限公司 A kind of method and apparatus that three-dimensional (3 D) manikin is rebuild and measured
CN108986159B (en) * 2018-04-25 2021-10-22 浙江森马服饰股份有限公司 Method and equipment for reconstructing and measuring three-dimensional human body model
CN109064549A (en) * 2018-07-16 2018-12-21 中南大学 Index point detection model generation method and mark point detecting method
CN109063753A (en) * 2018-07-18 2018-12-21 北方民族大学 A kind of three-dimensional point cloud model classification method based on convolutional neural networks
CN109063753B (en) * 2018-07-18 2021-09-14 北方民族大学 Three-dimensional point cloud model classification method based on convolutional neural network
CN109291657B (en) * 2018-09-11 2020-10-30 东华大学 Convolutional neural network-based aerospace structure industrial Internet of things identification laser coding system
CN109291657A (en) * 2018-09-11 2019-02-01 东华大学 Laser Jet system is identified based on convolutional neural networks space structure part industry Internet of Things
CN109410321A (en) * 2018-10-17 2019-03-01 大连理工大学 Three-dimensional rebuilding method based on convolutional neural networks
CN109410321B (en) * 2018-10-17 2022-09-20 大连理工大学 Three-dimensional reconstruction method based on convolutional neural network
CN109685848A (en) * 2018-12-14 2019-04-26 上海交通大学 A kind of neural network coordinate transformation method of three-dimensional point cloud and three-dimension sensor
CN109685848B (en) * 2018-12-14 2023-06-09 上海交通大学 Neural network coordinate transformation method of three-dimensional point cloud and three-dimensional sensor
CN110097077A (en) * 2019-03-26 2019-08-06 深圳市速腾聚创科技有限公司 Point cloud data classification method, device, computer equipment and storage medium
US11915501B2 (en) 2019-04-11 2024-02-27 Tencent Technology (Shenzhen) Company Limited Object detection method and apparatus, electronic device, and storage medium
CN110059608A (en) * 2019-04-11 2019-07-26 腾讯科技(深圳)有限公司 A kind of object detecting method, device, electronic equipment and storage medium
US11967873B2 (en) 2019-09-23 2024-04-23 Canoo Technologies Inc. Fractional slot electric motors with coil elements having rectangular cross-sections
CN111709983A (en) * 2020-06-16 2020-09-25 天津工业大学 Bubble flow field three-dimensional reconstruction method based on convolutional neural network and light field image
CN111782879A (en) * 2020-07-06 2020-10-16 Oppo(重庆)智能科技有限公司 Model training method and device
CN113313830B (en) * 2021-05-24 2022-12-16 华南理工大学 Encoding point cloud feature extraction method based on multi-branch graph convolutional neural network
CN113313831B (en) * 2021-05-24 2022-12-16 华南理工大学 Three-dimensional model feature extraction method based on polar coordinate graph convolution neural network
CN113313831A (en) * 2021-05-24 2021-08-27 华南理工大学 Three-dimensional model feature extraction method based on polar coordinate graph convolutional neural network
CN113313830A (en) * 2021-05-24 2021-08-27 华南理工大学 Encoding point cloud feature extraction method based on multi-branch graph convolutional neural network
CN113643336A (en) * 2021-07-26 2021-11-12 之江实验室 Three-dimensional image rigid matching method based on spherical polar coordinate system deep neural network
CN113643336B (en) * 2021-07-26 2024-03-15 之江实验室 Three-dimensional image rigid matching method based on spherical polar coordinate system depth neural network
CN113777571A (en) * 2021-08-04 2021-12-10 中山大学 Unmanned aerial vehicle cluster dynamic directional diagram synthesis method based on deep learning
CN113777571B (en) * 2021-08-04 2023-08-11 中山大学 Unmanned aerial vehicle cluster dynamic pattern synthesis method based on deep learning
CN113744237A (en) * 2021-08-31 2021-12-03 华中科技大学 Deep learning-based automatic detection method and system for muck fluidity

Similar Documents

Publication Publication Date Title
CN107092859A (en) A kind of depth characteristic extracting method of threedimensional model
Yan et al. Second: Sparsely embedded convolutional detection
Lu et al. 3DCTN: 3D convolution-transformer network for point cloud classification
Zhang et al. Latentgnn: Learning efficient non-local relations for visual recognition
Kang et al. Depth-adaptive deep neural network for semantic segmentation
Wang et al. An efficient and effective convolutional auto-encoder extreme learning machine network for 3d feature learning
Wang et al. Cross self-attention network for 3D point cloud
Teow Understanding convolutional neural networks using a minimal model for handwritten digit recognition
CN111242208A (en) Point cloud classification method, point cloud segmentation method and related equipment
CN107437096A (en) Image classification method based on the efficient depth residual error network model of parameter
CN106845499A (en) A kind of image object detection method semantic based on natural language
CN108334830A (en) A kind of scene recognition method based on target semanteme and appearance of depth Fusion Features
Jiang et al. An eight-layer convolutional neural network with stochastic pooling, batch normalization and dropout for fingerspelling recognition of Chinese sign language
Khalifa et al. Deep galaxy V2: Robust deep convolutional neural networks for galaxy morphology classifications
CN114049381A (en) Twin cross target tracking method fusing multilayer semantic information
Sun et al. Vicinity vision transformer
CN105224935A (en) A kind of real-time face key point localization method based on Android platform
CN114638408B (en) Pedestrian track prediction method based on space-time information
Tao et al. Pooling operations in deep learning: from “invariable” to “variable”
CN110197255A (en) A kind of deformable convolutional network based on deep learning
Yang et al. Detection of river floating garbage based on improved YOLOv5
Gao et al. Natural scene recognition based on convolutional neural networks and deep Boltzmannn machines
Ning et al. Point-voxel and bird-eye-view representation aggregation network for single stage 3D object detection
Fan et al. Hcpvf: Hierarchical cascaded point-voxel fusion for 3D object detection
Gao et al. Joint learning of semantic segmentation and height estimation for remote sensing image leveraging contrastive learning

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication
RJ01 Rejection of invention patent application after publication

Application publication date: 20170825