CN112149725B - Fourier transform-based spectral domain graph convolution 3D point cloud classification method - Google Patents


Info

Publication number
CN112149725B
CN112149725B
Authority
CN
China
Prior art keywords
graph
convolution
local
point cloud
geometric
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010991678.6A
Other languages
Chinese (zh)
Other versions
CN112149725A (en)
Inventor
Chen Suting (陈苏婷)
Chen Huaixin (陈怀新)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing University of Information Science and Technology
Original Assignee
Nanjing University of Information Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing University of Information Science and Technology filed Critical Nanjing University of Information Science and Technology
Priority to CN202010991678.6A priority Critical patent/CN112149725B/en
Publication of CN112149725A publication Critical patent/CN112149725A/en
Application granted granted Critical
Publication of CN112149725B publication Critical patent/CN112149725B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2413Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on distances to training or reference patterns
    • G06F18/24133Distances to prototypes
    • G06F18/24143Distances to neighbourhood prototypes, e.g. restricted Coulomb energy networks [RCEN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/10Image enhancement or restoration by non-spatial domain filtering
    • G06T5/70
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10028Range image; Depth image; 3D point clouds
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02ATECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A90/00Technologies having an indirect contribution to adaptation to climate change
    • Y02A90/10Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation

Abstract

The invention discloses a Fourier transform-based spectral domain graph convolution 3D point cloud classification method, which comprises the following steps: performing geometric sampling on the input original point cloud with a G-PointNet network model: setting an angle threshold V, dividing points whose neighborhood included-angle value exceeds V into a geometric feature region G and the remaining points into another region T, and sampling each region to obtain its point cloud; introducing an expansion rate E into a Dynamic KNN local graph construction method, and selectively building a local geometric graph from every E-th adjacent point; and performing spectral domain graph convolution with a Fourier transform-based method to obtain a plurality of pooled graph local features, which G-PointNet aggregates into global features and classifies to obtain the classification result. The method effectively handles the uneven density distribution of point clouds, preserves spatial geometric information, can efficiently distinguish point cloud edge points while separating out noise points, and improves classification accuracy.

Description

Fourier transform-based spectral domain graph convolution 3D point cloud classification method
Technical Field
The invention relates to a Fourier transform-based spectral domain graph convolution 3D point cloud classification method, and belongs to the technical field of remote sensing image processing.
Background
With the development of image processing technology, classification methods based on two-dimensional images have emerged one after another and achieved considerable success. However, deep learning methods operating on three-dimensional data still lag far behind two-dimensional image classification. Three-dimensional data is typically represented as depth images, voxels, meshes, or point clouds. A three-dimensional point cloud acquired by lidar provides more reliable depth and contour information about a three-dimensional object than data from an RGB-D camera or other mainstream sensors, and has therefore been applied increasingly to three-dimensional object classification in recent years.
In previous work, most computer-vision researchers built on the success of CNNs in image processing for point cloud classification, either extracting two-dimensional features from three-dimensional objects or projecting an object into multiple two-dimensional views taken from different "perspectives", extracting the corresponding view features, and fusing them for object recognition. However, these methods infer the shape of a 3D object from 2D images, discard the inherent spatial structure of the 3D point cloud, lose a large amount of spatial-structure information, and consume excessive memory. In view of these shortcomings of the 2D multi-view approach, researchers have attempted to voxelize 3D point clouds. However, voxelization does not preserve complete edge information, which makes it difficult for such methods to capture fine-grained detail.
Thus, several problems are common to current point cloud classification tasks. First, image convolution can define a local area in an image simply by choosing the size of the convolution kernel. Unlike an image, which has a regular grid structure, a point cloud is a set of points scattered in three-dimensional space: the points are distributed continuously, and their storage order does not change their spatial distribution, so a conventional deep neural network cannot convolve a point cloud directly. Second, the non-uniform spatial distribution of point clouds also poses a significant challenge for classification.
Disclosure of Invention
In order to overcome the defects of the prior art and effectively address the fact that traditional 3D point cloud classification methods are affected by spatial relations and the uneven distribution of the point cloud, the invention provides a Fourier transform-based spectral domain graph convolution 3D point cloud classification method. Without changing the spatial information of the point cloud, it adopts a new representation, the graph: the graph structure solves the problem of encoding point-to-point adjacency in point cloud deep learning models, preserves spatial geometric information, and is well suited to irregularly arranged non-Euclidean data. Deep learning in the spectral domain remains under-explored; the model combines spectral domain graph convolution with a 3D point cloud framework for the first time. Spectral convolution rests on a solid mathematical foundation, and the graph convolution better captures the adjacency relations among key points. The G-PointNet of the invention substantially improves feature point acquisition and local-region division: it introduces a geometric-sampling preprocessing step and a dynamic K-nearest-neighbor graph construction method, Dynamic KNN, which together effectively solve the problem of uneven point cloud density.
The technical scheme adopted by the invention specifically solves the technical problems as follows:
The Fourier transform-based spectral domain graph convolution 3D point cloud classification method comprises the following steps:
performing geometric sampling on the input original point cloud with a G-PointNet network model: setting an angle threshold V, dividing points whose neighborhood included-angle value exceeds V into a geometric feature region G and the remaining points into another region T, and uniformly sampling the point clouds in the two regions to obtain the geometrically sampled point cloud of each region;
constructing an undirected graph from the geometrically sampled point clouds of each region, introducing an expansion rate E into a Dynamic KNN local graph construction method, and selectively building a local geometric graph from every E-th adjacent point, obtaining a plurality of local geometric graphs;
and performing spectral domain graph convolution on each local geometric graph with a Fourier transform-based method to obtain a plurality of pooled graph local features, which the G-PointNet network model aggregates into global features and classifies to obtain the classification result.
Further, as a preferred technical solution of the invention, the method performs spectral domain graph convolution on each local geometric graph with the Fourier transform-based method as follows:
input a local geometric graph G = (V, ε), where V and ε denote the node set and edge set respectively, μ, ν ∈ V denote nodes in the graph, and (μ, ν) ∈ ε denote edges in the graph;
define the Laplacian matrix L = D − A of the local geometric graph, where A is the adjacency matrix of the graph, with elements A_{i,j} = A_{j,i}, representing how the nodes of the graph are connected; D is the degree matrix of the graph, with diagonal elements D_{i,i} = Σ_j A_{i,j}; the degree of a node is the number of edges connected to that node;
normalize to obtain the Laplacian matrix L = I_n − D^{−1/2} A D^{−1/2}, where I_n is the identity matrix, and perform an eigendecomposition into a set of Laplacian eigenvectors U = (u_1, u_2, ..., u_n);
taking the Laplacian eigenvectors as a basis, take the local geometric graph as input x and apply the Fourier transform to obtain x̂ = Uᵀx, where ᵀ denotes the matrix transpose; obtain the Fourier-domain diagonal-matrix form of the convolution kernel h_θ(Λ), compute the Fourier-domain convolution, and apply the inverse transform to finally obtain the spectral domain graph convolution output.
Further, as a preferred technical solution of the invention, the spectral domain graph convolution output obtained by the method is:
y = σ(U h_θ(Λ) Uᵀ x)
where y is the output of the spectral domain graph convolution, x is the input local geometric graph, and σ(·) is the ReLU activation function.
By adopting the technical scheme, the invention can produce the following technical effects:
In the Fourier transform-based spectral domain graph convolution 3D point cloud classification method, the PointNet deep network model and the Fourier transform-based spectral domain graph convolution operation are combined into the G-PointNet network model of the method. G-PointNet retains the spatial transformer network T-Net of PointNet: if the point cloud undergoes certain geometric transformations, such as a rigid transformation, the semantic labels of the points must not change, so the learned representation of the point set should be invariant to these transformations. The solution is to align every input set to a canonical space before feature extraction. Three-dimensional alignment is realized by sampling and interpolation, with a network layer specially designed to run on the GPU.
In the preprocessing stage, the original point cloud undergoes geometric sampling. Inspired by dilated (atrous) convolution, the invention proposes a graph construction method, Dynamic KNN (D-KNN): several local geometric graph structures are built as graph-convolution inputs, mapped to the spectral domain by the Fourier transform for the convolution operation, returned to the spatial domain by the inverse Fourier transform, and finally the pooled graph local features are aggregated by PointNet into global features for classification. The process can be divided into three parts: geometric sampling, Dynamic KNN local graph construction, and Fourier transform-based spectral domain graph convolution. The advantages are as follows:
1. Geometric sampling: the advantages of geometric sampling are significant. In regions where the geometric features of the point cloud are more pronounced, more sampled points are allocated, so edge features are prominent, computation is efficient, and the sampling result is more robust to noise.
2. Dynamic KNN local graph construction: Dynamic KNN is a dynamic composition method for selecting a K-nearest neighborhood, inspired by dilated convolution. In two-dimensional image tasks, dilated convolution effectively enlarges the receptive field without changing the output size of the image by introducing a hyperparameter called the "expansion rate" (dilation rate). Dynamic KNN introduces an expansion rate E into the KNN algorithm; the rate can be chosen according to the local density of the point cloud.
3. Fourier transform-based spectral domain graph convolution: the Fourier transform-based spectral graph convolution network serves as the point cloud feature extraction network, and the advantage of spectral graph convolution for point cloud classification is clear. Whereas spatial convolution in end-to-end deep learning tasks is hard to interpret mathematically, the Fourier transform has a solid theoretical basis that explains its feasibility. Moreover, points with large edge variation and noise points in a point cloud are generally regarded as high-frequency signals; the Fourier transform can separate high-frequency from low-frequency signals, so for the classification task it can efficiently distinguish point cloud edge points while separating out noise points, which is important for classification.
Drawings
FIG. 1 is a schematic diagram of the method of the present invention.
Fig. 2 is a schematic diagram of a geometric sampling process in the method of the present invention.
FIG. 3 is a structural schematic diagram of the Dynamic KNN local graph in the invention.
Detailed Description
Embodiments of the present invention will be described below with reference to the drawings.
As shown in fig. 1, the invention provides a Fourier transform-based spectral domain graph convolution 3D point cloud classification method. The process can be divided into three parts: geometric sampling, Dynamic KNN local graph construction, and Fourier transform-based spectral domain graph convolution. The steps are as follows:
Step 1: perform geometric sampling on the input original point cloud with the G-PointNet network model. Specifically:
First, the G-PointNet network model of the invention is obtained by combining the PointNet deep network model with the Fourier transform-based spectral domain graph convolution operation. The G-PointNet network model retains the spatial transformer network T-Net of PointNet. It takes a point cloud directly as input: a collection of points in three-dimensional space, each represented by its spatial coordinates (x, y, z), sometimes with additional features such as color or laser reflection intensity. Unless otherwise specified, G-PointNet uses only the three-dimensional coordinates (x, y, z) as point features.
The nature of a point cloud means it cannot be convolved directly with the deep models used in image processing; a point cloud has three characteristics:
1. The point set is unordered. Swapping any two pixels changes an image; unlike a two-dimensional image, a point cloud has no particular arrangement order — changing the order in which the points are listed does not change the shape of the point cloud.
2. Points are interrelated. Each point has three-dimensional coordinates (x, y, z) carrying the spatial information that represents shape; points are not independent, and neighboring local point sets can represent meaningful spatial structure. The model therefore needs to capture local structure from nearby points, as well as the combined interactions between local structures.
3. Transformation invariance. A point cloud is three-dimensional data; rotating or translating it should not affect the final classification result.
The G-PointNet network model of the method applies geometric sampling to the original point cloud in the preprocessing stage and retains the spatial transformer network T-Net of PointNet, which adjusts the point cloud to a pose suitable for classification.
A schematic diagram of the geometric sampling process of the method is shown in fig. 2. The method processes the point cloud data by geometric sampling. A conventional point cloud classification model such as PointNet adopts the farthest point sampling algorithm: an initial point is selected at random, the point farthest from the selected set is added to it, and the iteration continues until the required number of points is reached. Farthest point sampling computes a point-to-set distance at every step, so the algorithm has high time complexity, and the sampled points do not have prominent edge features. By contrast, the advantages of geometric sampling are obvious. The geometric sampling procedure of the invention is as follows:
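For reference, the farthest point sampling baseline described above can be sketched as follows. This is an illustrative NumPy sketch, not the patent's implementation; the function name and seed handling are assumptions. Note that each iteration updates an O(N) point-to-set distance, giving the O(S·N) cost the text criticizes.

```python
import numpy as np

def farthest_point_sampling(points, n_samples, seed=0):
    """Iteratively pick the point farthest from the set already chosen.

    points: (N, 3) array; returns the indices of the sampled points.
    """
    rng = np.random.default_rng(seed)
    n = points.shape[0]
    chosen = [int(rng.integers(n))]            # random initial point
    dist = np.full(n, np.inf)                  # distance to the chosen set
    for _ in range(n_samples - 1):
        d = np.linalg.norm(points - points[chosen[-1]], axis=1)
        dist = np.minimum(dist, d)             # update point-to-set distance
        chosen.append(int(np.argmax(dist)))    # farthest remaining point
    return np.array(chosen)
```

Because already-chosen points have distance zero to the set, they are never re-selected while distinct points remain.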
Given the number C of input points, the target sample count S, and a uniform sampling rate U, an angle threshold V is set; points whose neighborhood included-angle value exceeds V are assigned to the geometric feature region G, and the remaining points to the other region T. The point cloud is thus divided into two parts, the geometric feature region G and the other region T, and the points in the two regions are uniformly sampled separately to obtain the geometrically sampled point cloud of each region.
Geometric sampling collects more points where the curvature of the point cloud is larger. However, computing point cloud curvature is very time-consuming and greatly increases the workload, so the effect of curvature is approximated by a simple method: compute the normal angle between a feature point and its neighborhood points in the local point cloud graph structure — the larger the normal angle, the larger the curvature. In fig. 2, c1, c2 and c3 are three cloud points and α denotes the normal angle; the curvature at c1 is approximated by the normal angle between the c2 and c3 sides, the two being positively correlated.
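Under stated assumptions — point normals are precomputed, neighborhoods are brute-force kNN, and the parameter values (k, threshold, per-region rates) are purely illustrative — the region split and per-region uniform sampling might be sketched as:

```python
import numpy as np

def geometric_sampling(points, normals, k=8, angle_threshold_deg=30.0,
                       rate_g=0.8, rate_t=0.2, seed=0):
    """Split points into a geometric-feature region G (large normal angle
    to neighbours, i.e. high curvature) and a flat region T, then sample
    each region uniformly at its own rate."""
    rng = np.random.default_rng(seed)
    # brute-force kNN (a KD-tree would be used at scale)
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    nbrs = np.argsort(d2, axis=1)[:, 1:k + 1]
    # largest angle between a point's normal and its neighbours' normals
    cosang = np.clip(np.einsum('ij,ikj->ik', normals, normals[nbrs]), -1, 1)
    max_angle = np.degrees(np.arccos(cosang)).max(axis=1)
    G = np.where(max_angle > angle_threshold_deg)[0]   # edges / corners
    T = np.where(max_angle <= angle_threshold_deg)[0]  # flat areas
    pick = lambda idx, r: rng.choice(idx, max(1, int(r * len(idx))),
                                     replace=False) if len(idx) else idx
    return np.concatenate([pick(G, rate_g), pick(T, rate_t)])
```

With rate_g > rate_t, more samples survive where the normal angle (the curvature proxy above) is large, matching the stated goal of denser sampling near edges.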
Step 2: construct an undirected graph from the geometrically sampled point cloud of each region, introduce an expansion rate E into the Dynamic KNN local graph construction method, and selectively build a local geometric graph from every E-th adjacent point, obtaining a plurality of local geometric graphs. Specifically:
Fig. 3 shows a schematic diagram of the local graph construction method Dynamic KNN. The receptive field of a point cloud is a set of point cloud nodes comprising a central node and its neighbors; however, because the point cloud distribution is non-uniform, some nodes may have only one neighbor while others may have thousands.
In the local graph construction method, Dynamic KNN introduces an expansion rate E into the KNN algorithm; the rate can be chosen according to the local density of the point cloud. Dynamic KNN sets two threshold counts M and N, where M < N; M and N are target point cloud counts, X is the number of points after geometric sampling, and E is the expansion rate.
An undirected graph G = (V, ε) represents the graph structure of the point cloud, where V = {1, ..., n} and ε ⊆ V × V. The idea of Dynamic KNN is to selectively build a local geometric graph from every E-th adjacent point according to the sparsity of the point cloud, so that a plurality of local geometric graphs are built after repeated selections. When the point cloud is sparse, E = 1 and the conventional KNN nearest-neighbor scheme is used. When the point cloud is dense, a connection is made every E adjacent points, and the resulting local geometric graph structure is fed into the spectral domain graph convolution as input. Dynamic KNN effectively solves the problem of excessive node overlap in dense point clouds while reducing computational complexity.
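The selection rule above can be sketched as a dilated kNN edge-list builder: among a point's neighbors ordered by distance, keep every E-th one, so the receptive field widens in dense regions without adding edges. This is an illustrative sketch (the density-based choice of E via the thresholds M, N is left to the caller, and all names are assumptions):

```python
import numpy as np

def dynamic_knn(points, k=4, dilation=1):
    """Dilated kNN graph over a point cloud; returns a directed edge list.

    With dilation (expansion rate) E = 1 this is plain kNN; larger E skips
    E-1 neighbours between each kept one."""
    n = points.shape[0]
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    order = np.argsort(d2, axis=1)[:, 1:]      # neighbours by distance, no self
    edges = []
    for i in range(n):
        for j in order[i, ::dilation][:k]:     # keep every E-th neighbour
            edges.append((i, int(j)))
    return edges

# The caller would pick dilation=1 for sparse clouds and a larger value
# for dense ones, e.g. based on the local point count.
```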
Step 3: perform spectral domain graph convolution on each local geometric graph with the Fourier transform-based method to obtain a plurality of pooled graph local features, and classify them after obtaining global features through the PointNet network model, yielding the classification result. That is, each of the local geometric graphs built by the Dynamic KNN method is taken as a graph-convolution input x, mapped to the spectral domain by the Fourier transform for the convolution operation, returned to the spatial domain by the inverse Fourier transform, and finally the pooled graph local features are aggregated into global features by the PointNet network model and classified. The specific steps are:
First, input a local geometric graph G = (V, ε) representing an undirected graph, where V and ε denote the node set and edge set respectively, μ, ν ∈ V denote nodes in the graph, and (μ, ν) ∈ ε denote edges in the graph.
Next, define the Laplacian matrix L = D − A of the local geometric graph G, where A is the adjacency matrix of the graph, with elements A_{i,j} = A_{j,i} (i and j index the rows and columns of the matrix), representing how the nodes of the graph are connected; for an undirected graph with N nodes, the adjacency matrix is an N × N real symmetric matrix. D is the degree matrix of the graph, with diagonal elements D_{i,i} = Σ_j A_{i,j}; the degree of a node is the number of edges connected to that node. L, the Laplacian matrix of the graph, may be binary or weighted.
Normalization then yields the Laplacian matrix L = I_n − D^{−1/2} A D^{−1/2}, where I_n is the identity matrix. Since the Laplacian matrix is symmetric, it can be eigendecomposed into a set of Laplacian eigenvectors U = (u_1, u_2, ..., u_n). Eigenvectors associated with larger eigenvalues carry rapidly varying signals and are regarded as high-frequency; eigenvectors associated with smaller eigenvalues carry slowly varying signals and are regarded as low-frequency. In the point cloud classification task, finding the edge information of an object can be viewed as separating high-frequency from low-frequency signals.
Then, taking the decomposed Laplacian eigenvectors as a basis, take the local geometric graph as input x and apply the Fourier transform to obtain x̂ = Uᵀx, where ᵀ denotes the matrix transpose; the inverse Fourier transform is x = Ux̂. In migrating the classical Fourier transform and convolution to graph convolution, the core step is to replace the eigenfunctions of the Laplace operator with the eigenvectors of the Laplacian matrix of the local geometric graph G, i.e., to obtain the Fourier-domain diagonal-matrix form of the convolution kernel h_θ(Λ), compute the Fourier-domain convolution, and apply the inverse transform to obtain the spectral domain graph convolution output. The derivation is as follows:
(1) Regard the input as f; the Fourier transform of f is f̂ = Uᵀf.
(2) The convolution kernel in Fourier-domain diagonal-matrix form is h_θ(Λ) = diag(ĥ(λ_1), ..., ĥ(λ_n)), where θ is the learnable kernel parameter, λ_l is the l-th eigenvalue, and u_l the corresponding eigenvector.
(3) The Fourier-domain convolution is then h_θ(Λ)Uᵀx.
(4) Applying the inverse Fourier transform to this product gives U h_θ(Λ)Uᵀx.
(5) The final spectral domain graph convolution formula is:
y = σ(U h_θ(Λ) Uᵀ x)
where y is the output of the spectral domain graph convolution, x is the input local geometric graph, and σ(·) is the ReLU activation function.
Finally, the outputs y of the spectral domain graph convolutions form a plurality of pooled graph local features, which are aggregated into global features through the PointNet network model and classified to obtain the final classification result.
In summary, the method substantially improves feature point acquisition and local-region division: it introduces geometric-sampling preprocessing and the dynamic local graph construction method Dynamic KNN, effectively solving the problem of uneven point cloud density. Without changing the spatial information of the point cloud, it adopts a new representation, the graph; the graph structure solves the point-to-point adjacency problem in point cloud deep learning models, preserves spatial geometric information, and, by introducing the expansion-rate hyperparameter, effectively enlarges the receptive field without changing the output size. The Fourier transform-based spectral graph convolution network serves as the point cloud feature extraction network; for the classification task the Fourier transform efficiently distinguishes point cloud edge points while separating out noise points, improving classification accuracy, so the advantage of spectral graph convolution for point cloud classification is clear.
The embodiments of the present invention have been described in detail with reference to the drawings, but the present invention is not limited to the above embodiments, and various changes can be made within the knowledge of those skilled in the art without departing from the spirit of the present invention.

Claims (2)

1. The spectrum domain map convolution 3D point cloud classification method based on Fourier transformation is characterized by comprising the following steps of:
performing geometric sampling processing on the input original point cloud by using a G-PointNet network model: dividing points with neighborhood included angle values larger than an angle threshold V into geometric feature areas G and dividing the rest points into other areas T by setting the angle threshold V, and uniformly sampling point clouds in the two areas to obtain point clouds after geometric sampling of each area;
constructing an undirected graph by using the geometrically sampled point clouds of each region, introducing an expansion rate E based on a Dynamic KNN local graph construction method, and selectively constructing a local geometric graph every E adjacent point clouds to obtain a plurality of local geometric graphs;
carrying out spectral domain graph convolution on each local geometric graph by utilizing a Fourier transform-based spectral domain graph convolution method to obtain a plurality of pooled graph local features, and classifying the global features through a G-PointNet network model to obtain a classification result;
the spectrum domain graph convolution method for carrying out spectrum domain graph convolution on each local geometric graph by utilizing a spectrum domain graph convolution method based on Fourier transformation specifically comprises the following steps:
inputting a local geometric figure G= (V, epsilon), wherein V, epsilon respectively represent a corresponding node set and an edge set, mu, V epsilon V represent nodes in the figure, and (mu, V) epsilon represent edges in the figure;
defining the Laplacian matrix L = D − A of the local geometric graph, where A is the adjacency matrix of the graph, whose symmetric entries A_{i,j} = A_{j,i} record the connections between nodes, and D is the degree matrix of the graph, a diagonal matrix with entries D_{i,i} = Σ_j A_{i,j}; the degree of a node is the number of edges connected to that node;
normalizing to obtain the Laplacian matrix L̃ = I_n − D^(−1/2) A D^(−1/2), where I_n is the identity matrix, and performing eigendecomposition on it to obtain the set of Laplacian eigenvectors U = (u_1, u_2, ..., u_n);
taking the eigenvectors of the decomposed Laplacian matrix as a basis, taking the local geometric graph signal as the input x, and applying the graph Fourier transform to obtain x̂ = U^T x, where T denotes the matrix transpose; expressing the convolution kernel h_θ(Λ) as a diagonal matrix in the Fourier domain to obtain the Fourier-domain convolution, and then applying the inverse transform to finally obtain the spectral domain graph convolution output.
2. The Fourier-transform-based spectral domain graph convolution 3D point cloud classification method according to claim 1, wherein the spectral domain graph convolution output is given by:
y = σ(U h_θ(Λ) U^T x)
where y is the output of the spectral domain graph convolution, x is the input local geometric graph signal, and σ(·) is the ReLU activation function.
CN202010991678.6A 2020-09-18 2020-09-18 Fourier transform-based spectrum domain map convolution 3D point cloud classification method Active CN112149725B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010991678.6A CN112149725B (en) 2020-09-18 2020-09-18 Fourier transform-based spectrum domain map convolution 3D point cloud classification method


Publications (2)

Publication Number Publication Date
CN112149725A CN112149725A (en) 2020-12-29
CN112149725B true CN112149725B (en) 2023-08-22

Family

ID=73892671

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010991678.6A Active CN112149725B (en) 2020-09-18 2020-09-18 Fourier transform-based spectrum domain map convolution 3D point cloud classification method

Country Status (1)

Country Link
CN (1) CN112149725B (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112967296B (en) * 2021-03-10 2022-11-15 重庆理工大学 Point cloud dynamic region graph convolution method, classification method and segmentation method
CN113157864A (en) * 2021-04-25 2021-07-23 平安科技(深圳)有限公司 Key information extraction method and device, electronic equipment and medium
US11295170B1 (en) 2021-08-17 2022-04-05 FPT USA Corp. Group-equivariant convolutional neural networks for 3D point clouds
CN114565774B (en) * 2022-02-21 2024-04-05 辽宁师范大学 3D (three-dimensional) graph volume integration class method based on local geometry and global structure joint learning
GB202207459D0 (en) * 2022-05-20 2022-07-06 Cobra Simulation Ltd Content generation from sparse point datasets
CN115099287B (en) * 2022-08-24 2022-11-11 山东大学 Space variable gene identification and analysis system based on graph Fourier transform
CN116977572B (en) * 2023-09-15 2023-12-08 南京信息工程大学 Building elevation structure extraction method for multi-scale dynamic graph convolution

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102292187A (en) * 2008-11-21 2011-12-21 普雷茨特两合公司 Method and device for monitoring a laser machining operation to be performed on a workpiece and laser machining head having such a device
CN106897707A (en) * 2017-03-02 2017-06-27 苏州中科天启遥感科技有限公司 Characteristic image time series synthetic method and device based in multi-source points
CN110348299A (en) * 2019-06-04 2019-10-18 上海交通大学 The recognition methods of three-dimension object
CN111027559A (en) * 2019-10-31 2020-04-17 湖南大学 Point cloud semantic segmentation method based on expansion point convolution space pyramid pooling
CN111160171A (en) * 2019-12-19 2020-05-15 哈尔滨工程大学 Radiation source signal identification method combining two-domain multi-features

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
AU2014254426B2 (en) * 2013-01-29 2018-05-10 Andrew Robert Korb Methods for analyzing and compressing multiple images


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Study on nondestructive detection of moldy and sprouted peanuts by near-infrared spectroscopy; Huang Xingyi et al.; Journal of Agricultural Science and Technology of China; Vol. 5, No. 6; pp. 27-32 *

Also Published As

Publication number Publication date
CN112149725A (en) 2020-12-29

Similar Documents

Publication Publication Date Title
CN112149725B (en) Fourier transform-based spectrum domain map convolution 3D point cloud classification method
CN105427296B (en) A kind of thyroid gland focus image-recognizing method based on ultrasonoscopy low rank analysis
CN111178432A (en) Weak supervision fine-grained image classification method of multi-branch neural network model
CN110348399B (en) Hyperspectral intelligent classification method based on prototype learning mechanism and multidimensional residual error network
Chen et al. LSANet: Feature learning on point sets by local spatial aware layer
CN106650744B Image object segmentation method guided by local shape migration
CN103310481B (en) A kind of point cloud compressing method based on fuzzy entropy iteration
Liu et al. TreePartNet: neural decomposition of point clouds for 3D tree reconstruction
CN111028327A (en) Three-dimensional point cloud processing method, device and equipment
Huang et al. Automatic extraction of urban impervious surfaces based on deep learning and multi-source remote sensing data
Lv et al. Deep learning model of image classification using machine learning
CN112634149A (en) Point cloud denoising method based on graph convolution network
He et al. A Method of Identifying Thunderstorm Clouds in Satellite Cloud Image Based on Clustering.
Wang et al. Multispectral point cloud superpoint segmentation
Feng et al. Infrared and visible image fusion based on the total variational model and adaptive wolf pack algorithm
CN116721121A (en) Plant phenotype color image feature extraction method
Wang et al. Novel segmentation algorithm for jacquard patterns based on multi‐view image fusion
CN111986223B (en) Method for extracting trees in outdoor point cloud scene based on energy function
Shetty et al. Skin cancer detection using image processing: A review
Cheng et al. Visual information quantification for object recognition and retrieval
Zhang et al. CAD-Aided 3D Reconstruction of Intelligent Manufacturing Image Based on Time Series
Zhang et al. A method for identifying and repairing holes on the surface of unorganized point cloud
CN111815640A (en) Memristor-based RBF neural network medical image segmentation algorithm
Zhou Lip Print Recognition Algorithm Based on Convolutional Network
Dong et al. 3D Object recognition method based on point cloud sequential coding

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant