CN115456064A - Object classification method based on point cloud and related equipment - Google Patents


Info

Publication number
CN115456064A
Authority
CN
China
Prior art keywords
point cloud
global
attention mechanism
feature matrix
global attention
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202211076689.7A
Other languages
Chinese (zh)
Other versions
CN115456064B (en)
Inventor
吴显峰
赖重远
王俊飞
刘心怡
刘宇炜
周静
刘霞
刘哲
胡亦明
Current Assignee
Hunan Yanhong Automation Equipment Co ltd
Yanhong Intelligent Technology Wuhan Co ltd
Jianghan University
Original Assignee
Hunan Yanhong Automation Equipment Co ltd
Yanhong Intelligent Technology Wuhan Co ltd
Jianghan University
Priority date
Filing date
Publication date
Application filed by Hunan Yanhong Automation Equipment Co ltd, Yanhong Intelligent Technology Wuhan Co ltd, Jianghan University filed Critical Hunan Yanhong Automation Equipment Co ltd
Priority to CN202211076689.7A
Publication of CN115456064A
Application granted
Publication of CN115456064B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764 Arrangements using classification, e.g. of video objects
    • G06V10/765 Arrangements using rules for classification or partitioning the feature space


Abstract

The invention discloses a point cloud based object classification method and related equipment, relating to the field of point clouds, and mainly aims to solve the problem that classification accuracy and stability are difficult to balance when objects are classified based on point clouds. The method comprises the following steps: determining the aligned point cloud coordinate data of a target object; determining a high-level feature matrix based on the aligned point cloud coordinate data and a global attention mechanism; and determining the global features of the target object based on the high-level feature matrix so as to determine a classification result. The method is used in the point cloud based object classification process.

Description

Object classification method based on point cloud and related equipment
Technical Field
The invention relates to the field of point cloud, in particular to an object classification method based on point cloud and related equipment.
Background
Object classification is a classical problem in visual computing and pattern recognition. With the development of deep neural network technology, the performance of object classification has leapt forward, showing strong application potential in robotics, autonomous driving and augmented reality. Common object representations include images and point clouds. Because the image structure is naturally ordered, uniform and regular, deep neural network technology first succeeded in classifying objects with images as input. Compared with image input, although three-dimensional point clouds offer richer spatial information and are less affected by illumination changes, their natural disorder, heterogeneity and irregularity make it challenging to design neural network feature extraction and classification methods that take three-dimensional point clouds directly as input.
The common classification methods at present are: global-feature-based methods, local-feature-based methods, and neighborhood-feature-based methods. The global-feature-based method, in which per-point features are not affected by the distribution of surrounding points, is very stable against point cloud density changes caused by, for example, varying distance to the captured target, but suffers from poor classification accuracy. The local-feature-based and neighborhood-feature-based methods take the local and neighborhood characteristics of the point cloud into account, so their performance is affected by locally missing regions and distribution changes in the point cloud. The prior art therefore still faces the technical problem that classification accuracy and stability are difficult to balance.
Disclosure of Invention
In view of the above problems, the present invention provides a point cloud based object classification method and related equipment, mainly aiming to solve the problem that classification accuracy and stability are difficult to balance when objects are classified based on point clouds.
In order to solve at least one technical problem, in a first aspect, the present invention provides a method for object classification based on point cloud, including:
determining the coordinate data of the aligned point cloud of the target object;
determining a high-level feature matrix based on the aligned point cloud coordinate data and a global attention mechanism;
and determining the global characteristics of the target object based on the high-level characteristic matrix to determine a classification result.
Optionally, the method further includes:
determining point cloud coordinate data based on the target object;
and carrying out space transformation on the point cloud coordinate data based on a space transformation network to determine the aligned point cloud coordinate data.
Optionally, the determining a high-level feature matrix based on the alignment point cloud coordinate data and the global attention mechanism includes:
determining a target global feature extraction model based on a global attention mechanism based on a global feature extraction framework and the global attention mechanism;
and determining a high-level feature matrix based on the aligned point cloud coordinate data and the target global feature extraction model based on the global attention mechanism.
Optionally, the target global feature extraction model based on the global attention mechanism includes: the system comprises a multilayer perceptron network based on a global attention mechanism, a feature transformation network based on a cascading global attention mechanism and a multilayer perceptron network based on the cascading global attention mechanism.
Optionally,
the cascaded global attention mechanism is formed by cascading a plurality of global attention mechanisms,
the multi-layer perceptron network is used for extracting the characteristics of the point cloud data,
the feature transformation network is used for aligning the features of the point cloud data.
Optionally, the determining a high-level feature matrix based on the alignment point cloud coordinate data and the global attention mechanism includes:
carrying out multi-layer perceptron network processing based on a global attention mechanism on the aligned point cloud coordinate data to obtain a low-layer feature matrix;
processing the low-level feature matrix through a feature transformation network based on a cascade global attention mechanism to obtain an alignment low-level feature matrix;
and carrying out multilayer perceptron network processing based on a cascade global attention mechanism on the alignment low-layer feature matrix to obtain a high-layer feature matrix.
Optionally, the determining the global feature of the target object based on the high-level feature matrix to determine a classification result includes:
performing maximum pooling processing on the high-level feature matrix to obtain global features;
and performing full-connection network processing on the global features to classify the target object.
In a second aspect, an embodiment of the present invention further provides an object classification apparatus based on point cloud, including:
the first determining unit is used for determining the aligned point cloud coordinate data of the target object;
a second determining unit, configured to determine a high-level feature matrix based on the aligned point cloud coordinate data and a global attention mechanism;
and a third determining unit configured to determine a classification result by determining a global feature of the target object based on the high-level feature matrix.
In order to achieve the above object, according to a third aspect of the present invention, there is provided a computer-readable storage medium including a stored program, wherein the steps of the above-described point cloud-based object classification method are implemented when the program is executed by a processor.
In order to achieve the above object, according to a fourth aspect of the present invention, there is provided an electronic device comprising at least one processor, and at least one memory connected to the processor; the processor is used for calling the program instructions in the memory and executing the steps of the object classification method based on the point cloud.
By means of the above technical scheme, the invention provides a point cloud based object classification method and related equipment. To address the problem that classification accuracy and stability are difficult to balance when objects are classified based on point clouds, the invention determines the aligned point cloud coordinate data of a target object; determines a high-level feature matrix based on the aligned point cloud coordinate data and a global attention mechanism; and determines the global features of the target object based on the high-level feature matrix so as to determine a classification result. In this scheme, three key networks of the point cloud target global feature extraction model are redesigned, and different global attention mechanisms are organically fused with the original network in the new networks, so that each point in the point cloud can make full use of the features of all points at each key stage of feature extraction, which improves classification accuracy. At the same time, no stage of feature extraction involves partitioning local point clouds or computing point neighborhoods, which guarantees classification stability and solves the technical problem in the prior art that classification accuracy and stability are difficult to balance.
Accordingly, the point cloud based object classification device, the electronic device and the computer-readable storage medium provided by the embodiments of the invention also have these technical effects.
The foregoing description is only an overview of the technical solutions of the present invention, and the embodiments of the present invention are described below in order to make the technical means of the present invention more clearly understood and to make the above and other objects, features, and advantages of the present invention more clearly understandable.
Drawings
Various additional advantages and benefits will become apparent to those of ordinary skill in the art upon reading the following detailed description of the preferred embodiments. The drawings are only for purposes of illustrating the preferred embodiments and are not to be construed as limiting the invention. Also, like reference numerals are used to refer to like parts throughout the drawings. In the drawings:
fig. 1 is a schematic flow chart illustrating a point cloud-based object classification method according to an embodiment of the present invention;
fig. 2 is a schematic network structure diagram illustrating an object classification method based on point cloud according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of a multi-layer perceptron network based on a global attention mechanism according to an embodiment of the present invention;
FIG. 4 is a diagram illustrating a cascaded global attention mechanism provided by an embodiment of the present invention;
FIG. 5 is a schematic diagram illustrating a feature transformation network based on a cascaded global attention mechanism according to an embodiment of the present invention;
FIG. 6 is a schematic diagram of a multi-layer perceptron network based on a cascaded global attention mechanism according to an embodiment of the present invention;
FIG. 7 is a block diagram illustrating a point cloud-based object classification apparatus according to an embodiment of the present invention;
fig. 8 is a block diagram illustrating a schematic composition of an electronic device for object classification based on point cloud according to an embodiment of the present invention.
Detailed Description
Exemplary embodiments of the present invention will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the invention are shown in the drawings, it should be understood that the invention can be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the invention to those skilled in the art.
In order to solve the problem that classification accuracy and stability are difficult to balance when objects are classified based on point clouds, an embodiment of the invention provides a point cloud based object classification method. As shown in fig. 1, the method comprises the following steps:
s101, determining the coordinate data of the aligned point cloud of a target object;
illustratively, the point data set of the product appearance surface obtained by the measuring instrument in the reverse engineering is also called point cloud. The point cloud is a massive point set which expresses the target space distribution and the target surface characteristics under the same space reference system and is often directly obtained by measurement. Each point corresponds to a measurement point, and the maximum information content is included without other processing means. The point cloud obtained according to the laser measurement principle comprises three-dimensional coordinates and laser reflection intensity. The point cloud obtained according to the photogrammetry principle comprises three-dimensional coordinates and color information. And (4) combining laser measurement and photogrammetry principles to obtain point clouds comprising three-dimensional coordinates, laser reflection intensity and color information. After the spatial coordinates of each sampling point on the surface of the object are obtained, a set of points, called a "point cloud", is obtained. The method comprises the steps of firstly obtaining the coordinate data of the aligned point cloud of a target object to be classified.
S102, determining a high-level feature matrix based on the aligned point cloud coordinate data and a global attention mechanism;
illustratively, the method introduces a global attention mechanism technology into three-dimensional point cloud classification, three key networks of a point cloud target global feature extraction model are redesigned, and different global attention mechanism modules and an original network framework are organically fused together in a new network to determine a high-level feature matrix.
And S103, determining the global characteristics of the target object based on the high-level characteristic matrix to determine a classification result.
Illustratively, the method enables each point in the point cloud to make full use of the features of all points at each key stage of feature extraction, and no stage of feature extraction involves partitioning local point clouds or computing point neighborhoods, thereby solving the technical problem in the prior art that classification accuracy and stability are difficult to balance.
Illustratively, the specific terms in this scheme correspond to the following English terms: point cloud (Point Cloud), down-sampling (Down-Sampling), spatial transformation network (Spatial Transformation Network), three-dimensional coordinate matrix (3D Coordinates Matrix), aligned three-dimensional coordinate matrix (Aligned 3D Coordinates Matrix), feature extraction model (Feature Extraction Model), global attention mechanism (Global Attention Mechanism), cascaded global attention mechanism (Cascaded Global Attention Mechanism), multi-layer perceptron network (Multi-Layer Perceptron Network), feature transformation network (Feature Transformation Network), low-level feature matrix (Low-Level Feature Matrix), high-level feature matrix (High-Level Feature Matrix), aligned low-level feature matrix (Aligned Low-Level Feature Matrix), max pooling (Max Pooling), fully connected network (Fully Connected Network), and global feature vector (Global Feature Vector).
By means of the above technical scheme, the invention provides a point cloud based object classification method. To address the problem that classification accuracy and stability are difficult to balance when objects are classified based on point clouds, the invention determines the aligned point cloud coordinate data of a target object; determines a high-level feature matrix based on the aligned point cloud coordinate data and a global attention mechanism; and determines the global features of the target object based on the high-level feature matrix so as to determine a classification result. In this scheme, three key networks of the point cloud target global feature extraction model are redesigned, and different global attention mechanisms are organically fused with the original network in the new networks, so that each point in the point cloud can make full use of the features of all points at each key stage of feature extraction, improving classification accuracy. At the same time, no stage of feature extraction involves partitioning local point clouds or computing point neighborhoods, which guarantees classification stability and solves the technical problem in the prior art that classification accuracy and stability are difficult to balance.
In one embodiment, the method further comprises:
determining point cloud coordinate data based on the target object;
and performing space transformation on the point cloud coordinate data based on a space transformation network to determine the aligned point cloud coordinate data.
Illustratively, for convenience of description, let P = [p_1, p_2, ..., p_N]^T denote the N×3 three-dimensional coordinate matrix of the down-sampled input point cloud, i.e. the point cloud coordinate data, where p_i denotes the three-dimensional coordinate vector of the i-th point in the down-sampled input point cloud, N denotes the number of points in the down-sampled input point cloud, and T denotes the matrix transpose. Let C denote the C-dimensional output classification vector, where C denotes the number of classes. In the point cloud object classification process, the input point cloud P first passes through the spatial transformation network to obtain the aligned N×3 coordinate matrix, i.e. the aligned point cloud coordinate data.
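The alignment step above can be sketched as follows. The spatial transformation network predicts a 3×3 matrix from the input cloud, and alignment is then the single product P @ T, which preserves the N×3 shape. The rotation T below is a fixed stand-in for the learned output, since the patent does not spell out the regressor.

```python
import numpy as np

def align(P: np.ndarray, T: np.ndarray) -> np.ndarray:
    """Apply a predicted 3x3 spatial transformation to an N x 3 point cloud."""
    assert P.shape[1] == 3 and T.shape == (3, 3)
    return P @ T

theta = np.pi / 4  # illustrative 45-degree rotation about the z-axis
T = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])
P = np.random.default_rng(1).standard_normal((1024, 3))
P_aligned = align(P, T)
print(P_aligned.shape)  # (1024, 3)
```

Because the stand-in T is a rotation, point-to-origin distances are unchanged; a learned affine transform would not have that property in general.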
In an embodiment, the determining the high-level feature matrix based on the aligned point cloud coordinate data and the global attention mechanism includes:
determining a target global feature extraction model based on a global attention mechanism based on a global feature extraction framework and the global attention mechanism;
and determining a high-level feature matrix based on the alignment point cloud coordinate data and the target global feature extraction model based on the global attention mechanism.
Illustratively, the method redesigns the point cloud target global feature extraction model. In the new model, different global attention mechanisms are organically fused with the original global feature extraction framework, so that at each key stage of feature extraction every point in the point cloud can, with the aid of the global attention mechanism, make full use of the features of all points, greatly improving the classification accuracy for point cloud objects. Meanwhile, no stage of feature extraction involves partitioning local point clouds or computing point neighborhoods, so the method avoids the influence of locally missing regions and distribution changes in the point cloud and can maintain very high classification accuracy even in the extreme case where the number of points drops sharply.
In one embodiment, the global attention mechanism-based target global feature extraction model includes: the system comprises a multilayer perceptron network based on a global attention mechanism, a feature transformation network based on a cascading global attention mechanism and a multilayer perceptron network based on the cascading global attention mechanism.
Illustratively, the method redesigns three key networks of a point cloud target global feature extraction model, namely a multilayer perceptron network based on a global attention mechanism for extracting low-layer features, a feature transformation network based on a cascade global attention mechanism for aligning the low-layer features and a multilayer perceptron network based on a cascade global attention mechanism for extracting high-layer features. In a new network, different global attention mechanisms are organically combined with an original global feature extraction framework, so that each point in the point cloud can fully utilize the features of all the points in each key stage of feature extraction, and the classification precision of the point cloud object is greatly improved.
In one embodiment,
the cascaded global attention mechanism is formed by cascading a plurality of global attention mechanisms,
the multi-layer perceptron network is used for extracting the characteristics of the point cloud data,
the feature transformation network is used for aligning the features of the point cloud data.
Illustratively, the cascaded global attention mechanism is used in the following two key networks: the feature transformation network based on the cascaded global attention mechanism and the multi-layer perceptron network based on the cascaded global attention mechanism. Its main function is to obtain global features at different degrees of attention concentration. As shown in fig. 4, the cascaded global attention mechanism is designed as follows. The mechanism is formed by cascading m global attention mechanisms with the same structure. In the process of extracting global features at different degrees of attention concentration, the N×D-dimensional input feature matrix is first processed by global attention mechanism 1 to obtain a first N×D-dimensional feature matrix; that feature matrix is processed by global attention mechanism 2 to obtain a second N×D-dimensional feature matrix; and so on, up to global attention mechanism m, where D denotes the dimension of the input feature vector of a point in the point cloud. Finally, the m N×D-dimensional matrices are concatenated in order to form the final N×mD-dimensional features. As shown in fig. 4, as the feature matrix passes through successive global attention mechanisms, the degree of focus of the features deepens; by cascading features with different degrees of focus, the resulting final features can more accurately represent features of various scales in the object point cloud. Therefore, compared with a single global attention mechanism, the features obtained by the cascaded global attention mechanism have stronger discriminative power, further improving the overall classification accuracy.
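The cascade described above can be sketched in a few lines. The patent does not fix the internal form of each global attention module, so the minimal dot-product attention below is an assumption; what the sketch does show faithfully is the chaining of m identical passes and the concatenation of their N×D outputs into an N×mD matrix.

```python
import numpy as np

def softmax(x: np.ndarray, axis: int = -1) -> np.ndarray:
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def global_attention(F: np.ndarray) -> np.ndarray:
    """One global attention pass: each point attends to all N points.
    Minimal dot-product form (an assumption, not the patent's exact variant)."""
    A = softmax(F @ F.T / np.sqrt(F.shape[1]))  # N x N attention weights
    return A @ F                                # N x D re-weighted features

def cascaded_global_attention(F: np.ndarray, m: int) -> np.ndarray:
    """Chain m identical attention passes and concatenate their outputs,
    turning an N x D input into the N x mD matrix described in the text."""
    outputs = []
    for _ in range(m):
        F = global_attention(F)  # focus deepens with each successive pass
        outputs.append(F)
    return np.concatenate(outputs, axis=1)

F = np.random.default_rng(2).standard_normal((128, 64))  # N = 128, D = 64
G = cascaded_global_attention(F, m=3)
print(G.shape)  # (128, 192)
```

Each of the m slices of the output corresponds to one level of attention concentration, which is why the concatenation carries multi-scale information.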
In an embodiment, the determining the high level feature matrix based on the aligned point cloud coordinate data and the global attention mechanism includes:
carrying out multi-layer perceptron network processing based on a global attention mechanism on the aligned point cloud coordinate data to obtain a low-layer feature matrix;
performing feature transformation network processing based on a cascade global attention mechanism on the low-level feature matrix to obtain an aligned low-level feature matrix;
and carrying out multilayer perceptron network processing based on a cascade global attention mechanism on the alignment low-layer feature matrix to obtain a high-layer feature matrix.
Illustratively, the function of the multi-layer perceptron network based on the global attention mechanism is to extract low-level features from the aligned point cloud coordinates. As shown in fig. 3, the network consists of a multi-layer perceptron and a global attention mechanism. In the process of extracting low-level features, the aligned N×3-dimensional point cloud coordinate matrix first passes through the multi-layer perceptron to obtain an N×D_L-dimensional feature matrix, where D_L is the dimension of the low-level features of points in the point cloud. This feature matrix is then added to the N×D_L-dimensional feature matrix obtained from it by the global attention mechanism module, yielding the N×D_L-dimensional low-level feature matrix. As shown in fig. 3, compared with the multi-layer perceptron network for extracting low-level features in the global-feature-based point cloud object classification method, the new network adds a global attention mechanism module. In the process of extracting low-level features, the features output by the global attention module are superimposed on the features output by the original multi-layer perceptron. This network design effectively enhances the discriminative power of the low-level features and helps improve the overall classification accuracy and stability.
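A minimal sketch of this low-level stage follows: a shared per-point MLP (random stand-in weights W, ReLU activation, both assumptions) maps the aligned N×3 coordinates to N×D_L features, and the output of a global attention pass over those features is added element-wise, mirroring the residual combination described above.

```python
import numpy as np

def softmax(x: np.ndarray, axis: int = -1) -> np.ndarray:
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention_mlp(P_aligned: np.ndarray, W: np.ndarray) -> np.ndarray:
    """Per-point MLP output plus the global attention output of that output."""
    F = np.maximum(P_aligned @ W, 0.0)          # shared per-point MLP: N x D_L
    A = softmax(F @ F.T / np.sqrt(F.shape[1]))  # N x N global attention weights
    return F + A @ F                            # element-wise add: N x D_L

rng = np.random.default_rng(3)
P_aligned = rng.standard_normal((256, 3))   # N = 256 aligned points
W = rng.standard_normal((3, 64))            # stand-in weights, D_L = 64
F_low = attention_mlp(P_aligned, W)
print(F_low.shape)  # (256, 64)
```

The additive (residual-style) combination means the attention branch refines rather than replaces the per-point features, which is consistent with the "superimposed" wording above.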
Illustratively, the function of the feature transformation network based on the cascaded global attention mechanism is to align the extracted low-level features. As shown in fig. 5, the network consists of three multi-layer perceptrons, a cascaded global attention mechanism and a max pooling layer. During the alignment of the low-level features, the input N×D_L-dimensional low-level feature matrix first passes through multi-layer perceptron 1 to obtain an N×D_1-dimensional feature matrix, where D_1 is the dimension of the point features output by multi-layer perceptron 1. Then, through the cascaded global attention mechanism module, an N×m_1·D_1-dimensional feature matrix is obtained, where m_1 denotes the number of cascaded global attention mechanism modules in the feature transformation network. Next, an N×D_2-dimensional feature matrix is obtained through multi-layer perceptron 2, where D_2 is the dimension of the point features output by multi-layer perceptron 2. This then passes through the max pooling layer to obtain a 1×D_2-dimensional vector. Finally, a D_L×D_L-dimensional feature transformation matrix is obtained through multi-layer perceptron 3. Multiplying the input N×D_L-dimensional low-level feature matrix by this matrix yields the aligned N×D_L-dimensional low-level feature matrix. As shown in fig. 5, compared with the feature transformation network for aligning low-level features in the global-feature-based point cloud object classification method, this network adds a cascaded global attention mechanism in the part that solves for the feature transformation matrix. Because the cascaded global attention mechanism focuses on important features at multiple levels across the global range, this design can align the object point cloud features at all scales more comprehensively and accurately, helping to improve the overall classification accuracy.
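The shape bookkeeping of this alignment stage can be sketched compactly. The three learned perceptrons and the cascaded attention stage are collapsed into one stand-in linear map W here, and biasing the predicted matrix toward the identity is an assumption borrowed from common spatial-transformer practice, not a detail from the patent; the sketch only demonstrates the structure of pooling a summary, predicting a D_L×D_L matrix, and multiplying.

```python
import numpy as np

def feature_transform(F_low: np.ndarray, W: np.ndarray) -> np.ndarray:
    """Predict a D_L x D_L transformation from pooled features, then apply it."""
    D = F_low.shape[1]
    g = F_low.max(axis=0)                  # 1 x D_L pooled summary (max pooling)
    T = (g @ W).reshape(D, D) + np.eye(D)  # D_L x D_L transform, biased to identity
    return F_low @ T                       # aligned N x D_L low-level features

rng = np.random.default_rng(4)
F_low = rng.standard_normal((256, 32))           # N = 256, D_L = 32
W = 0.01 * rng.standard_normal((32, 32 * 32))    # stand-in transform branch
F_aligned = feature_transform(F_low, W)
print(F_aligned.shape)  # (256, 32)
```

Note that the transform is predicted from an order-invariant pooled summary, so the same alignment is applied regardless of how the points are ordered.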
Illustratively, the function of the multi-layer perceptron network based on the cascaded global attention mechanism is to extract high-level features from the aligned low-level features. As shown in fig. 6, this network is designed as follows: it consists of two multi-layer perceptrons and one cascaded global attention mechanism. In the process of extracting high-level features, the aligned low-level feature matrix first passes through multi-layer perceptron 1 to obtain an N×D_3-dimensional feature matrix, where D_3 is the dimension of the point features output by multi-layer perceptron 1. Then an N×m_2·D_3-dimensional feature matrix is obtained through the cascaded global attention mechanism, where m_2 denotes the number of cascaded global attention mechanism modules in the multi-layer perceptron network. Finally, the N×D_H-dimensional high-level feature matrix is obtained through multi-layer perceptron 2, where D_H is the dimension of the high-level features of points in the point cloud. As shown in fig. 6, compared with the multi-layer perceptron network for extracting high-level features in the global-feature-based point cloud object classification method, this network adds a cascaded global attention mechanism. Benefiting from the cascaded global attention mechanism's multi-scale focus on important features across the global range and its multi-level improvement of stability, this network design comprehensively enhances the discriminative power of the high-level features and helps improve the overall classification accuracy and stability.
In an embodiment, the determining the global feature of the target object based on the high-level feature matrix to determine the classification result includes:
performing maximum pooling processing on the high-level feature matrix to obtain global features;
and performing full-connection network processing on the global features to classify the target object.
Illustratively, after the N×D_H high-level feature matrix is obtained, a max-pooling layer yields the 1×D_H global feature, and finally a fully connected network produces the C-dimensional output classification vector c used to classify the target object.
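This classification head can be sketched in a few lines of NumPy. The random weight matrix stands in for the trained fully connected network, and the class count C=40 is an arbitrary assumption; only the max-pool-then-project structure mirrors the text.

```python
import numpy as np

rng = np.random.default_rng(2)

def classify(high, num_classes=40):
    """Max-pool the N x D_H matrix into a global feature, then apply a linear head."""
    g = high.max(axis=0)                                  # 1 x D_H global feature
    w = rng.standard_normal((g.shape[0], num_classes)) * 0.1  # stand-in FC weights
    logits = g @ w                                        # C-dimensional vector
    e = np.exp(logits - logits.max())
    return e / e.sum()                                    # class probabilities

c = classify(rng.standard_normal((1024, 256)))            # N=1024 points, D_H=256
print(c.shape, round(float(c.sum()), 6))                  # (40,) 1.0
```

Because the max is taken over the point axis, the global feature, and hence the prediction, is unchanged if the points of the input cloud are reordered, which is why this pooling step is suitable for unordered point cloud data.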
Illustratively, as shown in fig. 2, the overall network framework is designed as follows. The framework consists, in sequence, of a spatial transformation network, a multilayer perceptron network based on the global attention mechanism, a feature transformation network based on the cascaded global attention mechanism, a multilayer perceptron network based on the cascaded global attention mechanism, a max-pooling layer and a fully connected network. In the point cloud object classification process, the input point cloud P is first aligned by the spatial transformation network to obtain the aligned point cloud, i.e. an aligned N×3 coordinate matrix; the multilayer perceptron network based on the global attention mechanism then produces an N×D_L low-level feature matrix; the feature transformation network based on the cascaded global attention mechanism produces the N×D_L aligned low-level feature matrix; the multilayer perceptron network based on the cascaded global attention mechanism produces an N×D_H high-level feature matrix; the max-pooling layer yields the 1×D_H global feature; and finally the fully connected network produces the C-dimensional output classification vector c.
According to the method, three key networks of the global-feature-based point cloud object classification method are redesigned: the multilayer perceptron network that extracts low-level features, the feature transformation network that aligns low-level features, and the multilayer perceptron network that extracts high-level features. In the new networks, different global attention mechanisms are organically fused with the original network architecture, so that each point in the point cloud can fully exploit the features of all points at every key stage of feature extraction, which greatly improves the classification precision for point cloud objects. Moreover, no stage of feature extraction involves partitioning the point cloud into local regions or computing point neighborhoods, so, unlike existing methods based on local features or neighborhood features, the method is unaffected by local loss or distribution changes in the point cloud and maintains stable classification precision even in the extreme case where the number of points drops sharply. In the three global attention mechanism modules used in the method, the feature computation of each point involves all points in the point cloud, so the computational cost is proportional to the square of the number of points in the input point cloud. Because the number of points in the input point cloud can be effectively controlled by down-sampling during preprocessing, the additional computational cost of these three modules is low compared with the global-feature-based point cloud object classification method.

The method therefore retains the advantages of the global-feature-based point cloud object classification method: a moderate number of parameters and high computational efficiency. Meanwhile, the ability of the three global attention mechanism modules to improve feature discrimination brings a substantial improvement in overall classification precision.
As an example, the specific implementation steps of the method may be:

(1) Perform spatial transformation on the down-sampled input point cloud coordinate matrix P to obtain an aligned N×3 point cloud coordinate matrix;

(2) Obtain an N×D_L low-level feature matrix through the multilayer perceptron network based on the global attention mechanism;

(2.1) Input the aligned N×3 point cloud coordinate matrix obtained in step (1) into the multilayer perceptron of this network to obtain an N×D_L feature matrix of the object point cloud;

(2.2) Input the above N×D_L feature matrix into the global attention mechanism of this network to obtain an N×D_L feature matrix;

(2.3) Add the N×D_L feature matrix obtained in step (2.1) and the N×D_L feature matrix obtained in step (2.2) to obtain the N×D_L low-level feature matrix.
(3) Obtain the aligned N×D_L low-level feature matrix through the feature transformation network based on the cascaded global attention mechanism;

(3.1) Input the N×D_L low-level feature matrix obtained in step (2) into the first multilayer perceptron of the feature transformation network to obtain an N×D_1 feature matrix;

(3.2) Input the above N×D_1 feature matrix into the cascaded global attention mechanism of the feature transformation network to obtain an N×(m_1·D_1) feature matrix;

(3.2.1) Input the N×D_1 feature matrix obtained in step (3.1) into the first global attention mechanism module of the cascade to obtain the N×D_1 feature matrix output by the first module;

(3.2.2) Input the N×D_1 feature matrix output by the first module into the second global attention mechanism module to obtain the N×D_1 feature matrix output by the second module, and so on, until the N×D_1 feature matrix output by the m_1-th global attention mechanism module is obtained;

(3.2.3) Concatenate, end to end, the m_1 N×D_1 feature matrices obtained in steps (3.2.1) and (3.2.2) to obtain the N×(m_1·D_1) feature matrix output by the cascaded global attention mechanism module in the feature transformation network.

(3.3) Input the above N×(m_1·D_1) feature matrix into the second multilayer perceptron of the feature transformation network to obtain an N×D_2 feature matrix;

(3.4) Input the above N×D_2 feature matrix into the max-pooling layer of the feature transformation network to obtain a 1×D_2 vector;

(3.5) Input the above 1×D_2 vector into the third multilayer perceptron of the feature transformation network to obtain a D_L×D_L feature transformation matrix;

(3.6) Multiply the N×D_L low-level feature matrix obtained in step (2) by the above D_L×D_L feature transformation matrix to obtain the aligned N×D_L low-level feature matrix.
(4) Obtain an N×D_H high-level feature matrix through the multilayer perceptron network based on the cascaded global attention mechanism;

(4.1) Input the aligned N×D_L low-level feature matrix obtained in step (3) into the first multilayer perceptron of the multilayer perceptron network based on the cascaded global attention mechanism to obtain an N×D_3 feature matrix;

(4.2) Input the above N×D_3 feature matrix into the cascaded global attention mechanism of this network to obtain an N×(m_2·D_3) feature matrix;

(4.3) Input the above N×(m_2·D_3) feature matrix into the second multilayer perceptron of this network to obtain the N×D_H high-level feature matrix;

(5) Apply max pooling to the above N×D_H high-level feature matrix to obtain the 1×D_H global feature, and perform object classification to obtain a C-dimensional output classification vector c.
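As a sanity check on the shapes in steps (1)–(5), the whole pipeline can be sketched end to end in NumPy. Everything here is a placeholder, not the patented implementation: random linear maps stand in for the trained multilayer perceptrons, a softmax over pairwise dot products stands in for each global attention module, and N, D_L, D_1, D_2, D_3, D_H, m_1, m_2, C are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(3)

def mlp(x, d):                                   # stand-in shared per-point MLP
    return np.maximum(x @ (rng.standard_normal((x.shape[-1], d)) * 0.1), 0.0)

def attn(f):                                     # stand-in global attention module
    s = f @ f.T / np.sqrt(f.shape[1])
    w = np.exp(s - s.max(axis=1, keepdims=True))
    return (w / w.sum(axis=1, keepdims=True)) @ f

def cascade(f, m):                               # steps (3.2.1)-(3.2.3): chain m modules, concat
    outs, x = [], f
    for _ in range(m):
        x = attn(x)
        outs.append(x)
    return np.concatenate(outs, axis=1)

N, D_L, D1, D2, D3, D_H, m1, m2, C = 256, 32, 64, 128, 64, 256, 3, 4, 40
P = rng.standard_normal((N, 3))                  # step (1): assume already spatially aligned

# step (2): residual global attention for low-level features
f = mlp(P, D_L)                                  # (2.1)
low = f + attn(f)                                # (2.2)+(2.3): add MLP and attention outputs

# step (3): feature transformation network
t_in = cascade(mlp(low, D1), m1)                 # (3.1)-(3.2): N x (m1*D1)
g = mlp(t_in, D2).max(axis=0)                    # (3.3)-(3.4): 1 x D2 pooled vector
T = mlp(g[None, :], D_L * D_L).reshape(D_L, D_L) # (3.5): D_L x D_L transform
aligned_low = low @ T                            # (3.6): aligned low-level features

# step (4): high-level features via cascaded attention
high = mlp(cascade(mlp(aligned_low, D3), m2), D_H)

# step (5): global feature and classification logits
logits = high.max(axis=0) @ (rng.standard_normal((D_H, C)) * 0.1)
print(low.shape, aligned_low.shape, high.shape, logits.shape)
```

Running the sketch confirms that each stage preserves the point count N until the final max pooling, which is the property the text relies on when arguing that no local partitioning is ever needed.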
Further, as an implementation of the method shown in fig. 1, an embodiment of the present invention further provides an object classification apparatus based on point cloud. The embodiment of the apparatus corresponds to the embodiment of the method, and for convenience of reading, details in the embodiment of the apparatus are not described again one by one, but it should be clear that the apparatus in the embodiment can correspondingly implement all the contents in the embodiment of the method. As shown in fig. 7, the apparatus includes: a first determining unit 21, a second determining unit 22 and a third determining unit 23, wherein
A first determination unit 21 for determining alignment point cloud coordinate data of the target object;
a second determining unit 22, configured to determine a high-level feature matrix based on the aligned point cloud coordinate data and the global attention mechanism;
a third determining unit 23, configured to determine a global feature of the target object based on the high-level feature matrix to determine a classification result.
Illustratively, the first determining unit 21 is further configured to:
determining point cloud coordinate data based on the target object;
and carrying out space transformation on the point cloud coordinate data based on a space transformation network to determine the aligned point cloud coordinate data.
Illustratively, the determining the high-level feature matrix based on the aligned point cloud coordinate data and the global attention mechanism includes:
determining a target global feature extraction model based on a global attention mechanism based on a global feature extraction framework and the global attention mechanism;
and determining a high-level feature matrix based on the alignment point cloud coordinate data and the target global feature extraction model based on the global attention mechanism.
Illustratively, the target global feature extraction model based on the global attention mechanism includes: the system comprises a multilayer perceptron network based on a global attention mechanism, a feature transformation network based on a cascading global attention mechanism and a multilayer perceptron network based on the cascading global attention mechanism.
Illustratively,
the cascaded global attention mechanism is formed by cascading a plurality of global attention mechanisms,
the multi-layer perceptron network is used for extracting the characteristics of the point cloud data,
the feature transformation network is used for aligning the features of the point cloud data.
Illustratively, the determining the high-level feature matrix based on the aligned point cloud coordinate data and the global attention mechanism includes:
carrying out multi-layer perceptron network processing based on a global attention mechanism on the aligned point cloud coordinate data to obtain a low-layer characteristic matrix;
performing feature transformation network processing based on a cascade global attention mechanism on the low-level feature matrix to obtain an alignment low-level feature matrix;
and carrying out multilayer perceptron network processing based on a cascade global attention mechanism on the alignment low-layer feature matrix to obtain a high-layer feature matrix.
For example, the determining the global feature of the target object based on the high-level feature matrix to determine the classification result includes:
performing maximum pooling processing on the high-level feature matrix to obtain global features;
and carrying out full-connection network processing on the global features so as to classify the target object.
By means of the above technical solution, the point cloud-based object classification apparatus provided by the invention solves the problem that classification precision and stability are difficult to balance when classifying objects based on point clouds. The apparatus determines the aligned point cloud coordinate data of a target object; determines a high-level feature matrix based on the aligned point cloud coordinate data and a global attention mechanism; and determines the global feature of the target object based on the high-level feature matrix to determine a classification result. In this solution, three key networks of the global-feature-based point cloud classification method are redesigned, and in the new networks different global attention mechanisms are organically fused with the original network, so that each point in the point cloud can fully exploit the features of all points at every key stage of feature extraction, improving classification precision. Since no stage of feature extraction involves partitioning the point cloud into local regions or computing point neighborhoods, classification stability is also ensured, solving the technical problem in the prior art that classification precision and stability are difficult to balance.
The processor comprises a kernel, and the kernel calls the corresponding program unit from the memory. One or more kernels may be provided. The above point cloud-based object classification method is implemented by adjusting kernel parameters, which can solve the problem that classification precision and stability are difficult to balance when classifying objects based on point clouds.
An embodiment of the present invention provides a computer-readable storage medium, which includes a stored program, and when the program is executed by a processor, the program implements the above object classification method based on point cloud.
The embodiment of the invention provides a processor, which is used for running a program, wherein the object classification method based on point cloud is executed when the program runs.
The embodiment of the invention provides electronic equipment, which comprises at least one processor and at least one memory connected with the processor; the processor is used for calling the program instructions in the memory and executing the object classification method based on the point cloud.
An embodiment of the present invention provides an electronic device 30, as shown in fig. 8, the electronic device includes at least one processor 301, at least one memory 302 connected to the processor, and a bus 303; wherein, the processor 301 and the memory 302 complete the communication with each other through the bus 303; the processor 301 is configured to call program instructions in the memory to perform the above-described point cloud-based object classification method.
The intelligent electronic device herein may be a PC, PAD, mobile phone, etc.
The present application further provides a computer program product adapted to perform a program for initializing the following method steps when executed on a flow management electronic device:
determining the coordinate data of the aligned point cloud of the target object;
determining a high-level feature matrix based on the aligned point cloud coordinate data and a global attention mechanism;
and determining the global characteristics of the target object based on the high-level characteristic matrix to determine a classification result.
Further, the method further comprises:
determining point cloud coordinate data based on the target object;
and carrying out space transformation on the point cloud coordinate data based on a space transformation network to determine the aligned point cloud coordinate data.
Further, the determining a high-level feature matrix based on the aligned point cloud coordinate data and the global attention mechanism includes:
determining a target global feature extraction model based on a global attention mechanism based on a global feature extraction framework and the global attention mechanism;
and determining a high-level feature matrix based on the alignment point cloud coordinate data and the target global feature extraction model based on the global attention mechanism.
Further, the target global feature extraction model based on the global attention mechanism includes: the system comprises a multilayer perceptron network based on a global attention mechanism, a feature transformation network based on a cascading global attention mechanism and a multilayer perceptron network based on the cascading global attention mechanism.
Further,
the cascaded global attention mechanism is formed by cascading a plurality of global attention mechanisms,
the multi-layer perceptron network is used for extracting the characteristics of the point cloud data,
the feature transformation network is used for aligning the features of the point cloud data.
Further, the determining a high level feature matrix based on the aligned point cloud coordinate data and the global attention mechanism includes:
carrying out multi-layer perceptron network processing based on a global attention mechanism on the aligned point cloud coordinate data to obtain a low-layer characteristic matrix;
performing feature transformation network processing based on a cascade global attention mechanism on the low-level feature matrix to obtain an aligned low-level feature matrix;
and carrying out multilayer perceptron network processing based on a cascade global attention mechanism on the alignment low-layer feature matrix to obtain a high-layer feature matrix.
Further, the determining the global feature of the target object based on the high-level feature matrix to determine a classification result includes:
performing maximum pooling processing on the high-level feature matrix to obtain global features;
and performing full-connection network processing on the global features to classify the target object.
It should be noted that, in the foregoing embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to relevant descriptions of other embodiments for parts that are not described in detail in a certain embodiment.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
Embodiments of the present application further provide a computer program product, which includes computer software instructions, when the computer software instructions are executed on a processing device, the processing device is caused to execute the flow of controlling the memory as in the corresponding embodiment of fig. 1.
The computer program product includes one or more computer instructions. The procedures or functions according to the embodiments of the present application are all or partially generated when the computer program instructions are loaded and executed on a computer. The computer may be a general purpose computer, a special purpose computer, a network of computers, or other programmable device. The computer instructions may be stored on a computer readable storage medium or transmitted from one computer readable storage medium to another computer readable storage medium, for example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center via wire (e.g., coaxial cable, fiber optic, digital Subscriber Line (DSL)) or wireless (e.g., infrared, wireless, microwave, etc.). A computer-readable storage medium may be any available medium that a computer can store or a data storage device, such as a server, a data center, etc., that is integrated with one or more available media. The usable medium may be a magnetic medium (e.g., a floppy disk, a hard disk, a magnetic tape), an optical medium (e.g., a DVD), or a semiconductor medium (e.g., a Solid State Disk (SSD)), among others.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other manners. For example, the above-described apparatus embodiments are merely illustrative, and for example, a division of a unit is only a logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
Units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solutions of the present application, which are essential or part of the technical solutions contributing to the prior art, or all or part of the technical solutions, may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute all or part of the steps of the methods of the embodiments of the present application. And the aforementioned storage medium includes: various media capable of storing program codes, such as a usb disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
The above embodiments are only used to illustrate the technical solutions of the present application, and not to limit the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions in the embodiments of the present application.

Claims (10)

1. An object classification method based on point cloud, characterized by comprising:
determining the coordinate data of the aligned point cloud of the target object;
determining a high-level feature matrix based on the aligned point cloud coordinate data and a global attention mechanism;
determining global features of the target object based on the high-level feature matrix to determine a classification result.
2. The method of claim 1, further comprising:
determining point cloud coordinate data based on the target object;
spatially transforming the point cloud coordinate data based on a spatial transformation network to determine the aligned point cloud coordinate data.
3. The method of claim 1, wherein determining a high-level feature matrix based on the aligned point cloud coordinate data and a global attention mechanism comprises:
determining a target global feature extraction model based on a global attention mechanism based on a global feature extraction architecture and the global attention mechanism;
determining a high-level feature matrix based on the aligned point cloud coordinate data and the global attention mechanism-based target global feature extraction model.
4. The method of claim 3, wherein the global attention mechanism based target global feature extraction model comprises: the system comprises a multilayer perceptron network based on a global attention mechanism, a feature transformation network based on a cascading global attention mechanism and a multilayer perceptron network based on the cascading global attention mechanism.
5. The method of claim 4,
the cascade global attention mechanism is formed by cascading a plurality of global attention mechanisms,
the multilayer perceptron network is used for extracting the characteristics of the point cloud data,
the feature transformation network is used for aligning the features of the point cloud data.
6. The method of claim 4, wherein determining a high level feature matrix based on the aligned point cloud coordinate data and a global attention mechanism comprises:
carrying out multi-layer perceptron network processing based on a global attention mechanism on the aligned point cloud coordinate data to obtain a low-layer feature matrix;
processing the low-level feature matrix through a feature transformation network based on a cascading global attention mechanism to obtain an alignment low-level feature matrix;
and carrying out multilayer perceptron network processing based on a cascade global attention mechanism on the aligned low-layer feature matrix to obtain a high-layer feature matrix.
7. The method of claim 1, wherein the determining global features of the target object based on the high-level feature matrix to determine a classification result comprises:
performing maximum pooling processing on the high-level feature matrix to obtain global features;
and performing full-connection network processing on the global features to classify the target object.
8. An object classification apparatus based on point cloud, characterized by comprising:
the first determining unit is used for determining the aligned point cloud coordinate data of the target object;
a second determining unit, configured to determine a high-level feature matrix based on the aligned point cloud coordinate data and a global attention mechanism;
a third determining unit, configured to determine a global feature of the target object based on the high-level feature matrix to determine a classification result.
9. A computer-readable storage medium, characterized in that the computer-readable storage medium comprises a stored program, wherein the steps of the point cloud based object classification method according to any one of claims 1 to 7 are implemented when the program is executed by a processor.
10. An electronic device, comprising at least one processor, and at least one memory coupled to the processor; wherein the processor is configured to invoke program instructions in the memory to perform the steps of the point cloud based object classification method of any one of claims 1 to 7.
CN202211076689.7A 2022-09-05 2022-09-05 Object classification method based on point cloud and related equipment Active CN115456064B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211076689.7A CN115456064B (en) 2022-09-05 2022-09-05 Object classification method based on point cloud and related equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211076689.7A CN115456064B (en) 2022-09-05 2022-09-05 Object classification method based on point cloud and related equipment

Publications (2)

Publication Number Publication Date
CN115456064A true CN115456064A (en) 2022-12-09
CN115456064B CN115456064B (en) 2024-02-02

Family

ID=84300193

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211076689.7A Active CN115456064B (en) 2022-09-05 2022-09-05 Object classification method based on point cloud and related equipment

Country Status (1)

Country Link
CN (1) CN115456064B (en)

Citations (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110197215A (en) * 2019-05-22 2019-09-03 深圳市牧月科技有限公司 A kind of ground perception point cloud semantic segmentation method of autonomous driving
CN111046781A (en) * 2019-12-09 2020-04-21 华中科技大学 Robust three-dimensional target detection method based on ternary attention mechanism
CN111242208A (en) * 2020-01-08 2020-06-05 深圳大学 Point cloud classification method, point cloud segmentation method and related equipment
CN111931790A (en) * 2020-08-10 2020-11-13 武汉慧通智云信息技术有限公司 Laser point cloud extraction method and device
CN112257637A (en) * 2020-10-30 2021-01-22 福州大学 Vehicle-mounted laser point cloud multi-target identification method integrating point cloud and multiple views
CN112818999A (en) * 2021-02-10 2021-05-18 桂林电子科技大学 Complex scene 3D point cloud semantic segmentation method based on convolutional neural network
KR20210106703A (en) * 2020-02-21 2021-08-31 전남대학교산학협력단 Semantic segmentation system in 3D point cloud and semantic segmentation method in 3D point cloud using the same
CN113569979A (en) * 2021-08-06 2021-10-29 中国科学院宁波材料技术与工程研究所 Three-dimensional object point cloud classification method based on attention mechanism
US11222217B1 (en) * 2020-08-14 2022-01-11 Tsinghua University Detection method using fusion network based on attention mechanism, and terminal device
CN113988164A (en) * 2021-10-21 2022-01-28 电子科技大学 Representative point self-attention mechanism-oriented lightweight point cloud target detection method
WO2022032823A1 (en) * 2020-08-10 2022-02-17 中国科学院深圳先进技术研究院 Image segmentation method, apparatus and device, and storage medium
CN114170465A (en) * 2021-12-08 2022-03-11 中国联合网络通信集团有限公司 Attention mechanism-based 3D point cloud classification method, terminal device and storage medium
CN114444613A (en) * 2022-02-11 2022-05-06 吉林大学 Object classification and object segmentation method based on 3D point cloud information
CN114445816A (en) * 2022-01-24 2022-05-06 内蒙古包钢医院 Pollen classification method based on two-dimensional image and three-dimensional point cloud
CN114913330A (en) * 2022-07-18 2022-08-16 中科视语(北京)科技有限公司 Point cloud component segmentation method and device, electronic equipment and storage medium

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
SHUANG DENG, ET AL.: "GA-NET: Global Attention Network for Point Cloud Semantic Segmentation", arXiv:2107.03101v1 [cs.CV], pages 1 - 5 *
YICHAO LIU, ET AL.: "Global Attention Mechanism: Retain Information to Enhance Channel-Spatial Interactions", arXiv:2112.05561v1 [cs.CV], pages 1 - 6 *
PAN HAIPENG ET AL.: "SLAM algorithm based on semantic information and dynamic feature point elimination", Journal of Zhejiang Sci-Tech University (Natural Sciences Edition), vol. 47, no. 5, pages 764 - 773 *

Also Published As

Publication number Publication date
CN115456064B (en) 2024-02-02

Similar Documents

Publication Publication Date Title
CN110379020B (en) Laser point cloud coloring method and device based on generation countermeasure network
CN111414953B (en) Point cloud classification method and device
CN112560980A (en) Training method and device of target detection model and terminal equipment
EP3907671A2 (en) Method and apparatus for incrementally training model
CN114943673A (en) Defect image generation method and device, electronic equipment and storage medium
Dengpan et al. Faster and transferable deep learning steganalysis on GPU
CN112258625A (en) Single image to three-dimensional point cloud model reconstruction method and system based on attention mechanism
CN116630514A (en) Image processing method, device, computer readable storage medium and electronic equipment
CN114330565A (en) Face recognition method and device
Wiesner et al. On generative modeling of cell shape using 3D GANs
JP2023131117A (en) Joint perception model training, joint perception method, device, and medium
CN110163095B (en) Loop detection method, loop detection device and terminal equipment
CN117521768A (en) Training method, device, equipment and storage medium of image search model
CN113704276A (en) Map updating method and device, electronic equipment and computer readable storage medium
CN111898544A (en) Character and image matching method, device and equipment and computer storage medium
CN115631192B (en) Control method, device, equipment and medium for valve pressure tester
CN115456064A (en) Object classification method based on point cloud and related equipment
CN113610856B (en) Method and device for training image segmentation model and image segmentation
CN114494782B (en) Image processing method, model training method, related device and electronic equipment
CN112307243A (en) Method and apparatus for retrieving image
CN113255512B (en) Method, apparatus, device and storage medium for living body identification
CN113065521B (en) Object identification method, device, equipment and medium
CN113096199B (en) Point cloud attribute prediction method, device and medium based on Morton code
CN113780148A (en) Traffic sign image recognition model training method and traffic sign image recognition method
CN113158801A (en) Method for training face recognition model and recognizing face and related device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant