CN115471833A - Dynamic local self-attention convolution network point cloud analysis system and method - Google Patents
Dynamic local self-attention convolution network point cloud analysis system and method
- Publication number
- CN115471833A (application CN202211010334.8A)
- Authority
- CN
- China
- Prior art keywords
- attention
- local self
- dynamic local
- point cloud
- dimensional point
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/60—Type of objects
- G06V20/64—Three-dimensional objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/26—Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/44—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/764—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/82—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Multimedia (AREA)
- General Physics & Mathematics (AREA)
- Physics & Mathematics (AREA)
- Evolutionary Computation (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Health & Medical Sciences (AREA)
- General Health & Medical Sciences (AREA)
- Medical Informatics (AREA)
- Software Systems (AREA)
- Databases & Information Systems (AREA)
- Computing Systems (AREA)
- Artificial Intelligence (AREA)
- Image Analysis (AREA)
Abstract
The invention provides a dynamic local self-attention convolution network point cloud analysis system and method. Multiple groups of original three-dimensional point cloud data are introduced, each group is preprocessed to obtain preprocessed three-dimensional point cloud data, and real label categories are marked manually. A dynamic local self-attention convolution network is constructed; the preprocessed three-dimensional point cloud data are input into the network to obtain predicted label categories, and the network is optimized by combining a loss function with the SGD algorithm. An upper computer collects indoor three-dimensional point cloud data in real time through a laser radar, preprocesses the data, and then predicts the label categories of the point cloud data through the optimized dynamic local self-attention convolution network. The method overcomes uncertainty problems such as noise and spatial deformation and improves the accuracy of 3D point cloud shape recognition.
Description
Technical Field
The invention belongs to the technical field of machine vision, and particularly relates to a dynamic local self-attention convolution network point cloud analysis system and method.
Background
Point clouds are a common 3D data format in CAX applications, which span a variety of projects and disciplines. Recently, real-world point cloud acquisition devices and software tools have made point cloud collection faster, cheaper, and larger in scale. Thanks to its rich geometric and semantic information and simple data format, the point cloud is becoming a general data representation in engineering fields such as civil engineering, building modeling, and traffic engineering, and is receiving more and more attention. However, because the geometric and semantic information of a point cloud is complex and its data structure is discrete, and because the amount of point cloud data keeps growing while application scenes become more complicated, point clouds are difficult to process and hard to apply in the CAX engineering field.
In early engineering applications, point clouds were often used for reverse engineering. They were typically acquired and initially processed by three-dimensional coordinate measuring equipment, with only a very small number of points per point cloud target. To serve different application fields, people usually had to construct different algorithms for different point cloud data in order to process the point cloud better. Gradually, these earlier methods became unable to handle point cloud data of ever-increasing scale, so deep-learning-based methods were considered and various deep-learning-based solutions were devised. In short, deep-learning-based methods can be classified into two categories: regularized-data methods and regularized-computation methods. As the name implies, a regularized-data method converts irregular and disordered point cloud data into regularized data, such as a two-dimensional image or a three-dimensional mesh, and then performs information extraction with a deep learning method. A regularized-computation method, in contrast, directly processes disordered discrete point cloud data, designing and constructing regularized computation operators in the process so that point cloud feature information can be extracted directly with a deep learning method. Clearly, the regularized-computation route greatly facilitates the processing of point cloud data, reduces the computational burden, and at the same time reduces the loss caused by data conversion. In this work, we focus on applying deep learning techniques to process 3D point cloud data directly and on designing a regularized computation method that handles unordered point clouds end to end.
At the same time, the Transformer approach, which is centered on the self-attention mechanism, has successfully migrated from natural language processing (NLP) tasks to computer vision (CV) tasks, with excellent results on many two-dimensional image datasets. In three-dimensional visual tasks, some methods based on the self-attention mechanism design corresponding frameworks to complete challenging three-dimensional tasks such as three-dimensional shape classification and three-dimensional segmentation; the greatest strength of these methods is their ability to perceive more global information. However, in complex CAX tasks rich local geometric information is crucial, especially for the particular data representation of three-dimensional point clouds. Although the above methods for processing 3D point cloud data have made some progress, an end-to-end processing method that is aware of both global and local information is still lacking. Therefore, it is of great significance to research a 3D point cloud analysis method based on dynamic local self-attention convolution and to deploy it on a hardware computing platform.
Disclosure of Invention
The invention aims to solve the problems of insufficient mining of point cloud geometric semantic information and insufficient perception of local key semantic information in 3D point cloud data processing technology. To this end, the invention provides a 3D point cloud analysis method based on dynamic local self-attention convolution together with its device deployment, and addresses the tasks of 3D point cloud shape classification, 3D point cloud component segmentation, 3D point cloud target discrimination in real complex indoor scenes, and hardware device deployment.
In order to achieve the purpose, the invention provides a dynamic local self-attention convolution network point cloud analysis system and a dynamic local self-attention convolution network point cloud analysis method.
The technical solution for the system is a dynamic local self-attention convolution network point cloud analysis system, which comprises:
a three-dimensional laser radar and an upper computer;
the three-dimensional laser radar is connected with the upper computer;
the three-dimensional laser radar is used for acquiring indoor three-dimensional point cloud data in real time and transmitting the indoor three-dimensional point cloud data acquired in real time to the upper computer;
and the upper computer processes the indoor three-dimensional point cloud data acquired in real time by a 3D point cloud analysis method based on dynamic local self-attention convolution to obtain the prediction label category of the indoor three-dimensional point cloud data acquired in real time.
The technical scheme of the method is a dynamic local self-attention convolution network point cloud analysis method, which comprises the following specific steps:
step 1: introducing multiple groups of original three-dimensional point cloud data, carrying out data preprocessing on each group of original three-dimensional point cloud data to obtain each group of preprocessed three-dimensional point cloud data, and manually marking the real label category of each group of preprocessed three-dimensional point cloud data;
step 2: constructing a dynamic local self-attention convolution network, inputting each group of preprocessed three-dimensional point cloud data into the dynamic local self-attention convolution network for prediction to obtain a prediction label category of each group of preprocessed three-dimensional point cloud data, constructing a loss function model by combining the real label categories of each group of preprocessed three-dimensional point cloud data, and obtaining an optimized dynamic local self-attention convolution network through SGD algorithm optimization training;
step 3: the upper computer collects indoor three-dimensional point cloud data in real time through a three-dimensional laser radar, the indoor three-dimensional point cloud data collected in real time are preprocessed through the data preprocessing of step 1 to obtain real-time preprocessed indoor three-dimensional point cloud data, and the real-time preprocessed indoor three-dimensional point cloud data are predicted through the optimized dynamic local self-attention convolution network to obtain the predicted label categories of the real-time preprocessed indoor three-dimensional point cloud data;
Preferably, the dynamic local self-attention convolution network in step 2 includes: a first dynamic local self-attention learning module, a second dynamic local self-attention learning module, a third dynamic local self-attention learning module, a fourth dynamic local self-attention learning module, an aggregation module, a pooling module and a Softmax classifier;
the first dynamic local self-attention learning module, the second dynamic local self-attention learning module, the third dynamic local self-attention learning module and the fourth dynamic local self-attention learning module in the step 2 are sequentially connected in a cascade way;
the first dynamic local self-attention learning module, the second dynamic local self-attention learning module, the third dynamic local self-attention learning module and the fourth dynamic local self-attention learning module are respectively connected with the aggregation module;
the aggregation module is connected with the pooling module;
the pooling module is connected with the Softmax classifier;
the first dynamic local self-attention learning module takes each group of preprocessed three-dimensional point cloud data as input features of the first dynamic local self-attention learning module, and dynamically and locally self-attention learns all three-dimensional points in the input features of the first dynamic local self-attention learning module to obtain output features of the first dynamic local self-attention learning module;
the second dynamic local self-attention learning module takes the output characteristic of the first dynamic local self-attention learning module as the input characteristic of the second dynamic local self-attention learning module, and dynamically and locally self-attention learns all three-dimensional points in the input characteristic of the second dynamic local self-attention learning module to obtain the output characteristic of the second dynamic local self-attention learning module;
the third dynamic local self-attention learning module takes the output characteristic of the second dynamic local self-attention learning module as the input characteristic of the third dynamic local self-attention learning module, and obtains the output characteristic of the third dynamic local self-attention learning module by performing dynamic local self-attention learning on all points of the point cloud;
the fourth dynamic local self-attention learning module takes the output characteristic of the third dynamic local self-attention learning module as the input characteristic of the fourth dynamic local self-attention learning module, and obtains the output characteristic of the fourth dynamic local self-attention learning module by performing dynamic local self-attention learning on all points of the point cloud;
the specific calculation process of the dynamic local self-attention learning is as follows:
in the T-th dynamic local self-attention learning module, the input features of the T-th dynamic local self-attention learning module are used to obtain a local neighborhood of each three-dimensional point in the input features of the T-th dynamic local self-attention learning module by using a K-nearest neighbor algorithm, which is specifically defined as follows:
KNN(x_{T,i}) = { x_{T,i,1}, x_{T,i,2}, ..., x_{T,i,N} },  T ∈ [1,4],  i ∈ [1,M]

where { x_{T,i,1}, ..., x_{T,i,N} } denotes the local neighborhood of the i-th three-dimensional point in the input features of the T-th dynamic local self-attention learning module, x_{T,i,j} denotes the j-th local neighborhood point in that local neighborhood, M denotes the number of three-dimensional points in the input features of the T-th dynamic local self-attention learning module, N denotes the number of local neighborhood points in the local neighborhood of the i-th three-dimensional point, and j ∈ [1,N];
According to the local neighborhood { x_{T,i,1}, ..., x_{T,i,N} }, a directed graph of the local neighborhood of the i-th three-dimensional point in the input features of the T-th dynamic local self-attention learning module is constructed, specifically defined as follows:

G_{T,i} = (V_{T,i}, E_{T,i}),  T ∈ [1,4],  i ∈ [1,M]

where G_{T,i} denotes the directed graph of the local neighborhood of the i-th three-dimensional point in the input features of the T-th dynamic local self-attention learning module, V_{T,i} denotes the set of vertices in that directed graph, namely the N neighborhood points of the local neighborhood of the i-th three-dimensional point, E_{T,i} denotes the set of edges in that directed graph, namely the edges between each neighborhood point of the local neighborhood and the center point x_{T,i}, and M denotes the number of three-dimensional points in the input features of the T-th dynamic local self-attention learning module;
On G_{T,i}, the self-attention feature of the local neighborhood of the i-th three-dimensional point in the input features of the T-th dynamic local self-attention learning module is calculated as follows:

Q_{T,i} = F_{T,i} W_Q,  K_{T,i} = F_{T,i} W_K,  V_{T,i} = F_{T,i} W_V

F^{sa}_{T,i} = softmax( Q_{T,i} K_{T,i}^T ) V_{T,i}

where F^{sa}_{T,i} denotes the self-attention feature of the local neighborhood of the i-th three-dimensional point in the input features of the T-th dynamic local self-attention learning module, Q_{T,i} denotes the self-attention information on the query dimension of the i-th three-dimensional point, K_{T,i} denotes the attention information on the key dimension of the i-th three-dimensional point, and V_{T,i} denotes the attention information on the value dimension of the i-th three-dimensional point; F_{T,i} denotes the input feature of the i-th three-dimensional point within its corresponding local neighborhood; W_Q, W_K and W_V are, respectively, the learnable matrix on the query dimension, the learnable matrix on the key dimension and the learnable matrix on the value dimension of the i-th three-dimensional point in the input features of the T-th dynamic local self-attention learning module;
For F^{sa}_{T,i}, self-attention semantic information learning is carried out by local point cloud semantic learning, specifically:

e_{T,i,j} = ReLU( θ_L · (x_{T,i,j} − x_{T,i}) ),  j ∈ [1,N],  i ∈ [1,M]

where e_{T,i,j} denotes the local semantic learning information of the j-th local neighborhood point of the i-th three-dimensional point in the input features of the T-th dynamic local self-attention learning module, θ_L is a set of M parameters used for learning the local self-attention semantic information of each three-dimensional point in the T-th dynamic local self-attention learning module, ReLU denotes the activation function, x_{T,i,j} denotes the j-th local neighborhood point in the local neighborhood of the i-th three-dimensional point, x_{T,i} denotes the i-th three-dimensional point in the input features of the T-th dynamic local self-attention learning module, M denotes the number of three-dimensional points in the input features of the T-th dynamic local self-attention learning module, and N denotes the number of local neighborhood points in the local neighborhood of the i-th three-dimensional point;

from e_{T,i,1}, ..., e_{T,i,N} the maximum value is selected, and its information is used to update the i-th three-dimensional point in the input features of the T-th dynamic local self-attention learning module:

x'_{T,i} = x_{T,i,jmax},  jmax = argmax_{j ∈ [1,N]} e_{T,i,j}

where x_{T,i,jmax} denotes the local neighborhood point with the maximum local semantic learning information in the local neighborhood of the i-th three-dimensional point in the input features of the T-th dynamic local self-attention learning module;
the aggregation module carries out global feature aggregation on the output feature of a first dynamic local self-attention learning module corresponding to each group of preprocessed three-dimensional point cloud data, the output feature of a second dynamic local self-attention learning module corresponding to each group of preprocessed three-dimensional point cloud data, the output feature of a third dynamic local self-attention learning module corresponding to each group of preprocessed three-dimensional point cloud data, and the output feature of a fourth dynamic local self-attention learning module corresponding to each group of preprocessed three-dimensional point cloud data to obtain a global aggregation feature corresponding to each group of preprocessed three-dimensional point cloud data:
F = cat( F_1, F_2, F_3, F_4 )

where cat denotes global feature aggregation, F_T denotes the output feature of the T-th dynamic local self-attention learning module, and F denotes the global aggregation feature corresponding to each group of preprocessed three-dimensional point cloud data;
the pooling module is used for performing high-dimensional multi-channel information dimensionality reduction on global aggregation features corresponding to each group of preprocessed three-dimensional point cloud data to obtain global feature vectors corresponding to each group of preprocessed three-dimensional point cloud data, wherein the global feature vectors corresponding to each group of preprocessed three-dimensional point cloud data comprise global geometric information of each group of preprocessed three-dimensional point cloud data subjected to local semantic self-attention learning;
the SofMax classifier classifies the global feature vectors corresponding to each group of preprocessed three-dimensional point cloud data through SofMax to obtain predicted label category probability representing each group of preprocessed three-dimensional point cloud data
The real label category of each group of preprocessed three-dimensional point cloud data isThe predicted label category corresponding to the predicted label category probability of each group of preprocessed three-dimensional point cloud data is the predicted label category obtained by the dynamic local self-attention convolution network prediction;
the loss function model in step 2 is specifically defined as follows:
Loss = − Σ y · log( softmax(ŷ) )

where y denotes the real label category of each group of preprocessed three-dimensional point cloud data, ŷ denotes the predicted label category probability of each group of preprocessed three-dimensional point cloud data, Loss denotes the difference between the real sample label and the predicted probability, and softmax denotes Softmax classification;
step 2, obtaining the optimized dynamic local self-attention convolution network through the SGD algorithm optimization training, which is specifically as follows:
iteratively executing the following optimization process by using multiple groups of preprocessed three-dimensional point cloud data:
each group of preprocessed three-dimensional point cloud data is predicted by the dynamic local self-attention convolution network to obtain the predicted label category probability of that group of preprocessed three-dimensional point cloud data, the Loss function model is then calculated, and the network is optimized by SGD algorithm training, so that the dynamic local self-attention convolution network optimized on that group of preprocessed three-dimensional point cloud data is obtained;
the invention constructs a model for point cloud semantic classification and segmentation, the model comprises a dynamic local self-attention learning module, an aggregation module, a pooling module, a classification module and a segmentation module, the geometric semantic information of the point cloud is learned through a designed depth model, the local key information of the 3D point cloud is further deeply understood, the method is integrated in an end-to-end depth network model, the robustness characteristic of 3D point cloud shape identification is learned, the uncertainty problems of noise, space deformation and the like are overcome, and the accuracy of the 3D point cloud shape identification is improved.
Drawings
FIG. 1: schematic structural diagram of the system of the embodiment of the invention;
FIG. 2: flow chart of the method of the embodiment of the invention;
FIG. 3: schematic diagram of the network model of the embodiment of the invention;
FIG. 4: architecture diagram of the dynamic local self-attention semantic learning module of the invention;
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
In specific implementation, a person skilled in the art can implement the automatic operation process by using a computer software technology, and a system device for implementing the method, such as a computer-readable storage medium storing a corresponding computer program according to the technical solution of the present invention and a computer device including a corresponding computer program for operating the computer program, should also be within the scope of the present invention.
Fig. 1 is a schematic structural diagram of a system according to an embodiment of the present invention, and the technical solution of the system according to the embodiment of the present invention is:
a dynamic local self-attention convolution network point cloud analysis system, comprising:
three-dimensional laser radar and an upper computer;
the three-dimensional laser radar is connected with the upper computer;
the selected three-dimensional laser radar is a RIEGL miniVUX-1LR three-dimensional laser radar scanner;
the configuration of the upper computer is as follows:
CPU:Intel i5 10500;
a graphics processor: NVIDIA GeForce RTX 3090;
the three-dimensional laser radar is used for acquiring indoor three-dimensional point cloud data in real time and transmitting the indoor three-dimensional point cloud data acquired in real time to the upper computer;
and the upper computer processes the indoor three-dimensional point cloud data acquired in real time by a dynamic local self-attention convolution network point cloud analysis method to obtain the prediction label category of the indoor three-dimensional point cloud data acquired in real time.
The following describes a point cloud analysis method for a dynamic local self-attention convolution network according to an embodiment of the present invention with reference to fig. 2, which includes the following steps:
step 1: introducing multiple groups of original three-dimensional point cloud data, carrying out data preprocessing on each group of original three-dimensional point cloud data by jittering, rotation and translation to obtain each group of preprocessed three-dimensional point cloud data, and manually marking the real label category of each group of preprocessed three-dimensional point cloud data;
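For illustration only, the following NumPy sketch shows this kind of preprocessing (random jitter, rotation about the vertical axis, and translation); the noise magnitudes, rotation axis and parameter names are assumptions, not values disclosed in this embodiment.

```python
import numpy as np

def preprocess_point_cloud(points, jitter_sigma=0.01, max_shift=0.1):
    """Augment one point cloud of shape (M, 3) by jitter, rotation and translation.

    The parameter values are illustrative assumptions; the embodiment only names
    the three operations, not their magnitudes.
    """
    # Random jitter: small Gaussian noise added to every coordinate.
    jittered = points + np.random.normal(0.0, jitter_sigma, size=points.shape)

    # Random rotation about the z (vertical) axis.
    angle = np.random.uniform(0.0, 2.0 * np.pi)
    c, s = np.cos(angle), np.sin(angle)
    rotation = np.array([[c, -s, 0.0],
                         [s,  c, 0.0],
                         [0.0, 0.0, 1.0]])
    rotated = jittered @ rotation.T

    # Random translation of the whole cloud.
    shift = np.random.uniform(-max_shift, max_shift, size=(1, 3))
    return rotated + shift

# Example: augment a randomly generated cloud of 1024 points.
cloud = np.random.rand(1024, 3).astype(np.float32)
augmented = preprocess_point_cloud(cloud)
```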
step 2: constructing a dynamic local self-attention convolution network, inputting each group of preprocessed three-dimensional point cloud data into the dynamic local self-attention convolution network for prediction to obtain a prediction label category of each group of preprocessed three-dimensional point cloud data, constructing a loss function model by combining the real label categories of each group of preprocessed three-dimensional point cloud data, and obtaining the optimized dynamic local self-attention convolution network through SGD algorithm optimization training;
as shown in fig. 3, the dynamic local self-attention convolution network of step 2 includes: a first dynamic local self-attention learning module, a second dynamic local self-attention learning module, a third dynamic local self-attention learning module, a fourth dynamic local self-attention learning module, an aggregation module, a pooling module and a Softmax classifier;
the first dynamic local self-attention learning module, the second dynamic local self-attention learning module, the third dynamic local self-attention learning module and the fourth dynamic local self-attention learning module in the step 2 are sequentially connected in a cascade way;
the first dynamic local self-attention learning module, the second dynamic local self-attention learning module, the third dynamic local self-attention learning module and the fourth dynamic local self-attention learning module are respectively connected with the aggregation module;
the aggregation module is connected with the pooling module;
the pooling module is connected with the Softmax classifier;
the first dynamic local self-attention learning module takes each group of preprocessed three-dimensional point cloud data as input features of the first dynamic local self-attention learning module, and dynamically and locally self-attentively learns all three-dimensional points in the input features of the first dynamic local self-attention learning module to obtain output features of the first dynamic local self-attention learning module;
the second dynamic local self-attention learning module takes the output characteristic of the first dynamic local self-attention learning module as the input characteristic of the second dynamic local self-attention learning module, and obtains the output characteristic of the second dynamic local self-attention learning module by performing dynamic local self-attention learning on all three-dimensional points in the input characteristic of the second dynamic local self-attention learning module;
the third dynamic local self-attention learning module takes the output characteristic of the second dynamic local self-attention learning module as the input characteristic of the third dynamic local self-attention learning module, and obtains the output characteristic of the third dynamic local self-attention learning module by performing dynamic local self-attention learning on all points of the point cloud;
the fourth dynamic local self-attention learning module takes the output characteristic of the third dynamic local self-attention learning module as the input characteristic of the fourth dynamic local self-attention learning module, and obtains the output characteristic of the fourth dynamic local self-attention learning module by performing dynamic local self-attention learning on all points of the point cloud;
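A minimal PyTorch sketch of this four-module cascade is given below. DynamicLocalSALayer is a simplified placeholder for the dynamic local self-attention learning module detailed afterwards, and the channel sizes are illustrative assumptions rather than values taken from the embodiment.

```python
import torch
import torch.nn as nn

class DynamicLocalSALayer(nn.Module):
    """Placeholder for one dynamic local self-attention learning module."""
    def __init__(self, in_channels, out_channels):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(in_channels, out_channels), nn.ReLU())

    def forward(self, x):                 # x: (B, M, C_in)
        return self.mlp(x)                # (B, M, C_out), stands in for the real module

class CascadedBackbone(nn.Module):
    """Four dynamic local self-attention learning modules connected in cascade."""
    def __init__(self, channels=(3, 64, 64, 128, 256)):
        super().__init__()
        self.layers = nn.ModuleList(
            DynamicLocalSALayer(channels[t], channels[t + 1]) for t in range(4)
        )

    def forward(self, points):            # points: (B, M, 3)
        features, x = [], points
        for layer in self.layers:         # the output of module T feeds module T+1
            x = layer(x)
            features.append(x)            # every module also feeds the aggregation module
        return features                   # list of four per-point feature maps

backbone = CascadedBackbone()
outs = backbone(torch.rand(2, 1024, 3))  # four tensors, one per module
```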
as shown in fig. 4, the specific calculation process of the dynamic local self-attention learning is as follows:
in the T-th dynamic local self-attention learning module, the input features of the T-th dynamic local self-attention learning module are used to obtain a local neighborhood of each three-dimensional point in the input features of the T-th dynamic local self-attention learning module by using a K-nearest neighbor algorithm, which is specifically defined as follows:
KNN(x_{T,i}) = { x_{T,i,1}, x_{T,i,2}, ..., x_{T,i,N} },  T ∈ [1,4],  i ∈ [1,M]

where { x_{T,i,1}, ..., x_{T,i,N} } denotes the local neighborhood of the i-th three-dimensional point in the input features of the T-th dynamic local self-attention learning module, x_{T,i,j} denotes the j-th local neighborhood point in that local neighborhood, M denotes the number of three-dimensional points in the input features of the T-th dynamic local self-attention learning module, N = 20 denotes the number of local neighborhood points in the local neighborhood of the i-th three-dimensional point, and j ∈ [1,N];
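One possible PyTorch realisation of this K-nearest-neighbour grouping with N = 20 neighbours is sketched below; the use of torch.cdist and topk is an implementation choice assumed for illustration, not something prescribed by the embodiment.

```python
import torch

def knn_neighborhoods(x, n_neighbors=20):
    """Return, for every point, the indices of its N nearest neighbours.

    x: (B, M, C) point features of one dynamic local self-attention module.
    returns: (B, M, N) neighbour indices defining each local neighborhood.
    """
    dists = torch.cdist(x, x)                                  # (B, M, M) pairwise distances
    # Take the N+1 smallest distances and drop the point itself (distance 0).
    idx = dists.topk(n_neighbors + 1, dim=-1, largest=False).indices[..., 1:]
    return idx

def gather_neighbors(x, idx):
    """Gather neighbour features x_{T,i,j} for every centre point x_{T,i}."""
    B, M, _ = x.shape
    N = idx.shape[-1]
    batch = torch.arange(B, device=x.device).view(B, 1, 1).expand(B, M, N)
    return x[batch, idx]                                       # (B, M, N, C)

pts = torch.rand(2, 1024, 3)
nbr_idx = knn_neighborhoods(pts)        # local neighborhood of every point
nbrs = gather_neighbors(pts, nbr_idx)   # (2, 1024, 20, 3)
```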
According to the local neighborhood { x_{T,i,1}, ..., x_{T,i,N} }, a directed graph of the local neighborhood of the i-th three-dimensional point in the input features of the T-th dynamic local self-attention learning module is constructed, specifically defined as follows:

G_{T,i} = (V_{T,i}, E_{T,i}),  T ∈ [1,4],  i ∈ [1,M]

where G_{T,i} denotes the directed graph of the local neighborhood of the i-th three-dimensional point in the input features of the T-th dynamic local self-attention learning module, V_{T,i} denotes the set of vertices in that directed graph, namely the N neighborhood points of the local neighborhood of the i-th three-dimensional point, E_{T,i} denotes the set of edges in that directed graph, namely the edges between each neighborhood point of the local neighborhood and the center point x_{T,i}, and M denotes the number of three-dimensional points in the input features of the T-th dynamic local self-attention learning module;
On G_{T,i}, the self-attention feature of the local neighborhood of the i-th three-dimensional point in the input features of the T-th dynamic local self-attention learning module is calculated as follows:

Q_{T,i} = F_{T,i} W_Q,  K_{T,i} = F_{T,i} W_K,  V_{T,i} = F_{T,i} W_V

F^{sa}_{T,i} = softmax( Q_{T,i} K_{T,i}^T ) V_{T,i}

where F^{sa}_{T,i} denotes the self-attention feature of the local neighborhood of the i-th three-dimensional point in the input features of the T-th dynamic local self-attention learning module, Q_{T,i} denotes the self-attention information on the query dimension of the i-th three-dimensional point, K_{T,i} denotes the attention information on the key dimension of the i-th three-dimensional point, and V_{T,i} denotes the attention information on the value dimension of the i-th three-dimensional point; F_{T,i} denotes the input feature of the i-th three-dimensional point within its corresponding local neighborhood; W_Q, W_K and W_V are, respectively, the learnable matrix on the query dimension, the learnable matrix on the key dimension and the learnable matrix on the value dimension of the i-th three-dimensional point in the input features of the T-th dynamic local self-attention learning module;
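The sketch below shows one way to realise this local self-attention with learnable query, key and value projections in PyTorch; the scaled dot-product form and the tensor layout are assumptions made for illustration.

```python
import torch
import torch.nn as nn

class LocalSelfAttention(nn.Module):
    """Self-attention inside one local neighborhood (N neighbour points per centre)."""
    def __init__(self, channels):
        super().__init__()
        self.w_q = nn.Linear(channels, channels, bias=False)  # learnable matrix, query dimension
        self.w_k = nn.Linear(channels, channels, bias=False)  # learnable matrix, key dimension
        self.w_v = nn.Linear(channels, channels, bias=False)  # learnable matrix, value dimension

    def forward(self, center, neighbors):
        # center: (B, M, C); neighbors: (B, M, N, C)
        q = self.w_q(center).unsqueeze(2)            # (B, M, 1, C) query from the centre point
        k = self.w_k(neighbors)                      # (B, M, N, C) keys from neighbour points
        v = self.w_v(neighbors)                      # (B, M, N, C) values from neighbour points
        attn = (q * k).sum(-1) / k.shape[-1] ** 0.5  # (B, M, N) dot-product scores
        attn = torch.softmax(attn, dim=-1)
        return (attn.unsqueeze(-1) * v).sum(2)       # (B, M, C) self-attention feature

sa = LocalSelfAttention(channels=64)
out = sa(torch.rand(2, 1024, 64), torch.rand(2, 1024, 20, 64))
```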
For F^{sa}_{T,i}, self-attention semantic information learning is carried out by local point cloud semantic learning, specifically:

e_{T,i,j} = ReLU( θ_L · (x_{T,i,j} − x_{T,i}) ),  j ∈ [1,N],  i ∈ [1,M]

where e_{T,i,j} denotes the local semantic learning information of the j-th local neighborhood point of the i-th three-dimensional point in the input features of the T-th dynamic local self-attention learning module, θ_L is a set of M parameters used for learning the local self-attention semantic information of each three-dimensional point in the T-th dynamic local self-attention learning module, ReLU denotes the activation function, x_{T,i,j} denotes the j-th local neighborhood point in the local neighborhood of the i-th three-dimensional point, x_{T,i} denotes the i-th three-dimensional point in the input features of the T-th dynamic local self-attention learning module, M denotes the number of three-dimensional points in the input features of the T-th dynamic local self-attention learning module, and N denotes the number of local neighborhood points in the local neighborhood of the i-th three-dimensional point;

from e_{T,i,1}, ..., e_{T,i,N} the maximum value is selected, and its information is used to update the i-th three-dimensional point in the input features of the T-th dynamic local self-attention learning module:

x'_{T,i} = x_{T,i,jmax},  jmax = argmax_{j ∈ [1,N]} e_{T,i,j}

where jmax denotes the index of the local neighborhood point with the maximum local semantic learning information in the local neighborhood of the i-th three-dimensional point in the input features of the T-th dynamic local self-attention learning module, and N = 20 denotes the number of local neighborhood points in that local neighborhood, j ∈ [1,N];
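The following sketch gives one plausible reading of this step: a shared learnable transform θ_L applied to the offset between each neighbour point and its centre point, followed by ReLU and a maximum over the N neighbours. It is a sketch under these assumptions, not the exact operator of the embodiment.

```python
import torch
import torch.nn as nn

class LocalSemanticLearning(nn.Module):
    """ReLU(theta_L * (x_{T,i,j} - x_{T,i})) followed by a max over the N neighbours."""
    def __init__(self, channels):
        super().__init__()
        self.theta_l = nn.Linear(channels, channels)   # parameters theta_L (assumed shared linear map)
        self.relu = nn.ReLU()

    def forward(self, center, neighbors):
        # center: (B, M, C); neighbors: (B, M, N, C)
        offsets = neighbors - center.unsqueeze(2)      # x_{T,i,j} - x_{T,i}
        e = self.relu(self.theta_l(offsets))           # local semantic learning information e_{T,i,j}
        # jmax: neighbour with the maximum semantic information, used to update the centre point.
        updated, jmax = e.max(dim=2)                   # values (B, M, C) and argmax indices per channel
        return updated, jmax

lsl = LocalSemanticLearning(channels=64)
x_new, jmax = lsl(torch.rand(2, 1024, 64), torch.rand(2, 1024, 20, 64))
```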
The aggregation module performs global feature aggregation on the output feature of a first dynamic local self-attention learning module corresponding to each group of preprocessed three-dimensional point cloud data, the output feature of a second dynamic local self-attention learning module corresponding to each group of preprocessed three-dimensional point cloud data, the output feature of a third dynamic local self-attention learning module corresponding to each group of preprocessed three-dimensional point cloud data, and the output feature of a fourth dynamic local self-attention learning module corresponding to each group of preprocessed three-dimensional point cloud data to obtain a global aggregation feature corresponding to each group of preprocessed three-dimensional point cloud data:
F = cat( F_1, F_2, F_3, F_4 )

where cat denotes global feature aggregation, F_T denotes the output feature of the T-th dynamic local self-attention learning module, and F denotes the global aggregation feature corresponding to each group of preprocessed three-dimensional point cloud data;
the pooling module is used for performing high-dimensional multi-channel information dimensionality reduction on global aggregation features corresponding to each group of preprocessed three-dimensional point cloud data to obtain global feature vectors corresponding to each group of preprocessed three-dimensional point cloud data, wherein the global feature vectors corresponding to each group of preprocessed three-dimensional point cloud data comprise global geometric information of each group of preprocessed three-dimensional point cloud data subjected to local semantic self-attention learning;
the Softmax classifier classifies the global feature vector corresponding to each group of preprocessed three-dimensional point cloud data through Softmax to obtain the predicted label category probability ŷ of each group of preprocessed three-dimensional point cloud data;
the real label category of each group of preprocessed three-dimensional point cloud data is y; the predicted label category corresponding to the predicted label category probability of each group of preprocessed three-dimensional point cloud data is the predicted label category obtained by the dynamic local self-attention convolution network prediction;
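A compact PyTorch sketch of this head (concatenation of the four module outputs, global pooling, and Softmax classification) follows; the choice of max pooling, the MLP sizes and the number of classes are illustrative assumptions.

```python
import torch
import torch.nn as nn

class ClassificationHead(nn.Module):
    """Aggregation (cat), pooling, and Softmax classification of the four module outputs."""
    def __init__(self, per_module_channels=(64, 64, 128, 256), num_classes=40):
        super().__init__()
        total = sum(per_module_channels)
        self.classifier = nn.Sequential(
            nn.Linear(total, 512), nn.ReLU(),
            nn.Linear(512, num_classes),            # logits; softmax is applied afterwards
        )

    def forward(self, module_outputs):
        # module_outputs: list of four tensors (B, M, C_T) from the cascaded modules.
        f = torch.cat(module_outputs, dim=-1)       # global aggregation feature F = cat(...)
        g = f.max(dim=1).values                     # pooling: (B, sum C_T) global feature vector
        logits = self.classifier(g)
        return torch.softmax(logits, dim=-1)        # predicted label category probability

head = ClassificationHead()
probs = head([torch.rand(2, 1024, c) for c in (64, 64, 128, 256)])
```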
the loss function model in step 2 is specifically defined as follows:
Loss = − Σ y · log( softmax(ŷ) )

where y denotes the real label category of each group of preprocessed three-dimensional point cloud data, ŷ denotes the predicted label category probability of each group of preprocessed three-dimensional point cloud data, Loss denotes the difference between the real sample label and the predicted probability, and softmax denotes Softmax classification;
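In code, a loss of this form corresponds to the standard cross-entropy between the real label categories and the Softmax output; a minimal sketch assuming integer class labels is:

```python
import torch
import torch.nn.functional as F

def classification_loss(logits, labels):
    """Cross-entropy between Softmax-classified predictions and real label categories.

    logits: (B, num_classes) raw classifier outputs; labels: (B,) integer classes.
    F.cross_entropy applies log-softmax internally, matching -sum(y * log(softmax(y_hat))).
    """
    return F.cross_entropy(logits, labels)

loss = classification_loss(torch.randn(8, 40), torch.randint(0, 40, (8,)))
```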
step 2, obtaining the optimized dynamic local self-attention convolution network through the SGD algorithm optimization training, which is specifically as follows:
iteratively executing the following optimization process by using multiple groups of preprocessed three-dimensional point cloud data:
each group of preprocessed three-dimensional point cloud data is predicted by the dynamic local self-attention convolution network to obtain the predicted label category probability of that group of preprocessed three-dimensional point cloud data, the Loss function model is then calculated, and the network is optimized by SGD algorithm training, so that the dynamic local self-attention convolution network optimized on that group of preprocessed three-dimensional point cloud data is obtained;
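A minimal training-loop sketch using torch.optim.SGD is shown below; the learning rate, momentum, batch size and the toy stand-in model are assumptions for illustration only.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

# Toy stand-in for the dynamic local self-attention convolution network.
model = nn.Sequential(nn.Flatten(), nn.Linear(1024 * 3, 40))
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
criterion = nn.CrossEntropyLoss()

# Multiple groups of preprocessed point clouds with manually marked real labels.
clouds = torch.rand(64, 1024, 3)
labels = torch.randint(0, 40, (64,))
loader = DataLoader(TensorDataset(clouds, labels), batch_size=8, shuffle=True)

for epoch in range(10):                          # iteratively execute the optimization process
    for batch_clouds, batch_labels in loader:
        optimizer.zero_grad()
        logits = model(batch_clouds)             # network prediction -> label category scores
        loss = criterion(logits, batch_labels)   # Loss function model
        loss.backward()
        optimizer.step()                         # SGD update of the network parameters
```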
step 3: the upper computer collects indoor three-dimensional point cloud data in real time through the three-dimensional laser radar, the indoor three-dimensional point cloud data collected in real time are preprocessed through the data preprocessing of step 1 to obtain real-time preprocessed indoor three-dimensional point cloud data, and the real-time preprocessed indoor three-dimensional point cloud data are predicted through the optimized dynamic local self-attention convolution network to obtain the predicted label categories of the real-time preprocessed indoor three-dimensional point cloud data;
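At inference time, step 3 reduces to loading the optimized network and running one forward pass per acquired frame; the sketch below assumes that each frame arrives from the lidar driver as an (M, 3) NumPy array, which is not specified in the embodiment.

```python
import numpy as np
import torch

def predict_frame(model, frame_xyz):
    """Predict the label category of one real-time preprocessed indoor point cloud frame.

    model: the optimized dynamic local self-attention convolution network (here any
           torch module mapping (1, M, 3) to class scores).
    frame_xyz: (M, 3) NumPy array of points acquired by the three-dimensional lidar.
    """
    model.eval()
    with torch.no_grad():
        x = torch.from_numpy(frame_xyz).float().unsqueeze(0)   # (1, M, 3)
        probs = model(x)
        return int(probs.argmax(dim=-1))                       # predicted label category

# Example with a toy stand-in model.
dummy = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(1024 * 3, 40))
label = predict_frame(dummy, np.random.rand(1024, 3).astype(np.float32))
```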
it should be understood that parts of the specification not set forth in detail are well within the prior art.
Although the terms three-dimensional lidar, upper computer, etc. are used frequently herein, the possibility of using other terms is not excluded. These terms are used merely to describe the nature of the invention more conveniently; interpreting them as imposing any additional limitation would be contrary to the spirit of the invention.
It should be understood that the above description of the preferred embodiments is given for clarity and not for any purpose of limitation, and that various changes, substitutions and alterations can be made herein without departing from the spirit and scope of the invention as defined by the appended claims.
Claims (8)
1. A dynamic local self-attention convolution network point cloud analysis system, comprising:
three-dimensional laser radar and an upper computer;
the three-dimensional laser radar is connected with the upper computer;
the three-dimensional laser radar is used for acquiring indoor three-dimensional point cloud data in real time and transmitting the indoor three-dimensional point cloud data acquired in real time to the upper computer;
and the upper computer processes the indoor three-dimensional point cloud data acquired in real time by a dynamic local self-attention convolution network point cloud analysis method to obtain the prediction label category of the indoor three-dimensional point cloud data acquired in real time.
2. A method for performing dynamic local self-attention convolution network point cloud analysis using the dynamic local self-attention convolution network point cloud analysis system of claim 1, comprising the steps of:
step 1: introducing multiple groups of original three-dimensional point cloud data, carrying out data preprocessing on each group of original three-dimensional point cloud data to obtain each group of preprocessed three-dimensional point cloud data, and manually marking the real label category of each group of preprocessed three-dimensional point cloud data;
step 2: constructing a dynamic local self-attention convolution network, inputting each group of preprocessed three-dimensional point cloud data into the dynamic local self-attention convolution network for prediction to obtain a prediction label category of each group of preprocessed three-dimensional point cloud data, constructing a loss function model by combining the real label categories of each group of preprocessed three-dimensional point cloud data, and obtaining an optimized dynamic local self-attention convolution network through SGD algorithm optimization training;
step 3: the upper computer collects indoor three-dimensional point cloud data in real time through the three-dimensional laser radar, the indoor three-dimensional point cloud data collected in real time are preprocessed through the data preprocessing in step 1 to obtain real-time preprocessed indoor three-dimensional point cloud data, and the real-time preprocessed indoor three-dimensional point cloud data are predicted through the optimized dynamic local self-attention convolution network to obtain the prediction label category of the real-time preprocessed indoor three-dimensional point cloud data.
3. The dynamic local self-attention convolution network point cloud analysis method of claim 2, characterized in that:
step 2, the dynamic local self-attention convolution network comprises: a first dynamic local self-attention learning module, a second dynamic local self-attention learning module, a third dynamic local self-attention learning module, a fourth dynamic local self-attention learning module, an aggregation module, a pooling module and a Softmax classifier;
the first dynamic local self-attention learning module, the second dynamic local self-attention learning module, the third dynamic local self-attention learning module and the fourth dynamic local self-attention learning module in the step 2 are sequentially connected in a cascade manner;
the first dynamic local self-attention learning module, the second dynamic local self-attention learning module, the third dynamic local self-attention learning module and the fourth dynamic local self-attention learning module are respectively connected with the aggregation module;
the aggregation module is connected with the pooling module;
the pooling module is connected with the Softmax classifier.
4. The dynamic local self-attention convolution network point cloud analysis method of claim 3, characterized in that:
the first dynamic local self-attention learning module takes each group of preprocessed three-dimensional point cloud data as input features of the first dynamic local self-attention learning module, and dynamically and locally self-attentively learns all three-dimensional points in the input features of the first dynamic local self-attention learning module to obtain output features of the first dynamic local self-attention learning module;
the second dynamic local self-attention learning module takes the output characteristic of the first dynamic local self-attention learning module as the input characteristic of the second dynamic local self-attention learning module, and obtains the output characteristic of the second dynamic local self-attention learning module by performing dynamic local self-attention learning on all three-dimensional points in the input characteristic of the second dynamic local self-attention learning module;
the third dynamic local self-attention learning module takes the output characteristic of the second dynamic local self-attention learning module as the input characteristic of the third dynamic local self-attention learning module, and obtains the output characteristic of the third dynamic local self-attention learning module by performing dynamic local self-attention learning on all points of a point cloud;
the fourth dynamic local self-attention learning module takes the output characteristics of the third dynamic local self-attention learning module as the input characteristics of the fourth dynamic local self-attention learning module, and obtains the output characteristics of the fourth dynamic local self-attention learning module by performing dynamic local self-attention learning on all points of the point cloud.
5. The dynamic local self-attention convolution network point cloud analysis method of claim 4, characterized in that:
the specific calculation process of the dynamic local self-attention learning is as follows:
in the T-th dynamic local self-attention learning module, the input features of the T-th dynamic local self-attention learning module are used to obtain a local neighborhood of each three-dimensional point in the input features of the T-th dynamic local self-attention learning module by using a K-nearest neighbor algorithm, which is specifically defined as follows:
KNN(x_{T,i}) = { x_{T,i,1}, x_{T,i,2}, ..., x_{T,i,N} },  T ∈ [1,4],  i ∈ [1,M]

where { x_{T,i,1}, ..., x_{T,i,N} } denotes the local neighborhood of the i-th three-dimensional point in the input features of the T-th dynamic local self-attention learning module, x_{T,i,j} denotes the j-th local neighborhood point in that local neighborhood, M denotes the number of three-dimensional points in the input features of the T-th dynamic local self-attention learning module, N denotes the number of local neighborhood points in the local neighborhood of the i-th three-dimensional point, and j ∈ [1,N];
according to the local neighborhood { x_{T,i,1}, ..., x_{T,i,N} }, a directed graph of the local neighborhood of the i-th three-dimensional point in the input features of the T-th dynamic local self-attention learning module is constructed, specifically defined as follows:

G_{T,i} = (V_{T,i}, E_{T,i}),  T ∈ [1,4],  i ∈ [1,M]

where G_{T,i} denotes the directed graph of the local neighborhood of the i-th three-dimensional point in the input features of the T-th dynamic local self-attention learning module, V_{T,i} denotes the set of vertices in that directed graph, namely the N neighborhood points of the local neighborhood of the i-th three-dimensional point, E_{T,i} denotes the set of edges in that directed graph, namely the edges between each neighborhood point of the local neighborhood and the center point x_{T,i}, and M denotes the number of three-dimensional points in the input features of the T-th dynamic local self-attention learning module;
on G_{T,i}, the self-attention feature of the local neighborhood of the i-th three-dimensional point in the input features of the T-th dynamic local self-attention learning module is calculated as follows:

Q_{T,i} = F_{T,i} W_Q,  K_{T,i} = F_{T,i} W_K,  V_{T,i} = F_{T,i} W_V

F^{sa}_{T,i} = softmax( Q_{T,i} K_{T,i}^T ) V_{T,i}

where F^{sa}_{T,i} denotes the self-attention feature of the local neighborhood of the i-th three-dimensional point in the input features of the T-th dynamic local self-attention learning module, Q_{T,i} denotes the self-attention information on the query dimension of the i-th three-dimensional point, K_{T,i} denotes the attention information on the key dimension of the i-th three-dimensional point, and V_{T,i} denotes the attention information on the value dimension of the i-th three-dimensional point; F_{T,i} denotes the input feature of the i-th three-dimensional point within its corresponding local neighborhood; W_Q, W_K and W_V are, respectively, the learnable matrix on the query dimension, the learnable matrix on the key dimension and the learnable matrix on the value dimension of the i-th three-dimensional point in the input features of the T-th dynamic local self-attention learning module;
for F^{sa}_{T,i}, self-attention semantic information learning is carried out by local point cloud semantic learning, specifically:

e_{T,i,j} = ReLU( θ_L · (x_{T,i,j} − x_{T,i}) ),  j ∈ [1,N],  i ∈ [1,M]

where e_{T,i,j} denotes the local semantic learning information of the j-th local neighborhood point of the i-th three-dimensional point in the input features of the T-th dynamic local self-attention learning module, θ_L is a set of M parameters used for learning the local self-attention semantic information of each three-dimensional point in the T-th dynamic local self-attention learning module, ReLU denotes the activation function, x_{T,i,j} denotes the j-th local neighborhood point in the local neighborhood of the i-th three-dimensional point, x_{T,i} denotes the i-th three-dimensional point in the input features of the T-th dynamic local self-attention learning module, M denotes the number of three-dimensional points in the input features of the T-th dynamic local self-attention learning module, and N denotes the number of local neighborhood points in the local neighborhood of the i-th three-dimensional point;

from e_{T,i,1}, ..., e_{T,i,N} the maximum value is selected, and its information is used to update the i-th three-dimensional point x'_{T,i} in the input features of the T-th dynamic local self-attention learning module:

x'_{T,i} = x_{T,i,jmax},  jmax = argmax_{j ∈ [1,N]} e_{T,i,j}

where jmax denotes the index of the local neighborhood point with the maximum local semantic learning information in the local neighborhood of the i-th three-dimensional point in the input features of the T-th dynamic local self-attention learning module.
6. The dynamic local self-attention convolution network point cloud analysis method of claim 3, characterized in that:
the aggregation module performs global feature aggregation on the output features of the first, second, third, and fourth dynamic local self-attention learning modules corresponding to each group of preprocessed three-dimensional point cloud data, obtaining the global aggregation feature corresponding to each group of preprocessed three-dimensional point cloud data:
wherein cat denotes the global feature aggregation operation, and F denotes the global aggregation feature corresponding to each group of preprocessed three-dimensional point cloud data;
the pooling module performs high-dimensional multi-channel information dimensionality reduction on the global aggregation feature corresponding to each group of preprocessed three-dimensional point cloud data to obtain the global feature vector corresponding to each group of preprocessed three-dimensional point cloud data, where this global feature vector contains the global geometric information of each group of preprocessed three-dimensional point cloud data after local semantic self-attention learning;
the SoftMax classifier performs SoftMax classification on the global feature vector corresponding to each group of preprocessed three-dimensional point cloud data to obtain the predicted label category probability of each group of preprocessed three-dimensional point cloud data; each group of preprocessed three-dimensional point cloud data has a real label category, and the predicted label category corresponding to the predicted label category probability of each group of preprocessed three-dimensional point cloud data is the predicted label category obtained by the dynamic local self-attention convolution network.
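For illustration only, a minimal PyTorch sketch of this aggregation, pooling, and classification pipeline is given below; the channel sizes, the number of classes, and the use of max pooling are assumptions, not values taken from the claims:

```python
import torch
import torch.nn as nn

class AggregatePoolClassify(nn.Module):
    """Sketch: concatenate four module outputs, pool them into one global
    feature vector per point cloud group, and classify with SoftMax."""

    def __init__(self, channels=(64, 64, 128, 256), num_classes=40):
        super().__init__()
        self.classifier = nn.Linear(sum(channels), num_classes)

    def forward(self, f1, f2, f3, f4):
        # Global feature aggregation: F = cat(f1, f2, f3, f4) along the channel axis.
        F = torch.cat([f1, f2, f3, f4], dim=1)      # (B, sum(channels), M)
        # Pooling module: reduce the high-dimensional multi-channel information
        # over the M points to a single global feature vector per group.
        g = F.max(dim=2).values                     # (B, sum(channels))
        # SoftMax classifier: predicted label category probabilities.
        return torch.softmax(self.classifier(g), dim=1)
```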
7. The dynamic local self-attention convolution network point cloud analysis method of claim 2, characterized in that:
the loss function model in step 2 is specifically defined as follows:
wherein the real label category of each group of preprocessed three-dimensional point cloud data and the predicted label category probability of each group of preprocessed three-dimensional point cloud data enter the loss; Loss denotes the difference between the real sample label and the predicted probability, and softmax denotes SoftMax classification.
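The loss formula itself is not reproduced above. A standard choice consistent with the description — real label y, predicted probability p obtained from softmax — is the cross-entropy loss, offered here purely as an assumed illustration:

```latex
\mathrm{Loss} \;=\; -\sum_{c} y_{c}\,\log\!\bigl(\operatorname{softmax}(z)_{c}\bigr) \;=\; -\sum_{c} y_{c}\,\log p_{c}
```

where y is the one-hot real label category, z the classifier output, and p the predicted label category probability.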
8. The dynamic local self-attention convolution network point cloud analysis method of claim 2, characterized in that:
in step 2, the optimized dynamic local self-attention convolution network is obtained through optimization training with the SGD algorithm, specifically as follows:
the following optimization process is executed iteratively using a plurality of groups of preprocessed three-dimensional point cloud data:
each group of preprocessed three-dimensional point cloud data is passed through the dynamic local self-attention convolution network to obtain the predicted label category probability of that group; the loss function model is then calculated, and optimization training of the resulting Loss with the SGD algorithm yields the dynamic local self-attention convolution network optimized on that group of preprocessed three-dimensional point cloud data.
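A minimal sketch of this iterative SGD optimization loop is shown below; the network class, data loader, learning rate, momentum, and epoch count are placeholders rather than values stated in the claims:

```python
import torch
import torch.nn as nn

def train(network, data_loader, epochs=200, lr=0.01):
    """Sketch: iterate over groups of preprocessed point clouds and optimize with SGD."""
    optimizer = torch.optim.SGD(network.parameters(), lr=lr, momentum=0.9)
    criterion = nn.CrossEntropyLoss()            # stands in for the loss function model
    for _ in range(epochs):
        for points, labels in data_loader:       # one group of preprocessed point cloud data
            logits = network(points)             # raw scores; softmax is applied inside the loss
            loss = criterion(logits, labels)     # difference between real label and prediction
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()                     # SGD update toward the optimized network
    return network
```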
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211010334.8A CN115471833A (en) | 2022-08-23 | 2022-08-23 | Dynamic local self-attention convolution network point cloud analysis system and method |
Publications (1)
Publication Number | Publication Date |
---|---|
CN115471833A true CN115471833A (en) | 2022-12-13 |
Family
ID=84366424
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202211010334.8A Pending CN115471833A (en) | 2022-08-23 | 2022-08-23 | Dynamic local self-attention convolution network point cloud analysis system and method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115471833A (en) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117473331A (en) * | 2023-12-27 | 2024-01-30 | 苏州元脑智能科技有限公司 | Stream data processing method, device, equipment and storage medium |
CN117473331B (en) * | 2023-12-27 | 2024-03-08 | 苏州元脑智能科技有限公司 | Stream data processing method, device, equipment and storage medium |
CN118642030A (en) * | 2024-08-13 | 2024-09-13 | 国网福建省电力有限公司 | Error prediction method, device and storage medium for capacitive voltage transformer |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||