CN113838109A - Low-coincidence point cloud registration method - Google Patents
- Publication number: CN113838109A (application number CN202111437345.XA)
- Authority
- CN
- China
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/30—Determination of transform parameters for the alignment of images, i.e. image registration
- G06T7/33—Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/11—Region-based segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10028—Range image; Depth image; 3D point clouds
Abstract
The invention discloses a low-coincidence point cloud registration method that can complete the point cloud registration task in low-overlap scenes. To address the difficulty of finding same-name (corresponding) point pairs when the overlap ratio is low, a self-attention mechanism lets the aggregation points perceive the point cloud as a whole, while a cross-attention mechanism explicitly mines overlap-region information and predicts, for every point in the cloud, the confidence that it lies in the overlap region; the point pairs used in the matching stage are then sampled from the overlap region by probability selection, improving the registration recall rate. In addition, the receptive field of the convolution kernel is dynamically restricted to the overlap region, which avoids extracting invalid geometric neighborhood information and improves the precision and accuracy of the point-wise features.
Description
Technical Field
The invention belongs to the technical fields of artificial intelligence and of control science and engineering, and particularly relates to a low-coincidence point cloud registration method.
Background
Single-photon imaging is a novel technology that uses a pulsed laser source and nonlinear optical techniques such as wavelength-division multiplexing to achieve ultra-high-precision imaging with strong noise resistance. Compared with a traditional lidar system, a single-photon lidar system retains significant advantages at long range and under severe environmental noise, and has huge application potential in fields such as weak-signal detection, long-range imaging, and precision measurement.
In three-dimensional imaging, in order to obtain a complete three-dimensional model of a real-world object or scene, a single-photon lidar system generally has to acquire point cloud data of the target from several spatial angles, yielding multiple partially overlapping point cloud segments in different spatial coordinate systems; complete three-dimensional reconstruction is then achieved through point cloud registration.
From the viewpoint of the optimization strategy, point cloud registration methods can be divided into feature-matching-based methods and iterative-optimization-based methods; in terms of generalization across point clouds with different characteristics, feature-matching-based methods perform better than iterative ones. Features were originally hand-crafted from spatial statistics such as normal-vector information, typically as statistical histograms; in recent years, with the development of deep learning, this has evolved into learning rotation-invariant deep features. After dense feature extraction, accurate correspondences between same-name point pairs are selected in the high-dimensional feature space by the robust outlier-rejection method RANSAC, and the rotation and translation pose between point cloud segments is then recovered by methods such as Singular Value Decomposition (SVD).
Currently, commonly used point cloud registration algorithms generally require a high overlap ratio between the point cloud segments captured from multiple angles, so as to provide sufficient point-pair matching information. The overlap ratio in the datasets used by deep-learning-based registration algorithms is generally above 30%, and in such high-overlap scenes feature-matching and global-optimization algorithms achieve registration recall rates above 90%.
However, because the depth imaging range of a single-photon lidar is limited and its point clouds are dense, multi-angle point cloud segments with high overlap are difficult to obtain in many unconventional imaging tasks. The point cloud registration problem under low overlap (overlap ratio below 30%) remains poorly solved: the proportion of same-name point pairs in the whole cloud is low, conventional random sampling or Farthest Point Sampling (FPS) cannot obtain a sufficient number of same-name point pairs, and large errors arise in the matching stage.
Disclosure of Invention
In order to remedy the defects of the prior art, the invention provides a low-coincidence point cloud registration method oriented to single-photon lidar detection systems. It addresses the small number of same-name point pairs and the poor registration precision in low-overlap scenes by explicitly mining overlap-region information, improving both the feature-matching precision and the registration recall rate of point cloud registration under low overlap. The specific technical scheme of the invention is as follows:
a low-coincidence point cloud registration method comprises the following steps:
s1: data preprocessing:
acquire high-density array point cloud data of the source point cloud P and the target point cloud Q with a single-photon lidar detection system, build an octree over the acquired point cloud data, and establish an index over the unordered point cloud to enable fast nearest-neighbor search;
s2: extracting full convolution characteristics:
for point clouds with a spatially sparse characteristic, full-convolution aggregation is carried out point by point on the source point cloud P and the target point cloud Q to obtain the feature aggregation point coordinates representing each neighborhood and the corresponding high-dimensional feature vectors of the aggregation points;
s3: constructing an attention mechanism module to endow the full-convolution features with overlap-region perception capability;
s4: solving the rigid-body transformation matrix T inversely through a rigid-body transformation estimation module to realize three-dimensional reconstruction of the complete scene.
Further, the specific process of step S2 is as follows:
s2-1: the point cloud registration problem is modeled as: given a source point cloud P = {p_i} and a target point cloud Q = {q_j} in different coordinate systems, solve for the rotation matrix R ∈ SO(3) and the translation vector t ∈ ℝ³ that minimize the point-pair error:

$$\min_{R \in SO(3),\; t \in \mathbb{R}^{3}} \sum_{(p_i,\, q_j) \in \mathcal{K}} \left\| R\, p_i + t - q_j \right\|_{2}^{2}$$

where (p_i, q_j) ∈ 𝒦 denotes a same-name point pair of the source cloud P and the target cloud Q, which exists only in the overlap region of P and Q; SO(3) is the set of all rotation matrices about the origin of Euclidean space; and ℝ denotes the real numbers;
the overlap ratio between the two point clouds is defined as:

$$O(P, Q) = \frac{1}{|P|} \left| \left\{ p_i \in P \;:\; \left\| \mathrm{NN}\!\left( \bar{R}\, p_i + \bar{t},\, Q \right) - \left( \bar{R}\, p_i + \bar{t} \right) \right\|_{2} \le \tau \right\} \right|$$

where R̄ is the ground-truth rotation transformation from the source cloud P to the target cloud Q, t̄ is the ground-truth translation transformation from P to Q, τ is the distance threshold, and NN(·, Q) denotes the nearest-neighbor operation;
s2-2: extracting sparse full-convolution aggregation features;
replacing the traditional voxel-grid convolution with a sparse full convolution, the sparse convolution is defined as:

$$\mathbf{x}^{\mathrm{out}}_{u} = \sum_{i \in \mathcal{N}^{3}\left(u,\, \mathcal{C}^{\mathrm{in}}\right)} W_{i}\, \mathbf{x}^{\mathrm{in}}_{u+i} \qquad \text{for } u \in \mathcal{C}^{\mathrm{out}}$$

where u represents an input location; 𝒞^in and 𝒞^out are the sets of occupied input and output coordinates; N^in is the input dimension of the full-convolution neural network and N^out its output dimension; 𝒩³(u, 𝒞^in) is the set of offsets i, defined relative to the input u, for which u + i is occupied; x^out_u is the sparse convolution output; x^in_{u+i} is the input at relative offset i of the convolution output and exists if and only if u + i ∈ 𝒞^in; and W_i ∈ ℝ^{N^out × N^in} represents the convolution kernel parameters. The source point cloud P and the target point cloud Q are passed through hierarchical convolution and downsampling to generate the feature aggregation point coordinates representing each neighborhood and the high-dimensional feature aggregation vectors F^P and F^Q.
Further, the specific process of step S3 is as follows:
s3-1: based on the high-dimensional feature aggregation vectors, a DGCNN is adopted to enhance local information so that the source point cloud P and the target point cloud Q achieve whole-cloud perception;
s3-2: the high-dimensional feature aggregation vectors are passed through a self-attention module; a cross-attention Transformer module is then adopted to dynamically detect whether corresponding aggregation points with a similar feature space exist in the target point cloud, and the confidence that an aggregation point and its neighborhood lie in the overlap region is explicitly mined from this sequence-level semantic information;
S3-3: using the dynamic attention convolution network GAC together with the overlap confidence of each single aggregation point and its neighborhood, the convolution kernel is dynamically restricted to the predicted overlap region according to the confidence, reducing the proportion of invalid features in the matching stage;
s3-4: based on the output high-dimensional feature aggregation vectors, high-dimensional information decoding is carried out with dilated sparse convolution, and dynamic receptive-field selection is carried out with the GAC network, realizing effective information extraction and yielding the point-wise features together with the overlap-region confidence.
Further, the step S3-1 includes:
the dynamic graph convolution DGCNN takes the feature aggregation point coordinates and their high-dimensional vectors as input and enhances the feature information of the aggregation points; EdgeConv, the core step of DGCNN, is defined as:

$$x'_{i} = \mathrm{RELU}\!\left( \max_{j \in \mathcal{N}(i)} h_{\Theta}\!\left( x_{i},\; x_{j} - x_{i} \right) \right)$$

where for the input x_i a neighborhood set 𝒩(i) is constructed; h_Θ is composed of multi-layer perceptrons; x_i represents global information and x_j − x_i represents local neighborhood information. The significance of the dynamic graph update is that 𝒩(i) is no longer limited to Euclidean space but extends to the feature space, so in the hierarchical edge convolution the spatial definition of 𝒩(i) changes dynamically at every edge convolution and the feature neighborhood is rebuilt, letting the receptive field cover the whole point cloud while keeping the sparse characteristic; h_Θ(x_i, x_j − x_i) represents the mutual information of each edge in the feature-space structure, realized by a multi-layer perceptron structure; x'_i is the final output convolution result, and RELU represents the ramp activation function.
Further, the specific process of step S3-2 is:
s3-2-1: adopting the Transformer, the kernel model of natural language processing (NLP), the point cloud is modeled as an information sequence with continuous features, and the high-dimensional feature aggregation vectors interact to mine spatial overlap information; the core operation of the Transformer module is defined as:

$$\mathrm{Attention}(Query,\, Key,\, Value) = \mathrm{softmax}\!\left( \frac{Query \cdot Key^{\top}}{\sqrt{d_{k}}} \right) \cdot Value$$

where the index Query, the key value Key, and the information Value are viewed, by analogy, as a database lookup operation; embodied in the point cloud registration problem, Query, Key, and Value are high-dimensional feature aggregation vectors, and d_k is the vector dimension of Query;
s3-2-2: for the information flow from the target point cloud Q to the source point cloud P, Query is the feature aggregation vector of P and Key is that of Q; the Transformer operation is viewed as first computing a cosine-like similarity in the feature space by taking the dot product of the index vector Query and the key value vector Key, with the vector dimension d_k as the scale parameter;
s3-2-3: the obtained similarity values are normalized by a softmax function into a probability distribution lying in 0-1 and summing to 1, and each probability is multiplied by the information Value corresponding to its Key, giving the cross-attention information Attention(Query, Key, Value) between the index Query and all key values Key, i.e. the cross information flow from the target point cloud Q to the source point cloud P; the reverse information flow from P to Q follows the same calculation rule;
s3-2-4: the overlap-region information is implicit in the cross-attention information, so one pass of cross-attention information flow is computed for the aggregation points of the source point cloud P and of the target point cloud Q respectively:

$$F^{P}_{\mathrm{mix}} = \mathrm{cat}\!\left[ F^{P} \,\|\, \mathrm{Attention}\!\left(F^{P},\, F^{Q},\, F^{Q}\right) \right], \qquad F^{Q}_{\mathrm{mix}} = \mathrm{cat}\!\left[ F^{Q} \,\|\, \mathrm{Attention}\!\left(F^{Q},\, F^{P},\, F^{P}\right) \right]$$

i.e. the obtained attention information is concatenated with the original high-dimensional feature aggregation vectors, where cat[·‖·] denotes the vector concatenation operation along the feature dimension; the resulting mixed features are passed through a multi-layer perceptron MLP to output the point-wise overlap-region confidence of the aggregation points.
Further, the specific process of step S3-3 is:
s3-3-1: the high-dimensional feature aggregation vectors and the overlap-region confidence are obtained by forward propagation of the network; the dynamic attention network mechanism GAC is adopted to dynamically restrict the receptive field of the convolution kernel to the predicted overlap region, reducing the extraction of invalid geometric information from non-overlap regions that would pollute the point-wise features;
s3-3-2: the aggregation points are now represented by an undirected graph G = (V, E), where V and E respectively denote the vertices and undirected edges of the graph structure and |V| is the number of vertices of the undirected graph; let 𝒩(v) denote the set formed by the neighborhood vertices of a vertex v of the undirected graph G;
s3-3-3: to simplify notation, let F denote the high-dimensional feature aggregation vectors of point cloud P or Q, and b the feature dimension currently entering the GAC network. The GAC network takes the overlap confidence as prior information to select, by probability, the aggregation points lying in the overlap region, and re-aggregates the overlap-region coordinates and high-dimensional features with the graph convolution operation defined in GAC, i.e. the graph convolution is performed only in the overlap region:

$$\hat{f}_{i} = \sum_{j \in \mathcal{N}(i)} \alpha_{ij} \odot \mathrm{MLP}\!\left(f_{j}\right) + b_{i}, \qquad \alpha_{ij} = \operatorname{softmax}_{j \in \mathcal{N}(i)}\!\left( \mathrm{MLP}\!\left( \left[\, p_{j} - p_{i} \,\|\, f_{j} - f_{i} \,\right] \right) \right)$$

where MLP denotes a multi-layer perceptron, ⊙ denotes the element-wise product, 𝒩(i) is the neighborhood point set of the center point i, f_i and f_j are the intermediate-layer features of the center point i and the neighbor point j respectively, p_i and p_j are their coordinates, and b_i is an offset; the graph convolution thus aggregates using both the Euclidean coordinate information and the feature-space distance information within the neighborhood, and since the feature-space neighborhood is defined differently at each forward propagation, dynamic restriction of the receptive field is realized.
Further, the specific process of step S3-4 is:
s3-4-1: the features of the aggregation points and the overlap-region confidence are concatenated along the feature dimension to obtain a feature map carrying both;
s3-4-2: an upsampling operation is performed on the feature map with sparse dilated convolution, the parameters (convolution kernel size and stride) being chosen consistently with the downsampling, to recover output point clouds of the same size as the input point clouds P and Q while simultaneously outputting the point-wise overlap-region confidence.
Further, the specific process of step S4 is as follows:
s4-1: select a number of candidate matching points by probability sampling based on the point-wise overlap confidence, and obtain accurate correspondences between same-name point pairs in the high-dimensional feature space with the robust outlier-rejection method RANSAC;
s4-2: based on the same-name point-pair correspondences, use Singular Value Decomposition (SVD) to solve inversely for the rotation and translation pose, forming the rigid transformation matrix T;
s4-3: apply the relative pose transformation to the target point cloud, transforming it into the source point cloud coordinate system, and then perform a point-cloud density equalization operation to obtain the complete three-dimensional model.
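Step s4-2, recovering the rigid transform from matched same-name point pairs, is the classical SVD (Kabsch) solution. The following self-contained sketch illustrates it; the function name and the demo data are assumptions for illustration, not part of the claimed method:

```python
import numpy as np

def estimate_rigid_transform(P, Q):
    """Solve for R, t such that R @ p_i + t ~= q_i over matched pairs
    (least-squares rigid alignment via SVD, as in step s4-2)."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)            # centroids
    H = (P - cp).T @ (Q - cq)                          # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)])  # guard vs. reflection
    R = Vt.T @ D @ U.T
    t = cq - R @ cp
    return R, t

# Demo (hypothetical data): recover a known rotation about z and translation.
rng = np.random.default_rng(0)
P = rng.normal(size=(50, 3))
a = 0.3
R_true = np.array([[np.cos(a), -np.sin(a), 0.0],
                   [np.sin(a),  np.cos(a), 0.0],
                   [0.0,        0.0,       1.0]])
t_true = np.array([1.0, -2.0, 0.5])
Q = P @ R_true.T + t_true
R, t = estimate_rigid_transform(P, Q)
```

In practice this closed-form step runs inside the RANSAC loop of s4-1, on each sampled candidate correspondence set.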
Further, in step S3-4-2, the upsampled outputs are the point-wise features of P and Q as 32-dimensional compact features, with the point-wise overlap-region confidence output simultaneously.
The invention has the beneficial effects that:
1. compared with traditional extraction of local geometric feature descriptors, the method adopts a full-convolution neural network architecture to extract point cloud features, realizing real-time, fast generation of point cloud features.
2. the method explicitly predicts the overlap region of the point clouds with the help of the Transformer module and samples the points for the subsequent matching stage from the overlap region, greatly improving the accuracy of point cloud registration in low-overlap scenes.
3. the method adopts the GAC module to restrict the receptive field of the sparse convolution kernel to the overlap region, reducing the influence of non-overlap-region geometric information on the feature descriptors.
Drawings
In order to illustrate embodiments of the present invention or technical solutions in the prior art more clearly, the drawings which are needed in the embodiments will be briefly described below, so that the features and advantages of the present invention can be understood more clearly by referring to the drawings, which are schematic and should not be construed as limiting the present invention in any way, and for a person skilled in the art, other drawings can be obtained on the basis of these drawings without any inventive effort. Wherein:
FIG. 1 is a process diagram of the self-attention mechanism of the present invention;
FIG. 2 is a cross-attention mechanism process diagram of the present invention;
FIG. 3 is a schematic diagram of the GAC network mechanism of the present invention;
FIG. 4 is a flow chart of a method of the present invention;
fig. 5 is a diagram illustrating feature extraction robustness.
Detailed Description
In order that the above objects, features and advantages of the present invention can be more clearly understood, a more particular description of the invention will be rendered by reference to the appended drawings. It should be noted that the embodiments of the present invention and features of the embodiments may be combined with each other without conflict.
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present invention, however, the present invention may be practiced in other ways than those specifically described herein, and therefore the scope of the present invention is not limited by the specific embodiments disclosed below.
The invention particularly relates to a feature-matching low-coincidence point cloud registration method based on a sparse full-convolution neural network; it solves the object-level low-overlap point cloud registration problem arising from single-photon lidar scanning, and can be widely applied to other low-overlap scenes such as indoor scene point clouds and outdoor scanning point clouds.
The method can complete the point cloud registration task in low-overlap scenes. To address the difficulty of finding same-name point pairs when the overlap ratio is low, a self-attention mechanism lets the aggregation points perceive the point cloud as a whole, while a cross-attention mechanism explicitly mines overlap-region information and predicts, for every point in the cloud, the confidence that it lies in the overlap region; the point pairs used in the matching stage are then sampled from the overlap region by probability selection, improving the registration recall rate. In addition, the receptive field of the convolution kernel is dynamically restricted to the overlap region, which avoids extracting invalid geometric neighborhood information and improves the precision and accuracy of the point-wise features.
Specifically, as shown in fig. 4, a low-coincidence point cloud registration method includes the following steps:
s1: data preprocessing:
acquire high-density array point cloud data of the source point cloud P and the target point cloud Q with a single-photon lidar detection system, build an octree over the acquired point cloud data, and establish an index over the unordered point cloud to enable fast nearest-neighbor search;
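As an illustrative aid (not part of the claimed method), the spatial index of step S1 can be sketched as a minimal octree with bounding-box pruning; the class name, the leaf size, and the numpy-only implementation are assumptions of this sketch — a production system would use a tuned library implementation:

```python
import numpy as np

class Octree:
    """Minimal octree index over an unordered point cloud (step S1 sketch)."""

    def __init__(self, points, leaf_size=32):
        self.points = np.asarray(points, dtype=float)
        self.root = self._build(np.arange(len(self.points)), leaf_size)

    def _build(self, idx, leaf_size):
        pts = self.points[idx]
        lo, hi = pts.min(axis=0), pts.max(axis=0)
        if len(idx) <= leaf_size or np.all(hi == lo):
            return ("leaf", lo, hi, idx)
        center = (lo + hi) / 2.0
        octant = ((pts > center) * np.array([1, 2, 4])).sum(axis=1)  # 0..7
        children = [self._build(idx[octant == o], leaf_size)
                    for o in range(8) if np.any(octant == o)]
        return ("node", lo, hi, children)

    def radius_search(self, q, r):
        """Indices of all points within distance r of query q."""
        found, stack = [], [self.root]
        while stack:
            kind, lo, hi, payload = stack.pop()
            if np.any(q < lo - r) or np.any(q > hi + r):
                continue                    # bounding box cannot contain hits
            if kind == "leaf":
                d = np.linalg.norm(self.points[payload] - q, axis=1)
                found.extend(payload[d <= r].tolist())
            else:
                stack.extend(payload)
        return found

# Usage: index a synthetic cloud and run a radius query around one point.
rng = np.random.default_rng(0)
cloud = rng.uniform(0.0, 1.0, size=(2000, 3))
tree = Octree(cloud)
query = np.array([0.5, 0.5, 0.5])
hits = tree.radius_search(query, 0.1)
```

The box test prunes whole subtrees whose bounding volume lies farther than r from the query, which is what makes the unordered-cloud lookup fast.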
s2: extracting full convolution characteristics:
for point clouds with a spatially sparse characteristic, full-convolution aggregation is carried out point by point on the source point cloud P and the target point cloud Q to obtain the feature aggregation point coordinates representing each neighborhood and the corresponding high-dimensional feature vectors of the aggregation points; the specific process is as follows:
s2-1: the point cloud registration problem is modeled as: given a source point cloud P = {p_i} and a target point cloud Q = {q_j} in different coordinate systems, solve for the rotation matrix R ∈ SO(3) and the translation vector t ∈ ℝ³ that minimize the point-pair error:

$$\min_{R \in SO(3),\; t \in \mathbb{R}^{3}} \sum_{(p_i,\, q_j) \in \mathcal{K}} \left\| R\, p_i + t - q_j \right\|_{2}^{2}$$

where (p_i, q_j) ∈ 𝒦 denotes a same-name point pair of the source cloud P and the target cloud Q, which exists only in the overlap region of P and Q; SO(3) is the set of all rotation matrices about the origin of Euclidean space; and ℝ denotes the real numbers;
the overlap ratio between the two point clouds is defined as:

$$O(P, Q) = \frac{1}{|P|} \left| \left\{ p_i \in P \;:\; \left\| \mathrm{NN}\!\left( \bar{R}\, p_i + \bar{t},\, Q \right) - \left( \bar{R}\, p_i + \bar{t} \right) \right\|_{2} \le \tau \right\} \right|$$

where R̄ is the ground-truth rotation transformation from the source cloud P to the target cloud Q, t̄ is the ground-truth translation transformation from P to Q, τ is the distance threshold, and NN(·, Q) denotes the nearest-neighbor operation;
s2-2: extracting sparse full-convolution aggregation features;
replacing the traditional voxel-grid convolution with a sparse full convolution, the sparse convolution is defined as:

$$\mathbf{x}^{\mathrm{out}}_{u} = \sum_{i \in \mathcal{N}^{3}\left(u,\, \mathcal{C}^{\mathrm{in}}\right)} W_{i}\, \mathbf{x}^{\mathrm{in}}_{u+i} \qquad \text{for } u \in \mathcal{C}^{\mathrm{out}}$$

where u represents an input location; 𝒞^in and 𝒞^out are the sets of occupied input and output coordinates; N^in is the input dimension of the full-convolution neural network and N^out its output dimension; 𝒩³(u, 𝒞^in) is the set of offsets i, defined relative to the input u, for which u + i is occupied; x^out_u is the sparse convolution output; x^in_{u+i} is the input at relative offset i of the convolution output and exists if and only if u + i ∈ 𝒞^in; and W_i ∈ ℝ^{N^out × N^in} represents the convolution kernel parameters. The source point cloud P and the target point cloud Q are passed through hierarchical convolution and downsampling to generate the feature aggregation point coordinates representing each neighborhood and the high-dimensional feature aggregation vectors F^P and F^Q.
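The sparse convolution above can be illustrated with a toy numpy sketch in which features are stored only at occupied voxel coordinates and an output is produced only where the input is defined. The function name, the kernel layout (offsets × input dim × output dim), and the choice 𝒞^out = 𝒞^in are assumptions of this sketch, not the patent's actual network:

```python
import numpy as np

def sparse_conv(coords, feats, kernel, offsets):
    """Toy sparse convolution per step s2-2: y_u = sum_i W_i x_{u+i},
    where x_{u+i} contributes iff coordinate u+i is occupied."""
    table = dict(zip(map(tuple, coords.tolist()), feats))  # C_in lookup
    out = {}
    for u in table:                                        # C_out = C_in here
        acc = np.zeros(kernel.shape[-1])
        for W_i, off in zip(kernel, offsets):
            v = tuple(int(a + b) for a, b in zip(u, off))
            if v in table:                                 # skip empty voxels
                acc = acc + table[v] @ W_i
        out[u] = acc
    return out

# Usage: two adjacent occupied voxels interact; an isolated voxel only
# receives its own center-offset contribution.
coords = np.array([[0, 0, 0], [1, 0, 0], [5, 5, 5]])
feats = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
offsets = [(0, 0, 0), (1, 0, 0)]
kernel = np.stack([np.eye(2), 0.5 * np.eye(2)])  # W_0 identity, W_1 halved
y = sparse_conv(coords, feats, kernel, offsets)
```

Because the sum skips unoccupied coordinates, no dense voxel grid is ever materialized — the property that makes full-convolution feature extraction tractable on sparse clouds.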
S3: constructing an attention mechanism module to endow the full-convolution features with overlap-region perception capability; as shown in figs. 1-3, the specific process is as follows:
s3-1: based on the high-dimensional feature aggregation vectors, a DGCNN is adopted to enhance local information so that the source point cloud P and the target point cloud Q achieve whole-cloud perception;
in some embodiments, the step S3-1 is specifically performed as follows:
the dynamic graph convolution DGCNN takes the feature aggregation point coordinates and their high-dimensional vectors as input and enhances the feature information of the aggregation points; EdgeConv, the core step of DGCNN, is defined as:

$$x'_{i} = \mathrm{RELU}\!\left( \max_{j \in \mathcal{N}(i)} h_{\Theta}\!\left( x_{i},\; x_{j} - x_{i} \right) \right)$$

where for the input x_i a neighborhood set 𝒩(i) is constructed; h_Θ is composed of multi-layer perceptrons; x_i represents global information and x_j − x_i represents local neighborhood information. The significance of the dynamic graph update is that 𝒩(i) is no longer limited to Euclidean space but extends to the feature space, so in the hierarchical edge convolution the spatial definition of 𝒩(i) changes dynamically at every edge convolution and the feature neighborhood is rebuilt, letting the receptive field cover the whole point cloud while keeping the sparse characteristic; h_Θ(x_i, x_j − x_i) represents the mutual information of each edge in the feature-space structure, realized by a multi-layer perceptron structure; x'_i is the final output convolution result, and RELU represents the ramp activation function.
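A minimal numpy sketch may clarify the dynamic graph of EdgeConv: the k-nearest-neighbor graph is built in feature space, each edge feature [x_i ‖ x_j − x_i] passes through a single shared linear layer with RELU (standing in for the multi-layer perceptron h_Θ), and the result is max-pooled over the neighborhood. All names and sizes here are hypothetical:

```python
import numpy as np

def edge_conv(X, k, W):
    """One EdgeConv layer (step s3-1 sketch). X: (n, c) point features;
    W: (2c, c_out) shared weights applied to [x_i || x_j - x_i]."""
    n = len(X)
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)  # feature-space dists
    nbrs = np.argsort(d2, axis=1)[:, 1:k + 1]            # k-NN, excluding self
    out = np.zeros((n, W.shape[1]))
    for i in range(n):
        edges = np.hstack([np.tile(X[i], (k, 1)),        # global term x_i
                           X[nbrs[i]] - X[i]])           # local term x_j - x_i
        out[i] = np.maximum(edges @ W, 0.0).max(axis=0)  # RELU then max-pool
    return out

rng = np.random.default_rng(1)
X = rng.normal(size=(16, 4))    # 16 aggregation points, 4-dim features
W = rng.normal(size=(8, 6))     # maps [x_i || x_j - x_i] to 6-dim output
F = edge_conv(X, k=3, W=W)
```

Recomputing `nbrs` from the layer's own output at the next level is what makes the graph "dynamic": the neighborhood lives in feature space, not Euclidean space.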
S3-2: the high-dimensional feature aggregation vectors are passed through a self-attention module; a cross-attention Transformer module is then adopted to dynamically detect whether corresponding aggregation points with a similar feature space exist in the target point cloud, and the confidence that an aggregation point and its neighborhood lie in the overlap region is explicitly mined from this sequence-level semantic information;
In some embodiments, the specific process of step S3-2 is:
s3-2-1: adopting the Transformer, the kernel model of natural language processing (NLP), the point cloud is modeled as an information sequence with continuous features, and the high-dimensional feature aggregation vectors interact to mine spatial overlap information; the core operation of the Transformer module is defined as:

$$\mathrm{Attention}(Query,\, Key,\, Value) = \mathrm{softmax}\!\left( \frac{Query \cdot Key^{\top}}{\sqrt{d_{k}}} \right) \cdot Value$$

where the index Query, the key value Key, and the information Value are viewed, by analogy, as a database lookup operation; embodied in the point cloud registration problem, Query, Key, and Value are high-dimensional feature aggregation vectors, and d_k is the vector dimension of Query;
s3-2-2: for the information flow from the target point cloud Q to the source point cloud P, Query is the feature aggregation vector of P and Key is that of Q; the Transformer operation is viewed as first computing a cosine-like similarity in the feature space by taking the dot product of the index vector Query and the key value vector Key, with the vector dimension d_k as the scale parameter;
s3-2-3: the obtained similarity values are normalized by a softmax function into a probability distribution lying in 0-1 and summing to 1, and each probability is multiplied by the information Value corresponding to its Key, giving the cross-attention information Attention(Query, Key, Value) between the index Query and all key values Key, i.e. the cross information flow from the target point cloud Q to the source point cloud P; the reverse information flow from P to Q follows the same calculation rule;
s3-2-4: the overlap-region information is implicit in the cross-attention information, so one pass of cross-attention information flow is computed for the aggregation points of the source point cloud P and of the target point cloud Q respectively:

$$F^{P}_{\mathrm{mix}} = \mathrm{cat}\!\left[ F^{P} \,\|\, \mathrm{Attention}\!\left(F^{P},\, F^{Q},\, F^{Q}\right) \right], \qquad F^{Q}_{\mathrm{mix}} = \mathrm{cat}\!\left[ F^{Q} \,\|\, \mathrm{Attention}\!\left(F^{Q},\, F^{P},\, F^{P}\right) \right]$$

i.e. the obtained attention information is concatenated with the original high-dimensional feature aggregation vectors, where cat[·‖·] denotes the vector concatenation operation along the feature dimension; the resulting mixed features are passed through a multi-layer perceptron MLP to output the point-wise overlap-region confidence of the aggregation points.
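The cross-attention flow of steps s3-2-1 to s3-2-4 can be sketched as follows; here Value is taken equal to the Key features for brevity, and all array names and dimensions are assumptions of the sketch:

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def cross_attention(query, key, value):
    """Scaled dot-product attention: similarity of Query and Key scaled by
    sqrt(d_k), softmax-normalized over keys, then used to mix Value."""
    d_k = query.shape[-1]
    A = softmax(query @ key.T / np.sqrt(d_k))
    return A @ value, A

rng = np.random.default_rng(2)
FP = rng.normal(size=(10, 16))               # source aggregation features
FQ = rng.normal(size=(12, 16))               # target aggregation features
msgQ2P, A = cross_attention(FP, FQ, FQ)      # Q -> P information flow
mixed = np.concatenate([FP, msgQ2P], axis=1)  # cat[F^P || Attention(...)]
```

Each row of `A` is the probability distribution over the other cloud's aggregation points; feeding `mixed` to a small MLP would produce the per-point overlap confidence described above.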
S3-3: using the dynamic attention convolution network GAC together with the overlap confidence of each single aggregation point and its neighborhood, the convolution kernel is dynamically restricted to the predicted overlap region according to the confidence, reducing the proportion of invalid features in the matching stage;
the specific process of step S3-3 is:
s3-3-1: the high-dimensional feature aggregation vectors and the overlap-region confidence are obtained by forward propagation of the network; the dynamic attention network mechanism GAC is adopted to dynamically restrict the receptive field of the convolution kernel to the predicted overlap region, reducing the extraction of invalid geometric information from non-overlap regions that would pollute the point-wise features;
S3-3-2: the aggregation points at this stage are represented by an undirected graph G = (V, E), where V and E denote the vertices and undirected edges of the graph structure respectively and |V| is the number of vertices; N(v) denotes the set of neighborhood vertices of a vertex v of the undirected graph G;
S3-3-3: to simplify notation, let F denote the feature aggregation point high-dimensional vectors of point cloud P or Q, and b the feature dimension currently fed into the GAC network; using the overlap confidence as a prior selection probability, aggregation points inside the predicted overlap region are selected, and the graph convolution operation defined in GAC re-aggregates the coordinates and high-dimensional features of the overlap region, i.e., the graph convolution is performed only on the overlap region:
where MLP denotes a multi-layer perceptron, x_i is a center point with neighborhood points x_j, f_i and f_j are the intermediate-layer features of the center point and its neighbors respectively, plus a bias term; the graph convolution thus aggregates Euclidean coordinate information and feature space distance information within the neighborhood simultaneously, and because the feature space neighborhood is defined anew at every forward propagation, the receptive field is dynamically restricted.
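The dynamic receptive-field restriction can be illustrated with a toy layer (the confidence threshold tau = 0.5, the neighborhood size k = 8, and the random linear map standing in for the learned attention MLP are assumptions for illustration):

```python
import numpy as np

def gac_layer(coords, feats, conf, tau=0.5, k=8, seed=0):
    """Aggregate only over points predicted to lie in the overlap region,
    rebuilding the neighborhood in feature space on every call (dynamic field)."""
    rng = np.random.default_rng(seed)
    keep = np.flatnonzero(conf >= tau)            # restrict kernel to predicted overlap
    c, f = coords[keep], feats[keep]
    w = rng.normal(size=(3 + f.shape[1],)) * 0.1  # stand-in for learned MLP weights
    out = np.empty_like(f)
    for i in range(len(keep)):
        d = np.linalg.norm(f - f[i], axis=1)      # feature-space distances
        nb = np.argsort(d)[1:k + 1]               # k nearest neighbors, excluding self
        # attention input: Euclidean coordinate offset plus feature difference per edge
        e = np.concatenate([c[nb] - c[i], f[nb] - f[i]], axis=1)
        a = np.exp(e @ w); a /= a.sum()           # softmax attention over the neighborhood
        out[i] = (a[:, None] * f[nb]).sum(axis=0)
    return keep, out

rng = np.random.default_rng(1)
coords = rng.uniform(size=(200, 3))
feats = rng.normal(size=(200, 16))
conf = rng.uniform(size=200)
keep, out = gac_layer(coords, feats, conf, tau=0.5, k=8)
```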
S3-4: based on the output feature aggregation point high-dimensional vectors, high-dimensional information decoding is carried out with dilated sparse convolution, and the GAC network performs dynamic receptive field selection so as to extract effective information, obtaining the point-by-point features and the overlap region confidence.
In some embodiments, the specific process of step S3-4 is:
S3-4-1: the aggregation point features and the overlap region confidence are concatenated along the feature dimension to obtain a feature map of aggregation point features and overlap region confidence;
S3-4-2: the feature map is up-sampled with sparse dilated convolution whose parameters, i.e., convolution kernel size and stride, are chosen consistently with the down-sampling, so as to recover point-by-point features of the same size as the input point clouds P, Q while simultaneously outputting the point-by-point overlap region confidence.
In some embodiments, in step S3-4-2, the up-sampled output point-by-point features of P, Q are 32-dimensional compact features, output together with the point-by-point overlap region confidence.
S4: the method comprises the following steps of reversely solving a rigid body transformation matrix T through a rigid body transformation estimation module to realize the three-dimensional reconstruction of a complete scene, and specifically comprising the following steps:
S4-1: a certain number of candidate matching points are selected by a probability sorting method based on the point-by-point overlap confidence, and the robust outlier rejection method RANSAC is used to obtain accurate correspondences between homonymous point pairs in the high-dimensional feature space;
S4-2: based on the homonymous point pair correspondences, singular value decomposition SVD is used to back-solve the rotation and translation pose transformation, forming the rigid transformation matrix T;
S4-3: the relative pose transformation is applied to the target point cloud, transforming it into the source point cloud coordinate system, after which a point cloud density equalization operation yields the complete three-dimensional model.
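Step S4-2 is the standard SVD (Kabsch) back-solve; a NumPy sketch, assuming the homonymous point pairs are already matched (e.g., the RANSAC output), is:

```python
import numpy as np

def rigid_transform(src, dst):
    """Solve R, t minimizing sum ||R p_i + t - q_i||^2 over matched pairs via SVD."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)                        # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    S = np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)])   # guard against reflection
    R = Vt.T @ S @ U.T
    t = cd - R @ cs
    T = np.eye(4); T[:3, :3] = R; T[:3, 3] = t           # homogeneous rigid matrix T
    return T

# round-trip check with a known rotation and translation
rng = np.random.default_rng(1)
P = rng.normal(size=(50, 3))
ang = 0.4
R_true = np.array([[np.cos(ang), -np.sin(ang), 0.0],
                   [np.sin(ang),  np.cos(ang), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([0.3, -1.2, 2.0])
Q = P @ R_true.T + t_true
T = rigid_transform(P, Q)
```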
Research on low-overlap point cloud registration plays a key role in improving the overall three-dimensional imaging capability of single photon radar and in raising its application value.
For the convenience of understanding the above technical aspects of the present invention, the following detailed description will be given of the above technical aspects of the present invention by way of specific examples.
Example 1
To verify its effectiveness, the method is compared through experiments on public data sets with the latest point cloud registration algorithms, verifying its superiority over contemporary algorithms in practical application scenarios.
Data set preparation: the method is verified on the indoor data set 3DMatch commonly used by current point cloud registration algorithms, showing that it also applies to general registration at ordinary overlap; 3DMatch contains point cloud data of 62 different indoor scenes in total, of which 54 scenes serve as the training set and 8 as the validation set. Verification on the low-overlap data set 3DLoMatch shows superior performance in low-overlap scenes.
Evaluation indexes: the method belongs to the feature-matching class of point cloud registration algorithms, so evaluation mainly uses the Feature Matching Recall and the registration success rate (Registration Recall); Feature Matching Recall measures the descriptive power of the features produced by the feature extraction module, and the registration success rate represents the proportion of successfully registered point cloud pairs among all pairs. The two indexes are positively correlated: the higher the feature matching recall, the higher the registration success rate.
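The two indexes can be sketched as follows (the thresholds tau1 = 0.1 and tau2 = 0.05 are illustrative placeholders; published 3DMatch evaluation protocols fix their own values):

```python
import numpy as np

def inlier_ratio(src_kp, dst_kp, T_gt, tau1=0.1):
    """Fraction of putative correspondences within tau1 of their true position
    after applying the ground-truth 4x4 rigid transform T_gt."""
    warped = src_kp @ T_gt[:3, :3].T + T_gt[:3, 3]
    return float(np.mean(np.linalg.norm(warped - dst_kp, axis=1) < tau1))

def feature_matching_recall(per_pair_ratios, tau2=0.05):
    """Share of point-cloud pairs whose inlier ratio clears tau2."""
    return float(np.mean(np.asarray(per_pair_ratios) > tau2))

# identity-transform toy check: perfect matches give ratio 1.0
pts = np.random.default_rng(3).normal(size=(20, 3))
r = inlier_ratio(pts, pts, np.eye(4))
fmr = feature_matching_recall([r, 0.0])
```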
The specific experimental process comprises the following steps:
(1) Model training: the method is trained on the training set of the public data set 3DMatch with an SGD optimizer, learning rate 5e-3, on a TITAN X GPU; one training epoch takes 1.5 h, and the whole model converges within 40 epochs. The back-end RANSAC is implemented with Open3D (version 0.9.0).
(2) Model testing: the model is tested on the 3DMatch test set and the low-overlap data set 3DLoMatch to verify its effectiveness on both, and an ablation experiment is performed to verify the effectiveness of each module.
(3) Experimental results: the performance on the two evaluation indexes is shown in the following tables:
Table 1: model performance on 3DMatch
Table 2: model performance on 3DLoMatch
The above experimental results show that, both on the standard-overlap 3DMatch and on the low-overlap 3DLoMatch data set, the method improves on the indexes compared with current mainstream feature-matching point cloud registration algorithms, with a particularly large improvement in the low-overlap scene under the 1000-sampling-point condition.
As shown in Fig. 5, the left graph illustrates that as the inlier distance threshold increases, the feature matching recall of the method remains the most pronounced compared with the other methods. In the right graph the compared descriptors are: three-dimensional feature matching 3DMatch, fast point feature histogram FPFH, spin image feature SpinImage, orientation histogram feature SHOT, overlap-region kernel point convolution feature PREDATOR, sparse full convolution feature FCGF, kernel point convolution feature D3Feat, compact geometric feature CGF, point pair feature PPFNet and three-dimensional smooth feature 3DSmoothNet; as the inlier ratio threshold increases, the curve of the method decreases most slowly, indicating that the point pair features extracted by the model are markedly more robust than those of the other methods.
the ablation experiment is performed on an overlap area prediction module (overlap assessment module) and a receptive field restriction module (receptive field restriction module) in the method of the present invention, and the results on the registration success rate are as follows:
Table 3: ablation experiment
The ablation experiments show that the method has a marked advantage especially for registration in low-overlap scenes, while the registration indexes in ordinary-overlap scenes are also improved.
(4) Experimental conclusion: Table 1 shows that the invention still improves in standard-overlap scenes and is not limited to low-overlap registration, Table 2 shows that it exhibits a significant advantage in low-overlap registration, and Table 3 shows that the modules specifically designed for low-overlap scenes are effective. These conclusions illustrate the effectiveness of the invention.
The above description is only a preferred embodiment of the present invention and is not intended to limit the present invention, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.
Claims (9)
1. A low-coincidence point cloud registration method is characterized by comprising the following steps:
s1: data preprocessing:
the high-density array point cloud data of a source point cloud P and a target point cloud Q are acquired by a single photon radar detection system; an Octree is established on the existing point cloud data, building an index mechanism for the unordered point clouds to realize fast neighbor search;
s2: extracting full convolution characteristics:
for the spatially sparse point clouds, point-by-point full convolution aggregation is carried out on the source point cloud P and the target point cloud Q to obtain feature aggregation point coordinates representing each neighborhood and feature aggregation point high-dimensional feature vectors;
s3: constructing an attention mechanism module and endowing the sensing capability of the full convolution characteristic overlapping area;
s4: and reversely solving a rigid body transformation matrix T through a rigid body transformation estimation module to realize the three-dimensional reconstruction of the complete scene.
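The octree index of step S1 can be sketched in pure Python/NumPy (the dict-based nodes, the leaf size of 16, and the recursion guard are illustrative choices, not the patent's implementation):

```python
import numpy as np

class Octree:
    """Minimal octree over 3-D points supporting radius queries."""

    def __init__(self, points, leaf_size=16):
        self.pts = np.asarray(points, dtype=float)
        lo, hi = self.pts.min(axis=0), self.pts.max(axis=0)
        half = float((hi - lo).max()) / 2.0 + 1e-9
        self.root = self._build(np.arange(len(self.pts)), (lo + hi) / 2.0, half, leaf_size)

    def _build(self, idx, center, half, leaf_size):
        node = {"c": center, "h": half}
        if len(idx) <= leaf_size or half < 1e-6:   # stop on small or sparse cells
            node["idx"] = idx
            return node
        octant = (self.pts[idx] >= center) @ np.array([4, 2, 1])   # octant code 0..7
        node["kids"] = []
        for k in np.unique(octant):
            sign = np.array([(k >> 2) & 1, (k >> 1) & 1, k & 1]) * 2 - 1
            node["kids"].append(
                self._build(idx[octant == k], center + sign * half / 2.0, half / 2.0, leaf_size))
        return node

    def radius_search(self, q, r):
        q = np.asarray(q, dtype=float)
        hits, stack = [], [self.root]
        while stack:
            n = stack.pop()
            # prune cells whose cube lies entirely outside the query ball
            gap = np.maximum(np.abs(q - n["c"]) - n["h"], 0.0)
            if np.linalg.norm(gap) > r:
                continue
            if "idx" in n:
                d = np.linalg.norm(self.pts[n["idx"]] - q, axis=1)
                hits.extend(n["idx"][d <= r].tolist())
            else:
                stack.extend(n["kids"])
        return sorted(hits)

rng = np.random.default_rng(4)
cloud = rng.uniform(-1.0, 1.0, size=(500, 3))
tree = Octree(cloud)
found = tree.radius_search([0.0, 0.0, 0.0], 0.3)
```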
2. The low-coincidence point cloud registration method according to claim 1, wherein the specific process of the step S2 is as follows:
S2-1: the point cloud registration problem is modeled as: given a source point cloud P and a target point cloud Q under different coordinate systems, solve for the rotation matrix R ∈ SO(3) and the translation vector t ∈ R^3 minimizing the point pair error

    min_{R, t} Σ_i || R p_i + t − q_i ||²

where (p_i, q_i) denotes a homonymous point pair of the source point cloud P and the target point cloud Q, which exists only in the overlap region of P and Q; SO(3) is the set of all rotation matrices about the origin of Euclidean space, and R denotes the real numbers;
the degree of overlap between the two point clouds is defined as

    Overlap(P, Q) = (1 / |P|) · |{ p ∈ P : || NN_Q(R̄ p + t̄) − (R̄ p + t̄) || ≤ τ }|

where R̄ is the ground-truth rotation transform from the source point cloud P to the target point cloud Q, t̄ the ground-truth translation transform from P to Q, τ a distance threshold, and NN_Q(·) the nearest-neighbor operation in Q;
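The overlap definition above can be evaluated directly with a brute-force nearest-neighbor search; a NumPy sketch (the threshold tau = 0.1 and the toy clouds are assumptions for illustration):

```python
import numpy as np

def overlap_ratio(P, Q, R, t, tau=0.1):
    """Overlap(P, Q): fraction of points of P whose nearest neighbor in Q,
    after the ground-truth transform p -> R p + t, lies within tau."""
    warped = P @ R.T + t
    # brute-force nearest-neighbor distances (fine for small clouds)
    d = np.linalg.norm(warped[:, None, :] - Q[None, :, :], axis=-1)
    return float(np.mean(d.min(axis=1) <= tau))

rng = np.random.default_rng(5)
P = rng.uniform(size=(60, 3))
ov_same = overlap_ratio(P, P.copy(), np.eye(3), np.zeros(3))    # identical clouds
ov_none = overlap_ratio(P, P + 100.0, np.eye(3), np.zeros(3))   # far apart
```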
s2-2: extracting sparse full-volume aggregation joint characteristics;
the traditional voxel grid convolution is replaced by a sparse full convolution, defined as

    x_u^out = Σ_{i ∈ N(u)} W_i · x_{u+i}^in,  u ∈ C

where u denotes an input position, C the coordinate set on which features are defined, N^in and N^out the input and output dimensions of the full convolutional neural network, N(u) the set of offsets defined relative to the input u, x_u^out the sparse convolution output, x_{u+i}^in the input at relative offset i from the convolution output, and W_i the convolution kernel parameters; the input x_u^in is defined if and only if u ∈ C. The source point cloud P and the target point cloud Q generate, through hierarchical convolution and down-sampling, the feature aggregation point coordinates representing each neighborhood and the feature aggregation point high-dimensional vectors.
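The "defined if and only if the input exists" rule can be illustrated with a toy dictionary-based sparse convolution (the hash-map layout, the one-dimensional features, and the occupied-sites-only output convention are simplifying assumptions; production sparse tensor libraries use GPU hash maps):

```python
import numpy as np

def sparse_conv(coords, feats, kernel):
    """Toy sparse convolution on a 3-D integer grid: the output at site u is
    sum_i W_i @ x_{u+i}, where the sum runs only over offsets i for which the
    input x_{u+i} exists; outputs are produced at occupied input sites."""
    table = {tuple(c): np.asarray(f, dtype=float) for c, f in zip(coords, feats)}
    out = {}
    for u in table:
        acc = np.zeros(next(iter(kernel.values())).shape[0])
        for offset, W in kernel.items():
            x = table.get(tuple(np.add(u, offset)))
            if x is not None:                  # contribute only on occupied sites
                acc = acc + W @ x
        out[u] = acc
    return out

# tiny example: 2 occupied voxels, a 2-tap kernel, 1-D features
coords = [(0, 0, 0), (1, 0, 0)]
feats = [np.array([1.0]), np.array([2.0])]
kernel = {(0, 0, 0): np.array([[1.0]]),       # center tap
          (1, 0, 0): np.array([[0.5]])}       # +x neighbor tap
y = sparse_conv(coords, feats, kernel)
```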
3. The low-coincidence point cloud registration method according to claim 1 or 2, wherein the specific process of the step S3 is as follows:
S3-1: based on the feature aggregation point high-dimensional vectors, DGCNN is adopted to enhance local information so that the source point cloud P and the target point cloud Q can be perceived as a whole;
S3-2: the feature aggregation point high-dimensional vectors pass through a self-attention module; a cross attention module Transformer is adopted to dynamically detect whether corresponding aggregation points with a similar feature space exist in the target point cloud, and sequence-level semantic information is used to explicitly mine the confidence of whether each aggregation point and its neighborhood lie in the overlap region;
S3-3: using the graph attention convolution network GAC together with the overlap confidence of each aggregation point and its neighborhood, the convolution kernel is dynamically restricted to the predicted overlap region according to the confidence, reducing the proportion of invalid features in the matching stage;
S3-4: based on the output feature aggregation point high-dimensional vectors, high-dimensional information decoding is carried out with dilated sparse convolution, and the GAC network performs dynamic receptive field selection so as to extract effective information, obtaining the point-by-point features and the overlap region confidence.
4. The point cloud registration method with low coincidence degree according to claim 3, wherein the step S3-1 is implemented by:
the dynamic graph convolution DGCNN takes the feature aggregation point coordinates and their high-dimensional vectors as input to enhance the aggregation point feature information; the DGCNN core step, edge convolution EdgeConv, is defined as

    e_ij = ReLU(h_Θ(x_i, x_j − x_i)),  x_i' = max_{j ∈ N(i)} e_ij

where for an input x_i a neighborhood set N(i) is constructed, h is a multi-layer perceptron, x_i carries the global information and x_j − x_i the local neighborhood information; the significance of the dynamic graph update is that the neighborhood is no longer limited to Euclidean space but extends to the feature space, so that in the hierarchical edge convolution the spatial definition changes dynamically each time edge convolution is performed and the feature neighborhood is rebuilt, allowing the receptive field to cover the whole point cloud while retaining sparsity; e_ij represents the mutual information of each edge in the feature space structure, h_Θ the multi-layer perceptron structure; x_i' is the final output convolution result, and ReLU denotes the rectified linear unit activation function.
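The EdgeConv step can be sketched as follows (the single linear layer with ReLU standing in for the multi-layer perceptron h_Θ, and the layer sizes, are illustrative assumptions):

```python
import numpy as np

def edge_conv(x, k, h):
    """EdgeConv: for each point i, apply h to the edge features (x_i, x_j - x_i)
    over its k nearest neighbors in the current feature space, then max-pool;
    rebuilding the kNN graph per layer is what makes the graph 'dynamic'."""
    out = []
    for i in range(x.shape[0]):
        d = np.linalg.norm(x - x[i], axis=1)
        nb = np.argsort(d)[1:k + 1]            # kNN in feature space, excluding self
        edges = np.concatenate([np.repeat(x[i][None, :], len(nb), axis=0),
                                x[nb] - x[i]], axis=1)   # [global | local] per edge
        out.append(h(edges).max(axis=0))       # max aggregation over edges
    return np.stack(out)

# hypothetical single linear layer + ReLU standing in for the MLP h_theta
rng = np.random.default_rng(2)
W = rng.normal(size=(6, 16))
h = lambda e: np.maximum(e @ W, 0.0)
y = edge_conv(rng.normal(size=(30, 3)), k=5, h=h)
```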
5. The low-coincidence point cloud registration method according to claim 3, wherein the specific process of the step S3-2 is as follows:
S3-2-1: adopting the natural language processing kernel model Transformer from NLP, the point cloud is modeled as an information sequence with continuous features, and the feature aggregation point high-dimensional vectors carry out information interaction so as to mine spatial overlap information; the kernel operation of the Transformer module is defined as

    Attention(Query, Key, Value) = softmax(Query · Key^T / √d_k) · Value

where the index Query, key value Key and information Value are regarded, by analogy, as a database lookup operation embodied in the point cloud registration problem; Query, Key and Value are feature aggregation point high-dimensional vectors, and d_k is the vector dimension of the Query;
S3-2-2: for the information flow from the target point cloud Q to the source point cloud P, the Query is the feature of Q and the Key is the feature of P; the Transformer operation first computes cosine similarity in the feature space, taking the dot product of the index vector Query and the key value vector Key with the vector dimension √d_k as a scale parameter;
S3-2-3: the obtained similarity values are normalized by a softmax function into a probability distribution in [0, 1] summing to 1, and each probability is multiplied by the information Value corresponding to its Key, yielding the cross Attention information Attention(Query, Key, Value) of the Query against all Keys, i.e., the cross information flow from the target point cloud Q to the source point cloud P; the reverse information flow from P to Q follows the same calculation rule;
S3-2-4: since the overlap region information is implicit in the cross attention information, one round of cross attention information flow is computed for the aggregation points of the source point cloud P and of the target point cloud Q respectively:
the obtained attention information is concatenated with the original feature aggregation point high-dimensional vector, where cat[·|·] denotes the concatenation operation along the feature dimension; the resulting mixed feature is passed through a multi-layer perceptron MLP to output the point-by-point overlap region confidence of the aggregation points.
6. The low-coincidence point cloud registration method according to claim 3, wherein the specific process of the step S3-3 is as follows:
S3-3-1: the feature aggregation point high-dimensional vectors and the overlap region confidence are obtained by forward propagation of the network; the GAC dynamic attention mechanism is adopted to restrict the receptive field of the convolution kernel to the predicted overlap region, reducing the extent to which invalid geometric information from non-overlap regions pollutes the extracted point-by-point features;
S3-3-2: the aggregation points at this stage are represented by an undirected graph G = (V, E), where V and E denote the vertices and undirected edges of the graph structure respectively and |V| is the number of vertices; N(v) denotes the set of neighborhood vertices of a vertex v of the undirected graph G;
S3-3-3: to simplify notation, let F denote the feature aggregation point high-dimensional vectors of point cloud P or Q, and b the feature dimension currently fed into the GAC network; using the overlap confidence as a prior selection probability, aggregation points inside the predicted overlap region are selected, and the graph convolution operation defined in GAC re-aggregates the coordinates and high-dimensional features of the overlap region, i.e., the graph convolution is performed only on the overlap region:
where MLP denotes a multi-layer perceptron, x_i is a center point with neighborhood points x_j, f_i and f_j are the intermediate-layer features of the center point and its neighbors respectively, plus a bias term; the graph convolution thus aggregates Euclidean coordinate information and feature space distance information within the neighborhood simultaneously, and because the feature space neighborhood is defined anew at every forward propagation, the receptive field is dynamically restricted.
7. The low-coincidence point cloud registration method of claim 3, wherein the specific process of the step S3-4 is as follows:
S3-4-1: the aggregation point features and the overlap region confidence are concatenated along the feature dimension to obtain a feature map of aggregation point features and overlap region confidence;
S3-4-2: the feature map is up-sampled with sparse dilated convolution whose parameters, i.e., convolution kernel size and stride, are chosen consistently with the down-sampling, so as to recover point-by-point features of the same size as the input point clouds P, Q while simultaneously outputting the point-by-point overlap region confidence.
8. The low-coincidence point cloud registration method according to claim 1 or 2, wherein the specific process of the step S4 is as follows:
S4-1: a certain number of candidate matching points are selected by a probability sorting method based on the point-by-point overlap confidence, and the robust outlier rejection method RANSAC is used to obtain accurate correspondences between homonymous point pairs in the high-dimensional feature space;
S4-2: based on the homonymous point pair correspondences, singular value decomposition SVD is used to back-solve the rotation and translation pose transformation, forming the rigid transformation matrix T;
S4-3: the relative pose transformation is applied to the target point cloud, transforming it into the source point cloud coordinate system, after which a point cloud density equalization operation yields the complete three-dimensional model.
9. The low-coincidence point cloud registration method of claim 7, wherein in the step S3-4-2, the up-sampled output point-by-point features of P, Q are 32-dimensional compact features, output together with the point-by-point overlap region confidence.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111437345.XA CN113838109B (en) | 2021-11-30 | 2021-11-30 | Low-coincidence point cloud registration method |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113838109A true CN113838109A (en) | 2021-12-24 |
CN113838109B CN113838109B (en) | 2022-02-15 |
Family
ID=78971944
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111437345.XA Active CN113838109B (en) | 2021-11-30 | 2021-11-30 | Low-coincidence point cloud registration method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113838109B (en) |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114004871A (en) * | 2022-01-04 | 2022-02-01 | 山东大学 | Point cloud registration method and system based on point cloud completion |
CN114937122A (en) * | 2022-06-16 | 2022-08-23 | 黄冈强源电力设计有限公司 | Rapid three-dimensional model reconstruction method for cement fiberboard house |
CN115063459A (en) * | 2022-08-09 | 2022-09-16 | 苏州立创致恒电子科技有限公司 | Point cloud registration method and device and panoramic point cloud fusion method and system |
CN115631221A (en) * | 2022-11-30 | 2023-01-20 | 北京航空航天大学 | Low-overlapping-degree point cloud registration method based on consistency sampling |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110634161A (en) * | 2019-08-30 | 2019-12-31 | 哈尔滨工业大学(深圳) | Method and device for quickly and accurately estimating pose of workpiece based on point cloud data |
CN112150523A (en) * | 2020-09-24 | 2020-12-29 | 中北大学 | Three-dimensional point cloud registration method with low overlapping rate |
CN113160293A (en) * | 2021-05-13 | 2021-07-23 | 南京信息工程大学 | Complex scene ground station point cloud automatic registration method based on feature probability |
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110634161A (en) * | 2019-08-30 | 2019-12-31 | 哈尔滨工业大学(深圳) | Method and device for quickly and accurately estimating pose of workpiece based on point cloud data |
CN112150523A (en) * | 2020-09-24 | 2020-12-29 | 中北大学 | Three-dimensional point cloud registration method with low overlapping rate |
CN113160293A (en) * | 2021-05-13 | 2021-07-23 | 南京信息工程大学 | Complex scene ground station point cloud automatic registration method based on feature probability |
Non-Patent Citations (4)
Title |
---|
ADRIEN GRESSIN 等: "Towards 3D lidar point cloud registration improvement using optimal neighborhood knowledge", 《ISPRS JOURNAL OF PHOTOGRAMMETRY AND REMOTE SENSING》 * |
CONGCONG WEN 等: "Airborne LiDAR point cloud classification with global-local graph attention convolution neural network", 《ISPRS JOURNAL OF PHOTOGRAMMETRY AND REMOTE SENSING》 * |
HAOZHE CHENG 等: "PTANet: Triple Attention Network for point cloud semantic segmentation", 《ENGINEERING APPLICATIONS OF ARTIFICIAL INTELLIGENCE》 * |
李强: "基于多约束八叉树和多重特征的点云配准算法", 《中国优秀博硕士学位论文全文数据库(硕士) 信息科技辑(月刊)》 * |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114004871A (en) * | 2022-01-04 | 2022-02-01 | 山东大学 | Point cloud registration method and system based on point cloud completion |
CN114937122A (en) * | 2022-06-16 | 2022-08-23 | 黄冈强源电力设计有限公司 | Rapid three-dimensional model reconstruction method for cement fiberboard house |
CN115063459A (en) * | 2022-08-09 | 2022-09-16 | 苏州立创致恒电子科技有限公司 | Point cloud registration method and device and panoramic point cloud fusion method and system |
CN115063459B (en) * | 2022-08-09 | 2022-11-04 | 苏州立创致恒电子科技有限公司 | Point cloud registration method and device and panoramic point cloud fusion method and system |
CN115631221A (en) * | 2022-11-30 | 2023-01-20 | 北京航空航天大学 | Low-overlapping-degree point cloud registration method based on consistency sampling |
CN115631221B (en) * | 2022-11-30 | 2023-04-28 | 北京航空航天大学 | Low-overlapping-degree point cloud registration method based on consistency sampling |
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | ||
| SE01 | Entry into force of request for substantive examination | ||
| GR01 | Patent grant | ||