CN115880183A - Point cloud model repairing method, system, device and medium based on deep network - Google Patents


Info

Publication number: CN115880183A (application CN202211693396.3A); granted as CN115880183B
Authority: CN (China)
Prior art keywords: point cloud, original point, graph, characterizing, adjacency matrix
Original language: Chinese (zh)
Inventors: 柯建生, 王兵, 陈学斌, 戴振军
Assignee (current and original): Guangzhou Pole 3d Information Technology Co ltd
Application filed by Guangzhou Pole 3d Information Technology Co ltd
Priority: CN202211693396.3A
Legal status: Granted; Active

Abstract

The invention provides a deep-network-based point cloud model repair method, system, device, and medium. The method comprises the following steps: acquiring an original point cloud, extracting global feature information and local feature information from it, and integrating them to obtain the original point cloud features; learning, through an attention mechanism, a high-dimensional vector that characterizes the shape features; encoding the original point cloud with a spherical harmonic kernel function to obtain rotation-invariant features; aggregating the feature points within the topological-relation range of the original point cloud through a low-pass graph filter to obtain a graph adjacency matrix; and concatenating the high-dimensional vector with the rotation-invariant features and feeding the concatenated result, together with the graph adjacency matrix, into a multi-graph convolutional neural network for reconstruction into the target point cloud. The method preserves the rotational invariance, detail information, and topological relations of the repaired furniture geometric model, and, because a deep network is introduced, the repair process runs at a near-real-time speed. The method can be widely applied in the technical field of computer vision.

Description

Point cloud model repairing method, system, device and medium based on deep network
Technical Field
The invention relates to the technical field of computer vision, and in particular to a deep-network-based point cloud model repair method, system, device, and medium.
Background
The related art provides a number of methods in which an effective deep learning network is constructed for the repair task to reconstruct the point cloud. Through completion reconstruction, the missing part of the input point cloud is filled in, the density of sparse regions is increased, and a loss function ensures that noise points are not generated, or are generated as little as possible. Although deep-learning-based point cloud repair techniques have achieved considerable success, many problems and challenges remain.
For example, the receptive field of deep learning models in the related art is insufficient to obtain an accurate high-dimensional point cloud feature representation: because the receptive field of a conventional convolution model is limited by the size of its convolution kernel, it cannot integrate the global point-to-point context. As another example, rotation invariance of the input point cloud data cannot be effectively guaranteed; that is, different shape features are extracted from the same point cloud before and after a random rotation, so generalization to arbitrary orientations is poor. In addition, existing methods remain deficient at repairing or preserving detail information of the point cloud model, such as sharp features (edges and corners) and holes. Most techniques lack the ability to repair local details of the point cloud, and it is difficult to maintain the topological relations of a point cloud while repairing it.
Disclosure of Invention
In view of the above, to at least partially solve the above technical problems or disadvantages, an embodiment of the present invention provides a deep-network-based point cloud model repair method that fully exploits global context interaction information to obtain a more accurate repair result; the technical solution of the present application also provides a system, a device, and a medium corresponding to the method.
On one hand, the technical scheme of the application provides a point cloud model repairing method based on a deep network, which comprises the following steps:
acquiring original point cloud, extracting global characteristic information and local characteristic information from the original point cloud, and integrating to obtain original point cloud characteristics;
obtaining a high-dimensional vector for representing shape features through attention mechanism learning according to the original point cloud features;
carrying out specific coding on the original point cloud through a spherical harmonic kernel function to obtain rotation invariant characteristics;
aggregating the characteristic points in the topological relation range of the original point cloud through a low-pass graph filter to obtain a graph adjacency matrix;
and cascading the high-dimensional vectors and the rotation invariant features, and inputting a cascaded result and the graph adjacency matrix into a multi-graph convolution neural network for reconstruction to obtain a target point cloud.
In a feasible embodiment of the present application, the acquiring of an original point cloud, the extracting of global feature information and local feature information from the original point cloud, and the integrating of them to obtain the original point cloud features includes:
determining a weight score of candidate points in the original point cloud;
when the weight score is lower than a preset score, determining that the candidate point is a noise point, and removing the noise point to obtain the feature points of the original point cloud; the feature points are used to describe the global feature information and the local feature information.
In a possible embodiment of the present disclosure, in the step of obtaining a high-dimensional vector characterizing the shape features through attention-mechanism learning from the original point cloud features, the high-dimensional vector is calculated as:

F_Encode = SA(F_in)

where F_Encode characterizes the high-dimensional vector, F_in characterizes the original point cloud features, and SA characterizes the calculation process of the attention mechanism; the attention formula in the attention mechanism is:

Attention(Q, K, V) = ρ(Q·Kᵀ)·V

where ρ characterizes the relation function, and Q, K, V are the query, key, and value matrices derived from the input features.
In a possible embodiment of the present application, in the step of encoding the original point cloud through the spherical harmonic kernel function to obtain the rotation-invariant features, the encoding process is:

F_RotInv = P·B(n)

where F_RotInv characterizes the rotation-invariant features, P characterizes the features of the original point cloud, B(n) characterizes the basis of the spherical harmonic kernel function, and n characterizes the normal of the surface on which each feature point lies; the basis of the spherical harmonic kernel function consists of the real spherical harmonics up to second order evaluated at n:

B(n) = [Y₀⁰(n), Y₁⁻¹(n), Y₁⁰(n), Y₁¹(n), Y₂⁻²(n), …, Y₂²(n)]
in a possible embodiment of the present disclosure, in the step of obtaining the graph adjacency matrix by aggregating the feature points in the topological relation range of the original point cloud through the low-pass graph filter, a filtering process of the low-pass graph filter satisfies the following formula:
Figure BDA0004022276230000023
wherein A characterizes the graph adjacency matrix, h l Characterizing parameters of the low-pass map filter, L characterizing an order of the low-pass map filter, M characterizing a dimension of the map adjacency matrix, L characterizing a map convolution kernel order, and R characterizing a real number domain.
In a possible embodiment of the present disclosure, the aggregating, through a low-pass graph filter, of the feature points within the topological-relation range of the original point cloud to obtain the graph adjacency matrix includes:

determining an initial value of the graph adjacency matrix according to the neighbors of each feature point, a canonical two-dimensional lattice, and a hyperparameter decay rate; the initial value satisfies:

A⁽⁰⁾_ij = exp(−‖z_i − z_j‖² / σ) / Z_i  if z_j ∈ N_i,  and 0 otherwise

where A⁽⁰⁾_ij characterizes the initial value of the graph adjacency matrix, z_i characterizes the i-th node in the canonical two-dimensional lattice, σ characterizes the hyperparameter decay rate, N_i denotes the k-neighbors of z_i, and Z_i is a regularization term that satisfies:

Z_i = Σ_{z_j ∈ N_i} exp(−‖z_i − z_j‖² / σ)
in a possible embodiment of the present application, the concatenating the high-dimensional vector and the rotation invariant feature, and accurately inputting the concatenated result and the graph adjacency matrix into a multi-graph convolutional neural network to reconstruct to obtain a target point cloud includes:
constructing a constraint condition through a bulldozer distance error function and a chamfering distance error function, and performing parameter adjustment on the multiple graph convolution neural network through the constraint condition;
and outputting the reconstructed target point cloud through the multi-graph convolutional neural network after parameter adjustment.
In another aspect, the technical solution of the present application further provides a deep-network-based point cloud model repair system, the system including:

a feature extraction unit, configured to acquire an original point cloud, extract global feature information and local feature information from the original point cloud, and integrate them to obtain the original point cloud features; and to learn, through an attention mechanism and from the original point cloud features, a high-dimensional vector characterizing the shape features;

a feature encoding unit, configured to encode the original point cloud through a spherical harmonic kernel function to obtain the rotation-invariant features;

a feature filtering unit, configured to aggregate the feature points within the topological-relation range of the original point cloud through a low-pass graph filter to obtain the graph adjacency matrix; and

a repair output unit, configured to concatenate the high-dimensional vector with the rotation-invariant features and input the concatenated result and the graph adjacency matrix into a multi-graph convolutional neural network for reconstruction into the target point cloud.
In another aspect, the technical solution of the present application further provides a deep-network-based point cloud model repair device, the device including at least one processor and at least one memory for storing at least one program; when the at least one program is executed by the at least one processor, the at least one processor is caused to perform the deep-network-based point cloud model repair method described in the first aspect.
In another aspect, the present technical solution further provides a storage medium storing a processor-executable program which, when executed by a processor, performs the deep-network-based point cloud model repair method according to the first aspect.
Advantages and benefits of the invention will be set forth in part in the description which follows, and in part will be obvious from the description or may be learned by practice of the invention.

The technical solution repairs missing furniture geometric models with a deep network. In the process of decoding the encoded features during point cloud repair, a spherical harmonic kernel function and a low-pass graph filter are introduced to guarantee the rotational invariance of the decoding result and to maintain the topological relations of the original data. The solution largely preserves the rotational invariance, detail information, and topological relations of the repaired furniture geometric model, and, because a deep network is introduced, the repair process runs at a near-real-time speed.
Drawings
In order to illustrate the technical solutions in the embodiments of the present application more clearly, the drawings needed in the description of the embodiments are briefly introduced below. The drawings described below are only some embodiments of the present application; those skilled in the art can obtain other drawings based on these drawings without creative effort.
Fig. 1 is a flowchart illustrating steps of a point cloud model repairing method based on a deep network according to an embodiment of the present disclosure;
fig. 2 is a schematic structural diagram of a hybrid model provided in the technical solution of the present application.
Detailed Description
Reference will now be made in detail to embodiments of the present invention, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the accompanying drawings are illustrative only for the purpose of explaining the present invention and are not to be construed as limiting the present invention. The step numbers in the following embodiments are provided only for convenience of illustration, the order between the steps is not limited at all, and the execution order of each step in the embodiments can be adapted according to the understanding of those skilled in the art.
It should be noted that although functional blocks are partitioned in a schematic diagram of an apparatus and a logical order is shown in a flowchart, in some cases, the steps shown or described may be performed in a different order than the partitioning of blocks in the apparatus or the order in the flowchart. The terms first, second and the like in the description and in the claims, and the drawings described above, are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. The terminology used herein is for the purpose of describing embodiments of the present application only and is not intended to be limiting of the application.
In the related technical scheme, point cloud restoration technologies based on deep learning can be divided into three types, namely dense reconstruction, complete reconstruction and denoising reconstruction according to different emphasis points of restoration tasks.
Repair operations that focus on improving the resolution (density) and distribution uniformity of the points in a local point cloud patch are called dense reconstruction (also upsampling reconstruction or super-resolution reconstruction). The raw point clouds produced by depth cameras and lidar sensors are typically low-resolution or sparse, and their distribution is also non-uniform. The goal of dense reconstruction is, given a set of non-uniformly distributed, sparse point cloud coordinates, to output a denser, uniformly distributed point cloud while maintaining the shape (the underlying surface) of the target object. Related work includes the PU-Net architecture based on the PointNet++ structure; the multi-step progressive upsampling network, also called 3PU (3-Step Progressive Upsampling), a data-driven point cloud upsampling technique; and PU-GAN, a dense point cloud reconstruction framework that fuses techniques such as GCN, Dense GCN, Multi-branch GCN, Clone GCN, and Node Shuffle.
Point cloud repair operations that focus on three-dimensional shape completion are called completion reconstruction. The task of completion reconstruction is to recover a complete shape from incomplete or partial input data, which is of great value in three-dimensional reconstruction. The shape completion operation must satisfy three requirements: retain the details of the input point cloud, fill in the missing part with a detailed geometric structure, and generate uniformly distributed points on the object surface. A neural network for completion reconstruction usually consists of two parts, an encoder and a decoder. Related work includes the deep-learning-based Point Completion Network (PCN); the multilevel neural network structure acting directly on point clouds proposed by Huang et al.; a method that introduces an attention mechanism into the feature encoder of PCN; the TopNet model; the PF-Net model; a GAN-based point cloud completion network; a completion reconstruction algorithm based on a separated feature aggregation strategy; and a skip-attention network for three-dimensional point cloud completion.
Operations that focus on recovering a clean point set from noisy input while preserving the geometric details of the underlying object surface are called denoising reconstruction. The goal of point cloud denoising is to recover a clean point set from the noisy input while preserving the geometric details of the object surface. Related work includes the GraphPointNet model, the neural projection denoising model proposed by Duan et al., the PointCleanNet three-dimensional point cloud denoising model, the EC-Net model, the work of Luo et al., and the Non-Local-Part-Aware (NLPA) network structure.
As indicated by the background, although the point cloud repair technique based on deep learning has been highly successful, it still faces many problems and challenges.
Based on the defects and problems of the related art pointed out in the background, the technical solution of the present application repairs the missing furniture geometric model with a deep network. It can largely preserve the rotational invariance, detail information, and topological relations of the repaired furniture geometric model, and, because a deep network is introduced, the repair process runs at a near-real-time speed.
In a first aspect, as shown in fig. 1, the present application provides a point cloud model repairing method based on a deep network, where the method includes steps S01-S05:
s01, acquiring an original point cloud, extracting global feature information and local feature information from the original point cloud, and integrating to obtain original point cloud features;
specifically, in the embodiment, for the determination or deficiency existing in the related art scheme, the neighborhood sampling and self-attention technology is firstly introduced into the point cloud processing. In the model, because each point is only characterized in a traditional Multi-Layer Perception (MLP) structure, the integration capability of local structure information is too weak; the embodiment integrates the local information point by point and the neighborhood thereof by sampling and grouping the local neighborhood on the basis of the embodiment.
Furthermore, for point cloud feature extraction, existing models still use a convolutional neural network or a multilayer perceptron to process point clouds. Although both have been very successful on many tasks, they share a structural defect: the receptive field is not large enough to acquire global feature information. The defect is particularly obvious in point cloud feature extraction, where a large amount of coplanarity or long-chain correlation may exist between points, and global information is particularly important for a geometric model of missing furniture. This embodiment therefore combines a self-attention framework to extract global and local information simultaneously.
In addition, in some possible embodiments, step S01 of acquiring an original point cloud, extracting global feature information and local feature information from the original point cloud, and integrating them to obtain the original point cloud features may further include steps S011-S012:
S011, determining the weight score of candidate points in the original point cloud;
s012, determining that the weight score is lower than a preset score, determining that the candidate point is a noise point, and removing the noise point to obtain a feature point of the original point cloud; the feature points are used for describing the global feature information and the local feature information.
Specifically, in an embodiment an attention mechanism may be introduced to remove noise points: a weight or score is calculated for each point feature, with the aim of assigning a lower weight to unimportant points such as outliers or noise, and a higher weight to salient points that characterize the curved surface shape more effectively.
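The weight-score filtering of steps S011-S012 can be sketched as follows. This is a minimal illustration, not the patent's disclosed network: the scoring head here is a fixed random projection standing in for the learned attention scorer, which the patent does not detail.

```python
import numpy as np

def attention_denoise(points, features, keep_ratio=0.9):
    """Score each point with a softmax weight and drop the lowest-scoring
    points as presumed noise/outliers (steps S011-S012).

    The scoring head is a fixed random projection here, standing in for
    the learned attention scorer."""
    rng = np.random.default_rng(0)
    w = rng.standard_normal(features.shape[1])   # stand-in scoring head
    logits = features @ w
    scores = np.exp(logits - logits.max())
    scores /= scores.sum()                       # softmax weight per point
    keep = int(len(points) * keep_ratio)
    kept_idx = np.argsort(scores)[::-1][:keep]   # keep the highest weights
    return points[kept_idx], kept_idx

pts = np.random.default_rng(1).standard_normal((100, 3))
clean, kept_idx = attention_denoise(pts, pts, keep_ratio=0.9)
```

Here the "preset score" of S012 is expressed as a keep ratio rather than an absolute threshold; either formulation discards the lowest-weight candidate points.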
S02, learning through an attention mechanism according to the original point cloud characteristics to obtain a high-dimensional vector for representing shape characteristics;
Specifically, the embodiment inputs the features encoded after step S01 (S011-S012) into the self-attention module for calculation, so as to learn a shape-related high-dimensional representation:

F_Encode = SA(F_in)

where F_in and F_Encode respectively represent the input and output features and SA represents the attention module; the attention formula is:

Attention(Q, K, V) = ρ(Q·Kᵀ)·V

where ρ is a Relation Function and Q, K, V are the query, key, and value matrices derived from the input features. As shown in fig. 2, the embodiment finally obtains, through the hybrid model of fig. 2, a high-dimensional feature representation with shape awareness that combines global and local information, and extracts the point cloud representation required for the subsequent repair work.
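A minimal single-head sketch of F_Encode = SA(F_in). The choice of ρ as a row-wise softmax of the scaled dot product Q·Kᵀ is an assumption — the patent names ρ only abstractly — and the random projection matrices stand in for learned weights.

```python
import numpy as np

def self_attention(F_in, d_k=16, seed=0):
    """F_Encode = SA(F_in): single-head self-attention over point features.
    Q, K, V are linear projections of F_in; rho is a row-wise softmax of
    the scaled dot product (an assumed relation function)."""
    rng = np.random.default_rng(seed)
    n, d = F_in.shape
    Wq, Wk, Wv = (rng.standard_normal((d, d_k)) for _ in range(3))
    Q, K, V = F_in @ Wq, F_in @ Wk, F_in @ Wv
    logits = Q @ K.T / np.sqrt(d_k)              # pairwise relation scores
    rho = np.exp(logits - logits.max(axis=1, keepdims=True))
    rho /= rho.sum(axis=1, keepdims=True)        # softmax relation function
    return rho @ V                               # (n, d_k) encoded features

F_in = np.random.default_rng(2).standard_normal((50, 8))
F_encode = self_attention(F_in)
```

Because every output row attends over all input rows, each encoded feature carries the global point-to-point context that a fixed-size convolution kernel cannot capture.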
After obtaining the high-dimensional feature representation of the original point cloud, the embodiment provides a point cloud repair network to repair the original point cloud while solving the problems of existing models. In this process, the embodiment not only guarantees the rotation invariance and topological invariance of the point cloud, but also reconstructs the local detail information of the point cloud model with high quality and at high speed.
S03, specifically encoding the original point cloud through a spherical harmonic kernel function to obtain rotation invariant features;
Specifically, to maintain the rotational invariance of the repaired furniture geometric model to a greater extent, the model must first be encoded in a rotation-invariant way. In this embodiment, the original point cloud is encoded using a spherical harmonic kernel function, which may be expressed as:

F_RotInv = P·B(n)

where P (P_Orig) and F_RotInv respectively represent the original point cloud and the features after rotation-invariant encoding, n represents the surface normal at each point, and the basis of the second-order spherical harmonic kernel function is given by the real spherical harmonics up to order 2 evaluated at n:

B(n) = [Y₀⁰(n), Y₁⁻¹(n), Y₁⁰(n), Y₁¹(n), Y₂⁻²(n), …, Y₂²(n)]

Since the spherical harmonic kernel function itself is rotation invariant — that is, if the original points are rotated, the spherical harmonic coefficients need not change and the new feature basis is a linear combination of the original feature bases — we have:

F_RotInv = P′·B′(n)

In an embodiment, the high-dimensional features extracted in the first step are concatenated with the rotation-invariant features and used as the input to the decoder:

F_Input = Concatenate(F_Encode, F_RotInv)
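The encoding and concatenation can be sketched as below. The unnormalized second-order real spherical harmonic basis and the per-point outer-product form of P·B(n) are assumptions for illustration; the patent's exact basis and product convention are not reproduced here, and the stand-in F_Encode simply mimics the attention module's output shape.

```python
import numpy as np

def sh_basis(n):
    """Unnormalized real spherical harmonic basis up to order 2, evaluated
    at unit normals n of shape (N, 3). An assumed standard basis."""
    x, y, z = n[:, 0], n[:, 1], n[:, 2]
    return np.stack([np.ones_like(x),            # order 0
                     x, y, z,                    # order 1
                     x * y, y * z, x * z,        # order 2
                     x**2 - y**2, 3 * z**2 - 1], axis=1)

def rot_invariant_encode(points, normals):
    """F_RotInv = P . B(n): pair each point with the basis evaluated at its
    normal (outer product per point, flattened to one feature vector)."""
    B = sh_basis(normals)                        # (N, 9)
    return (points[:, :, None] * B[:, None, :]).reshape(len(points), -1)

rng = np.random.default_rng(6)
P = rng.standard_normal((40, 3))                 # original point cloud
n = rng.standard_normal((40, 3))
n /= np.linalg.norm(n, axis=1, keepdims=True)    # unit surface normals
F_rotinv = rot_invariant_encode(P, n)            # (40, 27)
F_encode = rng.standard_normal((40, 16))         # stand-in for SA output
F_input = np.concatenate([F_encode, F_rotinv], axis=1)   # decoder input
```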
s04, aggregating the characteristic points in the topological relation range of the original point cloud through a low-pass graph filter to obtain a graph adjacency matrix;
Specifically, during decoding and point cloud repair, the underlying surface of a set of three-dimensional points is piecewise smooth in the three-dimensional spatial domain, with discontinuities caused by curvature along the surface; in practice, however, the 3D coordinates should always vary smoothly along the curved surface. The embodiment therefore designs a low-pass graph filter, on top of the graph convolutional neural network, to filter the obtained high-dimensional features so as to preserve detail and topological relations while also filtering out noise.
The low-pass graph filter of an embodiment maintains smoothness of three-dimensional points on the graph by aggregating feature points within a range of topological relations while pushing the network to learn graph topological relations. Further, embodiments implement a filter based on a graph adjacency matrix with learnable parameters:
H(A) = Σ_{l=0}^{L} h_l·Aˡ,  A ∈ R^{M×M}

where h_l is a parameter of the filter, L is the order of the filter, M characterizes the dimension of the graph adjacency matrix, l characterizes the graph convolution kernel order, and R characterizes the real number domain. The larger the order, the larger the receptive field. The initial value of the learnable graph adjacency matrix A can be expressed as:

A⁽⁰⁾_ij = exp(−‖z_i − z_j‖² / σ) / Z_i  if z_j ∈ N_i,  and 0 otherwise

where z_i is the i-th node in the normalized two-dimensional lattice, σ is a hyperparameter decay rate, N_i denotes the k-neighbors of z_i, and Z_i is the regularization term

Z_i = Σ_{z_j ∈ N_i} exp(−‖z_i − z_j‖² / σ)

which ensures that each row of A⁽⁰⁾ sums to 1. Here, by considering the k-neighbors of each node, the embodiment introduces more connections to increase the receptive field and make the model easier to train.
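A sketch of the adjacency initialization and the polynomial low-pass graph filter, assuming a Gaussian kernel over each node's k-neighbors with Z_i normalizing each row to sum to one (our reading of the regularization term); the lattice nodes and filter coefficients are illustrative stand-ins for learned values.

```python
import numpy as np

def knn_gaussian_adjacency(z, k=4, sigma=1.0):
    """Initial adjacency A0 over lattice nodes z (N, 2): Gaussian kernel
    on each node's k nearest neighbours, row-normalised by Z_i."""
    d2 = ((z[:, None, :] - z[None, :, :]) ** 2).sum(-1)   # pairwise sq. dist
    A = np.zeros_like(d2)
    for i in range(len(z)):
        nbrs = np.argsort(d2[i])[1:k + 1]        # k nearest, excluding self
        w = np.exp(-d2[i, nbrs] / sigma)
        A[i, nbrs] = w / w.sum()                 # divide by Z_i
    return A

def low_pass_graph_filter(A, F, h):
    """Apply the polynomial filter sum_{l=0}^{L} h_l A^l to features F."""
    out = np.zeros_like(F)
    Al = np.eye(len(A))                          # A^0
    for hl in h:                                 # h = [h_0, ..., h_L]
        out += hl * (Al @ F)
        Al = Al @ A                              # next power of A
    return out

rng = np.random.default_rng(3)
z = rng.standard_normal((20, 2))                 # stand-in 2-D lattice
A0 = knn_gaussian_adjacency(z, k=4)
F = rng.standard_normal((20, 5))
F_smooth = low_pass_graph_filter(A0, F, h=[0.5, 0.3, 0.2])
```

Each application of A averages a node's features with those of its k-neighbors, so higher powers of A widen the receptive field while attenuating high-frequency (noisy) components along the graph.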
S05, concatenating the high-dimensional vector with the rotation-invariant features, and inputting the concatenated result and the graph adjacency matrix into a multi-graph convolutional neural network for reconstruction to obtain the target point cloud;
Specifically, in the embodiment, the final reconstructed points are obtained by applying the learned low-pass filter to the concatenated decoder features:

P_out = Σ_{l=0}^{L} h_l·Aˡ·F_Input

The embodiment uses this graph filter as one of the modules of the network and finally obtains the reconstructed point cloud through the multi-graph convolutional neural network.
In some possible embodiments, the concatenating the high-dimensional vector and the rotation invariant feature, and inputting the concatenated result and the graph adjacency matrix into a multi-graph convolutional neural network to reconstruct to obtain a target point cloud, may further include steps S051 to S052:
S051, constructing constraint conditions through an Earth Mover's Distance error function and a Chamfer Distance error function, and tuning the parameters of the multi-graph convolutional neural network through the constraint conditions;

S052, outputting the reconstructed target point cloud through the parameter-tuned multi-graph convolutional neural network.
Specifically, the embodiment constrains the network with an Earth Mover's Distance (EMD) error function L_EMD and a Chamfer Distance (CD) error function L_CD, which take their standard forms:

L_EMD(S₁, S₂) = min_{φ: S₁→S₂} (1/|S₁|) Σ_{x∈S₁} ‖x − φ(x)‖₂

L_CD(S₁, S₂) = (1/|S₁|) Σ_{x∈S₁} min_{y∈S₂} ‖x − y‖₂² + (1/|S₂|) Σ_{y∈S₂} min_{x∈S₁} ‖x − y‖₂²

where φ is a bijection between the two point sets S₁ and S₂.
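The Chamfer term can be computed directly from pairwise distances, as in this sketch; EMD additionally requires an optimal point-to-point assignment (e.g. the Hungarian algorithm) and is omitted here.

```python
import numpy as np

def chamfer_distance(S1, S2):
    """Symmetric Chamfer Distance between point sets S1 (N, 3) and
    S2 (M, 3): mean squared distance from each point to its nearest
    neighbour in the other set, summed over both directions."""
    d2 = ((S1[:, None, :] - S2[None, :, :]) ** 2).sum(-1)  # (N, M) pairwise
    return d2.min(axis=1).mean() + d2.min(axis=0).mean()

S = np.random.default_rng(5).standard_normal((30, 3))
zero = chamfer_distance(S, S)           # identical sets
shifted = chamfer_distance(S, S + 1.0)  # translated copy
```

Identical point sets give a Chamfer Distance of zero, while any displacement of the reconstruction away from the target produces a positive penalty, which is what makes CD usable as a training loss.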
In another aspect, the technical solution of the present application further provides a deep-network-based point cloud model repair system, the system including:

a feature extraction unit, configured to acquire an original point cloud, extract global feature information and local feature information from the original point cloud, and integrate them to obtain the original point cloud features; and to learn, through an attention mechanism and from the original point cloud features, a high-dimensional vector characterizing the shape features;

a feature encoding unit, configured to encode the original point cloud through a spherical harmonic kernel function to obtain the rotation-invariant features;

a feature filtering unit, configured to aggregate the feature points within the topological-relation range of the original point cloud through a low-pass graph filter to obtain the graph adjacency matrix; and

a repair output unit, configured to concatenate the high-dimensional vector with the rotation-invariant features and input the concatenated result and the graph adjacency matrix into a multi-graph convolutional neural network for reconstruction into the target point cloud.
In another aspect, the technical solution of the present application further provides a deep-network-based point cloud model repair device, the device including: at least one processor; and at least one memory for storing at least one program; when the at least one program is executed by the at least one processor, the at least one processor is caused to perform the deep-network-based point cloud model repair method according to the first aspect.
An embodiment of the invention further provides a storage medium storing a corresponding executable program; when the program is executed by a processor, the deep-network-based point cloud model repair method of the first aspect is implemented.
From the above implementation process, it can be concluded that, compared with the prior art, the technical solution provided by the invention has the following advantages:
1. The technical solution improves the encoder of the existing method: local neighborhoods are sampled and merged to obtain local fine-grained information, while a self-attention module captures the global point-to-point context.
2. In the process of decoding the encoded features for point cloud repair, the technical solution introduces a spherical harmonic kernel function and a low-pass graph filter to guarantee the rotation invariance of the decoding result and to maintain the topological relation of the original data.
Furthermore, although the present invention is described in the context of functional modules, it should be understood that, unless otherwise stated to the contrary, one or more of the functions and/or features may be integrated in a single physical device and/or software module, or one or more of the functions and/or features may be implemented in a separate physical device or software module. It will also be understood that a detailed discussion of the actual implementation of each module is not necessary for an understanding of the present invention. Rather, the actual implementation of the various functional modules in the apparatus disclosed herein will be understood within the ordinary skill of an engineer, given the nature, function, and internal relationship of the modules. Accordingly, those skilled in the art can, using ordinary skill, practice the invention as set forth in the claims without undue experimentation. It is also to be understood that the specific concepts disclosed are merely illustrative of and not intended to limit the scope of the invention, which is defined by the appended claims and their full scope of equivalents.
The logic and/or steps represented in the flowcharts or otherwise described herein, e.g., an ordered listing of executable instructions that can be considered to implement logical functions, can be embodied in any computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, processor-containing system, or other system that can fetch the instructions from the instruction execution system, apparatus, or device and execute the instructions.
In the description herein, references to the description of the term "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, the schematic representations of the terms used above do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
While embodiments of the invention have been shown and described, it will be understood by those of ordinary skill in the art that: various changes, modifications, substitutions and alterations can be made to the embodiments without departing from the principles and spirit of the invention, the scope of which is defined by the claims and their equivalents.
While the preferred embodiments of the present invention have been illustrated and described, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (10)

1. A point cloud model repair method based on a deep network, characterized by comprising the following steps:
acquiring an original point cloud, extracting global feature information and local feature information from the original point cloud, and integrating them to obtain original point cloud features;
obtaining a high-dimensional vector for characterizing shape features through attention mechanism learning according to the original point cloud features;
performing specific encoding on the original point cloud through a spherical harmonic kernel function to obtain rotation-invariant features;
aggregating feature points within the topological relation range of the original point cloud through a low-pass graph filter to obtain a graph adjacency matrix;
and cascading the high-dimensional vector and the rotation-invariant features, and inputting the cascaded result and the graph adjacency matrix into a multi-graph convolutional neural network for reconstruction to obtain a target point cloud.
2. The method for repairing a point cloud model based on a deep network according to claim 1, wherein the acquiring an original point cloud, extracting global feature information and local feature information from the original point cloud, and integrating them to obtain original point cloud features comprises:
determining a weight score of candidate points in the original point cloud;
determining that a candidate point whose weight score is lower than a preset score is a noise point, and removing the noise point to obtain the feature points of the original point cloud; the feature points are used for describing the global feature information and the local feature information.
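A minimal sketch of the noise-point removal described above, assuming the weight scores have already been produced upstream (the scoring network itself is not shown and its form is not disclosed here):

```python
import numpy as np

def remove_noise_points(points, scores, threshold):
    """Drop candidate points whose weight score falls below the preset
    threshold; the surviving points serve as the feature points."""
    keep = scores >= threshold
    return points[keep]
```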
3. The method according to claim 1, wherein in the step of obtaining the high-dimensional vector for characterizing shape features through attention mechanism learning according to the original point cloud features, the calculation formula of the high-dimensional vector is:
F_Encode = SA(F_in)
wherein F_Encode characterizes the high-dimensional vector, F_in characterizes the original point cloud features, and SA characterizes the calculation process of the attention mechanism; the attention calculation formula in the calculation process of the attention mechanism is:
Attention(Q, K, V) = ρ(Q·K^T)·V
wherein ρ characterizes a relation function, and Q, K and V characterize the query, key and value matrices derived from the input features.
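As a rough illustration of SA(·), the sketch below uses a row-wise softmax in place of the relation function ρ, whose exact form the claim leaves open; the projection matrices `Wq`, `Wk`, `Wv` are hypothetical:

```python
import numpy as np

def self_attention(F_in, Wq, Wk, Wv):
    """Single-head self-attention over point features F_in (N, d):
    softmax(Q K^T / sqrt(d)) V, a common stand-in for rho(Q K^T) V."""
    Q, K, V = F_in @ Wq, F_in @ Wk, F_in @ Wv
    logits = Q @ K.T / np.sqrt(K.shape[-1])
    weights = np.exp(logits - logits.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # rows sum to 1
    return weights @ V
```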
4. The method according to claim 1, wherein in the step of performing specific encoding on the original point cloud through the spherical harmonic kernel function to obtain the rotation-invariant features, the encoding process of the rotation-invariant features comprises:
F_RotInv = P·B(n)
wherein F_RotInv characterizes the rotation-invariant features, P characterizes the features of the original point cloud, B(n) characterizes the basis of the spherical harmonic kernel function, and n characterizes the surface normal at the feature point; the basis of the spherical harmonic kernel function satisfies the following formula:
B(n) = [Y_l^m(n)], l = 0, 1, …, L, −l ≤ m ≤ l
wherein Y_l^m denotes the spherical harmonic of degree l and order m.
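A generic way to evaluate a low-degree real spherical-harmonic basis B(n) on unit surface normals, together with one hypothetical reading of the product F_RotInv = P·B(n); the degree cutoff (l ≤ 1), real-valued convention, and the per-point outer-product pairing are assumptions, not the specific construction of the claim:

```python
import numpy as np

def sh_basis(normals):
    """Real spherical harmonics of degrees l = 0 and l = 1, evaluated on
    unit normals (N, 3). Columns: Y_0^0, Y_1^{-1}, Y_1^0, Y_1^1."""
    x, y, z = normals[:, 0], normals[:, 1], normals[:, 2]
    c0 = 0.5 * np.sqrt(1.0 / np.pi)       # Y_0^0 constant
    c1 = np.sqrt(3.0 / (4.0 * np.pi))     # scale for degree-1 terms
    ones = np.ones_like(x)
    return np.stack([c0 * ones, c1 * y, c1 * z, c1 * x], axis=-1)

def rot_inv_encode(P, normals):
    """Combine point features P (N, d) with the basis of each point's
    normal via a per-point outer product (hypothetical pairing)."""
    B = sh_basis(normals)
    return np.einsum('nd,nk->ndk', P, B).reshape(len(P), -1)
```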
5. The method for repairing a point cloud model based on a deep network according to claim 1, wherein in the step of aggregating feature points within the topological relation range of the original point cloud through the low-pass graph filter to obtain the graph adjacency matrix, the filtering process of the low-pass graph filter satisfies the following formula:
H(A) = Σ_{l=0}^{L} h_l·A^l, A ∈ R^{M×M}
wherein A characterizes the graph adjacency matrix, h_l characterizes the parameters of the low-pass graph filter, L characterizes the order of the low-pass graph filter, M characterizes the dimension of the graph adjacency matrix, l characterizes the graph convolution kernel order, and R characterizes the real number domain.
6. The method for repairing a point cloud model based on a deep network according to claim 5, wherein the step of aggregating feature points in a topological relation range of the original point cloud through a low-pass graph filter to obtain a graph adjacency matrix comprises the following steps:
determining an initial value of the graph adjacency matrix according to the neighboring points of the feature points, a canonical two-dimensional lattice, and a hyperparameter decay rate; the initial value satisfies the following formula:
A_ij^(0) = exp(−β‖z_i − z_j‖²)/Z_i if z_j ∈ N_i, and A_ij^(0) = 0 otherwise,
wherein A^(0) characterizes the initial value of the graph adjacency matrix, z_i characterizes the ith node in the canonical two-dimensional lattice, N_i denotes the k-neighbors of z_i, β characterizes the hyperparameter decay rate, and Z_i is a regular term that satisfies the following formula:
Z_i = Σ_{z_j ∈ N_i} exp(−β‖z_i − z_j‖²)
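Under the reading that the initial adjacency connects each lattice node to its k nearest neighbours with a distance-decayed, row-normalised weight (an assumption; the claim's exact formula is rendered only as an image in the original publication), a sketch of the initialisation together with the polynomial low-pass filtering of claim 5:

```python
import numpy as np

def init_adjacency(lattice, k=4, decay=1.0):
    """Connect each node of a 2D lattice (N, 2) to its k nearest
    neighbours with weight exp(-decay * d^2), then row-normalise
    (the row sums play the role of the regular term Z_i)."""
    d2 = ((lattice[:, None, :] - lattice[None, :, :]) ** 2).sum(-1)
    A = np.zeros_like(d2)
    for i in range(len(lattice)):
        nbrs = np.argsort(d2[i])[1:k + 1]   # skip the node itself
        A[i, nbrs] = np.exp(-decay * d2[i, nbrs])
    A /= A.sum(axis=1, keepdims=True)       # row-normalise
    return A

def low_pass_filter(A, h):
    """Polynomial graph filter sum_l h[l] * A^l of order len(h) - 1."""
    out = np.zeros_like(A)
    P = np.eye(len(A))
    for coeff in h:
        out += coeff * P
        P = P @ A
    return out
```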
7. The method for repairing a point cloud model based on a deep network according to any one of claims 1 to 6, wherein the step of cascading the high-dimensional vector and the rotation-invariant features and inputting the cascaded result and the graph adjacency matrix into the multi-graph convolutional neural network for reconstruction to obtain the target point cloud comprises:
constructing a constraint condition through an Earth Mover's Distance error function and a Chamfer Distance error function, and performing parameter adjustment on the multi-graph convolutional neural network through the constraint condition;
and outputting the reconstructed target point cloud through the parameter-adjusted multi-graph convolutional neural network.
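Of the two error functions named above, the Chamfer Distance is straightforward to sketch; the Earth Mover's Distance term, which requires solving an optimal matching, is omitted here:

```python
import numpy as np

def chamfer_distance(P, Q):
    """Symmetric Chamfer distance between point sets P (N, 3) and Q (M, 3):
    mean squared distance from each point to its nearest neighbour in the
    other set, summed over both directions."""
    d2 = ((P[:, None, :] - Q[None, :, :]) ** 2).sum(axis=-1)
    return d2.min(axis=1).mean() + d2.min(axis=0).mean()
```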
8. A point cloud model repair system based on a deep network, characterized by comprising:
a feature extraction unit, used for acquiring an original point cloud, extracting global feature information and local feature information from the original point cloud, and integrating them to obtain original point cloud features; and for learning through an attention mechanism according to the original point cloud features to obtain a high-dimensional vector for characterizing shape features;
a feature encoding unit, used for performing specific encoding on the original point cloud through a spherical harmonic kernel function to obtain rotation-invariant features;
a feature filtering unit, used for aggregating feature points within the topological relation range of the original point cloud through a low-pass graph filter to obtain a graph adjacency matrix;
and a repair output unit, used for cascading the high-dimensional vector and the rotation-invariant features, and inputting the cascaded result and the graph adjacency matrix into a multi-graph convolutional neural network for reconstruction to obtain a target point cloud.
9. A point cloud model repair device based on a deep network, characterized in that the device comprises:
at least one processor;
at least one memory for storing at least one program;
wherein, when the at least one program is executed by the at least one processor, the at least one processor is caused to perform the method of any one of claims 1 to 7.
10. A storage medium having stored therein a processor-executable program, wherein the processor-executable program, when executed by a processor, is configured to execute the method of point cloud model repair based on a deep network according to any one of claims 1 to 7.
CN202211693396.3A 2022-12-28 2022-12-28 Point cloud model restoration method, system, device and medium based on depth network Active CN115880183B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211693396.3A CN115880183B (en) 2022-12-28 2022-12-28 Point cloud model restoration method, system, device and medium based on depth network


Publications (2)

Publication Number Publication Date
CN115880183A true CN115880183A (en) 2023-03-31
CN115880183B CN115880183B (en) 2024-03-15

Family

ID=85755686

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211693396.3A Active CN115880183B (en) 2022-12-28 2022-12-28 Point cloud model restoration method, system, device and medium based on depth network

Country Status (1)

Country Link
CN (1) CN115880183B (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112529015A (en) * 2020-12-17 2021-03-19 深圳先进技术研究院 Three-dimensional point cloud processing method, device and equipment based on geometric unwrapping
US11488283B1 (en) * 2021-11-30 2022-11-01 Huazhong University Of Science And Technology Point cloud reconstruction method and apparatus based on pyramid transformer, device, and medium
CN115439694A (en) * 2022-09-19 2022-12-06 南京邮电大学 High-precision point cloud completion method and device based on deep learning




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant