CN115880183B - Point cloud model restoration method, system, device and medium based on depth network - Google Patents


Publication number: CN115880183B
Authority
CN
China
Prior art keywords
point cloud
original point
graph
feature
points
Prior art date
Legal status
Active
Application number
CN202211693396.3A
Other languages
Chinese (zh)
Other versions
CN115880183A (en)
Inventor
柯建生
王兵
陈学斌
戴振军
Current Assignee
Guangzhou Pole 3d Information Technology Co ltd
Original Assignee
Guangzhou Pole 3d Information Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Guangzhou Pole 3d Information Technology Co ltd filed Critical Guangzhou Pole 3d Information Technology Co ltd
Priority to CN202211693396.3A priority Critical patent/CN115880183B/en
Publication of CN115880183A publication Critical patent/CN115880183A/en
Application granted granted Critical
Publication of CN115880183B publication Critical patent/CN115880183B/en


Landscapes

  • Image Analysis (AREA)

Abstract

The invention provides a point cloud model restoration method, system, device, and medium based on a depth network. The method comprises the following steps: acquiring an original point cloud, extracting global feature information from it, and integrating local feature information to obtain original point cloud features; learning, through an attention mechanism, a high-dimensional vector that characterizes the shape features; encoding the original point cloud with a spherical harmonic kernel function to obtain rotation-invariant features; aggregating feature points within the topological relation range of the original point cloud through a low-pass graph filter to obtain a graph adjacency matrix; and cascading the high-dimensional vector with the rotation-invariant features and feeding the cascaded result, together with the graph adjacency matrix, into a multi-graph convolutional neural network for reconstruction into a target point cloud. The method preserves the rotation invariance, detail information, and topological relations of the repaired furniture geometric model, and, because a depth network is introduced, the repair process runs at near real-time speed. The method can be widely applied in the technical field of computer vision.

Description

Point cloud model restoration method, system, device and medium based on depth network
Technical Field
The invention relates to the technical field of computer vision, and in particular to a point cloud model restoration method, system, device, and medium based on a depth network.
Background
Related technical schemes provide a number of methods that construct an effective deep-learning network for a given repair task and use it to reconstruct the point cloud. Completion reconstruction fills in the missing parts of the input point cloud and increases the density of sparse regions, while a loss function ensures that no noise points are generated, or that their generation is reduced as much as possible. Although deep-learning-based point cloud repair techniques have achieved great success, many problems and challenges remain.
For example, the receptive field of the deep-learning models in the related schemes is insufficient to obtain an accurate high-dimensional point cloud feature representation: the receptive field of a traditional convolution model is limited by the size of its convolution kernel, so the contextual relations among all points cannot be integrated. For another example, the rotation invariance of the input point cloud data cannot be effectively guaranteed; that is, the same point cloud before and after an arbitrary rotation yields different shape features, so generalization to arbitrary orientations is poor. In addition, there are still shortcomings in repairing or preserving the detail information of the point cloud model, such as sharp edge features, corner points, and holes: most techniques lack the ability to repair local detail in the point cloud, and it is difficult to maintain its topological relations during repair.
Disclosure of Invention
In view of the above, in order to at least partially solve one of the above technical problems or drawbacks, an object of an embodiment of the present invention is to provide a point cloud model restoration method based on a depth network that makes full use of global context interaction information to obtain a more accurate depth value; the technical scheme of the application also provides a system, a device, and a medium corresponding to the method.
On the one hand, the technical scheme of the application provides a point cloud model restoration method based on a depth network, which comprises the following steps:
acquiring an original point cloud, extracting global feature information from the original point cloud, and integrating local feature information to obtain original point cloud features;
learning, according to the original point cloud features and through an attention mechanism, a high-dimensional vector for characterizing the shape features;
encoding the original point cloud through a spherical harmonic kernel function to obtain a rotation-invariant feature;
aggregating feature points within the topological relation range of the original point cloud through a low-pass graph filter to obtain a graph adjacency matrix;
and cascading the high-dimensional vector with the rotation-invariant feature, then inputting the cascaded result and the graph adjacency matrix into a multi-graph convolutional neural network for reconstruction to obtain a target point cloud.
In a possible embodiment of the present application, obtaining the original point cloud, extracting global feature information from it, and integrating local feature information to obtain the original point cloud features includes:
determining weight scores for candidate points in the original point cloud;
determining that a weight score is lower than a preset score, identifying the corresponding candidate point as a noise point, and eliminating the noise points to obtain the feature points of the original point cloud, where the feature points are used to describe the global feature information and the local feature information.
In a possible embodiment of the present application, in the step of learning, according to the original point cloud features, a high-dimensional vector for characterizing the shape features through an attention mechanism, the calculation formula of the high-dimensional vector is:
F_Encode = SA(F_in)
wherein F_Encode characterizes the high-dimensional vector, F_in characterizes the original point cloud features, and SA characterizes the calculation process of the attention mechanism. With Q, K, and V obtained from F_in by learned linear projections, the attention calculation formula in the calculation process of the attention mechanism is:
SA(F_in) = ρ(Q·K^T / √d_k)·V
wherein ρ represents the relation representation function (typically a softmax normalization) and d_k is the key dimension.
In a possible embodiment of the present application, in the step of encoding the original point cloud through a spherical harmonic kernel function to obtain the rotation-invariant feature, the encoding process of the rotation-invariant feature is:
F_RotInv = P·B(n)
wherein F_RotInv characterizes the rotation-invariant feature, P characterizes the original point cloud, B(n) characterizes the basis of the spherical harmonic kernel function, and n characterizes the surface normal at the feature point. Up to second order, with n = (n_x, n_y, n_z), the basis of the spherical harmonic kernel function satisfies (up to per-component normalization constants):
B(n) = (1, n_x, n_y, n_z, n_x·n_y, n_y·n_z, n_z·n_x, n_x² − n_y², 3n_z² − 1)
in a possible embodiment of the present application, in the step of aggregating the feature points in the topological relation range of the original point cloud by using a low-pass graph filter to obtain the graph adjacency matrix, a filtering process of the low-pass graph filter satisfies the following formula:
wherein A characterizes the graph adjacency matrix, h l And (3) representing parameters of the low-pass graph filter, wherein L represents the order of the low-pass graph filter, M represents the dimension of the graph adjacency matrix, L represents the graph convolution kernel order, and R represents the real number domain.
In a possible embodiment of the present application, the aggregating, by a low-pass graph filter, feature points within the topological relation range of the original point cloud to obtain a graph adjacency matrix includes:
determining an initial value of the graph adjacency matrix according to the neighboring points of the feature points, the canonical two-dimensional lattice, and the hyper-parameter decay rate σ; the initial value satisfies the following formula:
Â_ij = exp(−‖z_i − z_j‖² / σ) / Z_i if z_j ∈ N_i, and Â_ij = 0 otherwise
wherein Â characterizes the initial value of the graph adjacency matrix, z_i characterizes the i-th node in the canonical two-dimensional lattice, N_i characterizes the k-nearest neighbors of z_i, and Z_i is a regularization term; the regularization term satisfies the following formula:
Z_i = Σ_{z_j ∈ N_i} exp(−‖z_i − z_j‖² / σ)
so that each row of Â sums to 1.
in a possible embodiment of the present application, the concatenating the high-dimensional vector and the rotation invariant feature, inputting the concatenated result and the graph adjacency matrix to a multiple graph convolutional neural network to reconstruct to obtain a target point cloud, includes:
constructing constraint conditions through a bulldozer distance error function and a chamfer distance error function, and carrying out parameter adjustment on the multiple graph convolutional neural network through the constraint conditions;
and outputting the reconstructed target point cloud through the multiple-graph convolution neural network after parameter adjustment.
On the other hand, the technical scheme of the application also provides a point cloud model repairing system based on a depth network, and the system comprises:
the feature extraction unit is used for obtaining an original point cloud, extracting global feature information and local feature information from the original point cloud, and integrating the global feature information and the local feature information to obtain original point cloud features; learning according to the original point cloud features through an attention mechanism to obtain a high-dimensional vector used for representing the shape features;
the feature encoding unit is used for encoding the original point cloud through a spherical harmonic kernel function to obtain a rotation-invariant feature;
the feature filtering unit is used for aggregating feature points within the topological relation range of the original point cloud through a low-pass graph filter to obtain a graph adjacency matrix;
and the restoration output unit is used for cascading the high-dimensional vector with the rotation-invariant feature, and inputting the cascaded result and the graph adjacency matrix into a multi-graph convolutional neural network for reconstruction to obtain a target point cloud.
On the other hand, the technical scheme also provides a point cloud model repairing device based on the depth network, which comprises at least one processor; at least one memory for storing at least one program; the at least one program, when executed by the at least one processor, causes the at least one processor to run the depth network based point cloud model restoration method as described in the first aspect.
In another aspect, a storage medium is further provided, in which a processor executable program is stored, where the processor executable program is configured to perform the depth network based point cloud model restoration method according to any one of the first aspects when executed by a processor.
Advantages and benefits of the invention will be set forth in part in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention:
According to the technical scheme of the application, the incomplete furniture geometric model is repaired in combination with a depth network. In the process of decoding the encoded features during point cloud repair, a spherical harmonic kernel function and a low-pass graph filter are introduced to guarantee the rotation invariance of the decoding result and to maintain the topological relations of the original data. The scheme can thus largely preserve the rotation invariance, detail information, and topological relations of the repaired furniture geometric model, and, because a depth network is introduced, the repair process runs at a near real-time speed.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are needed in the description of the embodiments will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a step flowchart of a point cloud model repairing method based on a depth network provided in the technical scheme of the present application;
fig. 2 is a schematic structural diagram of a hybrid model provided in the technical solution of the present application.
Detailed Description
Embodiments of the present invention are described in detail below, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to like or similar elements or elements having like or similar functions throughout. The embodiments described below by referring to the drawings are illustrative only and are not to be construed as limiting the invention. The step numbers in the following embodiments are set for convenience of illustration only, and the order between the steps is not limited in any way, and the execution order of the steps in the embodiments may be adaptively adjusted according to the understanding of those skilled in the art.
It should be noted that although functional block division is performed in a device diagram and a logic sequence is shown in a flowchart, in some cases, the steps shown or described may be performed in a different order than the block division in the device, or in the flowchart. The terms first, second and the like in the description and in the claims and in the above-described figures, are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. The terminology used herein is for the purpose of describing embodiments of the present application only and is not intended to be limiting of the present application.
In the related technical schemes, deep-learning-based point cloud repair techniques can be divided into three types according to the emphasis of the repair task: dense reconstruction, completion reconstruction, and denoising reconstruction.
Repair operations that focus on improving the resolution (density) and distribution uniformity of points within a local point cloud are called dense reconstruction (also upsampling reconstruction or super-resolution reconstruction). The raw point clouds produced by depth cameras and lidar sensors are typically low-resolution or sparse, and their point distribution is non-uniform. The goal of dense reconstruction is to output a denser, uniformly distributed point cloud from a given set of non-uniformly distributed, sparse point cloud coordinates while maintaining the shape (the underlying surface) of the target object. Related works mainly include the PU-Net architecture based on PointNet++, the multi-step progressive upsampling network 3PU (3-Step Progressive Upsampling), data-driven point cloud upsampling techniques, point cloud reconstruction incorporating GCN variants such as dense GCN, multi-branch GCN, clone GCN, and NodeShuffle, and the dense point cloud reconstruction framework PU-GAN.
The point cloud repair operation focused on three-dimensional shape completion is called completion reconstruction. The task of completion reconstruction is to recover a complete shape from incomplete or partial input data, which is of great value in three-dimensional reconstruction. The shape completion operation must meet three requirements: preserving the details of the input point cloud, filling in the missing parts with detailed geometric structure, and generating evenly distributed points on the object surface. A neural network for completion reconstruction generally consists of two parts, an encoder and a decoder. Related works include the deep-learning-based Point Completion Network (PCN), the multi-level network structure acting directly on point clouds proposed by Huang et al., methods introducing an attention mechanism into the PCN feature encoder, the TopNet and PF-Net models, GAN-based point cloud completion networks, completion reconstruction algorithms based on separated feature aggregation strategies, and skip-attention networks for three-dimensional point cloud completion.
Repair operations that focus on recovering a clean set of points from noisy input while preserving the geometric details of the underlying object surface are called denoising reconstruction. Related works include the graph PointNet model, the neural projection denoising model proposed by Duan et al., PointCleanNet, the EC-Net model, the three-dimensional point cloud denoising model proposed by Luo et al., and the Non-Local-Part-Aware (NLPA) network structure.
As noted in the background, while deep learning based point cloud repair techniques have met with great success, many problems and challenges remain.
Based on the defects or problems in the related technical schemes pointed out in the background art, the technical scheme of the application repairs the incomplete furniture geometric model in combination with a depth network, so that the rotation invariance, detail information, and topological relations of the repaired furniture geometric model can be largely maintained, and, because a depth network is introduced, the repair process runs at a near real-time speed.
In a first aspect, as shown in fig. 1, the technical solution of the present application provides a method for repairing a point cloud model based on a depth network, where the method includes steps S01-S05:
s01, acquiring an original point cloud, extracting global feature information and local feature information from the original point cloud, and integrating the global feature information and the local feature information to obtain original point cloud features;
Specifically, in an embodiment, to address the deficiencies in the related technical solutions, neighborhood sampling and self-attention techniques are first introduced into the point cloud processing. In a traditional multi-layer perceptron (MLP) structure, each point is characterized independently, so the ability to integrate local structural information is too weak; on this basis, the embodiment integrates the point-wise features with the local information of their neighborhoods by sampling and grouping local neighborhoods.
Furthermore, for point cloud feature extraction, existing models still use convolutional neural networks or multi-layer perceptrons to process point clouds. Although both have achieved great success on many tasks, they have structural shortcomings: their receptive field is insufficient to acquire global feature information. In the point cloud feature extraction task this defect is particularly pronounced, since a large number of coplanar or long-range correlations can exist among points, and such global information is especially important for an incomplete furniture geometric model. The embodiment therefore proposes combining a self-attention framework to extract global and local information simultaneously.
In addition, in some possible embodiments, in the step S01 of obtaining an original point cloud, extracting global feature information from the original point cloud and integrating local feature information to obtain an original point cloud feature, the embodiment may further include steps S011-S012:
s011, determining weight scores of candidate points in the original point cloud;
s012, determining that the weight score is lower than a preset score, determining that the candidate point is a noise point, and eliminating the noise point to obtain a characteristic point of the original point cloud; the feature points are used for describing the global feature information and the local feature information.
Specifically, in the embodiment, an attention mechanism may be introduced to remove noise points: a weight or score is computed for each point feature, assigning a lower weight to unimportant outliers or noisy points and a higher weight to important points that more effectively represent the shape of the curved surface.
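The scoring-and-filtering step above can be sketched as follows. This is a minimal illustration, not the patent's network: the softmax-over-feature-magnitude scoring rule and the threshold value are assumptions introduced only to show how low-weight candidates would be dropped as noise.

```python
import numpy as np

def filter_noise_points(points, features, threshold=0.3):
    """Score each candidate point with a softmax over its feature magnitude
    and drop low-scoring points as noise. Scoring rule and threshold are
    illustrative assumptions; the patent does not specify the attention net."""
    raw = np.linalg.norm(features, axis=1)   # per-point importance proxy
    exp = np.exp(raw - raw.max())
    scores = exp / exp.sum()                 # softmax weight scores
    scores = scores / scores.mean()          # mean-normalise to ~1.0
    keep = scores >= threshold               # preset score threshold
    return points[keep], scores
```

A point whose feature response is far weaker than its neighbours receives a score well below the mean and is removed before the feature points are passed on.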
S02, learning, according to the original point cloud features and through an attention mechanism, a high-dimensional vector for characterizing the shape features;
Specifically, in the embodiment, the features encoded after step S01 (S011-S012) are passed to the self-attention module for calculation, so as to learn a high-dimensional representation related to the shape:
F_Encode = SA(F_in)
wherein F_in and F_Encode represent the input and output features respectively, and SA represents the attention module. With Q, K, and V obtained from F_in by learned linear projections, the attention calculation formula is:
SA(F_in) = ρ(Q·K^T / √d_k)·V
where ρ is the relation function (Relation Function), typically a softmax normalization, and d_k is the key dimension. As shown in fig. 2, through the hybrid model of fig. 2 the embodiment finally obtains a high-dimensional feature representation that combines global and local information and is shape-aware, and extracts the point cloud representation required for the subsequent repair work.
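A minimal single-head version of the self-attention computation can be sketched as below. The row-softmax standing in for the relation function ρ, and the externally supplied projection matrices Wq/Wk/Wv, are assumptions; the patent's exact SA block is not reproduced in the source.

```python
import numpy as np

def self_attention(F_in, Wq, Wk, Wv):
    """Single-head self-attention over per-point features F_in of shape (N, d),
    implementing F_Encode = SA(F_in) with a row softmax as the relation
    function rho."""
    Q, K, V = F_in @ Wq, F_in @ Wk, F_in @ Wv
    d_k = K.shape[1]
    logits = (Q @ K.T) / np.sqrt(d_k)           # pairwise point relations
    logits -= logits.max(axis=1, keepdims=True) # numerical stability
    rho = np.exp(logits)
    rho /= rho.sum(axis=1, keepdims=True)       # rho: softmax normalisation
    return rho @ V                              # globally aggregated features
```

Because every output row mixes all N input points, the receptive field covers the whole cloud, which is the point of replacing the fixed-kernel convolution.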
After obtaining the high-dimensional feature representation of the original point cloud, the embodiment proposes a point cloud restoration network to repair the original point cloud while solving the problems of existing models. In this process, the embodiment not only guarantees the rotation invariance and topological invariance of the point cloud, but also reconstructs the local detail information of the point cloud model with high quality and at high speed.
S03, encoding the original point cloud through a spherical harmonic kernel function to obtain a rotation-invariant feature;
Specifically, in an embodiment, in order to maintain to a greater extent the rotation invariance of the repaired furniture geometric model, the furniture geometric model first needs to be encoded in a rotation-invariant way. The embodiment uses the spherical harmonic kernel function to encode the original point cloud, which can be expressed as:
F_RotInv = P·B(n)
wherein P (P_Orig) and F_RotInv represent the original point cloud and the features after rotation-invariance encoding respectively, n represents the surface normal at the point, and, with n = (n_x, n_y, n_z), the basis of the second-order spherical harmonic kernel function is given (up to per-component normalization constants) by:
B(n) = (1, n_x, n_y, n_z, n_x·n_y, n_y·n_z, n_z·n_x, n_x² − n_y², 3n_z² − 1)
Since the spherical harmonic kernel function itself has rotation invariance, that is, if the original points rotate, the coefficients of the spherical harmonics need not change, the new feature basis is a linear combination of the original feature basis, i.e.:
F_RotInv = P′·B′(n)
In an embodiment, the high-dimensional features extracted in the first step are concatenated with the rotation-invariant features and used as the input to the decoder:
F_Input = Concatenate(F_Encode, F_RotInv)
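The invariance property that the spherical-harmonic encoding relies on can be demonstrated numerically. The sketch below is a toy illustration, not the patent's F_RotInv = P·B(n) construction: it projects the point cloud onto the standard real spherical harmonics up to degree 2 and takes the per-degree power spectrum, which is unchanged by any global rotation.

```python
import numpy as np

def real_sh_basis(u):
    """Real spherical harmonics of a unit vector u up to degree 2, with the
    standard normalisation constants."""
    x, y, z = u
    return np.array([
        0.282095,                                      # l = 0
        0.488603 * y, 0.488603 * z, 0.488603 * x,      # l = 1
        1.092548 * x * y, 1.092548 * y * z,
        0.315392 * (3 * z * z - 1.0),
        1.092548 * z * x, 0.546274 * (x * x - y * y),  # l = 2
    ])

def sh_power_spectrum(P):
    """Rotation-invariant descriptor of a point cloud P of shape (N, 3):
    radius-weighted spherical harmonic coefficients, reduced to their
    per-degree power spectrum."""
    r = np.linalg.norm(P, axis=1)
    U = P / r[:, None]                                  # unit directions
    C = (r[:, None] * np.vstack([real_sh_basis(u) for u in U])).sum(axis=0)
    return np.array([C[0] ** 2, (C[1:4] ** 2).sum(), (C[4:9] ** 2).sum()])
```

Rotating P by any rotation matrix leaves sh_power_spectrum(P) unchanged, because within each degree the coefficients transform by an orthogonal matrix, so their norm is preserved.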
s04, aggregating characteristic points in the topological relation range of the original point cloud through a low-pass graph filter to obtain a graph adjacency matrix;
Specifically, in embodiments, the underlying surface of a three-dimensional point set is piecewise smooth in the three-dimensional spatial domain, which leads to discontinuities along the surface curvature, particularly during decoding and point cloud repair; in practice, however, the 3D coordinates should always vary smoothly along the curved surface. The embodiment therefore designs a low-pass graph filter, applied on top of the graph convolutional neural network, to filter the obtained high-dimensional features, so as to preserve detail and topological relations while also filtering out noise.
The low-pass graph filter of the embodiment aggregates the feature points within the topological relation range and thereby also encourages the network to learn the topological relations of the graph, keeping the three-dimensional points smooth over the graph. Further, the embodiment implements the filter based on a graph adjacency matrix with learnable parameters:
h(A) = Σ_{l=0}^{L} h_l·A^l, A ∈ R^{M×M}
wherein h_l are the parameters of the filter, L is the order of the filter (the graph convolution kernel order), M represents the dimension of the graph adjacency matrix, and R represents the real number field. The larger the order, the larger the receptive field. The graph adjacency matrix A is learnable, and its initial value can be expressed as:
Â_ij = exp(−‖z_i − z_j‖² / σ) / Z_i if z_j ∈ N_i, and Â_ij = 0 otherwise
wherein z_i is the i-th node in the canonical two-dimensional lattice, σ is a hyper-parameter decay rate, N_i represents the k-nearest neighbors of z_i, and the regularization term Z_i = Σ_{z_j ∈ N_i} exp(−‖z_i − z_j‖² / σ) ensures that each row of Â sums to 1. By considering the k-nearest neighbors of each node, the embodiment introduces more context connections, increasing the receptive field and making the model easier to train.
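The adjacency initialisation and the polynomial low-pass filter described above can be sketched as follows. Variable names mirror the description; the exact formulas in the patent's drawings are not reproduced in the source, so this is a plausible form consistent with the text rather than a definitive implementation.

```python
import numpy as np

def init_adjacency(Z, k=4, sigma=1.0):
    """Initial graph adjacency A_hat: Gaussian weights to each node's k
    nearest neighbours in the canonical lattice Z, row-normalised by the
    regularisation term Z_i so every row sums to 1."""
    d2 = ((Z[:, None, :] - Z[None, :, :]) ** 2).sum(-1)  # squared distances
    A = np.zeros_like(d2)
    for i in range(len(Z)):
        nbrs = np.argsort(d2[i])[1:k + 1]                # k-NN, excluding self
        A[i, nbrs] = np.exp(-d2[i, nbrs] / sigma)
        A[i] /= A[i].sum()                               # regularisation Z_i
    return A

def low_pass_graph_filter(X, A, h):
    """Polynomial low-pass graph filter h(A) X = sum_l h_l A^l X of order
    len(h) - 1, applied to node features X."""
    out = np.zeros_like(X)
    Al = np.eye(len(A))
    for hl in h:
        out += hl * (Al @ X)
        Al = Al @ A
    return out
```

Each extra term in h widens the receptive field by one hop, which matches the remark that a larger order gives a larger receptive field.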
S05, cascading the high-dimensional vector with the rotation-invariant feature, and inputting the cascaded result and the graph adjacency matrix into a multi-graph convolutional neural network for reconstruction to obtain a target point cloud;
Specifically, in embodiments, the final reconstructed points are obtained by applying the graph filter to the decoder features and projecting the result back to three-dimensional coordinates. The embodiment uses this graph filter as one of the modules of the network, and the reconstructed point cloud is finally obtained through the multi-graph convolutional neural network.
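One such graph-filter module, used as a reconstruction step, can be sketched as below. The composition (filter the features, then project to 3-D with a weight matrix W) is an illustrative assumption standing in for the patent's decoder formula, which did not survive extraction.

```python
import numpy as np

def graph_conv_reconstruct(F, A, W, h):
    """Filter the concatenated decoder features F of shape (N, d) with the
    polynomial graph filter h(A) = sum_l h_l A^l, then project to 3-D
    coordinates with a weight matrix W of shape (d, 3)."""
    out = np.zeros_like(F)
    Al = np.eye(len(A))
    for hl in h:
        out += hl * (Al @ F)   # accumulate h_l * A^l * F
        Al = Al @ A
    return out @ W             # (N, 3) reconstructed point coordinates
```

With an order-0 filter (h = [1.0]) the module reduces to a plain linear projection, so the graph terms contribute exactly the neighbourhood smoothing described above.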
In some possible embodiments, cascading the high-dimensional vector with the rotation-invariant feature and inputting the cascaded result and the graph adjacency matrix into the multi-graph convolutional neural network for reconstruction to obtain the target point cloud may further include steps S051-S052:
S051, constructing constraint conditions from an earth mover's distance error function and a chamfer distance error function, and tuning the parameters of the multi-graph convolutional neural network under these constraints;
S052, outputting the reconstructed target point cloud through the parameter-tuned multi-graph convolutional neural network.
Specifically, the embodiment constrains the network with the earth mover's distance L_EMD (Earth Mover's Distance, EMD) and chamfer distance L_CD (Chamfer Distance, CD) error functions, which for the output point set S1 and the ground-truth point set S2 take the standard forms:
L_EMD(S1, S2) = min_{φ: S1→S2} Σ_{x ∈ S1} ‖x − φ(x)‖₂, with φ a bijection
L_CD(S1, S2) = Σ_{x ∈ S1} min_{y ∈ S2} ‖x − y‖₂² + Σ_{y ∈ S2} min_{x ∈ S1} ‖x − y‖₂²
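The two loss terms can be sketched as follows. The chamfer distance uses a standard mean-based normalisation (the patent's exact normalisation is not given), and the EMD here is a brute-force permutation search, viable only for tiny sets and purely illustrative; training code would use an approximate solver.

```python
import itertools

import numpy as np

def chamfer_distance(S1, S2):
    """L_CD: for each point, the squared distance to its nearest neighbour in
    the other set, averaged and summed over both directions."""
    d2 = ((S1[:, None, :] - S2[None, :, :]) ** 2).sum(-1)
    return d2.min(axis=1).mean() + d2.min(axis=0).mean()

def earth_mover_distance(S1, S2):
    """L_EMD for equal-size sets: minimum mean distance over bijections
    phi: S1 -> S2, found here by exhaustive search over permutations."""
    best = float("inf")
    for perm in itertools.permutations(range(len(S2))):
        cost = sum(np.linalg.norm(S1[i] - S2[p]) for i, p in enumerate(perm))
        best = min(best, cost)
    return best / len(S1)
```

Chamfer distance matches each point to its nearest neighbour independently and is cheap to compute, while EMD enforces a one-to-one matching and therefore also penalises uneven point density, which is why the two are used together.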
on the other hand, the technical scheme of the application also provides a point cloud model repairing system based on a depth network, and the system comprises:
the feature extraction unit is used for obtaining an original point cloud, extracting global feature information and local feature information from the original point cloud, and integrating the global feature information and the local feature information to obtain original point cloud features; learning according to the original point cloud features through an attention mechanism to obtain a high-dimensional vector used for representing the shape features;
the feature encoding unit is used for encoding the original point cloud through a spherical harmonic kernel function to obtain a rotation-invariant feature;
the feature filtering unit is used for aggregating feature points within the topological relation range of the original point cloud through a low-pass graph filter to obtain a graph adjacency matrix;
and the restoration output unit is used for cascading the high-dimensional vector with the rotation-invariant feature, and inputting the cascaded result and the graph adjacency matrix into a multi-graph convolutional neural network for reconstruction to obtain a target point cloud.
On the other hand, the technical scheme of the application also provides a point cloud model restoration device based on a depth network, the device comprising: at least one processor; and at least one memory for storing at least one program; where the at least one program, when executed by the at least one processor, causes the at least one processor to perform the depth-network-based point cloud model restoration method as described in the first aspect.
The embodiment of the invention also provides a storage medium which stores a corresponding execution program, and the program is executed by a processor to realize the point cloud model restoration method based on the depth network in the first aspect.
From the above specific implementation process it can be summarized that, compared with the prior art, the technical solution provided by the present invention has the following advantages or benefits:
1. The technical scheme improves the encoder of existing methods: local neighborhoods are sampled and grouped to obtain local fine-grained information, and a self-attention module is used to obtain the contextual connections between global points.
2. In the process of decoding the encoded features during point cloud restoration, the technical scheme introduces a spherical harmonic kernel function and a low-pass graph filter to guarantee the rotation invariance of the decoding result and to maintain the topological relations of the original data.
Furthermore, while the invention is described in the context of functional modules, it should be appreciated that, unless otherwise indicated, one or more of the functions and/or features may be integrated in a single physical device and/or software module or may be implemented in separate physical devices or software modules. It will also be appreciated that a detailed discussion of the actual implementation of each module is not necessary to an understanding of the present invention. Rather, the actual implementation of the various functional modules in the apparatus disclosed herein will be apparent to those skilled in the art from consideration of their attributes, functions and internal relationships. Accordingly, one of ordinary skill in the art can implement the invention as set forth in the claims without undue experimentation. It is also to be understood that the specific concepts disclosed are merely illustrative and are not intended to be limiting upon the scope of the invention, which is to be defined in the appended claims and their full scope of equivalents.
Logic and/or steps represented in the flowcharts or otherwise described herein, e.g., an ordered listing of executable instructions for implementing logical functions, can be embodied in any computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, processor-containing system, or other system that can fetch the instructions from the instruction execution system, apparatus, or device and execute the instructions.
In the description of the present specification, a description referring to terms "one embodiment," "some embodiments," "examples," "specific examples," or "some examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present invention. In this specification, schematic representations of the above terms do not necessarily refer to the same embodiments or examples. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
While embodiments of the present invention have been shown and described, it will be understood by those of ordinary skill in the art that: many changes, modifications, substitutions and variations may be made to the embodiments without departing from the spirit and principles of the invention, the scope of which is defined by the claims and their equivalents.
While the preferred embodiment of the present invention has been described in detail, the present invention is not limited to the above embodiments, and various equivalent modifications and substitutions can be made by those skilled in the art without departing from the spirit of the present invention, and these equivalent modifications and substitutions are intended to be included in the scope of the present invention as defined in the appended claims.

Claims (9)

1. The point cloud model repairing method based on the depth network is characterized by comprising the following steps of:
acquiring an original point cloud, extracting global characteristic information from the original point cloud, and integrating local characteristic information to obtain an original point cloud characteristic;
learning to obtain a high-dimensional vector for representing the shape characteristic through an attention mechanism according to the original point cloud characteristic;
performing specific coding on the original point cloud through a spherical harmonic kernel function to obtain a rotation invariant feature;
aggregating characteristic points in the topological relation range of the original point cloud through a low-pass graph filter to obtain a graph adjacency matrix;
cascading the high-dimensional vector and the rotation invariant feature, and inputting the cascading result and the graph adjacency matrix into a multi-graph convolutional neural network for reconstruction to obtain a target point cloud;
in the step of learning to obtain a high-dimensional vector for representing the shape feature through an attention mechanism according to the original point cloud feature, a calculation formula of the high-dimensional vector is as follows:
F_Encode = SA(F_in)
wherein F_Encode denotes the high-dimensional vector, F_in denotes the original point cloud feature, and SA denotes the computation of the attention mechanism; the attention calculation formula in the computation of the attention mechanism is as follows:
wherein ρ denotes a relation representation function, MLP denotes a multi-layer perceptron, and x_i, x_j denote any two distinct points in the original point cloud.
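The encoding step F_Encode = SA(F_in) above can be sketched as follows. Since the patent's exact relation function ρ and MLP are not reproduced on this page, this minimal example assumes a standard dot-product self-attention over the per-point feature matrix; the weight matrices W_q, W_k, W_v are illustrative placeholders, not the claimed parameters.

```python
import numpy as np

def self_attention(F_in, W_q, W_k, W_v):
    """Simplified self-attention over per-point features F_in (N x D).

    Every point attends to every other point, so the output mixes
    global context into each row, as F_Encode = SA(F_in) requires.
    """
    Q, K, V = F_in @ W_q, F_in @ W_k, F_in @ W_v
    scores = Q @ K.T / np.sqrt(K.shape[1])          # pairwise relation scores
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)   # softmax over all points
    return weights @ V                              # context-aggregated features

rng = np.random.default_rng(0)
N, D = 8, 4
F_in = rng.normal(size=(N, D))
W = [rng.normal(size=(D, D)) for _ in range(3)]
F_encode = self_attention(F_in, *W)
print(F_encode.shape)  # (8, 4)
```

The output keeps the per-point layout of F_in while every row now depends on the whole point set, which is what lets the decoder recover context between global points.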
2. The depth network-based point cloud model restoration method according to claim 1, wherein the obtaining an original point cloud, extracting global feature information from the original point cloud, and integrating local feature information to obtain an original point cloud feature, includes:
determining a weight score for each candidate point in the original point cloud;
when the weight score of a candidate point is lower than a preset score, determining that the candidate point is a noise point, and eliminating the noise points to obtain the feature points of the original point cloud; the feature points are used to describe the global feature information and the local feature information.
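The noise-elimination step in claim 2 reduces to a threshold test on the weight scores. A minimal sketch, assuming scores have already been computed by the network (the scoring itself is not shown here):

```python
def filter_noise_points(points, scores, threshold):
    """Keep candidate points whose weight score reaches the preset
    threshold; points scoring below it are treated as noise and
    eliminated, leaving the feature points of the original point cloud."""
    return [p for p, s in zip(points, scores) if s >= threshold]

pts = [(0, 0, 0), (1, 0, 0), (9, 9, 9)]
scores = [0.9, 0.8, 0.1]          # hypothetical per-point weight scores
print(filter_noise_points(pts, scores, 0.5))  # [(0, 0, 0), (1, 0, 0)]
```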
3. The depth network-based point cloud model restoration method according to claim 1, wherein in the step of specifically encoding the original point cloud by using a spherical harmonic kernel function to obtain a rotation invariant feature, the encoding process of the rotation invariant feature is as follows:
F_RotInv = P · B(n)
wherein F_RotInv denotes the rotation invariant feature, P denotes the original point cloud, B(n) denotes the basis of the spherical harmonic kernel function, and n denotes the surface normal at the feature point; the basis of the spherical harmonic kernel function satisfies the following formula:
wherein n_x, n_y and n_z denote the components of the normal vector along the three mutually orthogonal directions x, y and z, respectively.
4. The depth network-based point cloud model restoration method according to claim 1, wherein in the step of aggregating feature points in the topological relation range of the original point cloud by a low-pass graph filter to obtain a graph adjacency matrix, a filtering process of the low-pass graph filter satisfies the following formula:
h(A) = Σ_{l=0}^{L} h_l · A^l, h(A) ∈ R^{M×M}
wherein A denotes the graph adjacency matrix, h_l denotes the parameters of the low-pass graph filter, L denotes the order of the low-pass graph filter (the graph convolution kernel order), M denotes the dimension of the graph adjacency matrix, and R denotes the real number field.
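The filter in claim 4 is a polynomial in the adjacency matrix, which is the form implied by the parameters h_l and order L. A minimal sketch, assuming that polynomial form and illustrative coefficients:

```python
import numpy as np

def graph_filter(A, h):
    """Polynomial graph filter h(A) = sum_l h_l * A^l for l = 0..L
    (a form consistent with the filter parameters h_l and order L in
    the claim). With A an M x M adjacency matrix, h(A) is also M x M;
    applying it to node features averages each node with its graph
    neighbours, i.e. a low-pass smoothing over the topology.
    """
    M = A.shape[0]
    H = np.zeros((M, M))
    A_power = np.eye(M)          # A^0
    for h_l in h:
        H += h_l * A_power
        A_power = A_power @ A
    return H

A = np.array([[0.0, 1.0], [1.0, 0.0]])     # two connected nodes
H = graph_filter(A, [0.5, 0.5])            # average of self and neighbour
x = np.array([1.0, 3.0])                   # node signal
print(H @ x)  # [2. 2.]
```

Filtering pulls both node values toward their common mean, which is the smoothing that preserves the topological relation of the original data.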
5. The depth network-based point cloud model restoration method according to claim 4, wherein the aggregating feature points in the topological relation range of the original point cloud by a low-pass graph filter to obtain a graph adjacency matrix comprises:
determining an initial value of the graph adjacency matrix according to the neighbouring points of the feature points, a standard two-dimensional lattice and a hyper-parameter attenuation rate; the initial value satisfies the following formula:
A⁰_ij = exp(−‖z_i − z_j‖² / σ²) / Z_i, for z_j ∈ N_i (and 0 otherwise)
wherein A⁰_ij denotes the initial value of the graph adjacency matrix, i and j denote any two connected nodes in the graph adjacency matrix, z_i denotes the i-th node in the standard two-dimensional lattice, z_j denotes the j-th node connected to the i-th node, and σ denotes a preset hyper-parameter value; N_i denotes the set of k-nearest neighbours of z_i, and Z_i is a regularization term that satisfies the following formula:
Z_i = Σ_{z_j ∈ N_i} exp(−‖z_i − z_j‖² / σ²)
6. The depth network-based point cloud model restoration method according to any one of claims 1 to 5, wherein the cascading of the high-dimensional vector and the rotation invariant feature, and the inputting of the cascading result and the graph adjacency matrix into a multi-graph convolutional neural network for reconstruction to obtain a target point cloud, comprises:
constructing constraint conditions through an earth mover's distance (EMD) error function and a chamfer distance error function, and adjusting the parameters of the multi-graph convolutional neural network through the constraint conditions;
outputting the reconstructed target point cloud through the parameter-adjusted multi-graph convolutional neural network.
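Of the two error functions constraining the reconstruction, the chamfer distance is the simpler to write out. A minimal reference sketch (the earth mover's distance requires an optimal matching and is omitted):

```python
def chamfer_distance(P, Q):
    """Symmetric chamfer distance between two point sets: the mean
    squared distance from each point to its nearest neighbour in the
    other set, summed over both directions. Together with the earth
    mover's distance it constrains the reconstructed target point
    cloud to match the ground truth."""
    def sq(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    d_pq = sum(min(sq(p, q) for q in Q) for p in P) / len(P)
    d_qp = sum(min(sq(q, p) for p in P) for q in Q) / len(Q)
    return d_pq + d_qp

P = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
print(chamfer_distance(P, P))  # 0.0 -- identical clouds have zero error
```

A perfect reconstruction drives both directional terms to zero, so minimising this loss pulls the decoded cloud onto the target surface.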
7. The point cloud model restoration system based on the depth network is characterized by comprising the following components:
the feature extraction unit is used for obtaining an original point cloud, extracting global feature information and local feature information from the original point cloud, and integrating the global feature information and the local feature information to obtain original point cloud features; learning according to the original point cloud features through an attention mechanism to obtain a high-dimensional vector used for representing the shape features;
the feature coding unit is used for carrying out specific coding on the original point cloud through a spherical harmonic kernel function to obtain a rotation invariant feature; the characteristic filtering unit is used for converging characteristic points in the topological relation range of the original point cloud through a low-pass graph filter to obtain a graph adjacency matrix;
the restoration output unit is used for cascading the high-dimensional vector and the rotation invariant feature, inputting the cascading result and the graph adjacent matrix into a multi-graph convolutional neural network to reconstruct so as to obtain a target point cloud;
in the step of learning to obtain a high-dimensional vector for representing the shape feature through an attention mechanism according to the original point cloud feature, a calculation formula of the high-dimensional vector is as follows:
F_Encode = SA(F_in)
wherein F_Encode denotes the high-dimensional vector, F_in denotes the original point cloud feature, and SA denotes the computation of the attention mechanism; the attention calculation formula in the computation of the attention mechanism is as follows:
wherein ρ denotes the relation representation function.
8. A point cloud model repair device based on a depth network, the device comprising:
at least one processor;
at least one memory for storing at least one program;
the at least one program, when executed by the at least one processor, causes the at least one processor to perform the depth network based point cloud model restoration method of any of claims 1-6.
9. A storage medium having stored therein a processor executable program, which when executed by a processor is for running the depth network based point cloud model restoration method according to any of claims 1-6.
CN202211693396.3A 2022-12-28 2022-12-28 Point cloud model restoration method, system, device and medium based on depth network Active CN115880183B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211693396.3A CN115880183B (en) 2022-12-28 2022-12-28 Point cloud model restoration method, system, device and medium based on depth network


Publications (2)

Publication Number Publication Date
CN115880183A CN115880183A (en) 2023-03-31
CN115880183B true CN115880183B (en) 2024-03-15

Family

ID=85755686


Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112529015A (en) * 2020-12-17 2021-03-19 深圳先进技术研究院 Three-dimensional point cloud processing method, device and equipment based on geometric unwrapping
US11488283B1 (en) * 2021-11-30 2022-11-01 Huazhong University Of Science And Technology Point cloud reconstruction method and apparatus based on pyramid transformer, device, and medium
CN115439694A (en) * 2022-09-19 2022-12-06 南京邮电大学 High-precision point cloud completion method and device based on deep learning




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant