CN112200733B - Grid denoising method based on graph convolution network - Google Patents


Info

Publication number
CN112200733B
Authority
CN
China
Prior art keywords
graph
noise
mesh
network
graph convolution
Prior art date
Legal status
Active
Application number
CN202010939418.4A
Other languages
Chinese (zh)
Other versions
CN112200733A (en)
Inventor
沈越凡
郑友怡
Current Assignee
Zhejiang University ZJU
Original Assignee
Zhejiang University ZJU
Priority date
Filing date
Publication date
Application filed by Zhejiang University ZJU filed Critical Zhejiang University ZJU
Priority to CN202010939418.4A priority Critical patent/CN112200733B/en
Publication of CN112200733A publication Critical patent/CN112200733A/en
Priority to JP2021107136A priority patent/JP7171087B2/en
Priority to LU500415A priority patent/LU500415B1/en
Application granted granted Critical
Publication of CN112200733B publication Critical patent/CN112200733B/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/60 Image enhancement or restoration using machine learning, e.g. neural networks
    • G06T 5/70 Denoising; Smoothing
    • G06T 17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T 17/20 Finite element generation, e.g. wire-frame surface description, tesselation
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G06N 3/08 Learning methods
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20081 Training; Learning
    • G06T 2207/20084 Artificial neural networks [ANN]


Abstract

The invention provides a mesh denoising method based on a graph convolution network (GCN). The invention introduces a graph representation of local surface geometry that naturally captures geometric features while remaining lightweight for both the training and inference phases. To facilitate effective feature learning, the network combines static and dynamic edge convolutions, which allows information to be learned both from the explicit mesh structure and from latent connections between unconnected neighbors. To better estimate the unknown noise function, the invention introduces a cascaded optimization paradigm of multiple GCNs that infers the noise-free facet normals step by step. The invention achieves the best results on multiple noisy datasets, including CAD models, which typically contain sharp features, and raw scanned models with real noise captured by different devices.

Description

Grid denoising method based on graph convolution network
Technical Field
The invention belongs to the field of computer graphics, and particularly relates to a grid denoising method based on a graph convolution network.
Background
Because vertex positions and surface normals are essentially 3D signals, denoising on a mesh surface is similar to denoising a 2D image. Mesh denoising techniques have therefore drawn heavily on image denoising, and various low-pass and feature-preserving filters have been introduced for the task. Among them, the bilateral filter is one of the most widely used, but a common drawback of filter-based approaches is that once features, especially weak features, are severely corrupted by noise, they are difficult to recover. Another class of methods is optimization-based mesh denoising, but such methods apply only to meshes that satisfy their assumptions and do not generalize well to meshes with different geometric features and noise patterns.
In contrast, learning-based methods make no specific assumptions about the underlying features or noise patterns and have been successfully applied to image denoising. However, unlike images, 3D meshes are typically irregular, so image-based convolution operations cannot be applied directly. To solve this problem, we propose a new method that feeds irregular mesh block data directly into a graph convolution network. By applying graph convolution operations to a graph-based representation of the local surface geometry, our network captures the inherent geometry of the source model under noise better than existing methods.
Graph Convolution Networks (GCNs) have been introduced to handle non-Euclidean structures. Early work on GCNs required a static graph structure and therefore could not be extended to meshes with varying topology. Recent studies on dynamic graph convolution have shown that variable edges can perform better. Our method combines the static graph structure of each block with a dynamic graph structure constructed during convolution to learn the geometric features in the block efficiently. Other convolution operations developed for meshes are aimed mainly at understanding whole objects or large scenes and require very deep network structures. Denoising, by contrast, focuses on local blocks, so convolution is introduced in the dual space of the mesh surface.
Disclosure of Invention
The invention provides a mesh denoising method based on a GCN, which uses a rotation-invariant graph representation in the dual space of the mesh surface so as to realize effective feature learning through graph convolution. In this framework, static and dynamic graph convolution operations are connected in series to learn both effective explicit structural features and latent implicit features between adjacent nodes.
The invention is realized by the following technical scheme:
a mesh denoising method based on a graph convolution network comprises the following steps:
Step one: generate a local block for each face in the noisy mesh and rotationally align the blocks using a tensor voting algorithm.
Step two: convert the blocks aligned in step one into a graph representation, input the graph representation to a trained graph convolution neural network to predict the noise-free normal, and update the vertices of the mesh model according to the predicted normals to obtain the denoised model. The graph convolution neural network is composed of L_e layers of static EdgeConv, L_d layers of dynamic EdgeConv and L_l fully connected (FC) layers.
Further, the first step is realized by the following substeps:
(1.1) For a selected face f, define a bounding sphere according to a fixed proportion of its neighborhood area, and take all faces inside the bounding sphere as the faces of the block p.
(1.2) For every face f_i in the block, define a voting tensor T_i and obtain its eigenvalues and unit eigenvectors.
(1.3) Construct a rotation matrix R_i from the eigenvectors obtained in (1.2), and multiply the centroid and normal of every face in p by R_i^{-1} to generate the new, aligned block data.
Further, the second step is realized by the following sub-steps:
and (2.1) carrying out static edge convolution processing on the input image iteration by using the static EdgeConv to obtain the adjacent surface characteristics.
And (2.2) iterating the result obtained in the step 2.1, and carrying out dynamic edge convolution processing by using the dynamic edgeConv to obtain the nearest characteristic surface in the characteristic space.
(2.3) after graph convolution, concatenating the learned features together to summarize the features through a full concatenation layer.
And (2.4) selecting the most important characteristics through symmetry pooling to finally predict the normal direction.
Further, the training data set of the graph convolution neural network is constructed by the following method:
The tensor voting algorithm is applied to every face of the noise-free models in the dataset to obtain the three eigenvalues λ_1, λ_2, λ_3. According to these eigenvalues, the faces of all models in the data are divided into two groups, featureless faces and feature faces, and block data are collected from each group to construct the training data set.
Further, multiple cascaded graph convolution neural networks are trained to predict the noise-free normal. During training, the denoised mesh model reconstructed from the normals predicted by the previous-stage network is used to generate new data for training the next-stage network, until the loss of the cascaded networks no longer decreases.
Further, for the output of the last-stage graph convolution neural network, the normal vectors are iteratively optimized by a bilateral filter on the mesh, the vertices are updated in every iteration, and the denoised mesh model is finally obtained.
The outstanding contributions of the invention are as follows:
The invention provides GCN-Denoiser, a feature-preserving mesh denoising method based on a graph convolution network (GCN). Unlike previous learning-based mesh denoising methods that rely on hand-crafted features or voxel representations, the invention exploits the structure of the triangular mesh itself, introduces a graph representation, and then applies graph convolution operations in the dual space of the triangles. This graph representation naturally captures geometric features while remaining lightweight for both the training and inference phases. To facilitate effective feature learning, the network combines static and dynamic edge convolutions, which allows information to be learned both from the explicit mesh structure and from latent connections between unconnected neighbors. To better estimate the unknown noise function, the invention introduces a cascaded optimization paradigm of multiple GCNs that infers the noise-free facet normals step by step. The invention achieves the best results on multiple noisy datasets, including CAD models containing sharp features and raw scanned models with real noise captured by different devices, while achieving a good balance between effect and efficiency.
Drawings
FIG. 1 is a schematic diagram of the network denoising process according to the present invention.
Figure 2 is the GCN network architecture of the present invention.
FIG. 3 is a diagram of the network denoising effect of the present invention.
Detailed Description
Since noise is a complex function of the mesh model surface, it is generally estimated with local methods. The invention aims to predict the original noise-free normal of each face f in the triangular dual domain of the mesh from its noisy block p within a certain range, and to reconstruct the noise-free model. It specifically comprises the following steps:
the method comprises the following steps: and generating a local block for each face in the noise grid and carrying out rotation alignment on the local blocks by adopting a tensor voting algorithm.
Step two: and converting the local blocks aligned in the step one into a graph representation, inputting the graph representation to a trained graph convolution neural network, predicting a noise-free normal direction, and updating the vertex of the grid model according to the predicted normal direction to obtain a denoised model. Wherein the graph convolution neural network structure is formed by LeLayer EdgeConv, LdLayer dynamic EdgeConv and L1Layer full interconnect (FC) layer.
FIG. 1 illustrates the denoising process of the cascade of multiple graph convolution networks in the invention. The process of the invention is further illustrated below with reference to a specific example:
for a noise mesh, the input triangular mesh is first defined as M ═ { V, F }, where V ═ V }, thenk}1NvSet all vertices, F ═ Fi}1 NfAll the faces are provided. N is a radical of hydrogenvAnd NfRespectively, the number of vertices and the number of facets. For each face F in FiGenerate its local block data pi. The set of all blocks in M is defined as P ═ { P ═ Pi}1 Nf. Similarly, the surface fiIs denoted by niIts centroid is denoted as ciAnd its area is represented as ai
The block p_i consists of all faces (including f_i) lying inside a sphere of radius r centered at the centroid c_i of face f_i, i.e., p_i should satisfy:

p_i = { f_j ∈ F : ||c_j - c_i|| ≤ r }
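Collecting the block then amounts to gathering every face whose centroid falls inside that sphere. A minimal sketch, assuming the centroids array from face_geometry above and an externally chosen radius r:

```python
import numpy as np

def collect_block(centroids, i, r):
    """Indices of faces whose centroids lie inside the sphere of radius r
    centered at the centroid of face i (face i itself included)."""
    d = np.linalg.norm(centroids - centroids[i], axis=1)
    return np.nonzero(d <= r)[0]
```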
blocks at different locations but with the same properties can be cumbersome for neural networks because it is difficult to learn spatial transformations for deep learning methods. To solve this problem, the present invention uses tensor voting theory to align faces unambiguously into a common coordinate systemI.e. for all faces f in the blockiVoting tensor TiAnd obtaining a characteristic value and a unit characteristic vector, which are as follows:
firstly p is put iniConversion to origin [0, 0]It is then scaled to a unit bounding box. A voting tensor TiFor the face fiIs defined as follows:
Figure BDA0002673107200000041
wherein muj=(aj/am)exp(-||cj-ci| σ), σ is a parameter, set to 1/3 in this embodiment, where amIs piAnd n is the largest triangle area injIs fjVoting normal vector of (a): n isj'=2(nj·wj)wj-njWherein w isj=normalize{[(cj-ci)×nj]×(cj-ci)}. Due to TiIs a semi-positive definite matrix that can be represented by its spectral decomposition as:
Ti=λ1e1e1 T2e2e2 T3e3e3 T
wherein λ1≥λ2≥λ3Is its characteristic value, e1,e2And e3Are the corresponding unit feature vectors that form a set of orthogonal bases.
Then, a rotation matrix R is constructedi=[e1,e2,e3]And p isiThe center of mass and normal to R of each faceti -1Multiply to generate new block data
Figure BDA0002673107200000042
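The voting-tensor construction and rotation alignment above can be sketched as follows; the parameter σ = 1/3 follows the embodiment, while the function name, the handling of degenerate votes and the eigenvector sign convention are illustrative assumptions:

```python
import numpy as np

def align_block(normals, centroids, areas, block, i, sigma=1.0 / 3.0):
    """Rotation-align the faces of block `block` around center face i using
    the voting tensor T_i; returns the aligned centroids and normals."""
    c, n, a = centroids[block], normals[block], areas[block]
    # translate the block to the origin and scale it into a unit bounding box
    c = c - centroids[i]
    c = c / (np.ptp(c, axis=0).max() + 1e-12)

    T = np.zeros((3, 3))
    a_m = a.max()
    for cj, nj, aj in zip(c, n, a):
        w = np.cross(np.cross(cj, nj), cj)        # [(c_j - c_i) x n_j] x (c_j - c_i)
        w_norm = np.linalg.norm(w)
        if w_norm < 1e-12:                        # degenerate vote: fall back to the face normal
            nj_vote = nj
        else:
            w = w / w_norm
            nj_vote = 2.0 * np.dot(nj, w) * w - nj
        mu = (aj / a_m) * np.exp(-np.linalg.norm(cj) / sigma)
        T += mu * np.outer(nj_vote, nj_vote)

    eigval, eigvec = np.linalg.eigh(T)            # eigenvalues in ascending order
    R = eigvec[:, ::-1]                           # R_i = [e1, e2, e3], descending eigenvalues
    R_inv = R.T                                   # R is orthogonal, so R^{-1} = R^T
    return c @ R_inv.T, n @ R_inv.T               # multiply each centroid / normal by R_i^{-1}
```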
Thereafter, a graph structure is built for each generated block so that it fits the subsequent graph convolution network. An undirected graph G = (Q, E, Φ) is created: for each face f_i in the aligned block p̃_i a graph node q_i ∈ Q is created, and an edge e = (q_i, q_j) ∈ E is created if the corresponding faces f_i and f_j are adjacent. Φ denotes the node signature, containing a set of node attributes. For each node q_i, the attributes are the centroid c̃_i and the normal ñ_i of the corresponding aligned face f_i, together with d_i, the number of adjacent faces in the 1-ring neighborhood of f_i, which helps to distinguish boundary faces.
As shown in FIG. 2, the GCN network of the invention is composed of multiple graph convolution layers (Martin Simonovsky and Nikos Komodakis. 2017. Dynamic edge-conditioned filters in convolutional neural networks on graphs. In 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). 29-38). In each layer, similar to a conventional convolution network, the GCN aggregates the features of each node's neighbors and updates the node; this is also called a convolution operation. Although every graph has some connectivity across its faces, the structures vary widely between blocks. We adopt the ECC (Edge-Conditioned Convolution) strategy to deal with the different structures during convolution. Let G^l = (Q^l, E^l, Φ^l) be the l-th graph convolution layer and F_i^l the feature vector of the i-th node in G^l. A node is updated in the following manner:

F_i^{l+1} = (1 / |Ψ(i)|) Σ_{j ∈ Ψ(i)} h_Θ^l(F_i^l, F_j^l - F_i^l)

where Ψ(i) is the neighbor set of node q_i and h_Θ^l = Linear_Θ^l(F_i^l, F_j^l - F_i^l). Every graph convolution layer in the network has the same Linear_Θ.
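The node update can be written as a small PyTorch module. This is a sketch of the edge-conditioned update above with mean aggregation over a fixed edge list; the module name and tensor layout are assumptions:

```python
import torch
import torch.nn as nn

class StaticEdgeConv(nn.Module):
    """h_theta([F_i, F_j - F_i]) averaged over the neighbors given by edge_index."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.linear = nn.Linear(2 * in_dim, out_dim)   # Linear_theta shared within the layer
        self.bn = nn.BatchNorm1d(out_dim)
        self.act = nn.LeakyReLU(0.2)

    def forward(self, x, edge_index):
        # x: (N, in_dim) node features; edge_index: (E, 2) directed pairs (i, j)
        i, j = edge_index[:, 0], edge_index[:, 1]
        msg = self.linear(torch.cat([x[i], x[j] - x[i]], dim=1))
        out = torch.zeros(x.size(0), msg.size(1), device=x.device)
        out.index_add_(0, i, msg)                      # sum incoming messages per node
        deg = torch.zeros(x.size(0), device=x.device).index_add_(
            0, i, torch.ones(i.size(0), device=x.device)).clamp(min=1)
        return self.act(self.bn(out / deg[:, None]))   # mean aggregation + BN + LeakyReLU
```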
Since the mapping from geometry to connectivity is not a one-to-one function, using only the original graph structure may cause some information loss during convolution. To enrich the receptive field of the graph nodes, the invention further allows non-adjacent graph nodes to be connected during convolution. This graph convolution is called dynamic EdgeConv. In this scheme, the neighbors of each node are computed dynamically by KNN (K = 8 in this embodiment) according to the Euclidean distance between node features.
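For dynamic EdgeConv, the edge list is recomputed from the current node features with a k-nearest-neighbor search before each dynamic layer. A sketch, assuming blocks small enough for a dense pairwise distance matrix:

```python
import torch

def knn_edges(x, k=8):
    """Edge list (i, j) connecting every node to its k nearest neighbors
    in feature space (the node itself is excluded)."""
    dist = torch.cdist(x, x)                             # (N, N) Euclidean distances
    dist.fill_diagonal_(float("inf"))                    # never pick the node itself
    idx = dist.topk(k, dim=1, largest=False).indices     # (N, k) nearest neighbors
    i = torch.arange(x.size(0), device=x.device).repeat_interleave(k)
    return torch.stack([i, idx.reshape(-1)], dim=1)      # (N*k, 2)
```

In a dynamic layer, knn_edges would be called on the current features and its result passed to the same edge-conditioned update sketched above.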
The network architecture of the invention consists of L_e layers of static EdgeConv, L_d layers of dynamic EdgeConv and L_l fully connected (FC) layers. After each graph convolution layer, the learned features are concatenated together for pooling. In this embodiment, both average pooling and max pooling are used as symmetric functions, which select the most important features. Finally, the FC layers regress a 3D vector, the normal predicted by the invention. Every layer in the architecture except the last FC layer is followed by batch normalization and a LeakyReLU activation function.
As a preferred option, cascaded GCNs (GCN_1, ..., GCN_X) regress the noise-free normal step by step. All GCNs in the cascade have the same architecture but different numbers of static EdgeConv, dynamic EdgeConv and FC layers. In this embodiment, L_e = 3, L_d = 3 and L_l = 4 are used for the first GCN, and L_e = 2, L_d = 2 and L_l = 3 for the remaining GCNs.
The vertex positions need to be updated after each normal prediction. In the method of the invention, the vertex update is defined by the following formula, where Ω_i' is the set of faces associated with vertex v_i, i.e., the faces formed by v_i and its neighboring vertices:

v_i^{k+1} = v_i^k + (1 / |Ω_i'|) Σ_{f_j ∈ Ω_i'} n_j^g (n_j^g · e_ij^k)

where k denotes the iteration index of vertex updating; the superscript g denotes the target normal, which in the first X-1 cascaded GCN stages is the value predicted by the network and in the last stage is the optimized (filtered) normal; and e_ij^k = c_j^k - v_i^k denotes the vector from vertex v_i to the centroid of face f_j in the patch.
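A sketch of this vertex update; the incidence bookkeeping, the default number of iterations and the interpretation of e_ij as the vertex-to-centroid vector are assumptions made for illustration:

```python
import numpy as np

def update_vertices(V, F, target_normals, iters=20):
    """Move each vertex along the target face normals so that its incident
    faces better match those normals; one update pass per iteration."""
    V = V.copy()
    incident = [[] for _ in range(len(V))]          # faces incident to each vertex (Omega'_i)
    for fi, (a, b, c) in enumerate(F):
        incident[a].append(fi); incident[b].append(fi); incident[c].append(fi)

    for _ in range(iters):
        centroids = V[F].mean(axis=1)               # (Nf, 3) face centroids c_j^k
        V_new = V.copy()
        for vi, faces in enumerate(incident):
            if not faces:
                continue
            n = target_normals[faces]               # target normals n_j^g
            e = centroids[faces] - V[vi]            # e_ij^k = c_j^k - v_i^k
            V_new[vi] = V[vi] + (n * (n * e).sum(axis=1, keepdims=True)).mean(axis=0)
        V = V_new
    return V
```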
In the offline training step, GCN_x is used to denoise the noisy meshes in the training set, and new data are then generated from these updated meshes to train the next network GCN_{x+1}. Cascading of GCNs can be stopped when the error of the network on the validation set no longer decreases. The loss function is the MSE between the network output and the ground-truth value, i.e.

L_i = || n̂_i - R_i^{-1} n_i^{gt} ||_2^2

where n̂_i is the normal predicted by the network for face f_i, n_i^{gt} is the ground-truth noise-free normal of f_i, and R_i is the corresponding rotation matrix mentioned above.
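A sketch of the per-face training loss, under the assumption that the ground-truth normal is first rotated into the block's aligned frame by R_i^{-1} (R is orthogonal, so R^{-1} = R^T):

```python
import torch

def normal_loss(pred_normal, gt_normal, R):
    """MSE between the predicted normal and the ground-truth normal
    expressed in the block's aligned coordinate system."""
    gt_aligned = gt_normal @ R           # row-vector form of R^{-1} n, since R^{-1} = R^T
    return torch.mean((pred_normal - gt_aligned) ** 2)
```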
As a preferred approach, for each 3D model, different levels and types of noise are generated for training. The tensor voting algorithm is applied to every face of the noise-free models in the dataset to obtain the three eigenvalues λ_1, λ_2, λ_3. For each model, the faces are grouped into four groups: {f_i | λ_2 < 0.01 ∧ λ_3 < 0.001} are flat faces, {f_i | λ_2 > 0.01 ∧ λ_3 < 0.1} are edge faces, {f_i | λ_3 > 0.1} are corner faces, and the remainder are transition faces. Since edge faces and corner faces are few relative to the other two groups, the faces are further merged into two groups: featureless faces, composed of flat and transition faces, and feature faces, composed of edge and corner faces. Blocks are collected uniformly from the two groups as training data. This strategy is used when training all GCNs.
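The eigenvalue-based grouping can be expressed directly with the thresholds of the embodiment; a sketch, assuming λ_2 and λ_3 are stored as per-face arrays:

```python
import numpy as np

def group_faces(lam2, lam3):
    """Per-face label: 0 = flat, 1 = edge, 2 = corner, 3 = transition, then
    merged into featureless (flat + transition) vs. feature (edge + corner)."""
    labels = np.full(lam2.shape, 3)                       # default: transition
    labels[(lam2 < 0.01) & (lam3 < 0.001)] = 0            # flat
    labels[(lam2 > 0.01) & (lam3 < 0.1)] = 1              # edge
    labels[lam3 > 0.1] = 2                                # corner
    featureless = np.isin(labels, [0, 3])
    return labels, featureless, ~featureless
```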
As a preferred scheme, given the predicted normals of the input noisy mesh, in order to avoid discontinuities between adjacent faces caused by the local processing, a bilateral filter (Youyi Zheng, Hongbo Fu, Oscar Kin-Chung Au, and Chiew-Lan Tai. 2011. Bilateral normal filtering for mesh denoising. IEEE Transactions on Visualization and Computer Graphics 17, 10 (2011), 1521-1530) is applied on the mesh to iteratively optimize the normals:

n_i^{m+1} = normalize( Σ_{f_j ∈ Ω_i} a_j W_s(||c_i - c_j||) W_r(||n_i^m - n_j^m||) n_j^m )

This operation can be iterated m times, but it is applied only to the normals output by the final GCN of the cascade. Here n_i^{m+1} is the normal after m+1 iterations of optimization, and n_i^m is the normal computed from the faces after the vertices have been updated according to the normals obtained in the m-th iteration. Ω_i is the set of faces adjacent to f_i, and W_s and W_r are Gaussian functions with kernels σ_s and σ_r, respectively.
FIG. 1 and FIG. 3 show the denoising results of the method on a raw scanned model with real noise captured from a device and on a CAD model containing sharp features, respectively, with the input noisy model on the left, the result of the method in the middle, and the original noise-free ground truth on the right. As can be seen from the figures, the method has a good denoising effect and achieves a good balance between effect and efficiency.

Claims (6)

1. A mesh denoising method based on a graph convolution network is characterized by comprising the following steps:
step one: generating a local block for each face in the noisy mesh and rotationally aligning the local blocks by adopting a tensor voting algorithm;
step two: converting the local blocks aligned in step one into a graph representation, inputting the graph representation into a trained graph convolution neural network, predicting a noise-free normal, and updating the vertices of the mesh model according to the predicted normals to obtain a denoised model, wherein the graph convolution neural network is composed of L_e layers of static EdgeConv, L_d layers of dynamic EdgeConv and L_l fully connected (FC) layers.
2. The mesh denoising method based on a graph convolution network as claimed in claim 1, wherein step one is realized by the following sub-steps:
(1.1) for a selected face f, defining a bounding sphere according to a fixed proportion of its neighborhood area, and taking all faces inside the bounding sphere as the faces of the block p;
(1.2) for every face f_i in the block, defining a voting tensor T_i and obtaining its eigenvalues and unit eigenvectors;
(1.3) constructing a rotation matrix R_i from the eigenvectors obtained in (1.2), and multiplying the centroid and normal of every face in p by R_i^{-1} to generate the new, aligned block data.
3. The mesh denoising method based on a graph convolution network as claimed in claim 1, wherein step two is realized by the following sub-steps:
(2.1) applying static edge convolution (static EdgeConv) iteratively to the input graph to obtain features of adjacent faces;
(2.2) applying dynamic edge convolution (dynamic EdgeConv) iteratively to the result of (2.1) to obtain features of the nearest faces in feature space;
(2.3) after the graph convolutions, concatenating the learned features and summarizing them through fully connected layers;
(2.4) selecting the most important features through symmetric pooling and finally predicting the normal.
4. The mesh denoising method based on the graph convolution network as claimed in claim 1, wherein the training data set of the graph convolution neural network is constructed by the following method:
applying the tensor voting algorithm to every face of the noise-free models in the dataset to obtain the three eigenvalues λ_1, λ_2, λ_3; according to these eigenvalues, dividing the faces of all models in the data into two groups, featureless faces and feature faces, and collecting block data from each group to construct the training data set.
5. The method as claimed in claim 1, wherein a plurality of cascaded graph convolution neural networks are trained to predict the noise-free normal, and during training, a denoised mesh model constructed using the normals predicted by the preceding graph convolution neural network is used to generate new data to train the following graph convolution neural network, until the loss of the cascaded networks no longer decreases.
6. The method as claimed in claim 1, wherein, for the output of the last-stage graph convolution neural network, the normal vectors are iteratively optimized by a bilateral filter on the mesh, the vertices are updated in each iteration, and the denoised mesh model is finally obtained.
CN202010939418.4A 2020-09-09 2020-09-09 Grid denoising method based on graph convolution network Active CN112200733B (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
CN202010939418.4A CN112200733B (en) 2020-09-09 2020-09-09 Grid denoising method based on graph convolution network
JP2021107136A JP7171087B2 (en) 2020-09-09 2021-06-28 A mesh denoising method based on graph convolutional networks
LU500415A LU500415B1 (en) 2020-09-09 2021-07-09 Grid denoising method based on graphical convolution network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010939418.4A CN112200733B (en) 2020-09-09 2020-09-09 Grid denoising method based on graph convolution network

Publications (2)

Publication Number Publication Date
CN112200733A CN112200733A (en) 2021-01-08
CN112200733B true CN112200733B (en) 2022-06-21

Family

ID=74005804

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010939418.4A Active CN112200733B (en) 2020-09-09 2020-09-09 Grid denoising method based on graph convolution network

Country Status (3)

Country Link
JP (1) JP7171087B2 (en)
CN (1) CN112200733B (en)
LU (1) LU500415B1 (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113011516A (en) * 2021-03-30 2021-06-22 华南理工大学 Three-dimensional mesh model classification method and device based on graph topology and storage medium
CN117315194B (en) * 2023-09-27 2024-05-28 南京航空航天大学 Triangular mesh representation learning method for large aircraft appearance
CN117197000B (en) * 2023-11-06 2024-03-19 武汉中观自动化科技有限公司 Quick grid denoising method and device and electronic equipment

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111626102A (en) * 2020-04-13 2020-09-04 上海交通大学 Bimodal iterative denoising anomaly detection method and terminal based on video weak marker

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10726525B2 (en) * 2017-09-26 2020-07-28 Samsung Electronics Co., Ltd. Image denoising neural network architecture and method of training the same
US20190318227A1 (en) * 2018-04-13 2019-10-17 Fabula Al Limited Recommendation system and method for estimating the elements of a multi-dimensional tensor on geometric domains from partial observations
CN109658348A (en) * 2018-11-16 2019-04-19 天津大学 The estimation of joint noise and image de-noising method based on deep learning
EP3674983A1 (en) * 2018-12-29 2020-07-01 Dassault Systèmes Machine-learning for 3d modeled object inference

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111626102A (en) * 2020-04-13 2020-09-04 上海交通大学 Bimodal iterative denoising anomaly detection method and terminal based on video weak marker

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
"Graph Auto-Encoder for Graph Signal Denoising"; Tien Huu Do et al.; ICASSP; 2020-05-14; full text *

Also Published As

Publication number Publication date
JP7171087B2 (en) 2022-11-15
CN112200733A (en) 2021-01-08
LU500415B1 (en) 2022-03-09
JP2022045893A (en) 2022-03-22


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant