CN114418852A - Point cloud arbitrary scale up-sampling method based on self-supervision deep learning - Google Patents


Info

Publication number: CN114418852A
Application number: CN202210064957.7A
Authority: CN (China)
Prior art keywords: point cloud, vertex, seed, projection
Legal status: Granted; Active
Other languages: Chinese (zh)
Other versions: CN114418852B (en)
Inventors: 刘贤明, 赵文博, 季向阳
Current assignee: Harbin Institute of Technology
Original assignee: Harbin Institute of Technology
Application filed by Harbin Institute of Technology
Priority to CN202210064957.7A; application granted and published as CN114418852B

Classifications

    • G06T 3/4053: Scaling of whole images or parts thereof based on super-resolution, i.e. the output image resolution being higher than the sensor resolution
    • G06N 3/045: Combinations of networks
    • G06N 3/08: Learning methods
    • G06T 17/20: Finite element generation, e.g. wire-frame surface description, tessellation
    • G06T 3/4046: Scaling of whole images or parts thereof using neural networks


Abstract

The invention relates to a point cloud arbitrary-scale upsampling method based on self-supervised deep learning, belonging to the technical field of point cloud processing. The method first generates a set of seed vertices by estimating the distance from candidate vertices to the implicit surface underlying the point cloud. For each seed vertex, the coordinates of the point cloud vertices nearest to it are fed to a neural network, which outputs the seed's projection point on the implicit surface. Finally, farthest point sampling reduces the projection points to the target number of vertices. Because the density of the seed vertices can be set arbitrarily, the upsampling ratio can also be set arbitrarily; and because each projection point is generated independently, the network processes one vertex at a time, independently of the upsampling ratio, so it never needs to be retrained. Moreover, generating training data requires only a three-dimensional mesh model, near which seed vertices and their projection directions and distances are sampled; no paired dense/sparse point clouds are needed, which makes the method self-supervised.

Description

Point cloud arbitrary scale up-sampling method based on self-supervision deep learning
Technical Field
The invention discloses a point cloud arbitrary-scale upsampling method based on self-supervised deep learning, and belongs to the technical field of point cloud processing.
Background
A point cloud is a set of irregularly distributed discrete points that expresses the spatial structure and surface attributes of a three-dimensional object or scene; the underlying surface from which the points are sampled is called the implicit surface. Each point in the point cloud carries at least three-dimensional position information and, depending on the application, may also carry color, material, or other attributes.
In recent years, with the development of acquisition devices such as LiDAR, three-dimensional point clouds captured by these devices have been displayed and analyzed to good effect. However, limited by the accuracy of the acquisition equipment, the resulting point clouds are typically of low density, which hinders both display and further processing. Upsampling the acquired point cloud to increase its vertex density is therefore an indispensable step in point cloud processing.
Point cloud upsampling can be performed by conventional methods or by deep-learning-based methods. Conventional methods are limited by the sparsity and irregularity of point clouds and rarely achieve good results. Deep-learning-based methods capture the features of irregular structures better and can therefore produce high-quality upsampling. However, such methods are usually trained and used end to end, i.e. both the input and the output of the network are complete point clouds, which makes upsampling at an arbitrary ratio difficult.
At present, common deep-learning-based methods realize different upsampling ratios in one of two ways:
In the first, the upsampling ratio is fixed in advance, and the corresponding training data and network are constructed and trained for that ratio alone. Obtaining a different upsampling ratio requires training from scratch.
In the second, data at several ratios are prepared and a single network is trained to upsample within a given range of ratios. Compared with the first approach, this can generate point clouds within a certain ratio range, but it handles ratios outside that range poorly.
Accordingly, the prior art is yet to be improved and developed.
Disclosure of Invention
In view of the limitations of existing deep learning methods, the invention discloses a point cloud arbitrary-scale upsampling method based on self-supervised deep learning that solves the arbitrary-ratio upsampling problem by generating seed vertices and projection points. Because the density of the seed vertices can be set arbitrarily, the upsampling ratio can also be set arbitrarily; and because each projection point is generated independently, the network processes one vertex at a time, independently of the ratio, so it never needs to be retrained. Moreover, generating training data requires only a three-dimensional mesh model, near which seed vertices and their projection directions and distances are sampled; no paired dense/sparse point clouds are needed, which makes the method self-supervised.
The purpose of the invention is achieved as follows:
A point cloud arbitrary-scale upsampling method based on self-supervised deep learning comprises the following steps:
Step a, read the input point cloud and generate a set of seed vertices;
Step b, from the seed vertex set, obtain the set of projection points from the seed vertices onto the implicit surface of the point cloud;
Step c, obtain the target number of vertices from the upsampling ratio, and adjust the number of vertices in the projection point set to the target number using farthest point sampling.
In the above method, step a specifically comprises:
Step a1, voxelizing the space containing the input point cloud;
Step a2, for each voxel, estimating the distance from its centroid to the implicit surface of the input point cloud;
Step a3, selecting the centroids whose distance lies within a given range as the seed vertex set.
Further, step a2 specifically comprises:
Step a2-1, selecting from the input point cloud the vertices nearest to the current centroid, sorted by distance;
Step a2-2, starting from the third vertex, forming a triangle from that vertex and the two nearest vertices;
Step a2-3, computing the distance from the centroid to each triangle and taking the minimum as the estimated distance.
In the above method, step b specifically comprises:
Step b1, for each seed vertex, normalizing the coordinates of the input point cloud with respect to the seed vertex, and obtaining the projection direction from the seed vertex to the implicit surface of the point cloud;
Step b2, for each seed vertex, normalizing the orientation of the input point cloud according to the projection direction, and obtaining the projection distance of the seed vertex along that direction;
Step b3, obtaining the projection point of the seed vertex from the projection distance and the projection direction.
Further, step b1 specifically comprises:
Step b1-1, for each seed vertex, translating the input point cloud and the seed vertex together so that the seed vertex moves to the origin;
Step b1-2, selecting from the input point cloud the vertices nearest to the seed vertex and arranging their coordinates into a coordinate matrix;
Step b1-3, inputting the coordinate matrix into a neural network to obtain the projection direction.
Further, step b2 specifically comprises:
Step b2-1, rotating the coordinate matrix and the projection direction together so that the projection direction becomes parallel to a coordinate axis;
Step b2-2, inputting the rotated coordinate matrix into a neural network to obtain the projection distance.
Advantageous effects:
The point cloud arbitrary-scale upsampling method based on self-supervised deep learning of the present invention solves the arbitrary-ratio upsampling problem by generating seed vertices and projection points. Specifically, a set of seed vertices is generated by estimating the distance from candidate vertices to the implicit surface underlying the point cloud; for each seed vertex, the coordinates of the point cloud vertices nearest to it are fed to a neural network, which outputs the seed's projection point on the implicit surface; finally, farthest point sampling reduces the projection points to the target number of vertices. Because the density of the seed vertices can be set arbitrarily, the upsampling ratio can also be set arbitrarily; and because each projection point is generated independently, the network processes one vertex at a time, independently of the ratio, so it never needs to be retrained. Moreover, generating training data requires only a three-dimensional mesh model, near which seed vertices and their projection directions and distances are sampled; no paired dense/sparse point clouds are needed, which makes the method self-supervised.
Drawings
FIG. 1 is a flowchart of the point cloud arbitrary-scale upsampling method based on self-supervised deep learning according to the present invention.
FIG. 2 is a flowchart of acquiring the projection direction.
FIG. 3 is a flowchart of acquiring the projection distance.
Detailed Description
Embodiments of the present invention are described in further detail below with reference to the accompanying drawings.
In this embodiment, the flow of the point cloud arbitrary-scale upsampling method based on self-supervised deep learning is shown in FIG. 1. The method comprises the following steps:
Step a, read the input point cloud and generate a set of seed vertices; specifically:
Step a1, voxelize the space containing the input point cloud;
Step a2, for each voxel, estimate the distance from its centroid to the implicit surface of the input point cloud; specifically:
Step a2-1, select from the input point cloud the vertices nearest to the current centroid, sorted by distance;
Step a2-2, starting from the third vertex, form a triangle from that vertex and the two nearest vertices;
Step a2-3, compute the distance from the centroid to each triangle and take the minimum as the estimated distance;
Step a3, select the centroids whose distance lies within a given range as the seed vertex set;
In this embodiment, with (0, 0, 0) as the origin, space is decomposed into cubes of side length L along the coordinate axes. For the centroid P of each cube, the n vertices of the input point cloud nearest to P are selected: k1, k2, ..., kn; the distances from P to the triangles (k1, k2, k3), (k1, k2, k4), ..., (k1, k2, kn) are computed, and their minimum is taken as the estimated distance. When the estimated distance lies within [L1, L2], the centroid is taken as a seed vertex;
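As an illustration, the seed-generation procedure of steps a1 to a3 can be sketched in NumPy as follows. The voxel side length, the number of neighbors n, and the distance thresholds are hypothetical parameters, and the point-to-triangle distance is simplified to a point-to-plane distance; this is a sketch of the idea, not the patented implementation.

```python
import numpy as np

def point_to_triangle_distance(p, a, b, c):
    # Simplified estimate: distance from p to the plane of triangle (a, b, c).
    # A full implementation would also handle projections falling outside
    # the triangle; degenerate (collinear) triangles return inf and are
    # skipped by the min below.
    n = np.cross(b - a, c - a)
    norm = np.linalg.norm(n)
    if norm < 1e-12:
        return np.inf
    return abs(np.dot(p - a, n / norm))

def generate_seed_vertices(points, L=0.05, n=8, d_min=0.0, d_max=0.03):
    """Steps a1 to a3: voxelize space, estimate each centroid's distance to
    the implicit surface, keep centroids whose distance is in [d_min, d_max]."""
    lo, hi = points.min(axis=0), points.max(axis=0)
    # Centroids of an axis-aligned cube grid, slightly expanded so that
    # centroids also lie off the surface on both sides.
    axes = [np.arange(lo[i] - L / 2, hi[i] + L, L) for i in range(3)]
    seeds = []
    for p in (np.array([x, y, z]) for x in axes[0] for y in axes[1] for z in axes[2]):
        # a2-1: the n vertices nearest to the centroid, sorted by distance
        k = points[np.argsort(np.linalg.norm(points - p, axis=1))[:n]]
        # a2-2 / a2-3: triangles (k1, k2, ki), minimum distance as estimate
        d = min(point_to_triangle_distance(p, k[0], k[1], k[i])
                for i in range(2, len(k)))
        if d_min <= d <= d_max:  # a3: keep centroids near the surface
            seeds.append(p)
    return np.asarray(seeds)
```

On a planar point cloud, every retained centroid lies within the chosen distance band of the plane, which matches the intent of step a3.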
Step b, from the seed vertex set, obtain the set of projection points from the seed vertices onto the implicit surface of the point cloud; specifically:
Step b1, for each seed vertex, normalize the coordinates of the input point cloud with respect to the seed vertex, and obtain the projection direction from the seed vertex to the implicit surface; specifically:
Step b1-1, for each seed vertex, translate the input point cloud and the seed vertex together so that the seed vertex moves to the origin;
Step b1-2, select from the input point cloud the vertices nearest to the seed vertex and arrange their coordinates into a coordinate matrix;
Step b1-3, input the coordinate matrix into a neural network to obtain the projection direction;
The flow of these steps is shown in FIG. 2. Let the coordinates of seed P be (xp, yp, zp), so that the translation vector is T = (-xp, -yp, -zp). The m vertices nearest to P are selected from the input point cloud and their coordinates are arranged into a coordinate matrix; applying T to the coordinate matrix normalizes the position of the point cloud; the normalized coordinate matrix is then input to the neural network, which outputs the projection direction N;
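Steps b1-1 to b1-3 can be sketched as below. The network here is a hypothetical stand-in: the patent does not specify an architecture, so a tiny MLP with random, untrained weights is used purely to illustrate the input and output shapes.

```python
import numpy as np

class DirectionNet:
    """Hypothetical stand-in for the direction-regression network: a tiny
    MLP mapping an (m, 3) coordinate matrix to a unit projection direction.
    Weights are random placeholders, not trained."""
    def __init__(self, m, hidden=32, seed=0):
        rng = np.random.default_rng(seed)
        self.W1 = rng.normal(0.0, 0.1, (3 * m, hidden))
        self.b1 = np.zeros(hidden)
        self.W2 = rng.normal(0.0, 0.1, (hidden, 3))

    def __call__(self, coord_matrix):
        h = np.tanh(coord_matrix.reshape(-1) @ self.W1 + self.b1)
        d = h @ self.W2
        return d / (np.linalg.norm(d) + 1e-12)  # unit direction N

def projection_direction(points, seed_vertex, net, m=16):
    # b1-1: translate so the seed vertex sits at the origin (T = -seed)
    local = points - seed_vertex
    # b1-2: the m nearest vertices arranged into an (m, 3) coordinate matrix
    coord_matrix = local[np.argsort(np.linalg.norm(local, axis=1))[:m]]
    # b1-3: the network predicts the projection direction
    return net(coord_matrix), coord_matrix
```

In a real system the MLP would be trained on the mesh-derived (neighborhood, direction) pairs described in the self-supervised training scheme.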
Step b2, for each seed vertex, normalize the orientation of the input point cloud according to the projection direction, and obtain the projection distance of the seed vertex along that direction; specifically:
Step b2-1, rotate the coordinate matrix and the projection direction together so that the projection direction becomes parallel to a coordinate axis;
Step b2-2, input the rotated coordinate matrix into a neural network to obtain the projection distance;
The flow of these steps is shown in FIG. 3. A coordinate axis is selected, the rotation matrix R from the projection direction N to that axis is computed, and R is applied to the coordinate matrix to normalize the orientation of the input point cloud. The normalized coordinate matrix is then fed to the neural network, which outputs the projection distance L.
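The rotation R that aligns the predicted direction N with a chosen coordinate axis (here the z-axis, an assumption; the patent only says "a certain coordinate axis") can be computed with Rodrigues' formula, a standard construction:

```python
import numpy as np

def rotation_to_z(n):
    """Rotation matrix R such that R @ n is the +z axis (n a unit vector)."""
    z = np.array([0.0, 0.0, 1.0])
    v = np.cross(n, z)                 # rotation axis (unnormalized)
    c = np.dot(n, z)                   # cosine of the rotation angle
    s = np.linalg.norm(v)              # sine of the rotation angle
    if s < 1e-12:                      # n already (anti-)parallel to z
        return np.eye(3) if c > 0 else np.diag([1.0, -1.0, -1.0])
    K = np.array([[0.0, -v[2], v[1]],
                  [v[2], 0.0, -v[0]],
                  [-v[1], v[0], 0.0]])
    # Rodrigues' rotation formula rotating n onto z
    return np.eye(3) + K + K @ K * ((1.0 - c) / s**2)

# Normalizing the orientation of the local neighborhood (step b2-1):
# rotated_matrix = coord_matrix @ rotation_to_z(N).T
```

The rotated coordinate matrix is what would be fed to the distance-regression network in step b2-2.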
Step b3, obtain the projection point of the seed vertex from the projection distance and the projection direction;
In this embodiment, the normalized projection point coordinate is L·N, and the actual projection point coordinate is L·N·R^(-1) - T;
Step c, obtain the target number of vertices from the upsampling ratio, and adjust the number of vertices in the projection point set to the target number using farthest point sampling.
In this embodiment, if the input point cloud has Q vertices and the upsampling ratio is S, the target number of vertices is Q × S.
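The farthest point sampling of step c can be sketched as follows; it greedily keeps the point farthest from the already-selected set until the target count is reached. Starting from index 0 is an arbitrary choice of this sketch.

```python
import numpy as np

def farthest_point_sampling(points, k):
    """Reduce a point set to k points by iteratively selecting the point
    farthest (in Euclidean distance) from those already selected."""
    pts = np.asarray(points, dtype=float)
    chosen = [0]                                 # start from an arbitrary point
    dist = np.linalg.norm(pts - pts[0], axis=1)  # distance to the selected set
    for _ in range(k - 1):
        nxt = int(np.argmax(dist))
        chosen.append(nxt)
        dist = np.minimum(dist, np.linalg.norm(pts - pts[nxt], axis=1))
    return pts[chosen]
```

For an input cloud of Q vertices and ratio S, one would call farthest_point_sampling(projection_points, Q * S) to obtain the final upsampled cloud.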

Claims (6)

1. A point cloud arbitrary-scale upsampling method based on self-supervised deep learning, characterized by comprising the following steps:
step a, reading the input point cloud and generating a set of seed vertices;
step b, obtaining, from the seed vertex set, the set of projection points from the seed vertices onto the implicit surface of the point cloud;
step c, obtaining the target number of vertices from the upsampling ratio, and adjusting the number of vertices in the projection point set to the target number using farthest point sampling.
2. The point cloud arbitrary-scale upsampling method based on self-supervised deep learning according to claim 1, characterized in that step a specifically comprises:
step a1, voxelizing the space containing the input point cloud;
step a2, for each voxel, estimating the distance from its centroid to the implicit surface of the input point cloud;
step a3, selecting the centroids whose distance lies within a given range as the seed vertex set.
3. The point cloud arbitrary-scale upsampling method based on self-supervised deep learning according to claim 2, characterized in that step a2 specifically comprises:
step a2-1, selecting from the input point cloud the vertices nearest to the current centroid, sorted by distance;
step a2-2, starting from the third vertex, forming a triangle from that vertex and the two nearest vertices;
step a2-3, computing the distance from the centroid to each triangle and taking the minimum as the estimated distance.
4. The point cloud arbitrary-scale upsampling method based on self-supervised deep learning according to claim 1, characterized in that step b specifically comprises:
step b1, for each seed vertex, normalizing the coordinates of the input point cloud with respect to the seed vertex, and obtaining the projection direction from the seed vertex to the implicit surface of the point cloud;
step b2, for each seed vertex, normalizing the orientation of the input point cloud according to the projection direction, and obtaining the projection distance of the seed vertex along that direction;
step b3, obtaining the projection point of the seed vertex from the projection distance and the projection direction.
5. The point cloud arbitrary-scale upsampling method based on self-supervised deep learning according to claim 4, characterized in that step b1 specifically comprises:
step b1-1, for each seed vertex, translating the input point cloud and the seed vertex together so that the seed vertex moves to the origin;
step b1-2, selecting from the input point cloud the vertices nearest to the seed vertex and arranging their coordinates into a coordinate matrix;
step b1-3, inputting the coordinate matrix into a neural network to obtain the projection direction.
6. The point cloud arbitrary-scale upsampling method based on self-supervised deep learning according to claim 4, characterized in that step b2 specifically comprises:
step b2-1, rotating the coordinate matrix and the projection direction together so that the projection direction becomes parallel to a coordinate axis;
step b2-2, inputting the rotated coordinate matrix into a neural network to obtain the projection distance.
Application CN202210064957.7A, priority date 2022-01-20, filing date 2022-01-20: Point cloud arbitrary scale up-sampling method based on self-supervision deep learning. Status: Active. Granted as CN114418852B.

Priority Applications (1)

Application number CN202210064957.7A; priority date 2022-01-20; filing date 2022-01-20; title: Point cloud arbitrary scale up-sampling method based on self-supervision deep learning

Publications (2)

CN114418852A, published 2022-04-29
CN114418852B, published 2024-04-12

Family

ID=81276279

Family Applications (1)

Application CN202210064957.7A (Active), filed 2022-01-20, granted as CN114418852B

Country Status (1)

CN: CN114418852B

Citations (5)

* Cited by examiner, † Cited by third party

CN106845561A * 2017-03-13 2017-06-13 Harbin Institute of Technology: A complex curved-surface object classification method based on point cloud VFH descriptors and a neural network
CN110502979A * 2019-07-11 2019-11-26 Harbin Institute of Technology: A LiDAR waveform signal classification method based on decision trees
US20200090357A1 * 2018-09-14 2020-03-19 Lucas PAGÉ-CACCIA: Method and system for generating synthetic point cloud data using a generative model
CN111724478A * 2020-05-19 2020-09-29 South China University of Technology: A point cloud up-sampling method based on deep learning
CN112581515A * 2020-11-13 2021-03-30 Shanghai Jiao Tong University: Outdoor scene point cloud registration method based on graph neural network


Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
ANGEL X. CHANG et al.: "ShapeNet: An Information-Rich 3D Model Repository", arXiv, 9 September 2015 *
DAVID S. ROSENBERG et al.: "Multiview Point Cloud Kernels for Semisupervised Learning", IEEE, 31 December 2009 *
LI, XY et al.: "A Snapshot-based Approach for Self-supervised Feature Learning and Weakly-supervised Classification on Point Cloud Data", Web of Science, 8 July 2021 *
梁振斌; 熊风光; 韩燮; 陶谦: "Point cloud matching based on deep learning", Computer Engineering and Design, no. 06, 15 June 2020 *

Also Published As

CN114418852B (en), published 2024-04-12

Similar Documents

Publication Publication Date Title
CN111784821B (en) Three-dimensional model generation method and device, computer equipment and storage medium
CN112002014A (en) Three-dimensional face reconstruction method, system and device for fine structure
KR20100136604A (en) Real-time visualization system of 3 dimension terrain image
Lasserre et al. A neuron membrane mesh representation for visualization of electrophysiological simulations
CN104157011A (en) Modeling method for three-dimensional terrain
CN111028335B (en) Point cloud data block surface patch reconstruction method based on deep learning
EP2528042B1 (en) Method and device for the re-meshing of 3D polygon models
CN116416376A (en) Three-dimensional hair reconstruction method, system, electronic equipment and storage medium
Yoo et al. Image‐Based Modeling of Urban Buildings Using Aerial Photographs and Digital Maps
CN117333637B (en) Modeling and rendering method, device and equipment for three-dimensional scene
Favorskaya et al. Realistic 3D-modeling of forest growth with natural effect
CN111881919B (en) Line element intelligent simplification method and device based on tracking type grid subdivision
Buck et al. Ignorance is bliss: flawed assumptions in simulated ground truth
CN114418852B (en) Point cloud arbitrary scale up-sampling method based on self-supervision deep learning
Schoor et al. VR based visualization and exploration of plant biological data
CN110426688A (en) A kind of SAR analogue echoes method based on terrain backgrounds target
CN112002019B (en) Method for simulating character shadow based on MR mixed reality
CN105321205A (en) Sparse key point-based parametric human model reconstruction method
Davis et al. 3d modeling of cities for virtual environments
CN114998497A (en) Image rendering method, system, equipment and medium based on grid data
Weier et al. Generating and rendering large scale tiled plant populations
CN114140508A (en) Method, system and equipment for generating three-dimensional reconstruction model and readable storage medium
JP6802129B2 (en) Information processing equipment, methods and programs
Dierenbach et al. Next-Best-View method based on consecutive evaluation of topological relations
DE112020007352T5 (en) GOVERNANCE FACILITIES, PROGRAM AND GOVERNANCE PROCEDURES

Legal Events

Code: Description
PB01: Publication
SE01: Entry into force of request for substantive examination
GR01: Patent grant