CN113808224A - Point cloud geometric compression method based on block division and deep learning - Google Patents

Point cloud geometric compression method based on block division and deep learning

Info

Publication number
CN113808224A
CN113808224A (application CN202110947297.2A)
Authority
CN
China
Prior art keywords
point cloud
point
encoder
points
block
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110947297.2A
Other languages
Chinese (zh)
Inventor
高攀
游康
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing University of Aeronautics and Astronautics
Original Assignee
Nanjing University of Aeronautics and Astronautics
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing University of Aeronautics and Astronautics filed Critical Nanjing University of Aeronautics and Astronautics
Priority to CN202110947297.2A priority Critical patent/CN113808224A/en
Publication of CN113808224A publication Critical patent/CN113808224A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 9/00 Image coding
    • G06T 9/001 Model-based coding, e.g. wire frame
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques
    • G06F 18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F 18/2413 Classification techniques relating to the classification model based on distances to training or reference patterns
    • G06F 18/24147 Distances to closest patterns, e.g. nearest neighbour classification
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G06N 3/088 Non-supervised learning, e.g. competitive learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 9/00 Image coding
    • G06T 9/002 Image coding using neural networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Molecular Biology (AREA)
  • Computational Linguistics (AREA)
  • Multimedia (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Mathematical Physics (AREA)
  • General Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

The invention discloses a point cloud geometric compression method based on block division and deep learning, comprising the following steps: (1) partition the point cloud using farthest point sampling (FPS) and K-nearest neighbor (KNN) operations; (2) feed the S point cloud blocks obtained in step (1) sequentially to the encoder of an autoencoder, which compresses the S blocks into S low-dimensional feature vectors; quantize the feature vectors and concatenate them with the coordinates of the S structure points sampled in step (1) to form the final hidden-layer representation of the point cloud; (3) entropy code the final hidden-layer representation of the point cloud into a final bit stream and transmit it to the decoding end; (4) the decoding end receives the transmitted bit stream and entropy-decodes it into S low-dimensional feature vectors and S structure point coordinates. The invention achieves a better point cloud compression effect.

Description

Point cloud geometric compression method based on block division and deep learning
Technical Field
The invention relates to the technical field of multimedia compression, in particular to a point cloud geometric compression method based on block division and deep learning.
Background
Three-dimensional point clouds are an important representation of objects in three-dimensional space. With the advent of virtual reality and mixed reality technologies, point clouds are receiving more and more attention. A point cloud is a set of points in three-dimensional space, each point specified by (x, y, z) coordinates and optional attributes (e.g., color, normal vector). The amount of point cloud data is usually large, which places high demands on the performance of point cloud compression methods.
Current research on point cloud compression can be roughly divided into three categories: LiDAR point cloud compression, point cloud geometric information compression, and video-based point cloud compression. However, conventional lossy point cloud compression methods typically perform poorly at low bit rates. For example, the number of points produced by an octree-based compression method drops sharply as the tree depth decreases, and a mosaic-like blocking effect appears.
The autoencoder is a data compression model based on machine learning that can automatically learn the analysis transform and the synthesis transform of point cloud data. However, most existing point cloud compression autoencoders are based on voxelization and three-dimensional convolution, which suffer from low memory efficiency and high time complexity. Moreover, voxelization and three-dimensional convolution cannot effectively handle irregular or sparse point clouds. Recently proposed deep neural network architectures such as PointNet and PointNet++ can extract point cloud features directly from a point set without voxelization. PointNet uses a symmetric function to extract features from an unordered point cloud, but it lacks a mechanism for capturing local features. PointNet++, an improved version of PointNet, obtains local features of the point cloud at different scales by repeatedly applying sampling, grouping, and PointNet operations, analogous to stacked convolutions on a two-dimensional image, and achieves better results in point cloud classification and segmentation. Although methods exist that compress point clouds using a PointNet++-like hierarchy, they only work on sparse point clouds.
Disclosure of Invention
The technical problem to be solved by the invention is to provide a point cloud geometric compression method based on block division and deep learning that achieves a better point cloud compression effect.
In order to solve the above technical problem, the invention provides a point cloud geometric compression method based on block division and deep learning, comprising the following steps:
(1) partition the point cloud using farthest point sampling (FPS) and K-nearest neighbor (KNN) operations: sample S structure points from the original point cloud using farthest point sampling, then, taking each structure point as a center, obtain its K nearest neighbors in the original point cloud, thereby dividing the original point cloud into S blocks of K points each;
(2) feed the S point cloud blocks obtained in step (1) sequentially to the encoder of an autoencoder, which compresses the S blocks into S low-dimensional feature vectors; quantize the feature vectors and concatenate them with the coordinates of the S structure points sampled in step (1) to form the final hidden-layer representation of the point cloud;
(3) entropy code the final hidden-layer representation of the point cloud into a final bit stream and transmit it to the decoding end;
(4) the decoding end receives the transmitted bit stream and entropy-decodes it into S low-dimensional feature vectors and S structure point coordinates; the decoder of the autoencoder decodes the S low-dimensional feature vectors into S point cloud blocks, and finally the S point cloud blocks are translated by their corresponding structure point coordinates and merged to obtain the final point cloud reconstruction result.
Preferably, in step (1), partitioning the point cloud using farthest point sampling (FPS) and K-nearest neighbor (KNN) operations specifically comprises: for an original point cloud of N points, sampling S structure points from it using farthest point sampling; for each structure point, finding its K nearest neighbors in the original point cloud using KNN and subtracting the structure point's coordinates from those of the K neighbors, i.e., moving the K points into a coordinate system with the structure point as origin, thereby obtaining one block; for S structure points, S blocks can be generated, each with K points, and each block may be regarded as a "mini" point cloud.
Preferably, in steps (2) and (4), the autoencoder is an end-to-end trained deep neural network whose input and output are the original point cloud block and the reconstructed point cloud block, respectively; the autoencoder is divided into three parts: an analysis transform encoder, quantization, and a synthesis transform decoder, and it uses the chamfer distance to constrain the error between the reconstructed block and the input block. The loss function of the autoencoder is defined as:
Loss = D_CD + λR
where D_CD represents the chamfer distance, R represents the bit rate estimated from the quantized hidden-layer features, and λ represents a Lagrange multiplier.
The invention has the following beneficial effects: the invention provides a method for dividing a point cloud into small blocks; since the points of a point cloud are independent of one another and lack the regular spatial correlation of voxels, the farthest point sampling and K-nearest neighbor algorithms are used to divide a complex point cloud into several small blocks of simple shape, so that the local structure information of the point cloud is better captured in the subsequent reconstruction; the block-based autoencoder structure compresses the point cloud, obtains better results in the reconstruction process, and allows the number of points in the reconstructed point cloud to be adjusted freely, which is of great reference significance for other problems related to point cloud reconstruction.
Drawings
FIG. 1 is a schematic diagram of the compression process of the present invention.
FIG. 2 is a structural diagram of the autoencoder of the present invention.
Fig. 3(a) is a schematic diagram of an input point cloud containing 8192 points according to the present invention.
FIG. 3(b) is a diagram illustrating 32 structure points sampled from an input point cloud by using the farthest point sampling according to the present invention.
Fig. 3(c) is a diagram illustrating the blocking result when K-nearest neighbors (K = 256) are applied to the 32 structure points according to the present invention.
Fig. 3(d) is a diagram illustrating the blocking result when K-nearest neighbors (K = 512) are applied to the 32 structure points according to the present invention.
Fig. 4 is a schematic diagram illustrating a training process visualization according to the present invention.
FIG. 5(a) is a schematic diagram of the "airplane" point cloud input by the present invention.
FIG. 5(b) is a schematic diagram of the reconstructed "airplane" point cloud of the present invention.
FIG. 5(c) is a schematic diagram of the "bathtub" point cloud inputted by the present invention.
FIG. 5(d) is a schematic view of the reconstructed "bathtub" point cloud of the present invention.
Detailed Description
A point cloud geometric compression method based on block division and deep learning comprises the following steps:
(1) partition the point cloud using farthest point sampling (FPS) and K-nearest neighbor (KNN) operations: sample S structure points from the original point cloud using farthest point sampling, then, taking each structure point as a center, obtain its K nearest neighbors in the original point cloud, thereby dividing the original point cloud into S blocks of K points each;
(2) feed the S point cloud blocks obtained in step (1) sequentially to the encoder of an autoencoder, which compresses the S blocks into S low-dimensional feature vectors; quantize the feature vectors and concatenate them with the coordinates of the S structure points sampled in step (1) to form the final hidden-layer representation of the point cloud;
(3) entropy code the final hidden-layer representation of the point cloud into a final bit stream and transmit it to the decoding end;
(4) the decoding end receives the transmitted bit stream and entropy-decodes it into S low-dimensional feature vectors and S structure point coordinates; the decoder of the autoencoder decodes the S low-dimensional feature vectors into S point cloud blocks, and finally the S point cloud blocks are translated by their corresponding structure point coordinates and merged to obtain the final point cloud reconstruction result.
As shown in Fig. 1, the point cloud is first divided into S blocks of K points each. In the encoding process, the encoding result of the autoencoder is concatenated with the S sampled points to form the hidden-layer representation of the point cloud, a matrix of final size (S, 3 + d). During decoding, the hidden representation of the point cloud is split into the same two parts as at the encoder output. Finally, the decoded blocks are assembled to produce a point cloud reconstruction of matrix size (S × k, 3), where k denotes the number of points in each reconstructed block. Note that the number of points in an input block and in a reconstructed block may be set to be unequal, i.e., k need not equal K.
Unlike two-dimensional image blocking methods, the method proposed by the invention uses Farthest Point Sampling (FPS) and K-Nearest Neighbors (KNN) to segment the point cloud into several blocks of the same resolution. The specific blocking process is described below.
First, for an original point cloud having N points, S structural points are sampled from the original point cloud using the farthest point sampling.
Next, for each structure point, its K nearest neighbors are found in the original point cloud using KNN. The structure point's coordinates are subtracted from those of the K neighbors, i.e., the K points are moved into a coordinate system with the structure point as origin, yielding one block. For S structure points, S blocks can be generated, each with K points. Each block can be viewed as a "mini" point cloud.
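For illustration, the following Python sketch (using NumPy; the function names are ours and not part of the patent) implements the FPS-and-KNN blocking described above. A practical implementation would typically accelerate the neighbor search with a KD-tree.

import numpy as np

def farthest_point_sampling(points, S):
    # Iteratively pick the point farthest from the already-selected set.
    # points: (N, 3) array; returns S indices into points.
    N = points.shape[0]
    selected = np.zeros(S, dtype=np.int64)
    dist = np.full(N, np.inf)
    selected[0] = 0  # deterministic first seed; a random seed also works
    for i in range(1, S):
        diff = points - points[selected[i - 1]]
        dist = np.minimum(dist, np.einsum('ij,ij->i', diff, diff))
        selected[i] = int(np.argmax(dist))
    return selected

def partition_point_cloud(points, S, K):
    # Divide an (N, 3) point cloud into S blocks of K points each,
    # translating every block into the local frame of its structure point.
    idx = farthest_point_sampling(points, S)
    structure_points = points[idx]                    # (S, 3)
    blocks = []
    for c in structure_points:
        d2 = np.sum((points - c) ** 2, axis=1)
        knn = np.argsort(d2)[:K]                      # K nearest neighbors of c
        blocks.append(points[knn] - c)                # shift to local origin
    return structure_points, np.stack(blocks)         # (S, 3), (S, K, 3)

With S = 32 and K = 512, calling partition_point_cloud(pc, 32, 512) on an 8192-point cloud reproduces the blocking illustrated in Fig. 3(d).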
Figs. 3(a)-(d) illustrate the roles of S and K in block segmentation. Fig. 3(a) shows a point cloud of 8192 points; Fig. 3(b) shows the FPS sampling result for S = 32; Fig. 3(c) shows the blocking result for S = 32 and K = 256; and Fig. 3(d) shows the blocking result for S = 32 and K = 512. It can be seen that the point cloud structure cannot be completely captured even when the product of S and K exactly equals the number of input points. It is necessary to use S × K = αN (α > 1) to avoid missing some points of the original point cloud. In our implementation, α = 2.
When the number of blocks S is large and the block resolution K is small, the autoencoder is better suited to high-quality reconstruction of the point cloud at high bit rates; when S is small and K is large, it is better suited to compressing the point cloud to a smaller size at low bit rates.
The invention designs a PointNet-based autoencoder that performs the transformation and compression of point cloud blocks. The autoencoder comprises an analysis transform (f_a), a quantization function (Q), and a synthesis transform (f_s). The analysis transform extracts hidden-layer features from a simple block; the quantization function quantizes the hidden-layer features to facilitate further compression; and the synthesis transform reconstructs the quantized features into a block.
The analysis transform first uses a set abstraction (SA) layer to extract a local feature for each point within its local region, and then uses PointNet to extract higher-level features at the global scale. For quantization, during training we add noise drawn uniformly from [-0.5, 0.5] to each element of the hidden-layer features extracted by the analysis transform; during testing, the hidden-layer features are rounded to enable subsequent entropy coding. The synthesis transform is a multi-layer perceptron composed of several fully connected layers; its final step reshapes the perceptron output into a two-dimensional matrix of block geometry information.
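A minimal PyTorch sketch of this quantization scheme (uniform noise as a differentiable stand-in for rounding during training, true rounding at test time) is given below; the function name is ours.

import torch

def quantize(latent, training):
    # Training: add uniform noise in [-0.5, 0.5) so the operation stays
    # differentiable while mimicking the rounding error distribution.
    # Testing: round to integers so the features can be entropy coded.
    if training:
        return latent + (torch.rand_like(latent) - 0.5)
    return torch.round(latent)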
The autoencoder uses the chamfer distance (D_CD) to constrain the error between the reconstruction and the input block. As shown in Fig. 2, the final loss function of the autoencoder is Loss = D_CD + λR, where R represents the bit rate estimated from the probability distribution of the hidden-layer features. Note that λ has little influence on the compression effect; we mainly adjust the compression ratio by changing the size of the autoencoder's bottleneck dimension.
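This rate-distortion loss can be sketched in PyTorch as follows; the chamfer distance uses the standard symmetric definition, and the rate term is assumed to be supplied by an entropy model, which is not shown here.

import torch

def chamfer_distance(x, y):
    # x: (B, K, 3) input blocks; y: (B, k, 3) reconstructed blocks.
    # Mean squared distance to the nearest neighbor, in both directions.
    d = torch.cdist(x, y) ** 2          # (B, K, k) pairwise squared distances
    return d.min(dim=2).values.mean() + d.min(dim=1).values.mean()

def rd_loss(x, x_hat, rate, lam):
    # Loss = D_CD + lambda * R, as defined above.
    return chamfer_distance(x, x_hat) + lam * rate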
Examples
The present invention will be described in further detail with reference to a specific embodiment. For ease of explanation, and without loss of generality, the following assumptions are made:
the method proposed by the invention is intended to be trained and tested by using a ModelNet40 data set. The ModelNet40 dataset contains 9835 training point clouds created by uniform sampling on each 3D model in ModelNet40 and 2467 test point clouds, each point cloud containing 8192 points.
This embodiment controls the compression ratio of the input point cloud through S, K, and the bottleneck dimension d of the autoencoder. Experiments show that for the 8192-point clouds used in this embodiment, compression performance is good when the bottleneck dimension is between 8 and 16.
Taking point cloud reconstruction at a low bit rate as an example, we set S = 32, K = 512, and d = 16, i.e., α = 2 and S × K = 2N, in order to capture the overall structure of the point cloud more completely, where N = 8192 is the number of points in the input point cloud. This embodiment first divides each point cloud in the training set into 32 blocks of 512 points each, converting the 9835 point clouds into 9835 × 32 blocks. We use these 9835 × 32 blocks as training data for the autoencoder; such a large amount of data significantly alleviates the overfitting problem often encountered in model training. Meanwhile, since each block has a simple shape, the autoencoder achieves a good reconstruction effect without a large number of parameters. The parameters of the autoencoder in this embodiment are as follows:
SA(K=512, 8, [32, 64, 128]) → PN(64, 32, d=16) → Quantization → FC(d=16, 128) → ReLU → FC(128, 256) → ReLU → FC(256, 256×3)
the parameters of each operation SA (representing the set feature extraction layer), PN (representing the PointNet layer), and FC (representing the fully connected layer) are defined as follows:
set feature extraction layer: SA (number of blocks, number of groups, shared multi-layer perceptron size)
PointNet layer: PN (sharing multi-layer perceptron size)
Full connection layer: FC (input feature dimension, output feature dimension)
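A simplified PyTorch sketch with the tensor shapes implied by the listing above follows. It is an illustration under stated assumptions rather than the patented network: the SA layer's local grouping stage is omitted, so the encoder reduces to a plain PointNet-style shared MLP with max pooling, while the decoder follows the FC → ReLU chain of the listing exactly.

import torch
import torch.nn as nn

class BlockEncoder(nn.Module):
    # Simplified analysis transform f_a: a shared per-point MLP followed by
    # a max-pooling symmetric function. The patent's SA layer additionally
    # groups points locally before the global PointNet stage.
    def __init__(self, d=16):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Conv1d(3, 64, 1), nn.ReLU(),
            nn.Conv1d(64, 32, 1), nn.ReLU(),
            nn.Conv1d(32, d, 1),
        )

    def forward(self, x):                        # x: (B, K, 3) input blocks
        f = self.mlp(x.transpose(1, 2))          # (B, d, K) per-point features
        return f.max(dim=2).values               # (B, d) latent per block

class BlockDecoder(nn.Module):
    # Synthesis transform f_s per the listing:
    # FC(16, 128) -> ReLU -> FC(128, 256) -> ReLU -> FC(256, 256*3).
    def __init__(self, d=16, k=256):
        super().__init__()
        self.k = k
        self.mlp = nn.Sequential(
            nn.Linear(d, 128), nn.ReLU(),
            nn.Linear(128, 256), nn.ReLU(),
            nn.Linear(256, k * 3),
        )

    def forward(self, z):                        # z: (B, d) quantized latent
        return self.mlp(z).view(-1, self.k, 3)   # (B, k, 3) block geometry

Training pairs one such encoder and decoder with the quantize and rd_loss sketches above, e.g. x_hat = BlockDecoder()(quantize(BlockEncoder()(x), training=True)).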
After the autoencoder is trained as described above, it is applied to the proposed compression process. The reconstruction results of the autoencoder over different training iterations are shown in Fig. 4.
During compression, we divide the input point cloud into 32 blocks using FPS and KNN and feed them to the trained autoencoder. The autoencoder converts the 32 blocks into 32 hidden-layer representations, which are concatenated with the coordinates of the 32 sampled structure points into a (32, 16+3) two-dimensional matrix serving as the final representation of the input point cloud. This two-dimensional matrix is entropy coded and transmitted to the decoding end.
In the decoding process, the (32, 16+3) two-dimensional matrix is first split into a set of 32 16-dimensional block latent representations and the 32 structure point coordinates, a matrix of size (32, 3). The decoder of the autoencoder reconstructs the 32 16-dimensional block representations into 32 blocks, each a matrix of size (256, 3). The 32 blocks are then added to their corresponding structure point coordinates, i.e., shifted back to their original positions in the input point cloud, and their union is taken, yielding a coordinate set of two-dimensional matrix size (32 × 256, 3) and completing the overall reconstruction of the original point cloud. Examples of three-dimensional point cloud reconstruction results are shown in Figs. 5(a)-(d).
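The final reassembly step admits a short sketch (NumPy; the function name is ours): each decoded block is shifted back by its structure point and the union is taken.

import numpy as np

def reassemble(blocks, structure_points):
    # blocks: (S, k, 3) decoded blocks in local coordinates;
    # structure_points: (S, 3) sampled centers. Shift each block back to
    # its original position and merge into one (S * k, 3) point cloud.
    return (blocks + structure_points[:, None, :]).reshape(-1, 3)

For S = 32 and k = 256 this yields exactly the (32 × 256, 3) coordinate set described above.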

Claims (3)

1. A point cloud geometric compression method based on block division and deep learning is characterized by comprising the following steps:
(1) partitioning the point cloud using farthest point sampling (FPS) and K-nearest neighbor (KNN) operations: sampling S structure points from the original point cloud using farthest point sampling, then, taking each structure point as a center, obtaining its K nearest neighbors in the original point cloud, thereby dividing the original point cloud into S blocks of K points each;
(2) sequentially feeding the S point cloud blocks obtained in step (1) to the encoder of an autoencoder, which compresses the S blocks into S low-dimensional feature vectors; quantizing the feature vectors and concatenating them with the coordinates of the S structure points sampled in step (1) to form the final hidden-layer representation of the point cloud;
(3) entropy coding the final hidden-layer representation of the point cloud into a final bit stream and transmitting it to the decoding end;
(4) the decoding end receiving the transmitted bit stream and entropy-decoding it into S low-dimensional feature vectors and S structure point coordinates; the decoder of the autoencoder decoding the S low-dimensional feature vectors into S point cloud blocks, and finally translating the S point cloud blocks by their corresponding structure point coordinates and merging them to obtain the final point cloud reconstruction result.
2. The point cloud geometric compression method based on block division and deep learning of claim 1, wherein in step (1), partitioning the point cloud using farthest point sampling (FPS) and K-nearest neighbor (KNN) operations specifically comprises: for an original point cloud of N points, sampling S structure points from it using farthest point sampling; for each structure point, finding its K nearest neighbors in the original point cloud using KNN and subtracting the structure point's coordinates from those of the K neighbors, i.e., moving the K points into a coordinate system with the structure point as origin, thereby obtaining one block; for S structure points, S blocks are generated, each with K points, and each block is regarded as a "mini" point cloud.
3. The method of claim 1, wherein in steps (2) and (4), the autoencoder is an end-to-end trained deep neural network whose input and output are the original point cloud block and the reconstructed point cloud block, respectively; the autoencoder is divided into three parts: an analysis transform encoder, quantization, and a synthesis transform decoder, and it uses the chamfer distance to constrain the error between the reconstructed block and the input block; the loss function of the autoencoder is defined as:
Loss = D_CD + λR
where D_CD represents the chamfer distance, R represents the bit rate estimated from the quantized hidden-layer features, and λ represents a Lagrange multiplier.
CN202110947297.2A 2021-08-18 2021-08-18 Point cloud geometric compression method based on block division and deep learning Pending CN113808224A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110947297.2A CN113808224A (en) 2021-08-18 2021-08-18 Point cloud geometric compression method based on block division and deep learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110947297.2A CN113808224A (en) 2021-08-18 2021-08-18 Point cloud geometric compression method based on block division and deep learning

Publications (1)

Publication Number Publication Date
CN113808224A (en) 2021-12-17

Family

ID=78893744

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110947297.2A Pending CN113808224A (en) 2021-08-18 2021-08-18 Point cloud geometric compression method based on block division and deep learning

Country Status (1)

Country Link
CN (1) CN113808224A (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110349230A (en) * 2019-07-15 2019-10-18 北京大学深圳研究生院 A method of the point cloud Geometric compression based on depth self-encoding encoder
CN112581552A (en) * 2020-12-14 2021-03-30 深圳大学 Self-adaptive blocking point cloud compression method and device based on voxels
CN112672168A (en) * 2020-12-14 2021-04-16 深圳大学 Point cloud compression method and device based on graph convolution
CN113160068A (en) * 2021-02-23 2021-07-23 清华大学 Point cloud completion method and system based on image
CN113011511A (en) * 2021-03-29 2021-06-22 江苏思玛特科技有限公司 Sample generation method based on deep learning multispectral LiDAR data classification

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
PAULO DE OLIVEIRA RENTE et al., "Graph-Based Static 3D Point Clouds Geometry Coding", IEEE Transactions on Multimedia, vol. 21, no. 2, 25 June 2018, pages 284-299, XP011708069, DOI: 10.1109/TMM.2018.2859591 *
SIHENG CHEN et al., "Deep Unsupervised Learning of 3D Point Clouds via Graph Topology Inference and Filtering", IEEE Transactions on Image Processing, 11 December 2019, pages 3193-3198 *
WEI YAN et al., "Deep AutoEncoder-based Lossy Geometry Compression for Point Clouds", arXiv, 18 April 2019, pages 4321-4328 *
AI DA et al., "A Survey of 3D Point Cloud Data Compression Technology" (in Chinese), Journal of Xi'an University of Posts and Telecommunications, vol. 26, no. 1, 31 January 2021, pages 90-96 *

Similar Documents

Publication Publication Date Title
US10410377B2 (en) Compressing a signal that represents a physical attribute
Golla et al. Real-time point cloud compression
WO2022063055A1 (en) 3d point cloud compression system based on multi-scale structured dictionary learning
US6614428B1 (en) Compression of animated geometry using a hierarchical level of detail coder
CN110691243A (en) Point cloud geometric compression method based on deep convolutional network
JP2001500676A (en) Data compression based on wavelets
CN108028945A (en) The apparatus and method of conversion are performed by using singleton coefficient update
CN105374054A (en) Hyperspectral image compression method based on spatial spectrum characteristics
CN104299256B (en) Almost-lossless compression domain volume rendering method for three-dimensional volume data
JP3828640B2 (en) Image signal conversion method
Bletterer et al. Point cloud compression using depth maps
CN115102934B (en) Decoding method, encoding device, decoding equipment and storage medium for point cloud data
CN113808224A (en) Point cloud geometric compression method based on block division and deep learning
CN113763539B (en) Implicit function three-dimensional reconstruction method based on image and three-dimensional input
Cao et al. What’s new in Point Cloud Compression?
CN115065822A (en) Point cloud geometric information compression system, method and computer system
CN114708343A (en) Three-dimensional point cloud coding and decoding method, compression method and device based on map dictionary learning
Jain et al. Compressed volume rendering using deep learning
Marvie et al. Coding of dynamic 3D meshes
KR100400608B1 (en) Encoding method for 3-dimensional voxel model by using skeletons
CN114998457B (en) Image compression method, image decompression method, related device and readable storage medium
CN114025146B (en) Dynamic point cloud geometric compression method based on scene flow network and time entropy model
WO2023179706A1 (en) Encoding method, decoding method, and terminal
Tabra et al. FPGA implementation of new LM-SPIHT colored image compression with reduced complexity and low memory requirement compatible for 5G
US20230377208A1 (en) Geometry coordinate scaling for ai-based dynamic point cloud coding

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination