CN113573060B - Point cloud geometric coding method and system for deep learning self-adaptive feature dimension
- Publication number: CN113573060B (application CN202110854530.2A)
- Authority: CN (China)
- Prior art keywords: point cloud, voxelized, characteristic, learning, output
- Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- H04N19/13 — Adaptive entropy coding, e.g. adaptive variable length coding [AVLC] or context adaptive binary arithmetic coding [CABAC]
- H04N19/124 — Quantisation
- G06N3/045 — Combinations of networks
- G06N3/048 — Activation functions
- G06N3/08 — Learning methods
Abstract
The invention relates to a point cloud geometric coding method for deep learning adaptive feature dimensions. The original point cloud is first partitioned into blocks and voxelized. The encoder of an auto-encoder then maps the point cloud into N×8×8×8 feature coefficients. For each candidate N, a cost function is computed from the mean square error of the point cloud reconstructed by the decoder and the corresponding code rate; the feature dimension that minimizes the cost function is selected, and the corresponding bit stream is output. The decoder is then used to reconstruct the point cloud. Finally, the true block point cloud is restored from the reconstructed point cloud and the recorded minimum value, and the output block point clouds are fused to obtain the decoded point cloud. The method jointly optimizes the distortion and code rate of the point cloud, can quickly obtain the optimal feature dimension, and thereby improves the efficiency and quality of point cloud compression.
Description
Technical Field
The invention relates to the field of 3D point cloud digitization, in particular to a point cloud geometric coding method and system for deep learning self-adaptive feature dimensions.
Background
A point cloud is a digitized sample of the real world in three dimensions: formally, a set of points, each consisting of geometric information (x, y, z) and attribute information (e.g., R, G, B, reflection intensity). 3D point clouds digitally reconstruct the real three-dimensional world and are widely applied in virtual reality, augmented reality, autonomous driving, medical treatment, high-precision maps and other fields. However, whereas a conventional 2D image is a regularly arranged set of attribute values (color information such as R, G, B, or gray values), a point cloud is an unordered set of points whose size is at least an order of magnitude larger than that of a 2D image. Effective point cloud compression is therefore very challenging, and essential for storing and transmitting point clouds.
Disclosure of Invention
In view of the above, the present invention provides a method and a system for point cloud geometric encoding with deep learning adaptive feature dimension to solve the above problems.
To achieve this purpose, the invention adopts the following technical solution:
a point cloud geometric coding method for deep learning self-adaptive feature dimensions comprises the following steps:
s1, partitioning and voxelization processing is carried out on the original point cloud;
s2, carrying out point cloud encoding and reconstruction through a self-encoder according to the voxelized point cloud obtained in the step S1;
S3, calculating, for different feature dimensions N, the cost function S(N) formed from the mean square error MSE_N between the reconstructed point cloud and the original point cloud and the corresponding code rate, and taking the feature dimension that minimizes the cost function as the optimal feature dimension N*;
S4, entropy-decoding the bit stream corresponding to the optimal feature dimension, inverse-quantizing the obtained integers into the original floating-point coefficients, and inputting the feature coefficients into the decoder corresponding to the three-layer auto-encoder so that they are mapped back to the point cloud, obtaining the final reconstructed point cloud;
s5, entropy coding is carried out on the minimum value of the whole point cloud obtained in the step S1;
and S6, fusing the block point clouds to form the final decoded point cloud, based on the final reconstructed point cloud obtained in step S4 and the original block point cloud restored using the minimum value entropy-decoded in step S5.
Further, the step S1 specifically includes: the original point cloud is evenly divided into 64×64×64 blocks, 3D convolution is used to extract point cloud features, the point cloud is voxelized, and occupied voxels are represented by "1" and empty voxels by "0".
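As an illustration, the blocking-and-voxelization of step S1 can be sketched as follows; this is a minimal sketch, and the helper name `voxelize_block` and the sample points are hypothetical, not taken from the patent:

```python
import numpy as np

def voxelize_block(points, block_size=64):
    """Voxelize one block of points into a binary 64x64x64 occupancy grid.

    Occupied voxels are marked 1 and empty voxels 0, matching the
    "1"/"0" convention described in step S1. The block minimum is
    returned because the patent records it for later restoration.
    """
    grid = np.zeros((block_size,) * 3, dtype=np.uint8)
    # Shift the block so its minimum corner sits at the origin.
    minimum = points.min(axis=0)
    idx = np.floor(points - minimum).astype(int)
    idx = np.clip(idx, 0, block_size - 1)
    grid[idx[:, 0], idx[:, 1], idx[:, 2]] = 1
    return grid, minimum

points = np.array([[0.2, 0.1, 0.3], [10.5, 20.4, 30.9], [10.6, 20.2, 30.7]])
grid, minimum = voxelize_block(points)
print(int(grid.sum()))  # 2 — the last two points fall into the same voxel
```

Note that several points can map to the same voxel, which is one source of the lossy behavior of voxel-based coding.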
Further, the step S2 specifically includes:
S201, for the input 64×64×64 voxelized point cloud, learning its features with 32 5×5×5 3D convolution kernels with a stride of 2, so that the output feature vector becomes 32×32×32×32, using the sigmoid function as the activation function;
S202, learning the features of the voxelized point cloud with 32 5×5×5 3D convolution kernels with a stride of 2, so that the output feature vector becomes 32×16×16×16;
S203, learning the features of the voxelized point cloud with N 5×5×5 3D convolution kernels with a stride of 2, so that the output feature vector becomes N×8×8×8;
S204, uniformly quantizing the N×8×8×8 floating-point feature coefficients to integers between 2^2 and 2^16;
S205, entropy-coding the quantized feature coefficients;
S206, decoding the feature coefficients;
S207, inverse-quantizing them back to the original floating-point numbers;
S208, learning the features of the voxelized point cloud with N 5×5×5 3D deconvolution kernels with a stride of 2, so that the output feature vector becomes N×16×16×16, using the sigmoid function as the activation function;
S209, learning the features of the voxelized point cloud with 32 5×5×5 3D deconvolution kernels with a stride of 2, so that the output feature vector becomes 32×32×32×32, using the sigmoid function as the activation function;
S210, learning the features of the voxelized point cloud with 1 5×5×5 3D deconvolution kernel with a stride of 2, so that the output feature vector becomes 1×64×64×64; the activation function is the sigmoid function, and the output is rounded to the voxelized values 0 and 1.
Further, the cost function S (N) is specifically as follows:
N* = argmin_N S(N) = argmin_N (MSE_N + α·RATE_N)
wherein N* is the optimal feature dimension to be solved; N is the candidate feature dimension, taking the values 8, 16, 32, 64, 128 and 256; MSE_N is the mean square error between the original point cloud and the reconstructed point cloud; α is a coefficient that keeps the code rate and the mean square error on the same order of magnitude, with α = 10^λ, where λ is an integer from −4 to 4 chosen so that this condition holds; RATE_N is the code rate when the feature dimension is N.
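A minimal sketch of this selection rule follows; the per-dimension MSE and rate values are illustrative placeholders (in practice they come from running the encoder/decoder at each N), and λ = −1 is an assumed choice:

```python
# Candidate feature dimensions from the patent.
dims = [8, 16, 32, 64, 128, 256]

# Illustrative (not measured) distortion and rate per dimension:
# larger N lowers MSE but raises the code rate.
mse  = {8: 0.090, 16: 0.052, 32: 0.031, 64: 0.024, 128: 0.021, 256: 0.020}
rate = {8: 0.10, 16: 0.18, 32: 0.35, 64: 0.70, 128: 1.40, 256: 2.80}

alpha = 10 ** -1  # alpha = 10**lambda, lambda chosen so both terms share an order of magnitude

def cost(n):
    # S(N) = MSE_N + alpha * RATE_N
    return mse[n] + alpha * rate[n]

n_star = min(dims, key=cost)  # N* = argmin_N S(N)
print(n_star)  # 32 for these illustrative values
```

With these toy numbers the trade-off bottoms out at N = 32; with measured values, N* simply tracks whichever dimension balances distortion against rate best.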
Further, the step S4 specifically includes:
S401, decoding the output bit stream and restoring the quantized feature coefficients;
S402, inverse-quantizing the integer feature coefficients back to the original floating-point numbers;
S403, learning the features of the voxelized point cloud with N 5×5×5 3D deconvolution kernels with a stride of 2, so that the output feature vector becomes N×16×16×16, using the sigmoid function as the activation function;
S404, learning the features of the voxelized point cloud with 32 5×5×5 3D deconvolution kernels with a stride of 2, so that the output feature vector becomes 32×32×32×32, using the sigmoid function as the activation function;
S405, learning the features of the voxelized point cloud with 1 5×5×5 3D deconvolution kernel with a stride of 2, so that the output feature vector becomes 1×64×64×64; the activation function is the sigmoid function, and the output result is rounded to the voxelized values 0 and 1.
A point cloud geometric encoding system for deep learning adaptive feature dimensions comprises a processor, a memory and a computer program stored on the memory, wherein the processor, when executing the computer program, performs the steps of the point cloud geometric encoding method described above.
Compared with the prior art, the invention has the following beneficial effects:
the method optimizes the distortion and the code rate of the point cloud, can quickly obtain the optimal characteristic dimension, and further improves the efficiency and the quality of point cloud compression.
Drawings
FIG. 1 is a general flow diagram of the present invention.
Fig. 2 is a flow chart of point cloud encoding and reconstruction based on an auto-encoder according to an embodiment of the present invention.
FIG. 3 is a flow chart of the point cloud reconstruction based on feature vectors according to an embodiment of the present invention.
Detailed Description
The invention is further explained below with reference to the drawings and the embodiments.
Referring to fig. 1, the present invention provides a point cloud geometric encoding method for deep learning adaptive feature dimension, comprising the following steps:
S1, partitioning and voxelization of the original point cloud: the point cloud is evenly divided into 64×64×64 blocks. To subsequently extract point cloud features with 3D convolution, each block is first voxelized, with occupied voxels denoted by "1" and empty voxels by "0".
S2, the voxelized point cloud is passed through the three-layer encoder of an auto-encoder, which maps the point cloud into a feature space; the features are a set of coefficients of size N×8×8×8. These floating-point coefficients are then uniformly quantized to integers between 2^2 and 2^16, and the quantized coefficients are directly entropy-coded to output a bit stream. At the reconstruction end, entropy decoding and inverse quantization are performed, and the feature coefficients are fed through the decoder corresponding to the three-layer auto-encoder, mapping them back to the point cloud.
S3, for different feature dimensions (N = 8, 16, 32, 64, 128 and 256), the cost function S(N) formed from the mean square error MSE_N between the reconstructed and original point clouds and the corresponding code rate is calculated, as shown in the following formula; the feature dimension N* that minimizes the cost function is selected, and the corresponding entropy-coded bit stream and the binarized N* are output.
N* = argmin_N S(N) = argmin_N (MSE_N + α·RATE_N)
wherein N* is the optimal feature dimension to be solved; N is the candidate feature dimension, taking the values 8, 16, 32, 64, 128 and 256; MSE_N is the mean square error between the original point cloud and the reconstructed point cloud; α is a coefficient that keeps the code rate and the mean square error on the same order of magnitude, with α = 10^λ, where λ is an integer from −4 to 4 chosen so that this condition holds; RATE_N is the code rate when the feature dimension is N.
S4, point cloud reconstruction based on the feature vectors: first entropy-decode the bit stream corresponding to the feature dimension N*, then inverse-quantize the obtained integers into the original floating-point coefficients, and input the feature coefficients into the decoder corresponding to the three-layer auto-encoder so that they are mapped back to the point cloud;
S5, because learning the voxelized point cloud moves the lower-left corner of the whole point cloud to the origin, this shift must be recorded, so the minimum value of the whole point cloud is entropy-coded;
and S6, the original block point cloud is restored by adding the minimum value entropy-decoded in step S5 to the point cloud reconstructed in step S4, and the block point clouds are then fused to form the final decoded point cloud.
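The restore-and-fuse of step S6 can be sketched as follows. This is a simplified sketch in which each decoded block carries its own origin offset; the patent itself records only the minimum of the whole point cloud, and the helper name `fuse_blocks` is hypothetical:

```python
import numpy as np

def fuse_blocks(blocks, minima):
    """Restore each decoded occupancy grid by adding back its recorded
    minimum (origin offset), then concatenate all blocks into one
    decoded point cloud (an M x 3 array of integer coordinates)."""
    restored = [np.argwhere(grid == 1) + m for grid, m in zip(blocks, minima)]
    return np.vstack(restored)

# Two toy 64^3 blocks with one occupied voxel each.
g1 = np.zeros((64, 64, 64), dtype=np.uint8); g1[0, 0, 0] = 1
g2 = np.zeros((64, 64, 64), dtype=np.uint8); g2[1, 2, 3] = 1
cloud = fuse_blocks([g1, g2], [np.array([0, 0, 0]), np.array([64, 0, 0])])
print(cloud.tolist())  # [[0, 0, 0], [65, 2, 3]]
```

`np.argwhere` turns each binary grid back into point coordinates, which is the inverse of the voxelization in step S1 up to the precision lost by rounding.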
In this embodiment, preferably, referring to fig. 2, the point cloud encoding and reconstruction based on the self-encoder are completed by the following steps:
In step S201, the features of the input 64×64×64 voxelized point cloud are learned using 32 5×5×5 3D convolution kernels with a stride of 2, so the output feature vector becomes 32×32×32×32, and the sigmoid function is used as the activation function.
In step S202, the features of the voxelized point cloud are learned using 32 5×5×5 3D convolution kernels with a stride of 2, so the output feature vector becomes 32×16×16×16, and the sigmoid function is used as the activation function.
In step S203, the features of the voxelized point cloud are learned using N 5×5×5 3D convolution kernels with a stride of 2, so the output feature vector becomes N×8×8×8.
In step S204, the N×8×8×8 floating-point feature coefficients are uniformly quantized to integers between 2^2 and 2^16.
In step S205, the quantized feature coefficients are entropy-coded.
In step S206, the feature coefficients are decoded.
In step S207, they are inverse-quantized back to the original floating-point numbers.
In step S208, the features of the voxelized point cloud are learned using N 5×5×5 3D deconvolution kernels with a stride of 2, so the output feature vector becomes N×16×16×16, and the sigmoid function is used as the activation function.
In step S209, the features of the voxelized point cloud are learned using 32 5×5×5 3D deconvolution kernels with a stride of 2, so the output feature vector becomes 32×32×32×32, and the sigmoid function is used as the activation function.
In step S210, the features of the voxelized point cloud are learned using 1 5×5×5 3D deconvolution kernel with a stride of 2, so the output feature vector becomes 1×64×64×64; the activation function is the sigmoid function, and the output is rounded to the voxelized values 0 and 1.
In this embodiment, preferably, referring to fig. 3, the point cloud reconstruction based on the feature vector is completed by the following steps:
and S401, decoding the output bit stream and restoring the quantized characteristic coefficients.
In step S402, the integer feature coefficients are inverse-quantized back to the original floating-point numbers.
In step S403, the features of the voxelized point cloud are learned using N 5×5×5 3D deconvolution kernels with a stride of 2, so the output feature vector becomes N×16×16×16, and the sigmoid function is used as the activation function.
In step S404, the features of the voxelized point cloud are learned using 32 5×5×5 3D deconvolution kernels with a stride of 2, so the output feature vector becomes 32×32×32×32, and the sigmoid function is used as the activation function.
In step S405, the features of the voxelized point cloud are learned using 1 5×5×5 3D deconvolution kernel with a stride of 2, so the output feature vector becomes 1×64×64×64; the activation function is the sigmoid function, and the output result is rounded to the voxelized values 0 and 1.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The foregoing is directed to preferred embodiments of the present invention, other and further embodiments of the invention may be devised without departing from the basic scope thereof, and the scope thereof is determined by the claims that follow. However, any simple modification, equivalent change and modification of the above embodiments according to the technical essence of the present invention are within the protection scope of the technical solution of the present invention.
Claims (5)
1. A point cloud geometric coding method for deep learning self-adaptive feature dimensions is characterized by comprising the following steps:
s1, partitioning and voxelization processing is carried out on the original point cloud;
s2, carrying out point cloud encoding and reconstruction through a self-encoder according to the voxelized point cloud obtained in the step S1;
S3, calculating, for different feature dimensions N, the cost function S(N) formed from the mean square error MSE_N between the reconstructed point cloud and the original point cloud and the corresponding code rate, and taking the feature dimension that minimizes the cost function as the optimal feature dimension N*;
The cost function S(N) is specifically as follows:
N* = argmin_N S(N) = argmin_N (MSE_N + α·RATE_N)    (1)
wherein N* is the optimal feature dimension to be solved; N is the candidate feature dimension, taking the values 8, 16, 32, 64, 128 and 256; MSE_N is the mean square error between the original point cloud and the reconstructed point cloud; α is a coefficient keeping the code rate and the mean square error on the same order of magnitude; RATE_N is the code rate when the feature dimension is N;
S4, entropy-decoding the bit stream corresponding to the optimal feature dimension, inverse-quantizing the obtained integers into the original floating-point coefficients, and inputting the feature coefficients into the decoder corresponding to the three-layer auto-encoder so that they are mapped back to the point cloud, obtaining the final reconstructed point cloud;
s5, entropy coding is carried out on the minimum value of the whole point cloud obtained in the step S1;
and S6, fusing the block point clouds to form the final decoded point cloud, based on the final reconstructed point cloud obtained in step S4 and the original block point cloud restored using the minimum value entropy-decoded in step S5.
2. The method for point cloud geometric coding of deep learning adaptive feature dimensions according to claim 1, wherein the step S1 specifically comprises: the original point cloud is evenly divided into 64×64×64 blocks, 3D convolution is used to extract point cloud features, the point cloud is voxelized, and occupied voxels are represented by "1" and empty voxels by "0".
3. The method of claim 1, wherein the step S2 specifically comprises:
S201, for the input 64×64×64 voxelized point cloud, learning its features with 32 5×5×5 3D convolution kernels with a stride of 2, using the sigmoid function as the activation function;
S202, learning the features of the voxelized point cloud with 32 5×5×5 3D convolution kernels with a stride of 2;
S203, learning the features of the voxelized point cloud with N 5×5×5 3D convolution kernels with a stride of 2;
S204, uniformly quantizing the N×8×8×8 floating-point feature coefficients to integers between 2^2 and 2^16;
S205, entropy-coding the quantized feature coefficients;
S206, decoding the feature coefficients;
S207, inverse-quantizing them back to the original floating-point numbers;
S208, learning the features of the voxelized point cloud with N 5×5×5 3D deconvolution kernels with a stride of 2, using the sigmoid function as the activation function;
S209, learning the features of the voxelized point cloud with 32 5×5×5 3D deconvolution kernels with a stride of 2, using the sigmoid function as the activation function;
S210, learning the features of the voxelized point cloud with 1 5×5×5 3D deconvolution kernel with a stride of 2, so that the output feature vector is 1×64×64×64; the activation function is the sigmoid function, and the output is rounded to the voxelized values 0 and 1.
4. The method for point cloud geometric coding of deep learning adaptive feature dimensions according to claim 1, wherein the step S4 specifically comprises:
s401, decoding the output bit stream and restoring the quantized characteristic coefficient;
S402, inverse-quantizing the integer feature coefficients back to the original floating-point numbers;
S403, learning the features of the voxelized point cloud with N 5×5×5 3D deconvolution kernels with a stride of 2, so that the output feature vector becomes N×16×16×16, using the sigmoid function as the activation function;
S404, learning the features of the voxelized point cloud with 32 5×5×5 3D deconvolution kernels with a stride of 2, so that the output feature vector becomes 32×32×32×32, using the sigmoid function as the activation function;
S405, learning the features of the voxelized point cloud with 1 5×5×5 3D deconvolution kernel with a stride of 2, so that the output feature vector becomes 1×64×64×64; the activation function is the sigmoid function, and the output result is rounded to the voxelized values 0 and 1.
5. A point cloud geometric coding system for deep learning adaptive feature dimension, comprising a processor, a memory and a computer program stored on the memory, wherein the processor executes the computer program and specifically executes the steps in the point cloud geometric coding method according to any one of claims 1 to 4.
Priority Applications (1)
- CN202110854530.2A (granted as CN113573060B), filed 2021-07-28, priority date 2021-07-28: Point cloud geometric coding method and system for deep learning self-adaptive feature dimension

Publications (2)
- CN113573060A, published 2021-10-29
- CN113573060B, granted 2022-12-23
Legal Events
- PB01: Publication
- SE01: Entry into force of request for substantive examination
- GR01: Patent grant