WO2024160290A1 - Method, apparatus, and medium for point cloud coding
- Publication number
- WO2024160290A1 (PCT/CN2024/075628)
- Authority
- WO
- WIPO (PCT)
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/50—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
- H04N19/597—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding specially adapted for multi-view video sequence encoding
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T9/00—Image coding
- G06T9/001—Model-based coding, e.g. wire frame
- G06T9/002—Image coding using neural networks
Definitions
- Embodiments of the present disclosure relate generally to point cloud coding techniques, and more particularly, to semantic enhancement-based point cloud compression.
- A point cloud is a collection of individual data points in a three-dimensional (3D) space, with each point having a set of coordinates on the X, Y, and Z axes.
- A point cloud may be used to represent the physical content of three-dimensional space.
- Point clouds have been shown to be a promising way to represent 3D visual data for a wide range of immersive applications, from augmented reality to autonomous cars.
- Point cloud coding standards have evolved primarily through the work of the well-known MPEG organization.
- MPEG, short for Moving Picture Experts Group, is one of the main standardization groups dealing with multimedia.
- Following a call for proposals (CfP) issued by MPEG, the final standard will consist of two classes of solutions.
- Video-based Point Cloud Compression (V-PCC or VPCC) is appropriate for point sets with a relatively uniform distribution of points.
- Geometry-based Point Cloud Compression (G-PCC or GPCC) is appropriate for more sparse distributions.
- coding efficiency of conventional point cloud coding techniques is generally expected to be further improved.
- Embodiments of the present disclosure provide a solution for point cloud coding.
- a method for point cloud coding comprises: determining, for a conversion between a current point cloud of a point cloud sequence and a bitstream of the point cloud sequence, a type of the current point cloud; determining a coding module for the current point cloud based on the type of the current point cloud, the coding module comprising at least one of: a point cloud feature extractor or a point cloud geometry reconstruction module; and performing the conversion based on the coding module.
- the method in accordance with the first aspect of the present disclosure determines the point cloud feature extractor or the point cloud geometry reconstruction module based on the type of the point cloud. In this way, the point cloud geometry compression (PCGC) can be improved.
- PCGC point cloud geometry compression
- an apparatus for processing point cloud sequence comprises a processor and a non-transitory memory with instructions thereon.
- a non-transitory computer-readable storage medium stores instructions that cause a processor to perform a method in accordance with the first aspect of the present disclosure.
- a non-transitory computer-readable recording medium stores a bitstream of a point cloud sequence which is generated by a method performed by a point cloud processing apparatus.
- the method comprises: determining a type of a current point cloud of the point cloud sequence; determining a coding module for the current point cloud based on the type of the current point cloud, the coding module comprising at least one of: a point cloud feature extractor or a point cloud geometry reconstruction module; and generating the bitstream based on the coding module.
- a method for storing a bitstream of a point cloud sequence comprises: determining a type of a current point cloud of the point cloud sequence; determining a coding module for the current point cloud based on the type of the current point cloud, the coding module comprising at least one of: a point cloud feature extractor or a point cloud geometry reconstruction module; generating the bitstream based on the coding module; and storing the bitstream in a non-transitory computer-readable recording medium.
- Fig. 1 illustrates a block diagram that illustrates an example point cloud coding system, in accordance with some embodiments of the present disclosure
- Fig. 2 illustrates another block diagram of an example point cloud coding system in accordance with some embodiments of the present disclosure
- Fig. 3 illustrates a block diagram that illustrates an example of a point cloud compression (PCC) encoder, in accordance with some embodiments of the present disclosure
- Fig. 4 illustrates a block diagram that illustrates an example of a PCC decoder, in accordance with some embodiments of the present disclosure
- Fig. 5 illustrates an example of a flow of a compression method in accordance with some embodiments of the present disclosure
- Fig. 6 illustrates an example of a feature encoder module in accordance with some embodiments of the present disclosure
- Fig. 7 illustrates a flowchart of a method for point cloud coding in accordance with some embodiments of the present disclosure.
- Fig. 8 illustrates a block diagram of a computing device in which various embodiments of the present disclosure can be implemented.
- References in the present disclosure to “one embodiment,” “an embodiment,” “an example embodiment,” and the like indicate that the embodiment described may include a particular feature, structure, or characteristic, but it is not necessary that every embodiment includes the particular feature, structure, or characteristic. Moreover, such phrases are not necessarily referring to the same embodiment. Further, when a particular feature, structure, or characteristic is described in connection with an example embodiment, it is submitted that it is within the knowledge of one skilled in the art to effect such feature, structure, or characteristic in connection with other embodiments whether or not explicitly described.
- The terms “first” and “second,” etc. may be used herein to describe various elements, but these elements should not be limited by these terms. These terms are only used to distinguish one element from another. For example, a first element could be termed a second element, and similarly, a second element could be termed a first element, without departing from the scope of example embodiments.
- the term “and/or” includes any and all combinations of one or more of the listed terms.
- Fig. 1 is a block diagram that illustrates an example point cloud coding system 100 that may utilize the techniques of the present disclosure.
- the point cloud coding system 100 may include a source device 110 and a destination device 120.
- the source device 110 can be also referred to as a point cloud encoding device, and the destination device 120 can be also referred to as a point cloud decoding device.
- the source device 110 can be configured to generate encoded point cloud data and the destination device 120 can be configured to decode the encoded point cloud data generated by the source device 110.
- the techniques of this disclosure are generally directed to coding (encoding and/or decoding) point cloud data, i.e., to support point cloud compression.
- the coding may be effective in compressing and/or decompressing point cloud data.
- Source device 110 and destination device 120 may comprise any of a wide range of devices, including desktop computers, notebook (i.e., laptop) computers, tablet computers, set-top boxes, telephone handsets such as smartphones and mobile phones, televisions, cameras, display devices, digital media players, video gaming consoles, video streaming devices, vehicles (e.g., terrestrial or marine vehicles, spacecraft, aircraft, etc.), robots, LIDAR devices, satellites, extended reality devices, or the like.
- Source device 110 and destination device 120 may be equipped for wireless communication.
- The source device 110 may include a data source 112, a memory 114, a PCC encoder 116, and an input/output (I/O) interface 118.
- the destination device 120 may include an input/output (I/O) interface 128, a PCC decoder 126, a memory 124, and a data consumer 122.
- PCC encoder 116 of source device 110 and PCC decoder 126 of destination device 120 may be configured to apply the techniques of this disclosure related to point cloud coding.
- Source device 110 represents an example of an encoding device.
- Destination device 120 represents an example of a decoding device.
- Source device 110 and destination device 120 may include other components or arrangements.
- Source device 110 may receive data (e.g., point cloud data) from an internal or external source.
- destination device 120 may interface with an external data consumer, rather than include a data consumer in the same device.
- data source 112 represents a source of point cloud data (i.e., raw, unencoded point cloud data) and may provide a sequential series of “frames” of the point cloud data to PCC encoder 116, which encodes point cloud data for the frames.
- data source 112 generates the point cloud data.
- Data source 112 of source device 110 may include a point cloud capture device, such as any of a variety of cameras or sensors, e.g., one or more video cameras, an archive containing previously captured point cloud data, a 3D scanner or a light detection and ranging (LIDAR) device, and/or a data feed interface to receive point cloud data from a data content provider.
- data source 112 may generate the point cloud data based on signals from a LIDAR apparatus.
- point cloud data may be computer-generated from scanner, camera, sensor or other data.
- data source 112 may generate the point cloud data, or produce a combination of live point cloud data, archived point cloud data, and computer-generated point cloud data.
- PCC encoder 116 encodes the captured, pre-captured, or computer-generated point cloud data.
- PCC encoder 116 may rearrange frames of the point cloud data from the received order (sometimes referred to as “display order” ) into a coding order for coding.
- PCC encoder 116 may generate one or more bitstreams including encoded point cloud data.
- Source device 110 may then output the encoded point cloud data via I/O interface 118 for reception and/or retrieval by, e.g., I/O interface 128 of destination device 120.
- the encoded point cloud data may be transmitted directly to destination device 120 via the I/O interface 118 through the network 130A.
- the encoded point cloud data may also be stored onto a storage medium/server 130B for access by destination device 120.
- Memory 114 of source device 110 and memory 124 of destination device 120 may represent general purpose memories.
- memory 114 and memory 124 may store raw point cloud data, e.g., raw point cloud data from data source 112 and raw, decoded point cloud data from PCC decoder 126.
- memory 114 and memory 124 may store software instructions executable by, e.g., PCC encoder 116 and PCC decoder 126, respectively.
- PCC encoder 116 and PCC decoder 126 may also include internal memories for functionally similar or equivalent purposes.
- memory 114 and memory 124 may store encoded point cloud data, e.g., output from PCC encoder 116 and input to PCC decoder 126.
- portions of memory 114 and memory 124 may be allocated as one or more buffers, e.g., to store raw, decoded, and/or encoded point cloud data.
- memory 114 and memory 124 may store point cloud data.
- I/O interface 118 and I/O interface 128 may represent wireless transmitters/receivers, modems, wired networking components (e.g., Ethernet cards) , wireless communication components that operate according to any of a variety of IEEE 802.11 standards, or other physical components.
- I/O interface 118 and I/O interface 128 may be configured to transfer data, such as encoded point cloud data, according to a cellular communication standard, such as 4G, 4G-LTE (Long-Term Evolution) , LTE Advanced, 5G, or the like.
- I/O interface 118 and I/O interface 128 may be configured to transfer data, such as encoded point cloud data, according to other wireless standards, such as an IEEE 802.11 specification.
- source device 110 and/or destination device 120 may include respective system-on-a-chip (SoC) devices.
- SoC system-on-a-chip
- source device 110 may include an SoC device to perform the functionality attributed to PCC encoder 116 and/or I/O interface 118
- destination device 120 may include an SoC device to perform the functionality attributed to PCC decoder 126 and/or I/O interface 128.
- the techniques of this disclosure may be applied to encoding and decoding in support of any of a variety of applications, such as communication between autonomous vehicles, communication between scanners, cameras, sensors and processing devices such as local or remote servers, geographic mapping, or other applications.
- I/O interface 128 of destination device 120 receives an encoded bitstream from source device 110.
- the encoded bitstream may include signaling information defined by PCC encoder 116, which is also used by PCC decoder 126, such as syntax elements having values that represent a point cloud.
- Data consumer 122 uses the decoded data. For example, data consumer 122 may use the decoded point cloud data to determine the locations of physical objects. In some examples, data consumer 122 may comprise a display to present imagery based on the point cloud data.
- PCC encoder 116 and PCC decoder 126 each may be implemented as any of a variety of suitable encoder and/or decoder circuitry, such as one or more microprocessors, digital signal processors (DSPs) , application specific integrated circuits (ASICs) , field programmable gate arrays (FPGAs) , discrete logic, software, hardware, firmware or any combinations thereof.
- DSPs digital signal processors
- ASICs application specific integrated circuits
- FPGAs field programmable gate arrays
- a device may store instructions for the software in a suitable, non-transitory computer-readable medium and execute the instructions in hardware using one or more processors to perform the techniques of this disclosure.
- PCC encoder 116 and PCC decoder 126 may be included in one or more encoders or decoders, either of which may be integrated as part of a combined encoder/decoder (CODEC) in a respective device.
- a device including PCC encoder 116 and/or PCC decoder 126 may comprise one or more integrated circuits, microprocessors, and/or other types of devices.
- PCC encoder 116 and PCC decoder 126 may operate according to a coding standard, such as a video point cloud compression (VPCC) standard or a geometry point cloud compression (GPCC) standard.
- VPCC video point cloud compression
- GPCC geometry point cloud compression
- This disclosure may generally refer to coding (e.g., encoding and decoding) of frames to include the process of encoding or decoding data.
- An encoded bitstream generally includes a series of values for syntax elements representative of coding decisions (e.g., coding modes) .
- a point cloud may contain a set of points in a 3D space, and may have attributes associated with each point.
- the attributes may be color information such as R, G, B or Y, Cb, Cr, or reflectance information, or other attributes.
- Point clouds may be captured by a variety of cameras or sensors such as LIDAR sensors and 3D scanners and may also be computer-generated. Point cloud data are used in a variety of applications including, but not limited to, construction (modeling) , graphics (3D models for visualizing and animation) , and the automotive industry (LIDAR sensors used to help in navigation) .
- the PCC encoder 116 in the system 100 may include a GPCC encoder, and the PCC decoder 126 may include a GPCC decoder.
- the GPCC encoder and the GPCC decoder may be collectively referred to as a GPCC coder or a GPCC.
- the GPCC may be applied in combination with another coder.
- Fig. 2 illustrates another block diagram of an example point cloud coding system 200 in accordance with some embodiments of the present disclosure.
- the point cloud system 200 includes a geometry encoder 210, a geometry decoder 230, a GPCC 220 and a machine vision task 240.
- the GPCC 220 may be implemented as a PCC encoder such as the PCC encoder 116 in Fig. 1 and a PCC decoder such as the PCC decoder 126 in Fig. 1.
- the system 100 and the system 200 may be used in combination or separately.
- the system 100 may be a part of the system 200.
- the geometry encoder 210, the geometry decoder 230 and/or the machine vision task 240 may be added into the system 100 as additional modules.
- the geometry encoder 210 may be based on machine learning (ML) or artificial intelligence (AI)
- the geometry decoder 230 may be based on ML or AI, as well.
- the ML/AI based geometry encoder/decoder may be applied in combination with the GPCC 220.
- the geometry encoder 210 and/or the geometry decoder 230 may be pretrained or fine-tuned.
- Information or data output from the geometry decoder 230 may be used for a machine vision task 240.
- the system 200 may be referred to as an AI based point cloud compression (AI-PCC) system. It is to be understood that in some example embodiments, the system 200 may include additional modules such as feature extractor module, or the like. Scope of the present disclosure is not limited here.
- the GPCC 220 may include a GPCC encoder and a GPCC decoder.
- Fig. 3 is a block diagram illustrating an example of a GPCC encoder 300, which may be an example of a GPCC encoder of the GPCC 220 illustrated in Fig. 2, in accordance with some embodiments of the present disclosure.
- Fig. 4 is a block diagram illustrating an example of a GPCC decoder 400, which may be an example of a GPCC decoder of the GPCC 220 illustrated in Fig. 2, in accordance with some embodiments of the present disclosure.
- In both GPCC encoder 300 and GPCC decoder 400, point cloud positions are coded first, and attribute coding depends on the decoded geometry.
- In Fig. 3 and Fig. 4, the region adaptive hierarchical transform (RAHT) unit 318, surface approximation analysis unit 312, RAHT unit 414 and surface approximation synthesis unit 410 are options typically used for Category 1 data.
- the level-of-detail (LOD) generation unit 320, lifting unit 322, LOD generation unit 416 and inverse lifting unit 418 are options typically used for Category 3 data. All the other units are common between Categories 1 and 3.
- LOD level-of-detail
- For Category 3 data, the compressed geometry is typically represented as an octree from the root all the way down to a leaf level of individual voxels.
- For Category 1 data, the compressed geometry is typically represented by a pruned octree (i.e., an octree from the root down to a leaf level of blocks larger than voxels) plus a model that approximates the surface within each leaf of the pruned octree.
- the surface model used is a triangulation comprising 1-10 triangles per block, resulting in a triangle soup.
- the Category 1 geometry codec is therefore known as the Trisoup geometry codec
- the Category 3 geometry codec is known as the Octree geometry codec.
- GPCC encoder 300 may include a coordinate transform unit 302, a color transform unit 304, a voxelization unit 306, an attribute transfer unit 308, an octree analysis unit 310, a surface approximation analysis unit 312, an arithmetic encoding unit 314, a geometry reconstruction unit 316, an RAHT unit 318, a LOD generation unit 320, a lifting unit 322, a coefficient quantization unit 324, and an arithmetic encoding unit 326.
- GPCC encoder 300 may receive a set of positions and a set of attributes.
- the positions may include coordinates of points in a point cloud.
- the attributes may include information about points in the point cloud, such as colors associated with points in the point cloud.
- Coordinate transform unit 302 may apply a transform to the coordinates of the points to transform the coordinates from an initial domain to a transform domain. This disclosure may refer to the transformed coordinates as transform coordinates.
- Color transform unit 304 may apply a transform to convert color information of the attributes to a different domain. For example, color transform unit 304 may convert color information from an RGB color space to a YCbCr color space.
- voxelization unit 306 may voxelize the transform coordinates. Voxelization of the transform coordinates may include quantizing and removing some points of the point cloud. In other words, multiple points of the point cloud may be subsumed within a single “voxel, ” which may thereafter be treated in some respects as one point. Furthermore, octree analysis unit 310 may generate an octree based on the voxelized transform coordinates. Additionally, in the example of Fig. 3, surface approximation analysis unit 312 may analyze the points to potentially determine a surface representation of sets of the points.
- Arithmetic encoding unit 314 may perform arithmetic encoding on syntax elements representing the information of the octree and/or surfaces determined by surface approximation analysis unit 312. GPCC encoder 300 may output these syntax elements in a geometry bitstream.
- Geometry reconstruction unit 316 may reconstruct transform coordinates of points in the point cloud based on the octree, data indicating the surfaces determined by surface approximation analysis unit 312, and/or other information.
- the number of transform coordinates reconstructed by geometry reconstruction unit 316 may be different from the original number of points of the point cloud because of voxelization and surface approximation. This disclosure may refer to the resulting points as reconstructed points.
- Attribute transfer unit 308 may transfer attributes of the original points of the point cloud to reconstructed points of the point cloud data.
- RAHT unit 318 may apply RAHT coding to the attributes of the reconstructed points.
- LOD generation unit 320 and lifting unit 322 may apply LOD processing and lifting, respectively, to the attributes of the reconstructed points.
- RAHT unit 318 and lifting unit 322 may generate coefficients based on the attributes.
- Coefficient quantization unit 324 may quantize the coefficients generated by RAHT unit 318 or lifting unit 322.
- Arithmetic encoding unit 326 may apply arithmetic coding to syntax elements representing the quantized coefficients.
- GPCC encoder 300 may output these syntax elements in an attribute bitstream.
- GPCC decoder 400 may include a geometry arithmetic decoding unit 402, an attribute arithmetic decoding unit 404, an octree synthesis unit 406, an inverse quantization unit 408, a surface approximation synthesis unit 410, a geometry reconstruction unit 412, a RAHT unit 414, a LOD generation unit 416, an inverse lifting unit 418, a coordinate inverse transform unit 420, and a color inverse transform unit 422.
- GPCC decoder 400 may obtain a geometry bitstream and an attribute bitstream.
- Geometry arithmetic decoding unit 402 of decoder 400 may apply arithmetic decoding (e.g., CABAC or other type of arithmetic decoding) to syntax elements in the geometry bitstream.
- attribute arithmetic decoding unit 404 may apply arithmetic decoding to syntax elements in attribute bitstream.
- Octree synthesis unit 406 may synthesize an octree based on syntax elements parsed from geometry bitstream.
- surface approximation synthesis unit 410 may determine a surface model based on syntax elements parsed from geometry bitstream and based on the octree.
- geometry reconstruction unit 412 may perform a reconstruction to determine coordinates of points in a point cloud.
- Coordinate inverse transform unit 420 may apply an inverse transform to the reconstructed coordinates to convert the reconstructed coordinates (positions) of the points in the point cloud from a transform domain back into an initial domain.
- inverse quantization unit 408 may inverse quantize attribute values.
- the attribute values may be based on syntax elements obtained from attribute bitstream (e.g., including syntax elements decoded by attribute arithmetic decoding unit 404) .
- RAHT unit 414 may perform RAHT coding to determine, based on the inverse quantized attribute values, color values for points of the point cloud.
- LOD generation unit 416 and inverse lifting unit 418 may determine color values for points of the point cloud using a level of detail-based technique.
- color inverse transform unit 422 may apply an inverse color transform to the color values.
- the inverse color transform may be an inverse of a color transform applied by color transform unit 304 of encoder 300.
- color transform unit 304 may transform color information from an RGB color space to a YCbCr color space.
- color inverse transform unit 422 may transform color information from the YCbCr color space to the RGB color space.
- the various units of Fig. 3 and Fig. 4 are illustrated to assist with understanding the operations performed by encoder 300 and decoder 400.
- the units may be implemented as fixed-function circuits, programmable circuits, or a combination thereof.
- Fixed-function circuits refer to circuits that provide particular functionality and are preset on the operations that can be performed.
- Programmable circuits refer to circuits that can be programmed to perform various tasks and provide flexible functionality in the operations that can be performed.
- programmable circuits may execute software or firmware that cause the programmable circuits to operate in the manner defined by instructions of the software or firmware.
- Fixed-function circuits may execute software instructions (e.g., to receive parameters or output parameters) , but the types of operations that the fixed-function circuits perform are generally immutable.
- one or more of the units may be distinct circuit blocks (fixed-function or programmable) , and in some examples, one or more of the units may be integrated circuits.
- This disclosure is related to point cloud coding technologies. Specifically, it is about point cloud semantic enhancement coding for machine vision.
- the ideas may be applied individually or in various combinations, to any point cloud coding standard or non-standard point cloud codec, e.g., the being-developed Artificial Intelligence based Point Cloud Compression (AI-PCC).
- AI-PCC Artificial Intelligence based Point Cloud Compression
- G-PCC Geometry-based Point Cloud Compression
- AI-PCC Artificial Intelligence based Point Cloud Compression
- MPEG Moving Picture Experts Group
- UAV Unmanned Air Vehicle
- PCGC Point Cloud Geometry Compression
- KNN K Nearest Neighbors
- the current point cloud geometry coding adopts PSNR as the main distortion metric for rate distortion optimization, aiming to ensure the objective quality of the reconstructed point cloud at the decoder under a certain bit rate, to provide viewers with high-quality point cloud content.
- PSNR the main distortion metric for rate distortion optimization
- more and more point cloud data are directly used by machine vision and intelligent algorithms to complete various 3D computer vision tasks.
- UAV Unmanned Air Vehicle
- the task of the receiver is no longer just to reconstruct the complete original point cloud, but to perform semantic segmentation, point cloud classification, target detection and other intelligent analysis tasks on the point cloud data.
- Point clouds used in different scenarios (such as basic solid point cloud, lidar point cloud, digital human point cloud, etc. ) have different structural characteristics.
- the basic solid point cloud may have a simple structure and a uniform distribution.
- the lidar point cloud may have a relatively complex structure and an extremely sparse distribution.
- 3D deep learning has made great progress in visual tasks.
- researchers have introduced deep learning-based methods to point cloud compression. These methods can be roughly divided into two categories, voxel-based point cloud compression and point-based point cloud compression.
- In voxel-based methods, point clouds are voxelized into multiple blocks, which are processed by 3D convolutional neural networks such as sparse convolution.
- Point-based methods, such as FoldingNet, directly process the original point clouds with graph-based convolution networks and transformer-based networks.
- PCGC Point Cloud Geometry Compression
- Point cloud geometry coding reconstruction and point cloud intelligent analysis are two completely different tasks.
- the extracted point cloud geometry features and point cloud semantic features are different in the feature space.
- encoder refers to the model used to code the information to be signalled.
- decoder refers to the model used to decode the compressed bits to obtain the signalled information.
- point cloud feature extractors such as point-based extractors and voxel-based extractors, etc.
- point-based extractors and voxel-based extractors, etc. may be used for different types of point clouds.
- the point-based extractors may be used to extract the features of point clouds.
- the graph-based method may be used.
- the graph convolution may be used to model the topological structure of point clouds.
- the feature representation of such point cloud may be extracted depending on the effective modeling of the topological structure of point clouds.
- the graph convolution may use the K Nearest Neighbors (KNN) algorithm to obtain the local nearest neighbors of each point, and then extract the features of these local adjacent points.
- KNN K Nearest Neighbors
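As an illustration of the KNN-based neighborhood gathering described above, the following is a minimal sketch (not the disclosed extractor itself): a brute-force KNN search collects each point's local neighbors, and an EdgeConv-style max over neighbor feature differences stands in for the learned graph convolution. The function names and the neighbor count k are illustrative assumptions.

```python
import numpy as np

def knn_indices(points, k):
    """Return the indices of the k nearest neighbors of every point (brute force)."""
    # points: (N, 3) array of XYZ coordinates
    diff = points[:, None, :] - points[None, :, :]          # (N, N, 3) pairwise offsets
    dist2 = np.sum(diff * diff, axis=-1)                    # squared distances
    return np.argsort(dist2, axis=1)[:, 1:k + 1]            # skip column 0 (the point itself)

def edge_conv_features(points, feats, k=16):
    """Aggregate per-point features from the k nearest neighbors (EdgeConv-style)."""
    idx = knn_indices(points, k)                             # (N, k)
    neighbor_feats = feats[idx]                              # (N, k, C)
    edge = neighbor_feats - feats[:, None, :]                # relative (edge) features
    # a real graph convolution would apply a learned MLP here; we simply max-pool
    return edge.max(axis=1)                                  # (N, C)

pts = np.random.rand(1024, 3).astype(np.float32)
f = np.random.rand(1024, 32).astype(np.float32)
local = edge_conv_features(pts, f, k=16)                     # (1024, 32)
```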
- the transformer-based method may be used.
- the transformer-based method may be used to model the topological structure of point clouds.
- the spatial structure of point cloud may be modeled and the feature representation of such point cloud may be extracted depending on the attention mechanism of the transformer.
- the transformer based method may realize progressive geometric feature extraction of point cloud through stacking multi-layer attention mechanism.
- the voxel-based extractors may be used to extract the features of point clouds.
- the sparse convolution-based methods may be used.
- the sparse convolution may be used to extract the geometry structure features of point cloud.
- sparse convolution may voxelize the point cloud and calculate the geometry characteristics of the voxelized point cloud.
- the voxelization may be used to make point cloud data more regular for subsequent convolution processing.
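A minimal sketch of the voxelization step that a sparse-convolution pipeline would start from: coordinates are quantized to a voxel grid and points falling into the same voxel are merged, which is what makes the data regular for subsequent (sparse) convolutions. The voxel size is an illustrative parameter, not a value from the disclosure.

```python
import numpy as np

def voxelize(points, voxel_size=0.05):
    """Quantize XYZ coordinates to a voxel grid and keep one occupied entry per voxel."""
    coords = np.floor(points / voxel_size).astype(np.int32)        # integer voxel indices
    uniq, inverse = np.unique(coords, axis=0, return_inverse=True)
    inverse = inverse.ravel()
    # represent each occupied voxel by the centroid of the points it absorbed
    reps = np.zeros((len(uniq), 3), dtype=np.float64)
    counts = np.zeros(len(uniq), dtype=np.int64)
    np.add.at(reps, inverse, points)
    np.add.at(counts, inverse, 1)
    reps /= counts[:, None]
    return uniq, reps                                               # occupied voxels + centroids

pts = np.random.rand(2048, 3)
voxels, centroids = voxelize(pts, voxel_size=0.1)
```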
- different pre-trained point cloud feature extractors may be used to extract the compact feature representation for point clouds with different spatial structures and point sizes.
- it may be signaled from the encoder to the decoder which point cloud feature extractor is used.
- it may be derived at the decoder which point cloud feature extractor is used.
- an indicator of the final sampled point cloud may be coded by a point cloud codec.
- the point cloud codec may be G-PCC, V-PCC, Draco etc.
- indications of the features may be coded with fixed-length coding, unary coding, truncated unary coding, etc.
- indications of the features may be coded in a predictive way.
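A minimal illustration of the binarization schemes mentioned above (assumed semantics, not normative syntax): fixed-length, unary, and truncated unary codes for a small unsigned indication.

```python
def fixed_length(value, num_bits):
    """Fixed-length binarization, most significant bit first."""
    return [(value >> (num_bits - 1 - i)) & 1 for i in range(num_bits)]

def unary(value):
    """Unary binarization: `value` ones followed by a terminating zero."""
    return [1] * value + [0]

def truncated_unary(value, max_value):
    """Unary code whose terminating zero is dropped when value == max_value."""
    return [1] * value if value == max_value else [1] * value + [0]

assert fixed_length(5, 4) == [0, 1, 0, 1]
assert unary(3) == [1, 1, 1, 0]
assert truncated_unary(3, 3) == [1, 1, 1]
```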
- the extracted features may contain more point cloud spatial structure information.
- spatial structure information may include high-frequency texture information and low-frequency structure information in point cloud data.
- the extracted features may contain more point cloud semantic information.
- the point cloud semantic information may include the semantic information of each point.
- the transformer-based point cloud feature mapping module may be used to minimize feature similarity between the feature spaces of point cloud geometry coding reconstruction and point cloud machine vision intelligent analysis.
- the design of the feature mapping module may follow the transformer structure with a multi-head attention mechanism.
- the self-attention mechanism may be used to calculate the similarity in the feature mapping space, and then the features may be weighted and summed.
- the point cloud feature space mapping module may be symmetrically designed and reserved at the encoder and decoder.
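A minimal sketch of the self-attention operation described above: pairwise similarities over the feature mapping space are computed with a scaled dot product, normalized, and used to form a weighted sum of the features. The projection matrices stand in for learned parameters and are illustrative assumptions.

```python
import numpy as np

def self_attention(feats, w_q, w_k, w_v):
    """Scaled dot-product self-attention over a set of point cloud features."""
    q, k, v = feats @ w_q, feats @ w_k, feats @ w_v           # (N, d) each
    scores = q @ k.T / np.sqrt(k.shape[-1])                   # pairwise similarity (N, N)
    scores -= scores.max(axis=-1, keepdims=True)              # numerical stability
    attn = np.exp(scores)
    attn /= attn.sum(axis=-1, keepdims=True)                  # softmax over all points
    return attn @ v                                            # similarity-weighted sum

N, d = 256, 64
f = np.random.randn(N, d).astype(np.float32)
wq = wk = wv = np.eye(d, dtype=np.float32)                     # identity stand-ins for learned weights
mapped = self_attention(f, wq, wk, wv)
```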
- it may be signaled from the encoder to the decoder whether a multi-task model is used.
- it may be derived at the decoder whether a multi-task model is used.
- a multi-objective loss constraint may be used.
- the geometry constraints for point cloud reconstruction may be used to ensure that a basic quality point cloud can be reconstructed.
- the geometry constraints may be calculated by using chamfer distance for supervised learning.
- the chamfer distance may be computed as the following formula: $L_{CD}(S_1, S_2) = \frac{1}{|S_1|}\sum_{x \in S_1}\min_{y \in S_2}\lVert x-y\rVert_2^2 + \frac{1}{|S_2|}\sum_{y \in S_2}\min_{x \in S_1}\lVert x-y\rVert_2^2$.
- S 1 and S 2 are two point clouds.
- x and y are the coordinates of the points in S 1 and S 2 , respectively.
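A minimal sketch of a symmetric, size-normalized squared chamfer distance matching the formula above, computed by brute force between two small point clouds S1 and S2.

```python
import numpy as np

def chamfer_distance(s1, s2):
    """Symmetric squared chamfer distance between point sets s1 (N, 3) and s2 (M, 3)."""
    d2 = np.sum((s1[:, None, :] - s2[None, :, :]) ** 2, axis=-1)   # (N, M) pairwise squared distances
    return d2.min(axis=1).mean() + d2.min(axis=0).mean()           # S1->S2 term + S2->S1 term

s1 = np.random.rand(512, 3)
s2 = np.random.rand(600, 3)
loss_cd = chamfer_distance(s1, s2)
```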
- the machine vision oriented semantic constraints may be used.
- the semantic constraints may be used to constrain the feature distribution distance between coded features and original features.
- the semantic constraints may be calculated by using Kullback-Leibler (KL) divergence.
- KL Kullback-Leibler
- $L_{semantic} = KL(F_{ori} \parallel F_{rec}) = \sum_i \left( p_{ori}(v_i)\log p_{ori}(v_i) - p_{ori}(v_i)\log p_{rec}(v_i) \right)$
- F ori and F rec are original feature and coded feature
- p ori and p rec are probability distribution of the original feature and the coded feature, respectively.
- L_CD and L_semantic are the geometry constraints and semantic constraints, respectively.
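A minimal sketch, under the assumption that p_ori and p_rec are discrete (e.g., softmax-normalized) distributions induced by the original and coded features, of the KL-based semantic constraint and of a simple weighted combination of the geometry and semantic constraints; the weight lam is illustrative and not specified in the disclosure.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def kl_semantic_loss(f_ori, f_rec, eps=1e-9):
    """KL-style constraint between distributions induced by original and coded features."""
    p_ori, p_rec = softmax(f_ori), softmax(f_rec)
    return float(np.sum(p_ori * np.log(p_ori + eps) - p_ori * np.log(p_rec + eps)))

def multi_objective_loss(l_cd, l_semantic, lam=1.0):
    """Weighted sum of the geometry (chamfer) and semantic constraints; lam is illustrative."""
    return l_cd + lam * l_semantic

f_ori = np.random.randn(128)
f_rec = f_ori + 0.1 * np.random.randn(128)
l_sem = kl_semantic_loss(f_ori, f_rec)
total = multi_objective_loss(l_cd=0.02, l_semantic=l_sem, lam=0.5)  # 0.02 stands in for a chamfer term
```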
- the folding-based method may be used to reconstruct the point cloud from the decoded point cloud feature representation.
- the FoldingNet may be used as the basic reconstruction network in the folding-based method.
- the folding method may be used to reconstruct the 3D structure of the point cloud from feature space.
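A minimal, untrained sketch of the folding idea behind FoldingNet-style reconstruction: a fixed 2D grid is concatenated with a global codeword and passed through small MLPs that "fold" the grid into a 3D point set. Layer sizes and grid resolution are illustrative assumptions.

```python
import numpy as np

def mlp(x, w1, b1, w2, b2):
    """Two-layer MLP with ReLU, applied point-wise."""
    return np.maximum(x @ w1 + b1, 0.0) @ w2 + b2

def fold(codeword, grid, params1, params2):
    """Fold a 2D grid into a 3D point cloud conditioned on a global codeword."""
    n = grid.shape[0]
    code = np.repeat(codeword[None, :], n, axis=0)                   # replicate codeword per grid point
    stage1 = mlp(np.concatenate([grid, code], axis=1), *params1)     # first folding: grid -> 3D
    stage2 = mlp(np.concatenate([stage1, code], axis=1), *params2)   # second folding refines the surface
    return stage2                                                     # (n, 3) reconstructed points

C, H = 64, 128
u, v = np.meshgrid(np.linspace(-1, 1, 32), np.linspace(-1, 1, 32))
grid = np.stack([u.ravel(), v.ravel()], axis=1)                       # (1024, 2) fixed 2D grid
rng = np.random.default_rng(0)
p1 = (rng.normal(size=(2 + C, H)), np.zeros(H), rng.normal(size=(H, 3)), np.zeros(3))
p2 = (rng.normal(size=(3 + C, H)), np.zeros(H), rng.normal(size=(H, 3)), np.zeros(3))
points = fold(rng.normal(size=C), grid, p1, p2)                       # untrained, illustrative output
```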
- the sparse-convolution based methods and point-based methods may be used to reconstruct the point cloud.
- the sparse convolution upscale may be used as the basic reconstruction network in the sparse-convolution based methods.
- the large-scale point cloud may be partitioned into blocks according to the number of points.
- the point-based method may be used to code and reconstruct the partitioned blocks.
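A minimal sketch of partitioning a large-scale point cloud into blocks so that each block stays under a point-count budget; blocks are taken from a coarse spatial grid whose cell size is an illustrative choice, and overfull cells are subdivided until the budget is met.

```python
import numpy as np

def partition_by_count(points, max_points_per_block=4096, cell=10.0):
    """Split a large point cloud into spatial blocks, subdividing until each block is small enough."""
    cells = np.floor(points / cell).astype(np.int64)
    blocks = []
    for key in np.unique(cells, axis=0):
        mask = np.all(cells == key, axis=1)
        block = points[mask]
        if len(block) <= max_points_per_block:
            blocks.append(block)
        else:                                                   # recurse with a finer grid
            blocks.extend(partition_by_count(block, max_points_per_block, cell / 2.0))
    return blocks

pts = np.random.rand(50000, 3) * 100.0
blocks = partition_by_count(pts, max_points_per_block=4096, cell=50.0)
assert sum(len(b) for b in blocks) == len(pts)
```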
- Whether to and/or how to apply the disclosed methods above may be dependent on coded information, such as dimensions, color format, color component, slice/picture type.
- a syntax element disclosed above may be binarized as a flag, a fixed length code, an EG (x) code, a unary code, a truncated unary code, a truncated binary code, etc. It can be signed or unsigned.
- a syntax element disclosed above may be coded with at least one context model. Or it may be bypass coded.
- a syntax element disclosed above may be signaled in a conditional way.
- the syntax element (SE) is signaled only if the corresponding function is applicable.
- the SE is signaled only if the dimensions (width and/or height) of the block satisfy a condition.
- a syntax element disclosed above may be signaled at block level/sequence level/group of pictures level/group of frames level/picture level/frame level/slice level/tile level/tile group level, such as in coding structures of CTU/CU/TU/PU/CTB/CB/TB/PB, or sequence header/picture header/SPS/VPS/DPS/DCI/PPS/GPS/APS/slice header/tile group header/tile header.
- Whether to and/or how to apply the disclosed methods above may be signalled at block level/sequence level/group of pictures level/group of frames level/picture level/frame level/slice level/tile level/tile group level, such as in coding structures of CTU/CU/TU/PU/CTB/CB/TB/PB, or sequence header/picture header/SPS/VPS/DPS/DCI/PPS/GPS/APS/slice header/tile group header/tile header.
- coded information such as block size, frame size, colour format, attribute format, single/dual tree partitioning, colour component, slice/tile/picture/frame type.
- the pre-trained point cloud geometry feature extraction model is reused to extract the compact feature representation of point cloud.
- point cloud data with a complex spatial structure is characterized as high-dimensional feature information for further processing.
- the point cloud feature mapping module is introduced. Because point cloud geometry coding reconstruction and point cloud machine vision intelligent analysis are two completely different tasks, their representation feature spaces should also be different. The point cloud feature mapping module can explore the similarity of these two different tasks in the feature space as much as possible.
- a multi-task learning mechanism is introduced to preserve the point cloud geometry structure and semantic information as much as possible.
- Fig. 5 depicts the flow of the compression method in accordance with embodiments of the present disclosure.
- Fig. 5 may be described with respect to Fig. 2.
- the geometry encoder 210 may include a graph-based extractor 510 and a deep PCC (DPCC) based extractor 515.
- the geometry decoder 230 may include a folding based decoder 570 and a DPCC based decoder 575.
- the GPCC 220 may include a plurality of modules, including an octree encoder 530, an octree decoder 535 and an entropy model 560.
- the GPCC 220 may include additional modules such as a quantization (Q) module, an AE module, an AD module, and the like.
- the features such as X from the geometry encoder 210 may be inputted into the feature encoder module 520.
- the feature encoder module 520 may process the input features and obtain output features such as Y.
- the output features may be inputted into the GPCC 220, as shown in Fig. 5.
- Output features from the GPCC 220 may be inputted to the feature encoder module 525.
- the feature encoder module 525 may process the features and output processed features to the geometry decoder 230.
- the geometry decoder 230 may transmit point cloud data to the machine vision task 240, which includes a point cloud classification task 580 and a point cloud segmentation task 585, and/or the like.
- Fig. 6 depicts an example of the flow of the feature encoder module 520.
- the feature encoder module 520 may include an input embedding module 610.
- the input features may be inputted to the input embedding module 610.
- the feature encoder module 520 may include additional modules such as a multi-head attention module, an add and normalization module, a feed forward module, an add and normalization module, and/or the like. It is to be understood that the feature encoder module 525 in Fig. 5 may be similar to the feature encoder module 520 or same with the feature encoder module 520, which will not be repeated here.
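A minimal PyTorch sketch of a feature encoder block with the structure described for Fig. 6 (input embedding, multi-head attention, add and normalization, feed-forward, add and normalization); the dimensions and head count are illustrative assumptions, not values from the disclosure.

```python
import torch
import torch.nn as nn

class FeatureEncoderBlock(nn.Module):
    """Input embedding followed by a standard transformer encoder layer (attention + FFN)."""
    def __init__(self, in_dim=64, d_model=128, num_heads=4):
        super().__init__()
        self.embed = nn.Linear(in_dim, d_model)                # input embedding module
        self.attn = nn.MultiheadAttention(d_model, num_heads, batch_first=True)
        self.norm1 = nn.LayerNorm(d_model)                     # add & normalization
        self.ffn = nn.Sequential(nn.Linear(d_model, 4 * d_model), nn.ReLU(),
                                 nn.Linear(4 * d_model, d_model))
        self.norm2 = nn.LayerNorm(d_model)                     # add & normalization

    def forward(self, x):                                      # x: (batch, num_points, in_dim)
        h = self.embed(x)
        a, _ = self.attn(h, h, h)                              # multi-head self-attention
        h = self.norm1(h + a)                                  # residual add + norm
        return self.norm2(h + self.ffn(h))                     # residual add + norm

y = FeatureEncoderBlock()(torch.randn(2, 1024, 64))            # (2, 1024, 128)
```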
- point cloud sequence may refer to a sequence of one or more point clouds.
- frame may refer to a point cloud in a point cloud sequence.
- point cloud may refer to a frame in the point cloud sequence.
- coding unit may refer to a block, a box, a cube, a slice, a tile, a frame, or any other units involving a group of points in PCC.
- Fig. 7 illustrates a flowchart of a method 700 for point cloud coding in accordance with embodiments of the present disclosure.
- the method 700 may be implemented for a conversion between a current point cloud of a point cloud sequence and a bitstream of the point cloud sequence.
- the method 700 starts at block 710, where a type of the current point cloud is determined.
- a coding module for the current point cloud is determined based on the type of the current point cloud.
- the coding module may include a point cloud feature extractor.
- the coding module may include a point cloud geometry reconstruction module.
- the conversion is performed based on the coding module.
- the method 700 enables determining a coding module for the conversion based on the type of the point cloud. For example, a proper feature extractor can be selected based on the type of the point cloud. For another example, a proper reconstruction module such as a reconstruction decoder can be selected. Accordingly, coding efficiency can be improved.
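A minimal sketch of the selection logic of method 700, with assumed type labels and placeholder module names: the feature extractor and geometry reconstruction module are chosen from the point cloud type, and the choice could then be signaled in, or derived from, the bitstream.

```python
def select_coding_modules(point_cloud_type):
    """Pick a feature extractor and a geometry reconstruction module from the point cloud type.

    The type labels and module names below are illustrative placeholders, not normative values.
    """
    if point_cloud_type in ("basic_solid", "small_object"):
        return {"extractor": "point_based_graph_or_transformer",
                "reconstruction": "folding_based_decoder"}
    if point_cloud_type in ("lidar", "large_scale"):
        return {"extractor": "voxel_based_sparse_conv",
                "reconstruction": "sparse_conv_upscale_or_point_based"}
    raise ValueError(f"unknown point cloud type: {point_cloud_type}")

modules = select_coding_modules("lidar")
```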
- the conversion includes encoding the current point cloud into the bitstream.
- the conversion may be performed by an encoder coding information to be included in the bitstream.
- determining the coding module comprises: determining a point cloud feature extractor based on the type of the current point cloud.
- the point cloud feature extractor comprises at least one of: a point-based extractor, or a voxel-based extractor.
- the point cloud feature extractor may be a compact feature representation extractor. That is, different compact feature representation extractors of different point cloud may be utilized.
- the type of the current point cloud comprises a basic solid point cloud with a number of points less than a threshold number or a basic object
- the coding module comprises a point-based extractor
- the point-based extractor is based on at least one of: a graph-based approach, or a transformer-based approach.
- the graph-based approach may include a graph-based extractor such as the graph-based extractor 510 in Fig. 5.
- the point-based extractor is based on a graph-based approach, and a graph convolution is used to model a topological structure of the current point cloud, and a feature representation of the current point cloud is extracted based on modeling of the topological structure of the current point cloud.
- a K Nearest Neighbors (KNN) algorithm is used by the graph convolution to obtain a local point cloud nearest neighbor, and extract features of local adjacent points.
- the point-based extractor is based on a transformer-based approach for modeling a topological structure of the current point cloud, a spatial structure of the current point cloud is modeled, and a feature representation of the current point cloud is extracted based on an attention mechanism of a transformer.
- the transformer-based approach performs a progressive geometric feature extraction of the current point cloud by stacking a multi-layer attention mechanism.
- the type of the current point cloud comprises a Lidar point cloud or a large scale point cloud with a complex structure and a number of points larger than a threshold number
- the coding module comprises a voxel-based extractor
- a sparse convolution-based approach is used.
- the sparse convolution-based approach is used to extract geometry structure features of the current point cloud.
- the sparse convolution voxelizes the current point cloud and determines geometry characteristics of the voxelized current point cloud.
- a voxelization is used to make point cloud data regular for a subsequent convolution processing.
- the voxelization may be used to make point cloud data more regular for subsequent convolution processing.
- the point cloud feature extractor comprises a pretrained point cloud feature extractor extracting compact feature representation for point clouds with different spatial structures and point sizes. That is, different pre-trained point cloud feature extractors may be used to extract the compact feature representation for point clouds with different spatial structures and point sizes.
- an indication of the point cloud feature extractor to be used is included in the bitstream. For example, it may be signaled from the encoder to the decoder which point cloud feature extractor is used.
- the point cloud feature extractor to be used is determined by a decoder for decoding the current point cloud from the bitstream.
- the conversion includes decoding the current point cloud from the bitstream.
- the conversion may be performed by a decoder.
- the decoder decodes the compressed bits to determine information in the bitstream.
- determining the coding module comprises: determining the point cloud geometry reconstruction module (also referred to as a point cloud geometry reconstruction decoder) based on the type of the current point cloud.
- the reconstructed point cloud may be determined by the point cloud geometry reconstruction module based on the decoded point cloud feature representation.
- the current point cloud comprises a small object point cloud with a number of points less than a threshold number or a basic object
- the point cloud geometry reconstruction decoder uses a folding-based approach to reconstruct the current point cloud from a decoded point cloud feature representation.
- the folding-based approach may use a folding-based decoder such as the folding-based decoder 570 in Fig. 5.
- a FoldingNet is used as a basic reconstruction network for the folding-based approach.
- the folding-based approach is used to reconstruct a 3D structure of the current point cloud from a feature space.
- the current point cloud comprises a Lidar point cloud or a large scale point cloud with a number of points larger than a threshold number, and at least one of a sparse-convolution based approach or a point-based approach is used to reconstruct the current point cloud.
- a sparse-convolution upscale is used as a basic reconstruction network in the sparse-convolution based approach.
- the large scale point cloud is blocked based on the number of points in the large scale point cloud.
- the point-based approach is used to code and reconstruct the blocked point cloud.
- the method 700 further comprises: applying a point cloud feature mapping module to determine a similarity of features in a plurality of tasks.
- the point cloud feature mapping module may be implemented as the feature encoder module 520 and/or the feature encoder module 525 in Fig. 5, or an input embedding module 610 as shown in Fig. 6.
- the plurality of tasks comprises a point cloud geometry reconstruction task
- extracted features of the current point cloud comprise more point cloud spatial structure information than those for a further task. That is, for the point cloud geometry reconstruction task, the extracted features may contain more point cloud spatial structure information.
- the spatial structure information comprises at least one of: high-frequency texture information or low-frequency structure information in point cloud data.
- the plurality of tasks comprises a point cloud intelligent analysis task
- extracted features of the current point cloud comprise more point cloud semantic information than those for a further task.
- the point cloud intelligent analysis task may include at least one of a point cloud segmentation such as the point cloud segmentation 585 in Fig. 5 or a point cloud classification such as the point cloud classification 580 in Fig. 5. That is, for point cloud intelligent analysis task, the extracted features may contain more point cloud semantic information.
- the point cloud semantic information comprises semantic information of each point in the current point cloud.
- the point cloud feature mapping module comprises a transformer-based point cloud feature mapping module for minimizing a feature similarity between feature spaces of point cloud geometry coding reconstruction and point cloud machine vision intelligent analysis.
- a design of the point cloud feature mapping module is based on a transformer structure of a multi-head attention mechanism.
- a self-attention mechanism is used to determine the feature similarity of the feature mapping space, and features are weighted and summed.
- the point cloud feature mapping module is symmetrically designed and reserved at at least one of: an encoder for the conversion, or a decoder for the conversion.
- the method 700 further comprises: retaining point cloud semantic information while maintaining a geometry structure of the current point cloud based on a multi-task learning mechanism.
- the multi-task learning mechanism may be used to retain the point cloud semantic information as much as possible while maintaining the geometry structure of the point cloud. In this way, the semantic losses can be reduced. The accuracy of point cloud semantic segmentation and classification can be improved.
- an indication of a usage of a multi-task model is included in the bitstream. For example, it may be signaled from the encoder to the decoder whether a multi-task model is used.
- whether to use a multi-task model is determined at a decoder for the conversion.
- a multi-objective loss constraint is used.
- a geometry constraint for a point cloud reconstruction is used for a basic quality point cloud.
- the geometry constraint is determined based on a chamfer distance for a supervised learning.
- the chamfer distance is determined by: $L_{CD}(S_1, S_2) = \frac{1}{|S_1|}\sum_{x \in S_1}\min_{y \in S_2}\lVert x-y\rVert_2^2 + \frac{1}{|S_2|}\sum_{y \in S_2}\min_{x \in S_1}\lVert x-y\rVert_2^2$,
- S_1 and S_2 are two point clouds,
- x and y are coordinates of points in S_1 and S_2, respectively, and
- L_CD(S_1, S_2) denotes the chamfer distance between the two point clouds.
- a machine vision oriented semantic constraint is used for a point cloud.
- the semantic constraint is used to constrain a feature distribution distance between coded features and original features of the point cloud.
- the semantic constraint is determined based on a Kullback-Leibler (KL) divergence.
- KL Kullback-Leibler
- $L_{semantic} = KL(F_{ori} \parallel F_{rec}) = \sum_i \left( p_{ori}(v_i)\log p_{ori}(v_i) - p_{ori}(v_i)\log p_{rec}(v_i) \right)$,
- F ori and F rec denote original feature and coded feature
- p ori and p rec denote probability distribution of the original feature and the coded feature, respectively.
- an indication of a final sampled point cloud is included in the bitstream.
- the indication of the final sampled point cloud is coded by a point cloud codec.
- the point cloud codec comprises at least one of: a geometry-based point cloud compression (G-PCC) , a video-based point cloud compression (V-PCC) , or Draco.
- an indication of at least one feature of the current point cloud is included in the bitstream.
- the indication of the at least one feature is coded with at least one of: a fixed-length coding, a unary coding, or a truncated unary coding.
- the indication of the at least one feature is coded in a predictive way.
- information regarding whether to apply the method and/or how to apply the method is included in at least one of: the bitstream, a frame, a tile, a slice, or an octree.
- whether to and/or how to apply the method is based on coded information, the coded information comprising at least one of: a dimension, a color format, a color component, a slice type or a picture type.
- a syntax element or an indication is binarized as at least one of: a flag, a fixed length code, an exponential Golomb (x) (EG (x) ) code, a unary code, a truncated unary code, or a truncated binary code.
- the syntax element or the indication is signed or unsigned.
- a syntax element or an indication is coded with at least one context model.
- a syntax element or an indication is bypass coded.
- a syntax element or an indication is included in the bitstream based on at least one condition, the at least one condition comprising at least one of: a first condition that a function corresponding to the syntax element or the indication is applicable, or a second condition that a dimension of a block of the current point cloud satisfies a condition.
- a syntax element or an indication is included at one of: a block level, a sequence level, a group of pictures level, a group of frames level, a picture level, a frame level, a slice level, a tile level, or a tile group level.
- the syntax element or the indication is included in one of: a coding tree unit (CTU) , a coding unit (CU) , a transform unit (TU) , a prediction unit (PU) , a coding tree block (CTB) , a coding block (CB) , a transform block (TB) , a prediction block (PB) , a sequence header, a picture header, a sequence parameter set (SPS) , a Video Parameter Set (VPS) , a decoded parameter set (DPS) , Decoding Capability Information (DCI) , a Picture Parameter Set (PPS) , an Adaptation Parameter Set (APS) , a slice header, a tile group header, or a tile header.
- CTU coding tree unit
- CU coding unit
- CTB coding tree block
- CB coding block
- TB transform block
- PB prediction block
- DCI Decoding Capability Information
- whether to apply the method and/or how to apply the method is included at one of: a block level, a sequence level, a group of pictures level, a group of frames level, a picture level, a frame level, a slice level, a tile level, or a tile group level.
- whether to apply the method and/or how to apply the method is included in one of: a coding tree unit (CTU) , a coding unit (CU) , a transform unit (TU) , a prediction unit (PU) , a coding tree block (CTB) , a coding block (CB) , a transform block (TB) , a prediction block (PB) , a sequence header, a picture header, a sequence parameter set (SPS) , a Video Parameter Set (VPS) , a decoded parameter set (DPS) , Decoding Capability Information (DCI) , a Picture Parameter Set (PPS) , an Adaptation Parameter Set (APS) , a slice header, a tile group header, or a tile header.
- whether to apply the method and/or how to apply the method is based on coded information.
- the coded information comprises at least one of: a block size, a color format, an attribute format, a single or dual tree partitioning, a color component, a slice type, a tile type, a picture type, or a frame type.
- the method is used in a coding tool requiring chroma fusion.
- a non-transitory computer-readable recording medium is provided.
- a bitstream of a point cloud sequence is stored in the non-transitory computer-readable recording medium.
- the bitstream of the point cloud sequence is generated by a method performed by a point cloud sequence processing apparatus.
- a type of a current point cloud of the point cloud sequence is determined.
- a coding module for the current point cloud is determined based on the type of the current point cloud.
- the coding module comprises at least one of: a point cloud feature extractor or a point cloud geometry reconstruction module.
- the bitstream is generated based on the coding module.
- a method for storing a bitstream of a point cloud sequence is proposed.
- a type of a current point cloud of the point cloud sequence is determined.
- a coding module for the current point cloud is determined based on the type of the current point cloud.
- the coding module comprises at least one of: a point cloud feature extractor or a point cloud geometry reconstruction module.
- the bitstream is generated based on the coding module.
- the bitstream is stored in a non-transitory computer-readable recording medium.
- a method for point cloud coding comprising: determining, for a conversion between a current point cloud of a point cloud sequence and a bitstream of the point cloud sequence, a type of the current point cloud; determining a coding module for the current point cloud based on the type of the current point cloud, the coding module comprising at least one of: a point cloud feature extractor or a point cloud geometry reconstruction module; and performing the conversion based on the coding module.
- Clause 2 The method of clause 1, wherein the conversion includes encoding the current point cloud into the bitstream.
- Clause 3 The method of clause 2, wherein the conversion is performed by an encoder coding information to be included in the bitstream.
- determining the coding module comprises: determining the point cloud feature extractor based on the type of the current point cloud.
- the point cloud feature extractor comprises at least one of: a point-based extractor, or a voxel-based extractor.
- Clause 6 The method of clause 4 or 5, wherein the type of the current point cloud comprises a basic solid point cloud with a number of points less than a threshold number or a basic object, and the coding module comprises a point-based extractor.
- Clause 7 The method of clause 6, wherein the point-based extractor is based on at least one of: a graph-based approach, or a transformer-based approach.
- Clause 8 The method of clause 7, wherein the point-based extractor is based on a graph-based approach, and a graph convolution is used to model a topological structure of the current point cloud, and a feature representation of the current point cloud is extracted based on modeling of the topological structure of the current point cloud.
- Clause 9 The method of clause 8, wherein a K Nearest Neighbors (KNN) algorithm is used by the graph convolution to obtain a local point cloud nearest neighbor, and extract features of local adjacent points.
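As a non-limiting sketch of Clauses 8 and 9, the following brute-force KNN step gathers local adjacent points and extracts a simple per-point local descriptor; the EdgeConv-style max aggregation and the value of K are assumptions of the example, not requirements of the embodiments.

```python
import numpy as np

def knn_indices(points, k):
    # points: (N, 3). Brute-force pairwise distances; adequate for small point clouds.
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    return np.argsort(d2, axis=1)[:, 1:k + 1]        # skip the point itself

def local_graph_features(points, feats, k=16):
    # Gather the K nearest neighbours of every point and max-pool edge features
    # (neighbour feature minus centre feature) into a local descriptor.
    idx = knn_indices(points, k)                      # (N, k)
    neighbour = feats[idx]                            # (N, k, C)
    centre = feats[:, None, :]                        # (N, 1, C)
    edge = np.concatenate([centre.repeat(k, 1), neighbour - centre], axis=-1)
    return edge.max(axis=1)                           # (N, 2C)
```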
- Clause 10 The method of clause 7, wherein the point-based extractor is based on a transformer-based approach for modeling a topological structure of the current point cloud, and a spatial structure of the current point cloud is modeled, and a feature representation of the current point cloud is extracted based on an attention mechanism of a transformer.
- Clause 11 The method of clause 10, wherein the transformer-based approach performs a progressive geometric feature extraction of the current point cloud by stacking a multi-layer attention mechanism.
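Clauses 10 and 11 may be illustrated, purely as a hypothetical sketch, by a stack of self-attention layers applied to per-point embeddings; the layer count, width and residual arrangement below are assumptions of the example.

```python
import torch
import torch.nn as nn

class StackedPointAttention(nn.Module):
    # Progressive feature extraction: several self-attention layers applied to
    # per-point embeddings of shape (batch, n_points, dim).
    def __init__(self, dim=64, heads=4, layers=3):
        super().__init__()
        self.blocks = nn.ModuleList(
            nn.MultiheadAttention(dim, heads, batch_first=True) for _ in range(layers))
        self.norms = nn.ModuleList(nn.LayerNorm(dim) for _ in range(layers))

    def forward(self, x):
        for attn, norm in zip(self.blocks, self.norms):
            out, _ = attn(x, x, x)        # self-attention over all points
            x = norm(x + out)             # residual connection per layer
        return x
```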
- Clause 12 The method of clause 4 or 5, wherein the type of the current point cloud comprises a Lidar point cloud or a large scale point cloud with a complex structure and a number of points larger than a threshold number, and the coding module comprises a voxel-based extractor.
- Clause 13 The method of clause 12, wherein a sparse convolution-based approach is used.
- Clause 14 The method of clause 13, wherein the sparse convolution-based approach is used to extract geometry structure features of the current point cloud.
- Clause 15 The method of clause 13 or 14, wherein the sparse convolution voxelizes the current point cloud and determines geometry characteristics of the voxelized current point cloud.
- Clause 16 The method of clause 15, wherein a voxelization is used to make point cloud data regular for a subsequent convolution processing.
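A minimal sketch of the voxelization of Clauses 15 and 16 is given below; the voxel size and the use of NumPy are assumptions made only for illustration.

```python
import numpy as np

def voxelize(points, voxel_size=0.05):
    # Quantise raw coordinates onto a regular grid so that a (sparse) 3D
    # convolution can process them; points falling into one voxel collapse together.
    coords = np.floor(np.asarray(points) / voxel_size).astype(np.int32)
    voxels, point_to_voxel = np.unique(coords, axis=0, return_inverse=True)
    return voxels, point_to_voxel
```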
- Clause 18 The method of any of clauses 4-16, wherein an indication of the point cloud feature extractor to be used is included in the bitstream.
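Clauses 4-18 may be summarized, as a purely illustrative sketch, by a dispatcher that selects the feature extractor from the type of the current point cloud and returns a one-bit indication that could be written into the bitstream; the threshold value and the return convention are assumptions of the example.

```python
def select_feature_extractor(num_points, is_lidar, threshold=100_000):
    # Point-based extractor for small/basic solid point clouds (Clause 6),
    # voxel-based extractor for Lidar or large-scale point clouds (Clause 12);
    # the second return value is the indication of Clause 18.
    if is_lidar or num_points > threshold:
        return "voxel_based_extractor", 1     # e.g. a sparse-convolution backbone
    return "point_based_extractor", 0         # e.g. a graph- or transformer-based network
```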
- Clause 20 The method of clause 1, wherein the conversion includes decoding the current point cloud from the bitstream.
- Clause 21 The method of clause 20, wherein the conversion is performed by a decoder, the decoder coding compression bits to determine information in the bitstream.
- determining the coding module comprises: determining the point cloud geometry reconstruction module based on the type of the current point cloud.
- Clause 23 The method of clause 22, wherein the current point cloud comprises a small object point cloud with a number of points less than a threshold number or a basic object, and the point cloud geometry reconstruction decoder uses a folding-based approach to reconstruct the current point cloud from a decoded point cloud feature representation.
- Clause 25 The method of clause 23 or 24, wherein the folding-based approach is used to reconstruct a 3D structure of the current point cloud from a feature space.
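A hypothetical, simplified sketch of the folding-based reconstruction of Clauses 23-25 is shown below: a fixed 2D grid is deformed into 3D points conditioned on a decoded global feature (codeword). A single folding step and the layer widths are assumptions of the example; FoldingNet-style decoders typically fold twice.

```python
import torch
import torch.nn as nn

class FoldingDecoder(nn.Module):
    def __init__(self, code_dim=512, grid_size=45):
        super().__init__()
        g = torch.linspace(-1.0, 1.0, grid_size)
        self.register_buffer("grid", torch.cartesian_prod(g, g))   # (grid_size**2, 2)
        self.fold = nn.Sequential(
            nn.Linear(code_dim + 2, 256), nn.ReLU(),
            nn.Linear(256, 128), nn.ReLU(),
            nn.Linear(128, 3))

    def forward(self, codeword):
        # codeword: decoded global feature of shape (code_dim,)
        n = self.grid.shape[0]
        code = codeword.unsqueeze(0).expand(n, -1)                  # (n, code_dim)
        return self.fold(torch.cat([code, self.grid], dim=1))       # (n, 3) points
```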
- Clause 26 The method of clause 22, wherein the current point cloud comprises a Lidar point cloud or a large scale point cloud with a number of points larger than a threshold number, and at least one of a sparse-convolution based approach or a point-based approach is used to reconstruct the current point cloud.
- Clause 27 The method of clause 26, wherein a sparse-convolution upscale is used as a basic reconstruction network in the sparse-convolution based approach.
- Clause 28 The method of clause 26 or 27, wherein the large scale point cloud is blocked based on the number of points in the large scale point cloud.
- Clause 29 The method of clause 28, wherein the point-based approach is used to code and reconstruct the blocked point cloud.
- Clause 30 The method of any of clauses 1-29, further comprising: applying a point cloud feature mapping module to determine a similarity of features in a plurality of tasks.
- Clause 31 The method of clause 30, wherein the plurality of tasks comprises a point cloud geometry reconstruction task, and for the point cloud geometry reconstruction task, extracted features of the current point cloud comprise more point cloud spatial structure information than those for a further task.
- the spatial structure information comprises at least one of: high-frequency texture information or low-frequency structure information in point cloud data.
- Clause 33 The method of clause 30, wherein the plurality of tasks comprises a point cloud intelligent analysis task, and extracted features of the current point cloud comprise more point cloud semantic information than those for a further task, and wherein the point cloud intelligent analysis task comprises at least one of: a point cloud segmentation or a point cloud classification.
- the point cloud feature mapping module comprises a transformer-based point cloud feature mapping module for minimizing a feature similarity between feature spaces of point cloud geometry coding reconstruction and point cloud machine vision intelligent analysis.
- Clause 36 The method of clause 35, wherein a design of the point cloud feature mapping module is based on a transformer structure of a multi-head attention mechanism.
- Clause 37 The method of clause 35, wherein a self-attention mechanism is used to determine the feature similarity of the feature mapping space, and the features are weighted and summed.
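As a non-limiting sketch of Clause 37, scaled dot-product self-attention turns pairwise feature similarities into weights and sums the features accordingly; treating the features directly as queries, keys and values is an assumption of this example.

```python
import torch
import torch.nn.functional as F

def attention_weighted_sum(features):
    # features: (n, d). Similarity scores between feature vectors become weights,
    # and the features are summed according to those weights.
    d = features.shape[-1]
    scores = features @ features.transpose(-2, -1) / (d ** 0.5)
    weights = F.softmax(scores, dim=-1)
    return weights @ features
```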
- Clause 39 The method of any of clauses 1-38, further comprising: retaining point cloud semantic information while maintaining a geometry structure of the current point cloud based on a multi-task learning mechanism.
- Clause 40 The method of clause 39, wherein an indication of a usage of a multi-task model is included in the bitstream.
- Clause 41 The method of clause 39, wherein whether to use a multi-task model is determined at a decoder for the conversion.
- Clause 42 The method of any of clauses 39-41, wherein during a training stage for the multi-task learning mechanism, a loss constraint of a multi-object is used.
- Clause 43 The method of clause 42, wherein a geometry constraint for a point cloud reconstruction is used for a basic quality point cloud.
- Clause 46 The method of any of clauses 42-45, wherein a machine vision oriented semantic constraint is used for a point cloud.
- Clause 47 The method of clause 46, wherein the semantic constraint is used to constrain a feature distribution distance between coded features and original features of the point cloud.
- Clause 48 The method of clause 47, wherein the semantic constraint is determined based on a Kullback-Leibler (KL) divergence.
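Clauses 42-48 may be illustrated, by way of a non-limiting sketch, by a multi-objective training loss combining a geometry constraint (here a Chamfer distance, an assumption of the example), the KL-based semantic constraint, and a rate term; the weighting factors are likewise illustrative.

```python
import torch

def chamfer_distance(p, q):
    # Symmetric nearest-neighbour distance between point sets p (N, 3) and q (M, 3).
    d = torch.cdist(p, q)
    return d.min(dim=1).values.mean() + d.min(dim=0).values.mean()

def multi_task_loss(p_rec, p_ori, feat_rec, feat_ori, rate, lam_sem=0.1, lam_rate=1.0):
    geometry = chamfer_distance(p_rec, p_ori)                      # geometry constraint
    q_ori = torch.softmax(feat_ori, dim=-1)
    q_rec = torch.softmax(feat_rec, dim=-1)
    semantic = torch.sum(q_ori * (q_ori.clamp_min(1e-9).log()      # semantic (KL) constraint
                                  - q_rec.clamp_min(1e-9).log()))
    return geometry + lam_sem * semantic + lam_rate * rate
```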
- Clause 51 The method of any of clauses 1-50, wherein an indication of a final sampled point cloud is included in the bitstream.
- Clause 52 The method of clause 51, wherein the indication of the final sampled point cloud is coded by a point cloud codec.
- the point cloud codec comprises at least one of: a geometry-based point cloud compression (G-PCC) , a video-based point cloud compression (V-PCC) , or Draco.
- Clause 54 The method of any of clauses 1-53, wherein an indication of at least one feature of the current point cloud is included in the bitstream.
- Clause 55 The method of clause 54, wherein the indication of the at least one feature is coded with at least one of: a fixed-length coding, a unary coding, or a truncated unary coding.
- Clause 56 The method of clause 54 or 55, wherein the indication of the at least one feature is coded in a predictive way.
- Clause 57 The method of any of clauses 1-56, wherein information regarding whether to apply the method and/or how to apply the method is included in at least one of: the bitstream, a frame, a tile, a slice, or an octree.
- Clause 58 The method of any of clauses 1-56, wherein whether to and/or how to apply the method is based on coded information, the coded information comprising at least one of: a dimension, a color format, a color component, a slice type or a picture type.
- Clause 59 The method of any of clauses 1-58, wherein a syntax element or an indication is binarized as at least one of: a flag, a fixed length code, an exponential Golomb (x) (EG (x) ) code, a unary code, a truncated unary code, or a truncated binary code.
- Clause 60 The method of clause 59, wherein the syntax element or the indication is signed or unsigned.
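The binarizations named in Clauses 55 and 59 may be sketched, for non-negative integers and purely as an illustration, as follows; real codecs integrate these with arithmetic or bypass coding rather than emitting bit strings.

```python
def unary(n):
    # n >= 0 -> n ones followed by a terminating zero.
    return "1" * n + "0"

def truncated_unary(n, n_max):
    # Like unary, but the terminating zero is dropped for the largest value.
    return "1" * n if n == n_max else "1" * n + "0"

def exp_golomb(n, k=0):
    # k-th order Exponential-Golomb (EG(k)) code of a non-negative integer.
    v = n + (1 << k)
    return "0" * (v.bit_length() - k - 1) + format(v, "b")
```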
- Clause 61 The method of any of clauses 1-60, wherein a syntax element or an indication is coded with at least one context model.
- Clause 62 The method of any of clauses 1-60, wherein a syntax element or an indication is bypass coded.
- Clause 63 The method of any of clauses 1-62, wherein a syntax element or an indication is included in the bitstream based on at least one condition, the at least one condition comprising at least one of: a first condition that a function corresponding to the syntax element or the indication is applicable, or a second condition that a dimension of a block of the current point cloud satisfies a condition.
- Clause 64 The method of any of clauses 1-63, wherein a syntax element or an indication is included at one of: a block level, a sequence level, a group of pictures level, a group of frames level, a picture level, a frame level, a slice level, a tile level, or a tile group level.
- Clause 65 The method of clause 64, wherein the syntax element or the indication is included in one of: a coding tree unit (CTU) , a coding unit (CU) , a transform unit (TU) , a prediction unit (PU) , a coding tree block (CTB) , a coding block (CB) , a transform block (TB) , a prediction block (PB) , a sequence header, a picture header, a sequence parameter set (SPS) , a Video Parameter Set (VPS) , a decoded parameter set (DPS) , Decoding Capability Information (DCI) , a Picture Parameter Set (PPS) , an Adaptation Parameter Set (APS) , a slice header, a tile group header, or a tile header.
- Clause 66 The method of any of clauses 1-65, wherein whether to apply the method and/or how to apply the method is included at one of: a block level, a sequence level, a group of pictures level, a group of frames level, a picture level, a frame level, a slice level, a tile level, or a tile group level.
- Clause 67 The method of clause 66, wherein whether to apply the method and/or how to apply the method is included in one of: a coding tree unit (CTU) , a coding unit (CU) , a transform unit (TU) , a prediction unit (PU) , a coding tree block (CTB) , a coding block (CB) , a transform block (TB) , a prediction block (PB) , a sequence header, a picture header, a sequence parameter set (SPS) , a Video Parameter Set (VPS) , a decoded parameter set (DPS) , Decoding Capability Information (DCI) , a Picture Parameter Set (PPS) , an Adaptation Parameter Set (APS) , a slice header, a tile group header, or a tile header.
- Clause 68 The method of any of clauses 1-65, wherein whether to apply the method and/or how to apply the method is based on coded information.
- Clause 69 The method of clause 68, wherein the coded information comprises at least one of: a block size, a color format, an attribute format, a single or dual tree partitioning, a color component, a slice type, a tile type, a picture type, or a frame type.
- Clause 70 The method of any of clauses 1-69, wherein the method is used in a coding tool requiring chroma fusion.
- An apparatus for point cloud coding comprising a processor and a non-transitory memory with instructions thereon, wherein the instructions upon execution by the processor, cause the processor to perform a method in accordance with any of clauses 1-70.
- Clause 72 A non-transitory computer-readable storage medium storing instructions that cause a processor to perform a method in accordance with any of clauses 1-70.
- a non-transitory computer-readable recording medium storing a bitstream of a point cloud sequence which is generated by a method performed by a point cloud processing apparatus, wherein the method comprises: determining a type of a current point cloud of the point cloud sequence; determining a coding module for the current point cloud based on the type of the current point cloud, the coding module comprising at least one of: a point cloud feature extractor or a point cloud geometry reconstruction module; and generating the bitstream based on the coding module.
- a method for storing a bitstream of a point cloud sequence comprising: determining a type of a current point cloud of the point cloud sequence; determining a coding module for the current point cloud based on the type of the current point cloud, the coding module comprising at least one of: a point cloud feature extractor or a point cloud geometry reconstruction module; generating the bitstream based on the coding module; and storing the bitstream in a non-transitory computer-readable recording medium.
- Fig. 8 illustrates a block diagram of a computing device 800 in which various embodiments of the present disclosure can be implemented.
- the computing device 800 may be implemented as or included in the source device 110 (or the PCC encoder 116 or the GPCC encoder 300) or the destination device 120 (or the PCC decoder 126 or the GPCC decoder 400) .
- computing device 800 shown in Fig. 8 is merely for purpose of illustration, without suggesting any limitation to the functions and scopes of the embodiments of the present disclosure in any manner.
- the computing device 800 includes a general-purpose computing device 800.
- the computing device 800 may at least comprise one or more processors or processing units 810, a memory 820, a storage unit 830, one or more communication units 840, one or more input devices 850, and one or more output devices 860.
- the computing device 800 may be implemented as any user terminal or server terminal having the computing capability.
- the server terminal may be a server, a large-scale computing device or the like that is provided by a service provider.
- the user terminal may for example be any type of mobile terminal, fixed terminal, or portable terminal, including a mobile phone, station, unit, device, multimedia computer, multimedia tablet, Internet node, communicator, desktop computer, laptop computer, notebook computer, netbook computer, tablet computer, personal communication system (PCS) device, personal navigation device, personal digital assistant (PDA) , audio/video player, digital camera/video camera, positioning device, television receiver, radio broadcast receiver, E-book device, gaming device, or any combination thereof, including the accessories and peripherals of these devices, or any combination thereof.
- the computing device 800 can support any type of interface to a user (such as “wearable” circuitry and the like) .
- the processing unit 810 may be a physical or virtual processor and can implement various processes based on programs stored in the memory 820. In a multi-processor system, multiple processing units execute computer executable instructions in parallel so as to improve the parallel processing capability of the computing device 800.
- the processing unit 810 may also be referred to as a central processing unit (CPU) , a microprocessor, a controller or a microcontroller.
- the computing device 800 typically includes various computer storage media. Such media can be any media accessible by the computing device 800, including, but not limited to, volatile and non-volatile media, or detachable and non-detachable media.
- the memory 820 can be a volatile memory (for example, a register, cache, Random Access Memory (RAM) ) , a non-volatile memory (such as a Read-Only Memory (ROM) , Electrically Erasable Programmable Read-Only Memory (EEPROM) , or a flash memory) , or any combination thereof.
- the storage unit 830 may be any detachable or non-detachable medium and may include a machine-readable medium such as a memory, flash memory drive, magnetic disk or any other media, which can be used for storing information and/or data and can be accessed in the computing device 800.
- the computing device 800 may further include additional detachable/non-detachable, volatile/non-volatile memory medium.
- additional detachable/non-detachable, volatile/non-volatile memory medium may be provided.
- a magnetic disk drive for reading from and/or writing into a detachable and non-volatile magnetic disk
- an optical disk drive for reading from and/or writing into a detachable non-volatile optical disk.
- each drive may be connected to a bus (not shown) via one or more data medium interfaces.
- the communication unit 840 communicates with a further computing device via the communication medium.
- the functions of the components in the computing device 800 can be implemented by a single computing cluster or multiple computing machines that can communicate via communication connections. Therefore, the computing device 800 can operate in a networked environment using a logical connection with one or more other servers, networked personal computers (PCs) or further general network nodes.
- the input device 850 may be one or more of a variety of input devices, such as a mouse, keyboard, tracking ball, voice-input device, and the like.
- the output device 860 may be one or more of a variety of output devices, such as a display, loudspeaker, printer, and the like.
- the computing device 800 can further communicate with one or more external devices (not shown) such as the storage devices and display device, with one or more devices enabling the user to interact with the computing device 800, or any devices (such as a network card, a modem and the like) enabling the computing device 800 to communicate with one or more other computing devices, if required. Such communication can be performed via input/output (I/O) interfaces (not shown) .
- some or all components of the computing device 800 may also be arranged in cloud computing architecture.
- the components may be provided remotely and work together to implement the functionalities described in the present disclosure.
- cloud computing provides computing, software, data access and storage service, which will not require end users to be aware of the physical locations or configurations of the systems or hardware providing these services.
- the cloud computing provides the services via a wide area network (such as Internet) using suitable protocols.
- a cloud computing provider provides applications over the wide area network, which can be accessed through a web browser or any other computing components.
- the software or components of the cloud computing architecture and corresponding data may be stored on a server at a remote position.
- the computing resources in the cloud computing environment may be merged or distributed at locations in a remote data center.
- Cloud computing infrastructures may provide the services through a shared data center, though they behave as a single access point for the users. Therefore, the cloud computing architectures may be used to provide the components and functionalities described herein from a service provider at a remote location. Alternatively, they may be provided from a conventional server or installed directly or otherwise on a client device.
- the computing device 800 may be used to implement point cloud encoding/decoding in embodiments of the present disclosure.
- the memory 820 may include one or more point cloud coding modules 825 having one or more program instructions. These modules are accessible and executable by the processing unit 810 to perform the functionalities of the various embodiments described herein.
- the input device 850 may receive point cloud data as an input 870 to be encoded.
- the point cloud data may be processed, for example, by the point cloud coding module 825, to generate an encoded bitstream.
- the encoded bitstream may be provided via the output device 860 as an output 880.
- the input device 850 may receive an encoded bitstream as the input 870.
- the encoded bitstream may be processed, for example, by the point cloud coding module 825, to generate decoded point cloud data.
- the decoded point cloud data may be provided via the output device 860 as the output 880.
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Artificial Intelligence (AREA)
- Evolutionary Computation (AREA)
- Signal Processing (AREA)
- Compression Or Coding Systems Of Tv Signals (AREA)
Abstract
Embodiments of the present disclosure provide a solution for point cloud coding. A method for point cloud coding is proposed. In the method, for a conversion between a current point cloud of a point cloud sequence and a bitstream of the point cloud sequence, a type of the current point cloud is determined. A coding module for the current point cloud is determined based on the type of the current point cloud. The coding module comprises at least one of: a point cloud feature extractor or a point cloud geometry reconstruction module. The conversion is performed based on the coding module.
Description
Embodiments of the present disclosure relate generally to point cloud coding techniques, and more particularly, to semantic enhancement-based point cloud compression.
A point cloud is a collection of individual data points in a three-dimensional (3D) space, with each point having a set of coordinates on the X, Y, and Z axes. Thus, a point cloud may be used to represent the physical content of the three-dimensional space. Point clouds have been shown to be a promising way to represent 3D visual data for a wide range of immersive applications, from augmented reality to autonomous cars.
Point cloud coding standards have evolved primarily through the development of the well-known MPEG organization. MPEG, short for Moving Picture Experts Group, is one of the main standardization groups dealing with multimedia. In 2017, the MPEG 3D Graphics Coding group (3DG) published a call for proposals (CFP) document to start to develop a point cloud coding standard. The final standard will consist of two classes of solutions. Video-based Point Cloud Compression (V-PCC or VPCC) is appropriate for point sets with a relatively uniform distribution of points. Geometry-based Point Cloud Compression (G-PCC or GPCC) is appropriate for more sparse distributions. However, coding efficiency of conventional point cloud coding techniques is generally expected to be further improved.
Embodiments of the present disclosure provide a solution for point cloud coding.
In a first aspect, a method for point cloud coding is proposed. The method comprises: determining, for a conversion between a current point cloud of a point cloud sequence and a bitstream of the point cloud sequence, a type of the current point cloud; determining a coding module for the current point cloud based on the type of the current point cloud, the coding module comprising at least one of: a point cloud feature extractor or a point cloud geometry reconstruction module; and performing the conversion based on the coding module. The method in accordance with the first aspect of the present disclosure determines the point cloud
feature extractor or the point cloud geometry reconstruction module based on the type of the point cloud. In this way, the point cloud geometry compression (PCGC) can be improved.
In a second aspect, an apparatus for processing point cloud sequence is proposed. The apparatus for processing point cloud sequence comprises a processor and a non-transitory memory with instructions thereon. The instructions upon execution by the processor, cause the processor to perform a method in accordance with the first aspect of the present disclosure.
In a third aspect, a non-transitory computer-readable storage medium is proposed. The non-transitory computer-readable storage medium stores instructions that cause a processor to perform a method in accordance with the first aspect of the present disclosure.
In a fourth aspect, a non-transitory computer-readable recording medium is proposed. The non-transitory computer-readable recording medium stores a bitstream of a point cloud sequence which is generated by a method performed by a point cloud processing apparatus. The method comprises: determining a type of a current point cloud of the point cloud sequence; determining a coding module for the current point cloud based on the type of the current point cloud, the coding module comprising at least one of: a point cloud feature extractor or a point cloud geometry reconstruction module; and generating the bitstream based on the coding module.
In a fifth aspect, a method for storing a bitstream of a point cloud sequence is proposed. The method comprises: determining a type of a current point cloud of the point cloud sequence; determining a coding module for the current point cloud based on the type of the current point cloud, the coding module comprising at least one of: a point cloud feature extractor or a point cloud geometry reconstruction module; generating the bitstream based on the coding module; and storing the bitstream in a non-transitory computer-readable recording medium.
This Summary is provided to introduce a selection of concepts in a simplified form that are further described below in the Detailed Description. This Summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter.
Through the following detailed description with reference to the accompanying drawings, the above and other objectives, features, and advantages of example embodiments of
the present disclosure will become more apparent. In the example embodiments of the present disclosure, the same reference numerals usually refer to the same components.
Fig. 1 illustrates a block diagram that illustrates an example point cloud coding system, in accordance with some embodiments of the present disclosure;
Fig. 2 illustrates another block diagram of an example point cloud coding system in accordance with some embodiments of the present disclosure;
Fig. 3 illustrates a block diagram that illustrates an example of a point cloud compression (PCC) encoder, in accordance with some embodiments of the present disclosure;
Fig. 4 illustrates a block diagram that illustrates an example of a PCC decoder, in accordance with some embodiments of the present disclosure;
Fig. 5 illustrates an example of a flow of a compression method in accordance with some embodiments of the present disclosure;
Fig. 6 illustrates an example of a feature encoder module in accordance with some embodiments of the present disclosure;
Fig. 7 illustrates a flowchart of a method for point cloud coding in accordance with some embodiments of the present disclosure; and
Fig. 8 illustrates a block diagram of a computing device in which various embodiments of the present disclosure can be implemented.
Throughout the drawings, the same or similar reference numerals usually refer to the same or similar elements.
Principle of the present disclosure will now be described with reference to some embodiments. It is to be understood that these embodiments are described only for the purpose of illustration and help those skilled in the art to understand and implement the present disclosure, without suggesting any limitation as to the scope of the disclosure. The disclosure described herein can be implemented in various manners other than the ones described below.
In the following description and claims, unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skills in the art to which this disclosure belongs.
References in the present disclosure to “one embodiment, ” “an embodiment, ” “an example embodiment, ” and the like indicate that the embodiment described may include a particular feature, structure, or characteristic, but it is not necessary that every embodiment includes the particular feature, structure, or characteristic. Moreover, such phrases are not necessarily referring to the same embodiment. Further, when a particular feature, structure, or characteristic is described in connection with an example embodiment, it is submitted that it is within the knowledge of one skilled in the art to affect such feature, structure, or characteristic in connection with other embodiments whether or not explicitly described.
It shall be understood that although the terms “first” and “second” etc. may be used herein to describe various elements, these elements should not be limited by these terms. These terms are only used to distinguish one element from another. For example, a first element could be termed a second element, and similarly, a second element could be termed a first element, without departing from the scope of example embodiments. As used herein, the term “and/or” includes any and all combinations of one or more of the listed terms.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of example embodiments. As used herein, the singular forms “a” , “an” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms “comprises” , “comprising” , “has” , “having” , “includes” and/or “including” , when used herein, specify the presence of stated features, elements, and/or components etc., but do not preclude the presence or addition of one or more other features, elements, components and/or combinations thereof.
Example Environment
Fig. 1 is a block diagram that illustrates an example point cloud coding system 100 that may utilize the techniques of the present disclosure. As shown, the point cloud coding system 100 may include a source device 110 and a destination device 120. The source device 110 can be also referred to as a point cloud encoding device, and the destination device 120 can be also referred to as a point cloud decoding device. In operation, the source device 110 can be configured to generate encoded point cloud data and the destination device 120 can be
configured to decode the encoded point cloud data generated by the source device 110. The techniques of this disclosure are generally directed to coding (encoding and/or decoding) point cloud data, i.e., to support point cloud compression. The coding may be effective in compressing and/or decompressing point cloud data.
Source device 100 and destination device 120 may comprise any of a wide range of devices, including desktop computers, notebook (i.e., laptop) computers, tablet computers, set-top boxes, telephone handsets such as smartphones and mobile phones, televisions, cameras, display devices, digital media players, video gaming consoles, video streaming devices, vehicles (e.g., terrestrial or marine vehicles, spacecraft, aircraft, etc. ) , robots, LIDAR devices, satellites, extended reality devices, or the like. In some cases, source device 100 and destination device 120 may be equipped for wireless communication.
The source device 100 may include a data source 112, a memory 114, a PCC encoder 116, and an input/output (I/O) interface 118. The destination device 120 may include an input/output (I/O) interface 128, a PCC decoder 126, a memory 124, and a data consumer 122. In accordance with this disclosure, PCC encoder 116 of source device 100 and PCC decoder 126 of destination device 120 may be configured to apply the techniques of this disclosure related to point cloud coding. Thus, source device 100 represents an example of an encoding device, while destination device 120 represents an example of a decoding device. In other examples, source device 100 and destination device 120 may include other components or arrangements. For example, source device 100 may receive data (e.g., point cloud data) from an internal or external source. Likewise, destination device 120 may interface with an external data consumer, rather than include a data consumer in the same device.
In general, data source 112 represents a source of point cloud data (i.e., raw, unencoded point cloud data) and may provide a sequential series of “frames” of the point cloud data to PCC encoder 116, which encodes point cloud data for the frames. In some examples, data source 112 generates the point cloud data. Data source 112 of source device 100 may include a point cloud capture device, such as any of a variety of cameras or sensors, e.g., one or more video cameras, an archive containing previously captured point cloud data, a 3D scanner or a light detection and ranging (LIDAR) device, and/or a data feed interface to receive point cloud data from a data content provider. Thus, in some examples, data source 112 may generate the point cloud data based on signals from a LIDAR apparatus. Alternatively or additionally, point cloud data may be computer-generated from scanner, camera, sensor or other
data. For example, data source 112 may generate the point cloud data, or produce a combination of live point cloud data, archived point cloud data, and computer-generated point cloud data. In each case, PCC encoder 116 encodes the captured, pre-captured, or computer-generated point cloud data. PCC encoder 116 may rearrange frames of the point cloud data from the received order (sometimes referred to as “display order” ) into a coding order for coding. PCC encoder 116 may generate one or more bitstreams including encoded point cloud data. Source device 100 may then output the encoded point cloud data via I/O interface 118 for reception and/or retrieval by, e.g., I/O interface 128 of destination device 120. The encoded point cloud data may be transmitted directly to destination device 120 via the I/O interface 118 through the network 130A. The encoded point cloud data may also be stored onto a storage medium/server 130B for access by destination device 120.
Memory 114 of source device 100 and memory 124 of destination device 120 may represent general purpose memories. In some examples, memory 114 and memory 124 may store raw point cloud data, e.g., raw point cloud data from data source 112 and raw, decoded point cloud data from PCC decoder 126. In addition, or alternatively, memory 114 and memory 124 may store software instructions executable by, e.g., PCC encoder 116 and PCC decoder 126, respectively. Although memory 114 and memory 124 are shown separately from PCC encoder 116 and PCC decoder 126 in this example, it should be understood that PCC encoder 116 and PCC decoder 126 may also include internal memories for functionally similar or equivalent purposes. Furthermore, memory 114 and memory 124 may store encoded point cloud data, e.g., output from PCC encoder 116 and input to PCC decoder 126. In some examples, portions of memory 114 and memory 124 may be allocated as one or more buffers, e.g., to store raw, decoded, and/or encoded point cloud data. For instance, memory 114 and memory 124 may store point cloud data.
I/O interface 118 and I/O interface 128 may represent wireless transmitters/receivers, modems, wired networking components (e.g., Ethernet cards) , wireless communication components that operate according to any of a variety of IEEE 802.11 standards, or other physical components. In examples where I/O interface 118 and I/O interface 128 comprise wireless components, I/O interface 118 and I/O interface 128 may be configured to transfer data, such as encoded point cloud data, according to a cellular communication standard, such as 4G, 4G-LTE (Long-Term Evolution) , LTE Advanced, 5G, or the like. In some examples where I/O interface 118 comprises a wireless transmitter, I/O interface 118 and I/O interface
128 may be configured to transfer data, such as encoded point cloud data, according to other wireless standards, such as an IEEE 802.11 specification. In some examples, source device 100 and/or destination device 120 may include respective system-on-a-chip (SoC) devices. For example, source device 100 may include an SoC device to perform the functionality attributed to PCC encoder 116 and/or I/O interface 118, and destination device 120 may include an SoC device to perform the functionality attributed to PCC decoder 126 and/or I/O interface 128.
The techniques of this disclosure may be applied to encoding and decoding in support of any of a variety of applications, such as communication between autonomous vehicles, communication between scanners, cameras, sensors and processing devices such as local or remote servers, geographic mapping, or other applications.
I/O interface 128 of destination device 120 receives an encoded bitstream from source device 110. The encoded bitstream may include signaling information defined by PCC encoder 116, which is also used by PCC decoder 126, such as syntax elements having values that represent a point cloud. Data consumer 122 uses the decoded data. For example, data consumer 122 may use the decoded point cloud data to determine the locations of physical objects. In some examples, data consumer 122 may comprise a display to present imagery based on the point cloud data.
PCC encoder 116 and PCC decoder 126 each may be implemented as any of a variety of suitable encoder and/or decoder circuitry, such as one or more microprocessors, digital signal processors (DSPs) , application specific integrated circuits (ASICs) , field programmable gate arrays (FPGAs) , discrete logic, software, hardware, firmware or any combinations thereof. When the techniques are implemented partially in software, a device may store instructions for the software in a suitable, non-transitory computer-readable medium and execute the instructions in hardware using one or more processors to perform the techniques of this disclosure. Each of PCC encoder 116 and PCC decoder 126 may be included in one or more encoders or decoders, either of which may be integrated as part of a combined encoder/decoder (CODEC) in a respective device. A device including PCC encoder 116 and/or PCC decoder 126 may comprise one or more integrated circuits, microprocessors, and/or other types of devices.
PCC encoder 116 and PCC decoder 126 may operate according to a coding standard, such as a video point cloud compression (VPCC) standard or a geometry point cloud compression
(GPCC) standard. This disclosure may generally refer to coding (e.g., encoding and decoding) of frames to include the process of encoding or decoding data. An encoded bitstream generally includes a series of values for syntax elements representative of coding decisions (e.g., coding modes) .
A point cloud may contain a set of points in a 3D space, and may have attributes associated with the point. The attributes may be color information such as R, G, B or Y, Cb, Cr, or reflectance information, or other attributes. Point clouds may be captured by a variety of cameras or sensors such as LIDAR sensors and 3D scanners and may also be computer-generated. Point cloud data are used in a variety of applications including, but not limited to, construction (modeling) , graphics (3D models for visualizing and animation) , and the automotive industry (LIDAR sensors used to help in navigation) .
In some example embodiments, the PCC encoder 116 in the system 100 may include a GPCC encoder, and the PCC decoder 126 may include a GPCC decoder. The GPCC encoder and the GPCC decoder may be collectively referred to as a GPCC coder or a GPCC. In some example embodiments, the GPCC may be applied in combination with another coder.
Fig. 2 illustrates another block diagram of an example point cloud coding system 200 in accordance with some embodiments of the present disclosure. As illustrated, the point cloud system 200 includes a geometry encoder 210, a geometry decoder 230, a GPCC 220 and a machine vision task 240. In some example embodiments, the GPCC 220 may be implemented as a PCC encoder such as the PCC encoder 116 in Fig. 1 and a PCC decoder such as the PCC decoder 126 in Fig. 1. The system 100 and the system 200 may be used in combination or separately. For example, the system 100 may be a part of the system 200. The geometry encoder 210, the geometry decoder 230 and/or the machine vision task 240 may be added into the system 100 as additional modules.
In some example embodiments, the geometry encoder 210 may be based on machine learning (ML) or artificial intelligence (AI) , and the geometry decoder 230 may be based on ML or AI, as well. The ML/AI based geometry encoder/decoder may be applied in combination with the GPCC 220. The geometry encoder 210 and/or the geometry decoder 230 may be pretrained or fine-tuned. Information or data output from the geometry decoder 230 may be used for a machine vision task 240. As used herein, the system 200 may be referred to as an AI based point cloud compression (AI-PCC) system. It is to be understood that in some
example embodiments, the system 200 may include additional modules such as feature extractor module, or the like. Scope of the present disclosure is not limited here.
In some example embodiments, the GPCC 220 may include a GPCC encoder and a GPCC decoder. Fig. 3 is a block diagram illustrating an example of a GPCC encoder 300, which may be an example of a GPCC encoder of the GPCC 220 illustrated in Fig. 2, in accordance with some embodiments of the present disclosure. Fig. 4 is a block diagram illustrating an example of a GPCC decoder 400, which may be an example of a GPCC decoder of the GPCC 220 illustrated in Fig. 2, in accordance with some embodiments of the present disclosure.
In both GPCC encoder 300 and GPCC decoder 400, point cloud positions are coded first. Attribute coding depends on the decoded geometry. In Fig. 3 and Fig. 4, the region adaptive hierarchical transform (RAHT) unit 318, surface approximation analysis unit 312, RAHT unit 414 and surface approximation synthesis unit 410 are options typically used for Category 1 data. The level-of-detail (LOD) generation unit 320, lifting unit 322, LOD generation unit 416 and inverse lifting unit 418 are options typically used for Category 3 data. All the other units are common between Categories 1 and 3.
For Category 3 data, the compressed geometry is typically represented as an octree from the root all the way down to a leaf level of individual voxels. For Category 1 data, the compressed geometry is typically represented by a pruned octree (i.e., an octree from the root down to a leaf level of blocks larger than voxels) plus a model that approximates the surface within each leaf of the pruned octree. In this way, both Category 1 and 3 data share the octree coding mechanism, while Category 1 data may in addition approximate the voxels within each leaf with a surface model. The surface model used is a triangulation comprising 1-10 triangles per block, resulting in a triangle soup. The Category 1 geometry codec is therefore known as the Trisoup geometry codec, while the Category 3 geometry codec is known as the Octree geometry codec.
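By way of example and not limitation, the octree coding mechanism mentioned above can be sketched as a top-down traversal that emits one occupancy byte per occupied node; the traversal order and data layout below are assumptions of this illustration, not of any particular codec.

```python
import numpy as np

def octree_occupancy(coords, depth):
    # coords: integer voxel coordinates (N, 3) in [0, 2**depth). Each occupied node
    # emits one byte whose bits mark its occupied child octants; these bytes would
    # then be entropy coded (e.g., arithmetically) by the geometry codec.
    codes = []
    nodes = {(0, 0, 0): np.asarray(coords)}
    for level in range(depth):
        shift = depth - 1 - level
        next_nodes = {}
        for origin, pts in nodes.items():
            child = (pts >> shift) & 1                      # per-point octant bits
            idx = child[:, 0] * 4 + child[:, 1] * 2 + child[:, 2]
            occupancy = 0
            for i in np.unique(idx):
                occupancy |= 1 << int(i)
                key = tuple(np.asarray(origin) * 2 + child[idx == i][0])
                next_nodes[key] = pts[idx == i]
            codes.append(occupancy)
        nodes = next_nodes
    return codes
```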
In the example of Fig. 3, GPCC encoder 300 may include a coordinate transform unit 302, a color transform unit 304, a voxelization unit 306, an attribute transfer unit 308, an octree analysis unit 310, a surface approximation analysis unit 312, an arithmetic encoding unit 314, a geometry reconstruction unit 316, an RAHT unit 318, a LOD generation unit 320, a lifting unit 322, a coefficient quantization unit 324, and an arithmetic encoding unit 326.
As shown in the example of Fig. 3, GPCC encoder 300 may receive a set of positions and a set of attributes. The positions may include coordinates of points in a point cloud. The attributes may include information about points in the point cloud, such as colors associated with points in the point cloud.
Coordinate transform unit 302 may apply a transform to the coordinates of the points to transform the coordinates from an initial domain to a transform domain. This disclosure may refer to the transformed coordinates as transform coordinates. Color transform unit 304 may apply a transform to convert color information of the attributes to a different domain. For example, color transform unit 304 may convert color information from an RGB color space to a YCbCr color space.
Furthermore, in the example of Fig. 3, voxelization unit 306 may voxelize the transform coordinates. Voxelization of the transform coordinates may include quantizing and removing some points of the point cloud. In other words, multiple points of the point cloud may be subsumed within a single “voxel, ” which may thereafter be treated in some respects as one point. Furthermore, octree analysis unit 310 may generate an octree based on the voxelized transform coordinates. Additionally, in the example of Fig. 3, surface approximation analysis unit 312 may analyze the points to potentially determine a surface representation of sets of the points. Arithmetic encoding unit 314 may perform arithmetic encoding on syntax elements representing the information of the octree and/or surfaces determined by surface approximation analysis unit 312. GPCC encoder 300 may output these syntax elements in a geometry bitstream.
Geometry reconstruction unit 316 may reconstruct transform coordinates of points in the point cloud based on the octree, data indicating the surfaces determined by surface approximation analysis unit 312, and/or other information. The number of transform coordinates reconstructed by geometry reconstruction unit 316 may be different from the original number of points of the point cloud because of voxelization and surface approximation. This disclosure may refer to the resulting points as reconstructed points. Attribute transfer unit 308 may transfer attributes of the original points of the point cloud to reconstructed points of the point cloud data.
Furthermore, RAHT unit 318 may apply RAHT coding to the attributes of the reconstructed points. Alternatively, or in addition, LOD generation unit 320 and lifting unit
322 may apply LOD processing and lifting, respectively, to the attributes of the reconstructed points. RAHT unit 318 and lifting unit 322 may generate coefficients based on the attributes. Coefficient quantization unit 324 may quantize the coefficients generated by RAHT unit 318 or lifting unit 322. Arithmetic encoding unit 326 may apply arithmetic coding to syntax elements representing the quantized coefficients. GPCC encoder 300 may output these syntax elements in an attribute bitstream.
In the example of Fig. 4, GPCC decoder 400 may include a geometry arithmetic decoding unit 402, an attribute arithmetic decoding unit 404, an octree synthesis unit 406, an inverse quantization unit 408, a surface approximation synthesis unit 410, a geometry reconstruction unit 412, a RAHT unit 414, a LOD generation unit 416, an inverse lifting unit 418, a coordinate inverse transform unit 420, and a color inverse transform unit 422.
GPCC decoder 400 may obtain a geometry bitstream and an attribute bitstream. Geometry arithmetic decoding unit 402 of decoder 400 may apply arithmetic decoding (e.g., CABAC or other type of arithmetic decoding) to syntax elements in the geometry bitstream. Similarly, attribute arithmetic decoding unit 404 may apply arithmetic decoding to syntax elements in attribute bitstream.
Octree synthesis unit 406 may synthesize an octree based on syntax elements parsed from geometry bitstream. In instances where surface approximation is used in geometry bitstream, surface approximation synthesis unit 410 may determine a surface model based on syntax elements parsed from geometry bitstream and based on the octree.
Furthermore, geometry reconstruction unit 412 may perform a reconstruction to determine coordinates of points in a point cloud. Coordinate inverse transform unit 420 may apply an inverse transform to the reconstructed coordinates to convert the reconstructed coordinates (positions) of the points in the point cloud from a transform domain back into an initial domain.
Additionally, in the example of Fig. 4, inverse quantization unit 408 may inverse quantize attribute values. The attribute values may be based on syntax elements obtained from attribute bitstream (e.g., including syntax elements decoded by attribute arithmetic decoding unit 404) .
Depending on how the attribute values are encoded, RAHT unit 414 may perform RAHT coding to determine, based on the inverse quantized attribute values, color values for
points of the point cloud. Alternatively, LOD generation unit 416 and inverse lifting unit 418 may determine color values for points of the point cloud using a level of detail-based technique.
Furthermore, in the example of Fig. 4, color inverse transform unit 422 may apply an inverse color transform to the color values. The inverse color transform may be an inverse of a color transform applied by color transform unit 304 of encoder 300. For example, color transform unit 304 may transform color information from an RGB color space to a YCbCr color space. Accordingly, color inverse transform unit 422 may transform color information from the YCbCr color space to the RGB color space.
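As a hedged illustration of the color transform and its inverse described above, the following sketch uses a BT.601 full-range RGB-to-YCbCr matrix; the actual matrix and bit depth used by color transform unit 304 may differ.

```python
import numpy as np

# Illustrative BT.601 full-range matrix; an assumption of this example only.
_RGB2YCBCR = np.array([[ 0.299,     0.587,     0.114   ],
                       [-0.168736, -0.331264,  0.5     ],
                       [ 0.5,      -0.418688, -0.081312]])

def rgb_to_ycbcr(rgb):
    ycbcr = np.asarray(rgb, dtype=np.float64) @ _RGB2YCBCR.T
    ycbcr[..., 1:] += 128.0           # centre chroma for 8-bit attributes
    return ycbcr

def ycbcr_to_rgb(ycbcr):
    tmp = np.asarray(ycbcr, dtype=np.float64).copy()
    tmp[..., 1:] -= 128.0
    return tmp @ np.linalg.inv(_RGB2YCBCR).T
```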
The various units of Fig. 3 and Fig. 4 are illustrated to assist with understanding the operations performed by encoder 300 and decoder 400. The units may be implemented as fixed-function circuits, programmable circuits, or a combination thereof. Fixed-function circuits refer to circuits that provide particular functionality and are preset on the operations that can be performed. Programmable circuits refer to circuits that can be programmed to perform various tasks and provide flexible functionality in the operations that can be performed. For instance, programmable circuits may execute software or firmware that cause the programmable circuits to operate in the manner defined by instructions of the software or firmware. Fixed-function circuits may execute software instructions (e.g., to receive parameters or output parameters) , but the types of operations that the fixed-function circuits perform are generally immutable. In some examples, one or more of the units may be distinct circuit blocks (fixed-function or programmable) , and in some examples, one or more of the units may be integrated circuits.
Some exemplary embodiments of the present disclosure will be described in detail hereinafter. It should be understood that section headings are used in the present document to facilitate ease of understanding and do not limit the embodiments disclosed in a section to only that section. Furthermore, while certain embodiments are described with reference to PCC, AI-PCC, GPCC or other specific point cloud codecs, the disclosed techniques are applicable to other point cloud coding technologies also. Furthermore, while some embodiments describe point cloud coding steps in detail, it will be understood that corresponding decoding steps that undo the coding will be implemented by a decoder.
1. Brief summary
This disclosure is related to point cloud coding technologies. Specifically, it is about point cloud semantic enhancement coding for machine vision. The ideas may be applied individually or in various combinations, to any point cloud coding standard or non-standard point cloud codec, e.g., the Artificial Intelligence based Point Cloud Compression (AI-PCC) under development.
2. Abbreviations
G-PCC Geometry based Point Cloud Compression
AI-PCC Artificial Intelligence based Point Cloud Compression
MPEG Moving Picture Experts Group
UAV Unmanned Air Vehicle
PCGC Point Cloud Geometry Compression
KNN K Nearest Neighbors.
3. Introduction
The current point cloud geometry coding adopts PSNR as the main distortion metric for rate distortion optimization, aiming to ensure the objective quality of the reconstructed point cloud at the decoder under a certain bit rate, to provide viewers with high-quality point cloud content. However, with the development of 5G and AI technology, more and more point cloud data are directly used by machine vision and intelligent algorithms to complete various 3D computer vision tasks. For instance, in scenarios such as autopilot, Unmanned Air Vehicle (UAV) navigation and smart city, the task of the receiver is no longer just to reconstruct the complete original point cloud, but to perform semantic segmentation, point cloud classification, target detection and other intelligent analysis tasks on the point cloud data. Point clouds used in different scenarios (such as basic solid point cloud, lidar point cloud, digital human point cloud, etc. ) have different structural characteristics. The basic solid point cloud may have a simple structure and a uniform distribution. The lidar point cloud may have a relatively complex structure and an extremely sparse distribution. There has been much research on image and video coding algorithms for machine vision, but there is little research on intelligent point cloud coding algorithms for machine vision.
3.1 Deep Learning based Point Cloud Compression Network
In recent years, 3D deep learning has made great progress in visual tasks. Researchers have introduced deep learning-based methods to point cloud compression. These methods can be roughly divided into two categories, voxel-based point cloud compression and point-based point cloud compression. For voxel-based methods, point clouds are voxelized into multiple blocks, which are processed by 3D convolutional neural networks such as sparse convolution networks. Point-based methods, such as FoldingNet, directly process the original point clouds with graph-based convolution networks and transformer-based networks.
3.2 Point Cloud Semantic Enhancement
For point cloud intelligent analysis in human-computer hybrid visual scenes, compression coding often introduces distortion into the point cloud, which in turn distorts the results of point cloud intelligent analysis. The existing coding methods mainly consider the geometry distortion, but the geometry distortion does not represent the semantic distortion of the point cloud. In order to minimize the semantic distortion in the intelligent analysis of the point cloud at the decoder, while ensuring the objective reconstruction quality of the point cloud, it is a valuable problem to optimize the rate distortion and enhance the point cloud semantics in the point cloud coding process.
4. Problems
The existing designs for the Point Cloud Geometry Compression (PCGC) method have the following problems:
1. In the existing point cloud geometry coding process, the loss of objective geometric quality is mainly considered to provide a higher quality point cloud for viewers, but the semantic loss of point cloud caused by coding in machine vision tasks is not considered. These semantic losses often reduce the accuracy of point cloud semantic segmentation and classification.
2. Point cloud geometry coding reconstruction and point cloud intelligent analysis are two completely different tasks. The extracted point cloud geometry features and point cloud semantic features are different in the feature space. However, few methods consider the differences between these two tasks in the feature space during the coding process.
3. In the training process of existing AI-PCC methods, the sum of squares of errors is mainly used as the distortion measure of rate distortion optimization. This evaluation method mainly considers the error of objective reconstruction quality in the coding process and does not consider the loss of feature semantics in the coding process.
5. Detail solutions
To solve the above problems and some other problems not mentioned, methods as summarized below are disclosed. The embodiments should be considered as examples to explain the general concepts and should not be interpreted in a narrow way. Furthermore, these embodiments can be applied individually or combined in any manner.
1) In the following discussions, the term “encoder” refers to the model that codes the information to be signalled. The term “decoder” refers to the model that decodes the compression bits to get the signalled information.
2) It’s proposed to utilize different compact feature representation extractors for different point clouds.
a. In one example, different point cloud feature extractors (such as point-based extractors and voxel-based extractors, etc. ) may be used for different types of point clouds.
i. In one example, for basic solid point clouds with limited points, such as the basic objects, the point-based extractors may be used to extract the features of point clouds.
1. In one example, the graph-based method may be used.
2. In one example, the graph convolution may be used to model the topological structure of point clouds. The feature representation of such point cloud may be extracted depending on the effective modeling of the topological structure of point clouds.
a. In one example, the graph convolution may use the K Nearest Neighbors (KNN) algorithm to get the local point cloud nearest neighbor, then extract the features of these local adjacent points.
3. In one example, the transformer-based method may be used.
4. In one example, the transformer-based method may be used to model the topological structure of point clouds. The spatial structure of point cloud may be modeled and the feature representation of such point cloud may be extracted depending on the attention mechanism of the transformer.
a. In one example, the transformer based method may realize progressive geometric feature extraction of point cloud through stacking multi-layer attention mechanism.
ii. In one example, for large scale point clouds with complex structure and a large number of points, such as Lidar point cloud, the voxel-based extractors may be used to extract the features of point clouds.
1. In one example, the sparse convolution-based methods may be used.
2. In one example, the sparse convolution may be used to extract the geometry structure features of point cloud.
a. In one example, sparse convolution may voxelize the point cloud and calculate the geometry characteristics of the voxelized point cloud.
i. In one example, the voxelization may be used to make point cloud data more regular for subsequent convolution processing.
b. In one example, different pre-trained point cloud feature extractors may be used to extract the compact feature representation for point clouds with different spatial structures and point sizes.
c. In one example, it may be signaled from the encoder to the decoder which point cloud feature extractor is used.
d. In one example, it may be derived at the decoder, which point cloud feature extractor is used.
3) It is proposed to code and signal indications of the final sampled point cloud to the decoder.
a. In one example, an indicator of the final sampled point cloud may be coded by point cloud codec.
i. In one example, the point cloud codec may be G-PCC, V-PCC, Draco etc.
4) It is proposed to code and signal indications of the features to the decoder.
a. In one example, indications of the features (e.g., indices) may be coded with fixed-length coding, unary coding, truncated unary coding, etc.
b. In one example, indications of the features may be coded in a predictive way.
5) It is proposed to use a point cloud feature mapping module to explore the similarity of the features in different tasks.
a. In one example, for the point cloud geometry reconstruction task, the extracted features may contain more point cloud spatial structure information.
i. In one example, spatial structure information may include high-frequency texture information and low-frequency structure information in point cloud data.
b. In one example, for point cloud intelligent analysis tasks (point cloud segmentation, point cloud classification) , the extracted features may contain more point cloud semantic information.
i. In one example, the point cloud semantic information may include the semantic information of each point.
c. In one example, the transformer-based point cloud feature mapping module may be used to minimize feature similarity between the feature spaces of point cloud geometry coding reconstruction and point cloud machine vision intelligent analysis.
i. In one example, the design of feature mapping module may refer to the transformer structure of multi-head attention mechanism.
ii. In one example, the self-attention mechanism may be used to calculate the similarity of the feature mapping space, and then the features may be weighted and summed.
d. In one example, the point cloud feature space mapping module may be symmetrically designed and reserved at the encoder and decoder.
6) It’s proposed to use the multi-task learning mechanism to retain the point cloud semantic information as much as possible while maintaining the geometry structure of the point cloud.
a. In one example, it may be signaled from the encoder to the decoder whether a multi-task model is used.
b. In one example, it may be derived at the decoder, whether a multi-task model is used.
c. In one example, during the training stage, the loss constraint of multi-objective may be used.
i. In one example, the geometry constraints for point cloud reconstruction may be used to ensure that a basic quality point cloud can be reconstructed.
1. In one example, the geometry constraints may be calculated by using chamfer distance for supervised learning.
2. In one example, the chamfer distance may be computed as the following formula:
LCD (S1, S2) = (1/|S1|) ∑x∈S1 miny∈S2 ||x − y||2^2 + (1/|S2|) ∑y∈S2 minx∈S1 ||x − y||2^2
where S1 and S2 are two point clouds, and x and y are the coordinates of the points in S1 and S2, respectively.
ii. In one example, the machine vision oriented semantic constraints may be used.
1. In one example, the semantic constraints may be used to constrain the feature distribution distance between coded features and original features.
2. In one example, the semantic constraints may be calculated by using Kullback-Leibler (KL) divergence.
3. In one example, the KL divergence distance may be computed as the following formula:
Lsemantic = DKL (Fori || Frec) = ∑i [pori (vi) log pori (vi) − pori (vi) log prec (vi) ]
where Fori and Frec are original feature and coded feature, pori and prec are probability distribution of the original feature and the coded feature, respectively.
iii. In one example, the final loss function may be computed as the following formula:
Lmulti = Lcd + Lsemantic
where Lcd and Lsemantic are the geometry constraints and semantic constraints, respectively.
7) It’s proposed to get the reconstructed point cloud based on the decoded point cloud feature representation.
a. In one example, for different types of point clouds, different point cloud geometry reconstruction decoders may be used.
i. In one example, for small object point clouds with limited points, such as the basic objects, the folding-based method may be used to reconstruct the point cloud from the decoded point cloud feature representation.
1. In one example, the FoldingNet may be used as the basic reconstruction network in the folding-based method.
2. In one example, the folding method may be used to reconstruct the 3D structure of the point cloud from feature space.
ii. In one example, for a large scale point cloud with a large number of points, such as the Lidar point cloud, the sparse-convolution based methods and point-based methods may be used to reconstruct the point cloud.
1. In one example, the sparse convolution upscale may be used as the basic reconstruction network in the sparse-convolution based methods.
2. In one example, the large-scale point cloud may be blocked according to the number of points.
3. In one example, the point-based method may be used to code and reconstruct the blocked point cloud.
8) Whether to and/or how to apply a method disclosed above may be signaled from encoder to decoder in a bitstream/frame/tile/slice/octree/etc.
9) Whether to and/or how to apply the disclosed methods above may be dependent on coded information, such as dimensions, color format, color component, slice/picture type.
General
10) A syntax element disclosed above may be binarized as a flag, a fixed length code, an EG (x) code, a unary code, a truncated unary code, a truncated binary code, etc. It can be signed or unsigned.
11) A syntax element disclosed above may be coded with at least one context model. Or it may be bypass coded.
12) A syntax element disclosed above may be signaled in a conditional way.
a. The SE is signaled only if the corresponding function is applicable.
b. The SE is signaled only if the dimensions (width and/or height) of the block satisfy a condition.
13) A syntax element disclosed above may be signaled at block level/sequence level/group of pictures level/group of frames level/picture level/frame level/slice level/tile level/tile group level, such as in coding structures of CTU/CU/TU/PU/CTB/CB/TB/PB, or sequence header/picture header/SPS/VPS/DPS/DCI/PPS/GPS/APS/slice header/tile group header/tile header.
14) Whether to and/or how to apply the disclosed methods above may be signalled at block level/sequence level/group of pictures level/group of frames level/picture level/frame level/slice level/tile level/tile group level, such as in coding structures of CTU/CU/TU/PU/CTB/CB/TB/PB, or sequence header/picture header/SPS/VPS/DPS/DCI/PPS/GPS/APS/slice header/tile group header/tile header.
15) Whether to and/or how to apply the disclosed methods above may be dependent on coded information, such as block size, frame size, colour format, attribute format, single/dual tree partitioning, colour component, slice/tile/picture/frame type.
16) The proposed methods disclosed in this document may be used in other coding tools which require chroma fusion.
6. Embodiments
In this disclosure, it is proposed to improve the semantic enhancement-based point cloud compression method. Firstly, the pre-trained point cloud geometry feature extraction model is reused to extract the compact feature representation of the point cloud. In this way, point cloud data with a complex spatial structure is characterized as high-dimensional feature information for further processing. Secondly, the point cloud feature mapping module is introduced. Because point cloud geometry coding reconstruction and point cloud machine vision intelligent analysis are two completely different tasks, their representation feature spaces should also be different. The point cloud feature mapping module can explore the similarity of these two different tasks in the feature space as much as possible. Thirdly, in the process of coding model training, a multi-task learning mechanism is introduced to preserve the point cloud geometry structure and semantic information as much as possible.
An example of the coding flow 500 for the semantic enhancement-based point cloud compression method is depicted in Fig. 5. Fig. 5 depicts the flow of the compression method in accordance with embodiments of the present disclosure. Fig. 5 may be described with respect to Fig. 2. As illustrated, the geometry encoder 220 may include a graph-based extractor 510 and a deep PCC (DPCC) based extractor 515. The geometry decoder 230 may include a folding based decoder 570 and a DPCC based decoder 575.
The GPCC 220 may include a plurality of modules, including an octree encoder 530, an octree decoder 535 and an entropy model 560. The GPCC 220 may include additional modules such as a quantization (Q) module, an AE module, an AD module, and the like.
The features such as X from the geometry encoder 210 may be inputted into the feature encoder module 520. The feature encoder module 520 may process the input features and obtain output features such as Y. The output features may be inputted into the GPCC 220, as shown in Fig. 5. Output features from the GPCC 220 may be inputted to the feature encoder module 525. The feature encoder module 525 may process the features and output processed features to the geometry decoder 230. The geometry decoder 230 may transmit point cloud data to the machine vision task 240, which includes a point cloud classification task 580 and a point cloud segmentation task 585, and/or the like.
An example of the feature encoder module 520 is depicted in Fig. 6. Fig. 6 depicts an example of the flow of the feature encoder module 520. The feature encoder module 520 may include an input embedding module 610. The input features may be inputted to the input embedding module 610. The feature encoder module 520 may include additional modules such as a multi-head attention module, an add and normalization module, a feed forward module, a further add and normalization module, and/or the like. It is to be understood that the feature encoder module 525 in Fig. 5 may be similar to or the same as the feature encoder module 520, and its details will not be repeated here.
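A minimal sketch of such a feature encoder block is given below for illustration only, assuming PyTorch; the class name, feature dimensions and layer sizes are placeholders and are not taken from Fig. 6.

import torch
import torch.nn as nn

class FeatureEncoderBlock(nn.Module):
    # Illustrative transformer-style feature encoder: input embedding,
    # multi-head self-attention, add & normalization, and feed forward.
    def __init__(self, in_dim=64, embed_dim=128, num_heads=4, ff_dim=256):
        super().__init__()
        self.input_embedding = nn.Linear(in_dim, embed_dim)
        self.attention = nn.MultiheadAttention(embed_dim, num_heads, batch_first=True)
        self.norm1 = nn.LayerNorm(embed_dim)      # add & normalization
        self.feed_forward = nn.Sequential(
            nn.Linear(embed_dim, ff_dim), nn.ReLU(), nn.Linear(ff_dim, embed_dim))
        self.norm2 = nn.LayerNorm(embed_dim)      # add & normalization

    def forward(self, x):
        # x: (batch, num_points, in_dim) point-wise features from the geometry encoder
        h = self.input_embedding(x)
        attn_out, _ = self.attention(h, h, h)
        h = self.norm1(h + attn_out)              # residual connection, then normalization
        return self.norm2(h + self.feed_forward(h))

# Example usage: map 1024 point-wise features into a 128-dimensional feature space.
mapped = FeatureEncoderBlock()(torch.randn(1, 1024, 64))   # shape (1, 1024, 128)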
More details will be further discussed below. Embodiments of the present disclosure are related to point cloud coding and point cloud compression. As used herein, the term “point cloud sequence” may refer to a sequence of one or more point clouds. The term “frame” may refer to a point cloud in a point cloud sequence. The term “point cloud” may refer to a frame in the point cloud sequence. The term “coding unit” may refer to a block, a box, a cube, a slice, a tile, a frame, or any other units involving a group of points in PCC.
Fig. 7 illustrates a flowchart of a method 700 for point cloud coding in accordance with embodiments of the present disclosure. The method 700 may be implemented for a conversion between a current point cloud of a point cloud sequence and a bitstream of the point cloud sequence.
As shown in Fig. 7, the method 700 starts at block 710, where a type of the current point cloud is determined. At block 720, a coding module for the current point cloud is determined based on the type of the current point cloud. For example, the coding module may include a point cloud feature extractor. For another example, the coding module may include a point cloud geometry reconstruction module. At block 730, the conversion is performed based on the coding module.
The method 700 enables determining a coding module for the conversion based on the type of the point cloud. For example, a proper feature extractor can be selected based on the type of the point cloud. For another example, a proper reconstruction module such as a reconstruction decoder can be selected. Accordingly, coding efficiency can be improved.
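The following sketch illustrates one possible realization of blocks 710 to 730 in Python; the threshold value, the type labels and the extractor classes are hypothetical placeholders rather than part of the disclosed method.

POINT_THRESHOLD = 100_000   # assumed threshold separating small objects from large scenes

class PointBasedExtractor:
    def extract(self, points):
        return {"extractor": "point-based", "num_points": len(points)}

class VoxelBasedExtractor:
    def extract(self, points):
        return {"extractor": "voxel-based", "num_points": len(points)}

def determine_type(points):
    # Block 710: classify the current point cloud; a real system could also
    # inspect sparsity or structural complexity, not only the point count.
    return "basic_solid" if len(points) < POINT_THRESHOLD else "large_scale"

def determine_coding_module(pc_type):
    # Block 720: select the coding module based on the point cloud type.
    return PointBasedExtractor() if pc_type == "basic_solid" else VoxelBasedExtractor()

def encode(points):
    # Block 730: perform the conversion using the selected coding module.
    return determine_coding_module(determine_type(points)).extract(points)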
In some embodiments, the conversion includes encoding the current point cloud into the bitstream. For example, the conversion may be performed by an encoder coding information to be included in the bitstream.
In some embodiments, determining the coding module comprises: determining a point cloud feature extractor based on the type of the current point cloud. By way of example, the point cloud feature extractor comprises at least one of: a point-based extractor, or a voxel-based extractor. In some embodiments, the point cloud feature extractor may be a compact feature representation extractor. That is, different compact feature representation extractors of different point cloud may be utilized.
In some embodiments, the type of the current point cloud comprises a basic solid point cloud with a number of points less than a threshold number or a basic object, and the coding module comprises a point-based extractor.
In some embodiments, the point-based extractor is based on at least one of: a graph-based approach, or a transformer-based approach. For example, the graph-based approach may include a graph-based extractor such as the graph-based extractor 510 in Fig. 5.
In some embodiments, the point-based extractor is based on a graph-based approach, and a graph convolution is used to model a topological structure of the current point cloud, and a feature representation of the current point cloud is extracted based on modeling of the topological structure of the current point cloud.
In some embodiments, a K Nearest Neighbors (KNN) algorithm is used by the graph convolution to obtain a local point cloud nearest neighbor, and extract features of local adjacent points.
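The KNN-based neighborhood aggregation described above can be sketched as follows; this is a simplified brute-force illustration in NumPy, and the EdgeConv-style max pooling is only one possible form of the aggregation.

import numpy as np

def knn_indices(points, k):
    # Brute-force K nearest neighbors; points has shape (N, 3).
    d2 = np.sum((points[:, None, :] - points[None, :, :]) ** 2, axis=-1)
    return np.argsort(d2, axis=1)[:, 1:k + 1]          # skip the point itself

def local_graph_features(points, features, k=16):
    # Gather each point's KNN neighborhood and aggregate the edge features
    # (neighbor minus center) by max pooling, one simple graph-convolution form.
    idx = knn_indices(points, k)                        # (N, k)
    neighbor_feats = features[idx]                      # (N, k, C)
    edge = neighbor_feats - features[:, None, :]        # (N, k, C)
    return np.concatenate([features, edge.max(axis=1)], axis=-1)   # (N, 2C)

pts = np.random.rand(2048, 3).astype(np.float32)
feats = np.random.rand(2048, 32).astype(np.float32)
out = local_graph_features(pts, feats, k=16)            # (2048, 64)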
In some embodiments, the point-based extractor is based on a transformer-based approach for modeling a topological structure of the current point cloud, a spatial structure of the current point cloud is modeled, and a feature representation of the current point cloud is extracted based on an attention mechanism of a transformer.
In some embodiments, the transformer-based approach performs a progressive geometric feature extraction of the current point cloud by stacking a multi-layer attention mechanism.
In some embodiments, the type of the current point cloud comprises a Lidar point cloud or a large scale point cloud with a complex structure and a number of points larger than a threshold number, and the coding module comprises a voxel-based extractor.
In some embodiments, a sparse convolution-based approach is used.
In some embodiments, the sparse convolution-based approach is used to extract geometry structure features of the current point cloud.
In some embodiments, the sparse convolution voxelizes the current point cloud and determines geometry characteristics of the voxelized current point cloud.
In some embodiments, a voxelization is used to make point cloud data regular for a subsequent convolution processing. For example, the voxelization may be used to make point cloud data more regular for subsequent convolution processing.
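A simple voxelization of raw coordinates can be sketched as below; the voxel size is an arbitrary assumption, and a practical sparse-convolution pipeline would additionally pool per-voxel features rather than only counting points.

import numpy as np

def voxelize(points, voxel_size=0.05):
    # Quantize coordinates onto a regular grid so that subsequent (sparse) 3D
    # convolutions can operate on integer voxel indices.
    coords = np.floor(points / voxel_size).astype(np.int32)
    unique_coords, inverse = np.unique(coords, axis=0, return_inverse=True)
    counts = np.bincount(inverse)                       # number of points per occupied voxel
    return unique_coords, counts

pts = np.random.rand(100_000, 3).astype(np.float32) * 10.0
voxels, occupancy = voxelize(pts, voxel_size=0.1)       # occupied voxel indices and counts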
In some embodiments, the point cloud feature extractor comprises a pretrained point cloud feature extractor extracting compact feature representation for point clouds with different spatial structures and point sizes. That is, different pre-trained point cloud feature extractors
may be used to extract the compact feature representation for point clouds with different spatial structures and point sizes.
In some embodiments, an indication of the point cloud feature extractor to be used is included in the bitstream. For example, it may be signaled from the encoder to the decoder which point cloud feature extractor is used.
In some embodiments, the point cloud feature extractor to be used is determined by a decoder for decoding the current point cloud from the bitstream.
In some embodiments, the conversion includes decoding the current point cloud from the bitstream. For example, the conversion may be performed by a decoder. The decoder codes compression bits to determine information in the bitstream.
In some embodiments, determining the coding module comprises: determining the point cloud geometry reconstruction module (also referred to as a point cloud geometry reconstruction decoder) based on the type of the current point cloud. The reconstructed point cloud may be determined by the point cloud geometry reconstruction module based on the decoded point cloud feature representation.
In some embodiments, the current point cloud comprises a small object point cloud with a number of points less than a threshold number or a basic object, and the point cloud geometry reconstruction decoder uses a folding-based approach to reconstruct the current point cloud from a decoded point cloud feature representation. For example, the folding-based approach may use a folding-based decoder such as the folding-based decoder 570 in Fig. 5.
In some embodiments, a FoldingNet is used as a basic reconstruction network for the folding-based approach.
In some embodiments, the folding-based approach is used to reconstruct a 3D structure of the current point cloud from a feature space.
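A folding-based reconstruction in the spirit of FoldingNet can be sketched as below, assuming PyTorch; the layer widths and grid size are illustrative and do not reproduce the original FoldingNet configuration exactly.

import torch
import torch.nn as nn

class FoldingDecoder(nn.Module):
    # Deform a fixed 2D grid into a 3D point set conditioned on a per-cloud codeword.
    def __init__(self, code_dim=512, grid_size=45):
        super().__init__()
        u = torch.linspace(-1.0, 1.0, grid_size)
        grid = torch.stack(torch.meshgrid(u, u, indexing="ij"), dim=-1).reshape(-1, 2)
        self.register_buffer("grid", grid)
        self.fold1 = nn.Sequential(nn.Linear(code_dim + 2, 512), nn.ReLU(),
                                   nn.Linear(512, 512), nn.ReLU(), nn.Linear(512, 3))
        self.fold2 = nn.Sequential(nn.Linear(code_dim + 3, 512), nn.ReLU(),
                                   nn.Linear(512, 512), nn.ReLU(), nn.Linear(512, 3))

    def forward(self, codeword):
        # codeword: (batch, code_dim), the decoded feature representation.
        b, m = codeword.shape[0], self.grid.shape[0]
        grid = self.grid.unsqueeze(0).expand(b, m, 2)
        code = codeword.unsqueeze(1).expand(b, m, codeword.shape[1])
        pts = self.fold1(torch.cat([code, grid], dim=-1))     # first folding: grid -> 3D
        return self.fold2(torch.cat([code, pts], dim=-1))     # second folding: refinement

recon = FoldingDecoder()(torch.randn(1, 512))                 # (1, 2025, 3) reconstructed points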
In some embodiments, the current point cloud comprises a Lidar point cloud or a large scale point cloud with a number of points larger than a threshold number, and at least one of a sparse-convolution based approach or a point-based approach is used to reconstruct the current point cloud.
In some embodiments, a sparse-convolution upscale is used as a basic reconstruction network in the sparse-convolution based approach.
In some embodiments, the large scale point cloud is blocked based on the number of points in the large scale point cloud.
In some embodiments, the point-based approach is used to code and reconstruct the blocked point cloud.
In some embodiments, the method 700 further comprises: applying a point cloud feature mapping module to determine a similarity of features in a plurality of tasks. By way of example, the point cloud feature mapping module may be implemented as the feature encoder module 520 and/or the feature encoder module 525 in Fig. 5, or an input embedding module 610 as shown in Fig. 6.
In some embodiments, the plurality of tasks comprises a point cloud geometry reconstruction task, and for the point cloud geometry reconstruction task, extracted features of the current point cloud comprise more point cloud spatial structure information than those for a further task. That is, for the point cloud geometry reconstruction task, the extracted features may contain more point cloud spatial structure information.
In some embodiments, the spatial structure information comprises at least one of: high-frequency texture information or low-frequency structure information in point cloud data.
In some embodiments, the plurality of tasks comprises a point cloud intelligent analysis task, and extracted features of the current point cloud comprise more point cloud semantic information than those for a further task. For example, the point cloud intelligent analysis task may include at least one of a point cloud segmentation such as the point cloud segmentation 585 in Fig. 5 or a point cloud classification such as the point cloud classification 580 in Fig. 5. That is, for a point cloud intelligent analysis task, the extracted features may contain more point cloud semantic information.
In some embodiments, the point cloud semantic information comprises semantic information of each point in the current point cloud.
In some embodiments, the point cloud feature mapping module comprises a transformer-based point cloud feature mapping module for minimizing a feature similarity
between feature spaces of point cloud geometry coding reconstruction and point cloud machine vision intelligent analysis.
In some embodiments, a design of the point cloud feature mapping module is based on a transformer structure of a multi-head attention mechanism.
In some embodiments, a self-attention mechanism is used to determine the feature similarity of the feature mapping space, and features are weighted and summed.
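This similarity computation can be sketched as a plain scaled dot-product self-attention; the shared query, key and value projections are omitted here for brevity, which is a simplification of a full attention layer.

import torch
import torch.nn.functional as F

def self_attention(features):
    # features: (N, C). The softmax of pairwise feature similarities weights
    # the value vectors, which are then summed per point.
    q = k = v = features                                  # projections omitted for brevity
    scores = q @ k.transpose(-2, -1) / features.shape[-1] ** 0.5
    weights = F.softmax(scores, dim=-1)                   # normalized similarity
    return weights @ v                                    # weighted sum of features

out = self_attention(torch.randn(1024, 64))               # (1024, 64)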
In some embodiments, the point cloud feature mapping module is symmetrically designed and reserved at at least one of: an encoder for the conversion, or a decoder for the conversion.
In some embodiments, the method 700 further comprises: retaining point cloud semantic information while maintaining a geometry structure of the current point cloud based on a multi-task learning mechanism. For example, the multi-task learning mechanism may be used to retain the point cloud semantic information as much as possible while maintaining the geometry structure of the point cloud. In this way, the semantic losses can be reduced. The accuracy of point cloud semantic segmentation and classification can be improved.
In some embodiments, an indication of a usage of a multi-task model is included in the bitstream. For example, it may be signaled from the encoder to the decoder whether a multi-task model is used.
Alternatively, or in addition, in some embodiments, whether to use a multi-task model is determined at a decoder for the conversion.
In some embodiments, during a training stage for the multi-task learning mechanism, a multi-objective loss constraint is used.
In some embodiments, a geometry constraint for a point cloud reconstruction is used for a basic quality point cloud.
In some embodiments, the geometry constraint is determined based on a chamfer distance for a supervised learning.
In some embodiments, the chamfer distance is determined by:
LCD (S1, S2) = (1/|S1|) ∑x∈S1 miny∈S2 ||x − y||2^2 + (1/|S2|) ∑y∈S2 minx∈S1 ||x − y||2^2,
where S1 and S2 are two point clouds, x and y are coordinates of points in S1 and S2, respectively, and LCD (S1, S2) denotes the chamfer distance between the two point clouds.
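The geometry constraint can be computed, for example, as follows (PyTorch); the brute-force pairwise distance is for illustration only and assumes the averaged, squared-distance form of the chamfer distance.

import torch

def chamfer_distance(s1, s2):
    # Symmetric chamfer distance between point sets s1 (N, 3) and s2 (M, 3).
    d2 = torch.cdist(s1, s2) ** 2                 # (N, M) squared pairwise distances
    return d2.min(dim=1).values.mean() + d2.min(dim=0).values.mean()

loss_cd = chamfer_distance(torch.rand(2048, 3), torch.rand(2048, 3))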
In some embodiments, a machine vision oriented semantic constraint is used for a point cloud.
In some embodiments, the semantic constraint is used to constrain a feature distribution distance between coded features and original features of the point cloud.
In some embodiments, the semantic constraint is determined based on a Kullback-Leibler (KL) divergence.
In some embodiments, a KL divergence distance Lsemantic is determined by:
Lsemantic = DKL (Fori || Frec) = ∑i [pori (vi) log pori (vi) − pori (vi) log prec (vi) ] ,
where Fori and Frec denote original feature and coded feature, pori and prec denote probability distribution of the original feature and the coded feature, respectively.
In some embodiments, a final loss constraint is determined by: Lmulti = Lcd + Lsemantic, where Lcd denotes a geometry constraint, Lsemantic denotes a semantic constraint, and Lmulti denotes the final loss constraint. In this way, both the error of objective reconstruction quality in the coding process and the loss of feature semantics in the coding process can be considered. The training process of the AI-PCC can be improved by such evaluation.
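The semantic constraint and the combined loss can be sketched as follows, reusing chamfer_distance from the previous sketch; the softmax normalization used to turn features into probability distributions is an assumption, as the disclosure does not fix a particular normalization.

import torch
import torch.nn.functional as F

def semantic_loss(f_ori, f_rec):
    # L_semantic = D_KL(F_ori || F_rec) over per-point feature distributions.
    p_ori = F.softmax(f_ori, dim=-1)              # assumed normalization of the features
    log_p_rec = F.log_softmax(f_rec, dim=-1)
    kl = p_ori * (p_ori.clamp_min(1e-9).log() - log_p_rec)
    return kl.sum(dim=-1).mean()

def multi_task_loss(rec_points, gt_points, f_ori, f_rec):
    # L_multi = L_cd + L_semantic
    return chamfer_distance(rec_points, gt_points) + semantic_loss(f_ori, f_rec)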
In some embodiments, an indication of a final sampled point cloud is included in the bitstream.
In some embodiments, the indication of the final sampled point cloud is coded by a point cloud codec. By way of example, the point cloud codec comprises at least one of: a geometry-based point cloud compression (G-PCC) , a video-based point cloud compression (V-PCC) , or Draco.
In some embodiments, an indication of at least one feature of the current point cloud is included in the bitstream.
In some embodiments, the indication of the at least one feature is coded with at least one of: a fixed-length coding, a unary coding, or a truncated unary coding.
In some embodiments, the indication of the at least one feature is coded in a predictive way.
In some embodiments, information regarding whether to apply the method and/or how to apply the method is included in at least one of: the bitstream, a frame, a tile, a slice, or an octree.
In some embodiments, whether to and/or how to apply the method is based on coded information, the coded information comprising at least one of: a dimension, a color format, a color component, a slice type or a picture type.
In some embodiments, a syntax element or an indication is binarized as at least one of: a flag, a fixed length code, an exponential Golomb (x) (EG (x) ) code, a unary code, a truncated unary code, or a truncated binary code.
In some embodiments, the syntax element or the indication is signed or unsigned.
In some embodiments, a syntax element or an indication is coded with at least one context model.
In some embodiments, a syntax element or an indication is bypass coded.
In some embodiments, a syntax element or an indication is included in the bitstream based on at least one condition, the at least one condition comprising at least one of: a first condition that a function corresponding to the syntax element or the indication is applicable, or a second condition that a dimension of a block of the current point cloud satisfies a condition.
In some embodiments, a syntax element or an indication is included at one of: a block level, a sequence level, a group of pictures level, a group of frames level, a picture level, a frame level, a slice level, a tile level, or a tile group level.
In some embodiments, the syntax element or the indication is included in one of: a coding tree unit (CTU) , a coding unit (CU) , a transform unit (TU) , a prediction unit (PU) , a coding tree block (CTB) , a coding block (CB) , a transform block (TB) , a prediction block (PB) , a sequence header, a picture header, a sequence parameter set (SPS) , a Video Parameter Set (VPS) , a decoded parameter set (DPS) , Decoding Capability Information (DCI) , a Picture Parameter Set (PPS) , an Adaptation Parameter Set (APS) , a slice header, a tile group header, or a tile header.
In some embodiments, whether to apply the method and/or how to apply the method is included at one of: a block level, a sequence level, a group of pictures level, a group of frames level, a picture level, a frame level, a slice level, a tile level, or a tile group level.
In some embodiments, whether to apply the method and/or how to apply the method is included in one of: a coding tree unit (CTU) , a coding unit (CU) , a transform unit (TU) , a prediction unit (PU) , a coding tree block (CTB) , a coding block (CB) , a transform block (TB) , a prediction block (PB) , a sequence header, a picture header, a sequence parameter set (SPS) , a Video Parameter Set (VPS) , a decoded parameter set (DPS) , Decoding Capability Information (DCI) , a Picture Parameter Set (PPS) , an Adaptation Parameter Set (APS) , a slice header, a tile group header, or a tile header.
In some embodiments, whether to apply the method and/or how to apply the method is based on coded information.
In some embodiments, the coded information comprises at least one of: a block size, a color format, an attribute format, a single or dual tree partitioning, a color component, a slice type, a tile type, a picture type, or a frame type.
In some embodiments, the method is used in a coding tool requiring chroma fusion.
According to further embodiments of the present disclosure, a non-transitory computer-readable recording medium is provided. A bitstream of a point cloud sequence is stored in the non-transitory computer-readable recording medium. The bitstream of the point cloud sequence is generated by a method performed by a point cloud sequence processing apparatus. According to the method, a type of a current point cloud of the point cloud sequence is determined. A coding module for the current point cloud is determined based on the type of the current point cloud. The coding module comprises at least one of: a point cloud feature extractor or a point cloud geometry reconstruction module. The bitstream is generated based on the coding module.
According to still further embodiments of the present disclosure, a method for storing a bitstream of a point cloud sequence is proposed. In the method, a type of a current point cloud of the point cloud sequence is determined. A coding module for the current point cloud is determined based on the type of the current point cloud. The coding module comprises at least one of: a point cloud feature extractor or a point cloud geometry reconstruction module. The
bitstream is generated based on the coding module. The bitstream is stored in a non-transitory computer-readable recording medium.
Implementations of the present disclosure can be described in view of the following clauses, the features of which can be combined in any reasonable manner.
Clause 1. A method for point cloud coding, comprising: determining, for a conversion between a current point cloud of a point cloud sequence and a bitstream of the point cloud sequence, a type of the current point cloud; determining a coding module for the current point cloud based on the type of the current point cloud, the coding module comprising at least one of: a point cloud feature extractor or a point cloud geometry reconstruction module; and performing the conversion based on the coding module.
Clause 2. The method of clause 1, wherein the conversion includes encoding the current point cloud into the bitstream.
Clause 3. The method of clause 2, wherein the conversion is performed by an encoder coding information to be included in the bitstream.
Clause 4. The method of clause 2 or 3, wherein determining the coding module comprises: determining the point cloud feature extractor based on the type of the current point cloud.
Clause 5. The method of clause 4, wherein the point cloud feature extractor comprises at least one of: a point-based extractor, or a voxel-based extractor.
Clause 6. The method of clause 4 or 5, wherein the type of the current point cloud comprises a basic solid point cloud with a number of points less than a threshold number or a basic object, and the coding module comprises a point-based extractor.
Clause 7. The method of clause 6, wherein the point-based extractor is based on at least one of: a graph-based approach, or a transformer-based approach.
Clause 8. The method of clause 7, wherein the point-based extractor is based on a graph-based approach, and a graph convolution is used to model a topological structure of the current point cloud, and a feature representation of the current point cloud is extracted based on modeling of the topological structure of the current point cloud.
Clause 9. The method of clause 8, wherein a K Nearest Neighbors (KNN) algorithm is used by the graph convolution to obtain a local point cloud nearest neighbor, and extract features of local adjacent points.
Clause 10. The method of clause 7, wherein the point-based extractor is based on a transformer-based approach for modeling a topological structure of the current point cloud, and a spatial structure of the current point cloud is modeled, and a feature representation of the current point cloud is extracted based on an attention mechanism of a transformer.
Clause 11. The method of clause 10, wherein the transformer-based approach performs a progressive geometric feature extraction of the current point cloud by stacking a multi-layer attention mechanism.
Clause 12. The method of clause 4 or 5, wherein the type of the current point cloud comprises a Lidar point cloud or a large scale point cloud with a complex structure and a number of points larger than a threshold number, and the coding module comprises a voxel-based extractor.
Clause 13. The method of clause 12, wherein a sparse convolution-based approach is used.
Clause 14. The method of clause 13, wherein the sparse convolution-based approach is used to extract geometry structure features of the current point cloud.
Clause 15. The method of clause 13 or 14, wherein the sparse convolution voxelizes the current point cloud and determines geometry characteristics of the voxelized current point cloud.
Clause 16. The method of clause 15, wherein a voxelization is used to make point cloud data regular for a subsequent convolution processing.
Clause 17. The method of any of clauses 4-16, wherein the point cloud feature extractor comprises a pretrained point cloud feature extractor extracting compact feature representation for point clouds with different spatial structures and point sizes.
Clause 18. The method of any of clauses 4-16, wherein an indication of the point cloud feature extractor to be used is included in the bitstream.
Clause 19. The method of any of clauses 4-16, wherein the point cloud feature extractor to be used is determined by a decoder for decoding the current point cloud from the bitstream.
Clause 20. The method of clause 1, wherein the conversion includes decoding the current point cloud from the bitstream.
Clause 21. The method of clause 20, wherein the conversion is performed by a decoder, the decoder coding compression bits to determine information in the bitstream.
Clause 22. The method of clause 20 or 21, wherein determining the coding module comprises: determining the point cloud geometry reconstruction module based on the type of the current point cloud.
Clause 23. The method of clause 22, wherein the current point cloud comprises a small object point cloud with a number of points less than a threshold number or a basic object, and the point cloud geometry reconstruction decoder uses a folding-based approach to reconstruct the current point cloud from a decoded point cloud feature representation.
Clause 24. The method of clause 23, wherein a FoldingNet is used as a basic reconstruction network for the folding-based approach.
Clause 25. The method of clause 23 or 24, wherein the folding-based approach is used to reconstruct a 3D structure of the current point cloud from a feature space.
Clause 26. The method of clause 22, wherein the current point cloud comprises a Lidar point cloud or a large scale point cloud with a number of points larger than a threshold number, and at least one of a sparse-convolution based approach or a point-based approach is used to reconstruct the current point cloud.
Clause 27. The method of clause 26, wherein a sparse-convolution upscale is used as a basic reconstruction network in the sparse-convolution based approach.
Clause 28. The method of clause 26 or 27, wherein the large scale point cloud is blocked based on the number of points in the large scale point cloud.
Clause 29. The method of clause 28, wherein the point-based approach is used to code and reconstruct the blocked point cloud.
Clause 30. The method of any of clauses 1-29, further comprising: applying a point cloud feature mapping module to determine a similarity of features in a plurality of tasks.
Clause 31. The method of clause 30, wherein the plurality of tasks comprises a point cloud geometry reconstruction task, and for the point cloud geometry reconstruction task, extracted features of the current point cloud comprises point cloud spatial structure information more than that for a further task.
Clause 32. The method of clause 31, wherein the spatial structure information comprises at least one of: high-frequency texture information or low-frequency structure information in point cloud data.
Clause 33. The method of clause 30, wherein the plurality of tasks comprises a point cloud intelligent analysis task, and extracted features of the current point cloud comprises point cloud semantic information more than that for a further task, and wherein the point cloud intelligent analysis task comprises at least one of: a point cloud segmentation or a point cloud classification.
Clause 34. The method of clause 33, wherein the point cloud semantic information comprises semantic information of each point in the current point cloud.
Clause 35. The method of clause 30, wherein the point cloud feature mapping module comprises a transformer-based point cloud feature mapping module for minimizing a feature similarity between feature spaces of point cloud geometry coding reconstruction and point cloud machine vision intelligent analysis.
Clause 36. The method of clause 35, wherein a design of the point cloud feature mapping module is based on a transformer structure of a multi-head attention mechanism.
Clause 37. The method of clause 35, wherein a self-attention mechanism is used to determine the feature similarity of feature mapping space, and features are weighted to be summed.
Clause 38. The method of any of clauses 30-37, wherein the point cloud feature mapping module is symmetrically designed and reserved at at least one of: an encoder for the conversion, or a decoder for the conversion.
Clause 39. The method of any of clauses 1-38, further comprising: retaining point cloud semantic information while maintaining a geometry structure of the current point cloud based on a multi-task learning mechanism.
Clause 40. The method of clause 39, wherein an indication of a usage of a multi-task model is included in the bitstream.
Clause 41. The method of clause 39, wherein whether to use a multi-task model is determined at a decoder for the conversion.
Clause 42. The method of any of clauses 39-41, wherein during a training stage for the multi-task learning mechanism, a multi-objective loss constraint is used.
Clause 43. The method of clause 42, wherein a geometry constraint for a point cloud reconstruction is used for a basic quality point cloud.
Clause 44. The method of clause 43, wherein the geometry constraint is determined based on a chamfer distance for a supervised learning.
Clause 45. The method of clause 44, wherein the chamfer distance is determined by: LCD (S1, S2) = (1/|S1|) ∑x∈S1 miny∈S2 ||x − y||2^2 + (1/|S2|) ∑y∈S2 minx∈S1 ||x − y||2^2, where S1 and S2 are two point clouds, x and y are coordinates of points in S1 and S2, respectively, and LCD (S1, S2) denotes the chamfer distance between the two point clouds.
Clause 46. The method of any of clauses 42-45, wherein a machine vision oriented semantic constraint is used for a point cloud.
Clause 47. The method of clause 46, wherein the semantic constraint is used to constrain a feature distribution distance between coded features and original features of the point cloud.
Clause 48. The method of clause 47, wherein the semantic constraint is determined based on a Kullback-Leibler (KL) divergence.
Clause 49. The method of clause 48, wherein a KL divergence distance Lsemantic is determined by: Lsemantic = DKL (Fori || Frec) = ∑i [pori (vi) log pori (vi) − pori (vi) log prec (vi) ] , where Fori and Frec denote original feature and coded feature, pori and prec denote probability distribution of the original feature and the coded feature, respectively.
Clause 50. The method of clause 42, wherein a final loss constraint is determined by: Lmulti = Lcd + Lsemantic, where Lcd denotes a geometry constraint, Lsemantic denotes a semantic constraint, and Lmulti denotes the final loss constraint.
Clause 51. The method of any of clauses 1-50, wherein an indication of a final sampled point cloud is included in the bitstream.
Clause 52. The method of clause 51, wherein the indication of the final sampled point cloud is coded by a point cloud codec.
Clause 53. The method of clause 52, wherein the point cloud codec comprises at least one of: a geometry-based point cloud compression (G-PCC) , a video-based point cloud compression (V-PCC) , or Draco.
Clause 54. The method of any of clauses 1-53, wherein an indication of at least one feature of the current point cloud is included in the bitstream.
Clause 55. The method of clause 54, wherein the indication of the at least one feature is coded with at least one of: a fixed-length coding, a unary coding, or a truncated unary coding.
Clause 56. The method of clause 54 or 55, wherein the indication of the at least one feature is coded in a predictive way.
Clause 57. The method of any of clauses 1-56, wherein information regarding whether to apply the method and/or how to apply the method is included in at least one of: the bitstream, a frame, a tile, a slice, or an octree.
Clause 58. The method of any of clauses 1-56, wherein whether to and/or how to apply the method is based on coded information, the coded information comprising at least one of: a dimension, a color format, a color component, a slice type or a picture type.
Clause 59. The method of any of clauses 1-58, wherein a syntax element or an indication is binarized as at least one of: a flag, a fixed length code, an exponential Golomb (x) (EG (x) ) code, a unary code, a truncated unary code, or a truncated binary code.
Clause 60. The method of clause 59, wherein the syntax element or the indication is signed or unsigned.
Clause 61. The method of any of clauses 1-60, wherein a syntax element or an indication is coded with at least one context model.
Clause 62. The method of any of clauses 1-60, wherein a syntax element or an indication is bypass coded.
Clause 63. The method of any of clauses 1-62, wherein a syntax element or an indication is included in the bitstream based on at least one condition, the at least one condition comprising at least one of: a first condition that a function corresponding to the syntax element or the indication is applicable, or a second condition that a dimension of a block of the current point cloud satisfies a condition.
Clause 64. The method of any of clauses 1-63, wherein a syntax element or an indication is included at one of: a block level, a sequence level, a group of pictures level, a group of frames level, a picture level, a frame level, a slice level, a tile level, or a tile group level.
Clause 65. The method of clause 64, wherein the syntax element or the indication is included in one of: a coding tree unit (CTU) , a coding unit (CU) , a transform unit (TU) , a prediction unit (PU) , a coding tree block (CTB) , a coding block (CB) , a transform block (TB) , a prediction block (PB) , a sequence header, a picture header, a sequence parameter set (SPS) , a Video Parameter Set (VPS) , a decoded parameter set (DPS) , Decoding Capability Information (DCI) , a Picture Parameter Set (PPS) , an Adaptation Parameter Set (APS) , a slice header, a tile group header, or a tile header.
Clause 66. The method of any of clauses 1-65, wherein whether to apply the method and/or how to apply the method is included at one of: a block level, a sequence level, a group of pictures level, a group of frames level, a picture level, a frame level, a slice level, a tile level, or a tile group level.
Clause 67. The method of clause 66, wherein whether to apply the method and/or how to apply the method is included in one of: a coding tree unit (CTU) , a coding unit (CU) , a transform unit (TU) , a prediction unit (PU) , a coding tree block (CTB) , a coding block (CB) , a transform block (TB) , a prediction block (PB) , a sequence header, a picture header, a sequence parameter set (SPS) , a Video Parameter Set (VPS) , a decoded parameter set (DPS) , Decoding Capability Information (DCI) , a Picture Parameter Set (PPS) , an Adaptation Parameter Set (APS) , a slice header, a tile group header, or a tile header.
Clause 68. The method of any of clauses 1-65, wherein whether to apply the method and/or how to apply the method is based on coded information.
Clause 69. The method of clause 68, wherein the coded information comprises at least one of: a block size, a color format, an attribute format, a single or dual tree partitioning, a color component, a slice type, a tile type, a picture type, or a frame type.
Clause 70. The method of any of clauses 1-69, wherein the method is used in a coding tool requiring chroma fusion.
Clause 71. An apparatus for point cloud coding comprising a processor and a non-transitory memory with instructions thereon, wherein the instructions upon execution by the processor, cause the processor to perform a method in accordance with any of clauses 1-70.
Clause 72. A non-transitory computer-readable storage medium storing instructions that cause a processor to perform a method in accordance with any of clauses 1-70.
Clause 73. A non-transitory computer-readable recording medium storing a bitstream of a point cloud sequence which is generated by a method performed by a point cloud processing apparatus, wherein the method comprises: determining a type of a current point cloud of the point cloud sequence; determining a coding module for the current point cloud based on the type of the current point cloud, the coding module comprising at least one of: a point cloud feature extractor or a point cloud geometry reconstruction module; and generating the bitstream based on the coding module.
Clause 74. A method for storing a bitstream of a point cloud sequence, comprising: determining a type of a current point cloud of the point cloud sequence; determining a coding module for the current point cloud based on the type of the current point cloud, the coding module comprising at least one of: a point cloud feature extractor or a point cloud geometry reconstruction module; generating the bitstream based on the coding module; and storing the bitstream in a non-transitory computer-readable recording medium.
Example Device
Fig. 8 illustrates a block diagram of a computing device 800 in which various embodiments of the present disclosure can be implemented. The computing device 800 may be implemented as or included in the source device 110 (or the PCC encoder 116 or the GPCC encoder 300) or the destination device 120 (or the PCC decoder 126 or the GPCC decoder 400) .
It would be appreciated that the computing device 800 shown in Fig. 8 is merely for purpose of illustration, without suggesting any limitation to the functions and scopes of the embodiments of the present disclosure in any manner.
As shown in Fig. 8, the computing device 800 is in the form of a general-purpose computing device. The computing device 800 may at least comprise one or more processors or processing units 810, a memory 820, a storage unit 830, one or more communication units 840, one or more input devices 850, and one or more output devices 860.
In some embodiments, the computing device 800 may be implemented as any user terminal or server terminal having the computing capability. The server terminal may be a server, a large-scale computing device or the like that is provided by a service provider. The user terminal may for example be any type of mobile terminal, fixed terminal, or portable terminal, including a mobile phone, station, unit, device, multimedia computer, multimedia tablet, Internet node, communicator, desktop computer, laptop computer, notebook computer, netbook computer, tablet computer, personal communication system (PCS) device, personal navigation device, personal digital assistant (PDA) , audio/video player, digital camera/video camera, positioning device, television receiver, radio broadcast receiver, E-book device, gaming device, or any combination thereof, including the accessories and peripherals of these devices, or any combination thereof. It would be contemplated that the computing device 800 can support any type of interface to a user (such as “wearable” circuitry and the like) .
The processing unit 810 may be a physical or virtual processor and can implement various processes based on programs stored in the memory 820. In a multi-processor system, multiple processing units execute computer executable instructions in parallel so as to improve the parallel processing capability of the computing device 800. The processing unit 810 may also be referred to as a central processing unit (CPU) , a microprocessor, a controller or a microcontroller.
The computing device 800 typically includes various computer storage medium. Such medium can be any medium accessible by the computing device 800, including, but not limited to, volatile and non-volatile medium, or detachable and non-detachable medium. The memory 820 can be a volatile memory (for example, a register, cache, Random Access Memory (RAM) ) , a non-volatile memory (such as a Read-Only Memory (ROM) , Electrically Erasable Programmable Read-Only Memory (EEPROM) , or a flash memory) , or any combination
thereof. The storage unit 830 may be any detachable or non-detachable medium and may include a machine-readable medium such as a memory, flash memory drive, magnetic disk or any other media, which can be used for storing information and/or data and can be accessed in the computing device 800.
The computing device 800 may further include additional detachable/non-detachable, volatile/non-volatile memory medium. Although not shown in Fig. 8, it is possible to provide a magnetic disk drive for reading from and/or writing into a detachable and non-volatile magnetic disk and an optical disk drive for reading from and/or writing into a detachable non-volatile optical disk. In such cases, each drive may be connected to a bus (not shown) via one or more data medium interfaces.
The communication unit 840 communicates with a further computing device via the communication medium. In addition, the functions of the components in the computing device 800 can be implemented by a single computing cluster or multiple computing machines that can communicate via communication connections. Therefore, the computing device 800 can operate in a networked environment using a logical connection with one or more other servers, networked personal computers (PCs) or further general network nodes.
The input device 850 may be one or more of a variety of input devices, such as a mouse, keyboard, tracking ball, voice-input device, and the like. The output device 860 may be one or more of a variety of output devices, such as a display, loudspeaker, printer, and the like. By means of the communication unit 840, the computing device 800 can further communicate with one or more external devices (not shown) such as the storage devices and display device, with one or more devices enabling the user to interact with the computing device 800, or any devices (such as a network card, a modem and the like) enabling the computing device 800 to communicate with one or more other computing devices, if required. Such communication can be performed via input/output (I/O) interfaces (not shown) .
In some embodiments, instead of being integrated in a single device, some or all components of the computing device 800 may also be arranged in cloud computing architecture. In the cloud computing architecture, the components may be provided remotely and work together to implement the functionalities described in the present disclosure. In some embodiments, cloud computing provides computing, software, data access and storage service, which will not require end users to be aware of the physical locations or configurations of the
systems or hardware providing these services. In various embodiments, the cloud computing provides the services via a wide area network (such as Internet) using suitable protocols. For example, a cloud computing provider provides applications over the wide area network, which can be accessed through a web browser or any other computing components. The software or components of the cloud computing architecture and corresponding data may be stored on a server at a remote position. The computing resources in the cloud computing environment may be merged or distributed at locations in a remote data center. Cloud computing infrastructures may provide the services through a shared data center, though they behave as a single access point for the users. Therefore, the cloud computing architectures may be used to provide the components and functionalities described herein from a service provider at a remote location. Alternatively, they may be provided from a conventional server or installed directly or otherwise on a client device.
The computing device 800 may be used to implement point cloud encoding/decoding in embodiments of the present disclosure. The memory 820 may include one or more point cloud coding modules 825 having one or more program instructions. These modules are accessible and executable by the processing unit 810 to perform the functionalities of the various embodiments described herein.
In the example embodiments of performing point cloud encoding, the input device 850 may receive point cloud data as an input 870 to be encoded. The point cloud data may be processed, for example, by the point cloud coding module 825, to generate an encoded bitstream. The encoded bitstream may be provided via the output device 860 as an output 880.
In the example embodiments of performing point cloud decoding, the input device 850 may receive an encoded bitstream as the input 870. The encoded bitstream may be processed, for example, by the point cloud coding module 825, to generate decoded point cloud data. The decoded point cloud data may be provided via the output device 860 as the output 880.
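As a minimal illustrative sketch of this input/output flow, the snippet below stands in for the point cloud coding module 825; the class name, method names, and the byte-serialization "bitstream" are assumptions made for illustration only, not an interface defined by this disclosure.

```python
import numpy as np

class PointCloudCodingModule:
    """Illustrative stand-in for the point cloud coding module 825."""

    def encode(self, points: np.ndarray) -> bytes:
        # points: (N, 3) float32 coordinates received as input 870.
        # A real module would run feature extraction and entropy coding;
        # here we simply serialize the array as a placeholder bitstream.
        return points.astype(np.float32).tobytes()

    def decode(self, bitstream: bytes) -> np.ndarray:
        # Recover the (N, 3) coordinates as decoded point cloud data.
        return np.frombuffer(bitstream, dtype=np.float32).reshape(-1, 3)

codec = PointCloudCodingModule()
cloud = np.random.rand(1024, 3).astype(np.float32)  # stand-in input 870
bitstream = codec.encode(cloud)                      # encoder output 880
restored = codec.decode(bitstream)                   # decoder output 880
```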
While this disclosure has been particularly shown and described with reference to preferred embodiments thereof, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the present application as defined by the appended claims. Such variations are intended to be covered by the scope of the present application. As such, the foregoing description of embodiments of the present application is not intended to be limiting.
Claims (74)
- A method for point cloud coding, comprising:
  determining, for a conversion between a current point cloud of a point cloud sequence and a bitstream of the point cloud sequence, a type of the current point cloud;
  determining a coding module for the current point cloud based on the type of the current point cloud, the coding module comprising at least one of: a point cloud feature extractor or a point cloud geometry reconstruction module; and
  performing the conversion based on the coding module.
- The method of claim 1, wherein the conversion includes encoding the current point cloud into the bitstream.
- The method of claim 2, wherein the conversion is performed by an encoder coding information to be included in the bitstream.
- The method of claim 2 or 3, wherein determining the coding module comprises:
  determining the point cloud feature extractor based on the type of the current point cloud.
- The method of claim 4, wherein the point cloud feature extractor comprises at least one of:
  a point-based extractor, or
  a voxel-based extractor.
- The method of claim 4 or 5, wherein the type of the current point cloud comprises a basic solid point cloud with a number of points less than a threshold number or a basic object, and the coding module comprises a point-based extractor.
- The method of claim 6, wherein the point-based extractor is based on at least one of:
  a graph-based approach, or
  a transformer-based approach.
- The method of claim 7, wherein the point-based extractor is based on a graph-based approach, a graph convolution is used to model a topological structure of the current point cloud, and a feature representation of the current point cloud is extracted based on the modeled topological structure of the current point cloud.
- The method of claim 8, wherein a K Nearest Neighbors (KNN) algorithm is used by the graph convolution to obtain local nearest neighbors in the current point cloud and to extract features of locally adjacent points.
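A minimal sketch of the KNN neighbor gathering described in the claim above, assuming a DGCNN-style edge feature (the center feature concatenated with neighbor differences) as the input to a subsequent graph convolution; the exact feature construction and the choice of k are assumptions for illustration.

```python
import numpy as np

def knn_indices(points: np.ndarray, k: int) -> np.ndarray:
    # Pairwise squared distances, then the k nearest neighbors of each point.
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    return np.argsort(d2, axis=1)[:, 1:k + 1]   # drop the point itself

def edge_features(feats: np.ndarray, idx: np.ndarray) -> np.ndarray:
    # DGCNN-style local features: concatenate the center feature with the
    # difference to each neighbor, giving (N, k, 2*C) inputs for a shared MLP.
    neighbors = feats[idx]                                    # (N, k, C)
    center = np.repeat(feats[:, None, :], idx.shape[1], axis=1)
    return np.concatenate([center, neighbors - center], axis=-1)

pts = np.random.rand(2048, 3).astype(np.float32)
idx = knn_indices(pts, k=16)
local = edge_features(pts, idx)   # (2048, 16, 6) local adjacent-point features
```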
- The method of claim 7, wherein the point-based extractor is based on a transformer-based approach for modeling a topological structure of the current point cloud, and a spatial structure of the current point cloud is modeled and a feature representation of the current point cloud is extracted based on an attention mechanism of a transformer.
- The method of claim 10, wherein the transformer-based approach performs a progressive geometric feature extraction of the current point cloud by stacking a multi-layer attention mechanism.
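A minimal sketch of stacking multi-layer attention over per-point features, as in the transformer-based extractor of claims 10 and 11; the layer widths, head count, and residual/normalization layout are assumptions for illustration.

```python
import torch
import torch.nn as nn

class PointAttentionStack(nn.Module):
    """Stacked self-attention layers over per-point features."""

    def __init__(self, dim: int = 64, heads: int = 4, depth: int = 3):
        super().__init__()
        self.layers = nn.ModuleList(
            nn.MultiheadAttention(dim, heads, batch_first=True) for _ in range(depth)
        )
        self.norms = nn.ModuleList(nn.LayerNorm(dim) for _ in range(depth))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, num_points, dim) per-point features. Each stacked layer
        # refines the features, so deeper layers capture progressively more
        # of the cloud's geometric structure.
        for attn, norm in zip(self.layers, self.norms):
            out, _ = attn(x, x, x)   # self-attention over all points
            x = norm(x + out)        # residual connection + normalization
        return x

feats = torch.randn(1, 1024, 64)          # toy per-point features
refined = PointAttentionStack()(feats)    # (1, 1024, 64)
```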
- The method of claim 4 or 5, wherein the type of the current point cloud comprises a Lidar point cloud or a large scale point cloud with a complex structure and a number of points larger than a threshold number, and the coding module comprises a voxel-based extractor.
- The method of claim 12, wherein a sparse convolution-based approach is used.
- The method of claim 13, wherein the sparse convolution-based approach is used to extract geometry structure features of the current point cloud.
- The method of claim 13 or 14, wherein the sparse convolution voxelizes the current point cloud and determines geometry characteristics of the voxelized current point cloud.
- The method of claim 15, wherein a voxelization is used to make point cloud data regular for subsequent convolution processing.
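A minimal sketch of the voxelization step referenced in claims 15 and 16, assuming simple floor quantization with one representative point kept per occupied voxel; a sparse convolution library (e.g., a MinkowskiEngine-style engine) would then consume such regular integer coordinates.

```python
import numpy as np

def voxelize(points: np.ndarray, voxel_size: float):
    # Quantize coordinates to integer voxel indices and keep one point per
    # occupied voxel; the resulting regular grid is what a (sparse)
    # convolution expects as input.
    coords = np.floor(points / voxel_size).astype(np.int32)
    _, unique_idx = np.unique(coords, axis=0, return_index=True)
    return coords[unique_idx], points[unique_idx]

pts = np.random.rand(100000, 3).astype(np.float32) * 50.0
voxel_coords, voxel_points = voxelize(pts, voxel_size=0.5)
```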
- The method of any of claims 4-16, wherein the point cloud feature extractor comprises a pretrained point cloud feature extractor extracting compact feature representations for point clouds with different spatial structures and point sizes.
- The method of any of claims 4-16, wherein an indication of the point cloud feature extractor to be used is included in the bitstream.
- The method of any of claims 4-16, wherein the point cloud feature extractor to be used is determined by a decoder for decoding the current point cloud from the bitstream.
- The method of claim 1, wherein the conversion includes decoding the current point cloud from the bitstream.
- The method of claim 20, wherein the conversion is performed by a decoder, the decoder coding compression bits to determine information in the bitstream.
- The method of claim 20 or 21, wherein determining the coding module comprises:
  determining the point cloud geometry reconstruction module based on the type of the current point cloud.
- The method of claim 22, wherein the current point cloud comprises a small object point cloud with a number of points less than a threshold number or a basic object, and the point cloud geometry reconstruction module uses a folding-based approach to reconstruct the current point cloud from a decoded point cloud feature representation.
- The method of claim 23, wherein a FoldingNet is used as a basic reconstruction network for the folding-based approach.
- The method of claim 23 or 24, wherein the folding-based approach is used to reconstruct a 3D structure of the current point cloud from a feature space.
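A minimal sketch of the folding-based reconstruction in claims 23-25, assuming a single folding step that deforms a fixed 2D grid into 3D points conditioned on the decoded feature (codeword); FoldingNet itself applies two folding steps, so the depth and layer sizes here are illustrative assumptions.

```python
import torch
import torch.nn as nn

class FoldingDecoder(nn.Module):
    """Fold a fixed 2D grid into 3D, conditioned on a per-cloud codeword."""

    def __init__(self, code_dim: int = 256, grid_side: int = 32):
        super().__init__()
        g = torch.linspace(-1.0, 1.0, grid_side)
        u, v = torch.meshgrid(g, g, indexing="ij")
        self.register_buffer("grid", torch.stack([u, v], -1).reshape(-1, 2))
        self.fold = nn.Sequential(
            nn.Linear(code_dim + 2, 512), nn.ReLU(),
            nn.Linear(512, 512), nn.ReLU(),
            nn.Linear(512, 3),
        )

    def forward(self, code: torch.Tensor) -> torch.Tensor:
        # code: (B, code_dim) decoded feature; output: (B, grid_side**2, 3)
        # reconstructed 3D points obtained by folding the 2D grid.
        b, n = code.shape[0], self.grid.shape[0]
        grid = self.grid.unsqueeze(0).expand(b, n, 2)
        cond = code.unsqueeze(1).expand(b, n, code.shape[-1])
        return self.fold(torch.cat([cond, grid], dim=-1))

recon = FoldingDecoder()(torch.randn(2, 256))   # (2, 1024, 3)
```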
- The method of claim 22, wherein the current point cloud comprises a Lidar point cloud or a large scale point cloud with a number of points larger than a threshold number, and at least one of a sparse-convolution based approach or a point-based approach is used to reconstruct the current point cloud.
- The method of claim 26, wherein a sparse-convolution upscale is used as a basic reconstruction network in the sparse-convolution based approach.
- The method of claim 26 or 27, wherein the large scale point cloud is blocked based on the number of points in the large scale point cloud.
- The method of claim 28, wherein the point-based approach is used to code and reconstruct the blocked point cloud.
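A minimal sketch of blocking a large scale point cloud based on its number of points, as in claims 28 and 29; sorting along one axis before chunking is an illustrative assumption, and each resulting block could then be coded and reconstructed by a point-based approach.

```python
import numpy as np

def block_partition(points: np.ndarray, max_points: int = 8192):
    # Sort points along x (with y, z as tie-breakers) and split the sorted
    # cloud into blocks of at most max_points points each, so that a
    # point-based network can process each block independently.
    order = np.lexsort((points[:, 2], points[:, 1], points[:, 0]))
    sorted_pts = points[order]
    return [sorted_pts[i:i + max_points]
            for i in range(0, len(sorted_pts), max_points)]

blocks = block_partition(np.random.rand(50000, 3).astype(np.float32))
```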
- The method of any of claims 1-29, further comprising:
  applying a point cloud feature mapping module to determine a similarity of features in a plurality of tasks.
- The method of claim 30, wherein the plurality of tasks comprises a point cloud geometry reconstruction task, and for the point cloud geometry reconstruction task, extracted features of the current point cloud comprise more point cloud spatial structure information than those extracted for a further task.
- The method of claim 31, wherein the spatial structure information comprises at least one of: high-frequency texture information or low-frequency structure information in point cloud data.
- The method of claim 30, wherein the plurality of tasks comprises a point cloud intelligent analysis task, and extracted features of the current point cloud comprise more point cloud semantic information than those extracted for a further task, and wherein the point cloud intelligent analysis task comprises at least one of: a point cloud segmentation or a point cloud classification.
- The method of claim 33, wherein the point cloud semantic information comprises semantic information of each point in the current point cloud.
- The method of claim 30, wherein the point cloud feature mapping module comprises a transformer-based point cloud feature mapping module for minimizing a feature similarity between feature spaces of point cloud geometry coding reconstruction and point cloud machine vision intelligent analysis.
- The method of claim 35, wherein a design of the point cloud feature mapping module is based on a transformer structure of a multi-head attention mechanism.
- The method of claim 35, wherein a self-attention mechanism is used to determine the feature similarity of a feature mapping space, and features are weighted and summed.
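A minimal sketch of using a self-attention mechanism to compute feature similarity and produce a weighted sum of features, as in the claim above; the scaled dot-product similarity and softmax weighting are assumptions carried over from standard transformer attention.

```python
import torch
import torch.nn.functional as F

def self_attention_mapping(feats: torch.Tensor, scale=None) -> torch.Tensor:
    # feats: (N, C) per-point features. The dot-product similarity between
    # features acts as the attention weight; features are then summed with
    # those weights, i.e. weighted and summed.
    scale = scale or feats.shape[-1] ** 0.5
    similarity = feats @ feats.t() / scale     # (N, N) feature similarity
    weights = F.softmax(similarity, dim=-1)
    return weights @ feats                     # weighted sum of features

mapped = self_attention_mapping(torch.randn(512, 64))   # (512, 64)
```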
- The method of any of claims 30-37, wherein the point cloud feature mapping module is symmetrically designed and reserved at at least one of: an encoder for the conversion, or a decoder for the conversion.
- The method of any of claims 1-38, further comprising:
  retaining point cloud semantic information while maintaining a geometry structure of the current point cloud based on a multi-task learning mechanism.
- The method of claim 39, wherein an indication of a usage of a multi-task model is included in the bitstream.
- The method of claim 39, wherein whether to use a multi-task model is determined at a decoder for the conversion.
- The method of any of claims 39-41, wherein during a training stage for the multi-task learning mechanism, a multi-objective loss constraint is used.
- The method of claim 42, wherein a geometry constraint for a point cloud reconstruction is used for a basic quality point cloud.
- The method of claim 43, wherein the geometry constraint is determined based on a chamfer distance for a supervised learning.
- The method of claim 44, wherein the chamfer distance is determined by:
  LCD (S1, S2) = (1/|S1|) ∑x∈S1 miny∈S2 ‖x − y‖² + (1/|S2|) ∑y∈S2 minx∈S1 ‖y − x‖²,
  where S1 and S2 are two point clouds, x and y are coordinates of points in S1 and S2, respectively, and LCD (S1, S2) denotes the chamfer distance between the two point clouds.
- The method of any of claims 42-45, wherein a machine vision oriented semantic constraint is used for a point cloud.
- The method of claim 46, wherein the semantic constraint is used to constrain a feature distribution distance between coded features and original features of the point cloud.
- The method of claim 47, wherein the semantic constraint is determined based on a Kullback-Leibler (KL) divergence.
- The method of claim 48, wherein a KL divergence distance Lsemantic is determined by:
  Lsemantic = DKL (Fori || Frec) = ∑i [pori (vi) log pori (vi) − pori (vi) log prec (vi)],
  where Fori and Frec denote the original feature and the coded feature, and pori and prec denote the probability distributions of the original feature and the coded feature, respectively.
- The method of claim 42, wherein a final loss constraint is determined by:
  Lmulti = Lcd + Lsemantic,
  where Lcd denotes a geometry constraint, Lsemantic denotes a semantic constraint, and Lmulti denotes the final loss constraint.
- The method of any of claims 1-50, wherein an indication of a final sampled point cloud is included in the bitstream.
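A minimal sketch of the training losses in claims 44-50: a chamfer distance as the geometry constraint, a KL divergence between feature distributions as the semantic constraint, and their sum as the final loss. Obtaining the probability distributions via softmax, and averaging squared nearest-neighbour distances, are assumptions made for illustration.

```python
import torch
import torch.nn.functional as F

def chamfer_distance(s1: torch.Tensor, s2: torch.Tensor) -> torch.Tensor:
    # s1: (N, 3), s2: (M, 3). Average squared nearest-neighbour distance in
    # both directions, i.e. the geometry constraint L_cd above.
    d = torch.cdist(s1, s2).pow(2)          # (N, M) squared pairwise distances
    return d.min(dim=1).values.mean() + d.min(dim=0).values.mean()

def semantic_kl(f_ori: torch.Tensor, f_rec: torch.Tensor) -> torch.Tensor:
    # KL divergence between probability distributions of the original and
    # coded features; softmax is used here to obtain valid distributions.
    p_ori = F.softmax(f_ori, dim=-1)
    p_rec = F.softmax(f_rec, dim=-1)
    return (p_ori * (p_ori.log() - p_rec.log())).sum(dim=-1).mean()

l_cd = chamfer_distance(torch.rand(1024, 3), torch.rand(1000, 3))
l_sem = semantic_kl(torch.randn(1024, 64), torch.randn(1024, 64))
l_multi = l_cd + l_sem                       # final loss constraint L_multi
```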
- The method of claim 51, wherein the indication of the final sampled point cloud is coded by a point cloud codec.
- The method of claim 52, wherein the point cloud codec comprises at least one of:
  a geometry-based point cloud compression (G-PCC),
  a video-based point cloud compression (V-PCC), or
  Draco.
- The method of any of claims 1-53, wherein an indication of at least one feature of the current point cloud is included in the bitstream.
- The method of claim 54, wherein the indication of the at least one feature is coded with at least one of: a fixed-length coding, a unary coding, or a truncated unary coding.
- The method of claim 54 or 55, wherein the indication of the at least one feature is coded in a predictive way.
- The method of any of claims 1-56, wherein information regarding whether to apply the method and/or how to apply the method is included in at least one of: the bitstream, a frame, a tile, a slice, or an octree.
- The method of any of claims 1-56, wherein whether to and/or how to apply the method is based on coded information, the coded information comprising at least one of: a dimension, a color format, a color component, a slice type or a picture type.
- The method of any of claims 1-58, wherein a syntax element or an indication is binarized as at least one of:
  a flag,
  a fixed length code,
  an exponential Golomb (x) (EG (x)) code,
  a unary code,
  a truncated unary code, or
  a truncated binary code.
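A minimal sketch of the binarizations listed in the claim above (fixed-length, unary, truncated unary, and EG(x) codes), returning bit strings for readability; the truncated binary code and the context/bypass coding of later claims are omitted.

```python
def fixed_length(value: int, bits: int) -> str:
    # Fixed-length binarization with the given number of bits.
    return format(value, "0{}b".format(bits))

def unary(value: int) -> str:
    # value ones followed by a terminating zero.
    return "1" * value + "0"

def truncated_unary(value: int, max_value: int) -> str:
    # Like unary, but the terminating zero is dropped for the largest value.
    return "1" * value if value == max_value else "1" * value + "0"

def exp_golomb(value: int, k: int = 0) -> str:
    # k-th order Exp-Golomb: write value + 2**k in binary and prepend
    # (bit_length - k - 1) leading zeros.
    v = value + (1 << k)
    return "0" * (v.bit_length() - 1 - k) + format(v, "b")

print(fixed_length(5, 4), unary(3), truncated_unary(3, 3), exp_golomb(4))
# -> 0101 1110 111 00101
```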
- The method of claim 59, wherein the syntax element or the indication is signed or unsigned.
- The method of any of claims 1-60, wherein a syntax element or an indication is coded with at least one context model.
- The method of any of claims 1-60, wherein a syntax element or an indication is bypass coded.
- The method of any of claims 1-62, wherein a syntax element or an indication is included in the bitstream based on at least one condition, the at least one condition comprising at least one of:
  a first condition that a function corresponding to the syntax element or the indication is applicable, or
  a second condition that a dimension of a block of the current point cloud satisfies a condition.
- The method of any of claims 1-63, wherein a syntax element or an indication is included at one of:
  a block level,
  a sequence level,
  a group of pictures level,
  a group of frames level,
  a picture level,
  a frame level,
  a slice level,
  a tile level, or
  a tile group level.
- The method of claim 64, wherein the syntax element or the indication is included in one of:
  a coding tree unit (CTU),
  a coding unit (CU),
  a transform unit (TU),
  a prediction unit (PU),
  a coding tree block (CTB),
  a coding block (CB),
  a transform block (TB),
  a prediction block (PB),
  a sequence header,
  a picture header,
  a sequence parameter set (SPS),
  a Video Parameter Set (VPS),
  a decoded parameter set (DPS),
  Decoding Capability Information (DCI),
  a Picture Parameter Set (PPS),
  an Adaptation Parameter Set (APS),
  a slice header,
  a tile group header, or
  a tile header.
- The method of any of claims 1-65, wherein whether to apply the method and/or how to apply the method is included at one of:
  a block level,
  a sequence level,
  a group of pictures level,
  a group of frames level,
  a picture level,
  a frame level,
  a slice level,
  a tile level, or
  a tile group level.
- The method of claim 66, wherein whether to apply the method and/or how to apply the method is included in one of:
  a coding tree unit (CTU),
  a coding unit (CU),
  a transform unit (TU),
  a prediction unit (PU),
  a coding tree block (CTB),
  a coding block (CB),
  a transform block (TB),
  a prediction block (PB),
  a sequence header,
  a picture header,
  a sequence parameter set (SPS),
  a Video Parameter Set (VPS),
  a decoded parameter set (DPS),
  Decoding Capability Information (DCI),
  a Picture Parameter Set (PPS),
  an Adaptation Parameter Set (APS),
  a slice header,
  a tile group header, or
  a tile header.
- The method of any of claims 1-65, wherein whether to apply the method and/or how to apply the method is based on coded information.
- The method of claim 68, wherein the coded information comprises at least one of: a block size, a color format, an attribute format, a single or dual tree partitioning, a color component, a slice type, a tile type, a picture type, or a frame type.
- The method of any of claims 1-69, wherein the method is used in a coding tool requiring chroma fusion.
- An apparatus for processing point cloud data, comprising a processor and a non-transitory memory with instructions thereon, wherein the instructions upon execution by the processor, cause the processor to perform a method in accordance with any of claims 1-70.
- A non-transitory computer-readable storage medium storing instructions that cause a processor to perform a method in accordance with any of claims 1-70.
- A non-transitory computer-readable recording medium storing a bitstream of a point cloud sequence which is generated by a method performed by a point cloud processing apparatus, wherein the method comprises:
  determining a type of a current point cloud of the point cloud sequence;
  determining a coding module for the current point cloud based on the type of the current point cloud, the coding module comprising at least one of: a point cloud feature extractor or a point cloud geometry reconstruction module; and
  generating the bitstream based on the coding module.
- A method for storing a bitstream of a point cloud sequence, comprising:
  determining a type of a current point cloud of the point cloud sequence;
  determining a coding module for the current point cloud based on the type of the current point cloud, the coding module comprising at least one of: a point cloud feature extractor or a point cloud geometry reconstruction module;
  generating the bitstream based on the coding module; and
  storing the bitstream in a non-transitory computer-readable recording medium.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title
---|---|---|---
CNPCT/CN2023/074467 | 2023-02-03 | |
CN2023074467 | 2023-02-03 | |
Publications (1)
Publication Number | Publication Date
---|---
WO2024160290A1 (en) | 2024-08-08
Family
ID=92145866
Family Applications (1)
Application Number | Title | Priority Date | Filing Date
---|---|---|---
PCT/CN2024/075628 WO2024160290A1 (en) | Method, apparatus, and medium for point cloud coding | 2023-02-03 | 2024-02-02
Country Status (1)
Country | Link |
---|---|
WO (1) | WO2024160290A1 (en) |
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2019159940A (en) * | 2018-03-14 | 2019-09-19 | 株式会社Preferred Networks | Point group feature extraction device, point group feature extraction method, and program |
US20220224940A1 (en) * | 2019-05-30 | 2022-07-14 | Lg Electronics Inc. | Method and device for processing point cloud data |
EP3767521A1 (en) * | 2019-07-15 | 2021-01-20 | Promaton Holding B.V. | Object detection and instance segmentation of 3d point clouds based on deep learning |
WO2022067790A1 (en) * | 2020-09-30 | 2022-04-07 | Oppo广东移动通信有限公司 | Point cloud layering method, decoder, encoder, and storage medium |
CN113850270A (en) * | 2021-04-15 | 2021-12-28 | 北京大学 | Semantic scene completion method and system based on point cloud-voxel aggregation network model |
Non-Patent Citations (1)
Title |
---|
S. SCHWARZ (NOKIA), M. M. HANNUKSELA (NOKIA): "[AHG4] Occupancy-only PSNR calculations for V3C V-PCC coding evaluation", 29. JVET MEETING; 20230111 - 20230120; TELECONFERENCE; (THE JOINT VIDEO EXPLORATION TEAM OF ISO/IEC JTC1/SC29/WG11 AND ITU-T SG.16 ), 17 January 2023 (2023-01-17), XP030306470 * |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US20240223807A1 (en) | Method, apparatus, and medium for point cloud coding | |
US20240135592A1 (en) | Method, apparatus, and medium for point cloud coding | |
WO2024160290A1 (en) | Method, apparatus, and medium for point cloud coding | |
WO2024175011A1 (en) | Method, apparatus, and medium for point cloud coding | |
WO2024217512A1 (en) | Method, apparatus, and medium for point cloud processing | |
WO2024149258A1 (en) | Method, apparatus, and medium for point cloud coding | |
WO2024213148A1 (en) | Method, apparatus, and medium for point cloud coding | |
WO2024149309A1 (en) | Method, apparatus, and medium for point cloud coding | |
WO2023280129A1 (en) | Method, apparatus, and medium for point cloud coding | |
WO2024074121A1 (en) | Method, apparatus, and medium for point cloud coding | |
US20240267527A1 (en) | Method, apparatus, and medium for point cloud coding | |
WO2024012381A1 (en) | Method, apparatus, and medium for point cloud coding | |
WO2024193613A1 (en) | Method, apparatus, and medium for point cloud coding | |
WO2024212969A1 (en) | Method, apparatus, and medium for video processing | |
WO2024149203A1 (en) | Method, apparatus, and medium for point cloud coding | |
WO2024146644A1 (en) | Method, apparatus, and medium for point cloud coding | |
WO2024074122A1 (en) | Method, apparatus, and medium for point cloud coding | |
WO2024083194A1 (en) | Method, apparatus, and medium for point cloud coding | |
WO2024074123A1 (en) | Method, apparatus, and medium for point cloud coding | |
WO2023093785A1 (en) | Method, apparatus, and medium for point cloud coding | |
WO2023116897A1 (en) | Method, apparatus, and medium for point cloud coding | |
WO2023051551A1 (en) | Method, apparatus, and medium for point cloud coding | |
WO2024153154A1 (en) | Method, apparatus, and medium for video processing | |
WO2023093865A1 (en) | Method, apparatus, and medium for point cloud coding | |
WO2023131126A1 (en) | Method, apparatus, and medium for point cloud coding |
Legal Events
Date | Code | Title | Description
---|---|---|---
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 24749765; Country of ref document: EP; Kind code of ref document: A1