WO2023282543A1 - Transmission device for point cloud data and method performed in the transmission device, and reception device for point cloud data and method performed in the reception device - Google Patents
- Publication number
- WO2023282543A1 (PCT/KR2022/009457)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- information
- value
- attribute
- point cloud
- geometry
- Prior art date
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/30—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using hierarchical techniques, e.g. scalability
- H04N19/31—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using hierarchical techniques, e.g. scalability in the temporal domain
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T9/00—Image coding
- G06T9/001—Model-based coding, e.g. wire frame
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/102—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
- H04N19/132—Sampling, masking or truncation of coding units, e.g. adaptive resampling, frame skipping, frame interpolation or high-frequency transform coefficient masking
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/134—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
- H04N19/136—Incoming video signal characteristics or properties
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/50—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
- H04N19/597—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding specially adapted for multi-view video sequence encoding
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/70—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by syntax aspects related to video coding, e.g. related to compression standards
Definitions
- The present disclosure relates to a method and apparatus for processing point cloud content.
- Point cloud content is content expressed as a point cloud, i.e., a set of points belonging to a coordinate system representing a 3D space.
- Point cloud content can represent three-dimensional media and is used to provide various services such as VR (virtual reality), AR (augmented reality), MR (mixed reality), and autonomous driving services. Since tens of thousands to hundreds of thousands of points are required to represent point cloud content, a method for efficiently processing this vast amount of point data is required.
- The present disclosure provides an apparatus and method for efficiently processing point cloud data.
- The present disclosure provides a point cloud data processing method and apparatus that address latency and encoding/decoding complexity.
- The present disclosure provides an apparatus and methods for supporting temporal scalability in the carriage of geometry-based point cloud compressed data.
- The present disclosure proposes an apparatus and methods for providing a point cloud content service that efficiently stores a G-PCC bitstream in a single track of a file, or divides and stores it across a plurality of tracks, and that provides signaling therefor.
- The present disclosure proposes an apparatus and methods for processing a file storage scheme to support efficient access to a stored G-PCC bitstream.
- A method performed in an apparatus for receiving point cloud data according to an embodiment of the present disclosure includes: obtaining a geometry-based point cloud compression (G-PCC) file including the point cloud data, wherein the G-PCC file includes information about a sample group in which samples in the G-PCC file are grouped based on one or more temporal levels, and information about the temporal levels; and extracting one or more samples belonging to a target temporal level from the samples in the G-PCC file based on the information about the sample group and the information about the temporal levels.
- An apparatus for receiving point cloud data according to an embodiment of the present disclosure includes a memory and at least one processor. The at least one processor obtains a geometry-based point cloud compression (G-PCC) file including the point cloud data, wherein the G-PCC file includes information about a sample group in which samples in the G-PCC file are grouped based on one or more temporal levels, and information about the temporal levels, and the at least one processor may extract one or more samples belonging to a target temporal level from the samples in the G-PCC file based on the information about the sample group and the information about the temporal levels.
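The extraction described above can be sketched, purely as an illustration, with a hypothetical in-memory model in which each sample carries the temporal level assigned to it by its sample group (the names `extract_samples` and `sample_temporal_levels` are illustrative, not file-format syntax):

```python
def extract_samples(samples, sample_temporal_levels, target_level):
    """Keep only the samples whose signaled temporal level does not exceed
    the target temporal level (hypothetical in-memory model)."""
    return [s for s, lvl in zip(samples, sample_temporal_levels)
            if lvl <= target_level]

# Four samples alternating between temporal levels 0 and 1.
samples = ["frame0", "frame1", "frame2", "frame3"]
levels = [0, 1, 0, 1]
base_layer = extract_samples(samples, levels, target_level=0)  # lower frame rate
```

Choosing a lower target temporal level yields a valid sub-sequence at a reduced frame rate, which is the essence of temporal scalability.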
- A method performed in an apparatus for transmitting point cloud data according to an embodiment of the present disclosure includes: generating information about a sample group in which geometry-based point cloud compression (G-PCC) samples are grouped based on one or more temporal levels; and generating a G-PCC file including the information about the sample group, information about the temporal levels, and the point cloud data.
- An apparatus for transmitting point cloud data according to an embodiment of the present disclosure includes a memory and at least one processor. The at least one processor generates information about a sample group in which geometry-based point cloud compression (G-PCC) samples are grouped based on one or more temporal levels, and may generate a G-PCC file including the information about the sample group, information about the temporal levels, and the point cloud data.
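The transmit-side grouping can likewise be sketched with a hypothetical in-memory model (the function name and data layout are illustrative; in an actual file this information is carried as sample-group metadata):

```python
from collections import defaultdict

def group_samples_by_temporal_level(sample_levels):
    """Sample-group information: map each temporal level to the indices of
    the samples assigned to it (hypothetical in-memory model)."""
    groups = defaultdict(list)
    for index, level in enumerate(sample_levels):
        groups[level].append(index)
    return dict(groups)

groups = group_samples_by_temporal_level([0, 1, 0, 2, 1])
```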
- An apparatus and method according to embodiments of the present disclosure can process point cloud data with high efficiency.
- Devices and methods according to embodiments of the present disclosure may provide a high-quality point cloud service.
- Devices and methods according to embodiments of the present disclosure may provide point cloud content for general-purpose services such as VR services and autonomous driving services.
- An apparatus and method according to embodiments of the present disclosure may provide temporal scalability that enables effective access to a desired component among G-PCC components.
- By supporting temporal scalability, an apparatus and method according to embodiments of the present disclosure can manipulate data at a high level to match network or decoder capabilities, thereby improving the performance of a point cloud content providing system.
- Devices and methods according to embodiments of the present disclosure may increase bit efficiency by reducing the bits used for signaling frame rate information.
- An apparatus and method according to embodiments of the present disclosure may enable smooth and gradual reproduction by reducing the complexity of reproduction.
- FIG. 1 is a block diagram illustrating an example of a point cloud content providing system according to embodiments of the present disclosure.
- FIG. 2 is a block diagram illustrating an example of a process of providing point cloud content according to embodiments of the present disclosure.
- FIG. 3 shows an example of a point cloud video acquisition process according to embodiments of the present disclosure.
- FIG. 4 shows an example of a point cloud encoding apparatus according to embodiments of the present disclosure.
- FIG. 5 illustrates an example of a voxel according to embodiments of the present disclosure.
- FIG. 6 shows an example of an octree and occupancy code according to embodiments of the present disclosure.
- FIG. 7 shows an example of a neighbor node pattern according to embodiments of the present disclosure.
- FIG. 8 shows an example of a configuration of points according to LOD distance values according to embodiments of the present disclosure.
- FIG. 10 is a block diagram illustrating an example of a point cloud decoding apparatus according to embodiments of the present disclosure.
- FIG. 11 is a block diagram illustrating another example of a point cloud decoding apparatus according to embodiments of the present disclosure.
- FIG. 12 is a block diagram illustrating another example of a transmission device according to embodiments of the present disclosure.
- FIG. 13 is a block diagram illustrating another example of a receiving device according to embodiments of the present disclosure.
- FIG. 14 shows an example of a structure capable of interworking with a method/apparatus for transmitting and receiving point cloud data according to embodiments of the present disclosure.
- FIG. 15 is a block diagram illustrating another example of a transmission device according to embodiments of the present disclosure.
- FIG. 16 illustrates an example of spatially dividing a bounding box into 3D blocks according to embodiments of the present disclosure.
- FIG. 17 is a block diagram illustrating another example of a receiving device according to embodiments of the present disclosure.
- FIG. 18 shows an example of a structure of a bitstream according to embodiments of the present disclosure.
- FIG. 19 illustrates an example of an identification relationship between components in a bitstream according to embodiments of the present disclosure.
- FIG. 21 shows an example of an SPS syntax structure according to embodiments of the present disclosure.
- FIG. 22 illustrates an example of an indication of an attribute type and an indication of a correspondence between position components according to embodiments of the present disclosure.
- FIG. 23 shows an example of a GPS syntax structure according to embodiments of the present disclosure.
- FIG. 24 shows an example of an APS syntax structure according to embodiments of the present disclosure.
- FIG. 25 shows an example of an attribute coding type table according to embodiments of the present disclosure.
- FIG. 26 illustrates an example of a tile inventory syntax structure according to embodiments of the present disclosure.
- FIGS. 29 and 30 show examples of attribute slice syntax structures according to embodiments of the present disclosure.
- FIG. 31 shows an example of a metadata slice syntax structure according to embodiments of the present disclosure.
- FIG. 32 shows an example of a TLV encapsulation structure according to embodiments of the present disclosure.
- FIG. 33 shows an example of a TLV encapsulation syntax structure and payload type according to embodiments of the present disclosure.
- FIG. 34 shows an example of a file including a single track according to embodiments of the present disclosure.
- FIG. 35 shows an example of a file including multiple tracks according to embodiments of the present disclosure.
- FIGS. 36 and 37 show flowcharts for embodiments supporting temporal scalability.
- FIGS. 38 and 39 are flowcharts illustrating examples of determining whether to signal and acquire frame rate information.
- FIGS. 40 and 41 show flowcharts for embodiments that can avoid the duplicate signaling problem.
- The terms first and second are used only to distinguish one element from another, and do not limit the order or importance of elements unless otherwise specified. Accordingly, within the scope of the present disclosure, a first component in one embodiment may be referred to as a second component in another embodiment, and similarly, a second component in one embodiment may also be referred to as a first component in another embodiment.
- Components that are distinguished from each other are so distinguished in order to clearly explain their respective characteristics, and this does not necessarily mean that the components are separated. That is, a plurality of components may be integrated into a single hardware or software unit, or a single component may be distributed across a plurality of hardware or software units. Accordingly, such integrated or distributed embodiments are included in the scope of the present disclosure even if not mentioned separately.
- Components described in various embodiments are not necessarily essential components, and some may be optional. Accordingly, an embodiment comprising a subset of the components described in one embodiment is also included in the scope of the present disclosure. In addition, embodiments including other components in addition to the components described in the various embodiments are also included in the scope of the present disclosure.
- The present disclosure relates to encoding and decoding of point cloud-related data, and terms used in the present disclosure may have the common meanings used in the technical field to which the present disclosure belongs, unless newly defined herein.
- “/” and “,” may be interpreted as “and/or”.
- “A/B” and “A, B” may be interpreted as “A and/or B”.
- “A/B/C” and “A, B, C” may mean “at least one of A, B and/or C”.
- This disclosure relates to compression of point cloud related data.
- Various methods or embodiments of the present disclosure may be applied to a point cloud compression or point cloud coding (PCC) standard (ex. G-PCC or V-PCC standard) of the Moving Picture Experts Group (MPEG) or a next-generation video/image coding standard.
- A “point cloud” may mean a set of points located in a 3D space.
- “Point cloud content” is content expressed as a point cloud and may mean “point cloud video/image”.
- In the present disclosure, “point cloud video/image” is referred to as “point cloud video”.
- A point cloud video may include one or more frames, and one frame may be a still image or a picture. Accordingly, a point cloud video may include a point cloud image/frame/picture, and may be referred to as a “point cloud image”, “point cloud frame”, or “point cloud picture”.
- Point cloud data may mean data or information related to each point in a point cloud.
- Point cloud data may include geometry and/or attributes.
- Point cloud data may further include metadata.
- Point cloud data may be referred to as “point cloud content data”, “point cloud video data”, or the like.
- Point cloud data may also be referred to as “point cloud content”, “point cloud video”, “G-PCC data”, and the like.
- A point cloud object corresponding to point cloud data may be represented in a box shape based on a coordinate system, and this box shape based on the coordinate system may be referred to as a bounding box. That is, the bounding box may be a rectangular cuboid capable of containing all points of a point cloud, i.e., a rectangular cuboid enclosing a source point cloud frame.
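As a minimal illustration of the bounding-box notion, the smallest axis-aligned rectangular cuboid containing a set of points can be computed as:

```python
def bounding_box(points):
    """Axis-aligned bounding box: the min and max corners of the smallest
    rectangular cuboid containing all points of the cloud."""
    xs, ys, zs = zip(*points)
    return (min(xs), min(ys), min(zs)), (max(xs), max(ys), max(zs))

min_corner, max_corner = bounding_box([(0, 0, 0), (1, 5, 2), (-3, 2, 7)])
```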
- The geometry includes the position (or position information) of each point. The position includes parameters representing a three-dimensional coordinate system (e.g., a coordinate system consisting of an x-axis, a y-axis, and a z-axis), for example, an x-axis value, a y-axis value, and a z-axis value. Geometry may be referred to as “geometry information”.
- The attribute may include a property of each point, and this property may include one or more of texture information, color (RGB or YCbCr), reflectance (r), transparency, and the like of each point. Attributes may be referred to as “attribute information”.
- Metadata may include various data related to acquisition in the acquisition process described later.
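A point's geometry and attributes, as described above, can be modeled minimally as follows (the field set is illustrative; real point clouds may carry other attributes):

```python
from dataclasses import dataclass

@dataclass
class Point:
    # Geometry: position in the 3D coordinate system.
    x: float
    y: float
    z: float
    # Attributes: per-point properties such as color and reflectance.
    r: int = 0
    g: int = 0
    b: int = 0
    reflectance: float = 0.0

p = Point(1.0, 2.0, 3.0, r=255, g=128, b=64, reflectance=0.5)
```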
- FIG. 1 shows an example of a system for providing point cloud content (hereinafter referred to as a 'point cloud content providing system') according to embodiments of the present disclosure.
- FIG. 2 shows an example of a process in which a point cloud content providing system provides point cloud content.
- the point cloud content providing system may include a transmission device 10 and a reception device 20 .
- Through the operations of the transmission device 10 and the reception device 20, the point cloud content providing system may perform the acquisition process (S20), encoding process (S21), transmission process (S22), decoding process (S23), rendering process (S24), and/or feedback process (S25) illustrated in FIG. 2.
- The transmission device 10 may acquire point cloud data and output a bitstream through a series of processes (e.g., an encoding process) on the acquired (original) point cloud data.
- the point cloud data may be output in the form of a bitstream through an encoding process.
- the transmission device 10 may transmit the output bitstream in the form of a file or streaming (streaming segment) to the reception device 20 through a digital storage medium or a network.
- Digital storage media may include various storage media such as USB, SD, CD, DVD, Blu-ray, HDD, and SSD.
- The reception device 20 may process (e.g., decode or restore) the received data (e.g., encoded point cloud data) back into the original point cloud data and render it.
- Point cloud content can be provided to the user through these processes, and the present disclosure can provide various embodiments required to effectively perform these series of processes.
- the transmission device 10 may include an acquisition unit 11, an encoding unit 12, an encapsulation processing unit 13, and a transmission unit 14, and the reception device 20 may include a receiving unit 21, a decapsulation processing unit 22, a decoding unit 23 and a rendering unit 24.
- The acquisition unit 11 may perform a process (S20) of acquiring a point cloud video through a capture, synthesis, or generation process. Accordingly, the acquisition unit 11 may be referred to as a “point cloud video acquisition unit”.
- Point cloud data (geometry and/or attributes, etc.) for a plurality of points may be generated by the acquisition process (S20).
- Metadata related to point cloud video acquisition may be generated through the acquisition process (S20).
- Mesh data (e.g., triangle data) representing connection information between points may be generated by the acquisition process (S20).
- The metadata may include initial viewing orientation metadata.
- The initial viewing orientation metadata may indicate whether the point cloud data represents data viewed from the front or from the back.
- The metadata may be referred to as “auxiliary data”, i.e., metadata about the point cloud.
- The acquired point cloud video may include PLY (Polygon File Format, also known as the Stanford Triangle Format) files. Since a point cloud video has one or more frames, the acquired point cloud video may include one or more PLY files.
- A PLY file may include the point cloud data of each point.
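The per-point data of an ASCII-format PLY file can be read roughly as follows (a simplified sketch that parses only x/y/z vertex positions and ignores other properties and binary encodings):

```python
def read_ascii_ply(lines):
    """Minimal reader for an ASCII PLY file given as a list of lines:
    scan the header for the vertex count, then read x/y/z per vertex."""
    it = iter(lines)
    assert next(it).strip() == "ply"
    n_vertices = 0
    for line in it:
        line = line.strip()
        if line.startswith("element vertex"):
            n_vertices = int(line.split()[-1])
        elif line == "end_header":
            break
    points = []
    for _ in range(n_vertices):
        x, y, z = map(float, next(it).split()[:3])
        points.append((x, y, z))
    return points

ply = ["ply", "format ascii 1.0", "element vertex 2",
       "property float x", "property float y", "property float z",
       "end_header", "0 0 0", "1.5 2.0 3.0"]
pts = read_ascii_ply(ply)
```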
- The acquisition unit 11 may consist of a combination of camera equipment capable of obtaining depth information and an RGB camera capable of extracting color information corresponding to the depth information.
- The camera equipment capable of obtaining depth information may be a combination of an infrared pattern projector and an infrared camera.
- The acquisition unit 11 may also consist of LiDAR, which may use a radar-like system that measures the position coordinates of a reflector by measuring the time it takes for a laser pulse to be reflected and return.
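The time-of-flight principle stated above amounts to range = speed of light × round-trip time / 2:

```python
SPEED_OF_LIGHT_M_S = 299_792_458.0

def lidar_range_m(round_trip_time_s):
    """Distance to the reflector from the pulse's round-trip time:
    the pulse travels to the target and back, so halve the total path."""
    return SPEED_OF_LIGHT_M_S * round_trip_time_s / 2.0

# A pulse that returns after 1 microsecond corresponds to roughly 150 m.
distance = lidar_range_m(1e-6)
```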
- The acquisition unit 11 may extract the shape of a geometry composed of points in a 3D space from the depth information, and may extract an attribute representing the color or reflectance of each point from the RGB information.
- Methods for extracting (or capturing, acquiring, etc.) a point cloud video (or point cloud data) include an inward-facing method for capturing a central object and an outward-facing method for capturing the external environment. Examples of the inward-facing and outward-facing methods are shown in FIG. 3: (a) of FIG. 3 is an example of the inward-facing method, and (b) of FIG. 3 is an example of the outward-facing method.
- The outward-facing method may be used when configuring the current surrounding environment as point cloud content in a vehicle, as in autonomous driving.
- The inward-facing method may be used when configuring point cloud content that allows users to freely view a key object, such as a character, player, object, or actor, through 360 degrees in a VR/AR environment. When constructing point cloud content through multiple cameras, a camera calibration process may be performed before content is captured in order to set a global coordinate system among the cameras. A method of synthesizing an arbitrary point cloud video based on the captured point cloud video may also be utilized.
- post-processing may be required to improve the quality of the captured point cloud content.
- For example, maximum/minimum depth values can be adjusted within the range provided by the camera equipment, but post-processing may still be required to remove unwanted areas (e.g., background) or point data in unwanted areas.
- post-processing for recognizing connected spaces and filling spatial holes may be performed.
- Post-processing may also be performed to integrate point cloud data extracted from cameras sharing a spatial coordinate system into a single piece of content, through a process of converting each point to a global coordinate system based on the position coordinates of each camera. Through this, a single wide-area point cloud content may be generated, or point cloud content with a high density of points may be obtained.
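The per-point conversion to a global coordinate system can be sketched as a rigid transform p_global = R · p_local + t, using each camera's pose (a simplified illustration with plain Python lists):

```python
def camera_to_global(points, rotation, translation):
    """Map points from one camera's local frame into the shared global
    frame: p_global = R * p_local + t."""
    transformed = []
    for p in points:
        g = tuple(
            sum(rotation[i][j] * p[j] for j in range(3)) + translation[i]
            for i in range(3)
        )
        transformed.append(g)
    return transformed

# A camera with identity orientation placed 10 units along the x-axis.
identity = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
merged = camera_to_global([(1, 2, 3)], identity, (10, 0, 0))
```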
- The encoder 12 may perform an encoding process (S21) of encoding the data (geometry, attributes and/or metadata and/or mesh data, etc.) generated by the acquisition unit 11 into one or more bitstreams. Accordingly, the encoder 12 may be referred to as a “point cloud video encoder”. The encoder 12 may encode the data generated by the acquisition unit 11 serially or in parallel.
- the encoding process (S21) performed by the encoder 12 may be geometry-based point cloud compression (G-PCC).
- the encoder 12 may perform a series of procedures such as prediction, transformation, quantization, and entropy coding for compression and coding efficiency.
- Encoded point cloud data may be output in the form of a bitstream.
- the encoder 12 may encode the point cloud data by dividing it into geometry and attributes as will be described later.
- the output bitstream may include a geometry bitstream including encoded geometry and an attribute bitstream including encoded attributes.
- the output bitstream may further include one or more of a metadata bitstream including meta data, an auxiliary bitstream including auxiliary data, and a mesh data bitstream including mesh data.
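Sub-bitstreams such as the geometry and attribute bitstreams are carried in type-length-value (TLV) encapsulation structures, an example of which appears later in FIG. 32. The packing idea can be sketched as follows (the type codes and field widths here are illustrative only; the normative syntax is defined by the G-PCC specification):

```python
import struct

def tlv_encapsulate(payload_type, payload):
    """One type-length-value unit: a 1-byte payload type, a 4-byte
    big-endian payload length, then the payload itself."""
    return struct.pack(">BI", payload_type, len(payload)) + payload

# Illustrative type codes only -- not the normative values.
GEOMETRY_DATA, ATTRIBUTE_DATA = 2, 4
stream = (tlv_encapsulate(GEOMETRY_DATA, b"\x01\x02")
          + tlv_encapsulate(ATTRIBUTE_DATA, b"\x03"))
```

The length field lets a reader skip units it does not need, which is what makes component-selective access practical.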
- the encoding process (S21) will be described in more detail below.
- a bitstream containing encoded point cloud data may be referred to as a 'point cloud bitstream' or a 'point cloud video bitstream'.
- The encapsulation processor 13 may perform a process of encapsulating one or more bitstreams output from the encoder 12 in the form of a file or a segment. Accordingly, the encapsulation processor 13 may be referred to as a “file/segment encapsulation module”.
- Although the encapsulation processing unit 13 is shown as a component/module separate from the transmission unit 14, according to embodiments the encapsulation processing unit 13 may be included in the transmission unit 14.
- The encapsulation processing unit 13 may encapsulate the corresponding data in a file format such as ISOBMFF (ISO Base Media File Format), or may process the data in the form of DASH segments or the like.
- The encapsulation processing unit 13 may include metadata in the file format. The metadata may be included, for example, in boxes at various levels of the ISOBMFF file format, or as data in a separate track within the file.
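The ISOBMFF box structure mentioned above (a size field, a four-character type, and a payload that may itself nest further boxes) can be sketched as follows; this is a simplified illustration that handles only the compact 32-bit size form:

```python
import struct

def make_box(box_type, payload):
    """A basic ISOBMFF box: 4-byte big-endian size (header included),
    a 4-character type, then the payload. The 64-bit 'largesize'
    variant is not handled in this sketch."""
    assert len(box_type) == 4
    return struct.pack(">I4s", 8 + len(payload), box_type) + payload

# Nesting works by using one box's bytes as another box's payload.
inner = make_box(b"free", b"")
outer = make_box(b"meta", inner)
```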
- According to embodiments, the encapsulation processing unit 13 may encapsulate the metadata itself into a file. The metadata processed by the encapsulation processing unit 13 may be received from a metadata processing unit not shown in the drawing.
- the meta data processing unit may be included in the encoding unit 12 or may be configured as a separate component/module.
- the transmission unit 14 may perform a transmission process (S22) of applying processing (processing for transmission) according to a file format to the 'encapsulated point cloud bitstream'.
- The transmission unit 14 may transmit the bitstream or a file/segment including the corresponding bitstream to the reception unit 21 of the reception device 20 through a digital storage medium or a network. Accordingly, the transmission unit 14 may be referred to as a “transmitter” or a “communication module”.
- the transmission unit 14 may process point cloud data according to an arbitrary transmission protocol.
- 'processing the point cloud data according to an arbitrary transmission protocol' may be 'processing for transmission'.
- Processing for transmission may include processing for delivery through a broadcasting network, processing for delivery through a broadband, and the like.
- The transmission unit 14 may receive not only the point cloud data but also metadata from the metadata processing unit, and may apply processing for transmission to this metadata as well.
- processing for transmission may be performed in a transmission processing unit, and the transmission processing unit may be included in the transmission unit 14 or configured as a component/module separate from the transmission unit 14.
- The receiving unit 21 may receive the bitstream transmitted by the transmission device 10 or a file/segment including the corresponding bitstream. Depending on the channel used for transmission, the receiving unit 21 may receive the bitstream or the file/segment including it through a broadcasting network or through a broadband connection. Alternatively, the receiving unit 21 may receive the bitstream or the file/segment including it through a digital storage medium.
- the receiver 21 may process the received bitstream or a file/segment including the corresponding bitstream according to a transmission protocol.
- the receiving unit 21 may perform a reverse process of transmission processing (processing for transmission) to correspond to the processing for transmission performed by the transmission device 10.
- the receiving unit 21 may transfer encoded point cloud data to the decapsulation processing unit 22 and transfer meta data to the meta data parsing unit.
- Meta data may be in the form of a signaling table. According to embodiments, the reverse process of processing for transmission may be performed in the receiving processing unit.
- Each of the reception processing unit, the decapsulation processing unit 22, and the meta data parsing unit may be included in the reception unit 21 or configured as components/modules separate from the reception unit 21.
- the decapsulation processing unit 22 may decapsulate point cloud data (i.e., a bitstream in the form of a file) received from the reception unit 21 or the reception processing unit. Accordingly, the decapsulation processing unit 22 may be referred to as a 'file/segment decapsulation module'.
- the decapsulation processing unit 22 may obtain a point cloud bitstream or a meta data bitstream by decapsulating files according to ISOBMFF or the like.
- metadata may be included in the point cloud bitstream.
- the obtained point cloud bitstream may be delivered to the decoder 23, and the obtained metadata bitstream may be delivered to the metadata processor.
- the meta data processing unit may be included in the decoding unit 23 or may be configured as a separate component/module.
- Meta data acquired by the decapsulation processing unit 22 may be in the form of a box or track in a file format.
- the decapsulation processing unit 22 may receive meta data necessary for decapsulation from the meta data processing unit if necessary. Meta data may be transferred to the decoder 23 and used in the decoding process (S23), or may be transferred to the rendering unit 24 and used in the rendering process (S24).
- the decoder 23 may perform a decoding process (S23) of receiving the bitstream and decoding the point cloud bitstream (encoded point cloud data) by performing an operation corresponding to the operation of the encoder 12. Accordingly, the decoder 23 may be referred to as a 'point cloud video decoder'.
- the decoder 23 may decode the point cloud data by dividing it into geometry and attribute. For example, the decoder 23 may restore (decode) geometry from a geometry bitstream included in the point cloud bitstream, and may restore (decode) attributes based on an attribute bitstream included in the point cloud bitstream and the restored geometry. A 3D point cloud video/image may be reconstructed based on position information according to the restored geometry and an attribute (color or texture, etc.) according to the decoded attribute.
- the decoding process (S23) will be described in more detail below.
- the rendering unit 24 may perform a rendering process (S24) of rendering the restored point cloud video. Accordingly, the rendering unit 24 may be referred to as a 'renderer'.
- the rendering process (S24) may refer to a process of rendering and displaying point cloud content on a 3D space.
- rendering may be performed according to a desired rendering method based on position information and attribute information of points decoded through the decoding process.
- Points of the point cloud content may be rendered as a vertex with a certain thickness, a cube with a specific minimum size centered at the vertex position, or a circle centered at the vertex position.
- a user may view all or part of the rendered result through a VR/AR display or a general display.
- the rendered video may be displayed through the display unit.
- the feedback process (S25) may include a process of transferring various feedback information that may be obtained in the rendering process (S24) or the display process to the transmitting device 10 or to other components in the receiving device 20.
- the feedback process (S25) may be performed by one or more of the components included in the receiving device 20 of FIG. 1, or may be performed by one or more of the components shown in FIGS. 10 and 11.
- the feedback process (S25) may be performed by a 'feedback unit' or a 'sensing/tracking unit'.
- Interactivity for point cloud content consumption may be provided through the feedback process (S25).
- head orientation information, viewport information representing a region currently viewed by the user, and the like may be fed back.
- the user may interact with things implemented in the VR/AR/MR/autonomous driving environment.
- information related to the interaction may be transferred to the transmission device 10 or even to the service provider side in the feedback process (S25).
- the feedback process (S25) may not be performed.
- Head orientation information may refer to information about a user's head position, angle, movement, and the like. Based on this information, information about an area that the user is currently viewing within the point cloud video, that is, viewport information may be calculated.
- the viewport information may be information about an area currently viewed by the user in the point cloud video.
- a viewpoint is a point at which a user is viewing a point cloud video, and may mean a central point of a viewport area. That is, the viewport is an area centered on the viewpoint, and the size and shape occupied by the area may be determined by a field of view (FOV).
- Gaze analysis may be performed on the receiving side (receiving device) and transmitted to the transmitting side (transmitting device) through a feedback channel.
- a device such as a VR/AR/MR display may extract a viewport area based on a user's head position/direction, a vertical or horizontal FOV supported by the device, and the like.
- the feedback information may be consumed at the receiving side (receiving device) as well as being delivered to the transmitting side (transmitting device). That is, a decoding process, a rendering process, etc. of a receiving side (receiving device) may be performed using the feedback information.
- the receiving device 20 may preferentially decode and render only the point cloud video of the region currently viewed by the user by using the head orientation information and/or the viewport information.
- the receiving unit 21 may receive all point cloud data or receive point cloud data indicated by orientation information and/or viewport information based on the orientation information and/or viewport information.
- the decapsulation processing unit 22 may decapsulate all point cloud data or may decapsulate point cloud data indicated by orientation information and/or viewport information based on orientation information and/or viewport information.
- the decoder 23 may decode all point cloud data or may decode point cloud data indicated by orientation information and/or viewport information based on the orientation information and/or viewport information.
- FIG. 4 shows an example of a point cloud encoding apparatus 400 according to embodiments of the present disclosure.
- the point cloud encoding device 400 of FIG. 4 may correspond to the encoder 12 of FIG. 1 in configuration and function.
- the point cloud encoding apparatus 400 may include a coordinate system conversion unit 405, a geometry quantization unit 410, an octree analysis unit 415, an approximation unit 420, a geometry encoding unit 425, a restoration unit 430, an attribute transformation unit 440, a RAHT transformation unit 445, an LOD generation unit 450, a lifting unit 455, an attribute quantization unit 460, an attribute encoding unit 465, and/or a color conversion unit 435.
- the point cloud data acquired by the acquisition unit 11 may go through processes for adjusting the quality (e.g., lossless, lossy, or near-lossless) of the point cloud content according to network conditions or applications.
- each point of the obtained point cloud content may be transmitted without loss, but in this case, real-time streaming may not be possible because the size of the point cloud content is large. Therefore, in order to smoothly provide the point cloud content, a process of reconstructing the point cloud content according to the maximum target bitrate is required.
- Processes for adjusting the quality of point cloud content may include a process of reconstructing and encoding position information (position information included in geometry information) or color information (color information included in attribute information) of points.
- a process of reconstructing and encoding position information of points may be referred to as geometry coding, and a process of reconstructing and encoding attribute information associated with each point may be referred to as attribute coding.
- Geometry coding may include a geometry quantization process, a voxelization process, an octree analysis process, an approximation process, a geometry encoding process, and/or a coordinate system conversion process. Also, geometry coding may further include a geometry restoration process. Attribute coding may include a color transformation process, an attribute transformation process, a prediction transformation process, a lifting transformation process, a RAHT transformation process, an attribute quantization process, an attribute encoding process, and the like.
- the process of converting the coordinate system may correspond to a process of converting a coordinate system for positions of points. Accordingly, the process of transforming the coordinate system may be referred to as 'transform coordinates'.
- the coordinate system conversion process may be performed by the coordinate system conversion unit 405 .
- the coordinate system conversion unit 405 may convert the positions of the points from the global space coordinate system into position information of a 3-dimensional space (e.g., a 3-dimensional space represented by the X-axis, Y-axis, and Z-axis coordinate system).
- Position information in a 3D space may be referred to as 'geometry information'.
- the geometry quantization process may correspond to a process of quantizing position information of points and may be performed by the geometry quantization unit 410 .
- the geometry quantization unit 410 may search for the minimum (x, y, z) values among the position information of the points, and may subtract those minimum values from the position information of each point.
- the geometry quantization unit 410 may perform the quantization process by multiplying the subtracted values by a preset quantization scale value and then adjusting (lowering or raising) the results to near integer values.
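The subtraction and scaling steps above can be sketched as follows (a minimal illustration; the function name and the scale parameter are assumptions, not part of this disclosure):

```python
def quantize_positions(points, scale):
    """Subtract the minimum (x, y, z) from every point, multiply by the
    quantization scale, and round to the nearest integer."""
    # Minimum value per axis over all points.
    mins = [min(p[i] for p in points) for i in range(3)]
    # Shift, scale, and round each coordinate.
    return [tuple(round((p[i] - mins[i]) * scale) for i in range(3))
            for p in points]
```

For example, `quantize_positions([(1.0, 2.0, 3.0), (4.0, 6.0, 9.0)], 2)` shifts the first point to the origin and scales the second to `(6, 8, 12)`.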
- the voxelization process may correspond to a process of matching geometry information quantized through the quantization process to a specific voxel existing in a 3D space.
- a voxelization process may also be performed by the geometry quantization unit 410 .
- the geometry quantization unit 410 may perform octree-based voxelization based on position information of points in order to reconstruct each point to which the quantization process is applied.
- a voxel may mean a space for storing information of points existing in 3D, similar to a pixel, which is a minimum unit having information of a 2D image/video.
- Voxel is a compound word combining volume and pixel.
- a voxel may estimate spatial coordinates from a positional relationship with a voxel group, and may have color or reflectance information like a pixel.
- More than one point may exist (be matched) in one voxel. That is, information related to several points may exist in one voxel. Alternatively, information related to a plurality of points included in one voxel may be integrated into one point information. These adjustments may be performed optionally. In the case of integrating one voxel into one point information, the position value of the center point of the voxel can be set based on the position values of the points existing in the voxel, and a related attribute conversion process needs to be performed.
- the attribute conversion process may set the attribute to, for example, the average value of the color or reflectance of the points included in the voxel, or of the points adjacent to the center position of the voxel within a specific radius.
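As an illustration of integrating the points of a voxel into one point information with an averaged attribute, the following sketch groups points by voxel and averages their colors (all names and the voxel-size parameter are hypothetical, not taken from this disclosure):

```python
from collections import defaultdict

def merge_points_per_voxel(points, colors, voxel_size):
    """Group points into voxels and keep one averaged color per voxel."""
    buckets = defaultdict(list)
    for p, c in zip(points, colors):
        # Integer voxel index per axis.
        key = tuple(int(p[i] // voxel_size) for i in range(3))
        buckets[key].append(c)
    # Average each color channel within a voxel.
    return {key: tuple(sum(ch) / len(cs) for ch in zip(*cs))
            for key, cs in buckets.items()}
```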
- the octree analyzer 415 may use an octree to efficiently manage the region/position of a voxel.
- An example of an octree according to embodiments of the present disclosure is shown in (a) of FIG. 6 .
- a quadtree can be used as a data structure to divide an area until a leaf node becomes a pixel and efficiently manage the size and location of the area.
- the present disclosure may apply the same method to efficiently manage a 3D space by position and size of space.
- 8 spaces may be created by dividing the 3D space based on the x-axis, y-axis, and z-axis.
- 8 spaces may be created for each small space again.
- the octree analyzer 415 may divide regions until the leaf nodes become voxels, and may use an octree data structure capable of managing eight child-node regions in order to efficiently manage the size and position of each region.
- the total volume of the octree must be set to (0, 0, 0) to (2^d, 2^d, 2^d).
- 2^d is set to a value constituting the smallest bounding box enclosing all points of the point cloud, and d is the depth of the octree.
- the expression for obtaining the value of d may be as in Equation 1 below, where (x_n, y_n, z_n) is the position value of the points to which the quantization process has been applied.
- Equation 1: d = Ceil(Log2(Max(x_n, y_n, z_n, n = 1, ..., N) + 1))
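Assuming the usual reading of Equation 1, the octree depth can be computed as follows (an illustrative sketch, not the patented implementation):

```python
import math

def octree_depth(points):
    """Depth d such that the cube (0,0,0)-(2^d, 2^d, 2^d) bounds all
    quantized point positions: d = ceil(log2(max coordinate + 1))."""
    max_coord = max(max(p) for p in points)
    return math.ceil(math.log2(max_coord + 1))
```

For instance, a point cloud whose largest coordinate is 9 needs depth 4, since 2^3 = 8 cannot enclose it.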
- An octree may be expressed as an occupancy code, and an example of an occupancy code according to embodiments of the present disclosure is shown in (b) of FIG. 6 .
- the octree analyzer 415 may express the occupancy code of the corresponding node as 1 if a point is included in each node, and express the occupancy code of the corresponding node as 0 if the point is not included.
- Each node may have an 8-bit bitmap indicating whether or not each of its 8 child nodes is occupied. For example, since the occupancy code of the node corresponding to the second depth (1-depth) of FIG. 6 (b) is 00100001, the spaces (voxels or regions) corresponding to the third and eighth child nodes may each contain at least one point. In addition, since the occupancy code of the child nodes (leaf nodes) of the third node is 10000111, the spaces corresponding to the first, sixth, seventh, and eighth leaf nodes may each include at least one point.
- Likewise, since the occupancy code of the child nodes (leaf nodes) of the eighth node is 01001111, the spaces corresponding to the second, fifth, sixth, seventh, and eighth leaf nodes may each include at least one point.
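The occupancy-code reading used in the example above can be illustrated as follows (the helper name is hypothetical):

```python
def occupied_children(occupancy_code):
    """Return the 1-based indices of occupied child nodes from an
    8-bit occupancy code string, read left to right."""
    return [i + 1 for i, bit in enumerate(occupancy_code) if bit == '1']
```

Applied to the codes in the example, `'00100001'` yields the third and eighth children, and `'10000111'` yields the first, sixth, seventh, and eighth.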
- the geometry encoding process may correspond to a process of performing entropy coding on occupancy codes.
- the geometry encoding process may be performed by the geometry encoding unit 425 .
- the geometry encoding unit 425 may perform entropy coding on the occupancy code.
- the generated occupancy code may be directly encoded or may be encoded through an intra/inter coding process to increase compression efficiency.
- the receiving device 20 may reconstruct the octree through the occupancy code.
- Positions of points in a specific region may be directly transmitted, or positions of points in a specific region may be reconstructed based on voxels using a surface model.
- a mode for directly transmitting the location of each point for a specific node may be a direct mode.
- the point cloud encoding apparatus 400 may check whether conditions for enabling the direct mode are satisfied.
- the conditions for enabling the direct mode are: 1) the option to use the direct mode must be enabled, 2) the specific node must not correspond to a leaf node, 3) points fewer than a threshold must exist within the specific node, and 4) the total number of points to be directly transmitted must not exceed a limit.
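The four conditions can be checked together as in the following sketch (all parameter names are illustrative assumptions, not identifiers from this disclosure):

```python
def direct_mode_enabled(use_direct_mode, is_leaf_node, num_points_in_node,
                        point_threshold, total_direct_points, total_limit):
    """Check the four direct-mode conditions listed above."""
    return (use_direct_mode                       # 1) option enabled
            and not is_leaf_node                  # 2) not a leaf node
            and num_points_in_node < point_threshold  # 3) below threshold
            and total_direct_points <= total_limit)   # 4) within total limit
```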
- the point cloud encoding apparatus 400 may directly entropy code and transmit the position value of a point of a corresponding specific node through the geometry encoding unit 425 .
- a mode for reconstructing a position of a point in a specific area based on a voxel using a surface model may be a trisoup mode.
- the trisoup mode may be performed by the approximation unit 420.
- the approximation unit 420 may determine a specific level of the octree, and from the determined specific level, may reconstruct the positions of points in the node region on a voxel basis using a surface model.
- the point cloud encoding apparatus 400 may selectively apply the trisoup mode. Specifically, when using the trisoup mode, the point cloud encoding apparatus 400 may designate a level (specific level) to which the trisoup mode is applied. For example, if the designated specific level is equal to the depth (d) of the octree, the trisoup mode may not be applied. That is, the designated specific level must be less than the depth value of the octree.
- a 3D cube area of nodes of a specified level is called a block, and one block may include one or more voxels.
- a block or voxel may correspond to a brick.
- Each block may have 12 edges, and the approximation unit 420 may check whether each edge is adjacent to an occupied voxel having a point.
- Each edge may be adjacent to several occupied voxels.
- a specific position of an edge adjacent to a voxel is called a vertex, and the approximation unit 420 may determine an average position of corresponding positions as a vertex when several occupied voxels are adjacent to one edge.
- the point cloud encoding apparatus 400 may entropy-code the starting point (x, y, z) of the edge, the direction vector (Δx, Δy, Δz) of the edge, and the position value of the vertex (a relative position value within the edge) through the geometry encoding unit 425.
- the geometry restoration process may correspond to a process of generating a restored geometry by reconstructing the octree and/or the approximated octree.
- the geometry restoration process may be performed by the restoration unit 430 .
- the restoration unit 430 may perform a geometry restoration process through triangle reconstruction, up-sampling, voxelization, and the like.
- the reconstruction unit 430 may reconstruct a triangle based on the starting point of the edge, the direction vector of the edge, and the position value of the vertex. To this end, the restoration unit 430 calculates the centroid value μ of the vertices as in Equation 2 below, subtracts the centroid from the value of each vertex as in Equation 3 below, and then derives the sum of the squares of the subtracted values for each axis as in Equation 4 below.
- Equation 2: μ = (1/n) Σ x_i (i = 1, ..., n)
- Equation 3: x̄_i = x_i - μ
- Equation 4: σ² = Σ x̄_i² (i = 1, ..., n)
- the restoration unit 430 may obtain a minimum value of the added values and perform a projection process along an axis having the minimum value.
- the reconstruction unit 430 may project each vertex along the x-axis based on the center of the block and project the vertices onto the (y, z) plane.
- the restoration unit 430 may obtain the θ value of each vertex through atan2(b_i, a_i) and may align the vertices based on the θ values.
- the first triangle (1, 2, 3) may be composed of the first, second, and third vertices among the aligned vertices, and the second triangle (3, 4, 1) may be composed of the third, fourth, and first vertices.
- the reconstructor 430 may perform an upsampling process to add points in the middle along the edges of the triangle and convert them into voxels.
- the restoration unit 430 may generate additional points based on the upsampling factor and the width of the block. These points can be called refined vertices.
- the restoration unit 430 may voxelize the refined vertices, and the point cloud encoding device 400 may perform attribute coding based on the voxelized position values.
- the geometry encoding unit 425 may increase compression efficiency by applying context adaptive arithmetic coding.
- the geometry encoding unit 425 may directly entropy code the occupancy code using the arithmetic code.
- the geometry encoding unit 425 may adaptively perform encoding based on the occupancy of neighboring nodes (intra-coding), or may adaptively perform encoding based on the occupancy code of the previous frame (inter-coding).
- a frame may mean a set of point cloud data generated at the same time. Since intra-coding and inter-coding are optional processes, they may be omitted.
- the geometry encoding unit 425 may first obtain a neighbor pattern value by using the occupancy of neighboring nodes.
- An example of a pattern of neighboring nodes is shown in FIG. 7 .
- FIG. 7(a) shows a cube corresponding to a node (a cube located in the middle) and six cubes (neighboring nodes) sharing at least one face with the cube.
- Nodes shown in the figure are nodes of the same depth (depth).
- the numbers shown in the figure represent weights (1, 2, 4, 8, 16, 32, etc.) associated with each of the six nodes. Each weight is sequentially assigned according to the locations of neighboring nodes.
- the neighbor node pattern value is the sum of the weights of the occupied neighbor nodes (neighbor nodes with points). Accordingly, the neighbor node pattern value may have a value from 0 to 63. If the neighbor node pattern value is 0, it indicates that there is no node having a point (occupied node) among the neighbor nodes of the corresponding node. When the neighbor node pattern value is 63, it indicates that all of the neighbor nodes are occupied nodes. In (b) of FIG. 7, since the neighboring nodes to which weights 1, 2, 4, and 8 are assigned are occupied nodes, the neighbor node pattern value is 15, the sum of 1, 2, 4, and 8.
- the geometry encoding unit 425 may perform coding according to the neighbor node pattern value. For example, when the neighbor node pattern value is 63, the geometry encoding unit 425 may perform 64 types of coding. According to embodiments, the geometry encoding unit 425 may reduce coding complexity by changing the neighbor node pattern value; for example, the 64 pattern values may be mapped to 10 or 6 values based on a table.
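The neighbor node pattern computation described above can be sketched as follows (the flag ordering is assumed to follow the weight assignment 1, 2, 4, 8, 16, 32):

```python
def neighbor_pattern(occupied_flags):
    """Sum the weights (1, 2, 4, 8, 16, 32) of the occupied
    neighbor nodes to obtain the neighbor node pattern value."""
    weights = [1, 2, 4, 8, 16, 32]
    return sum(w for w, occ in zip(weights, occupied_flags) if occ)
```

With the neighbors of weights 1, 2, 4, and 8 occupied, as in FIG. 7 (b), the pattern value is 15; all six occupied gives 63, and none gives 0.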
- Attribute coding may correspond to a process of coding attribute information based on the restored (reconstructed) geometry and the geometry before coordinate system conversion (original geometry). Since an attribute may be dependent on geometry, the reconstructed geometry may be utilized for attribute coding.
- attributes may include color, reflectance, and the like.
- the same attribute coding method may be applied to information or parameters included in attributes. Color has three components and reflectance has one component, and each component can be processed independently.
- Attribute coding may include a color transformation process, an attribute transformation process, a prediction transformation process, a lifting transformation process, a RAHT transformation process, an attribute quantization process, an attribute encoding process, and the like.
- the prediction transformation process, the lifting transformation process, and the RAHT transformation process may be selectively used, or a combination of one or more may be used.
- the color conversion process may correspond to a process of converting a color format within an attribute into another format.
- a color conversion process may be performed by the color conversion unit 435 . That is, the color conversion unit 435 may convert colors within attributes.
- the color conversion unit 435 may perform a coding operation of converting a color within an attribute from RGB to YCbCr.
- the operation of the color conversion unit 435, that is, the color conversion process, may be optionally applied according to a color value included in an attribute.
- the position values of the points existing in a voxel may be set to the center point of the voxel in order to integrate them into one point information for the corresponding voxel. Accordingly, a process of converting the values of the attributes associated with the corresponding points may be required. Also, an attribute conversion process may be performed even when the trisoup mode is performed.
- the attribute transformation process may correspond to a process of transforming an attribute based on positions on which geometry encoding has not been performed and/or the reconstructed geometry.
- the attribute conversion process may correspond to a process of transforming an attribute of a point at a corresponding position based on a position of a point included in a voxel.
- the attribute conversion process may be performed by the attribute conversion unit 440 .
- the attribute conversion unit 440 may calculate an average value of attribute values of points (neighboring points) adjacent to each other within a specific radius and the central location value of the voxel.
- the attribute transform unit 440 may apply a weight according to a distance from the central location to attribute values and calculate an average value of the weighted attribute values.
- each voxel has a position and a calculated attribute value.
- a K-D tree or Morton code may be utilized when searching for neighboring points existing within a specific position or radius.
- the K-D tree is a binary search tree, and supports a data structure capable of managing points based on location so as to quickly perform a nearest neighbor search (NNS).
- the Morton code can be generated by mixing the bits of the 3D location information (x, y, z) of all points. For example, if the value of (x, y, z) is (5, 9, 1), expressing (5, 9, 1) in bits gives (0101, 1001, 0001), and mixing these bits according to the bit index in z, y, x order gives 010001000111, which is 1095. That is, 1095 becomes the Morton code value of (5, 9, 1). Points are sorted based on the Morton code, and a nearest neighbor search (NNS) may be possible through a depth-first traversal process.
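The worked example above, (5, 9, 1) -> 1095, can be reproduced with the following sketch (the 4-bit width is an assumption matching the example):

```python
def morton_code(x, y, z, bits=4):
    """Interleave the bits of (x, y, z) in z, y, x order per bit
    index, from the most significant bit down."""
    code = 0
    for i in range(bits - 1, -1, -1):
        code = (code << 3) \
            | (((z >> i) & 1) << 2) \
            | (((y >> i) & 1) << 1) \
            | ((x >> i) & 1)
    return code
```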
- the prediction conversion process may correspond to a process of predicting an attribute value of a current point based on attribute values of one or more points (neighboring points) neighboring the current point (a point corresponding to a prediction target).
- the prediction conversion process may be performed by the level of detail (LOD) generating unit 450 .
- Prediction transformation is a method to which an LOD transformation technique is applied, and the LOD generation unit 450 may calculate and set the LOD value of each point based on the LOD distance value of each point.
- An example of the configuration of points according to LOD distance values is shown in FIG. 8.
- the first figure shows the original point cloud content
- the second figure shows the distribution of points with the lowest LOD
- the seventh figure shows the distribution of points with the highest LOD.
- points of the lowest LOD may be sparsely distributed
- points of the highest LOD may be densely distributed. That is, as the LOD increases, the interval (or distance) between points may become shorter.
- Each point existing in the point cloud may be separated for each LOD, and the configuration of points for each LOD may include points belonging to an LOD lower than the corresponding LOD value.
- the composition of points having LOD level 2 may include all points belonging to LOD level 1 and LOD level 2.
- An example of the configuration of points for each LOD is shown in FIG. 9.
- the upper figure of FIG. 9 shows examples of points (P0 to P9) in the point cloud content distributed in a 3D space.
- the original order of FIG. 9 indicates the order of points P0 to P9 before LOD generation, and the LOD-based order of FIG. 9 indicates the order of points according to LOD generation.
- LOD0 may include P0, P5, P4, and P2
- LOD1 may include points of LOD0 and P1, P6, and P3.
- LOD2 may include points of LOD0, points of LOD1, and P9, P8, and P7.
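The cumulative LOD composition above (each LOD includes all points of the lower LODs) can be sketched as:

```python
def lod_points(lod_levels, level):
    """Collect the points visible at a given LOD: all points of the
    requested level plus every lower level, in LOD-based order."""
    result = []
    for l in range(level + 1):
        result.extend(lod_levels[l])
    return result
```

Using the example of FIG. 9, LOD1 contains the four points of LOD0 followed by P1, P6, and P3.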
- the LOD generator 450 may generate a predictor for each point for predictive transformation. Accordingly, if there are N points, N predictors may be generated.
- the neighboring points may be points existing within a distance set for each LOD from the current point.
- the predictor may multiply attribute values of neighboring points by the 'set weight value' and set an average value of the attribute values multiplied by the weight values as the predicted attribute value of the current point.
- An attribute quantization process may be performed on a residual attribute value obtained by subtracting the predicted attribute value of the current point from the attribute value of the current point.
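A minimal sketch of the prediction and residual computation (the weights are supplied directly here as an illustrative assumption; in practice they may be derived from distances to the neighboring points):

```python
def predict_attribute(neighbor_attrs, weights):
    """Weighted average of neighbor attribute values, used as the
    predicted attribute value of the current point."""
    total = sum(weights)
    return sum(a * w for a, w in zip(neighbor_attrs, weights)) / total

def residual(current_attr, predicted_attr):
    """Residual attribute value to be quantized: the current point's
    attribute value minus the predicted attribute value."""
    return current_attr - predicted_attr
```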
- the lifting transformation process may correspond to a process of reconstructing points into a set of detail levels through an LOD generation process.
- the lifting conversion process may be performed by the lifting unit 455 .
- the lifting transformation process may also include a process of generating a predictor for each point, a process of setting the calculated LOD in the predictor, a process of registering neighboring points, and a process of setting weights according to the distances between the current point and its neighboring points.
- the lifting transformation process may be a method of cumulatively applying weights to attribute values.
- a method of cumulatively applying weights to attribute values may be as follows.
- An array QW (quantization weight) for storing weight values for each point may be separately present.
- the initial value of all elements of QW is 1.0.
- the value obtained by multiplying the QW value at the predictor index of each neighboring node (neighboring point) registered in the predictor by the weight of the predictor of the current point is cumulatively added.
- a new weight is derived by multiplying the calculated weight by the weight stored in QW; the new weight is cumulatively added to the updateweight array at the index of the neighboring node, and the value obtained by multiplying the new weight by the attribute value at the index of the neighboring node is cumulatively added to the update array.
- the attribute value of update is divided by the weight value of updateweight of the predictor index, and the result is added to the existing attribute value. This process may be referred to as a lift update process.
- the attribute value updated through the lift update process is additionally multiplied by the weight updated through the lift prediction process (the weight stored in QW), the result (the multiplied value) is quantized, and the quantized value is entropy-coded.
- the RAHT transformation process may correspond to a method of predicting attribute information of nodes at a higher level using attribute information associated with nodes at a lower level of the octree. That is, the RAHT transformation process may correspond to an attribute information intra-coding method through backward scan of an octree.
- the RAHT conversion process may be performed by the RAHT conversion unit 445 .
- the RAHT conversion unit 445 may scan the entire area from the voxels and perform the RAHT conversion process up to the root node while summing (merging) the voxels into larger blocks at each step. Since the RAHT conversion unit 445 performs the RAHT conversion process only on occupied nodes, in the case of an empty node that is not occupied, the RAHT conversion process may be performed on the node of the level immediately above it.
- Equation 5 below shows a RAHT transformation matrix. g_{l,x,y,z} represents the average attribute value of the voxels at level l. g_{l-1,x,y,z} can be calculated from g_{l,2x,y,z} and g_{l,2x+1,y,z}, whose weights are w1 = w_{l,2x,y,z} and w2 = w_{l,2x+1,y,z}.
- Equation 5: [g_{l-1,x,y,z}; h_{l-1,x,y,z}] = T_{w1,w2} [g_{l,2x,y,z}; g_{l,2x+1,y,z}], where T_{w1,w2} = (1/sqrt(w1+w2)) [[sqrt(w1), sqrt(w2)], [-sqrt(w2), sqrt(w1)]]
- g_{l-1,x,y,z} is a low-pass value and can be used in the merging process at the next higher level; h_{l-1,x,y,z} is a high-pass coefficient, which is quantized and entropy-coded at each step. The weight is calculated as w_{l-1,x,y,z} = w_{l,2x,y,z} + w_{l,2x+1,y,z}. The root node is generated through the last g_{1,0,0,0} and g_{1,0,0,1}, as shown in Equation 6 below.
- Equation 6: [gDC; h_{0,0,0,0}] = T_{w1000,w1001} [g_{1,0,0,0}; g_{1,0,0,1}]
- the gDC value may also be quantized and entropy coded like a high-pass coefficient.
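- one merging step of the RAHT conversion process described above can be sketched as follows. Since Equations 5 and 6 are not reproduced in this text, the formulation below uses the commonly cited RAHT transform matrix as an assumption for illustration; the function name and argument layout are hypothetical.

```python
import math

def raht_merge(g1, w1, g2, w2):
    """One RAHT merging step for two occupied sibling nodes.

    Returns the low-pass value (carried to the next higher level with
    weight w1 + w2) and the high-pass coefficient (which would be
    quantized and entropy-coded).
    """
    w = w1 + w2
    a, b = math.sqrt(w1 / w), math.sqrt(w2 / w)
    low = a * g1 + b * g2    # low-pass: weight-normalized average
    high = -b * g1 + a * g2  # high-pass: weight-normalized difference
    return low, high, w
```

For two identical attribute values with equal weights, the high-pass coefficient is zero, which is what makes the transform efficient on smooth regions.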
- the attribute quantization process may correspond to a process of quantizing attributes output from the RAHT conversion unit 445, the LOD generation unit 450, and/or the lifting unit 455.
- the attribute quantization process may be performed by the attribute quantization unit 460 .
- the attribute encoding process may correspond to a process of outputting an attribute bitstream by encoding the quantized attribute.
- the attribute encoding process may be performed by the attribute encoding unit 465 .
- the attribute quantization unit 460 may quantize the residual attribute value obtained by subtracting the predicted attribute value of the current point from the attribute value of the current point.
- An example of the attribute quantization process of the present disclosure is shown in Table 2.
- when there are no neighboring points in the predictor of the current point, the attribute encoding unit 465 may directly entropy-code the attribute value (an attribute value that is not quantized) of the current point. In contrast, when neighboring points exist in the predictor of the current point, the attribute encoding unit 465 may entropy-code the quantized residual attribute value.
- when a value obtained by multiplying an attribute value updated through the lift update process by a weight updated through the lift prediction process (the weight stored in QW) is output from the lifting unit 455, the attribute quantization unit 460 may quantize the result (the multiplied value), and the attribute encoding unit 465 may entropy-encode the quantized value.
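- the branch between direct entropy coding and residual quantization described above can be sketched as follows. The uniform quantizer and the function name are illustrative assumptions; the normative quantization process corresponds to Table 2, which is not reproduced here.

```python
def encode_attribute(attr, predicted, quant_step):
    """Sketch of per-point attribute encoding.

    predicted is None when there are no neighboring points in the
    predictor of the current point; in that case the unquantized
    attribute value is entropy-coded directly. Otherwise the residual
    (attribute minus prediction) is quantized.
    """
    if predicted is None:
        return ('raw', attr)           # entropy-code the raw value
    residual = attr - predicted
    sign = 1 if residual >= 0 else -1  # quantize magnitude, keep sign
    return ('residual', sign * int(abs(residual) / quant_step))
```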
- FIG. 10 shows an example of a point cloud decoding apparatus 1000 according to an embodiment of the present disclosure.
- the point cloud decoding apparatus 1000 of FIG. 10 may correspond to the decoder 23 of FIG. 1 in configuration and function.
- the point cloud decoding apparatus 1000 may perform a decoding process based on data (bitstream) transmitted from the transmission apparatus 10 .
- the decoding process may include restoring (decoding) the point cloud video by performing an operation corresponding to the above-described encoding operation on the bitstream.
- the decoding process may include a geometry decoding process and an attribute decoding process.
- the geometry decoding process may be performed by the geometry decoding unit 1010
- the attribute decoding process may be performed by the attribute decoding unit 1020. That is, the point cloud decoding apparatus 1000 may include a geometry decoding unit 1010 and an attribute decoding unit 1020 .
- the geometry decoding unit 1010 may restore geometry from the geometry bitstream, and the attribute decoding unit 1020 may restore attributes based on the restored geometry and the attribute bitstream.
- the point cloud decoding apparatus 1000 may restore a 3D point cloud video (point cloud data) based on position information according to the restored geometry and attribute information according to the restored attribute.
- the point cloud decoding apparatus 1100 includes a geometry decoding unit 1105, an octree synthesis unit 1110, an approximation synthesis unit 1115, a geometry restoration unit 1120, a coordinate system inverse transformation unit 1125, an attribute decoding unit 1130, an attribute inverse quantization unit 1135, a RAHT transform unit 1150, an LOD generator 1140, an inverse lifting unit 1145, and/or a color inverse transform unit 1155.
- the geometry decoding unit 1105, the octree synthesis unit 1110, the approximation synthesis unit 1115, the geometry restoration unit 1120, and the coordinate system inverse transformation unit 1125 may perform geometry decoding.
- Geometry decoding may be performed in a reverse process to the geometry coding described with reference to FIGS. 1 to 9 .
- Geometry decoding may include direct coding and trisoup geometry decoding. Direct coding and trisoup geometry decoding may be selectively applied.
- the geometry decoding unit 1105 may decode the received geometry bitstream based on Arithmetic coding.
- An operation of the geometry decoding unit 1105 may correspond to a reverse process of an operation performed by the geometry encoding unit 425 .
- the octree synthesizer 1110 may generate an octree by obtaining an occupancy code from a decoded geometry bitstream (or information on geometry secured as a result of decoding). An operation of the octree synthesis unit 1110 may correspond to a reverse process of an operation performed by the octree analysis unit 415 .
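- the occupancy-code-driven octree reconstruction described above can be sketched as follows. This is a minimal breadth-first sketch with hypothetical names and a hypothetical bit-to-child mapping; the actual G-PCC geometry syntax carries additional information (eg, direct-coded points and trisoup data).

```python
def build_octree(occupancy_codes, depth):
    """Rebuild octree levels from decoded 8-bit occupancy codes.

    occupancy_codes are consumed in breadth-first order, one code per
    occupied node; bit i of a code marks child i (0-7) as occupied
    (the bit ordering here is an illustrative assumption).
    Returns, per level, the list of occupied child indices of each node.
    """
    codes = iter(occupancy_codes)
    levels = []
    nodes = 1  # start from the root node
    for _ in range(depth):
        level = []
        for _ in range(nodes):
            code = next(codes)
            level.append([i for i in range(8) if (code >> i) & 1])
        levels.append(level)
        nodes = sum(len(children) for children in level)  # next level size
    return levels
```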
- the approximation synthesis unit 1115 may synthesize a surface based on the decoded geometry and/or the generated octree when trisup geometry encoding is applied.
- the geometry restoration unit 1120 may restore geometry based on the surface and the decoded geometry.
- the geometry restoration unit 1120 may directly import and add position information of points to which direct coding is applied.
- the geometry restoration unit 1120 may perform a reconstruction operation, eg, triangle reconstruction, up-sampling, voxelization, and the like, to restore the geometry.
- the reconstructed geometry may include a point cloud picture or frame that does not include attributes.
- the coordinate system inverse transformation unit 1125 may obtain positions of points by transforming the coordinate system based on the restored geometry. For example, the coordinate system inverse transformation unit 1125 may inversely transform the positions of points from a 3-dimensional space (eg, a 3-dimensional space represented by X-axis, Y-axis, and Z-axis coordinate systems) into position information of a global space coordinate system.
- the attribute decoding unit 1130, the attribute inverse quantization unit 1135, the RAHT transform unit 1150, the LOD generator 1140, the inverse lifting unit 1145, and/or the color inverse transform unit 1155 may perform attribute decoding.
- Attribute decoding may include RAHT transform decoding, predictive transform decoding, and lifting transform decoding. The above three decoding methods may be used selectively, or a combination of one or more of them may be used.
- the attribute decoding unit 1130 may decode the attribute bitstream based on Arithmetic coding. For example, when the attribute value of the current point was directly entropy-encoded because there are no neighboring points in the predictor of each point, the attribute decoding unit 1130 may decode the attribute value (an attribute value that is not quantized) of the current point. As another example, when the quantized residual attribute value was entropy-encoded because neighboring points exist in the predictor of the current point, the attribute decoding unit 1130 may decode the quantized residual attribute value.
- the attribute inverse quantization unit 1135 may inverse quantize a decoded attribute bitstream or information about an attribute obtained as a result of decoding, and output inverse quantized attributes (or attribute values). For example, when the quantized residual attribute value is output from the attribute decoding unit 1130, the attribute inverse quantization unit 1135 may inversely quantize the quantized residual attribute value and output the residual attribute value.
- the inverse quantization process may be selectively applied based on whether the point cloud encoding device 400 quantized the attributes. That is, when the attribute value of the current point was directly encoded because there are no neighboring points in the predictor of each point, the attribute decoding unit 1130 may output the unquantized attribute value of the current point, and the inverse quantization process may be skipped.
- An example of the attribute inverse quantization process of the present disclosure is shown in Table 3.
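- the decoder-side counterpart of the encoder quantization can be sketched as follows. This is an illustrative Python sketch under the assumption of a uniform quantizer, with hypothetical names; it does not reproduce the normative process of Table 3.

```python
def decode_attribute(kind, value, predicted, quant_step):
    """Sketch of per-point attribute reconstruction.

    kind == 'raw'      : the value was entropy-coded without
                         quantization (no neighbors in the predictor),
                         so inverse quantization is skipped.
    kind == 'residual' : inverse-quantize the quantized residual and
                         add the predicted attribute value.
    """
    if kind == 'raw':
        return value
    return predicted + value * quant_step
```

Note the reconstruction is lossy in the residual case: a residual of 10 with quant_step 4 quantizes to 2 and reconstructs to predicted + 8.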
- the RAHT transform unit 1150, the LOD generator 1140, and/or the inverse lifting unit 1145 may process the reconstructed geometry and the inverse quantized attributes.
- the RAHT transform unit 1150, the LOD generator 1140, and/or the inverse lifting unit 1145 may selectively perform a decoding operation corresponding to the encoding operation of the point cloud encoding apparatus 400.
- the inverse color transform unit 1155 may perform inverse transform coding to inverse transform color values (or textures) included in decoded attributes. The operation of the color inverse transform unit 1155 may be selectively performed based on whether the color transform unit 435 is operated.
- the transmission device includes a data input unit 1205, a quantization processing unit 1210, a voxelization processing unit 1215, an octree occupancy code generation unit 1220, a surface model processing unit 1225, an intra/inter coding processing unit 1230, an Arithmetic coder 1235, a meta data processing unit 1240, a color conversion processing unit 1245, an attribute conversion processing unit 1250, a prediction/lifting/RAHT conversion processing unit 1255, an Arithmetic coder 1260, and a transmission processing unit 1265.
- the function of the data input unit 1205 may correspond to the acquisition process performed by the acquisition unit 11 of FIG. 1 . That is, the data input unit 1205 may acquire a point cloud video and generate point cloud data for a plurality of points. Geometry information (position information) in the point cloud data may be generated in the form of a geometry bitstream through the quantization processing unit 1210, the voxelization processing unit 1215, the octree occupancy code generation unit 1220, the surface model processing unit 1225, the intra/inter coding processing unit 1230, and the Arithmetic coder 1235.
- Attribute information in the point cloud data may be generated in the form of an attribute bitstream through a color conversion processing unit 1245, an attribute conversion processing unit 1250, a prediction/lifting/RAHT conversion processing unit 1255, and an arithmetic coder 1260.
- the geometry bitstream, the attribute bitstream, and/or the meta data bitstream may be transmitted to the receiving device through processing by the transmission processor 1265.
- the function of the quantization processing unit 1210 may correspond to the quantization process performed by the geometry quantization unit 410 of FIG. 4 and/or the function of the coordinate system conversion unit 405 .
- the function of the voxelization processing unit 1215 may correspond to the voxelization process performed by the geometry quantization unit 410 of FIG. 4, and the function of the octree occupancy code generation unit 1220 may correspond to the function performed by the octree analysis unit 415 of FIG. 4.
- the function of the surface model processing unit 1225 may correspond to the function performed by the approximation unit 420 of FIG. 4, and the functions of the intra/inter coding processing unit 1230 and the Arithmetic coder 1235 may correspond to the function performed by the geometry encoding unit 425 of FIG. 4.
- a function of the meta data processor 1240 may correspond to that of the meta data processor described in FIG. 1 .
- the function of the color conversion processing unit 1245 may correspond to the function performed by the color conversion unit 435 of FIG. 4, and the function of the attribute conversion processing unit 1250 may correspond to the function performed by the attribute conversion unit 440 of FIG. 4.
- the function of the prediction/lifting/RAHT conversion processing unit 1255 may correspond to the functions performed by the RAHT conversion unit 445, the LOD generation unit 450, and the lifting unit 455 of FIG. 4, and the function of the Arithmetic coder 1260 may correspond to the function of the attribute encoding unit 465 of FIG. 4 .
- a function of the transmission processing unit 1265 may correspond to a function performed by the transmission unit 14 and/or the encapsulation processing unit 13 of FIG. 1 .
- the receiving device may include a receiving unit 1305, a reception processing unit 1310, an Arithmetic decoder 1315, a meta data parser 1335, an occupancy-code-based octree reconstruction processing unit 1320, a surface model processing unit 1325, an inverse quantization processing unit 1330, an Arithmetic decoder 1340, an inverse quantization processing unit 1345, a prediction/lifting/RAHT inverse transformation processing unit 1350, a color inverse transformation processing unit 1355, and a renderer 1360.
- the function of the receiving unit 1305 may correspond to the function performed by the receiver 21 of FIG. 1, and the function of the reception processing unit 1310 may correspond to the function performed by the decapsulation processing unit 22 of FIG. 1 . That is, the receiving unit 1305 may receive a bitstream from the transmission processing unit 1265, and the reception processing unit 1310 may extract a geometry bitstream, an attribute bitstream, and/or a metadata bitstream through decapsulation processing.
- the geometry bitstream may be reconstructed into restored position values (position information) through the Arithmetic decoder 1315, the occupancy-code-based octree reconstruction processing unit 1320, the surface model processing unit 1325, and the inverse quantization processing unit 1330.
- the attribute bitstream may be generated as an attribute value reconstructed through an Arithmetic decoder 1340, an inverse quantization processor 1345, a prediction/lifting/RAHT inverse transform processor 1350, and a color inverse transform processor 1355.
- the meta data bitstream may be generated as meta data (or meta data information) restored through the meta data parser 1335 .
- a position value, attribute value, and/or meta data may be rendered by the renderer 1360 to provide a VR/AR/MR/autonomous driving experience to the user.
- the function of the Arithmetic decoder 1315 may correspond to the function performed by the geometry decoding unit 1105 of FIG. 11, and the function of the occupancy-code-based octree reconstruction processing unit 1320 may correspond to the function performed by the octree synthesis unit 1110 of FIG. 11.
- the function of the surface model processing unit 1325 may correspond to the function performed by the approximation synthesis unit 1115 of FIG. 11, and the function of the inverse quantization processing unit 1330 may correspond to the functions performed by the geometry restoration unit 1120 and/or the coordinate system inverse transformation unit 1125 of FIG. 11.
- a function of the meta data parser 1335 may correspond to a function performed by the meta data parser described in FIG. 1 .
- the function of the Arithmetic decoder 1340 may correspond to the function performed by the attribute decoding unit 1130 of FIG. 11, and the function of the inverse quantization processing unit 1345 may correspond to the function performed by the attribute inverse quantization unit 1135 of FIG. 11.
- the function of the prediction/lifting/RAHT inverse transformation processing unit 1350 may correspond to the functions performed by the RAHT transform unit 1150, the LOD generator 1140, and the inverse lifting unit 1145 of FIG. 11, and the function of the color inverse transformation processing unit 1355 may correspond to the function performed by the color inverse transform unit 1155 of FIG. 11 .
- FIG. 14 shows an example of a structure capable of interworking with a method/apparatus for transmitting and receiving point cloud data according to embodiments of the present disclosure.
- the structure of FIG. 14 represents a configuration in which at least one of an AI server, a robot, a self-driving vehicle, an XR device, a smartphone, a home appliance, and/or an HMD is connected to a cloud network. Robots, self-driving vehicles, XR devices, smartphones, or home appliances may be referred to as devices. In addition, the XR device may correspond to or interwork with a point cloud data (PCC) device according to embodiments.
- a cloud network may refer to a network that constitutes a part of a cloud computing infrastructure or exists within a cloud computing infrastructure.
- the cloud network may be configured using a 3G network, a 4G or Long Term Evolution (LTE) network, or a 5G network.
- the server may be connected to at least one or more of robots, self-driving vehicles, XR devices, smart phones, home appliances, and/or HMDs through a cloud network, and may assist at least part of processing of the connected devices.
- the HMD may represent one of the types in which an XR device and/or a PCC device according to embodiments may be implemented.
- An HMD type device may include a communication unit, a control unit, a memory unit, an I/O unit, a sensor unit, and a power supply unit.
- the XR/PCC device applies PCC and/or XR technology to HMDs, HUDs in vehicles, televisions, mobile phones, smart phones, computers, wearable devices, home appliances, digital signage, vehicles, fixed robots or mobile robots, etc. may be implemented as
- the XR/PCC device may analyze 3D point cloud data or image data obtained through various sensors or from an external device to generate position (geometry) data and attribute data for 3D points, thereby obtaining information about the surrounding space or real objects, and may render and output an XR object to be displayed. For example, the XR/PCC device may output an XR object including additional information about a recognized object in correspondence with the recognized object.
- the XR/PCC device may be implemented as a mobile phone or the like to which PCC technology is applied.
- the mobile phone may decode and display point cloud content based on PCC technology.
- Self-driving vehicles can be implemented as mobile robots, vehicles, unmanned air vehicles, etc. by applying PCC technology and XR technology.
- An autonomous vehicle to which XR/PCC technology is applied may refer to an autonomous vehicle equipped with a means for providing an XR image or an autonomous vehicle subject to control/interaction within the XR image.
- autonomous vehicles that are controlled/interacted within an XR image are distinguished from XR devices and may be interlocked with each other.
- An autonomous vehicle equipped with a means for providing an XR/PCC image may acquire sensor information from sensors including cameras, and output an XR/PCC image generated based on the obtained sensor information.
- an autonomous vehicle may provide an XR/PCC object corresponding to a real object or an object in a screen to a passenger by outputting an XR/PCC image with a HUD.
- the XR/PCC object when the XR/PCC object is output to the HUD, at least a part of the XR/PCC object may be output to overlap the real object toward which the passenger's gaze is directed.
- an XR/PCC object when an XR/PCC object is output to a display provided inside an autonomous vehicle, at least a part of the XR/PCC object may be output to overlap the object in the screen.
- an autonomous vehicle may output XR/PCC objects corresponding to objects such as lanes, other vehicles, traffic lights, traffic signs, two-wheeled vehicles, pedestrians, and buildings.
- VR technology is a display technology that provides objects or backgrounds of the real world only as CG images.
- AR technology means a technology that shows a virtual CG image on top of a real object image.
- MR technology is similar to the aforementioned AR technology in that it mixes and combines virtual objects in the real world.
- however, in AR technology, the distinction between real objects and virtual objects made of CG images is clear, and virtual objects are used in a form that complements real objects, whereas in MR technology virtual objects are treated as equivalent to real objects; in this respect, MR technology is distinct from AR technology. A more specific example is a hologram service to which the above-described MR technology is applied. VR, AR, and MR technologies may be collectively referred to as XR technology.
- Point cloud data may represent a volumetric encoding of a point cloud consisting of a sequence of frames (point cloud frames).
- Each point cloud frame may include a number of points, positions of points, and attributes of points. The number of points, positions of points, and attributes of points may vary from frame to frame.
- Each point cloud frame may refer to a set of 3D points specified by cartesian coordinates (x, y, z) and zero or more attributes of 3D points at a particular time instance.
- the Cartesian coordinates (x, y, z) of a 3D point may be referred to as its position or geometry.
- the present disclosure may further perform a spatial division process of dividing the point cloud data into one or more 3D blocks before encoding (encoding) the point cloud data.
- a 3D block may mean all or a partial area of a 3D space occupied by point cloud data.
- a 3D block may mean one or more of a tile group, a tile, a slice, a coding unit (CU), a prediction unit (PU), or a transform unit (TU).
- a tile corresponding to a 3D block may mean all or a partial area of a 3D space occupied by point cloud data.
- a slice corresponding to a 3D block may mean all or a partial area of a 3D space occupied by point cloud data.
- a tile may be divided into one or more slices based on the number of points included in one tile.
- a tile may be a group of slices having bounding box information. Bounding box information of each tile may be specified in a tile inventory (or tile parameter set (TPS)).
- a tile may overlap another tile in the bounding box.
- a slice may be a unit of data in which encoding is independently performed or a unit of data in which decoding is independently performed.
- a slice can be a set of points that can be independently encoded or decoded.
- a slice may be a series of syntax elements representing part or all of a coded point cloud frame.
- Each slice may include an index for identifying a tile to which the corresponding slice belongs.
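- the division of a tile into slices based on the number of points in the tile can be sketched as follows. This is an illustrative sketch: the point ordering and the per-slice point limit are hypothetical choices, not requirements of the disclosure.

```python
def split_tile_into_slices(points, max_points_per_slice):
    """Split one tile's points into slices by point count.

    points is the list of points belonging to the tile; each slice is
    a contiguous chunk of at most max_points_per_slice points, so that
    each slice can be encoded or decoded independently.
    """
    return [points[i:i + max_points_per_slice]
            for i in range(0, len(points), max_points_per_slice)]
```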
- the spatially divided 3D blocks may be independently or non-independently processed.
- spatially divided 3D blocks may be encoded or decoded independently or non-independently, and transmitted or received independently or non-independently.
- the spatially divided 3D blocks may be quantized or inversely quantized independently or independently of each other, and may be transformed or inversely transformed independently or independently of each other.
- the space-divided 3D blocks may be independently or non-independently rendered.
- encoding or decoding may be performed in units of slices or units of tiles.
- quantization or inverse quantization may be performed differently for each tile or slice, and may be performed differently for each transformed or inverse transformed tile or slice.
- when point cloud data is spatially divided into one or more 3D blocks and the spatially divided 3D blocks are processed independently or non-independently, the 3D blocks can be processed in real time and, at the same time, with low delay.
- random access and parallel encoding or parallel decoding on a 3D space occupied by point cloud data may be possible, and errors accumulated during encoding or decoding may be prevented.
- the transmission device 1500 includes a space divider 1505 performing a space division process, a signaling processor 1510, a geometry encoder 1515, an attribute encoder 1520, an encapsulation processor 1525, and/or a transmission processing unit 1530.
- the spatial division unit 1505 may perform a spatial division process of dividing the point cloud data into one or more 3D blocks based on a bounding box and/or a sub-bounding box.
- point cloud data may be divided into one or more tiles and/or one or more slices.
- point cloud data may be divided into one or more tiles through a spatial partitioning process, and each of the divided tiles may be further divided into one or more slices.
- FIG. 16 shows an example of spatially dividing a bounding box (ie, point cloud data) into one or more 3D blocks.
- the overall bounding box of point cloud data consists of three tiles: tile #0, tile #1, and tile #2.
- tile #0 may be further divided into two slices, that is, slice #0 and slice #1.
- tile #1 may be further divided into two slices, that is, slice #2 and slice #3.
- tile #2 may be further divided into slice #4.
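- the tile/slice partition of FIG. 16 described above can be represented as a simple mapping from tiles to slices (tile and slice indices are taken from the example; the dictionary form itself is only an illustration):

```python
# FIG. 16 partition: three tiles, five slices in total.
tiles = {
    0: ['slice 0', 'slice 1'],  # tile #0 split into two slices
    1: ['slice 2', 'slice 3'],  # tile #1 split into two slices
    2: ['slice 4'],             # tile #2 kept as a single slice
}
```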
- the signaling processing unit 1510 may generate and/or process (eg, entropy-encode) signaling information and output it in the form of a bitstream.
- the signaling information may include information for space division or information about space division. That is, the signaling information may include information related to a space division process performed by the space divider 1505.
- the signaling information may include information for decoding some point cloud data, information related to 3D space areas for supporting spatial access, and the like.
- the signaling information may include 3D bounding box information, 3D space area information, tile information, and/or tile inventory information.
- the signaling information may be provided from the space divider 1505, the geometry encoder 1515, the attribute encoder 1520, the encapsulation processor 1525, and/or the transmission processor 1530.
- the signaling processing unit 1510 may provide the feedback information fed back from the receiving device 1700 of FIG. 17 to the geometry encoder 1515, the attribute encoder 1520, the encapsulation processing unit 1525, and/or the transmission processing unit 1530.
- Signaling information may be stored and signaled in a sample in a track, a sample entry, a sample group, a track group, or a separate metadata track.
- the signaling information may be signaled in units of a sequence parameter set (SPS) for signaling at the sequence level, a geometry parameter set (GPS) for signaling of geometry coding information, an attribute parameter set (APS) for signaling of attribute coding information, and a tile parameter set (TPS) (or tile inventory) for signaling at the tile level. Also, the signaling information may be signaled in units of coding units such as slices or tiles.
- positions (position information) of the 3D blocks may be output to the geometry encoder 1515, and the attributes (attribute information) of the 3D blocks may be output to the attribute encoder 1520.
- the geometry encoder 1515 may configure an octree based on the position information, encode the configured octree, and output a geometry bitstream. In addition, the geometry encoder 1515 may reconstruct (restore) the octree and/or the approximated octree and output it to the attribute encoder 1520. The reconstructed octree may be a reconstructed geometry.
- the geometry encoder 1515 may perform all or part of the operations performed by the coordinate system conversion unit 405, the geometry quantization unit 410, the octree analysis unit 415, the approximation unit 420, the geometry encoding unit 425, and/or the reconstruction unit 430 of FIG. 4.
- the geometry encoder 1515 includes the quantization processor 1210 of FIG. 12 , the voxelization processor 1215, the octree occupancy code generator 1220, the surface model processor 1225, and the intra/inter coding processor. All or part of the operations performed by 1230 and/or the Arithmetic Coder 1235 may be performed.
- the attribute encoder 1520 may output an attribute bitstream by encoding attributes based on the reconstructed geometry.
- the attribute encoder 1520 may perform all or part of the operations performed by the attribute conversion unit 440, the RAHT conversion unit 445, the LOD generation unit 450, the lifting unit 455, the attribute quantization unit 460, the attribute encoding unit 465, and/or the color conversion unit 435 of FIG. 4.
- the attribute encoder 1520 may perform all or part of the operations performed by the attribute conversion processing unit 1250, the prediction/lifting/RAHT conversion processing unit 1255, the Arithmetic coder 1260, and/or the color conversion processing unit 1245 of FIG. 12.
- the encapsulation processor 1525 may encapsulate one or more input bitstreams into a file or segment.
- the encapsulation processing unit 1525 may encapsulate the geometry bitstream, the attribute bitstream, and the signaling bitstream individually, or may multiplex the geometry bitstream, the attribute bitstream, and the signaling bitstream and then perform encapsulation.
- the encapsulation processing unit 1525 may encapsulate a bitstream (G-PCC bitstream) composed of a type-length-value (TLV) structure sequence into a file.
- TLV (or TLV encapsulation) structures constituting the G-PCC bitstream may include a geometry bitstream, an attribute bitstream, a signaling bitstream, and the like.
- the G-PCC bitstream may be generated by the encapsulation processor 1525 or may be generated by the transmission processor 1530.
- the TLV structure or TLV encapsulation structure will be described in detail later.
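- ahead of that detailed description, a minimal sketch of a type-length-value encapsulation is given below. The 1-byte type and 4-byte big-endian length layout used here is an assumption for illustration only, and the type codes in the test are hypothetical; the actual G-PCC TLV syntax is described later.

```python
import struct

def tlv_encode(tlv_type, payload):
    """Pack one TLV encapsulation structure: type (1 byte),
    length (4 bytes, big-endian), then the payload bytes."""
    return struct.pack('>BI', tlv_type, len(payload)) + payload

def tlv_decode(buf):
    """Walk a concatenation of TLV structures and return
    (type, payload) pairs, eg geometry/attribute/signaling units."""
    units = []
    off = 0
    while off < len(buf):
        tlv_type, length = struct.unpack_from('>BI', buf, off)
        off += 5  # header size: 1 (type) + 4 (length)
        units.append((tlv_type, buf[off:off + length]))
        off += length
    return units
```

A file-level encapsulator would concatenate such units (geometry bitstream, attribute bitstream, signaling bitstream) and store the result in a file or segment.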
- the encapsulation processing unit 1525 may perform all or some of the operations performed by the encapsulation processing unit 13 of FIG. 1 .
- the transmission processing unit 1530 may process an encapsulated bitstream or a file/segment according to an arbitrary transmission protocol.
- the transmission processing unit 1530 may perform all or some of the operations performed by the transmission unit 14 and the transmission processing unit described with reference to FIG. 1 or the transmission processing unit 1265 of FIG. 12 .
- the receiving device 1700 may perform operations corresponding to those of the transmitting device 1500 performing space division. As illustrated in FIG. 17, the receiving device 1700 includes a reception processing unit 1705, a decapsulation processing unit 1710, a signaling processing unit 1715, a geometry decoder 1720, an attribute decoder 1725, and/or a post-processing unit 1730.
- the reception processing unit 1705 may receive a file/segment in which a G-PCC bitstream is encapsulated, a G-PCC bitstream, or a bitstream, and may perform processing according to a transport protocol targeting them.
- the reception processing unit 1705 may perform all or some of the operations performed by the reception unit 21 and the reception processing unit described with reference to FIG. 1 or by the reception unit 1305 or the reception processing unit 1310 of FIG. 13 .
- the decapsulation processing unit 1710 may obtain a G-PCC bitstream by performing a reverse process of the operations performed by the encapsulation processing unit 1525.
- the decapsulation processor 1710 may obtain a G-PCC bitstream by decapsulating the file/segment.
- the decapsulation processing unit 1710 may obtain a signaling bitstream and output it to the signaling processing unit 1715, obtain a geometry bitstream and output it to the geometry decoder 1720, and obtain an attribute bitstream and output it to the attribute decoder 1725.
- the decapsulation processing unit 1710 may perform all or some of the operations performed by the decapsulation processing unit 22 of FIG. 1 or the reception processing unit 1310 of FIG. 13 .
- the signaling processing unit 1715 may parse and decode signaling information by performing a reverse process of the operations performed by the signaling processing unit 1510.
- the signaling processing unit 1715 may parse and decode signaling information from a signaling bitstream.
- the signaling processing unit 1715 may provide the decoded signaling information to the geometry decoder 1720, the attribute decoder 1725, and/or the post-processing unit 1730.
- the geometry decoder 1720 may restore the geometry from the geometry bitstream by performing a reverse process of the operations performed by the geometry encoder 1515.
- the geometry decoder 1720 may reconstruct geometry based on signaling information (parameters related to geometry). The reconstructed geometry may be provided to the attribute decoder 1725.
- the attribute decoder 1725 may restore an attribute from the attribute bitstream by performing a reverse process of the operations performed by the attribute encoder 1520.
- the attribute decoder 1725 may reconstruct an attribute based on signaling information (parameters related to the attribute) and the reconstructed geometry.
- the post-processing unit 1730 may restore point cloud data based on the restored geometry and the restored attributes. Restoration of point cloud data may be performed by matching the restored geometry and the restored attributes with each other. According to embodiments, when the restored point cloud data is in tile and/or slice units, the post-processing unit 1730 may restore the bounding box of the point cloud data by performing a reverse process of the spatial division process of the transmission device 1500 based on the signaling information. According to embodiments, when the bounding box has been divided into a plurality of tiles and/or a plurality of slices through the spatial division process, the post-processing unit 1730 may restore a part of the bounding box by combining some slices and/or some tiles based on the signaling information. Here, the slices and/or tiles used for restoration of the part of the bounding box may be those related to the 3D spatial region for which spatial access is desired.
- FIG. 18 shows an example of a structure of a bitstream according to embodiments of the present disclosure
- FIG. 19 shows an example of an identification relationship between components in a bitstream according to embodiments of the present disclosure
- FIG. 20 shows an example of a reference relationship between components in a bitstream according to embodiments of the present disclosure.
- the bitstream may include one or more sub-bitstreams.
- the bitstream may include one or more SPSs, one or more GPSs, one or more APSs (APS0, APS1), one or more TPSs, and/or one or more slices (slice 0, ..., slice n). Since a tile is a slice group including one or more slices, a bitstream may include one or more tiles.
- the TPS may include information about each tile (e.g., information such as coordinate values, height, and/or size of a bounding box), and each slice may include a geometry bitstream (Geom0) and/or one or more attribute bitstreams (Attr0, Attr1).
- slice 0 may include a geometry bitstream Geom00 and/or one or more attribute bitstreams Attr00 and Attr10.
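The container hierarchy described above can be sketched with hypothetical Python dataclasses. This is only an illustrative sketch of the layout; the class and field names are assumptions for exposition, not syntax from the specification:

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical containers mirroring the bitstream layout described above;
# the class and field names are illustrative, not normative.

@dataclass
class AttributeSlice:
    attr_slice_header: dict
    attr_slice_data: bytes

@dataclass
class Slice:
    geom_slice_header: dict
    geom_slice_data: bytes
    attribute_bitstreams: List[AttributeSlice] = field(default_factory=list)

@dataclass
class Bitstream:
    sps_list: List[dict]
    gps_list: List[dict]
    aps_list: List[dict]
    tps_list: List[dict]
    slices: List[Slice]

# slice 0 carrying one geometry bitstream (Geom00) and two attribute
# bitstreams (Attr00, e.g. colour, and Attr10, e.g. reflectance)
slice0 = Slice(
    geom_slice_header={"geom_slice_id": 0, "geom_tile_id": 0},
    geom_slice_data=b"",
    attribute_bitstreams=[
        AttributeSlice({"attr_idx": 0}, b""),
        AttributeSlice({"attr_idx": 1}, b""),
    ],
)
```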
- a geometry bitstream in each slice may include a geometry slice header (Geom_slice_header) and geometry slice data (Geom_slice_data).
- the geometry slice header may include identification information (geom_parameter_set_id) of the parameter set included in the GPS, a tile identifier (geom_tile_id), a slice identifier (geom_slice_id), and/or information about the data included in the geometry slice data (geomBoxOrigin, geom_box_log2_scale, geom_max_node_size_log2, geom_num_points).
- geomBoxOrigin is geometry box origin information indicating the box origin of the corresponding geometry slice data
- geom_box_log2_scale is information indicating the log scale of the corresponding geometry slice data
- geom_max_node_size_log2 is information indicating the size of the root geometry octree node
- geom_num_points is information related to the number of points in the corresponding geometry slice data.
- Geometry slice data may include geometry information (or geometry data) of point cloud data within a corresponding slice.
- Each attribute bitstream in each slice may include an attribute slice header (Attr_slice_header) and attribute slice data (Attr_slice_data).
- the attribute slice header may include information about corresponding attribute slice data
- the attribute slice data may include attribute information (or attribute data) of point cloud data in the corresponding slice.
- each attribute bitstream may include different attribute information. For example, one attribute bitstream may include attribute information corresponding to color, and another attribute bitstream may include attribute information corresponding to reflectance.
- the SPS may include an identifier (seq_parameter_set_id) for identifying the corresponding SPS
- the GPS may include an identifier (geom_parameter_set_id) for identifying the corresponding GPS and an identifier (seq_parameter_set_id) indicating the active SPS to which the corresponding GPS belongs (i.e., which it refers to).
- the APS may include an identifier (attr_parameter_set_id) for identifying the corresponding APS and an identifier (seq_parameter_set_id) indicating an active SPS referred to by the corresponding APS.
- the geometry data includes a geometry slice header and geometry slice data
- the geometry slice header may include an active GPS identifier (geom_parameter_set_id) referred to by the corresponding geometry slice.
- the geometry slice header may further include an identifier (geom_slice_id) for identifying the corresponding geometry slice and/or an identifier (geom_tile_id) for identifying the corresponding tile.
- the attribute data includes an attribute slice header and attribute slice data, and the attribute slice header may include an identifier (attr_parameter_set_id) of an active APS referred to by the corresponding attribute slice and an identifier (geom_slice_id) for identifying a geometry slice related to the attribute slice.
- the geometry slice may refer to the GPS, and the GPS may refer to the SPS.
- the SPS may list available attributes, assign an identifier to each of the listed attributes, and identify a decoding method. Attribute slices can be mapped to output attributes according to identifiers, and attribute slices themselves can have dependencies on previously decoded geometry slices and APSs.
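The reference chain described above (a geometry slice refers to a GPS, an attribute slice refers to an APS and a geometry slice, and both parameter sets refer to an SPS) can be sketched as a lookup. The dictionary-based tables and function names below are assumptions for illustration; a real decoder maintains activated parameter sets:

```python
# Hypothetical lookup tables keyed by the identifiers described above.
sps_map = {0: {"seq_parameter_set_id": 0}}
gps_map = {0: {"geom_parameter_set_id": 0, "seq_parameter_set_id": 0}}
aps_map = {0: {"attr_parameter_set_id": 0, "seq_parameter_set_id": 0}}

def resolve_geometry_slice(geom_slice_header):
    # geom_parameter_set_id in the slice header selects the active GPS,
    # and seq_parameter_set_id in the GPS selects the active SPS.
    gps = gps_map[geom_slice_header["geom_parameter_set_id"]]
    sps = sps_map[gps["seq_parameter_set_id"]]
    return gps, sps

def resolve_attribute_slice(attr_slice_header, decoded_geometry_slices):
    # An attribute slice depends on its active APS and on the previously
    # decoded geometry slice identified by geom_slice_id.
    aps = aps_map[attr_slice_header["attr_parameter_set_id"]]
    sps = sps_map[aps["seq_parameter_set_id"]]
    geom = decoded_geometry_slices[attr_slice_header["geom_slice_id"]]
    return aps, sps, geom
```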
- parameters necessary for encoding point cloud data may be newly defined in a parameter set of point cloud data and/or a corresponding slice header.
- parameters necessary for the encoding may be newly defined (added) in the APS, and when tile-based encoding is performed, parameters necessary for the encoding may be newly defined (added) in the tile and/or slice header.
- syntax elements (or fields) expressed in the syntax structure of the SPS may be syntax elements included in the SPS or syntax elements signaled through the SPS.
- main_profile_compatibility_flag may indicate whether the bitstream conforms to the main profile. For example, if the value of main_profile_compatibility_flag is equal to the first value (e.g., 1), this may indicate that the bitstream conforms to the main profile, and if the value of main_profile_compatibility_flag is equal to the second value (e.g., 0), this may indicate that the bitstream conforms to a profile other than the main profile.
- unique_point_positions_constraint_flag may indicate whether all output points in each point cloud frame referenced by the current SPS have unique positions. For example, if the value of unique_point_positions_constraint_flag is equal to the first value (e.g., 1), this may indicate that all output points have unique positions in each point cloud frame referenced by the current SPS, and if the value of unique_point_positions_constraint_flag is equal to the second value (e.g., 0), this may indicate that two or more output points may have the same position in some point cloud frame referenced by the current SPS. Even if all points are unique within each slice, points in different slices of a frame may overlap; in that case, the value of unique_point_positions_constraint_flag may be set to 0.
- level_idc may indicate the level to which the bitstream conforms.
- sps_seq_parameter_set_id may indicate an identifier for an SPS referenced by other syntax elements.
- sps_bounding_box_present_flag may indicate whether a bounding box exists in the SPS. For example, if the value of sps_bounding_box_present_flag is equal to the first value (e.g., 1), this may indicate that a bounding box exists in the SPS, and if the value of sps_bounding_box_present_flag is equal to the second value (e.g., 0), this may indicate that the size of the bounding box is undefined.
- when the value of sps_bounding_box_present_flag is equal to the first value (e.g., 1), sps_bounding_box_offset_x, sps_bounding_box_offset_y, sps_bounding_box_offset_z, sps_bounding_box_offset_log2_scale, sps_bounding_box_size_width, sps_bounding_box_size_height, and/or sps_bounding_box_size_depth may be further signaled.
- sps_bounding_box_offset_x may indicate the quantized x offset of the source bounding box in the Cartesian coordinate system, and if the x offset of the source bounding box does not exist, the value of sps_bounding_box_offset_x may be inferred to be 0.
- sps_bounding_box_offset_y may represent the quantized y offset of the source bounding box in the Cartesian coordinate system, and if the y offset of the source bounding box does not exist, the value of sps_bounding_box_offset_y may be inferred to be 0.
- sps_bounding_box_offset_z may indicate the quantized z offset of the source bounding box in the Cartesian coordinate system, and if the z offset of the source bounding box does not exist, the value of sps_bounding_box_offset_z may be inferred to be 0.
- sps_bounding_box_offset_log2_scale may indicate a scale factor for scaling quantized x, y, z source bounding box offsets.
- sps_bounding_box_size_width may indicate the width of the source bounding box in the Cartesian coordinate system, and if the width of the source bounding box does not exist, the value of sps_bounding_box_size_width may be inferred to be 1.
- sps_bounding_box_size_height may represent the height of the source bounding box in a Cartesian coordinate system, and if the height of the source bounding box does not exist, the value of sps_bounding_box_size_height may be inferred to be 1.
- sps_bounding_box_size_depth may indicate the depth of the source bounding box in Cartesian coordinates, and if the depth of the source bounding box does not exist, the value of sps_bounding_box_size_depth may be inferred as 1.
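Per the descriptions above, the signaled bounding box offsets are quantized and are scaled by the factor carried in sps_bounding_box_offset_log2_scale. A minimal sketch of this reconstruction, assuming a plain power-of-two left shift (the function name is an illustrative assumption):

```python
def scaled_bounding_box_offset(quantized_offset: int, log2_scale: int) -> int:
    # The quantized x/y/z offsets are scaled by 2 ** sps_bounding_box_offset_log2_scale.
    return quantized_offset << log2_scale
```

For example, a quantized x offset of 3 with a log2 scale of 4 yields an offset of 48 in the source coordinate system.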
- a value obtained by adding 1 to sps_source_scale_factor_numerator_minus1 may indicate a scale factor numerator of the source point cloud.
- a value obtained by adding 1 to sps_source_scale_factor_denominator_minus1 may represent a scale factor denominator of the source point cloud.
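Since both syntax elements above are coded minus 1, the source scale factor can be recovered as a rational number. A small sketch (the helper name is an illustrative assumption):

```python
from fractions import Fraction

def source_scale_factor(numerator_minus1: int, denominator_minus1: int) -> Fraction:
    # Add 1 back to each syntax element to obtain numerator and denominator.
    return Fraction(numerator_minus1 + 1, denominator_minus1 + 1)
```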
- sps_num_attribute_sets may indicate the number of coded attributes in a bitstream.
- sps_num_attribute_sets may have a value between 0 and 63.
- attribute_dimension_minus1[i] and attribute_instance_id[i] may be further signaled as many times as 'the number of coded attributes in the bitstream' indicated by sps_num_attribute_sets.
- i may increase by 1 from 0 to 'the number of coded attributes in the bitstream - 1'.
- a value obtained by adding 1 to attribute_dimension_minus1[i] may indicate the number of components of the i-th attribute, and attribute_instance_id[i] may indicate the instance identifier of the i-th attribute.
- when the value of attribute_dimension_minus1[i] is greater than 1, attribute_bitdepth_minus1[i], attribute_secondary_bitdepth_minus1[i], attribute_cicp_colour_primaries[i], attribute_cicp_transfer_characteristics[i], attribute_cicp_matrix_coeffs[i], and/or attribute_cicp_video_full_range_flag[i] may be further signaled.
- a value obtained by adding 1 to attribute_bitdepth_minus1[i] may indicate the bitdepth for the first component of the i-th attribute signal(s).
- attribute_secondary_bitdepth_minus1[i] may indicate the bitdepth for the second component of the i-th attribute signal(s), and attribute_cicp_colour_primaries[i] may indicate the chromaticity coordinates of the colour primaries of the colour attribute source of the i-th attribute.
- attribute_cicp_transfer_characteristics[i] may indicate either the reference opto-electronic transfer characteristic function of the i-th attribute, as a function of a source input linear optical intensity with a nominal real-valued range of 0 to 1, or the inverse of the reference electro-optical transfer characteristic function as a function of an output linear optical intensity.
- attribute_cicp_matrix_coeffs[i] may describe the matrix coefficients used to derive luma and chroma signals from the green, blue, and red (or Y, Z, and X) primaries of the i-th attribute.
- attribute_cicp_video_full_range_flag[i] may indicate the range of the black level and the luma and chroma signals derived from the E'Y, E'PB, and E'PR or E'R, E'G, and E'B real-valued component signals of the i-th attribute.
- known_attribute_label_flag[i] may indicate whether known_attribute_label[i] or attribute_label_four_bytes[i] is signaled for the i-th attribute.
- for example, if the value of known_attribute_label_flag[i] is equal to the first value (e.g., 1), this may indicate that known_attribute_label[i] is signaled for the i-th attribute, and if the value of known_attribute_label_flag[i] is equal to the second value (e.g., 0), this may indicate that attribute_label_four_bytes[i] is signaled for the i-th attribute.
- known_attribute_label[i] may indicate the type of the i-th attribute.
- known_attribute_label[i] may indicate a known attribute type using a 4-byte code as shown in FIG. 22a.
- log2_max_frame_idx may be used to derive the number of bits used to signal the frame_idx syntax variable. For example, a value obtained by adding 1 to the value of log2_max_frame_idx may indicate the number of bits used to signal the frame_idx syntax variable.
- sps_bypass_stream_enabled_flag may indicate whether a bypass coding mode is used to read a bitstream. For example, if the value of sps_bypass_stream_enabled_flag is equal to the first value (e.g., 1), this may indicate that the bypass coding mode is used to read the bitstream. As another example, if the value of sps_bypass_stream_enabled_flag is equal to the second value (e.g., 0), this may indicate that the bypass coding mode is not used to read the bitstream.
- sps_extension_flag may indicate whether sps_extension_data syntax elements exist in a corresponding SPS syntax structure.
- sps_extension_flag may need to be equal to 0 in the bitstream.
- when the value of sps_extension_flag is equal to the first value (e.g., 1), sps_extension_data_flag may be further signaled.
- sps_extension_data_flag can have any value, and the presence and value of sps_extension_data_flag may not affect decoder conformance to the profile.
- syntax elements (or fields) expressed in the syntax structure of GPS may be syntax elements included in GPS or syntax elements signaled through GPS.
- gps_geom_parameter_set_id may indicate a GPS identifier referred to by other syntax elements, and gps_seq_parameter_set_id may indicate a value of seq_parameter_set_id for a corresponding active SPS.
- gps_box_present_flag may indicate whether additional bounding box information is provided in a geometry slice header referring to the current GPS. For example, if the value of gps_box_present_flag is equal to the first value (e.g., 1), this may indicate that additional bounding box information is provided in the geometry slice header referring to the current GPS, and if the value of gps_box_present_flag is equal to the second value (e.g., 0), this may indicate that additional bounding box information is not provided in the geometry slice header.
- when the value of gps_box_present_flag is equal to the first value (e.g., 1), gps_gsh_box_log2_scale_present_flag may be further signaled. gps_gsh_box_log2_scale_present_flag may indicate whether gps_gsh_box_log2_scale is signaled in each geometry slice header referring to the current GPS.
- for example, if the value of gps_gsh_box_log2_scale_present_flag is equal to the first value (e.g., 1), this may indicate that gps_gsh_box_log2_scale is signaled in each geometry slice header referring to the current GPS, and if the value of gps_gsh_box_log2_scale_present_flag is equal to the second value (e.g., 0), gps_gsh_box_log2_scale may be further signaled in the GPS.
- gps_gsh_box_log2_scale may indicate a common scale factor of the bounding box origin for all slices currently referring to the GPS.
- unique_geometry_points_flag may indicate whether all output points in all slices currently referring to the GPS have unique positions within a slice. For example, if the value of unique_geometry_points_flag is equal to the first value (e.g., 1), this may indicate that all output points in all slices currently referring to the GPS have unique positions within a slice, and if the value of unique_geometry_points_flag is equal to the second value (e.g., 0), this may indicate that two or more output points may have the same position within a slice in any slice currently referring to the GPS.
- geometry_planar_mode_flag may indicate whether planar coding mode is activated. For example, if the value of geometry_planar_mode_flag is equal to the first value (e.g., 1), this may indicate that planar coding mode is activated, and if the value of geometry_planar_mode_flag is equal to the second value (e.g., 0), this may indicate that planar coding mode is not activated. When the value of geometry_planar_mode_flag is equal to the first value (e.g., 1), geom_planar_mode_th_idcm, geom_planar_mode_th[1], and/or geom_planar_mode_th[2] may be further signaled.
- geom_planar_mode_th_idcm may indicate an activation threshold for a direct coding mode.
- the value of geom_planar_mode_th_idcm can be an integer within the range of 0 to 127.
- geom_planar_mode_th[i] may indicate an activation threshold for a planar coding mode along the i-th direction in which the planar coding mode is most likely to be efficient, for i in the range of 0 to 2.
- geom_planar_mode_th[i] can be an integer within the range of 0 to 127.
- geometry_angular_mode_flag may indicate whether angular coding mode is activated. For example, if the value of geometry_angular_mode_flag is equal to the first value (e.g., 1), this may indicate that angular coding mode is activated, and if the value of geometry_angular_mode_flag is equal to the second value (e.g., 0), this may indicate that angular coding mode is not activated.
- when the value of geometry_angular_mode_flag is equal to the first value (e.g., 1), lidar_head_position[0], lidar_head_position[1], lidar_head_position[2], number_lasers, planar_buffer_disabled, implicit_qtbt_angular_max_node_min_dim_log2_to_split_z, and/or implicit_qtbt_angular_max_diff_to_split_z may be further signaled.
- lidar_head_position[0], lidar_head_position[1], and/or lidar_head_position[2] may represent the (X, Y, Z) coordinates of the lidar head in a coordinate system with internal axes.
- number_lasers may indicate the number of lasers used for angular coding mode.
- Each of laser_angle[i] and laser_correction[i] may be signaled as many times as indicated by number_lasers. Here, i may increase by 1 from 0 to 'the value of number_lasers -1'.
- laser_angle[i] may indicate the tangent of the elevation angle of the i-th laser relative to the horizontal plane defined by the 0th and 1st internal axes, and laser_correction[i] may indicate the correction of the i-th laser position relative to lidar_head_position[2], along the second internal axis.
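Since laser_angle[i] carries a tangent rather than the angle itself, the elevation angle can be recovered with an arctangent, and laser_correction[i] is applied as an additive adjustment along the second internal axis. A minimal sketch, with illustrative helper names that are assumptions, not syntax from the specification:

```python
import math

def laser_elevation_deg(laser_angle_tangent: float) -> float:
    # laser_angle[i] is the tangent of the elevation angle relative to the
    # horizontal plane, so the angle is recovered with atan.
    return math.degrees(math.atan(laser_angle_tangent))

def laser_z_position(lidar_head_position_z: float, laser_correction: float) -> float:
    # laser_correction[i] adjusts the i-th laser position along the second
    # internal axis relative to lidar_head_position[2].
    return lidar_head_position_z + laser_correction
```

For example, a laser_angle value of 1.0 corresponds to a 45-degree elevation.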
- planar_buffer_disabled may indicate whether tracking of the closest nodes using a buffer is used in a process of coding a planar mode flag and a planar position in planar mode.
- for example, if the value of planar_buffer_disabled is equal to the first value (e.g., 1), this may indicate that tracking of the closest nodes using a buffer is not used in the process of coding the planar mode flag and the planar position in planar mode, and if the value of planar_buffer_disabled is equal to the second value (e.g., 0), this may indicate that tracking of the closest nodes using a buffer is used in that process. When planar_buffer_disabled does not exist, the value of planar_buffer_disabled may be inferred to be the second value (e.g., 0).
- implicit_qtbt_angular_max_node_min_dim_log2_to_split_z may indicate a log2 value of a node size in which a horizontal split of nodes is more preferable than a vertical split.
- implicit_qtbt_angular_max_diff_to_split_z may indicate a log2 value for a vertical-to-horizontal node size ratio allowed for a node. If implicit_qtbt_angular_max_diff_to_split_z does not exist, implicit_qtbt_angular_max_diff_to_split_z can be inferred to be 0.
- neighbour_context_restriction_flag may indicate whether the geometry node occupancy of the current node is coded with contexts determined from neighboring nodes located inside the parent node of the current node. For example, if the value of neighbour_context_restriction_flag is equal to the first value (e.g., 0), this may indicate that the geometry node occupancy of the current node is coded with contexts determined from neighboring nodes located inside the parent node of the current node, and if the value of neighbour_context_restriction_flag is equal to the second value (e.g., 1), this may indicate that the geometry node occupancy of the current node is not coded with contexts determined from neighboring nodes located inside the parent node of the current node.
- inferred_direct_coding_mode_enabled_flag may indicate whether direct_mode_flag exists in the corresponding geometry node syntax. For example, if the value of inferred_direct_coding_mode_enabled_flag is equal to the first value (e.g., 1), this may indicate that direct_mode_flag is present in the corresponding geometry node syntax, and if the value of inferred_direct_coding_mode_enabled_flag is equal to the second value (e.g., 0), this may indicate that direct_mode_flag does not exist in the corresponding geometry node syntax.
- bitwise_occupancy_coding_flag may indicate whether geometry node occupancy is encoded using bitwise contextualization of the corresponding syntax element occupancy_map. For example, if the value of bitwise_occupancy_coding_flag is equal to the first value (e.g., 1), this may indicate that geometry node occupancy is encoded using bitwise contextualization of the corresponding syntax element occupancy_map, and if the value of bitwise_occupancy_coding_flag is equal to the second value (e.g., 0), this may indicate that geometry node occupancy is encoded using the dictionary-encoded syntax element occupancy_map.
- adjacent_child_contextualization_enabled_flag may indicate whether adjacent children of neighboring octree nodes are used for bitwise occupancy contextualization. For example, if the value of adjacent_child_contextualization_enabled_flag is equal to the first value (e.g., 1), this may indicate that adjacent children of neighboring octree nodes are used for bitwise occupancy contextualization, and if the value of adjacent_child_contextualization_enabled_flag is equal to the second value (e.g., 0), this may indicate that children of neighboring octree nodes are not used for bitwise occupancy contextualization.
- log2_neighbour_avail_boundary may indicate a value of a variable NeighbAvailBoundary used in a decoding process. For example, if the value of neighbor_context_restriction_flag is equal to the first value (e.g., 1), NeighbAvailabilityMask may be set to 1, and if the value of neighbor_context_restriction_flag is equal to the second value (e.g., 0), NeighbAvailabilityMask may be set to 1 ⁇ log2_neighbour_avail_boundary.
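The derivation of NeighbAvailabilityMask described above can be sketched directly from the two cases given (the function name is an illustrative assumption):

```python
def neighb_availability_mask(neighbour_context_restriction_flag: int,
                             log2_neighbour_avail_boundary: int) -> int:
    # Follows the derivation described above:
    #   flag == 1 -> NeighbAvailabilityMask = 1
    #   flag == 0 -> NeighbAvailabilityMask = 1 << log2_neighbour_avail_boundary
    if neighbour_context_restriction_flag == 1:
        return 1
    return 1 << log2_neighbour_avail_boundary
```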
- log2_intra_pred_max_node_size may indicate an octree node size suitable for occupancy intra prediction.
- log2_trisoup_node_size can designate the variable TrisoupNodeSize as the size of triangle nodes.
- geom_scaling_enabled_flag may indicate whether a scaling process for geometry positions is applied during the geometry slice decoding process. For example, if the value of geom_scaling_enabled_flag is equal to the first value (e.g., 1), this may indicate that a scaling process for geometry positions is performed during the geometry slice decoding process, and if the value of geom_scaling_enabled_flag is equal to the second value (e.g., 0), this may indicate that no scaling is required for geometry positions. When the value of geom_scaling_enabled_flag is equal to the first value (e.g., 1), geom_base_qp may be further signaled.
- geom_base_qp may indicate a base value of a geometry position quantization parameter.
- gps_implicit_geom_partition_flag may indicate whether the implicit geometry partition is enabled for the corresponding sequence or slice. For example, if the value of gps_implicit_geom_partition_flag is equal to the first value (e.g., 1), this may indicate that the implicit geometry partition is enabled for the corresponding sequence or slice, and if the value of gps_implicit_geom_partition_flag is equal to the second value (e.g., 0), this may indicate that the implicit geometry partition is disabled for the corresponding sequence or slice.
- when the value of gps_implicit_geom_partition_flag is equal to the first value (e.g., 1), gps_max_num_implicit_qtbt_before_ot and gps_min_size_implicit_qtbt may be signaled.
- gps_max_num_implicit_qtbt_before_ot may indicate the maximum number of implicit QT and BT partitions before OT partitions.
- gps_min_size_implicit_qtbt may indicate the minimum size of implicit QT and BT partitions.
- gps_extension_flag may indicate whether gps_extension_data syntax elements exist in the corresponding GPS syntax structure. For example, if the value of gps_extension_flag is equal to the first value (e.g., 1), this may indicate that gps_extension_data syntax elements are present in the corresponding GPS syntax structure, and if the value of gps_extension_flag is equal to the second value (e.g., 0), this may indicate that gps_extension_data syntax elements do not exist in the corresponding GPS syntax structure.
- when the value of gps_extension_flag is equal to the first value (e.g., 1), gps_extension_data_flag may be further signaled.
- gps_extension_data_flag can have any value, and the presence and value of gps_extension_data_flag may not affect decoder conformance to the profile.
- syntax elements (or fields) expressed in the syntax structure of the APS may be syntax elements included in the APS or syntax elements signaled through the APS.
- aps_attr_parameter_set_id may provide an APS identifier for reference by other syntax elements, and aps_seq_parameter_set_id may indicate a value of sps_seq_parameter_set_id for an active SPS.
- attr_coding_type may indicate the coding type for the attribute. A table of the values of attr_coding_type and the attribute coding types assigned to them is shown in FIG. 25. As illustrated in FIG. 25, if the value of attr_coding_type is the first value (e.g., 0), the coding type may indicate predicting weight lifting; if the value of attr_coding_type is the second value (e.g., 1), the coding type may indicate RAHT; and if the value of attr_coding_type is the third value (e.g., 2), it may indicate fixed weight lifting.
- aps_attr_initial_qp may indicate an initial value of the variable SliceQp for each slice referring to the APS.
- the value of aps_attr_initial_qp may be in the range of 4 to 51.
- aps_attr_chroma_qp_offset may indicate offsets for the initial quantization parameter signaled by aps_attr_initial_qp.
- aps_slice_qp_delta_present_flag may indicate whether the ash_attr_qp_delta_luma and ash_attr_qp_delta_chroma syntax elements are present in the corresponding attribute slice header (ASH). For example, if the value of aps_slice_qp_delta_present_flag is equal to the first value (e.g., 1), this may indicate that ash_attr_qp_delta_luma and ash_attr_qp_delta_chroma are present in the corresponding attribute slice header, and if the value of aps_slice_qp_delta_present_flag is equal to the second value (e.g., 0), this may indicate that ash_attr_qp_delta_luma and ash_attr_qp_delta_chroma do not exist in the corresponding attribute slice header.
- if the value of attr_coding_type is the first value (e.g., 0) or the third value (e.g., 2), that is, if the coding type is predicting weight lifting or fixed weight lifting, lifting_num_pred_nearest_neighbours_minus1, lifting_search_range_minus1, and lifting_neighbour_bias[k] may be further signaled.
- a value obtained by adding 1 to lifting_num_pred_nearest_neighbours_minus1 may represent the maximum number of nearest neighbors to be used for prediction.
- a value of the variable NumPredNearestNeighbours may be set to be the same as lifting_num_pred_nearest_neighbours (a value obtained by adding 1 to lifting_num_pred_nearest_neighbours_minus1).
- a value obtained by adding 1 to lifting_search_range_minus1 may indicate a search range used for 'determination of nearest neighbors used for prediction' and 'build of distance-based levels of detail (LOD)'.
- lifting_neighbour_bias[k] may indicate a bias used to weight the kth components in the calculation of the Euclidean distance between two points as part of the nearest neighbor derivation process.
- lifting_scalability_enabled_flag may indicate whether the attribute decoding process allows pruned octree decode results for the input geometry points. For example, if the value of lifting_scalability_enabled_flag is equal to the first value (e.g., 1), this may indicate that the attribute decoding process allows pruned octree decode results for the input geometry points, and if the value of lifting_scalability_enabled_flag is equal to the second value (e.g., 0), this may indicate that the attribute decoding process requires the complete octree decode results for the input geometry points. When the value of lifting_scalability_enabled_flag is not the first value (e.g., 1), lifting_num_detail_levels_minus1 may be further signaled.
- lifting_num_detail_levels_minus1 may indicate the number of LODs for attribute coding.
- lifting_lod_regular_sampling_enabled_flag may be further signaled.
- lifting_lod_regular_sampling_enabled_flag may indicate whether the LOD is built by a regular sampling strategy. For example, if the value of lifting_lod_regular_sampling_enabled_flag is equal to the first value (e.g., 1), this may indicate that the LOD is built using the regular sampling strategy, and if the value of lifting_lod_regular_sampling_enabled_flag is equal to the second value (e.g., 0), this may indicate that a distance-based sampling strategy is used instead.
- when the value of lifting_scalability_enabled_flag is not the first value (e.g., 1), lifting_sampling_period_minus2[idx] or lifting_sampling_distance_squared_scale_minus1[idx] may be further signaled depending on whether the LOD is built by the regular sampling strategy (i.e., the value of lifting_lod_regular_sampling_enabled_flag). For example, if the value of lifting_lod_regular_sampling_enabled_flag is equal to the first value (e.g., 1), lifting_sampling_period_minus2[idx] may be signaled; otherwise, lifting_sampling_distance_squared_scale_minus1[idx] may be signaled.
- idx may increase by 1 from 0, up to a value obtained by subtracting 1 from num_detail_levels_minus1.
- a value obtained by adding 2 to lifting_sampling_period_minus2[idx] may indicate a sampling period for LOD idx.
- a value obtained by adding 1 to lifting_sampling_distance_squared_scale_minus1[idx] may represent a scaling factor for derivation of the square of the sampling distance for LOD idx.
- lifting_sampling_distance_squared_offset[idx] may indicate an offset for deriving the square of sampling distance for LOD idx.
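As an illustration of the minus-coded fields above, a decoder may recover the actual per-LOD sampling parameters by undoing the offsets. This is a hypothetical sketch (derive_lod_sampling_params is not a spec function); in actual signaling only one family (period-based or distance-based) is present per the sampling flag, but here all three are shown as parallel lists for simplicity.

```python
def derive_lod_sampling_params(sampling_period_minus2,
                               distance_squared_scale_minus1,
                               distance_squared_offset):
    """Each input list holds one entry per LOD index (idx)."""
    params = []
    for idx in range(len(sampling_period_minus2)):
        params.append({
            # "a value obtained by adding 2 ... may indicate a sampling period"
            "sampling_period": sampling_period_minus2[idx] + 2,
            # "a value obtained by adding 1 ... a scaling factor for derivation
            #  of the square of the sampling distance"
            "distance2_scale": distance_squared_scale_minus1[idx] + 1,
            # offset for deriving the square of the sampling distance
            "distance2_offset": distance_squared_offset[idx],
        })
    return params
```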
- If the value of attr_coding_type is equal to the first value (e.g., 0), that is, if the coding type is prediction weight lifting, lifting_adaptive_prediction_threshold, lifting_intra_lod_prediction_num_layers, lifting_max_num_direct_predictors, and inter_component_prediction_enabled_flag may be signaled.
- lifting_adaptive_prediction_threshold may indicate a threshold value for enabling adaptive prediction.
- a variable AdaptivePredictionThreshold specifying a threshold for switching the adaptive predictor selection mode may be set equal to the value of lifting_adaptive_prediction_threshold.
- lifting_intra_lod_prediction_num_layers may indicate the number of LOD layers that decoded points within the same LOD layer can refer to to generate a prediction value of a target point. For example, if the value of lifting_intra_lod_prediction_num_layers is equal to the value of LevelDetailCount, this may indicate that the target point can refer to decoded points within the same LOD layer for all LOD layers. If the value of lifting_intra_lod_prediction_num_layers is 0, this may indicate that the target point cannot refer to decoded points within the same LOD layer for arbitrary LOD layers.
- a value of lifting_intra_lod_prediction_num_layers may have a range between 0 and LevelDetailCount.
- lifting_max_num_direct_predictors may indicate the maximum number of predictors to be used for direct prediction.
- inter_component_prediction_enabled_flag may indicate whether a primary component of a multi-component attribute is used to predict reconstructed values of nonprimary components. For example, if the value of inter_component_prediction_enabled_flag is equal to the first value (e.g., 1), this may indicate that the primary component of the multi-component attribute is used to predict restoration values of non-primary components. As another example, if the value of inter_component_prediction_enabled_flag is equal to the second value (e.g., 0), this may indicate that all attribute components are independently restored.
- raht_prediction_enabled_flag may be signaled.
- raht_prediction_enabled_flag may indicate whether transform weight prediction from neighboring points is enabled in a RAHT decoding process.
- For example, if the value of raht_prediction_enabled_flag is equal to the first value (e.g., 1), this may indicate that transform weight prediction from neighboring points is enabled in the RAHT decoding process, and if the value of raht_prediction_enabled_flag is equal to the second value (e.g., 0), this may indicate that transform weight prediction from neighboring points is disabled in the RAHT decoding process.
- When the value of raht_prediction_enabled_flag is equal to the first value (e.g., 1), raht_prediction_threshold0 and raht_prediction_threshold1 may be further signaled.
- raht_prediction_threshold0 may indicate a threshold for terminating transform weighted prediction from neighboring points.
- raht_prediction_threshold1 may indicate a threshold for skipping transform weighted prediction from neighboring points.
- aps_extension_flag may indicate whether aps_extension_data_flag syntax elements exist in a corresponding APS syntax structure. For example, if the value of aps_extension_flag is equal to the first value (e.g., 1), this may indicate that aps_extension_data_flag syntax elements are present in the corresponding APS syntax structure, and if the value of aps_extension_flag is equal to the second value (e.g., 0), this may indicate that aps_extension_data_flag syntax elements do not exist in the corresponding APS syntax structure.
- When the value of aps_extension_flag is equal to the first value (e.g., 1), aps_extension_data_flag may be signaled.
- aps_extension_data_flag can have any value, and the presence and value of aps_extension_data_flag may not affect decoder conformance to the profile.
- TPS (tile parameter set)
- syntax elements (or fields) expressed in the syntax structure of the TPS may be syntax elements included in the TPS or syntax elements signaled through the TPS.
- tile_frame_idx may include an identifying number that can be used to identify the purpose of tile inventory.
- tile_seq_parameter_set_id may indicate a value of sps_seq_parameter_set_id for an active SPS.
- tile_id_present_flag may indicate how tiles are identified. For example, if the value of tile_id_present_flag is equal to the first value (e.g., 1), this may indicate that tiles are identified according to the value of the tile_id syntax element, and if the value of tile_id_present_flag is equal to the second value (e.g., 0), this may indicate that tiles are identified according to their location in the tile inventory.
- tile_cnt may represent the number of tile bounding boxes present in the tile inventory.
- tile_bounding_box_bits may indicate a bit depth for expressing bounding box information for a tile inventory.
- tile_id, tile_bounding_box_offset_xyz[tileId][k], and tile_bounding_box_size_xyz[tileId][k] may be signaled.
- tile_id can identify a specific tile in tile_inventory.
- tile_id may be signaled when the value of tile_id_present_flag is the first value (e.g., 1), and may be signaled as many times as the number of tile bounding boxes. If tile_id does not exist (if not signaled), the value of tile_id can be inferred as the index of a tile in the tile inventory provided by the loop variable tileIdx. It may be a requirement of bitstream conformance that all values of tile_id must be unique within the tile inventory.
- tile_bounding_box_offset_xyz[tileId][k] may indicate a bounding box including a slice identified by gsh_tile_id identical to tileId.
- tile_bounding_box_offset_xyz[tileId][k] may be the kth component of the (x, y, z) origin coordinates of the tile bounding box relative to TileOrigin[k].
- tile_bounding_box_size_xyz[tileId][k] may be the kth component of the width, height and depth of the tile bounding box.
- tile_origin_xyz[k] may be signaled while the variable k increases by 1 from 0 to 2.
- tile_origin_xyz[k] may represent the kth component of the tile origin in cartesian coordinates.
- the value of tile_origin_xyz[k] may be constrained to be equal to sps_bounding_box_offset[k].
- tile_origin_log2_scale may represent a scaling factor for scaling components of tile_origin_xyz.
- the value of tile_origin_log2_scale may be constrained to be equal to sps_bounding_box_offset_log2_scale.
- FIGS. 27 and 28 show an example of a syntax structure of a geometry slice.
- the syntax elements (or fields) represented in FIGS. 27 and 28 may be syntax elements included in the geometry slice or signaled through the geometry slice.
- a bitstream transmitted from a transmitting device to a receiving device may include one or more slices.
- Each slice may include a geometry slice and an attribute slice.
- the geometry slice may be a geometry slice bitstream
- the attribute slice may be an attribute slice bitstream.
- a geometry slice may include a geometry slice header (GSH), and an attribute slice may include an attribute slice header (ASH).
- the geometry slice bitstream may include a geometry slice header (geometry_slice_header()) and geometry slice data geometry_slice_data().
- the geometry slice data may include geometry or geometry-related data related to part or all of the point cloud.
- syntax elements signaled through a geometry slice header may be as follows.
- gsh_geometry_parameter_set_id may represent the value of gps_geom_parameter_set_id of active GPS.
- gsh_tile_id may indicate an identifier of a corresponding tile referred to by a corresponding geometry slice header (GSH).
- GSH geometry slice header
- gsh_slice_id may indicate an identifier of a corresponding slice for reference by other syntax elements.
- frame_idx may represent the (log2_max_frame_idx + 1) least significant bits of a notional frame number counter. Successive slices with different values of frame_idx may form part of different output point cloud frames.
- Consecutive slices having the same frame_idx value and not having an intermediate frame boundary marker data unit may form part of the same output point cloud frame as each other.
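The frame_idx semantics above amount to keeping the low-order bits of the notional frame counter. The sketch below is purely illustrative (frame_idx_from_counter is a hypothetical helper, not a spec function):

```python
def frame_idx_from_counter(frame_number: int, log2_max_frame_idx: int) -> int:
    # frame_idx carries the (log2_max_frame_idx + 1) least significant bits
    # of a notional frame number counter.
    num_bits = log2_max_frame_idx + 1
    return frame_number & ((1 << num_bits) - 1)
```

Two slices whose counters differ only above these bits would therefore carry the same frame_idx and rely on frame boundary marker data units to be distinguished.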
- gsh_num_points may indicate the maximum number of coded points in a corresponding slice.
- a requirement of bitstream conformance may be that the value of gsh_num_points should be greater than or equal to the number of decoded points in a slice.
- When the value of gps_box_present_flag is equal to the first value (e.g., 1), gsh_box_log2_scale, gsh_box_origin_x, gsh_box_origin_y, and gsh_box_origin_z may be signaled.
- gsh_box_log2_scale may be signaled when the value of gps_box_present_flag is equal to the first value (eg, 1) and the value of gps_gsh_box_log2_scale_present_flag is equal to the first value (eg, 1).
- gsh_box_log2_scale may indicate a scaling factor of a bounding box origin for a corresponding slice.
- gsh_box_origin_x may indicate the x value of the bounding box origin scaled by the value of gsh_box_log2_scale.
- gsh_box_origin_y may indicate the y value of the bounding box origin scaled by the value of gsh_box_log2_scale.
- gsh_box_origin_z may indicate the z value of the bounding box origin scaled by the value of gsh_box_log2_scale.
- the variables slice_origin_x, slice_origin_y, and/or slice_origin_z can be derived as follows.
- If the value of gps_gsh_box_log2_scale_present_flag is equal to the first value (e.g., 1), the variable originScale is set equal to gsh_box_log2_scale; otherwise (e.g., 0), originScale is set equal to gps_gsh_box_log2_scale. If the value of gps_box_present_flag is equal to the second value (e.g., 0), the values of the variables slice_origin_x, slice_origin_y, and slice_origin_z can be inferred to be 0.
- Otherwise, if the value of gps_box_present_flag is equal to the first value (e.g., 1), the following equations may be applied to the variables slice_origin_x, slice_origin_y, and slice_origin_z.
- slice_origin_x = gsh_box_origin_x << originScale
- slice_origin_y = gsh_box_origin_y << originScale
- slice_origin_z = gsh_box_origin_z << originScale
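The originScale selection and slice-origin derivation described above can be sketched as follows. This is an illustrative helper (derive_slice_origin is not a spec function); the flag and field names follow the syntax elements above.

```python
def derive_slice_origin(gps_box_present_flag,
                        gps_gsh_box_log2_scale_present_flag,
                        gsh_box_log2_scale, gps_gsh_box_log2_scale,
                        gsh_box_origin=(0, 0, 0)):
    """Return (slice_origin_x, slice_origin_y, slice_origin_z)."""
    if gps_box_present_flag == 0:
        # origin components are inferred to be 0
        return (0, 0, 0)
    # pick originScale depending on where the log2 scale was signaled
    if gps_gsh_box_log2_scale_present_flag == 1:
        origin_scale = gsh_box_log2_scale
    else:
        origin_scale = gps_gsh_box_log2_scale
    x, y, z = gsh_box_origin
    # slice_origin_* = gsh_box_origin_* << originScale
    return (x << origin_scale, y << origin_scale, z << origin_scale)
```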
- When the value of gps_implicit_geom_partition_flag is equal to the first value (e.g., 1), gsh_log2_max_nodesize_x, gsh_log2_max_nodesize_y_minus_x, and gsh_log2_max_nodesize_z_minus_y may be further signaled. If the value of gps_implicit_geom_partition_flag is equal to the second value (e.g., 0), gsh_log2_max_nodesize may be signaled.
- gsh_log2_max_nodesize_x can represent the bounding box size in the x dimension, that is, MaxNodesizeXLog2 used in the decoding process as follows.
- MaxNodeSizeXLog2 = gsh_log2_max_nodesize_x
- MaxNodeSizeX = 1 << MaxNodeSizeXLog2
- gsh_log2_max_nodesize_y_minus_x may indicate the bounding box size in the y dimension, that is, MaxNodesizeYLog2 used in the decoding process as follows.
- MaxNodeSizeYLog2 = gsh_log2_max_nodesize_y_minus_x + MaxNodeSizeXLog2
- MaxNodeSizeY = 1 << MaxNodeSizeYLog2
- gsh_log2_max_nodesize_z_minus_y may indicate the bounding box size in the z dimension, that is, MaxNodesizeZLog2 used in the decoding process as follows.
- MaxNodeSizeZLog2 = gsh_log2_max_nodesize_z_minus_y + MaxNodeSizeYLog2
- MaxNodeSizeZ = 1 << MaxNodeSizeZLog2
- gsh_log2_max_nodesize may represent the size of the root geometry octree node when the value of gps_implicit_geom_partition_flag is the second value (e.g., 0).
- the variable MaxNodeSize and the variable MaxGeometryOctreeDepth can be derived as follows.
- MaxNodeSize = 1 << gsh_log2_max_nodesize
- MaxGeometryOctreeDepth = gsh_log2_max_nodesize - log2_trisoup_node_size
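The differentially coded log2 node-size derivation above can be sketched as follows (derive_node_sizes is a hypothetical helper; the arithmetic mirrors the MaxNodeSizeXLog2/YLog2/ZLog2 derivations):

```python
def derive_node_sizes(gsh_log2_max_nodesize_x,
                      gsh_log2_max_nodesize_y_minus_x,
                      gsh_log2_max_nodesize_z_minus_y):
    """Return (MaxNodeSizeX, MaxNodeSizeY, MaxNodeSizeZ)."""
    # y and z log2 sizes are coded as differences from the previous dimension
    log2_x = gsh_log2_max_nodesize_x
    log2_y = gsh_log2_max_nodesize_y_minus_x + log2_x
    log2_z = gsh_log2_max_nodesize_z_minus_y + log2_y
    return (1 << log2_x, 1 << log2_y, 1 << log2_z)
```

Coding the y and z sizes as non-negative offsets from the previous dimension keeps the signaled values small when the bounding box is close to cubic.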
- geom_slice_qp_offset and geom_octree_qp_offsets_enabled_flag may be signaled.
- geom_slice_qp_offset may indicate an offset for the base geometry quantization parameter (geom_base_qp).
- geom_octree_qp_offsets_enabled_flag may indicate whether geom_node_qp_offset_eq0_flag is present in a corresponding geometry node syntax.
- For example, if the value of geom_octree_qp_offsets_enabled_flag is equal to the first value (e.g., 1), this may indicate that geom_node_qp_offset_eq0_flag is present in the corresponding geometry node syntax, and if the value of geom_octree_qp_offsets_enabled_flag is equal to the second value (e.g., 0), this may indicate that geom_node_qp_offset_eq0_flag is not present in the corresponding geometry node syntax.
- When the value of geom_octree_qp_offsets_enabled_flag is equal to the first value (e.g., 1), geom_octree_qp_offsets_depth may be signaled.
- geom_octree_qp_offsets_depth may represent the depth of a geometry octree when geom_node_qp_offset_eq0_flag exists in the geometry node syntax.
- Geometry slice data may include a repetition statement (first repetition statement) that is repeated as many times as the value of MaxGeometryOctreeDepth.
- MaxGeometryOctreeDepth may represent the maximum value of geometry octree depth. In the first iteration statement, the depth can be increased by 1 from 0 to (MaxGeometryOctreeDepth-1).
- the first repetition statement may include a repetition statement (second repetition statement) that is repeated as many times as the value of NumNodesAtDepth. NumNodesAtDepth[depth] may indicate the number of nodes to be decoded at a corresponding depth.
- nodeIdx can be increased by 1 from 0 until it becomes (NumNodesAtDepth[depth] - 1).
- xN = NodeX[depth][nodeIdx]
- yN = NodeY[depth][nodeIdx]
- zN = NodeZ[depth][nodeIdx]
- geometry_node(depth, nodeIdx, xN, yN, zN) may be signaled.
- NodeX[depth][nodeIdx], NodeY[depth][nodeIdx], and NodeZ[depth][nodeIdx] may indicate the x, y, and z coordinates of the nodeIdx-th node in decoding order at a given depth.
- a geometry bitstream of a corresponding node at a corresponding depth may be transmitted through geometry_node (depth, nodeIdx, xN, yN, zN).
- geometry_trisoup_data() may be further signaled. That is, if the size of trisoup nodes is greater than 0, a geometry bitstream encoded with trisoup geometry may be signaled through geometry_trisoup_data().
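The two nested repetition statements of geometry slice data described above can be sketched as follows. handle_node is a hypothetical stand-in for parsing geometry_node(depth, nodeIdx, xN, yN, zN); the loop bounds follow MaxGeometryOctreeDepth and NumNodesAtDepth[depth].

```python
def iterate_geometry_nodes(max_geometry_octree_depth, num_nodes_at_depth,
                           node_x, node_y, node_z, handle_node):
    """node_x/node_y/node_z are indexed as [depth][nodeIdx]."""
    # first repetition statement: over octree depths
    for depth in range(max_geometry_octree_depth):
        # second repetition statement: over nodes decoded at this depth
        for node_idx in range(num_nodes_at_depth[depth]):
            xN = node_x[depth][node_idx]
            yN = node_y[depth][node_idx]
            zN = node_z[depth][node_idx]
            # stands in for geometry_node(depth, nodeIdx, xN, yN, zN)
            handle_node(depth, node_idx, xN, yN, zN)
```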
- FIGS. 29 and 30 show an example of a syntax structure of an attribute slice.
- the syntax elements (or fields) represented in FIGS. 29 and 30 may be syntax elements included in an attribute slice or syntax elements signaled through an attribute slice.
- the attribute slice bitstream (attribute_slice_bitstream()) may include an attribute slice header (attribute_slice_header()) and attribute slice data attribute_slice_data().
- Attribute slice data attribute_slice_data( ) may include an attribute or attribute-related data related to part or all of the point cloud.
- syntax elements signaled through the attribute slice header may be as follows.
- ash_attr_parameter_set_id may indicate a value of aps_attr_parameter_set_id of an active APS.
- ash_attr_sps_attr_idx may indicate an attribute set in an active SPS.
- ash_attr_geom_slice_id may indicate the value of gsh_slice_id of the active geometry slice header.
- aps_slice_qp_delta_present_flag may represent whether ash_attr_layer_qp_delta_luma and ash_attr_layer_qp_delta_chroma syntax elements are present in the current ASH.
- For example, when the value of aps_slice_qp_delta_present_flag is equal to the first value (e.g., 1), this may indicate that ash_attr_layer_qp_delta_luma and ash_attr_layer_qp_delta_chroma exist in the current ASH, and when the value of aps_slice_qp_delta_present_flag is equal to the second value (e.g., 0), this may indicate that ash_attr_layer_qp_delta_luma and ash_attr_layer_qp_delta_chroma do not exist in the current ASH.
- ash_attr_qp_delta_luma may be signaled.
- ash_attr_qp_delta_luma may indicate a luma delta quantization parameter (qp) from an initial slice qp in the active attribute parameter set.
- When the value of attribute_dimension_minus1[ash_attr_sps_attr_idx] is greater than 0, ash_attr_qp_delta_chroma may be signaled.
- ash_attr_qp_delta_chroma may indicate a chroma delta quantization parameter (qp) from an initial slice qp in the active attribute parameter set.
- Variable InitialSliceQpY and variable InitialSliceQpC can be derived as follows.
- InitialSliceQpY = aps_attr_initial_qp + ash_attr_qp_delta_luma
- InitialSliceQpC = aps_attr_initial_qp + aps_attr_chroma_qp_offset + ash_attr_qp_delta_chroma
- When the value of ash_attr_layer_qp_delta_present_flag is equal to the first value (e.g., 1), ash_attr_num_layer_qp_minus1 may be signaled. A value obtained by adding 1 to the value of ash_attr_num_layer_qp_minus1 may represent the number of layers for which ash_attr_qp_delta_luma and ash_attr_qp_delta_chroma are signaled. If ash_attr_num_layer_qp is not signaled, the value of ash_attr_num_layer_qp can be inferred to be 0.
- When the value of ash_attr_layer_qp_delta_present_flag is equal to the first value (e.g., 1), ash_attr_layer_qp_delta_luma[i] may be repeatedly signaled as many times as the value of NumLayerQp. i can be increased by 1 from 0 to (NumLayerQp - 1). Also, in the iterative process of increasing i by 1, when the value of attribute_dimension_minus1[ash_attr_sps_attr_idx] is greater than 0, ash_attr_layer_qp_delta_chroma[i] may be further signaled.
- ash_attr_layer_qp_delta_luma may indicate a luma delta quantization parameter (qp) from InitialSliceQpY in each layer.
- ash_attr_layer_qp_delta_chroma may indicate a chroma delta quantization parameter (qp) from InitialSliceQpC in each layer.
- Variable SliceQpY[i] and variable SliceQpC[i] can be derived as follows.
- SliceQpY[i] = InitialSliceQpY + ash_attr_layer_qp_delta_luma[i]
- SliceQpC[i] = InitialSliceQpC + ash_attr_layer_qp_delta_chroma[i]
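The InitialSliceQp and per-layer SliceQp derivations above can be combined into one sketch. derive_slice_qps is a hypothetical helper; the parameter names follow the syntax elements and variables above.

```python
def derive_slice_qps(aps_attr_initial_qp, aps_attr_chroma_qp_offset,
                     ash_attr_qp_delta_luma, ash_attr_qp_delta_chroma,
                     layer_qp_delta_luma, layer_qp_delta_chroma):
    """Return (SliceQpY[], SliceQpC[]) for all signaled layers."""
    # InitialSliceQpY = aps_attr_initial_qp + ash_attr_qp_delta_luma
    initial_qp_y = aps_attr_initial_qp + ash_attr_qp_delta_luma
    # InitialSliceQpC adds the chroma offset and chroma delta
    initial_qp_c = (aps_attr_initial_qp + aps_attr_chroma_qp_offset
                    + ash_attr_qp_delta_chroma)
    # per-layer deltas on top of the initial slice QPs
    slice_qp_y = [initial_qp_y + d for d in layer_qp_delta_luma]
    slice_qp_c = [initial_qp_c + d for d in layer_qp_delta_chroma]
    return slice_qp_y, slice_qp_c
```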
- ash_attr_region_qp_delta_present_flag may be further signaled. If the value of ash_attr_region_qp_delta_present_flag is equal to the first value (e.g., 1), it may indicate that ash_attr_region_qp_delta, region bounding box origin, and size are present in the current attribute slice header. If the value of ash_attr_region_qp_delta_present_flag is equal to the second value (e.g., 0), it may indicate that ash_attr_region_qp_delta, region bounding box origin, and size do not exist in the current attribute slice header.
- the first value e.g. 1
- the value of ash_attr_region_qp_delta_present_flag is equal to the second value (e.g., 0)
- ash_attr_qp_region_box_origin_x may indicate the x offset of the region bounding box relative to slice_origin_x.
- ash_attr_qp_region_box_origin_y may indicate the y offset of the region bounding box relative to slice_origin_y.
- ash_attr_qp_region_box_origin_z may indicate the z offset of the region bounding box relative to slice_origin_z.
- ash_attr_qp_region_box_size_width may indicate the width of the region bounding box
- ash_attr_qp_region_box_size_height may indicate the height of the region bounding box
- ash_attr_qp_region_box_size_depth may indicate the depth of the region bounding box.
- ash_attr_region_qp_delta may indicate delta qp from SliceQpY[i] and SliceQpC[i] of the region designated by ash_attr_qp_region_box.
- a variable RegionboxDeltaQp specifying a region box delta quantization parameter may be set equal to a value of ash_attr_region_qp_delta.
- syntax elements signaled through attribute slice data may be as follows.
- zerorun can indicate the number of zeros preceding predIndex or residual.
- predIndex[i] may indicate a predictor index for decoding the i-th point value of the attribute.
- the value of predIndex[i] can have a range from 0 to the value of max_num_predictors.
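As an illustration of the zerorun semantics above, a run of zeros can be expanded back in front of each signaled value. expand_zero_runs is a hypothetical helper, not a syntax element; it assumes the coded stream is a sequence of (zerorun, value) pairs.

```python
def expand_zero_runs(pairs):
    """Expand (zerorun, value) pairs: each pair contributes `zerorun`
    zeros followed by the value itself."""
    out = []
    for zerorun, value in pairs:
        out.extend([0] * zerorun)  # the zeros preceding predIndex/residual
        out.append(value)
    return out
```

Run-length coding the zeros in this way is cheaper than signaling each zero value individually when most prediction indices or residuals are zero.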
- FIG. 31 shows an example of a syntax structure of a metadata slice.
- the syntax elements (or fields) represented in FIG. 31 may be syntax elements included in a metadata slice or syntax elements signaled through a metadata slice.
- the metadata slice bitstream may include a metadata slice header (metadata_slice_header()) and metadata slice data metadata_slice_data().
- (b) of FIG. 31 shows an example of a metadata slice header
- (c) of FIG. 31 shows an example of metadata slice data.
- syntax elements signaled through the meta data slice header may be as follows.
- msh_slice_id may indicate an identifier for identifying a corresponding metadata slice bitstream.
- msh_geom_slice_id may indicate an identifier for identifying a geometry slice related to meta data carried in a corresponding meta data slice.
- msh_attr_id may indicate an identifier for identifying an attribute related to meta data carried in a corresponding meta data slice.
- msh_attr_slice_id may indicate an identifier for identifying an attribute slice related to meta data carried in a corresponding meta data slice.
- a meta data bitstream (metadata_bitstream()) may be signaled through meta data slice data.
- the G-PCC bitstream may refer to a bitstream of point cloud data consisting of a sequence of TLV structures.
- the TLV structure may be referred to as "TLV encapsulation structure", “G-PCC TLV encapsulation structure”, or "G-PCC TLV structure”.
- each TLV encapsulation structure may consist of a TLV type (TLV TYPE), a TLV length (TLV LENGTH), and/or a TLV payload (TLV PAYLOAD).
- TLV type may be TLV payload type information
- the TLV length may be TLV payload length information
- the TLV payload may be a payload (or payload bytes).
- tlv_type may indicate TLV payload type information
- tlv_num_payload_bytes may indicate TLV payload length information
- tlv_payload_byte[i] may indicate a TLV payload.
- tlv_payload_byte[i] may be signaled as much as the value of tlv_num_payload_bytes, and i may increase by 1 from 0 to (tlv_num_payload_bytes - 1).
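As a rough illustration of walking a sequence of TLV encapsulation structures, the sketch below assumes an 8-bit tlv_type and a 32-bit big-endian tlv_num_payload_bytes; the text above only names the three fields, so these widths are assumptions, and parse_tlv_stream is a hypothetical helper.

```python
import struct

def parse_tlv_stream(data: bytes):
    """Split a byte stream into (tlv_type, payload) pairs.

    Assumed layout per TLV encapsulation structure:
      1 byte  tlv_type
      4 bytes tlv_num_payload_bytes (big-endian)
      N bytes tlv_payload_byte[0..N-1]
    """
    units, pos = [], 0
    while pos < len(data):
        tlv_type = data[pos]
        (num_payload_bytes,) = struct.unpack_from(">I", data, pos + 1)
        payload = data[pos + 5 : pos + 5 + num_payload_bytes]
        units.append((tlv_type, payload))
        pos += 5 + num_payload_bytes
    return units
```

Because each structure carries its own length, a receiver can skip payload types it does not understand without parsing their contents.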
- TLV payloads may include SPS, GPS, one or more APSs, a tile inventory, a geometry slice, one or more attribute slices, and one or more metadata slices.
- the TLV payload of each TLV encapsulation structure may include one of an SPS, a GPS, one or more APSs, a tile inventory, a geometry slice, one or more attribute slices, and one or more metadata slices according to type information of the TLV payload. Data included in the TLV payload may be distinguished through the type information of the TLV payload.
- For example, as illustrated in the accompanying figure, if the value of tlv_type is 0, it may indicate that the data included in the TLV payload is the SPS, and if the value of tlv_type is 1, it may indicate that the data included in the TLV payload is the GPS. If the value of tlv_type is 2, it may indicate that the data included in the TLV payload is a geometry slice, and if the value of tlv_type is 3, it may indicate that the data included in the TLV payload is an APS.
- tlv_type = 4
- tlv_type = 5
- tlv_type = 6
- NAL (Network Abstraction Layer)
- Information included in the SPS in the TLV payload may include some or all of the information included in the SPS of FIG. 21 .
- Information included in the tile inventory in the TLV payload may include some or all of the information included in the tile inventory of FIG. 26 .
- Information included in the GPS in the TLV payload may include some or all of the information included in the GPS of FIG. 23 .
- Information included in the APS in the TLV payload may include some or all of the information included in the APS of FIG. 24 .
- Information included in the geometry slice in the TLV payload may include some or all of the information included in the geometry slices of FIGS. 27 and 28 .
- Information included in the attribute slice in the TLV payload may include all or part of the information included in the attribute slices of FIGS. 29 and 30 .
- Information included in the metadata slice in the TLV payload may include all or part of the information included in the metadata slice of FIG. 31 .
- Such a TLV encapsulation structure may be generated by the transmission unit, transmission processing unit, and encapsulation unit mentioned in this specification.
- the G-PCC bitstream composed of TLV encapsulation structures may be transmitted to the receiving device as it is or encapsulated and transmitted to the receiving device.
- the encapsulation processing unit 1525 may encapsulate a G-PCC bitstream composed of TLV encapsulation structures in the form of a file/segment and transmit it.
- the decapsulation processing unit 1710 may obtain a G-PCC bitstream by decapsulating the encapsulated file/segment.
- the G-PCC bitstream may be encapsulated in an ISOBMFF-based file format.
- the G-PCC bitstream may be stored in a single track or multiple tracks in the ISOBMFF file.
- single tracks or multiple tracks in a file may be referred to as “tracks” or “G-PCC tracks”.
- ISOBMFF-based files may be referred to as containers, container files, media files, G-PCC files, and the like.
- the file may be composed of boxes and/or information that may be referred to as ftyp, moov, and mdat.
- a box ftyp may provide information related to a file type or file compatibility for a corresponding file.
- the receiving device can identify the corresponding file by referring to the ftyp box.
- the mdat box is also referred to as a media data box, and may include actual media data.
- a geometry slice or a coded geometry bitstream
- zero or more attribute slices or a coded attribute bitstream
- the moov box is also called a movie box, and may contain metadata about the media data of the file.
- the moov box may include information necessary for decoding and reproducing the corresponding media data, and may include information about tracks and samples of the corresponding file.
- a moov box can act as a container for all metadata.
- the moov box may be a box of a top layer among meta data related boxes.
- the moov box may include a track box providing information related to a track of a file
- the trak box may include a media box providing media information of a corresponding track
- a track reference container (tref) box for linking a corresponding track and a sample of a file corresponding to the corresponding track may be included.
- the media box (MediaBox) may include a media information container (minf) box providing information of corresponding media data and a handler (hdlr) box (HandlerBox) indicating the type of stream.
- the minf box may include a sample table (stbl) box providing meta data related to the sample of the mdat box.
- the stbl box may include a sample description (stsd) box providing information on a used coding type and initialization information necessary for the coding type.
- the sample description (stsd) box may include a sample entry for a track.
- signaling information such as SPS, GPS, APS, and tile inventory may be included in a sample entry of a moov box or a sample of an mdat box in a file.
- a G-PCC track may be defined as a volumetric visual track carrying a geometry slice (or a coded geometry bitstream), an attribute slice (or a coded attribute bitstream), or both a geometry slice and an attribute slice.
- the volumetric visual track may be identified by the volumetric visual media handler type 'volv' in the HandlerBox of the MediaBox and/or by a volumetric visual media header (vvhd) in the minf box of the MediaBox.
- a minf box may be referred to as a media information container or a media information box. The minf box may be included in the media box (MediaBox), the media box (MediaBox) may be included in the track box, and the track box may be included in the moov box of the file.
- a single volumetric visual track or multiple volumetric visual tracks can exist in a file.
- Volumetric visual tracks can use the volumetric visual media header (vvhd) box in the media information box (MediaInformationBox).
- a volumetric visual media header box can be defined as follows.
- the syntax of the volumetric visual media header box may be as follows.
- version may be an integer value representing the version of the volumetric visual media header box.
- volumetric visual tracks may use a volumetric visual sample entry as follows for transmission of signaling information.
- compressorname may indicate the name of a compressor for informative purposes.
- a sample entry inherited from VolumetricVisualSampleEntry (i.e., a sample entry whose superclass is VolumetricVisualSampleEntry) may include a GPCC decoder configuration box (GPCCConfigurationBox).
- G-PCC Decoder Configuration Box (GPCCConfigurationBox)
- the G-PCC decoder configuration box may include GPCCDecoderConfigurationRecord() as follows.
- class GPCCConfigurationBox extends Box('gpcC') { ... }
- GPCCDecoderConfigurationRecord( ) may provide G-PCC decoder configuration information for geometry-based point cloud content.
- the syntax of GPCCDecoderConfigurationRecord() can be defined as follows.
- configurationVersion may be a version field. Incompatible changes to that record may be indicated by a change in the version number. Values for profile_idc, profile_compatibility_flags, and level_idc may be valid for all parameter sets activated when the bitstream described by the corresponding record is decoded.
- profile_idc may indicate a profile followed by a bitstream related to a corresponding configuration record.
- profile_idc may include a profile code to indicate a specific profile of G-PCC. If the value of profile_compatibility_flags is 1, it may indicate that the corresponding bitstream conforms to the profile indicated by the profile_idc field. Each bit in profile_compatibility_flags can only be set if all parameter sets set that bit.
- level_idc may include a profile level code.
- level_idc may indicate a capability level equal to or higher than the highest level indicated for the highest tier in all parameter sets.
- numOfSetupUnitArrays may indicate the number of arrays of G-PCC setup units of the type indicated by setupUnitType. That is, numOfSetupUnitArrays may indicate the number of arrays of G-PCC setup units included in GPCCDecoderConfigurationRecord().
- setupUnitType, setupUnit_completeness, and numOfSetupUnits may be further included in GPCCDecoderConfigurationRecord().
- setupUnitType may indicate the type of G-PCC setupUnits. That is, the value of setupUnitType may be one of values indicating SPS, GPS, APS, or tile inventory. If the value of setupUnit_completeness is 1, it may indicate that all setup units of a given type are in the next array and there are none in the corresponding stream.
- setupUnit_completeness field if the value of the setupUnit_completeness field is 0, it may indicate that additional setup units of the indicated type are present in the corresponding stream.
- numOfSetupUnits may indicate the number of G-PCC setup units of the type indicated by setupUnitType.
- setupUnit (tlv_encapsulation setupUnit) may be further included in GPCCDecoderConfigurationRecord().
- setupUnit is included by a loop that repeats as many times as the value of numOfSetupUnits, and this loop can be repeated while i increases by 1 from 0 until it becomes (numOfSetupUnits - 1).
- setupUnit may be a setup unit of the type indicated by setupUnitType, for example, an instance of a TLV encapsulation structure that carries an SPS, GPS, APS, or tile inventory.
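The setup-unit loops of GPCCDecoderConfigurationRecord() described above might be sketched as follows. read_field and read_setup_unit are hypothetical bitstream-reader callbacks, not part of the record syntax; the loop structure follows numOfSetupUnitArrays and numOfSetupUnits.

```python
def parse_setup_unit_arrays(read_field, read_setup_unit):
    """Collect (setupUnitType, setupUnit_completeness, setupUnits) per array."""
    arrays = []
    # outer loop: one iteration per array of setup units
    num_arrays = read_field("numOfSetupUnitArrays")
    for _ in range(num_arrays):
        setup_unit_type = read_field("setupUnitType")      # SPS/GPS/APS/tile inventory
        completeness = read_field("setupUnit_completeness")  # 1: all units in this array
        num_units = read_field("numOfSetupUnits")
        # inner loop: i from 0 to numOfSetupUnits - 1
        units = [read_setup_unit() for _ in range(num_units)]
        arrays.append((setup_unit_type, completeness, units))
    return arrays
```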
- Volumetric visual tracks may use volumetric visual samples for transmission of actual data.
- a volumetric visual sample entry may be referred to as a sample entry or a G-PCC sample entry, and a volumetric visual sample may be referred to as a sample or a G-PCC sample.
- a single volumetric visual track may be referred to as a single track or a G-PCC single track, and multiple volumetric visual tracks may be referred to as a multiple track or multiple G-PCC tracks.
- Signaling information related to grouping of samples, grouping of tracks, single track encapsulation of a G-PCC bitstream, or encapsulation of multiple tracks of a G-PCC bitstream, or signaling information for supporting spatial access, may be added to the sample entry in the form of a full box.
- the signaling information may include at least one of a GPCC entry information box (GPCCEntryInfoBox), a GPCC component type box (GPCCComponentTypeBox), a cubic region information box (CubicRegionInfoBox), a 3D bounding box information box (3DBoundingBoxInfoBox), or a tile inventory box (TileInventoryBox).
- The syntax structure of the G-PCC entry information box (GPCCEntryInfoBox) can be defined as follows.
- GPCCEntryInfoBox having a sample entry type of 'gpsb' may include GPCCEntryInfoStruct ().
- the syntax of GPCCEntryInfoStruct () can be defined as follows.
- GPCCEntryInfoStruct can include main_entry_flag and dependent_on.
- main_entry_flag may indicate whether or not the track is an entry point for decoding a G-PCC bitstream.
- dependent_on may indicate whether decoding is dependent on other tracks. If dependent_on is present in a sample entry, dependent_on may indicate that decoding of samples in a track is dependent on other tracks. If the value of dependent_on is 1, GPCCEntryInfoStruct() may further include dependency_id.
- dependency_id may indicate identifiers of tracks for decoding related data. If dependency_id exists in a sample entry, dependency_id may indicate an identifier of a track carrying a G-PCC sub-bitstream on which decoding of samples within the track depends. If dependency_id exists in a sample group, dependency_id may indicate identifiers of samples carrying a G-PCC sub-bitstream on which decoding of related samples depends.
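- The field descriptions above suggest a GPCCEntryInfoStruct() of roughly the following shape; the bit widths shown are assumptions for illustration:

```
aligned(8) class GPCCEntryInfoStruct {
    unsigned int(1) main_entry_flag; // whether this track is an entry point for decoding
    unsigned int(1) dependent_on;    // whether decoding depends on other tracks
    if (dependent_on) {
        unsigned int(16) dependency_id; // identifier of the track this track depends on
    }
}
```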
- the syntax structure of the G-PCC component type box (GPCCComponentTypeBox) can be defined as follows.
- a GPCCComponentTypeBox having a sample entry type of 'gtyp' may include GPCCComponentTypeStruct().
- the syntax of GPCCComponentTypeStruct() can be defined as follows.
- numOfComponents may indicate the number of G-PCC components signaled to the corresponding GPCCComponentTypeStruct.
- gpcc_type can be included in GPCCComponentTypeStruct via a repetition statement repeated as many times as the value of numOfComponents. This iteration statement can be repeated while i increases by 1 from 0 to (numOfComponents - 1).
- gpcc_type may indicate the type of G-PCC component. For example, if the value of gpcc_type is 2, it can indicate a geometry component, and if it is 4, it can indicate an attribute component.
- AttrIdx may indicate an identifier of an attribute signaled in SPS().
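- Based on the descriptions of numOfComponents, gpcc_type, and AttrIdx above, GPCCComponentTypeStruct() may be sketched as follows; the bit widths are illustrative assumptions:

```
aligned(8) class GPCCComponentTypeStruct {
    unsigned int(8) numOfComponents;
    for (i = 0; i < numOfComponents; i++) {
        unsigned int(8) gpcc_type;   // 2: geometry component, 4: attribute component
        if (gpcc_type == 4) {
            unsigned int(8) AttrIdx; // identifier of the attribute signaled in SPS()
        }
    }
}
```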
- A G-PCC component type box (GPCCComponentTypeBox) may be included in a sample entry for multiple tracks. If the GPCCComponentTypeBox is present in the sample entries of tracks carrying some or all of the G-PCC bitstream, GPCCComponentTypeStruct() may indicate one or more G-PCC component types carried by each track. A GPCCComponentTypeBox including GPCCComponentTypeStruct(), or GPCCComponentTypeStruct() itself, may be referred to as G-PCC component information.
- the encapsulation processor mentioned in this disclosure may create a sample group by grouping one or more samples.
- the encapsulation processor, meta data processor, or signaling processor mentioned in this disclosure may signal signaling information related to a sample group to a sample, a sample group, or a sample entry. That is, sample group information associated with a sample group may be added to a sample, sample group, or sample entry.
- the sample group information may be 3D bounding box sample group information, 3D region sample group information, 3D tile sample group information, 3D tile inventory sample group information, and the like.
- the encapsulation processor mentioned in this disclosure may create a track group by grouping one or more tracks.
- the encapsulation processor, meta data processor, or signaling processor mentioned in this disclosure may signal signaling information related to a track group to a sample, track group, or sample entry. That is, track group information associated with a track group may be added to a sample, track group, or sample entry.
- the track group information may be 3D bounding box track group information, point cloud composition track group information, spatial area track group information, 3D tile track group information, 3D tile inventory track group information, and the like.
- FIG. 34 is a diagram for explaining an ISOBMFF-based file including a single track.
- FIG. 34(a) shows an example of the layout of an ISOBMFF-based file including a single track.
- FIG. 34(b) shows a sample structure of an mdat box when a G-PCC bitstream is stored in a single track of a file.
- FIG. 35 is a diagram for explaining an ISOBMFF-based file including multiple tracks.
- FIG. 35(a) shows an example of the layout of an ISOBMFF-based file including multiple tracks.
- FIG. 35(b) shows an example of the sample structure of an mdat box when a G-PCC bitstream is stored in multiple tracks of a file.
- a stsd box (SampleDescriptionBox) included in the moov box of the file may include a sample entry for a single track storing a G-PCC bitstream.
- SPS, GPS, APS, and tile inventories can be included in sample entries in the moov box or samples in the mdat box in the file.
- Zero or more geometry slices and attribute slices may be included in the sample of the mdat box in the file.
- each sample may contain multiple G-PCC components. That is, each sample may consist of one or more TLV encapsulation structures.
- a single track sample entry can be defined as follows.
- The sample entry type 'gpe1' or 'gpeg' is mandatory, and one or more sample entries may be present.
- a G-PCC track can use a VolumetricVisualSampleEntry having a sample entry type of 'gpe1' or 'gpeg'.
- the sample entry of the G-PCC track may include a G-PCC decoder configuration box (GPCCConfigurationBox), and the G-PCC decoder configuration box may include a G-PCC decoder configuration record (GPCCDecoderConfigurationRecord()).
- GPCCDecoderConfigurationRecord() may include at least one of configurationVersion, profile_idc, profile_compatibility_flags, level_idc, numOfSetupUnitArrays, setupUnitType, setupUnit_completeness, numOfSetupUnits, and setupUnit.
- the setupUnit array field included in GPCCDecoderConfigurationRecord() may include TLV encapsulation structures including one SPS.
- When the sample entry type is 'gpe1', all parameter sets such as SPS, GPS, APS, and tile inventory may be included in the array of setupUnits.
- When the sample entry type is 'gpeg', the above parameter sets may be included in the array of setupUnits (i.e., the sample entry) or included in the corresponding stream (i.e., samples).
- An example of the syntax of a G-PCC sample entry (GPCCSampleEntry) having a sample entry type of 'gpe1' is as follows.
- a G-PCC sample entry (GPCCSampleEntry) having a sample entry type of 'gpe1' may include GPCCConfigurationBox, 3DBoundingBoxInfoBox(), CubicRegionInfoBox(), and TileInventoryBox().
- 3DBoundingBoxInfoBox( ) may indicate 3D bounding box information of point cloud data related to samples carried to a corresponding track.
- CubicRegionInfoBox( ) may indicate one or more pieces of spatial domain information of point cloud data carried by samples within a corresponding track.
- TileInventoryBox() may indicate 3D tile inventory information of point cloud data carried by samples within a corresponding track.
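- Collecting the boxes listed above, the G-PCC sample entry with sample entry type 'gpe1' may be sketched as follows; this is an illustrative reconstruction consistent with the description above, not a normative definition:

```
aligned(8) class GPCCSampleEntry()
    extends VolumetricVisualSampleEntry('gpe1') {
    GPCCConfigurationBox config;  // contains GPCCDecoderConfigurationRecord()
    3DBoundingBoxInfoBox();       // 3D bounding box of the carried point cloud data
    CubicRegionInfoBox();         // spatial region information
    TileInventoryBox();           // 3D tile inventory information
}
```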
- the sample may include TLV encapsulation structures including geometry slices. Additionally, a sample may include TLV encapsulation structures that include one or more parameter sets. Additionally, a sample may contain TLV encapsulation structures containing one or more attribute slices.
- each geometry slice or attribute slice may be mapped to an individual track.
- a geometry slice may be mapped to track 1
- an attribute slice may be mapped to track 2.
- a track (track 1) carrying a geometry slice may be referred to as a geometry track or a G-PCC geometry track
- a track (track 2) carrying an attribute slice may be referred to as an attribute track or a G-PCC attribute track.
- the geometry track may be defined as a volumetric visual track carrying geometry slices
- the attribute track may be defined as a volumetric visual track carrying attribute slices.
- a track carrying a part of a G-PCC bitstream including both a geometry slice and an attribute slice may be referred to as a multiplexed track.
- each sample in a track may include at least one TLV encapsulation structure carrying data of a single G-PCC component.
- Each sample does not contain both geometry and attributes, but may contain multiple attributes.
- Multi-track encapsulation of the G-PCC bitstream can enable a G-PCC player to effectively access one of the G-PCC components.
- the following conditions need to be satisfied for a G-PCC player to effectively access one of the G-PCC components.
- a new box is added to indicate the role of the stream included in the corresponding track.
- the new box may be the aforementioned G-PCC component type box (GPCCComponentTypeBox). That is, GPCCComponentTypeBox can be included in sample entries for multiple tracks.
- a track reference is introduced from a track carrying only the G-PCC geometry bitstream to a track carrying the G-PCC attribute bitstream.
- GPCCComponentTypeBox may include GPCCComponentTypeStruct(). If GPCCComponentTypeBox is present in a sample entry of tracks carrying some or all of the G-PCC bitstream, GPCCComponentTypeStruct() may indicate the type (e.g., geometry, attribute) of one or more G-PCC components carried by each track. For example, if the value of the gpcc_type field included in GPCCComponentTypeStruct() is 2, it can indicate a geometry component, and if it is 4, it can indicate an attribute component. In addition, when the value of the gpcc_type field is 4, that is, when it indicates an attribute component, an AttrIdx field indicating an identifier of an attribute signaled in SPS() may be further included.
- the syntax of the sample entry can be defined as follows.
- The sample entry type 'gpc1' or 'gpcg' is mandatory, and one or more sample entries may be present. Multiple tracks (e.g., geometry or attribute tracks) may use a VolumetricVisualSampleEntry with a sample entry type of 'gpc1' or 'gpcg'.
- all parameter sets can be present in the setupUnit array.
- the parameter set may exist in the corresponding array or stream.
- the GPCCComponentTypeBox may not exist.
- the SPS, GPS and tile inventories may be present in the SetupUnit array of tracks carrying the G-PCC geometry bitstream. All relevant APSs may be present in the SetupUnit array of tracks carrying the G-PCC attribute bitstream.
- SPS, GPS, APS or tile inventory may exist in the corresponding array or stream.
- a GPCCComponentTypeBox may need to be present.
- The compressorname of the base class VolumetricVisualSampleEntry can indicate the name of the compressor used, with the recommended value "\013GPCC coding".
- The first byte (octal 13 or decimal 11, represented by \013) may indicate the number of bytes of the remaining string.
- config may include G-PCC decoder configuration information. info may indicate G-PCC component information carried in each track. info may indicate the component type carried in a track, and may also indicate the attribute name, index, and attribute type of a G-PCC component carried in a G-PCC attribute track.
- the syntax for the sample format is as follows.
- each sample corresponds to a single point cloud frame and may be composed of one or more TLV encapsulation structures belonging to the same presentation time.
- Each TLV encapsulation structure may contain a single type of TLV payload.
- one sample may be independent (eg, sync sample).
- GPCCLength indicates the length of a corresponding sample
- gpcc_unit may include an instance of a TLV encapsulation structure including a single G-PCC component (eg, a geometry slice).
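- Based on the descriptions of GPCCLength and gpcc_unit above, the sample format may be sketched as follows; the exact notation is an illustrative reconstruction:

```
aligned(8) class GPCCSample {
    unsigned int GPCCLength = sample_size;  // length of the sample, from the sample size
    for (i = 0; i < GPCCLength; ) {
        tlv_encapsulation gpcc_unit;  // one TLV encapsulation structure carrying a single
                                      // G-PCC component (e.g., a geometry slice)
        i += (length of gpcc_unit);
    }
}
```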
- each sample may correspond to a single point cloud frame, and samples contributing to the same point cloud frame in different tracks may have the same presentation time.
- Each sample may consist of one or more G-PCC units of the G-PCC component indicated in the GPCCComponentInfoBox of the sample entry and zero or more G-PCC units carrying either a parameter set or a tile inventory. If a G-PCC unit containing a parameter set or tile inventory exists in a sample, that G-PCC unit may need to appear before the G-PCC units of the G-PCC component.
- Each sample may include one or more G-PCC units containing an attribute data unit and zero or more G-PCC units carrying a parameter set.
- Each TLV encapsulation structure in a sample may need to be accessed individually. Accordingly, if one sample is composed of multiple TLV encapsulation structures, each of the multiple TLV encapsulation structures may be stored as a subsample. A subsample may be referred to as a G-PCC subsample.
- When a sample contains a parameter set TLV encapsulation structure containing parameter sets, a geometry TLV encapsulation structure containing geometry slices, and an attribute TLV encapsulation structure containing attribute slices, the parameter set TLV encapsulation structure, the geometry TLV encapsulation structure, and the attribute TLV encapsulation structure may each be stored as a subsample.
- In order to identify each subsample, information on the type of TLV encapsulation structure carried in the corresponding subsample may be required.
- the G-PCC subsample may contain only one TLV encapsulation structure.
- One SubSampleInformationBox may exist in the sample table box (SampleTableBox, stbl) of the moov box, or may exist in the track fragment box (TrackFragmentBox, traf) of each movie fragment box (MovieFragmentBox, moof). If SubSampleInformationBox exists, an 8-bit type value of a TLV encapsulation structure may be included in a 32-bit codec_specific_parameters field of a subsample entry in SubSampleInformationBox.
- a 6-bit value of the attribute index may be included in the 32-bit codec_specific_parameters field of the subsample entry in SubSampleInformationBox.
- the type of each subsample may be identified by parsing the codec_specific_parameters field of the subsample entry in SubSampleInformationBox.
- codec_specific_parameters of SubSampleInformationBox can be defined using the fields described below (payloadType, attrIdx, tile_data, tile_id), with reserved fields (bit(18) reserved = 0; bit(24) reserved = 0; bit(7) reserved = 0; bit(24) reserved = 0;) padding each branch to 32 bits.
- payloadType may indicate the tlv_type of the TLV encapsulation structure in the corresponding subsample. For example, if the value of payloadType is 4, an attribute slice (ie, attribute slice) may be indicated.
- attrIdx may indicate an identifier of attribute information of a TLV encapsulation structure including an attribute payload in a corresponding subsample.
- attrIdx may be the same as ash_attr_sps_attr_idx of the TLV encapsulation structure including the attribute payload in the corresponding subsample.
- tile_data may indicate whether a subsample contains data corresponding to one tile or not. If the value of tile_data is 1, it may indicate that the subsample includes the TLV encapsulation structure(s) containing a geometry data unit or attribute data unit corresponding to one G-PCC tile. If the value of tile_data is 0, it may indicate that the subsample includes the TLV encapsulation structure(s) containing each parameter set, tile inventory, or frame boundary marker. tile_id may indicate the index of the G-PCC tile with which the subsample is associated within the tile inventory.
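- Combining the reserved-bit fragments and field descriptions above, codec_specific_parameters may be sketched as follows; the branch condition (shown here as the flags value of SubSampleInformationBox) and the exact field widths are assumptions:

```
if (flags == 0) {
    unsigned int(8) payloadType;  // tlv_type of the TLV encapsulation structure
    if (payloadType == 4) {       // attribute payload
        unsigned int(6) attrIdx;  // equals ash_attr_sps_attr_idx of the attribute
        bit(18) reserved = 0;
    } else {
        bit(24) reserved = 0;
    }
} else if (flags == 1) {
    unsigned int(1) tile_data;    // 1: subsample carries data of one G-PCC tile
    bit(7) reserved = 0;
    if (tile_data)
        unsigned int(24) tile_id; // index of the tile within the tile inventory
    else
        bit(24) reserved = 0;
}
```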
- a track reference tool may be used.
- One or more TrackReferenceTypeBoxes can be added to the TrackReferenceBox in the TrackBox of the G-PCC track.
- TrackReferenceTypeBox may contain an array of track_IDs specifying the tracks referenced by the G-PCC track.
- the present disclosure relates to the carriage of G-PCC data (hereinafter, which may be referred to as a G-PCC bitstream, an encapsulated G-PCC bitstream, or a G-PCC file).
- Devices and methods for supporting temporal scalability may be provided.
- The present disclosure proposes apparatuses and methods for providing a point cloud content service that efficiently stores a G-PCC bitstream in a single track of a file, or divides and stores it in a plurality of tracks, and provides signaling therefor.
- the present disclosure proposes an apparatus and methods for processing a file storage scheme to support efficient access to a stored G-PCC bitstream.
- Temporal scalability can refer to functionality that allows for the possibility of extracting one or more subsets of independently coded frames.
- temporal scalability may mean a function of dividing G-PCC data into a plurality of different temporal levels and independently processing each G-PCC frame belonging to the different temporal levels. If temporal scalability is supported, the G-PCC player (or the transmission device and/or the reception device of the present disclosure) can effectively access a desired component (target component) among G-PCC components.
- temporal scalability support can be expressed as more flexible temporal sub-layering at the system level.
- Temporal scalability allows a system that processes G-PCC data (a point cloud content providing system) to manipulate data at a high level to match network capabilities, decoder capabilities, etc., thereby improving the performance of the point cloud content providing system.
- Flowcharts for the first embodiment supporting temporal scalability are shown in FIGS. 36 to 39.
- the transmission device 10 or 1500 may generate information about a sample group (S3610).
- a sample group may be a grouping of samples (G-PCC samples) in a G-PCC file based on one or more temporal levels.
- the transmission device 10 or 1500 may generate a G-PCC file (S3620).
- the transmission device 10 or 1500 may generate a G-PCC file including point cloud data, information on sample groups, and/or information on temporal levels.
- Processes S3610 and S3620 may be performed by one or more of the encapsulation processor 13, the encapsulation processor 1525, and/or the signaling processor 1510.
- the transmission device 10 or 1500 may signal a G-PCC file.
- the receiving device 20 or 1700 may obtain a G-PCC file (S3710).
- a G-PCC file may include point cloud data, information about sample groups, and information about temporal levels.
- a sample group may be a grouping of samples (G-PCC samples) in a G-PCC file based on one or more temporal levels.
- the receiving device 20 or 1700 may extract one or more samples belonging to a target temporal level from samples (G-PCC samples) in the G-PCC file (S3720). Specifically, the receiving device 20 or 1700 may decapsulate or extract samples belonging to a target temporal level based on information on sample groups and/or information on temporal levels.
- the target temporal level may be a desired temporal level.
- Process S3720 may include determining whether one or more temporal levels exist; if one or more temporal levels exist, determining whether a desired temporal level and/or a desired frame rate is available; and, when a desired temporal level and/or a desired frame rate is available, extracting samples within a sample group belonging to the desired temporal level. Steps S3710 (and/or parsing the sample entry) and S3720 may be performed by one or more of the decapsulation processing unit 22, the decapsulation processing unit 1710, and/or the signaling processing unit 1715.
- A process in which the receiving device 20 determines the temporal level of samples in the G-PCC file based on the information on the sample group and/or the information on the temporal levels may be performed between steps S3710 and S3720.
- a sample grouping method and a track grouping method may exist.
- the sample grouping method may be a method of grouping samples in a G-PCC file according to a temporal level
- the track grouping method may be a method of grouping tracks in a G-PCC file according to a temporal level.
- a sample group can be used to correlate samples with the temporal level assigned to them. That is, a sample group may indicate which sample belongs to which temporal level. Also, the sample group may be information about a result of grouping one or more samples into one or more temporal levels.
- The sample group may be referred to as a 'tele' sample group or a temporal level sample group 'tele'.
- Information on the sample group may include information on a result of sample grouping. Accordingly, the information about the sample group may be information for use in associating the samples with the temporal level assigned to them. That is, the information on the sample group may indicate which sample belongs to which temporal level, and may be information on a result of grouping one or more samples into one or more temporal levels.
- Information on sample groups may exist in tracks including geometry data units.
- information on sample groups may exist only in the geometry track to group each sample in the track into a specified temporal level.
- Samples in attribute tracks can be inferred based on their relationship with their associated geometry track. For example, samples in attribute tracks can belong to the same temporal level as samples in their associated geometry track.
- the G-PCC tile track may be a volumetric visual track carrying all G-PCC components or a single G-PCC component corresponding to one or more G-PCC tiles.
- the G-PCC tile-based track can be a volumetric visual track that carries the tile inventory and all parameter sets corresponding to the G-PCC tile track.
- Information on a temporal level may be signaled to describe temporal scalability supported by the G-PCC file.
- Information on the temporal level may be present in a sample entry of a track including a sample group (or information on a sample group).
- information on the temporal level may exist in GPCCDecoderConfigurationRecord () or in a G-PCC scalability information box (GPCCScalabilityInfoBox) signaling scalability information for a G-PCC track.
- The information on the temporal level may include one or more of number information (e.g., num_temporal_levels), temporal level identification information (e.g., level_idc), and/or information on the frame rate.
- the information on the frame rate may include frame rate information (e.g., avg_frame_rate) and/or frame rate presence information (e.g., avg_frame_rate_present_flag).
- bit(4) reserved '1111'b
- The number information may indicate the number of temporal levels. For example, when the value of num_temporal_levels is greater than the first value (e.g., 1), the value of num_temporal_levels may indicate the number of temporal levels. According to embodiments, the number information may further indicate whether samples in the G-PCC file are temporally scalable. For example, when the value of num_temporal_levels is equal to the first value (e.g., 1), it may indicate that temporal scalability is not supported.
- If the value of num_temporal_levels is smaller than the first value (e.g., 1), i.e., 0, it may indicate that it is unknown whether the samples are temporally scalable. In summary, when the value of num_temporal_levels is equal to or less than the first value (e.g., 1), the value of num_temporal_levels may indicate whether or not temporal scalability is supported.
- the temporal level identification information may indicate a temporal level identifier (or level code) for a corresponding track. That is, the temporal level identification information may represent temporal level identifiers of samples within a corresponding track. According to embodiments, when the value of num_temporal_levels is equal to or less than the first value (e.g., 1), the temporal level identification information may indicate level codes of samples in the G-PCC file. Temporal level identification information may be generated as many times as the number of temporal levels indicated by the number information in step S3620, and obtained as many times as the number of temporal levels indicated by the number information in step S3710. For example, temporal level identification information may be generated/obtained as i increases by 1 from 0 to 'num_temporal_levels-1'.
- Frame rate information may indicate the frame rate at temporal levels.
- the frame rate information may indicate frame rates of temporal levels in units of frames, and the frame rate indicated by the frame rate information may be an average frame rate.
- When the value of avg_frame_rate is equal to the first value (e.g., 0), it may indicate an unspecified average frame rate.
- Frame rate information may be generated as many times as the number of temporal levels indicated by the number information, and obtained as many times as the number of temporal levels indicated by the number information. For example, frame rate information may be generated/obtained as i increases by 1 from 0 to 'num_temporal_levels-1'.
- Frame rate presence information may indicate whether frame rate information exists (i.e., whether frame rate information is signaled). For example, when the value of avg_frame_rate_present_flag is equal to the first value (e.g., 1), this may indicate that frame rate information exists, and when the value of avg_frame_rate_present_flag is equal to the second value (e.g., 0), this may indicate that frame rate information does not exist.
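- Putting the above elements together, the information on temporal levels (e.g., as carried in a G-PCC scalability information box) may be sketched as follows; the box type 'gsci', the reserved field, and the bit widths are illustrative assumptions:

```
aligned(8) class GPCCScalabilityInfoBox extends FullBox('gsci', 0, 0) {
    unsigned int(1) avg_frame_rate_present_flag; // whether avg_frame_rate is signaled
    bit(4) reserved = '1111'b;
    unsigned int(3) num_temporal_levels;         // number of temporal levels
                                                 // (1: not temporally scalable, 0: unknown)
    for (i = 0; i < num_temporal_levels; i++) {
        unsigned int(8) level_idc;               // temporal level identifier (level code)
        if (avg_frame_rate_present_flag)
            unsigned int(16) avg_frame_rate;     // average frame rate (0: unspecified)
    }
}
```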
- Frame rate information may be signaled regardless of the value of the frame rate presence information. According to embodiments, whether frame rate information is signaled may be determined according to the value of the frame rate presence information. FIGS. 38 and 39 are flowcharts for explaining an example in which whether frame rate information is signaled is determined according to the value of the frame rate presence information.
- the transmission device 10 or 1500 may determine whether frame rate information exists (S3810).
- When frame rate information does not exist, the transmission device 10 or 1500 may include only the frame rate presence information in the information on temporal levels. That is, when frame rate information does not exist, the transmission device 10 or 1500 may signal only the frame rate presence information. In this case, the value of avg_frame_rate_present_flag may be equal to the second value (e.g., 0).
- the transmission device 10 or 1500 may include frame rate existence information and frame rate information in information on temporal levels (S3830). That is, when frame rate information exists, the transmission device 10 or 1500 may signal frame rate existence information and frame rate information.
- Steps S3810 to S3830 may be performed by one or more of the encapsulation processor 13, the encapsulation processor 1525, and/or the signaling processor 1510.
- the receiving device 20 or 1700 may obtain frame rate existence information (S3910). Also, the receiving device 20 or 1700 may determine a value indicated by the frame rate existence information (S3920). When the value of avg_frame_rate_present_flag is equal to the second value (e.g., 0), the receiving device 20 or 1700 may end the process of obtaining frame rate information without obtaining frame rate information. In contrast, when the value of avg_frame_rate_present_flag is equal to the first value (e.g., 1), the receiving device 20 or 1700 may obtain frame rate information (S3930). The receiving device 20 or 1700 may extract samples corresponding to the target frame rate from samples in the G-PCC file (S3940).
- 'Samples corresponding to the target frame rate' may be samples of a temporal level or track having a frame rate corresponding to the target frame rate.
- the 'frame rate corresponding to the target frame rate' may include not only a frame rate having the same value as the target frame rate, but also a frame rate having a value less than the value of the target frame rate.
- Steps S3910 to S3940 may be performed by one or more of the decapsulation processing unit 22, the decapsulation processing unit 1710, and/or the signaling processing unit 1715.
- a process of determining the frame rate of temporal levels (or tracks) based on the frame rate information may be performed between processes S3930 and S3940.
- bit efficiency may be increased by reducing the number of bits for signaling the frame rate information.
- While a particular file parser or player (e.g., a receiving device) may use frame rate information, another file parser or player may not want or may not use frame rate information. In that case, by setting the value of the frame rate presence information to the second value (e.g., 0), the efficiency of bits for signaling frame rate information can be increased.
- the second embodiment is an embodiment in which a condition for 'the smallest composition time difference between successive samples' is added to the first embodiment.
- the coded frames of the G-PCC bitstream can be aligned to different temporal levels. Also, different temporal levels may be stored in different tracks. Also, the frame rate of each temporal level (or each track) may be determined based on the frame rate information.
- samples are arranged in three temporal levels (temporal level 0, temporal level 1 and temporal level 2), and each temporal level is stored in one track, so that three tracks (track 0, track 1 and track 2) can be configured.
- temporal level 0 associated with a frame rate of 30 fps
- temporal level 1 associated with a frame rate of 60 fps
- temporal level 2 associated with a frame rate of 120 fps
- the target frame rate being 60 fps
- The complexity and size of tracks may also increase as file playback proceeds at higher temporal levels. That is, if samples with a certain temporal level (or tracks associated with a certain temporal level) have twice the frame rate (twice the number of frames) compared to the track with the immediately lower temporal level, the complexity of playback can increase many fold as the temporal level of the track being played increases.
- This disclosure proposes a constraint that the minimum composition time difference between successive samples (frames) of each of the tracks for reproduction may be equal to each other.
- the temporal levels include a 'first temporal level' and a 'second temporal level having a level value greater than the first temporal level'
- The 'minimum composition time difference between consecutive samples within the second temporal level' may be equal to or greater than the 'minimum composition time difference between consecutive samples within the first temporal level'.
- the third embodiment is based on the first embodiment and/or the second embodiment and prevents redundant signaling of information on temporal levels.
- the sample group (or information on the sample group) may be present in tracks including geometry data units, and the information on the temporal levels may be present in the sample entry of the track that includes the sample group. That is, information on the temporal levels may exist only in geometry tracks. Also, the temporal levels of samples in attribute tracks can be inferred based on their relationship with the associated geometry track. Accordingly, signaling information on temporal levels even for a track that is not a geometry track may cause a problem of redundant signaling.
- the following syntax structure is a syntax structure according to the third embodiment.
- bit(3) reserved = 0;
- multiple_temporal_level_tracks_flag is a syntax element included in the information on temporal levels, and may indicate whether multiple temporal level tracks exist in the G-PCC file. For example, when the value of multiple_temporal_level_tracks_flag is equal to the first value (e.g., 1), this may indicate that G-PCC bitstream frames are grouped into multiple temporal level tracks; when the value of multiple_temporal_level_tracks_flag is equal to the second value (e.g., 0), this may indicate that all temporal level samples exist in a single track.
- the first value (e.g., 1)
- when the value of multiple_temporal_level_tracks_flag is equal to the second value (e.g., 0), this may indicate that all temporal level samples exist in a single track.
- in case 'the type of component carried by the current track (gpcc_data) is an attribute' and/or 'the current track is a track with a predetermined sample entry type (e.g., 'gpc1' and/or 'gpcg')', multiple_temporal_level_tracks_flag may not be signaled.
- the value of multiple_temporal_level_tracks_flag may be the same as a corresponding syntax element (information on temporal levels) in a geometry track referred to by the current track.
- frame_rate_present_flag is frame rate presence information, and its meaning may be as described above.
- in case 'the type of component carried by the current track (gpcc_data) is an attribute' and/or 'the current track is a track with a predetermined sample entry type (e.g., 'gpc1' and/or 'gpcg')', frame_rate_present_flag may not be signaled.
- the value of frame_rate_present_flag may be the same as the corresponding syntax element (information on temporal levels) in the geometry track referred to by the current track.
- num_temporal_levels is number information and may indicate the maximum number of temporal levels in which G-PCC bitstream frames are grouped.
- the value of num_temporal_levels may be set to 1 when information on temporal levels is not available or when all frames are signaled at a single temporal level.
- the minimum value of num_temporal_levels may be 1.
- in case 'the type of component carried by the current track (gpcc_data) is an attribute' and/or 'the current track is a track with a predetermined sample entry type (e.g., 'gpc1' and/or 'gpcg')', num_temporal_levels may not be signaled.
- the value of num_temporal_levels may be the same as a corresponding syntax element (information on temporal levels) in a geometry track referred to by the current track.
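The semantics of num_temporal_levels given in the bullets above can be summarized in a small helper. This is a hedged sketch; the function name and the returned structure are my own, not from the disclosure.

```python
def interpret_num_temporal_levels(value):
    """num_temporal_levels semantics (sketch): the minimum value is 1;
    a value of 1 means the information is unavailable or all frames are
    signaled at a single temporal level; a larger value gives the
    maximum number of temporal levels into which frames are grouped."""
    if value < 1:
        raise ValueError("num_temporal_levels has a minimum value of 1")
    return {
        "temporally_scalable": value > 1,
        "max_temporal_levels": value,
    }

print(interpret_num_temporal_levels(3))
# {'temporally_scalable': True, 'max_temporal_levels': 3}
```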
- FIGS. 40 and 41 are flowcharts of methods for preventing the problem of redundant signaling.
- the transmission device 10 or 1500 may determine the type of component carried by the current track (type of component of the current track) (S4010).
- the component type can be determined according to the value of gpcc_type and Table 4 below.
- when the type of component carried by the current track is an attribute, the transmission device 10 or 1500 may not include information on temporal levels (S4020). That is, when the type of component carried by the current track is an attribute, the transmission device 10 or 1500 may not signal information on temporal levels. In this case, one or more values among multiple_temporal_level_tracks_flag, frame_rate_present_flag, and/or num_temporal_levels may be the same as the corresponding syntax elements (information on temporal levels) in the geometry track referred to by the current track. In contrast, when the type of component carried by the current track is not an attribute, the transmission device 10 or 1500 may include information on temporal levels (S4030).
- Steps S4010 to S4030 may be performed by one or more of the encapsulation processor 13, the encapsulation processor 1525, and/or the signaling processor 1510.
- the transmission device 10 or 1500 may further determine the type of sample entry of the current track.
- when the type of component carried by the current track is an attribute and the current track is a track having a predetermined sample entry type (e.g., 'gpc1' and/or 'gpcg'), the transmission device 10 or 1500 may not include (signal) information on temporal levels (S4020).
- one or more values among multiple_temporal_level_tracks_flag, frame_rate_present_flag, and/or num_temporal_levels may be the same as the corresponding syntax elements (information on temporal levels) in the geometry track referred to by the current track.
- the transmission device 10 or 1500 may include (signal) information on temporal levels (S4030).
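Steps S4010 to S4030, together with the sample entry refinement above, reduce to a single predicate on the sending side. A sketch with hypothetical parameter names:

```python
def should_signal_temporal_level_info(component_type, sample_entry_type):
    """S4010-S4030 (sketch): the transmission device omits the
    temporal-level information when the current track carries an
    attribute component and uses a 'gpc1' or 'gpcg' sample entry;
    otherwise the information is included."""
    is_attribute = component_type == "attribute"
    inherits_from_geometry = sample_entry_type in ("gpc1", "gpcg")
    return not (is_attribute and inherits_from_geometry)

print(should_signal_temporal_level_info("geometry", "gpc1"))   # True
print(should_signal_temporal_level_info("attribute", "gpc1"))  # False
```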
- the receiving device 20 or 1700 may determine the type of component carried by the current track (S4110).
- the component type can be determined according to the value of gpcc_type and Table 4 above.
- when the type of component carried by the current track is an attribute, the receiving device 20 or 1700 may not obtain information on temporal levels (S4120).
- one or more values among multiple_temporal_level_tracks_flag, frame_rate_present_flag, and/or num_temporal_levels may be set equal to the corresponding syntax elements (information on temporal levels) in the geometry track referred to by the current track.
- when the type of component carried by the current track is not an attribute, the receiving device 20 or 1700 may obtain information on temporal levels (S4130). Steps S4110 to S4130 may be performed by one or more of the decapsulation processing unit 22, the decapsulation processing unit 1710, and/or the signaling processing unit 1715.
- the receiving device 20 or 1700 may further determine the type of sample entry of the current track.
- the type of component carried by the current track is an attribute and the current track is a track having a predetermined sample entry type (e.g., 'gpc1' and/or 'gpcg')
- the receiving device 20 or 1700 may not acquire information on temporal levels (S4120).
- one or more values among multiple_temporal_level_tracks_flag, frame_rate_present_flag, and/or num_temporal_levels may be set equal to the corresponding syntax elements (information on temporal levels) in the geometry track referred to by the current track.
- the receiving device may obtain information on temporal levels (S4130).
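On the receiving side (S4110 to S4130), a track whose temporal-level information was not signaled takes its values from the referenced geometry track. A minimal sketch; the Track dataclass and its field names are hypothetical, not from the disclosure.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Track:
    component_type: str                        # 'geometry' or 'attribute'
    sample_entry_type: str                     # e.g., 'gpc1', 'gpcg'
    temporal_level_info: Optional[dict] = None
    referenced_geometry: Optional["Track"] = None

def resolve_temporal_level_info(track: Track) -> Optional[dict]:
    """S4110-S4130 (sketch): for an attribute track with a 'gpc1' or
    'gpcg' sample entry, the information on temporal levels is not
    obtained from the track itself but set equal to the corresponding
    syntax elements in the geometry track it references."""
    if (track.component_type == "attribute"
            and track.sample_entry_type in ("gpc1", "gpcg")):
        return track.referenced_geometry.temporal_level_info
    return track.temporal_level_info

geom = Track("geometry", "gpc1",
             {"multiple_temporal_level_tracks_flag": 1,
              "frame_rate_present_flag": 1,
              "num_temporal_levels": 3})
attr = Track("attribute", "gpc1", referenced_geometry=geom)
print(resolve_temporal_level_info(attr)["num_temporal_levels"])  # 3
```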
- the scope of the present disclosure includes software or machine-executable instructions (e.g., operating systems, applications, firmware, programs, etc.) that cause operations according to the methods of various embodiments to be executed on a device or computer, and a non-transitory computer-readable medium in which such software or instructions are stored and executable on a device or computer.
- Embodiments according to this disclosure may be used to provide point cloud content. Also, embodiments according to the present disclosure may be used to encode/decode point cloud data.
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Compression Or Coding Systems Of Tv Signals (AREA)
- Image Analysis (AREA)
Abstract
Description
Claims (15)
- A method performed by an apparatus for receiving point cloud data, the method comprising: obtaining a geometry-based point cloud compression (G-PCC) file including the point cloud data, wherein the G-PCC file includes information on a sample group in which samples in the G-PCC file are grouped based on one or more temporal levels, and information on the temporal levels; and extracting, based on the information on the sample group and the information on the temporal levels, one or more samples belonging to a target temporal level from among the samples in the G-PCC file.
- The method of claim 1, wherein the information on the temporal levels includes number information indicating the number of the temporal levels and temporal level identification information for identifying the temporal levels.
- The method of claim 2, wherein the number information further indicates whether the samples in the G-PCC file are temporally scalable.
- The method of claim 3, wherein the number information indicates the number of the temporal levels based on a value of the number information being greater than a first value, and indicates whether the samples in the G-PCC file are temporally scalable based on the value of the number information being less than or equal to the first value.
- The method of claim 1, wherein, based on the G-PCC file being carried in multiple tracks, the information on the sample group is present in a geometry track that carries geometry data of the point cloud data among the multiple tracks.
- The method of claim 5, wherein a temporal level of one or more samples present in an attribute track is equal to a temporal level of a corresponding sample in the geometry track, the attribute track being a track that carries attribute data of the point cloud data among the multiple tracks.
- The method of claim 1, wherein the information on the temporal levels includes frame rate information indicating a frame rate of the temporal levels, and the extracting comprises extracting one or more samples corresponding to a target frame rate from among the samples in the G-PCC file.
- The method of claim 7, wherein the information on the temporal levels includes frame rate presence information indicating whether the frame rate information is present, and the frame rate information is included in the information on the temporal levels based on the frame rate presence information indicating that the frame rate information is present.
- The method of claim 7, wherein the temporal levels include a first temporal level and a second temporal level having a level value greater than the first temporal level, and a smallest composition time difference between consecutive samples within the second temporal level is equal to or greater than a smallest composition time difference between consecutive samples within the first temporal level.
- The method of claim 1, wherein the information on the temporal levels has the same values as corresponding information in a geometry track referred to by a current track, based on a type of a component carried by the current track being an attribute of the point cloud data.
- The method of claim 10, wherein the information on the temporal levels has the same values as the corresponding information in the geometry track referred to by the current track, based on the type of the component being an attribute of the point cloud data and a type of a sample entry of the current track being a predetermined sample entry type.
- The method of claim 11, wherein the predetermined sample entry type includes at least one of a gpc1 sample entry type or a gpcg sample entry type.
- An apparatus for receiving point cloud data, the apparatus comprising: a memory; and at least one processor, wherein the at least one processor is configured to: obtain a geometry-based point cloud compression (G-PCC) file including the point cloud data, wherein the G-PCC file includes information on a sample group in which samples in the G-PCC file are grouped based on one or more temporal levels, and information on the temporal levels; and extract, based on the information on the sample group and the information on the temporal levels, one or more samples belonging to a target temporal level from among the samples in the G-PCC file.
- A method performed by an apparatus for transmitting point cloud data, the method comprising: generating information on a sample group in which geometry-based point cloud compression (G-PCC) samples are grouped based on one or more temporal levels; and generating a G-PCC file including the information on the sample group, information on the temporal levels, and the point cloud data.
- An apparatus for transmitting point cloud data, the apparatus comprising: a memory; and at least one processor, wherein the at least one processor is configured to: generate information on a sample group in which geometry-based point cloud compression (G-PCC) samples are grouped based on one or more temporal levels; and generate a G-PCC file including the information on the sample group, information on the temporal levels, and the point cloud data.
Priority Applications (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP22837895.6A EP4369722A1 (en) | 2021-07-03 | 2022-06-30 | Transmission apparatus for point cloud data and method performed by transmission apparatus, and reception apparatus for point cloud data and method performed by reception apparatus |
CN202280047124.8A CN117643062A (zh) | 2021-07-03 | 2022-06-30 | 点云数据的发送装置和由发送装置执行的方法及点云数据的接收装置和由接收装置执行的方法 |
KR1020247003668A KR20240032899A (ko) | 2021-07-03 | 2022-06-30 | 포인트 클라우드 데이터의 전송 장치와 이 전송 장치에서 수행되는 방법 및, 포인트 클라우드 데이터의 수신 장치와 이 수신 장치에서 수행되는 방법 |
JP2023580651A JP2024527320A (ja) | 2021-07-03 | 2022-06-30 | ポイントクラウドデータの伝送装置とこの伝送装置で行われる方法、及びポイントクラウドデータの受信装置とこの受信装置で行われる方法 |
Applications Claiming Priority (6)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US202163218279P | 2021-07-03 | 2021-07-03 | |
US63/218,279 | 2021-07-03 | ||
US202163219360P | 2021-07-08 | 2021-07-08 | |
US63/219,360 | 2021-07-08 | ||
US202163223051P | 2021-07-18 | 2021-07-18 | |
US63/223,051 | 2021-07-18 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2023282543A1 true WO2023282543A1 (ko) | 2023-01-12 |
Family
ID=84800633
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/KR2022/009457 WO2023282543A1 (ko) | 2021-07-03 | 2022-06-30 | 포인트 클라우드 데이터의 전송 장치와 이 전송 장치에서 수행되는 방법 및, 포인트 클라우드 데이터의 수신 장치와 이 수신 장치에서 수행되는 방법 |
Country Status (5)
Country | Link |
---|---|
US (1) | US12120328B2 (ko) |
EP (1) | EP4369722A1 (ko) |
JP (1) | JP2024527320A (ko) |
KR (1) | KR20240032899A (ko) |
WO (1) | WO2023282543A1 (ko) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2024193613A1 (en) * | 2023-03-20 | 2024-09-26 | Douyin Vision Co., Ltd. | Method, apparatus, and medium for point cloud coding |
WO2024207247A1 (zh) * | 2023-04-04 | 2024-10-10 | Oppo广东移动通信有限公司 | 点云的编解码方法、码流、编码器、解码器以及存储介质 |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20150016547A1 (en) * | 2013-07-15 | 2015-01-15 | Sony Corporation | Layer based hrd buffer management for scalable hevc |
US20150103926A1 (en) * | 2013-10-15 | 2015-04-16 | Nokia Corporation | Video encoding and decoding |
KR20210004885A (ko) * | 2019-07-03 | 2021-01-13 | 엘지전자 주식회사 | 포인트 클라우드 데이터 송신 장치, 포인트 클라우드 데이터 송신 방법, 포인트 클라우드 데이터 수신 장치 및 포인트 클라우드 데이터 수신 방법 |
KR20210057161A (ko) * | 2018-09-14 | 2021-05-20 | 후아웨이 테크놀러지 컴퍼니 리미티드 | 포인트 클라우드 코딩에서의 개선된 속성 지원 |
WO2021123159A1 (en) * | 2019-12-20 | 2021-06-24 | Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. | Video data stream, video encoder, apparatus and methods for hrd timing fixes, and further additions for scalable and mergeable bitstreams |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2020145668A1 (ko) * | 2019-01-08 | 2020-07-16 | 삼성전자주식회사 | 3차원 컨텐츠의 처리 및 전송 방법 |
CN115211131A (zh) * | 2020-01-02 | 2022-10-18 | 诺基亚技术有限公司 | 用于全向视频的装置、方法及计算机程序 |
-
2022
- 2022-06-30 EP EP22837895.6A patent/EP4369722A1/en active Pending
- 2022-06-30 KR KR1020247003668A patent/KR20240032899A/ko active Search and Examination
- 2022-06-30 WO PCT/KR2022/009457 patent/WO2023282543A1/ko active Application Filing
- 2022-06-30 US US17/855,472 patent/US12120328B2/en active Active
- 2022-06-30 JP JP2023580651A patent/JP2024527320A/ja active Pending
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20150016547A1 (en) * | 2013-07-15 | 2015-01-15 | Sony Corporation | Layer based hrd buffer management for scalable hevc |
US20150103926A1 (en) * | 2013-10-15 | 2015-04-16 | Nokia Corporation | Video encoding and decoding |
KR20210057161A (ko) * | 2018-09-14 | 2021-05-20 | 후아웨이 테크놀러지 컴퍼니 리미티드 | 포인트 클라우드 코딩에서의 개선된 속성 지원 |
KR20210004885A (ko) * | 2019-07-03 | 2021-01-13 | 엘지전자 주식회사 | 포인트 클라우드 데이터 송신 장치, 포인트 클라우드 데이터 송신 방법, 포인트 클라우드 데이터 수신 장치 및 포인트 클라우드 데이터 수신 방법 |
WO2021123159A1 (en) * | 2019-12-20 | 2021-06-24 | Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. | Video data stream, video encoder, apparatus and methods for hrd timing fixes, and further additions for scalable and mergeable bitstreams |
Also Published As
Publication number | Publication date |
---|---|
KR20240032899A (ko) | 2024-03-12 |
US12120328B2 (en) | 2024-10-15 |
JP2024527320A (ja) | 2024-07-24 |
US20230014844A1 (en) | 2023-01-19 |
EP4369722A1 (en) | 2024-05-15 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2021002730A1 (ko) | 포인트 클라우드 데이터 송신 장치, 포인트 클라우드 데이터 송신 방법, 포인트 클라우드 데이터 수신 장치 및 포인트 클라우드 데이터 수신 방법 | |
WO2021066312A1 (ko) | 포인트 클라우드 데이터 송신 장치, 포인트 클라우드 데이터 송신 방법, 포인트 클라우드 데이터 수신 장치 및 포인트 클라우드 데이터 수신 방법 | |
WO2021066615A1 (en) | Point cloud data transmission device, point cloud data transmission method, point cloud data reception device, and point cloud data reception method | |
WO2021210763A1 (ko) | 포인트 클라우드 데이터 송신 장치, 포인트 클라우드 데이터 송신 방법, 포인트 클라우드 데이터 수신 장치 및 포인트 클라우드 데이터 수신 방법 | |
WO2021210837A1 (ko) | 포인트 클라우드 데이터 송신 장치, 포인트 클라우드 데이터 송신 방법, 포인트 클라우드 데이터 수신 장치 및 포인트 클라우드 데이터 수신 방법 | |
WO2020189976A1 (ko) | 포인트 클라우드 데이터 처리 장치 및 방법 | |
WO2020242244A1 (ko) | 포인트 클라우드 데이터 처리 방법 및 장치 | |
WO2020189903A1 (ko) | 포인트 클라우드 데이터 송신 장치, 포인트 클라우드 데이터 송신 방법, 포인트 클라우드 데이터 수신 장치 및 포인트 클라우드 데이터 수신 방법 | |
WO2021071257A1 (ko) | 포인트 클라우드 데이터 송신 장치, 포인트 클라우드 데이터 송신 방법, 포인트 클라우드 데이터 수신 장치 및 포인트 클라우드 데이터 수신 방법 | |
WO2021049758A1 (ko) | 포인트 클라우드 데이터 송신 장치, 포인트 클라우드 데이터 송신 방법, 포인트 클라우드 데이터 수신 장치 및 포인트 클라우드 데이터 수신 방법 | |
WO2021060850A1 (ko) | 포인트 클라우드 데이터 송신 장치, 포인트 클라우드 데이터 송신 방법, 포인트 클라우드 데이터 수신 장치 및 포인트 클라우드 데이터 수신 방법 | |
WO2021141258A1 (ko) | 포인트 클라우드 데이터 송신 장치, 포인트 클라우드 데이터 송신 방법, 포인트 클라우드 데이터 수신 장치 및 포인트 클라우드 데이터 수신 방법 | |
WO2020262831A1 (ko) | 포인트 클라우드 데이터 처리 장치 및 방법 | |
WO2021045601A1 (ko) | 포인트 클라우드 데이터 송신 장치, 포인트 클라우드 데이터 송신 방법, 포인트 클라우드 데이터 수신 장치 및 포인트 클라우드 데이터 수신 방법 | |
WO2020256308A1 (ko) | 포인트 클라우드 데이터 처리 장치 및 방법 | |
WO2021206291A1 (ko) | 포인트 클라우드 데이터 전송 장치, 전송 방법, 처리 장치 및 처리 방법 | |
WO2021210860A1 (ko) | 포인트 클라우드 데이터 송신 장치, 포인트 클라우드 데이터 송신 방법, 포인트 클라우드 데이터 수신 장치 및 포인트 클라우드 데이터 수신 방법 | |
WO2021002592A1 (ko) | 포인트 클라우드 데이터 송신 장치, 포인트 클라우드 데이터 송신 방법, 포인트 클라우드 데이터 수신 장치 및 포인트 클라우드 데이터 수신 방법 | |
WO2020197086A1 (ko) | 포인트 클라우드 데이터 송신 장치, 포인트 클라우드 데이터 송신 방법, 포인트 클라우드 데이터 수신 장치 및 포인트 클라우드 데이터 수신 방법 | |
WO2021210867A1 (ko) | 포인트 클라우드 데이터 송신 장치, 포인트 클라우드 데이터 송신 방법, 포인트 클라우드 데이터 수신 장치 및 포인트 클라우드 데이터 수신 방법 | |
WO2021261865A1 (ko) | 포인트 클라우드 데이터 송신 장치, 포인트 클라우드 데이터 송신 방법, 포인트 클라우드 데이터 수신 장치 및 포인트 클라우드 데이터 수신 방법 | |
WO2023282543A1 (ko) | 포인트 클라우드 데이터의 전송 장치와 이 전송 장치에서 수행되는 방법 및, 포인트 클라우드 데이터의 수신 장치와 이 수신 장치에서 수행되는 방법 | |
WO2022019713A1 (ko) | 포인트 클라우드 데이터 송신 장치, 포인트 클라우드 데이터 송신 방법, 포인트 클라우드 데이터 수신 장치 및 포인트 클라우드 데이터 수신 방법 | |
WO2021029511A1 (ko) | 포인트 클라우드 데이터 전송 장치, 포인트 클라우드 데이터 전송 방법, 포인트 클라우드 데이터 수신 장치 및 포인트 클라우드 데이터 수신 방법 | |
WO2022035256A1 (ko) | 포인트 클라우드 데이터 송신 장치, 포인트 클라우드 데이터 송신 방법, 포인트 클라우드 데이터 수신 장치 및 포인트 클라우드 데이터 수신 방법 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 22837895 Country of ref document: EP Kind code of ref document: A1 |
|
ENP | Entry into the national phase |
Ref document number: 2023580651 Country of ref document: JP Kind code of ref document: A |
|
WWE | Wipo information: entry into national phase |
Ref document number: 202280047124.8 Country of ref document: CN |
|
ENP | Entry into the national phase |
Ref document number: 20247003668 Country of ref document: KR Kind code of ref document: A |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2022837895 Country of ref document: EP |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
ENP | Entry into the national phase |
Ref document number: 2022837895 Country of ref document: EP Effective date: 20240205 |