WO2024103513A1 - Point cloud encoding method, point cloud decoding method, encoder, decoder, bitstream, and storage medium

Point cloud encoding method, point cloud decoding method, encoder, decoder, bitstream, and storage medium

Info

Publication number
WO2024103513A1
WO2024103513A1 (application PCT/CN2023/071279)
Authority
WO
WIPO (PCT)
Prior art keywords
index value
value
coefficient
attribute
current point
Prior art date
Application number
PCT/CN2023/071279
Other languages
English (en)
Chinese (zh)
Inventor
马闯
Original Assignee
Oppo广东移动通信有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Oppo广东移动通信有限公司 filed Critical Oppo广东移动通信有限公司
Publication of WO2024103513A1

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 19/00: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N 19/10: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N 19/102: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N 19/13: Adaptive entropy coding, e.g. adaptive variable length coding [AVLC] or context adaptive binary arithmetic coding [CABAC]

Definitions

  • the embodiments of the present application relate to the field of point cloud compression technology, and in particular to a point cloud encoding and decoding method, encoder, decoder, bit stream and storage medium.
  • the geometry information and attribute information of the point cloud are encoded separately. After the geometry encoding is completed, the geometry information is reconstructed, and the encoding of the attribute information will depend on the reconstructed geometry information.
  • attribute encoding is mainly aimed at color information: the color information is first converted into a YUV color space that better matches the visual characteristics of the human eye, the preprocessed attribute information is then attribute-encoded, and finally a binary attribute code stream is generated.
  • the embodiments of the present application provide a point cloud encoding and decoding method, encoder, decoder, bit stream and storage medium, which can improve the encoding and decoding performance of point cloud attributes.
  • an embodiment of the present application provides a point cloud decoding method, which is applied to a decoder, and the method includes: determining an index value; determining a decoding coefficient of the current point according to the context indicated by the index value; and determining the attribute coefficient of the current point according to the decoding coefficient.
  • an embodiment of the present application provides a point cloud encoding method, which is applied to an encoder, and the method includes: determining an index value; determining an encoding coefficient of the current point according to the context indicated by the index value; and determining the attribute coefficient of the current point according to the encoding coefficient.
  • an encoder comprising a first determining unit, wherein:
  • the first determination unit is configured to determine an index value; determine a coding coefficient of a current point according to a context indicated by the index value; and determine an attribute coefficient of the current point according to the coding coefficient.
  • an embodiment of the present application provides an encoder, the encoder comprising a first memory and a first processor; wherein,
  • the first memory is used to store a computer program that can be run on the first processor
  • the first processor is used to execute the point cloud encoding method as described above when running the computer program.
  • an embodiment of the present application provides a decoder, wherein the decoder includes a second determining unit, wherein:
  • the second determination unit is configured to determine an index value; determine a decoding coefficient of a current point according to a context indicated by the index value; and determine an attribute coefficient of the current point according to the decoding coefficient.
  • an embodiment of the present application provides a decoder, the decoder comprising a second memory and a second processor; wherein:
  • the second memory is used to store a computer program that can be run on the second processor
  • the second processor is used to execute the point cloud decoding method as described above when running the computer program.
  • an embodiment of the present application provides a code stream, which is generated by bit encoding based on information to be encoded; wherein the information to be encoded includes at least: adaptive context identification information of the current point, geometric information of the current point, and a zero-run value corresponding to the current point.
  • an embodiment of the present application provides a computer storage medium, wherein the computer storage medium stores a computer program, and when the computer program is executed by a first processor, it implements the point cloud encoding method as described above, or, when the computer program is executed by a second processor, it implements the point cloud decoding method as described above.
  • the embodiment of the present application provides a point cloud encoding and decoding method, an encoder, a decoder, a bitstream and a storage medium, wherein the decoder determines an index value; determines a decoding coefficient of a current point according to the context indicated by the index value; and determines an attribute coefficient of the current point according to the decoding coefficient.
  • the encoder determines an index value; determines a coding coefficient of a current point according to the context indicated by the index value; and determines an attribute coefficient of the current point according to the coding coefficient.
  • the correlation between the attribute coefficients that have been encoded/decoded and the related parameters can be fully utilized to adaptively select different contexts for encoding and decoding, thereby being able to introduce a variety of different adaptive context modes, and no longer being limited to using a fixed context for encoding and decoding of attribute information, thereby improving the encoding and decoding performance of point cloud attributes.
  • FIG. 1A is a schematic diagram of a three-dimensional point cloud image provided in an embodiment of the present application.
  • FIG. 1B is a partially enlarged schematic diagram of a three-dimensional point cloud image provided in an embodiment of the present application.
  • FIG. 2A is a schematic diagram of a point cloud image at different viewing angles provided in an embodiment of the present application.
  • FIG. 2B is a schematic diagram of a data storage format corresponding to FIG. 2A provided in an embodiment of the present application.
  • FIG. 3 is a schematic diagram of a network architecture for point cloud encoding and decoding provided in an embodiment of the present application.
  • FIG. 4 is a schematic diagram of the structure of a point cloud encoder provided in an embodiment of the present application.
  • FIG. 5 is a schematic diagram of the structure of a point cloud decoder provided in an embodiment of the present application.
  • FIG. 6 is a schematic diagram of a composition framework of a point cloud encoder.
  • FIG. 7 is a schematic diagram of a composition framework of a point cloud decoder.
  • FIG. 8 is a schematic diagram of an implementation flow of a point cloud decoding method proposed in an embodiment of the present application.
  • FIG. 9 is a schematic diagram of an implementation flow of a point cloud encoding method proposed in an embodiment of the present application.
  • FIG. 10 is a first schematic diagram of the structure of an encoder.
  • FIG. 11 is a second schematic diagram of the structure of an encoder.
  • FIG. 12 is a first schematic diagram of the structure of a decoder.
  • FIG. 13 is a second schematic diagram of the structure of a decoder.
  • The terms "first", "second", and "third" involved in the embodiments of the present application are only used to distinguish similar objects and do not represent a specific ordering of the objects. It can be understood that "first", "second", and "third" can be interchanged in a specific order or sequence where permitted, so that the embodiments of the present application described here can be implemented in an order other than that illustrated or described here.
  • Point Cloud is a three-dimensional representation of the surface of an object.
  • Point cloud (data) on the surface of an object can be collected through acquisition equipment such as photoelectric radar, lidar, laser scanner, and multi-view camera.
  • a point cloud is a set of discrete points that are irregularly distributed in space and express the spatial structure and surface properties of a three-dimensional object or scene.
  • FIG1A shows a three-dimensional point cloud image
  • FIG1B shows a partial magnified view of the three-dimensional point cloud image. It can be seen that the point cloud surface is composed of densely distributed points.
  • Two-dimensional images carry information at every pixel, and the pixels are regularly distributed, so there is no need to record their position information additionally; in point clouds, however, the points are distributed randomly and irregularly in three-dimensional space, so the position of each point in space must be recorded in order to fully express the point cloud.
  • each position in the acquisition process has corresponding attribute information, usually RGB color values, and the color value reflects the color of the object; for point clouds, in addition to color information, the attribute information corresponding to each point is also commonly the reflectance value, which reflects the surface material of the object. Therefore, the points in the point cloud can include the position information of the point and the attribute information of the point.
  • the position information of the point can be the three-dimensional coordinate information (x, y, z) of the point.
  • the position information of the point can also be called the geometric information of the point.
  • the attribute information of the point can include color information (three-dimensional color information) and/or reflectance (one-dimensional reflectance information r), etc.
  • the color information can be information on any color space.
  • the color information can be RGB information, where R represents red (Red), G represents green (Green), and B represents blue (Blue).
  • the color information may be luminance and chrominance (YCbCr, YUV) information, where Y represents brightness (Luma), Cb (U) represents blue color difference, and Cr (V) represents red color difference.
  • the points in the point cloud may include the three-dimensional coordinate information of the points and the reflectivity value of the points.
  • the points in the point cloud may include the three-dimensional coordinate information of the points and the three-dimensional color information of the points.
  • a point cloud obtained by combining the principles of laser measurement and photogrammetry may include the three-dimensional coordinate information of the points, the reflectivity value of the points and the three-dimensional color information of the points.
  • In Figures 2A and 2B, a point cloud image and its corresponding data storage format are shown.
  • Figure 2A provides six viewing angles of the point cloud image
  • Figure 2B consists of a file header information part and a data part.
  • the header information includes the data format, data representation type, the total number of point cloud points, and the content represented by the point cloud.
  • the point cloud is in the ".ply" format, represented by ASCII code, with a total number of 207242 points, and each point has three-dimensional coordinate information (x, y, z) and three-dimensional color information (r, g, b).
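  • For illustration only, the sketch below writes an ASCII ".ply" header consistent with the format described above; the property names (x, y, z, red, green, blue) follow common PLY conventions and are assumptions rather than text taken from this application.

```python
# Illustrative sketch: write a minimal ASCII PLY header matching the description
# above (field names follow common PLY conventions and are assumptions).
num_points = 207242  # total number of points mentioned in the description

header_lines = [
    "ply",
    "format ascii 1.0",
    f"element vertex {num_points}",
    "property float x",          # three-dimensional coordinate information
    "property float y",
    "property float z",
    "property uchar red",        # three-dimensional color information
    "property uchar green",
    "property uchar blue",
    "end_header",
]

with open("point_cloud.ply", "w") as f:
    f.write("\n".join(header_lines) + "\n")
    # each subsequent data line would hold "x y z r g b" for one point
```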
  • Point clouds can be divided into the following categories according to the way they are obtained:
  • Static point cloud: the object is stationary, and the device that acquires the point cloud is also stationary;
  • Dynamic point cloud: the object is moving, but the device that acquires the point cloud is stationary;
  • Dynamically acquired point cloud: the device used to acquire the point cloud is in motion.
  • point clouds can be divided into two categories according to their usage:
  • Category 1: machine-perception point clouds, which can be used in autonomous navigation systems, real-time inspection systems, geographic information systems, visual sorting robots, emergency rescue robots, etc.
  • Category 2: human-eye-perception point clouds, which can be used in point cloud application scenarios such as digital cultural heritage, free-viewpoint broadcasting, 3D immersive communication, and 3D immersive interaction.
  • Point clouds can flexibly and conveniently express the spatial structure and surface properties of three-dimensional objects or scenes. Point clouds are obtained by directly sampling real objects, so they can provide a strong sense of reality while ensuring accuracy. Therefore, they are widely used, including virtual reality games, computer-aided design, geographic information systems, automatic navigation systems, digital cultural heritage, free viewpoint broadcasting, three-dimensional immersive remote presentation, and three-dimensional reconstruction of biological tissues and organs.
  • Point clouds can be collected mainly through the following methods: computer generation, 3D laser scanning, 3D photogrammetry, etc.
  • Computers can generate point clouds of virtual three-dimensional objects and scenes; 3D laser scanning can obtain point clouds of static real-world three-dimensional objects or scenes, and can obtain millions of point clouds per second; 3D photogrammetry can obtain point clouds of dynamic real-world three-dimensional objects or scenes, and can obtain tens of millions of point clouds per second.
  • the number of points in each point cloud frame is 700,000, and each point has coordinate information xyz (float) and color information RGB (uchar).
  • since a point cloud is a collection of a massive number of points, storing the point cloud not only consumes a large amount of memory but is also inconvenient for transmission; there is also not enough bandwidth to support direct transmission of the point cloud at the network layer without compression. Therefore, the point cloud needs to be compressed.
  • the point cloud coding framework that can compress point clouds can be the geometry-based point cloud compression (G-PCC) codec framework or the video-based point cloud compression (V-PCC) codec framework provided by the Moving Picture Experts Group (MPEG), or the AVS-PCC codec framework provided by AVS.
  • the G-PCC codec framework can be used to compress the first type of static point clouds and the third type of dynamically acquired point clouds
  • the V-PCC codec framework can be used to compress the second type of dynamic point clouds.
  • FIG3 is a schematic diagram of a network architecture of a point cloud encoding and decoding provided by the embodiment of the present application.
  • the network architecture includes one or more electronic devices 13 to 1N and a communication network 01, wherein the electronic devices 13 to 1N can perform video interaction through the communication network 01.
  • the electronic device can be various types of devices with point cloud encoding and decoding functions.
  • the electronic device can include a mobile phone, a tablet computer, a personal computer, a personal digital assistant, a navigator, a digital phone, a video phone, a television, a sensor device, a server, etc., which is not limited by the embodiment of the present application.
  • the decoder or encoder in the embodiment of the present application can be the above-mentioned electronic device.
  • the electronic device in the embodiment of the present application has a point cloud encoding and decoding function, generally including a point cloud encoder (i.e., encoder) and a point cloud decoder (i.e., decoder).
  • point cloud compression generally adopts the method of compressing point cloud geometry information and attribute information separately.
  • the point cloud geometry information is first encoded in the geometry encoder, and then the reconstructed geometry information is input into the attribute encoder as additional information to assist in the compression of point cloud attributes;
  • the point cloud geometry information is first decoded in the geometry decoder, and then the decoded geometry information is input into the attribute decoder as additional information to assist in the decoding of point cloud attributes.
  • the entire codec consists of pre-processing/post-processing, geometry encoding/decoding, and attribute encoding/decoding.
  • the embodiment of the present application provides a point cloud encoder, as shown in FIG. 4, which is a reference framework for point cloud compression.
  • the point cloud encoder 11 includes a geometry encoder: a coordinate translation unit 111, a coordinate quantization unit 112, an octree construction unit 113, a geometry entropy encoder 114, and a geometry reconstruction unit 115.
  • An attribute encoder: an attribute recoloring unit 116, a color space conversion unit 117, a first attribute prediction unit 118, a quantization unit 119, and an attribute entropy encoder 1110.
  • the original geometric information is first preprocessed, and the geometric origin is normalized to the minimum position in the point cloud space through the coordinate translation unit 111.
  • the geometric information is converted from floating point numbers to integers through the coordinate quantization unit 112 to facilitate subsequent regularization processing; then the regularized geometric information is geometrically encoded, and the point cloud space is recursively divided using an octree structure in the octree construction unit 113.
  • the current node is divided into eight sub-blocks of the same size, and the occupancy codeword of each sub-block is judged. When the sub-block does not contain a point, it is recorded as empty, otherwise it is recorded as non-empty.
  • the occupancy codeword information of all blocks is recorded in the last layer of the recursive division, and geometric encoding is performed; the geometric information expressed by the octree structure is input into the geometric entropy encoder 114 to form a geometric code stream on the one hand, and geometric reconstruction processing is performed in the geometric reconstruction unit 115 on the other hand, and the reconstructed geometric information is input into the attribute encoder as additional information.
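  • As a minimal sketch of the occupancy judgment described above (not the implementation of this application), the following code forms the 8-bit occupancy codeword of one octree node by testing which of its eight equally sized sub-blocks contain at least one point.

```python
# Minimal sketch: form the 8-bit occupancy codeword of one octree node.
# A bit is set to 1 when the corresponding child sub-block is non-empty.
def occupancy_codeword(points, node_origin, node_size):
    """points: iterable of (x, y, z); node_origin: (x0, y0, z0); node_size: edge length."""
    half = node_size / 2.0
    occupancy = 0
    for child in range(8):
        # child index bits select the octant along x, y, z
        ox = node_origin[0] + ((child >> 2) & 1) * half
        oy = node_origin[1] + ((child >> 1) & 1) * half
        oz = node_origin[2] + (child & 1) * half
        non_empty = any(
            ox <= x < ox + half and oy <= y < oy + half and oz <= z < oz + half
            for (x, y, z) in points
        )
        if non_empty:
            occupancy |= 1 << child  # non-empty sub-block -> recorded as non-empty
    return occupancy
```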
  • the original attribute information is first preprocessed. Since the geometric information changes after geometric encoding, the attribute value is reallocated to each point after geometric encoding through the attribute recoloring unit 116 to achieve attribute recoloring.
  • the processed attribute information is color information
  • the original color information needs to be transformed into a YUV color space that is more in line with the visual characteristics of the human eye through the color space conversion unit 117; then the preprocessed attribute information is attribute encoded through the first attribute prediction unit 118.
  • Attribute encoding first requires the point cloud to be reordered, and the reordering method is Morton code, so the traversal order of attribute encoding is Morton order.
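  • For reference, Morton order can be obtained by interleaving the bits of the integer x, y and z coordinates; the sketch below is a generic illustration (the 21-bit coordinate depth is an assumption, not a value taken from this application).

```python
# Minimal sketch: sort points into Morton (Z-curve) order by interleaving the
# bits of their integer x, y, z coordinates. The 21-bit depth is an assumption.
def morton_code(x, y, z, bits=21):
    code = 0
    for i in range(bits):
        code |= ((x >> i) & 1) << (3 * i + 2)
        code |= ((y >> i) & 1) << (3 * i + 1)
        code |= ((z >> i) & 1) << (3 * i)
    return code

def morton_sort(points):
    """points: list of (x, y, z) integer coordinates; returns them in Morton order."""
    return sorted(points, key=lambda p: morton_code(*p))
```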
  • the attribute prediction method is single-point prediction based on the Morton order: according to the Morton order, one point is traced back from the current point to be encoded (current node), and the node found is the prediction reference point of the current point to be encoded. The attribute reconstruction value of the prediction reference point is used as the attribute prediction value, and the attribute residual value is the difference between the original attribute value and the attribute prediction value of the current point to be encoded. Finally, the attribute residual value is quantized by the quantization unit 119, and the quantized residual information is input into the attribute entropy encoder 1110 to form an attribute code stream.
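  • A minimal sketch of this single-point prediction loop is given below; the uniform scalar quantizer and the initial prediction of 0 are assumptions used only to make the example self-contained.

```python
# Minimal sketch of Morton-order single-point prediction with uniform
# quantization (the quantizer and rounding are assumptions, not the exact
# scheme of this application). Attributes are scalars for simplicity.
def encode_attributes(attrs_in_morton_order, qstep):
    quantized_residuals = []
    prev_reconstructed = 0  # prediction for the first point (assumed default)
    for original in attrs_in_morton_order:
        prediction = prev_reconstructed          # previous point's reconstruction
        residual = original - prediction         # attribute residual value
        q = round(residual / qstep)              # quantized residual
        quantized_residuals.append(q)
        prev_reconstructed = prediction + q * qstep  # keep encoder/decoder in sync
    return quantized_residuals
```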
  • FIG. 5 is a schematic diagram of the structure of a point cloud decoder provided by the present application; it is likewise a reference framework for point cloud compression.
  • the point cloud decoder 12 includes a geometric decoder: a geometric entropy decoder 121, an octree reconstruction unit 122, a coordinate inverse quantization unit 123, and a coordinate inverse translation unit 124.
  • An attribute decoder: an attribute entropy decoder 125, an inverse quantization unit 126, a second attribute prediction unit 127, and a color space inverse transformation unit 128.
  • the geometry bitstream is first entropy decoded by the geometry entropy decoder 121 to obtain the geometry information of each node, and then the octree structure is constructed by the octree reconstruction unit 122 in the same way as the geometry encoding.
  • the geometry information expressed by the octree structure is thus reconstructed; it is then inverse-quantized by the coordinate inverse quantization unit 123 and inverse-translated by the coordinate inverse translation unit 124 to obtain the decoded geometry information, which is input into the attribute decoder as additional information.
  • the Morton order is constructed in the same way as the encoding end.
  • the attribute code stream is first entropy decoded by the attribute entropy decoder 125 to obtain the quantized residual information; then, the inverse quantization unit 126 performs inverse quantization to obtain the attribute residual value; similarly, in the same way as the attribute encoding, the attribute prediction value of the current point to be decoded is obtained by the second attribute prediction unit 127, and then the attribute prediction value is added to the attribute residual value to restore the attribute reconstruction value (for example, YUV attribute value) of the current point to be decoded; finally, the decoded attribute information is obtained by color space inverse transformation by the color space inverse transformation unit 128.
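  • The decoder-side counterpart can be sketched in the same spirit: dequantize each residual and add it to the attribute prediction value to restore the attribute reconstruction value (again a simplified illustration, not the exact scheme of this application).

```python
# Minimal sketch of the decoder-side mirror of the prediction loop above:
# dequantize each residual and add it to the prediction (the previous
# reconstructed attribute) to recover the attribute reconstruction value.
def decode_attributes(quantized_residuals, qstep):
    reconstructed = []
    prev_reconstructed = 0  # must match the encoder's initial prediction
    for q in quantized_residuals:
        residual = q * qstep                     # inverse quantization
        value = prev_reconstructed + residual    # prediction + residual
        reconstructed.append(value)
        prev_reconstructed = value
    return reconstructed
```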
  • Test conditions: there are 4 general test conditions, which can include:
  • Condition 1: the geometric position is limitedly lossy and the attributes are lossy;
  • Condition 3: the geometric position is lossless, and the attributes are limitedly lossy;
  • Condition 4: the geometric position and attributes are lossless.
  • the points in the point cloud are processed in a certain order (the original acquisition order of the point cloud, the Morton order, the Hilbert order, etc.), and the prediction algorithm is first used to obtain the attribute prediction value, and the attribute residual is obtained according to the attribute value and the attribute prediction value. Then, the attribute residual is quantized to generate a quantized residual, and finally the quantized residual is encoded;
  • the points in the point cloud are processed in a certain order (the original acquisition order of the point cloud, Morton order, Hilbert order, etc.).
  • the prediction algorithm is first used to obtain the attribute prediction value, and then the decoding is performed to obtain the quantized residual.
  • the quantized residual is then dequantized, and finally the attribute reconstruction value is obtained based on the attribute prediction value and the dequantized residual.
  • the points in the point cloud are processed in a certain order (the original acquisition order of the point cloud, the Morton order, the Hilbert order, etc.), and the entire point cloud is first divided into several small groups with a maximum length of Y (such as 2), and then these small groups are combined into several large groups (the number of points in each large group does not exceed X, such as 4096), and then the prediction algorithm is used to obtain the attribute prediction value, and the attribute residual is obtained according to the attribute value and the attribute prediction value.
  • the attribute residual is transformed by DCT in small groups to generate transformation coefficients, and then the transformation coefficients are quantized to generate quantized transformation coefficients, and finally the quantized transformation coefficients are encoded in large groups;
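  • A minimal sketch of the small-group transform step is shown below, assuming a standard two-point DCT for groups of maximum length Y = 2; the exact transform and quantizer used in practice may differ.

```python
import math

# Minimal sketch (assumed transform): apply a two-point DCT to attribute
# residuals in small groups of maximum length Y = 2, then quantize the
# transform coefficients. A single leftover point keeps its residual
# untransformed in this sketch.
def transform_and_quantize(residuals, qstep):
    quantized_coeffs = []
    for i in range(0, len(residuals), 2):
        group = residuals[i:i + 2]
        if len(group) == 2:
            r0, r1 = group
            coeffs = [(r0 + r1) / math.sqrt(2), (r0 - r1) / math.sqrt(2)]
        else:
            coeffs = group  # leftover point, transform skipped in this sketch
        quantized_coeffs.extend(round(c / qstep) for c in coeffs)
    return quantized_coeffs
```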
  • the points in the point cloud are processed in a certain order (the original acquisition order of the point cloud, Morton order, Hilbert order, etc.).
  • the entire point cloud is divided into several small groups with a maximum length of Y (such as 2), and then these small groups are combined into several large groups (the number of points in each large group does not exceed X, such as 4096).
  • the quantized transform coefficients are decoded in large groups, and then the prediction algorithm is used to obtain the attribute prediction value.
  • the quantized transform coefficients are dequantized and inversely transformed in small groups.
  • the attribute reconstruction value is obtained based on the attribute prediction value and the dequantized and inversely transformed coefficients.
  • the points in the point cloud are processed in a certain order (the original acquisition order of the point cloud, the Morton order, the Hilbert order, etc.).
  • the entire point cloud is divided into several small groups with a maximum length of Y (such as 2).
  • the prediction algorithm is used to obtain the attribute prediction value.
  • the attribute residual is obtained according to the attribute value and the attribute prediction value.
  • the attribute residual is transformed by DCT in small groups to generate transformation coefficients.
  • the transformation coefficients are quantized to generate quantized transformation coefficients.
  • the quantized transformation coefficients of the entire point cloud are encoded.
  • the points in the point cloud are processed in a certain order (the original acquisition order of the point cloud, Morton order, Hilbert order, etc.).
  • the entire point cloud is divided into several small groups with a maximum length of Y (such as 2), and the quantized transformation coefficients of the entire point cloud are obtained by decoding.
  • the prediction algorithm is used to obtain the attribute prediction value, and then the quantized transformation coefficients are dequantized and inversely transformed in groups.
  • the attribute reconstruction value is obtained based on the attribute prediction value and the dequantized and inversely transformed coefficients.
  • the entire point cloud is subjected to multi-layer wavelet transform to generate transform coefficients, which are then quantized to generate quantized transform coefficients, and finally the quantized transform coefficients of the entire point cloud are encoded;
  • decoding obtains the quantized transform coefficients of the entire point cloud, and then dequantizes and inversely transforms the quantized transform coefficients to obtain attribute reconstruction values.
  • in the above-mentioned embodiment 1, the coefficients may be quantized residuals, and in the above-mentioned embodiments 2, 3, and 4, the coefficients may be quantized transform coefficients.
  • the geometric information of the point cloud and the attribute information corresponding to each point are encoded separately.
  • the current reference attribute encoding framework can be divided into Pred branch-based, PredLift branch-based, and RAHT branch-based.
  • FIG6 shows a schematic diagram of the composition framework of a point cloud encoder.
  • the geometric information is transformed so that all the point clouds are contained in a bounding box (Bounding Box), and then quantized.
  • This step of quantization mainly plays a role in scaling. Due to the quantization rounding, the geometric information of a part of the point cloud is the same, so whether to remove duplicate points is determined based on parameters.
  • the process of quantization and removal of duplicate points is also called voxelization.
  • the Bounding Box is divided into octrees or a prediction tree is constructed.
  • arithmetic coding is performed on the points in the leaf nodes of the division to generate a binary geometric bit stream; or, arithmetic coding is performed on the intersection points (Vertex) generated by the division (surface fitting is performed based on the intersection points) to generate a binary geometric bit stream.
  • color conversion is required first to convert the color information (i.e., attribute information) from the RGB color space to the YUV color space. Then, the point cloud is recolored using the reconstructed geometric information so that the unencoded attribute information corresponds to the reconstructed geometric information. Attribute encoding is mainly performed on color information.
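  • The RGB-to-YUV conversion mentioned here can be performed with a standard color matrix; the BT.601-style coefficients in the sketch below are an assumption, since the text does not fix a particular matrix.

```python
# Minimal sketch of the RGB -> YUV color conversion step, using common
# BT.601-style coefficients (an assumption; the text does not specify them).
def rgb_to_yuv(r, g, b):
    y = 0.299 * r + 0.587 * g + 0.114 * b
    u = -0.169 * r - 0.331 * g + 0.500 * b   # blue color difference (Cb)
    v = 0.500 * r - 0.419 * g - 0.081 * b    # red color difference (Cr)
    return y, u, v
```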
  • FIG7 shows a schematic diagram of the composition framework of a point cloud decoder.
  • the geometric bit stream and the attribute bit stream in the binary bit stream are first decoded independently.
  • the geometric information of the point cloud is obtained through arithmetic decoding, octree reconstruction / prediction tree reconstruction, geometry reconstruction, and coordinate inverse conversion;
  • the attribute information of the point cloud is obtained through arithmetic decoding, inverse quantization, LOD partitioning / RAHT, and color inverse conversion, and the point cloud data to be encoded (i.e., the output point cloud) is restored based on the geometric information and attribute information.
  • the current point cloud geometry encoding and decoding can be divided into octree-based geometry encoding and decoding (marked with a dotted box) and prediction tree-based geometry encoding and decoding (marked with a dotted box).
  • Test conditions: there are 4 general test conditions, which can include:
  • Condition 1: the geometric position is lossless, but the attributes are lossy;
  • the attribute residual coefficients are obtained by using the prediction method of Pred, and entropy coding is performed on the attribute residual coefficients;
  • entropy decoding obtains the attribute residual coefficients, and the original values are restored using the Pred prediction method.
  • the attribute transformation coefficients are obtained by using the Predlift method, and entropy coding is performed on the attribute transformation coefficients;
  • entropy decoding obtains the attribute transformation coefficients, and the PredLift transformation method is used to restore the original values.
  • the attribute transformation coefficients are obtained by using the RAHT method, and entropy coding is performed on the attribute transformation coefficients;
  • entropy decoding is used to obtain the attribute transformation coefficients, and the RAHT method is used to restore the original values.
  • attribute entropy coding and decoding are performed.
  • an embodiment of the present application provides a point cloud encoding and decoding method.
  • the correlation between the already encoded/decoded attribute coefficients and related parameters can be fully utilized to adaptively select different contexts for encoding and decoding, thereby being able to introduce a variety of different adaptive context modes, and no longer limited to using fixed contexts for encoding and decoding of attribute information, thereby improving the encoding and decoding performance of point cloud attributes.
  • FIG8 is a schematic diagram of an implementation flow of the point cloud decoding method proposed in the embodiment of the present application. As shown in FIG8, the following steps may be included when decoding the point cloud:
  • Step 101: Determine the index value.
  • the index value may be determined first.
  • the decoding method of the embodiment of the present application specifically refers to a point cloud decoding method, which can be applied to a point cloud decoder (also referred to as a "decoder").
  • the point cloud to be processed includes at least one node.
  • for a node in the point cloud to be processed, when decoding the node, it can be used as the node to be decoded in the point cloud to be processed, and there are multiple decoded nodes around it.
  • the current node (current point) is the node to be decoded that currently needs to be decoded among the at least one node.
  • each node in the point cloud to be processed corresponds to geometric information and attribute information; wherein the geometric information represents the spatial relationship of the point, and the attribute information represents relevant information about the attribute of the point.
  • the attribute information may be color information, or reflectivity or other attributes, which is not specifically limited in the embodiments of the present application.
  • the attribute information may be color information in any color space.
  • the attribute information may be color information in an RGB space, or color information in a YUV space, or color information in a YCbCr space, etc., which is not specifically limited in the embodiments of the present application.
  • the decoder can arrange the at least one node in a preset decoding order so as to determine the index number corresponding to each node. In this way, according to the index number corresponding to each node, the decoder can process each node in the point cloud to be processed in the preset decoding order.
  • the preset decoding order may be one of the following: original order of point cloud, Morton order, Hilbert order, etc., which is not specifically limited in the embodiments of the present application.
  • the index value may be used to determine the context used by the attribute coefficient of the current point. If the attribute information of the current point is color information, different index values may be determined corresponding to different color components of the current point.
  • the index value may include at least one of a first index value, a second index value, and a third index value.
  • the first index value, the second index value, and the third index value may correspond to the three color components of the current point, respectively, that is, the first index value, the second index value, and the third index value may be used to determine the first context, the second context, and the third context used by the attribute coefficients of the three color components of the current point, respectively.
  • the first index value can be used to determine the first context used by the attribute coefficient of the R component of the current point
  • the second index value can be used to determine the second context used by the attribute coefficient of the G component of the current point
  • the third index value can be used to determine the third context used by the attribute coefficient of the B component of the current point.
  • the first index value can be used to determine the first context used by the attribute coefficient of the Y component of the current point
  • the second index value can be used to determine the second context used by the attribute coefficient of the U component of the current point
  • the third index value can be used to determine the third context used by the attribute coefficient of the V component of the current point.
  • At least one of the first index value, the second index value, and the third index value of the current point may be determined first.
  • when decoding the code stream, the adaptive context identification information of the current point can be first determined; if the adaptive context identification information indicates that the attribute coefficient of the current point is determined using the adaptive context, then the determination process for the first index value, and/or the second index value, and/or the third index value can be executed.
  • the adaptive context identification information can be understood as a flag indicating whether to use the adaptive context for the node in the point cloud.
  • the decoder decodes the bitstream and can determine a variable as the adaptive context identification information, so that the adaptive context identification information can be determined by the value of the variable.
  • when the values of the adaptive context identification information are different, the method of determining the context used for the attribute coefficient of the current point is also different. Among them, whether to use the adaptive context to determine the attribute coefficients of some or all color components of the current point can be determined according to the adaptive context identification information.
  • if the value of the adaptive context identification information is 1, you can choose to use the adaptive context to determine the attribute coefficient of the current point; if the value of the adaptive context identification information is 0, you can choose not to use the adaptive context to determine the attribute coefficient of the current point.
  • the value of the adaptive context identification information may also be set to other values or parameters, and the present application does not impose any limitation thereto.
  • the index value used to indicate the context can be further determined. That is, after determining that the adaptive context identification information indicates that the adaptive context is used, the index value determination process is executed.
  • the adaptive context identification information determination process may not be performed. That is, it is possible to preset whether to use the adaptive context to determine the attribute coefficient of the current point, and it is also possible to preset whether to use the adaptive context to determine the attribute coefficient of one or more color components of all color components of the current point. That is, whether to use the adaptive context for some or all color components can be independently executed without relying on the value of the adaptive context identification information.
  • the adaptive context can be used for the attribute coefficients of some or all color components of the current point, or a pre-set context can be used for the attribute coefficients of some or all color components of the current point.
  • the code stream can also be decoded to determine the geometric information of the current point and the zero-run value corresponding to the current point.
  • the geometric information of the current point may include the position coordinate information of the current point.
  • the geometric information of the current point may be the spatial coordinate information (x, y, z) corresponding to the current point.
  • the zero run value corresponding to the current point may include the zero run value of the current point, or the previous zero run value of the current point, or the previous non-zero zero run value of the current point.
  • the zero run value run_length can be used to indicate whether the attribute coefficients are 0. For the color attribute, if the zero run value run_length is not 0 (or is greater than 0), it can be determined that the attribute coefficients of all color components of the current point are all 0; if the zero run value run_length is 0, it can be determined that the attribute coefficients of all color components of the current point are not all 0.
  • if the zero run value run_length indicates that the attribute coefficients of all color components of the current point are all 0, then there is no need to determine those attribute coefficients; instead, the zero run value can first be decremented by 1 to update it, and the attribute coefficient of the next point is then determined according to the updated zero run value. For the next point, it is again judged according to the zero run value run_length whether the attribute coefficients of all color components are all 0, so as to determine whether those attribute coefficients need to be determined.
  • for example, if the zero run value run_length of the current point determined by decoding the code stream is 3, which is greater than 0, then it can be determined that the attribute coefficients of all color components of the current point are all 0; therefore, there is no need to decode the attribute coefficients of the current point, and you can choose to first decrement the zero run value by 1, that is, perform the --run_length operation, and then determine the attribute coefficient of the next point based on the zero run value.
  • for the next point, the corresponding zero run value run_length is 2, which is greater than 0, so it can be determined that the attribute coefficients of all color components of that point are all 0; there is no need to decode the attribute coefficients of all color components of that point, and the zero run value continues to be decremented by 1, that is, the --run_length operation is performed.
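  • The following sketch illustrates how a decoded zero-run value can govern whether per-point coefficients are present; the symbol-stream layout is an assumed simplification for illustration, not the bitstream syntax of this application.

```python
# Minimal sketch of how a decoded zero-run value (run_length) governs whether
# per-point coefficients are present. The symbol stream layout is an assumed
# simplification: each entry is either ('run', n) or ('coeffs', (c1, c2, c3)).
def expand_zero_runs(symbols):
    points = []
    for kind, value in symbols:
        if kind == 'run':
            run_length = value
            while run_length > 0:          # all color components are zero
                points.append((0, 0, 0))
                run_length -= 1            # --run_length
        else:                              # run_length == 0: coefficients follow
            points.append(value)
    return points

# Example: a run of 3 all-zero points, then a point with non-zero coefficients.
print(expand_zero_runs([('run', 3), ('coeffs', (1, 0, 2))]))
# -> [(0, 0, 0), (0, 0, 0), (0, 0, 0), (1, 0, 2)]
```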
  • an index value may be first determined based on geometric information and/or a zero-run value.
  • the first index value corresponding to the first color component may be determined according to geometric information and/or a zero-run value.
  • the index value can be determined based on at least one of the absolute value of the first attribute coefficient, the geometric information, and the zero-run value.
  • the second index value corresponding to the second color component may be determined according to at least one of the absolute value of the first attribute coefficient, geometric information, and a zero-run value.
  • the index value can be determined based on at least one of the absolute value of the first attribute coefficient, the absolute value of the second attribute coefficient, geometric information, and the zero-run value.
  • a third index value corresponding to the third color component may be determined according to at least one of the absolute value of the first attribute coefficient, the absolute value of the second attribute coefficient, geometric information, and a zero-run value.
  • when using the zero-run value corresponding to the current point to determine the index value, that is, when determining the first index value or the second index value or the third index value based on the zero-run value, you can choose to first add or subtract the zero-run value and the first value to determine the first operation result, and then determine the index value based on the first operation result.
  • the first value may be any value.
  • the first value may be 1 or 3, and the present application does not specifically limit this.
  • when determining the index value according to the first operation result, the first operation result may be selected as the index value, the absolute value of the first operation result may be selected as the index value, or the index value may be derived from the first operation result.
  • This application does not make any specific limitation.
  • the zero-run value and the first numerical value can be first added or subtracted to determine the first operation result; then the first numerical range corresponding to the first operation result is determined; finally, the index value can be determined according to the first numerical range and the correspondence between the first preset index value and the numerical range.
  • the value of the zero-run value corresponding to the current point can be an integer greater than or equal to 0, and after performing addition or subtraction operation on the zero-run value and the first value, the first operation result obtained can be a value greater than, equal to, or less than 0. Therefore, the first numerical range corresponding to the first operation result can be any numerical range.
  • the correspondence between the first preset index value and the numerical range can represent the mapping relationship between the numerical range and the index value.
  • the corresponding index value can be determined.
  • Table 1 shows the correspondence between the first preset index value and the numerical range.
  • for example, the first numerical range corresponding to the first operation result can be (0, 1], and accordingly, the index value determined based on the correspondence between the first preset index value and the numerical range is 2.
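  • A minimal sketch of this mapping is given below; apart from the (0, 1] range mapping to index value 2, which follows the example above, the ranges and index values are illustrative placeholders, since the contents of Table 1 are not reproduced here.

```python
# Minimal sketch: derive an index value from the zero-run value by first adding
# (or subtracting) a first value, then mapping the first operation result to a
# numerical range. Only the (0, 1] -> 2 pair follows the example in the text;
# the other ranges and index values are illustrative placeholders.
def index_from_run_length(run_length, first_value=1):
    result = run_length + first_value            # first operation result
    table1 = [                                   # (lower, upper] -> index value
        ((float('-inf'), 0), 1),
        ((0, 1), 2),
        ((1, float('inf')), 3),
    ]
    for (low, high), idx in table1:
        if low < result <= high:
            return idx
    return 0
```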
  • when using the zero run value corresponding to the current point to determine the index value, that is, when determining the first index value or the second index value or the third index value based on the zero run value, the second numerical range corresponding to the zero run value can be determined first; then the index value can be determined according to the second numerical range and the correspondence between the second preset index value and the numerical range.
  • the zero run value corresponding to the current point may be an integer greater than or equal to 0. Therefore, the second numerical range corresponding to the zero run value may be a numerical range including an integer greater than or equal to 0.
  • the correspondence between the second preset index value and the numerical range can represent the mapping relationship between the numerical range and the index value.
  • the corresponding index value can be determined.
  • Table 2 shows the correspondence between the second preset index value and the numerical range.
  • for example, the second numerical range corresponding to the zero-run value can be (1, 3], and accordingly, the index value determined based on the correspondence between the second preset index value and the numerical range is 2.
  • when using the geometric information of the current point to determine the index value, that is, when determining the first index value or the second index value or the third index value based on the geometric information, you can choose to first determine the position range corresponding to the geometric information, and then determine the index value according to the position range and the correspondence between the preset position range and the index value.
  • the geometric information of the current point may include the position coordinate information of the current point, which may include different spatial components, such as the x component, the y component, and the z component
  • the range can be divided by referring to some or all of the different spatial components.
  • the position range corresponding to the geometric information of the current point may be determined only according to the x component, or the position range corresponding to the geometric information of the current point may be determined according to the y component and the z component, or the position range corresponding to the geometric information of the current point may be determined according to the x component, the y component, and the z component.
  • the correspondence between the preset position range and the index value can represent the mapping relationship between the position range and the index value.
  • corresponding index values can be determined.
  • Table 3 shows the correspondence between the preset position range and the index value:

    Position range      Index value
    Position range 1    1
    Position range 2    2
    Position range 3    3
    Position range 4    4
  • for example, if the position range corresponding to the geometric information of the current point is position range 3, the index value determined based on the correspondence between the preset position range and the index value is 3.
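  • A minimal sketch of the position-range mapping is given below; how a position range is derived from the coordinates is an assumption (here only the x component is used, which is one of the options described above), and the range boundaries are illustrative.

```python
# Minimal sketch: map the current point's geometric information to a position
# range and then to an index value, in the style of Table 3. Deriving the
# position range from the x component only is one of the options described
# above; the range boundaries are illustrative assumptions.
def index_from_geometry(x, y, z, range_edges=(64, 128, 192)):
    # position range 1..4 according to which interval x falls into
    position_range = 1
    for edge in range_edges:
        if x >= edge:
            position_range += 1
    table3 = {1: 1, 2: 2, 3: 3, 4: 4}   # position range -> index value (Table 3)
    return table3[position_range]
```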
  • when using the absolute value of the first attribute coefficient of the current point to determine the index value, that is, when determining the first index value or the second index value or the third index value based on the absolute value of the first attribute coefficient, you can choose to directly set the absolute value of the first attribute coefficient as the index value.
  • for example, if the absolute value of the first attribute coefficient is 2, the index value may be determined to be 2 based on the absolute value of the first attribute coefficient.
  • when using the absolute value of the first attribute coefficient of the current point to determine the index value, that is, when determining the first index value or the second index value or the third index value based on the absolute value of the first attribute coefficient, you can choose to first determine the third numerical range corresponding to the absolute value of the first attribute coefficient, and then determine the index value according to the third numerical range and the correspondence between the third preset index value and the numerical range.
  • the absolute value of the first attribute coefficient may be an integer greater than or equal to 0. Therefore, the third numerical range corresponding to the absolute value of the first attribute coefficient may be a numerical range including an integer greater than or equal to 0.
  • the correspondence between the third preset index value and the numerical range can represent the mapping relationship between the numerical range and the index value.
  • the corresponding index value can be determined.
  • Table 4 shows the correspondence between the third preset index value and the numerical range.
  • for example, the third numerical range corresponding to the absolute value of the first attribute coefficient can be (2, 4], and accordingly, the index value determined based on the correspondence between the third preset index value and the numerical range is 3.
  • when using the absolute value of the first attribute coefficient of the current point to determine the index value, that is, when determining the first index value or the second index value or the third index value based on the absolute value of the first attribute coefficient, you can choose to first add or subtract the absolute value of the first attribute coefficient and the second value to determine the second operation result, and then determine the index value based on the second operation result.
  • the second value may be any value.
  • the second value may be -1 or 2, which is not specifically limited in the present application.
  • for example, the absolute value of the first attribute coefficient of the current point is 1 and the second value is -2.
  • when determining the index value according to the second operation result, the second operation result may be selected as the index value, the absolute value of the second operation result may be selected as the index value, or the index value may be derived from the second operation result.
  • This application does not make any specific limitation.
  • when the absolute value of the first attribute coefficient of the current point is used to determine the index value, that is, when the first index value or the second index value or the third index value is determined based on the absolute value of the first attribute coefficient, you can choose to first add or subtract the absolute value of the first attribute coefficient and the second value to determine the second operation result; then determine the fourth numerical range corresponding to the second operation result; finally, the index value can be determined according to the fourth numerical range and the correspondence between the fourth preset index value and the numerical range.
  • the absolute value of the first attribute coefficient may be an integer greater than or equal to 0, and after performing an addition or subtraction operation on the absolute value of the first attribute coefficient and the second value, the second operation result obtained may be a value greater than, equal to, or less than 0. Therefore, the fourth value range corresponding to the second operation result may be any value range.
  • the correspondence between the fourth preset index value and the numerical range can represent the mapping relationship between the numerical range and the index value.
  • the corresponding index value can be determined.
  • Table 5 shows the correspondence between the fourth preset index value and the numerical range.
  • for example, the fourth numerical range corresponding to the second operation result can be (-1, 1], and accordingly, the index value determined based on the correspondence between the fourth preset index value and the numerical range is 2.
  • when using the absolute value of the second attribute coefficient of the current point to determine the index value, that is, when determining the first index value or the second index value or the third index value based on the absolute value of the second attribute coefficient, you can choose to directly set the absolute value of the second attribute coefficient as the index value.
  • for example, if the absolute value of the second attribute coefficient is 1, the index value may be determined to be 1 based on the absolute value of the second attribute coefficient.
  • when using the absolute value of the second attribute coefficient of the current point to determine the index value, that is, when determining the first index value or the second index value or the third index value based on the absolute value of the second attribute coefficient, you can choose to first determine the fifth numerical range corresponding to the absolute value of the second attribute coefficient, and then determine the index value according to the fifth numerical range and the correspondence between the fifth preset index value and the numerical range.
  • the absolute value of the second attribute coefficient may be an integer greater than or equal to 0. Therefore, the fifth numerical range corresponding to the absolute value of the second attribute coefficient may be a numerical range including an integer greater than or equal to 0.
  • the correspondence between the fifth preset index value and the numerical range can represent the mapping relationship between the numerical range and the index value.
  • the corresponding index value can be determined.
  • Table 6 shows the correspondence between the fifth preset index value and the numerical range.
  • for example, the fifth numerical range corresponding to the absolute value of the second attribute coefficient can be (3, 5], and accordingly, the index value determined based on the correspondence between the fifth preset index value and the numerical range is 4.
  • when using the absolute value of the second attribute coefficient of the current point to determine the index value, that is, when determining the first index value or the second index value or the third index value based on the absolute value of the second attribute coefficient, you can choose to first add or subtract the absolute value of the second attribute coefficient and the third value to determine the third operation result, and then determine the index value based on the third operation result.
  • the third value may be any value.
  • the third value may be 0 or 1, and the present application does not specifically limit it.
  • when determining the index value according to the third operation result, the third operation result may be selected as the index value, the absolute value of the third operation result may be selected as the index value, or the index value may be derived from the third operation result.
  • This application does not make any specific limitation.
  • Alternatively, when determining the first index value or the second index value or the third index value based on the absolute value of the second attribute coefficient, the absolute value of the second attribute coefficient and the third value may first be added or subtracted to determine the third operation result; the sixth numerical range corresponding to the third operation result is then determined; finally, the index value is determined according to the sixth numerical range and the correspondence between the sixth preset index value and the numerical range.
  • the absolute value of the second attribute coefficient may be an integer greater than or equal to 0, and after performing addition or subtraction operation on the absolute value of the second attribute coefficient and the third value, the third operation result obtained may be a value greater than, equal to, or less than 0. Therefore, the sixth value range corresponding to the third operation result may be any value range.
  • the correspondence between the sixth preset index value and the numerical range can represent the mapping relationship between the numerical range and the index value.
  • the corresponding index value can be determined.
  • Table 7 shows the correspondence between the sixth preset index value and the numerical range.
  • the index value determined based on the geometric information, or the index value determined based on the zero-run value can be determined as the first index value; or the index value determined based on the geometric information and the index value determined based on the zero-run value can be operated and processed to obtain the first index value.
  • the index value determined based on the geometric information is A1
  • the index value determined based on the zero run value is A2
  • A1 can be directly determined as the first index value
  • A2 can be directly determined as the first index value
  • A1 and A2 can be compared in size, and the larger or smaller value of the two can be determined as the first index value
  • A1 and A2 can be combined by addition, subtraction, weighted averaging, etc., and the calculation result can be determined as the first index value, as illustrated in the sketch below.
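  • The following non-normative sketch illustrates the combination options above for two candidate index values A1 and A2; the mode names and the example call are assumptions added for illustration.
```python
# Illustrative sketch of combining two candidate index values, A1 (derived from
# the geometric information) and A2 (derived from the zero-run value), into the
# first index value. The combination modes mirror the options listed above
# (direct use, larger/smaller value, addition, average); none is mandated here.

def combine_index_values(a1: int, a2: int, mode: str = "max") -> int:
    if mode == "use_a1":
        return a1
    if mode == "use_a2":
        return a2
    if mode == "max":
        return max(a1, a2)
    if mode == "min":
        return min(a1, a2)
    if mode == "sum":
        return a1 + a2
    if mode == "average":
        return (a1 + a2) // 2   # simple unweighted average, rounded down
    raise ValueError(f"unknown combination mode: {mode}")

first_index_value = combine_index_values(a1=3, a2=1, mode="max")   # -> 3
```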
  • When determining an index value based on at least one of the absolute value of the first attribute coefficient, the geometric information, and the zero-run value, the index value determined based on the absolute value of the first attribute coefficient, the index value determined based on the geometric information, or the index value determined based on the zero-run value may be used directly; alternatively, these index values may be combined by calculation to obtain the final index value.
  • the second index value can be determined based on the index value determined based on the absolute value of the first attribute coefficient, or based on the index value determined based on the geometric information, or based on the index value determined based on the zero-run value; the index value determined based on the absolute value of the first attribute coefficient, and/or the index value determined based on the geometric information, and/or the index value determined based on the zero-run value can also be operated and processed to obtain the second index value.
  • the index value determined based on the absolute value of the first attribute coefficient is B1
  • the index value determined based on the geometric information is B2
  • the index value determined based on the zero run value is B3
  • B1 can be directly determined as the second index value
  • B2 can be directly determined as the second index value
  • B3 can be directly determined as the second index value
  • B1, B2 and B3 can be compared in size, and the larger or smaller value among the three can be determined as the second index value
  • B1, B2 and B3 can be calculated by addition, subtraction, weighted average, etc., and the calculation result can be determined as the second index value.
  • This application does not make specific limitations.
  • When determining an index value based on at least one of the absolute value of the first attribute coefficient, the absolute value of the second attribute coefficient, the geometric information, and the zero-run value, the index value determined based on any one of these quantities may be used directly; alternatively, the index values determined based on the absolute value of the first attribute coefficient, and/or the absolute value of the second attribute coefficient, and/or the geometric information, and/or the zero-run value may be combined by calculation to obtain the final index value.
  • the third index value can be determined based on the index value determined based on the absolute value of the first attribute coefficient, or the index value determined based on the absolute value of the second attribute coefficient, or the index value determined based on the geometric information, or the index value determined based on the zero-run value; the index value determined based on the absolute value of the first attribute coefficient, and/or the index value determined based on the absolute value of the second attribute coefficient, and/or the index value determined based on the geometric information, and/or the index value determined based on the zero-run value can also be operated and processed to obtain the third index value.
  • C1 can be directly determined as the third index value
  • C2 can be directly determined as the third index value
  • C3 can be directly determined as the third index value
  • C4 can be directly determined as the third index value
  • C1, C2, C3 and C4 can be compared in size, and the larger or smaller value among the four can be determined as the third index value
  • C1, C2, C3 and C4 can be combined by addition, subtraction, weighted averaging, etc., and the calculation result can be determined as the third index value.
  • This application does not make specific limitations.
  • Step 102 Determine a decoding coefficient of the current point according to the context indicated by the index value.
  • the decoding coefficient of the current point may be further determined according to the context indicated by the index value.
  • the decoding coefficient may be a value obtained after decoding processing is performed using the context indicated by the index value.
  • Since the index value may include at least one of the first index value, the second index value, and the third index value of the current point, the context indicated by the index value corresponding to each color component may be used to determine the decoding coefficient of the corresponding color component.
  • the decoding coefficient may include at least one of a first decoding coefficient, a second decoding coefficient and a third decoding coefficient.
  • the first decoding coefficient, the second decoding coefficient and the third decoding coefficient may correspond to the three color components of the current point respectively, that is, the first decoding coefficient, the second decoding coefficient and the third decoding coefficient may be obtained by parsing using the first context, the second context and the third context respectively.
  • When determining the decoding coefficient of the current point according to the context indicated by the index value, the first decoding coefficient of the current point can be determined according to the first context indicated by the first index value, the second decoding coefficient of the current point can be determined according to the second context indicated by the second index value, and the third decoding coefficient of the current point can be determined according to the third context indicated by the third index value, as in the sketch below.
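  • As a non-normative illustration of the "index value selects the context, the context is used to parse the decoding coefficient" flow, the sketch below uses a hypothetical decode_bins callable and a contexts container as stand-ins for the entropy decoding engine.
```python
# Hedged sketch of "index value -> context -> decoding coefficient". The
# decode_bins callable stands in for the arithmetic decoding engine (not part
# of this description); only the selection of one context per color component
# from the three index values reflects the text above.

def decode_coefficients(decode_bins, contexts, first_idx, second_idx, third_idx):
    first_ctx = contexts[first_idx]     # first context indicated by the first index value
    second_ctx = contexts[second_idx]   # second context indicated by the second index value
    third_ctx = contexts[third_idx]     # third context indicated by the third index value
    first_coeff = decode_bins(first_ctx)
    second_coeff = decode_bins(second_ctx)
    third_coeff = decode_bins(third_ctx)
    return first_coeff, second_coeff, third_coeff
```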
  • Step 103 Determine the attribute coefficient of the current point according to the decoded coefficient.
  • the attribute coefficient of the current point can be further determined according to the decoding coefficient.
  • the attribute coefficient may be a related value of the attribute information determined based on the decoding coefficient.
  • Since the decoding coefficient may include at least one of the first decoding coefficient, the second decoding coefficient and the third decoding coefficient, the decoding coefficients corresponding to different color components may be used to determine the attribute coefficients of the corresponding color components.
  • When determining the attribute coefficient of the current point based on the decoding coefficient, the first attribute coefficient of the current point can be determined based on the first decoding coefficient, the second attribute coefficient of the current point can be determined based on the second decoding coefficient, and the third attribute coefficient of the current point can be determined based on the third decoding coefficient.
  • the attribute coefficient of the current point may be a quantized residual or a quantized transform coefficient of the attribute information of the current point.
  • the attribute coefficient may be a quantized residual or a quantized transform coefficient.
  • the attribute coefficient of the current point may include attribute coefficients of all color components, that is, the attribute coefficient of the current point may include at least one of a first attribute coefficient, a second attribute coefficient, and a third attribute coefficient.
  • If the attribute coefficient of the current point is the attribute coefficient of a color component, the first context indicated by the first index value can be used to determine the first decoding coefficient, and the first decoding coefficient can then be used to determine the first attribute coefficient; the second context indicated by the second index value can be used to determine the second decoding coefficient, and the second decoding coefficient can then be used to determine the second attribute coefficient; and the third context indicated by the third index value can be used to determine the third decoding coefficient, and the third decoding coefficient can then be used to determine the third attribute coefficient.
  • an adaptive context may be used for the attribute coefficient of some or all color components of the current point, or a pre-set context may be used for the attribute coefficient of some or all color components of the current point. Therefore, the attribute coefficient of any color component of the current point may be determined by an adaptive context or by a pre-set context.
  • a first decoding coefficient can be determined according to a first preset context; and/or, a second decoding coefficient can be determined according to a second preset context; and/or, a third decoding coefficient can be determined according to a third preset context.
  • For the first color component of the current point, the first decoding coefficient may be determined according to the first preset context and the first attribute coefficient determined according to the first decoding coefficient; alternatively, the first index value may be determined, the first decoding coefficient of the current point determined according to the first context indicated by the first index value, and the first attribute coefficient of the current point determined according to the first decoding coefficient;
  • For the second color component of the current point, the second decoding coefficient may be determined according to the second preset context and the second attribute coefficient determined according to the second decoding coefficient; alternatively, the second index value may be determined, the second decoding coefficient of the current point determined according to the second context indicated by the second index value, and the second attribute coefficient of the current point determined according to the second decoding coefficient;
  • For the third color component of the current point, the third decoding coefficient may be determined according to the third preset context and the third attribute coefficient determined according to the third decoding coefficient; alternatively, the third index value may be determined, the third decoding coefficient of the current point determined according to the third context indicated by the third index value, and the third attribute coefficient of the current point determined according to the third decoding coefficient.
  • a pre-set context or an adaptive context can be used for any color component of the current point.
  • the context can be adaptively selected based on an index value determined based on the geometric information of the current point, or based on an index value determined based on a zero-run value corresponding to the current point, or based on an index value determined based on attribute coefficients of other color components of the current point (such as a first attribute coefficient and/or a second attribute coefficient).
  • the present application does not make any specific limitation on this.
  • The methods of determining the context are independent of each other; that is, the methods of determining the context used for different color components are not required to be the same.
  • the context can be adaptively selected based on the index value determined by the zero run value corresponding to the current point
  • the context can be adaptively selected based on the index value determined by the first attribute coefficient
  • a pre-set context can be used. This application does not specifically limit this.
  • the first color component, the second color component, and the third color component may be different color components among all color components of the current point.
  • For example, the first color component may be a G component, the second color component may be a B component, and the third color component may be an R component; or, the first color component may be a Y component, the second color component may be a U component, and the third color component may be a V component.
  • the bitstream can be decoded to determine the sign of the non-zero attribute coefficient.
  • the code stream can continue to be decoded to determine the sign of the attribute coefficient corresponding to the color component for which the attribute coefficient is not 0.
  • If the first attribute coefficient is not 0, the sign of the first attribute coefficient can be determined; if the second attribute coefficient is not 0, the sign of the second attribute coefficient can be determined; if the third attribute coefficient is not 0, the sign of the third attribute coefficient can be determined.
  • the correlation between the attribute coefficients that have been encoded/decoded and the related parameters can be fully utilized to adaptively select different contexts for encoding and decoding, so that a variety of different adaptive context modes can be introduced, thereby improving the encoding and decoding performance of the point cloud attribute.
  • Alternatively, a preset context, such as a first preset context, a second preset context, or a third preset context, may be used.
  • the first attribute coefficient may be decoded first; then the decoded first attribute coefficient may be used to adaptively select context to decode the second attribute coefficient; finally, the decoded first attribute coefficient and/or the second attribute coefficient may be used to adaptively select context to decode the third attribute coefficient.
  • the first preset context is used to decode the first attribute coefficient;
  • the context for decoding the second attribute coefficient is adaptively selected according to whether the decoded first attribute coefficient is greater than or equal to, less than or equal to, or equal to certain constants (i.e., by determining the corresponding numerical range);
  • the context for decoding the third attribute coefficient is adaptively selected according to whether the decoded first attribute coefficient, and the decoded second attribute coefficient, are each greater than or equal to, less than or equal to, or equal to certain constants (i.e., by determining the corresponding numerical ranges), as in the sketch below.
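  • The following sketch illustrates this decoding order under assumed comparison constants; the thresholds, the adaptive_ctxs list, and the decode_bins stand-in are assumptions and not part of this description.
```python
# Hypothetical sketch of the decoding order described above: the first attribute
# coefficient is parsed with a preset context, the context for the second adapts
# to the decoded first coefficient, and the context for the third adapts to both
# decoded coefficients. The comparison constants (0 and 2) are illustrative.

def decode_point(decode_bins, preset_ctx, adaptive_ctxs):
    # first color component: fixed, pre-set context
    first = decode_bins(preset_ctx)

    # second color component: context chosen by comparing |first| with constants
    idx2 = 0 if abs(first) == 0 else (1 if abs(first) <= 2 else 2)
    second = decode_bins(adaptive_ctxs[idx2])

    # third color component: context chosen from |first| and |second| together
    idx3 = min(abs(first) + abs(second), len(adaptive_ctxs) - 1)
    third = decode_bins(adaptive_ctxs[idx3])
    return first, second, third
```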
  • an adaptive context may be selected for the attribute coefficient of one color component of the current point, and a preset context may be selected for the attribute coefficients of the other two color components.
  • the first attribute coefficient may be decoded first, and then the second attribute coefficient may be decoded; finally, the decoded first attribute coefficient and/or the second attribute coefficient may be used to adaptively select a context to decode the third attribute coefficient.
  • the first preset context is used to decode the first attribute coefficient
  • the second preset context is used to decode the second attribute coefficient
  • the context is adaptively selected to decode the third attribute coefficient by using the relationship between the decoded first attribute coefficient plus or minus a constant and the decoded second attribute coefficient plus or minus a constant.
  • Alternatively, the zero-run value may be referenced so that an adaptive context is used for the attribute coefficients of all color components.
  • For example, the context for decoding the first attribute coefficient may be adaptively selected using the decoded run-length information; then the context for decoding the second attribute coefficient may be adaptively selected using the decoded run-length information; and finally the context for decoding the third attribute coefficient may be adaptively selected using the decoded run-length information.
  • the context for decoding the first attribute coefficient is adaptively selected using the previous set of non-zero run-length values
  • the context for decoding the second attribute coefficient is adaptively selected using the previous set of non-zero run-length values
  • the context for decoding the third attribute coefficient is adaptively selected using the previous set of non-zero run-length values.
  • Alternatively, the context for decoding the first attribute coefficient is adaptively selected using the previous set of run-length values
  • the context for decoding the second attribute coefficient is adaptively selected using the previous set of run-length values
  • the context for decoding the third attribute coefficient is adaptively selected using the previous set of run-length values, as in the sketch below.
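  • The sketch below illustrates run-length-driven context selection shared by all three color components; the range boundaries and the decode_bins stand-in are assumptions added for illustration.
```python
# Illustrative sketch: the contexts for all three attribute coefficients of a
# point are selected from the previously decoded (zero-)run-length value. The
# range boundaries below are assumptions; the text only fixes the idea that the
# previous run_length value drives the context choice for every color component.

def context_index_from_run_length(prev_run_length: int) -> int:
    if prev_run_length == 0:
        return 0
    if prev_run_length <= 1:
        return 1
    if prev_run_length <= 3:
        return 2
    return 3

def decode_point_from_run_length(decode_bins, ctxs, prev_run_length):
    idx = context_index_from_run_length(prev_run_length)
    # the same run-length-derived index is reused for all three color components
    return tuple(decode_bins(ctxs[idx]) for _ in range(3))
```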
  • Alternatively, the geometric position may be referenced so that an adaptive context is used for the attribute coefficients of all color components.
  • For example, the context for decoding the first attribute coefficient may be adaptively selected using the geometric information of the current point; then the context for decoding the second attribute coefficient may be adaptively selected using the geometric information of the current point; and finally the context for decoding the third attribute coefficient may be adaptively selected using the geometric information of the current point.
  • the context for decoding the first attribute coefficient is adaptively selected according to the position magnitude of the geometric information of the current point
  • the context for decoding the second attribute coefficient is adaptively selected according to the position magnitude of the geometric information of the current point
  • the context for decoding the third attribute coefficient is adaptively selected according to the position magnitude of the geometric information of the current point, as in the sketch below.
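  • The sketch below illustrates geometry-driven context selection; the coordinate-sum measure of "position size" and the thresholds 64 and 256 are assumptions added for illustration.
```python
# Hedged sketch: the context index is derived from the magnitude of the current
# point's position (x, y, z), and all three color components reuse it. The sum
# of coordinates and the thresholds 64/256 are illustrative assumptions only.

def context_index_from_position(x: int, y: int, z: int) -> int:
    magnitude = x + y + z                # one possible measure of "position size"
    if magnitude < 64:
        return 0
    if magnitude < 256:
        return 1
    return 2

def decode_point_from_geometry(decode_bins, ctxs, position):
    idx = context_index_from_position(*position)
    return tuple(decode_bins(ctxs[idx]) for _ in range(3))
```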
  • the point cloud encoding and decoding method proposed in the embodiment of the present application can obtain stable performance gains without increasing the time complexity, and can improve the performance of point cloud encoding and decoding.
  • the embodiment of the present application provides a point cloud decoding method, wherein the decoder determines an index value; determines a decoding coefficient of the current point according to the context indicated by the index value; and determines an attribute coefficient of the current point according to the decoding coefficient. That is, in the embodiment of the present application, when the attribute coefficient is determined using the context, the correlation between the attribute coefficients that have been encoded/decoded and the related parameters can be fully utilized to adaptively select different contexts for encoding and decoding, thereby being able to introduce a variety of different adaptive context modes, and no longer being limited to using a fixed context for encoding and decoding of attribute information, thereby improving the encoding and decoding performance of point cloud attributes.
  • FIG9 is a schematic diagram of an implementation flow of the point cloud encoding method proposed in the embodiment of the present application. As shown in FIG9 , when encoding the point cloud, the following steps may be included:
  • Step 201 Determine the index value.
  • the index value may be determined first.
  • the encoding method of the embodiment of the present application specifically refers to a point cloud encoding method, which can be applied to a point cloud encoder (also referred to as "encoder" for short).
  • the point cloud to be processed includes at least one node.
  • For a node in the point cloud to be processed, when encoding the node, it can be used as the node to be encoded in the point cloud to be processed, and there are multiple encoded nodes around it.
  • The current point is the node that currently needs to be encoded among the at least one node.
  • each node in the point cloud to be processed corresponds to geometric information and attribute information; the geometric information represents the spatial relationship of the point, and the attribute information represents the attributes of the point.
  • the attribute information may be color information, or reflectivity or other attributes, which is not specifically limited in the embodiments of the present application.
  • the attribute information may be color information in any color space.
  • the attribute information may be color information in an RGB space, or color information in a YUV space, or color information in a YCbCr space, etc., which is not specifically limited in the embodiments of the present application.
  • the encoder can arrange the at least one node according to a preset coding order so as to determine the index number corresponding to each node. In this way, according to the index number corresponding to each node, the encoder can process each node in the point cloud to be processed according to the preset coding order.
  • the preset encoding order may be one of the following: original order of point cloud, Morton order, Hilbert order, etc., which is not specifically limited in the embodiments of the present application.
  • the index value may be used to determine the context used by the attribute coefficient of the current point. If the attribute information of the current point is color information, different index values may be determined corresponding to different color components of the current point.
  • the index value may include at least one of a first index value, a second index value, and a third index value.
  • the first index value, the second index value, and the third index value may correspond to the three color components of the current point, respectively, that is, the first index value, the second index value, and the third index value may be used to determine the first context, the second context, and the third context used by the attribute coefficients of the three color components of the current point, respectively.
  • the first index value can be used to determine the first context used by the attribute coefficient of the R component of the current point
  • the second index value can be used to determine the second context used by the attribute coefficient of the G component of the current point
  • the third index value can be used to determine the third context used by the attribute coefficient of the B component of the current point.
  • the first index value can be used to determine the first context used by the attribute coefficient of the Y component of the current point
  • the second index value can be used to determine the second context used by the attribute coefficient of the U component of the current point
  • the third index value can be used to determine the third context used by the attribute coefficient of the V component of the current point.
  • At least one of the first index value, the second index value, and the third index value of the current point may be determined first.
  • the adaptive context identification information of the current point can be set, and then the adaptive context identification information of the current point can be written into the bitstream.
  • the adaptive context identification information can be set to indicate that the attribute coefficient of the current point is determined using an adaptive context.
  • a process for determining the first index value, and/or the second index value, and/or the third index value may be executed.
  • the adaptive context identification information can be understood as a flag indicating whether the adaptive context is used for the node in the point cloud.
  • the encoder can determine a variable as the adaptive context identification information, so that the adaptive context identification information can be determined by the value of the variable.
  • Different values of the adaptive context identification information correspond to different methods of determining the context used for the attribute coefficient of the current point. Whether the adaptive context is used to determine the attribute coefficients of some or all color components of the current point can be determined according to the adaptive context identification information.
  • the value of the adaptive context identification information is 1, it may indicate that the attribute coefficient of the current point is determined using the adaptive context; if the value of the adaptive context identification information is 0, it may indicate that the attribute coefficient of the current point is not determined using the adaptive context.
  • the value of the adaptive context identification information may also be set to other values or parameters, and the present application does not impose any limitation thereto.
  • the index value used to indicate the context can be further determined. That is, after determining to use the adaptive context, the index value determination process is performed.
  • the adaptive context identification information setting process may not be performed. That is, it is possible to preset whether to use the adaptive context to determine the attribute coefficient of the current point, and it is also possible to preset whether to use the adaptive context to determine the attribute coefficient of one or more color components of all color components of the current point. That is, whether to use the adaptive context for some or all color components can be independently executed without relying on the value of the adaptive context identification information.
  • the adaptive context can be used for the attribute coefficients of some or all color components of the current point, or a pre-set context can be used for the attribute coefficients of some or all color components of the current point.
  • the geometric information of the current point and/or the zero-run value corresponding to the current point can be referred to. Therefore, the geometric information of the current point and the zero-run value corresponding to the current point can also be determined.
  • the geometric information of the current point may include the position coordinate information of the current point.
  • the geometric information of the current point may be the spatial coordinate information (x, y, z) corresponding to the current point.
  • the zero run value corresponding to the current point may include the zero run value of the current point, or the previous zero run value of the current point, or the previous non-zero zero run value of the current point.
  • the zero run value run_length can be used to count whether the attribute coefficient is 0. For the color attribute, if the zero run value run_length is not 0 (or greater than 0), it can be determined that the attribute coefficients of all color components of the current point are all 0; if the zero run value run_length is 0, it can be determined that the attribute coefficients of all color components of the current point are not all 0.
  • If the zero run value run_length indicates that the attribute coefficients of all color components of the current point are all 0, there is no need to determine the attribute coefficients; instead, the zero run value is first decremented by 1 to update it, and the attribute coefficient of the next point is then determined according to the zero run value. For the next point, whether the attribute coefficients of all color components are all 0 is again judged according to the zero run value run_length, to decide whether those attribute coefficients need to be determined.
  • For example, if the zero run value run_length of the current point is determined to be 3, which is greater than 0, it can be determined that the attribute coefficients of all color components of the current point are all 0, so there is no need to encode the attribute coefficients of the current point; the zero run value can first be decremented by 1, that is, the --run_length operation is performed, and the attribute coefficient of the next point is then determined based on the zero run value.
  • For the next point, the corresponding zero run value run_length is 2, which is still greater than 0, so it can be determined that the attribute coefficients of all color components of that point are all 0; there is therefore no need to encode the attribute coefficients of that point, and the zero run value continues to be decremented by 1, that is, the --run_length operation is performed, as in the sketch below.
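  • The sketch below summarizes this run_length bookkeeping on the encoder side; encode_coefficients_of is a hypothetical placeholder for the actual coding of the attribute coefficients.
```python
# Minimal sketch of the zero-run bookkeeping described above, on the encoder
# side. encode_coefficients_of is a hypothetical placeholder; the only behaviour
# taken from the text is: while run_length > 0 the attribute coefficients of
# every color component are all 0, so the point is skipped and run_length is
# decremented (--run_length); when run_length reaches 0 the coefficients are
# not all zero and are actually coded.

def process_points(points, run_lengths, encode_coefficients_of):
    runs = iter(run_lengths)
    run_length = next(runs, 0)
    for point in points:
        if run_length > 0:
            run_length -= 1                   # --run_length: all components are zero
            continue                          # nothing to code for this point
        encode_coefficients_of(point)         # coefficients are not all zero
        run_length = next(runs, 0)            # start the next zero run
```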
  • an index value may be first determined based on geometric information and/or a zero-run value.
  • the first index value corresponding to the first color component may be determined according to geometric information and/or a zero-run value.
  • the index value can be determined based on at least one of the absolute value, geometric information and zero-run value of the first attribute coefficient.
  • the second index value corresponding to the second color component may be determined according to at least one of the absolute value of the first attribute coefficient, geometric information, and a zero-run value.
  • the index value can be determined based on at least one of the absolute value of the first attribute coefficient, the absolute value of the second attribute coefficient, geometric information, and the zero-run value.
  • a third index value corresponding to the third color component may be determined according to at least one of the absolute value of the first attribute coefficient, the absolute value of the second attribute coefficient, geometric information, and a zero-run value.
  • When using the zero-run value corresponding to the current point to determine the index value, that is, when determining the first index value or the second index value or the third index value based on the zero-run value, the zero-run value and a first value may first be added or subtracted to determine a first operation result; the index value is then determined based on the first operation result.
  • the first value may be any value.
  • the first value may be 1 or 3, and the present application does not specifically limit this.
  • When determining the index value according to the first operation result, the first operation result may be selected as the index value, the absolute value of the first operation result may be selected as the index value, or the index value may be derived from the first operation result.
  • This application does not make any specific limitation.
  • the zero-run value and the first numerical value can be first added or subtracted to determine the first operation result; then the first numerical range corresponding to the first operation result is determined; finally, the index value can be determined according to the first numerical range and the correspondence between the first preset index value and the numerical range.
  • the value of the zero-run value corresponding to the current point can be an integer greater than or equal to 0, and after performing addition or subtraction operation on the zero-run value and the first value, the first operation result obtained can be a value greater than, equal to, or less than 0. Therefore, the first numerical range corresponding to the first operation result can be any numerical range.
  • the correspondence between the first preset index value and the numerical range can represent the mapping relationship between the numerical range and the index value.
  • corresponding index values can be determined.
  • Table 1 shows the correspondence between the first preset index value and the numerical range.
  • For example, if the first numerical range corresponding to the first operation result is (0, 1], the index value determined based on the correspondence between the first preset index value and the numerical range is 2, as in the sketch below.
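  • The sketch below works through this derivation; the first value of 1, the subtraction, and the range table are assumptions chosen to be consistent with the example above.
```python
# Worked sketch of the "zero-run value plus/minus a first value" derivation in
# the spirit of Table 1: the operation result is mapped to a numerical range,
# and the range selects an index value. The first value of 1 and the range
# boundaries are assumptions, chosen to be consistent with the example above
# (a result in (0, 1] giving index value 2).

FIRST_VALUE = 1

TABLE_1 = [
    ((-1, 0), 1),   # result in (-1, 0] -> index value 1
    ((0, 1), 2),    # result in (0, 1]  -> index value 2
    ((1, 3), 3),    # result in (1, 3]  -> index value 3
]

def index_from_run_length(run_length: int, subtract: bool = True) -> int:
    result = run_length - FIRST_VALUE if subtract else run_length + FIRST_VALUE
    for (low, high), index in TABLE_1:
        if low < result <= high:
            return index
    return 0   # fallback when the result falls outside every listed range

# run_length = 2 with subtraction gives result 1, which lies in (0, 1] -> index 2
assert index_from_run_length(2) == 2
```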
  • Alternatively, when using the zero run value corresponding to the current point to determine the index value, that is, when determining the first index value or the second index value or the third index value based on the zero run value, the second numerical range corresponding to the zero run value can be determined first; the index value can then be determined according to the second numerical range and the correspondence between the second preset index value and the numerical range.
  • the zero run value corresponding to the current point may be an integer greater than or equal to 0. Therefore, the second numerical range corresponding to the zero run value may be a numerical range including an integer greater than or equal to 0.
  • the correspondence between the second preset index value and the numerical range can represent the mapping relationship between the numerical range and the index value.
  • the corresponding index value can be determined.
  • Table 2 is the correspondence between the second preset index value and the numerical range.
  • For example, if the second numerical range corresponding to the zero-run value is (1, 3], the index value determined based on the correspondence between the second preset index value and the numerical range is 2.
  • When using the geometric information of the current point to determine the index value, that is, when determining the first index value or the second index value or the third index value based on the geometric information, the position range corresponding to the geometric information may be determined first; the index value is then determined according to the position range and the correspondence between the preset position range and the index value.
  • the geometric information of the current point may include the position coordinate information of the current point, which may include different spatial components, such as the x component, the y component, and the z component
  • the range can be divided by referring to some or all of the different spatial components.
  • the position range corresponding to the geometric information of the current point can be determined only according to the x component, the position range corresponding to the geometric information of the current point can also be determined according to the y component and the z component, or the position range corresponding to the geometric information of the current point can also be determined according to the x component, the y component, and the z component.
  • the correspondence between the preset position range and the index value can represent the mapping relationship between the position range and the index value.
  • corresponding index values can be determined.
  • Table 3 is the correspondence between the preset position range and the index value.
  • For example, if the corresponding position range is position range 3, the index value determined based on the correspondence between the preset position range and the index value is 3, as in the sketch below.
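  • The sketch below illustrates a Table 3 style lookup; the concrete position ranges are invented for illustration, and only the "position range selects the index value" mapping itself comes from the description.
```python
# Hedged sketch of the Table 3 style mapping: the x/y/z components of the
# current point's geometric information place it into one of several position
# ranges, and the range selects an index value. The concrete ranges below are
# invented for illustration only.

POSITION_RANGES = [
    (lambda x, y, z: x < 128 and y < 128 and z < 128, 1),   # position range 1
    (lambda x, y, z: x < 512 and y < 512 and z < 512, 2),   # position range 2
    (lambda x, y, z: True, 3),                               # position range 3
]

def index_from_geometry(x: int, y: int, z: int) -> int:
    for in_range, index in POSITION_RANGES:
        if in_range(x, y, z):
            return index
    return 0

# a point at (600, 40, 40) falls outside ranges 1 and 2, i.e. position range 3 -> index 3
assert index_from_geometry(600, 40, 40) == 3
```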
  • When using the absolute value of the first attribute coefficient of the current point to determine the index value, that is, when determining the first index value or the second index value or the third index value based on the absolute value of the first attribute coefficient, the absolute value of the first attribute coefficient may be directly set as the index value.
  • Alternatively, when determining the first index value or the second index value or the third index value based on the absolute value of the first attribute coefficient, the third numerical range corresponding to the absolute value of the first attribute coefficient may be determined first; the index value is then determined according to the third numerical range and the correspondence between the third preset index value and the numerical range.
  • the absolute value of the first attribute coefficient may be an integer greater than or equal to 0. Therefore, the third numerical range corresponding to the absolute value of the first attribute coefficient may be a numerical range including an integer greater than or equal to 0.
  • the correspondence between the third preset index value and the numerical range can represent the mapping relationship between the numerical range and the index value.
  • the corresponding index value can be determined.
  • Table 4 is the correspondence between the third preset index value and the numerical range.
  • the third numerical range corresponding to the absolute value of the first attribute coefficient can be (2, 4], and accordingly, the index value determined based on the correspondence between the third preset index value and the numerical range is 3.
  • Alternatively, when determining the first index value or the second index value or the third index value based on the absolute value of the first attribute coefficient, the absolute value of the first attribute coefficient and a second value may first be added or subtracted to determine a second operation result; the index value is then determined based on the second operation result.
  • the second value may be any value.
  • the second value may be -1 or 2, which is not specifically limited in the present application.
  • For example, the absolute value of the first attribute coefficient of the current point is 1 and the second value is -2.
  • When determining the index value according to the second operation result, the second operation result may be selected as the index value, the absolute value of the second operation result may be selected as the index value, or the index value may be derived from the second operation result.
  • This application does not make any specific limitation.
  • Alternatively, when determining the first index value or the second index value or the third index value based on the absolute value of the first attribute coefficient, the absolute value of the first attribute coefficient and the second value may first be added or subtracted to determine the second operation result; the fourth numerical range corresponding to the second operation result is then determined; finally, the index value is determined according to the fourth numerical range and the correspondence between the fourth preset index value and the numerical range.
  • the absolute value of the first attribute coefficient may be an integer greater than or equal to 0, and after performing an addition or subtraction operation on the absolute value of the first attribute coefficient and the second value, the second operation result obtained may be a value greater than, equal to, or less than 0. Therefore, the fourth value range corresponding to the second operation result may be any value range.
  • the correspondence between the fourth preset index value and the numerical range can represent the mapping relationship between the numerical range and the index value.
  • the corresponding index value can be determined.
  • Table 5 is the correspondence between the fourth preset index value and the numerical range.
  • For example, if the fourth numerical range corresponding to the second operation result is (-1, 1], the index value determined based on the correspondence between the fourth preset index value and the numerical range is 2.
  • the index value may be determined to be 1 based on the absolute value of the first attribute coefficient.
  • When using the absolute value of the second attribute coefficient of the current point to determine the index value, that is, when determining the first index value or the second index value or the third index value based on the absolute value of the second attribute coefficient, the fifth numerical range corresponding to the absolute value of the second attribute coefficient may be determined first; the index value is then determined according to the fifth numerical range and the correspondence between the fifth preset index value and the numerical range.
  • the absolute value of the second attribute coefficient may be an integer greater than or equal to 0. Therefore, the fifth numerical range corresponding to the absolute value of the second attribute coefficient may be a numerical range including an integer greater than or equal to 0.
  • the correspondence between the fifth preset index value and the numerical range can represent the mapping relationship between the numerical range and the index value.
  • the corresponding index value can be determined.
  • Table 6 is the correspondence between the fifth preset index value and the numerical range.
  • the fifth numerical range corresponding to the absolute value of the second attribute coefficient can be (3, 5], and accordingly, the index value determined based on the correspondence between the fifth preset index value and the numerical range is 4.
  • Alternatively, when determining the first index value or the second index value or the third index value based on the absolute value of the second attribute coefficient, the absolute value of the second attribute coefficient and a third value may first be added or subtracted to determine a third operation result; the index value is then determined based on the third operation result.
  • the third value may be any value.
  • the third value may be 0 or 1, and the present application does not specifically limit it.
  • When determining the index value according to the third operation result, the third operation result may be selected as the index value, the absolute value of the third operation result may be selected as the index value, or the index value may be derived from the third operation result.
  • This application does not make any specific limitation.
  • Alternatively, when determining the first index value or the second index value or the third index value based on the absolute value of the second attribute coefficient, the absolute value of the second attribute coefficient and the third value may first be added or subtracted to determine the third operation result; the sixth numerical range corresponding to the third operation result is then determined; finally, the index value is determined according to the sixth numerical range and the correspondence between the sixth preset index value and the numerical range.
  • the absolute value of the second attribute coefficient may be an integer greater than or equal to 0, and after performing addition or subtraction operation on the absolute value of the second attribute coefficient and the third value, the third operation result obtained may be a value greater than, equal to, or less than 0. Therefore, the sixth value range corresponding to the third operation result may be any value range.
  • the correspondence between the sixth preset index value and the numerical range can represent the mapping relationship between the numerical range and the index value.
  • the corresponding index value can be determined.
  • Table 7 is the correspondence between the sixth preset index value and the numerical range.
  • For example, if the sixth numerical range corresponding to the third operation result is (-1, 2], the index value determined based on the correspondence between the sixth preset index value and the numerical range is 2.
  • the index value determined based on the geometric information, or the index value determined based on the zero-run value can be determined as the first index value; or the index value determined based on the geometric information and the index value determined based on the zero-run value can be operated and processed to obtain the first index value.
  • the index value determined based on the geometric information is A1
  • the index value determined based on the zero run value is A2
  • A1 can be directly determined as the first index value
  • A2 can be directly determined as the first index value
  • A1 and A2 can be compared in size, and the larger or smaller value of the two can be determined as the first index value
  • A1 and A2 can be calculated by addition, subtraction, weighted average, etc., and the calculation result can be determined as the first index value.
  • When determining an index value based on at least one of the absolute value of the first attribute coefficient, the geometric information, and the zero-run value, the index value determined based on the absolute value of the first attribute coefficient, the index value determined based on the geometric information, or the index value determined based on the zero-run value may be used directly; alternatively, these index values may be combined by calculation to obtain the final index value.
  • the second index value can be determined based on the index value determined based on the absolute value of the first attribute coefficient, or based on the index value determined based on the geometric information, or based on the index value determined based on the zero-run value; the index value determined based on the absolute value of the first attribute coefficient, and/or the index value determined based on the geometric information, and/or the index value determined based on the zero-run value can also be operated and processed to obtain the second index value.
  • the index value determined based on the absolute value of the first attribute coefficient is B1
  • the index value determined based on the geometric information is B2
  • the index value determined based on the zero run value is B3
  • B1 can be directly determined as the second index value
  • B2 can be directly determined as the second index value
  • B3 can be directly determined as the second index value
  • B1, B2 and B3 can be compared in size, and the larger or smaller value among the three can be determined as the second index value
  • B1, B2 and B3 can be calculated by addition, subtraction, weighted average, etc., and the calculation result can be determined as the second index value.
  • This application does not make specific limitations.
  • When determining an index value based on at least one of the absolute value of the first attribute coefficient, the absolute value of the second attribute coefficient, the geometric information, and the zero-run value, the index value determined based on any one of these quantities may be used directly; alternatively, the index values determined based on the absolute value of the first attribute coefficient, and/or the absolute value of the second attribute coefficient, and/or the geometric information, and/or the zero-run value may be combined by calculation to obtain the final index value.
  • the third index value can be determined based on the index value determined based on the absolute value of the first attribute coefficient, or the index value determined based on the absolute value of the second attribute coefficient, or the index value determined based on the geometric information, or the index value determined based on the zero-run value; the index value determined based on the absolute value of the first attribute coefficient, and/or the index value determined based on the absolute value of the second attribute coefficient, and/or the index value determined based on the geometric information, and/or the index value determined based on the zero-run value can also be operated and processed to obtain the third index value.
  • C1 can be directly determined as the third index value
  • C2 can be directly determined as the third index value
  • C3 can be directly determined as the third index value
  • C4 can be directly determined as the third index value
  • C1, C2, C3 and C4 can be compared in size, and the larger or smaller value among the four can be determined as the third index value
  • C1, C2, C3 and C4 can be combined by addition, subtraction, weighted averaging, etc., and the calculation result can be determined as the third index value.
  • This application does not make specific limitations.
  • Step 202 Determine the coding coefficient of the current point according to the context indicated by the index value.
  • the coding coefficient of the current point may be further determined according to the context indicated by the index value.
  • the coding coefficient may be a value obtained after coding processing is performed using the context indicated by the index value.
  • Since the index value may include at least one of the first index value, the second index value, and the third index value of the current point, the context indicated by the index value corresponding to each color component may be used to determine the encoding coefficient of the corresponding color component.
  • the coding coefficient may include at least one of a first coding coefficient, a second coding coefficient and a third coding coefficient.
  • the first coding coefficient, the second coding coefficient and the third coding coefficient may correspond to the three color components of the current point respectively, that is, the first coding coefficient, the second coding coefficient and the third coding coefficient may be obtained by parsing using the first context, the second context and the third context respectively.
  • When determining the coding coefficient of the current point according to the context indicated by the index value, the first coding coefficient of the current point can be determined according to the first context indicated by the first index value, the second coding coefficient of the current point can be determined according to the second context indicated by the second index value, and the third coding coefficient of the current point can be determined according to the third context indicated by the third index value, as in the sketch below.
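  • The sketch below is the encoder-side counterpart of this mapping; encode_bins is a hypothetical stand-in for the arithmetic coding engine.
```python
# Encoder-side counterpart of the context selection above, as a hedged sketch.
# encode_bins stands in for the arithmetic coding engine; the description only
# fixes that each color component's coding coefficient is coded with the
# context indicated by its own index value.

def encode_coefficients(encode_bins, contexts, coeffs, first_idx, second_idx, third_idx):
    first_coeff, second_coeff, third_coeff = coeffs
    encode_bins(first_coeff, contexts[first_idx])     # first coding coefficient
    encode_bins(second_coeff, contexts[second_idx])   # second coding coefficient
    encode_bins(third_coeff, contexts[third_idx])     # third coding coefficient
```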
  • Step 203 Determine the attribute coefficient of the current point according to the encoding coefficient.
  • the attribute coefficient of the current point can be further determined according to the coding coefficient.
  • the attribute coefficient may be a related value of the attribute information determined based on the coding coefficient.
  • Since the coding coefficient may include at least one of the first coding coefficient, the second coding coefficient and the third coding coefficient, the coding coefficients corresponding to different color components may be used to determine the attribute coefficients of the corresponding color components.
  • When determining the attribute coefficient of the current point according to the coding coefficient, the first attribute coefficient of the current point can be determined according to the first coding coefficient, the second attribute coefficient of the current point can be determined according to the second coding coefficient, and the third attribute coefficient of the current point can be determined according to the third coding coefficient.
  • the attribute coefficient of the current point may be a quantized residual or a quantized transform coefficient of the attribute information of the current point.
  • the attribute coefficient may be a quantized residual or a quantized transform coefficient.
  • the attribute coefficient of the current point may include attribute coefficients of all color components, that is, the attribute coefficient of the current point may include at least one of a first attribute coefficient, a second attribute coefficient, and a third attribute coefficient.
  • If the attribute coefficient of the current point is the attribute coefficient of a color component, the first context indicated by the first index value can be used to determine the first coding coefficient, and the first coding coefficient can then be used to determine the first attribute coefficient; the second context indicated by the second index value can be used to determine the second coding coefficient, and the second coding coefficient can then be used to determine the second attribute coefficient; and the third context indicated by the third index value can be used to determine the third coding coefficient, and the third coding coefficient can then be used to determine the third attribute coefficient.
  • an adaptive context may be used for the attribute coefficient of some or all color components of the current point, or a pre-set context may be used for the attribute coefficient of some or all color components of the current point. Therefore, the attribute coefficient of any color component of the current point may be determined by an adaptive context or by a pre-set context.
  • the first coding coefficient can be determined according to the first preset context; and/or, the second coding coefficient can be determined according to the second preset context; and/or, the third coding coefficient can be determined according to the third preset context.
  • for the first color component of the current point, it is possible to choose to determine the first coding coefficient according to the first preset context and determine the first attribute coefficient according to the first coding coefficient, or to determine the first index value, then determine the first coding coefficient of the current point according to the first context indicated by the first index value, and determine the first attribute coefficient of the current point according to the first coding coefficient;
  • for the second color component of the current point, it is possible to choose to determine the second coding coefficient according to the second preset context and determine the second attribute coefficient according to the second coding coefficient, or to determine the second index value, then determine the second coding coefficient of the current point according to the second context indicated by the second index value, and determine the second attribute coefficient of the current point according to the second coding coefficient;
  • for the third color component of the current point, it is possible to choose to determine the third coding coefficient according to the third preset context and determine the third attribute coefficient according to the third coding coefficient, or to determine the third index value, then determine the third coding coefficient of the current point according to the third context indicated by the third index value, and determine the third attribute coefficient of the current point according to the third coding coefficient.
  • a pre-set context or an adaptive context can be used for any color component of the current point.
  • the context can be adaptively selected based on an index value determined based on the geometric information of the current point, or based on an index value determined based on a zero-run value corresponding to the current point, or based on an index value determined based on attribute coefficients of other color components of the current point (such as a first attribute coefficient and/or a second attribute coefficient).
  • the present application does not make any specific limitation on this.
  • the methods of determining the context are independent of each other, that is, there is no restriction that the methods of determining the context used by different color components must be the same.
  • for example, for one color component the context can be adaptively selected based on the index value determined by the zero-run value corresponding to the current point, for another color component the context can be adaptively selected based on the index value determined by the first attribute coefficient, and for yet another color component a pre-set context can be used (a sketch of such a per-component selection is given below). This application does not make specific limitations on this.
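  • For illustration only, a minimal C++ sketch (not part of the application; CtxSource, PointInfo and the mapping formulas are assumptions) of how a different context source could be chosen independently for each color component:

```cpp
#include <algorithm>
#include <cstdint>

enum class CtxSource { Preset, Geometry, ZeroRun, PrevComponents };

struct PointInfo {
    int32_t  x, y, z;      // reconstructed geometry of the current point
    uint32_t zeroRun;      // zero-run value corresponding to the current point
    uint32_t absCoeff0;    // |first attribute coefficient|, if already coded/decoded
    uint32_t absCoeff1;    // |second attribute coefficient|, if already coded/decoded
};

// Returns the index value used to pick one of numCtx contexts for one color component.
// The formulas are placeholders; the application does not fix a specific mapping.
int selectCtxIndex(CtxSource src, const PointInfo& p, int numCtx) {
    switch (src) {
        case CtxSource::Preset:         return 0;                                   // fixed context
        case CtxSource::Geometry:       return static_cast<int>((p.x + p.y + p.z) % numCtx);
        case CtxSource::ZeroRun:        return std::min<int>(p.zeroRun, numCtx - 1);
        case CtxSource::PrevComponents: return std::min<int>(p.absCoeff0 + p.absCoeff1, numCtx - 1);
    }
    return 0;
}
```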
  • the first color component, the second color component, and the third color component may be different color components among all color components of the current point.
  • for example, the first color component may be a G component, the second color component may be a B component, and the third color component may be an R component; alternatively, the first color component may be a Y component, the second color component may be a U component, and the third color component may be a V component.
  • after determining the attribute coefficient of the current point, it is possible to continue to decide, according to whether the attribute coefficient of the current point is 0, whether the sign of the attribute coefficient needs to be determined. If the attribute coefficients of the current point are not all 0, the sign of each non-zero attribute coefficient can be determined.
  • the sign of the attribute coefficient corresponding to the color component for which the attribute coefficient is not 0 may continue to be determined.
  • if the first attribute coefficient is not 0, then the determination of the sign of the first attribute coefficient can be continued; if the second attribute coefficient is not 0, then the determination of the sign of the second attribute coefficient can be continued; if the third attribute coefficient is not 0, then the determination of the sign of the third attribute coefficient can be continued (a sketch of this sign handling is given below).
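  • For illustration only, a minimal C++ sketch (not part of the application; EntropyEncoder and encodeBypass are assumed interfaces) of writing a sign bit only for the components whose attribute coefficient is non-zero:

```cpp
#include <array>
#include <cstdint>

// Hypothetical bypass-coding interface; sign bits are assumed to be coded without a context.
struct EntropyEncoder {
    void encodeBypass(bool /*bit*/) { /* write one equiprobable bit */ }
};

// A sign bit is written only for the color components whose attribute coefficient is non-zero.
void encodeSigns(EntropyEncoder& enc, const std::array<int32_t, 3>& attrCoeff) {
    for (int32_t v : attrCoeff) {
        if (v != 0) {
            enc.encodeBypass(v < 0);  // 1 = negative, 0 = positive (one possible convention)
        }
    }
}
```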
  • the correlation between the attribute coefficients that have been encoded/decoded and the related parameters can be fully utilized to adaptively select different contexts for encoding and decoding, so that a variety of different adaptive context modes can be introduced, thereby improving the encoding and decoding performance of the point cloud attribute.
  • in addition, a preset context (such as a first preset context, a second preset context, or a third preset context) may also be used for the attribute coefficients of some of the color components.
  • the first attribute coefficient may be encoded first; then the encoded first attribute coefficient may be used to adaptively select context to encode the second attribute coefficient; finally, the encoded first attribute coefficient and/or the second attribute coefficient may be used to adaptively select context to encode the third attribute coefficient.
  • the first preset context is used to encode the first attribute coefficient;
  • a context is adaptively selected for the second attribute coefficient according to whether the encoded first attribute coefficient is greater than or equal to, less than or equal to, or equal to certain constants (i.e., according to the numerical range it falls in), and the second attribute coefficient is encoded with that context;
  • a context is adaptively selected for the third attribute coefficient according to whether the encoded first attribute coefficient is greater than or equal to, less than or equal to, or equal to certain constants, and whether the encoded second attribute coefficient is greater than or equal to, less than or equal to, or equal to certain constants (i.e., according to the numerical ranges they fall in), and the third attribute coefficient is encoded with that context (a sketch of such range-based selection is given below).
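  • For illustration only, a minimal C++ sketch (not part of the application; the thresholds are placeholders, not constants from the application) of deriving a context index from the numerical range of an already coded absolute value:

```cpp
#include <cstdint>

// Maps |previously coded coefficient| to one of four context indices according to the
// numerical range it falls in. The thresholds 2 and 6 are illustrative assumptions only.
int ctxFromRange(uint32_t absPrevCoeff) {
    if (absPrevCoeff == 0) return 0;  // equal to a constant
    if (absPrevCoeff <= 2) return 1;  // less than or equal to a constant
    if (absPrevCoeff >= 6) return 3;  // greater than or equal to a constant
    return 2;                         // remaining middle range
}
```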
  • an adaptive context may be selected for the attribute coefficient of one color component of the current point, and a preset context may be selected for the attribute coefficients of the other two color components.
  • the first attribute coefficient may be encoded first, and then the second attribute coefficient may be encoded; and finally, the encoded first attribute coefficient and/or the second attribute coefficient may be used to adaptively select a context to encode the third attribute coefficient.
  • the first attribute coefficient is encoded using the first preset context
  • the second attribute coefficient is encoded using the second preset context
  • the third attribute coefficient is encoded by adaptively selecting a context based on the relationship between the encoded first attribute coefficient plus or minus a constant and the encoded second attribute coefficient plus or minus a constant.
  • alternatively, the zero run value may be taken as the reference, and an adaptive context may be used for the attribute coefficients of all color components.
  • the first attribute coefficient may be encoded using a context adaptively selected based on the encoded run length information; then the second attribute coefficient may be encoded using a context adaptively selected based on the encoded run length information; and finally the third attribute coefficient may be encoded using a context adaptively selected based on the encoded run length information.
  • for example, the contexts used to encode the first, second and third attribute coefficients may each be adaptively selected using the previous group of non-zero run length values.
  • alternatively, the contexts used to encode the first, second and third attribute coefficients may each be adaptively selected using the previous group of run length values (a sketch of run-length-based selection is given below).
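  • For illustration only, a minimal C++ sketch (not part of the application; the clipping rule is an assumption made only to bound the number of contexts) of deriving a context index from the previous run length value, the same rule being reusable for all three color components:

```cpp
#include <algorithm>
#include <cstdint>

// Selects one of numCtx contexts from the previous (non-zero) run length value,
// e.g. int ctx = ctxFromRunLength(prevRun, 4);
int ctxFromRunLength(uint32_t prevRunLength, int numCtx) {
    return std::min<int>(prevRunLength, numCtx - 1);
}
```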
  • alternatively, the geometric position may be taken as the reference, and an adaptive context may be used for the attribute coefficients of all color components.
  • the first attribute coefficient may be encoded using a context adaptively selected based on the geometric information of the current point; then the second attribute coefficient may be encoded using a context adaptively selected based on the geometric information of the current point; and finally the third attribute coefficient may be encoded using a context adaptively selected based on the geometric information of the current point.
  • for example, the contexts used to encode the first, second and third attribute coefficients may each be adaptively selected according to the magnitude of the geometric position of the current point (a sketch of geometry-based selection is given below).
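  • For illustration only, a minimal C++ sketch (not part of the application; the threshold and the formula are assumptions, since the application does not fix a specific mapping) of deriving a context index from the magnitude of the reconstructed position of the current point:

```cpp
#include <cstdint>

// Derives a context index (0..3) from the magnitude of the reconstructed position.
// kThreshold is a hypothetical value; any monotone mapping of the position could be used.
int ctxFromGeometry(int32_t x, int32_t y, int32_t z) {
    const int32_t kThreshold = 512;
    int ctx = 0;
    if (x >= kThreshold) ctx += 1;
    if (y >= kThreshold) ctx += 1;
    if (z >= kThreshold) ctx += 1;
    return ctx;
}
```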
  • the point cloud encoding and decoding method proposed in the embodiment of the present application can obtain stable performance gains without increasing the time complexity, and can improve the performance of point cloud encoding and decoding.
  • the embodiment of the present application provides a point cloud encoding method, wherein the encoder determines an index value; determines the encoding coefficient of the current point according to the context indicated by the index value; and determines the attribute coefficient of the current point according to the encoding coefficient. That is to say, in the embodiment of the present application, when the attribute coefficient is determined using the context, the correlation between the attribute coefficients that have been encoded/decoded and the related parameters can be fully utilized to adaptively select different contexts for encoding and decoding, thereby being able to introduce a variety of different adaptive context modes, and no longer being limited to using a fixed context for encoding and decoding of attribute information, thereby improving the encoding and decoding performance of point cloud attributes.
  • in a point cloud encoding and decoding method proposed in another embodiment of the present application, when encoding and decoding the attribute coefficient of the current point, the attribute coefficient of any color component of the current point can be determined by an adaptive context or by a pre-set context, as described in the following examples.
  • the correlation between the attribute coefficients that have been encoded/decoded, and the related parameters can be fully utilized to adaptively select different contexts for encoding and decoding, so that a variety of different adaptive context modes can be introduced, thereby improving the encoding and decoding performance of point cloud attributes.
  • in addition, a preset context (such as a first preset context, a second preset context, or a third preset context) may also be used for the attribute coefficients of some of the color components.
  • the attribute coefficients of the three color components of the current point are value0, value1, value2 (such as the first color component, the second color component, and the third color component), and the zero run value corresponding to the current point is run length.
  • run length is used for encoding, i.e., run length is encoded; when value0, value1, and value2 are not all 0 at the same time, the following scheme is used for encoding:
  • the attribute encoder encodes the absolute value of value2, and then encodes the absolute value of value1 using a context adaptively selected based on the absolute value of value2;
  • in one case (for example, when the absolute values of value1 and value2 are both 0), the attribute encoder encodes the absolute value of value0 minus one (that is, the absolute value of value0 is reduced by one to obtain the corresponding first coding coefficient), using a context adaptively selected based on the absolute values of value1 and value2;
  • in the other case, the attribute encoder encodes the absolute value of value0 (that is, the first encoding coefficient is the same as the first attribute coefficient), using a context adaptively selected based on the absolute value of value1 and the absolute value of value2.
  • the value of run length is decoded.
  • decoding is performed as follows:
  • the attribute decoder decodes the absolute value of value2, and then decodes the absolute value of value1 using a context adaptively selected based on the absolute value of value2;
  • the attribute decoder decodes the value0 coefficient (the first decoding coefficient) using a context adaptively selected based on the absolute values of value1 and value2;
  • in one case (for example, when the absolute values of value1 and value2 are both 0), the absolute value of value0 is equal to its decoded value plus one (that is, one is added to the first decoding coefficient to obtain the corresponding first attribute coefficient);
  • in the other case, the absolute value of value0 is equal to its decoded value (i.e., the first decoding coefficient is the same as the first attribute coefficient). A sketch of this decoding flow is given below.
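  • For illustration only, a minimal C++ sketch of this decoding flow (not part of the application; Context, AttrDecoder and decodeWithCtx are assumed interfaces, and the rule that the plus-one case applies when the absolute values of value1 and value2 are both 0 is our reading of the scheme above, since value0 cannot be 0 in that case):

```cpp
#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <vector>

struct Context { /* adaptive probability state (placeholder) */ };

// Hypothetical attribute decoder: parses one coefficient with the given context.
struct AttrDecoder {
    uint32_t decodeWithCtx(Context& /*ctx*/) { return 0; /* placeholder */ }
};

struct DecodedPoint { uint32_t absV0, absV1, absV2; };

DecodedPoint decodePointExample1(AttrDecoder& dec,
                                 std::vector<Context>& ctxV2,
                                 std::vector<Context>& ctxV1,
                                 std::vector<Context>& ctxV0) {
    DecodedPoint p{};
    // |value2| first, with its own context set.
    p.absV2 = dec.decodeWithCtx(ctxV2[0]);
    // |value1| next, context adaptively selected from the already decoded |value2|.
    size_t i1 = std::min<size_t>(p.absV2, ctxV1.size() - 1);
    p.absV1 = dec.decodeWithCtx(ctxV1[i1]);
    // value0 last, context adaptively selected from |value1| and |value2|.
    size_t i0 = std::min<size_t>(p.absV1 + p.absV2, ctxV0.size() - 1);
    uint32_t coded = dec.decodeWithCtx(ctxV0[i0]);
    p.absV0 = (p.absV1 == 0 && p.absV2 == 0) ? coded + 1  // first decoding coefficient plus one
                                             : coded;     // otherwise taken as-is
    return p;
}
```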
  • run length is used for encoding, that is, run length is encoded; when the three attribute coefficients are not all 0 at the same time, the following scheme is used for encoding:
  • when the absolute values of the first and second attribute coefficients are both equal to 0, the absolute value of the third attribute coefficient minus 1 is encoded using the fixed context (that is, the absolute value of the third attribute coefficient is reduced by 1 to obtain the corresponding third coding coefficient);
  • when the absolute value of the first attribute coefficient is equal to 0 but the absolute value of the second attribute coefficient is not equal to 0, the absolute value of the second attribute coefficient minus 1 is encoded using a fixed context (such as a second preset context) (that is, the absolute value of the second attribute coefficient is reduced by 1 to obtain the corresponding second coding coefficient), and the absolute value of the third attribute coefficient is then encoded using the fixed context (that is, the third encoding coefficient is the same as the third attribute coefficient);
  • when the absolute value of the first attribute coefficient is not equal to 0, the absolute value of the first attribute coefficient minus 1 is encoded using a fixed context (such as a first preset context) (that is, the absolute value of the first attribute coefficient is reduced by 1 to obtain the corresponding first coding coefficient), and the absolute value of the second attribute coefficient is then encoded using the fixed context (that is, the second encoding coefficient is the same as the second attribute coefficient);
  • a context is adaptively selected using the magnitude relationship between the absolute value of the first attribute coefficient minus one and the absolute value of the second attribute coefficient, and the absolute value of the third attribute coefficient is encoded using the adaptively selected context.
  • the value of run length is decoded.
  • decoding is performed as follows:
  • when the absolute values of the first and second attribute coefficients are both equal to 0, the absolute value of the third attribute coefficient is decoded using the fixed context (the third preset context), and the absolute value of the third attribute coefficient is its decoded value (the third decoded coefficient) plus one (that is, one is added to the third decoded coefficient to obtain the corresponding third attribute coefficient);
  • when the absolute value of the first attribute coefficient is equal to 0 but the absolute value of the second attribute coefficient is not equal to 0, the absolute value of the second attribute coefficient is decoded using a fixed context (a second preset context), the absolute value of the second attribute coefficient is its decoded value (the second decoded coefficient) plus one (that is, one is added to the second decoded coefficient to obtain the corresponding second attribute coefficient), and the absolute value of the third attribute coefficient is then decoded using the fixed context;
  • when the absolute value of the first attribute coefficient is not equal to 0, the absolute value of the first attribute coefficient is decoded using the fixed context (the first preset context), the absolute value of the first attribute coefficient is its decoded value (the first decoded coefficient) plus one (that is, one is added to the first decoded coefficient to obtain the corresponding first attribute coefficient), and the absolute value of the second attribute coefficient is then decoded using the fixed context (the second preset context);
  • a context is adaptively selected using the magnitude relationship between the absolute value of the first attribute coefficient minus one and the absolute value of the second attribute coefficient, and the absolute value of the third attribute coefficient is decoded using the adaptively selected context.
  • run length is used for encoding, i.e., run length is encoded; when value0, value1, and value2 are not all 0 at the same time, the following scheme is used for encoding:
  • in one case, the attribute encoder encodes the absolute value of value0 minus one (that is, the absolute value of value0 is reduced by one to obtain the corresponding first coding coefficient), using a context adaptively selected based on the run length information;
  • in the other case, the attribute encoder encodes the absolute value of value0 (that is, the first encoding coefficient is the same as the first attribute coefficient), using a context adaptively selected based on the run length information.
  • the value of run length is decoded.
  • decoding is performed as follows:
  • in one case, the absolute value of value0 is equal to its decoded value plus one (that is, one is added to the first decoding coefficient to obtain the corresponding first attribute coefficient);
  • in the other case, the absolute value of value0 is equal to its decoded value (i.e., the first decoding coefficient is the same as the first attribute coefficient).
  • run length is used for encoding, that is, run length is encoded; when the three attribute coefficients are not all 0 at the same time, the following scheme is used for encoding:
  • when the absolute values of the first and second attribute coefficients are both equal to 0, the absolute value of the third attribute coefficient minus 1 is encoded using a context adaptively selected based on the run length information (that is, the absolute value of the third attribute coefficient is reduced by 1 to obtain the corresponding third coding coefficient);
  • when the absolute value of the first attribute coefficient is equal to 0 but the absolute value of the second attribute coefficient is not equal to 0, the absolute value of the second attribute coefficient minus 1 is encoded using a context adaptively selected based on the run length information (that is, the absolute value of the second attribute coefficient is reduced by 1 to obtain the corresponding second coding coefficient), and the absolute value of the third attribute coefficient is then encoded using a context adaptively selected based on the run length information (that is, the third coding coefficient is the same as the third attribute coefficient);
  • when the absolute value of the first attribute coefficient is not equal to 0, the absolute value of the first attribute coefficient minus 1 is encoded using a context adaptively selected based on the run length information (that is, the absolute value of the first attribute coefficient is reduced by 1 to obtain the corresponding first coding coefficient), and the absolute value of the second attribute coefficient is then encoded using a context adaptively selected based on the run length information (that is, the second coding coefficient is the same as the second attribute coefficient);
  • the run length information is used to adaptively select a context, and the absolute value of the third attribute coefficient is encoded using this adaptively selected context.
  • the value of run length is decoded.
  • decoding is performed as follows:
  • when the absolute values of the first and second attribute coefficients are both equal to 0, the absolute value of the third attribute coefficient is decoded using a context adaptively selected based on the run length information, and the absolute value of the third attribute coefficient is its decoded value (the third decoded coefficient) plus one (that is, one is added to the third decoded coefficient to obtain the corresponding third attribute coefficient);
  • when the absolute value of the first attribute coefficient is equal to 0 but the absolute value of the second attribute coefficient is not equal to 0, the absolute value of the second attribute coefficient is decoded using a context adaptively selected based on the run length information, the absolute value of the second attribute coefficient is its decoded value (the second decoded coefficient) plus one (that is, one is added to the second decoded coefficient to obtain the corresponding second attribute coefficient), and the absolute value of the third attribute coefficient is then decoded using a context adaptively selected based on the run length information;
  • when the absolute value of the first attribute coefficient is not equal to 0, the absolute value of the first attribute coefficient is decoded using a context adaptively selected based on the run length information, the absolute value of the first attribute coefficient is its decoded value (the first decoded coefficient) plus one (that is, one is added to the first decoded coefficient to obtain the corresponding first attribute coefficient), and the absolute value of the second attribute coefficient is then decoded using a context adaptively selected based on the run length information;
  • the run length information is used to adaptively select a context, and the absolute value of the third attribute coefficient is decoded using this adaptively selected context.
  • run length is used for encoding, i.e., run length is encoded; when value0, value1, and value2 are not all 0 at the same time, the following scheme is used for encoding:
  • the attribute encoder encodes the absolute value of value1 using a context adaptively selected based on the geometric information;
  • the attribute encoder encodes the absolute value of value2 using a context adaptively selected based on the geometric information;
  • in one case, the attribute encoder encodes the absolute value of value0 minus one (that is, the absolute value of value0 is reduced by one to obtain the corresponding first coding coefficient), using a context adaptively selected based on the geometric information;
  • in the other case, the attribute encoder encodes the absolute value of value0 (that is, the first coding coefficient is the same as the first attribute coefficient), using a context adaptively selected based on the geometric information.
  • the value of run length is decoded.
  • decoding is performed as follows:
  • the attribute decoder decodes the absolute value of value1 using a context adaptively selected based on the geometric information;
  • the attribute decoder decodes the absolute value of value2 using a context adaptively selected based on the geometric information;
  • the attribute decoder decodes the value0 coefficient (the first decoding coefficient) using a context adaptively selected based on the geometric information;
  • in one case, the absolute value of value0 is equal to its decoded value plus one (that is, one is added to the first decoding coefficient to obtain the corresponding first attribute coefficient);
  • in the other case, the absolute value of value0 is equal to its decoded value (i.e., the first decoding coefficient is the same as the first attribute coefficient).
  • run length is used for encoding, that is, run length is encoded; when the three attribute coefficients are not all 0 at the same time, the following scheme is used for encoding:
  • when the absolute values of the first and second attribute coefficients are both equal to 0, the absolute value of the third attribute coefficient minus 1 is encoded using a context adaptively selected based on the geometric information (that is, the absolute value of the third attribute coefficient is reduced by 1 to obtain the corresponding third coding coefficient);
  • when the absolute value of the first attribute coefficient is equal to 0 but the absolute value of the second attribute coefficient is not equal to 0, the absolute value of the second attribute coefficient minus 1 is encoded using a context adaptively selected based on the geometric information (that is, the absolute value of the second attribute coefficient is reduced by 1 to obtain the corresponding second coding coefficient), and the absolute value of the third attribute coefficient is then encoded using a context adaptively selected based on the geometric information (that is, the third coding coefficient is the same as the third attribute coefficient);
  • when the absolute value of the first attribute coefficient is not equal to 0, the absolute value of the first attribute coefficient minus 1 is encoded using a context adaptively selected based on the geometric information (that is, the absolute value of the first attribute coefficient is reduced by 1 to obtain the corresponding first coding coefficient), and the absolute value of the second attribute coefficient is then encoded using a context adaptively selected based on the geometric information (that is, the second coding coefficient is the same as the second attribute coefficient);
  • the context is adaptively selected using geometric information, and the absolute value of the third attribute coefficient is encoded using the adaptively selected context.
  • the value of run length is decoded.
  • decoding is performed as follows:
  • when the absolute values of the first and second attribute coefficients are both equal to 0, the absolute value of the third attribute coefficient is decoded using a context adaptively selected based on the geometric information, and the absolute value of the third attribute coefficient is its decoded value (the third decoded coefficient) plus one (that is, one is added to the third decoded coefficient to obtain the corresponding third attribute coefficient);
  • when the absolute value of the first attribute coefficient is equal to 0 but the absolute value of the second attribute coefficient is not equal to 0, the absolute value of the second attribute coefficient is decoded using a context adaptively selected based on the geometric information, the absolute value of the second attribute coefficient is its decoded value (the second decoded coefficient) plus one (that is, one is added to the second decoded coefficient to obtain the corresponding second attribute coefficient), and the absolute value of the third attribute coefficient is then decoded using a context adaptively selected based on the geometric information;
  • when the absolute value of the first attribute coefficient is not equal to 0, the absolute value of the first attribute coefficient is decoded using a context adaptively selected based on the geometric information, the absolute value of the first attribute coefficient is its decoded value (the first decoded coefficient) plus one (that is, one is added to the first decoded coefficient to obtain the corresponding first attribute coefficient), and the absolute value of the second attribute coefficient is then decoded using a context adaptively selected based on the geometric information;
  • a context is adaptively selected using geometric information, and the absolute value of the third attribute coefficient is decoded using the adaptively selected context.
  • the attribute coefficient of the third color component can be encoded and decoded by adaptively selecting a context based on the absolute value of the first attribute coefficient and/or the absolute value of the second attribute coefficient.
  • for the encoding end: the first attribute coefficient is encoded using an attribute encoder; the second attribute coefficient is encoded using an attribute encoder; the attribute encoder then uses the absolute value of the encoded first attribute coefficient plus a constant 1 and the absolute value of the encoded second attribute coefficient plus a constant 2 to adaptively select a context, and encodes the third attribute coefficient with that context.
  • for the decoding end: the first attribute coefficient is decoded using an attribute decoder; the second attribute coefficient is decoded using an attribute decoder; the attribute decoder then uses the absolute value of the decoded first attribute coefficient plus a constant 1 and the absolute value of the decoded second attribute coefficient plus a constant 2 to adaptively select a context, and decodes the third attribute coefficient with that context.
  • for the encoding end: the first attribute coefficient is encoded using an attribute encoder; the second attribute coefficient is encoded using a context adaptively selected according to whether the absolute value of the encoded first attribute coefficient is equal to constant 1 and is less than or equal to constant 2; the third attribute coefficient is encoded using a context adaptively selected according to whether the absolute value of the encoded first attribute coefficient is equal to constant 3 and is less than or equal to constant 4, and whether the absolute value of the encoded second attribute coefficient is equal to constant 5 and is less than or equal to constant 6; for the decoding end: the first attribute coefficient is decoded using an attribute decoder; the second attribute coefficient is decoded using a context adaptively selected according to whether the absolute value of the decoded first attribute coefficient is equal to constant 1 and is less than or equal to constant 2; the third attribute coefficient is decoded using a context adaptively selected according to whether the absolute value of the decoded first attribute coefficient is equal to constant 3 and is less than or equal to constant 4, and whether the absolute value of the decoded second attribute coefficient is equal to constant 5 and is less than or equal to constant 6.
  • the second attribute coefficient is encoded by using an attribute encoder; the third attribute coefficient is adaptively selected to encode the context by using whether the absolute value of the encoded second attribute coefficient is equal to constant 7 and is less than or equal to constant 8; for the decoding end: the second attribute coefficient is decoded by using an attribute decoder; the third attribute coefficient is adaptively selected to decode the context by using whether the absolute value of the decoded second attribute coefficient is equal to constant 7 and is less than or equal to constant 8.
  • the second attribute coefficient is encoded by using an attribute encoder; the third attribute coefficient is adaptively selected to encode the context by using whether the absolute value of the encoded second attribute coefficient is equal to constant 7 and whether it is greater than or equal to constant 8; for the decoding end: the second attribute coefficient is decoded by using an attribute decoder; the third attribute coefficient is adaptively selected to decode the context by using whether the absolute value of the decoded second attribute coefficient is equal to constant 7 and whether it is greater than or equal to constant 8.
  • for the encoding end: the first attribute coefficient is encoded by using an attribute encoder; the second attribute coefficient is encoded by using an attribute encoder; the attribute encoder then adaptively selects a context based on the relationship between the absolute value of the encoded first attribute coefficient minus a constant 9 and the absolute value of the encoded second attribute coefficient minus a constant 10, and encodes the third attribute coefficient with that context;
  • for the decoding end: the first attribute coefficient is decoded by using an attribute decoder; the second attribute coefficient is decoded by using the attribute decoder; the attribute decoder then adaptively selects a context based on the relationship between the absolute value of the decoded first attribute coefficient minus the constant 9 and the absolute value of the decoded second attribute coefficient minus the constant 10, and decodes the third attribute coefficient with that context (sketches of these constant-based selections are given below).
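  • For illustration only, minimal C++ sketches (not part of the application; the constant values and the exact mapping to a context index are assumptions, since the application only refers to "constant 1", "constant 2", and so on) of two of the constant-based selections described above:

```cpp
#include <cstdint>

// Variant: context derived from (|first coefficient| + c1) versus (|second coefficient| + c2).
int ctxFromOffsetSums(uint32_t absV0, uint32_t absV1) {
    const uint32_t c1 = 1, c2 = 1;  // placeholder constants
    return (absV0 + c1 > absV1 + c2) ? 0 : 1;
}

// Variant: context derived from whether |second coefficient| equals one constant
// and whether it is less than or equal to another constant.
int ctxFromFlags(uint32_t absV1) {
    const uint32_t c7 = 0, c8 = 2;  // placeholder constants
    int ctx = 0;
    if (absV1 == c7) ctx += 1;
    if (absV1 <= c8) ctx += 2;
    return ctx;                     // 0..3
}
```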
  • when the absolute values of the first and second attribute coefficients are both equal to 0, the absolute value of the third attribute coefficient minus 1 is encoded using the fixed context (that is, the absolute value of the third attribute coefficient is reduced by 1 to obtain the corresponding third coding coefficient);
  • when the absolute value of the first attribute coefficient is equal to 0 but the absolute value of the second attribute coefficient is not equal to 0, the absolute value of the second attribute coefficient minus 1 is encoded using a fixed context (such as a second preset context) (that is, the absolute value of the second attribute coefficient is reduced by 1 to obtain the corresponding second coding coefficient), and the absolute value of the third attribute coefficient is then encoded using the fixed context (that is, the third encoding coefficient is the same as the third attribute coefficient);
  • when the absolute value of the first attribute coefficient is not equal to 0, the absolute value of the first attribute coefficient minus 1 is encoded using a fixed context (such as a first preset context) (that is, the absolute value of the first attribute coefficient is reduced by 1 to obtain the corresponding first coding coefficient), and the absolute value of the second attribute coefficient is then encoded using the fixed context (that is, the second encoding coefficient is the same as the second attribute coefficient);
  • a context is adaptively selected using the magnitude relationship between the absolute value of the first attribute coefficient and the absolute value of the second attribute coefficient, and the absolute value of the third attribute coefficient is encoded using the adaptively selected context.
  • Adaptively selecting a context using the magnitude relationship between the absolute value of the first attribute coefficient and the absolute value of the second attribute coefficient can be understood as: if the absolute value of the first attribute coefficient is greater than the absolute value of the second attribute coefficient, one context is selected; if the absolute value of the first attribute coefficient is less than or equal to the absolute value of the second attribute coefficient, another context is selected.
  • alternatively, if the absolute value of the first attribute coefficient is greater than or equal to the absolute value of the second attribute coefficient, one context is selected; if the absolute value of the first attribute coefficient is less than the absolute value of the second attribute coefficient, another context is selected.
  • alternatively, if the absolute value of the first attribute coefficient is greater than the absolute value of the second attribute coefficient, one context is selected; if the absolute value of the first attribute coefficient is equal to the absolute value of the second attribute coefficient, another context is selected; if the absolute value of the first attribute coefficient is less than the absolute value of the second attribute coefficient, yet another context is selected (sketches of the two-way and three-way selections are given below).
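  • For illustration only, minimal C++ sketches (not part of the application; the numbering of the context indices is arbitrary) of the two-way and three-way magnitude-relation selections described above:

```cpp
#include <cstdint>

// Two-way selection: one context when |first| > |second|, another otherwise.
int ctxTwoWay(uint32_t absV0, uint32_t absV1) {
    return (absV0 > absV1) ? 0 : 1;
}

// Three-way selection: greater / equal / less each map to a different context.
int ctxThreeWay(uint32_t absV0, uint32_t absV1) {
    if (absV0 > absV1)  return 0;
    if (absV0 == absV1) return 1;
    return 2;
}
```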
  • the value of run length is decoded.
  • decoding is performed as follows:
  • when the absolute values of the first and second attribute coefficients are both equal to 0, the absolute value of the third attribute coefficient is decoded using the fixed context (the third preset context), and the absolute value of the third attribute coefficient is its decoded value (the third decoded coefficient) plus one (that is, one is added to the third decoded coefficient to obtain the corresponding third attribute coefficient);
  • when the absolute value of the first attribute coefficient is equal to 0 but the absolute value of the second attribute coefficient is not equal to 0, the absolute value of the second attribute coefficient is decoded using a fixed context (a second preset context), the absolute value of the second attribute coefficient is its decoded value (a second decoded coefficient) plus one (that is, one is added to the second decoded coefficient to obtain the corresponding second attribute coefficient), and the absolute value of the third attribute coefficient is then decoded using the fixed context;
  • when the absolute value of the first attribute coefficient is not equal to 0, the absolute value of the first attribute coefficient is decoded using the fixed context (the first preset context), the absolute value of the first attribute coefficient is its decoded value (the first decoded coefficient) plus one (that is, one is added to the first decoded coefficient to obtain the corresponding first attribute coefficient), and the absolute value of the second attribute coefficient is then decoded using the fixed context (the second preset context);
  • a context is adaptively selected using the magnitude relationship between the absolute value of the first attribute coefficient and the absolute value of the second attribute coefficient, and the absolute value of the third attribute coefficient is decoded using the adaptively selected context.
  • Adaptively selecting a context using the magnitude relationship between the absolute value of the first attribute coefficient and the absolute value of the second attribute coefficient can be understood as: if the absolute value of the first attribute coefficient is greater than the absolute value of the second attribute coefficient, one context is selected; if the absolute value of the first attribute coefficient is less than or equal to the absolute value of the second attribute coefficient, another context is selected.
  • alternatively, if the absolute value of the first attribute coefficient is greater than or equal to the absolute value of the second attribute coefficient, one context is selected; if the absolute value of the first attribute coefficient is less than the absolute value of the second attribute coefficient, another context is selected.
  • the absolute value of the first attribute coefficient is greater than the absolute value of the second attribute coefficient, select a context; if the absolute value of the first attribute coefficient is equal to the absolute value of the second attribute coefficient, select another context; if the absolute value of the first attribute coefficient is less than the absolute value of the second attribute coefficient, select yet another context.
  • the index value determined based on the geometric information, or the index value determined based on the zero-run value can be determined as the first index value; or the index value determined based on the geometric information and the index value determined based on the zero-run value can be operated and processed to obtain the first index value.
  • the second index value can be determined based on the index value determined based on the absolute value of the first attribute coefficient, or based on the index value determined based on the geometric information, or based on the index value determined based on the zero-run value; the index value determined based on the absolute value of the first attribute coefficient, and/or the index value determined based on the geometric information, and/or the index value determined based on the zero-run value can also be operated and processed to obtain the second index value.
  • the third index value can be determined based on the index value determined based on the absolute value of the first attribute coefficient, or the index value determined based on the absolute value of the second attribute coefficient, or the index value determined based on the geometric information, or the index value determined based on the zero-run value; the index value determined based on the absolute value of the first attribute coefficient, and/or the index value determined based on the absolute value of the second attribute coefficient, and/or the index value determined based on the geometric information, and/or the index value determined based on the zero-run value can also be operated and processed to obtain the third index value.
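  • For illustration only, a minimal C++ sketch (not part of the application; the combination rule, here a clipped sum, is an assumption, since the application only states that the candidate index values may be operated and processed into one index value):

```cpp
#include <algorithm>

// Combines candidate index values (from geometry, the zero-run value and previously coded
// coefficients) into one final index value bounded by the number of available contexts.
int combineIndexValues(int geomIdx, int zeroRunIdx, int prevCoeffIdx, int numCtx) {
    int combined = geomIdx + zeroRunIdx + prevCoeffIdx;
    return std::min(combined, numCtx - 1);
}
```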
  • the coding coefficient (decoding coefficient) of the current point can be further determined according to the context indicated by the index value.
  • the coding coefficient (decoding coefficient) can be a value obtained after encoding and decoding processing using the context indicated by the index value.
  • the attribute coefficients of the current point can be further determined according to the coding coefficients (decoding coefficients).
  • the attribute coefficients can be related values of attribute information determined based on the coding coefficients (decoding coefficients).
  • the attribute coefficient of the current point may be a quantized residual or a quantized transform coefficient of the attribute information of the current point.
  • the attribute coefficient may be a quantized residual or a quantized transform coefficient.
  • a 1-bit flag (such as adaptive context identification information) can also be used to indicate whether the adaptive context selection method is turned on. This flag can be placed in the attribute header of the high-level syntax. The flag is conditionally parsed under certain specific conditions; if it does not appear in the bitstream, its value can be defaulted to a fixed value.
  • the decoding end needs to decode the flag bit. If the flag bit does not appear in the bitstream, it will not be decoded.
  • the default value of the flag bit is a fixed value.
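  • For illustration only, a minimal C++ sketch (not part of the application; BitReader, AttributeHeader and the condition variable are assumptions) of the conditional parsing of such a flag, with the default fixed value used when the flag is absent from the bitstream:

```cpp
// Hypothetical bitstream reader exposing only what this sketch needs.
struct BitReader {
    bool readFlag() { return false; /* placeholder: read one bit from the stream */ }
};

struct AttributeHeader {
    bool adaptiveCtxEnabled = false;  // default fixed value used when the flag is absent
};

// The flag is parsed only when the high-level syntax condition holds; otherwise the
// header keeps the default value and no bit is read.
void parseAttributeHeader(BitReader& br, AttributeHeader& hdr, bool conditionPresent) {
    if (conditionPresent) {
        hdr.adaptiveCtxEnabled = br.readFlag();  // 1: adaptive context is used, 0: it is not
    }
}
```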
  • the adaptive context identification information can be understood as a flag indicating whether the adaptive context is used for the node in the point cloud.
  • the encoder can determine a variable as the adaptive context identification information, so that the adaptive context identification information can be determined by the value of the variable.
  • the value of the adaptive context identification information is 1, it may indicate that the attribute coefficient of the current point is determined using the adaptive context; if the value of the adaptive context identification information is 0, it may indicate that the attribute coefficient of the current point is not determined using the adaptive context.
  • the adaptive context identification information setting process may not be performed. That is, it is possible to preset whether to use the adaptive context to determine the attribute coefficient of the current point, and it is also possible to preset whether to use the adaptive context to determine the attribute coefficient of one or more color components of all color components of the current point. That is, whether to use the adaptive context for some or all color components can be independently executed without relying on the value of the adaptive context identification information.
  • the embodiment of the present application provides a point cloud encoding and decoding method, wherein the decoder determines an index value; determines a decoding coefficient of a current point according to the context indicated by the index value; and determines an attribute coefficient of the current point according to the decoding coefficient.
  • the encoder determines an index value; determines an encoding coefficient of a current point according to the context indicated by the index value; and determines an attribute coefficient of the current point according to the encoding coefficient.
  • the correlation between the attribute coefficients that have been encoded/decoded and the related parameters can be fully utilized to adaptively select different contexts for encoding and decoding, thereby being able to introduce a variety of different adaptive context modes, and no longer being limited to using a fixed context for encoding and decoding of attribute information, thereby improving the encoding and decoding performance of point cloud attributes.
  • FIG. 10 is a first schematic diagram of the composition structure of an encoder.
  • the encoder 20 may include: a first determining unit 21, wherein:
  • the first determination unit 21 is configured to determine an index value; determine a coding coefficient of a current point according to a context indicated by the index value; and determine an attribute coefficient of the current point according to the coding coefficient.
  • a "unit" can be a part of a circuit, a part of a processor, a part of a program or software, etc., and of course it can also be a module, or it can be non-modular.
  • the components in this embodiment can be integrated into a processing unit, or each unit can exist physically separately, or two or more units can be integrated into one unit.
  • the above-mentioned integrated unit can be implemented in the form of hardware or in the form of a software functional module.
  • the integrated unit is implemented in the form of a software function module and is not sold or used as an independent product, it can be stored in a computer-readable storage medium.
  • the technical solution of this embodiment is essentially or the part that contributes to the prior art or all or part of the technical solution can be embodied in the form of a software product.
  • the computer software product is stored in a storage medium, including several instructions for a computer device (which can be a personal computer, server, or network device, etc.) or a processor to perform all or part of the steps of the method described in this embodiment.
  • the aforementioned storage medium includes: U disk, mobile hard disk, read-only memory (ROM), random access memory (RAM), disk or optical disk, etc.
  • an embodiment of the present application provides a computer-readable storage medium, which is applied to the encoder 20, and the computer-readable storage medium stores a computer program, and when the computer program is executed by the first processor, the method described in any one of the aforementioned embodiments is implemented.
  • Figure 11 is a second schematic diagram of the composition structure of the encoder.
  • the encoder 20 may include: a first memory 22 and a first processor 23, a first communication interface 24 and a first bus system 25.
  • the first memory 22, the first processor 23, and the first communication interface 24 are coupled together through the first bus system 25.
  • the first bus system 25 is used to achieve connection and communication between these components.
  • the first bus system 25 also includes a power bus, a control bus, and a status signal bus.
  • various buses are labeled as the first bus system 25 in Figure 11. Among them,
  • the first communication interface 24 is used for receiving and sending signals during the process of sending and receiving information with other external network elements;
  • the first memory 22 is used to store a computer program that can be run on the first processor
  • the first processor 23 is used to determine an index value when running the computer program; determine a coding coefficient of a current point according to a context indicated by the index value; and determine an attribute coefficient of the current point according to the coding coefficient.
  • the first memory 22 in the embodiment of the present application can be a volatile memory or a non-volatile memory, or can include both volatile and non-volatile memories.
  • the non-volatile memory can be a read-only memory (ROM), a programmable read-only memory (PROM), an erasable programmable read-only memory (EPROM), an electrically erasable programmable read-only memory (EEPROM), or a flash memory.
  • the volatile memory can be a random access memory (RAM), which is used as an external cache.
  • for example, many forms of RAM are available, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate synchronous DRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchronous link DRAM (SLDRAM), and direct RAM bus RAM (DR RAM).
  • the first processor 23 may be an integrated circuit chip with signal processing capabilities. In the implementation process, each step of the above method can be completed by the hardware integrated logic circuit in the first processor 23 or the instruction in the form of software.
  • the above-mentioned first processor 23 can be a general processor, a digital signal processor (Digital Signal Processor, DSP), an application specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field programmable gate array (Field Programmable Gate Array, FPGA) or other programmable logic devices, discrete gates or transistor logic devices, discrete hardware components.
  • the methods, steps and logic block diagrams disclosed in the embodiments of the present application can be implemented or executed.
  • the general processor can be a microprocessor or the processor can also be any conventional processor, etc.
  • the steps of the method disclosed in the embodiments of the present application can be directly embodied as a hardware decoding processor to execute, or the hardware and software modules in the decoding processor can be executed.
  • the software module can be located in a mature storage medium in the field such as a random access memory, a flash memory, a read-only memory, a programmable read-only memory or an electrically erasable programmable memory, a register, etc.
  • the storage medium is located in the first memory 22, and the first processor 23 reads the information in the first memory 22 and completes the steps of the above method in combination with its hardware.
  • the processing unit can be implemented in one or more application specific integrated circuits (Application Specific Integrated Circuits, ASIC), digital signal processors (Digital Signal Processing, DSP), digital signal processing devices (DSP Device, DSPD), programmable logic devices (Programmable Logic Device, PLD), field programmable gate arrays (Field-Programmable Gate Array, FPGA), general processors, controllers, microcontrollers, microprocessors, other electronic units for performing the functions described in this application or a combination thereof.
  • the technology described in this application can be implemented by modules (such as processes, functions, etc.) that perform the functions described in this application.
  • the software code can be stored in a memory and executed by a processor.
  • the memory can be implemented in the processor or outside the processor.
  • the first processor 23 is further configured to execute any one of the methods described in the foregoing embodiments when running the computer program.
  • FIG. 12 is a first schematic diagram of the composition structure of a decoder.
  • the decoder 30 may include: a second determining unit 31; wherein:
  • the second determination unit 31 is configured to determine an index value; determine a decoding coefficient of a current point according to a context indicated by the index value; and determine an attribute coefficient of the current point according to the decoding coefficient.
  • a "unit" can be a part of a circuit, a part of a processor, a part of a program or software, etc., and of course it can also be a module, or it can be non-modular.
  • the components in this embodiment can be integrated into a processing unit, or each unit can exist physically separately, or two or more units can be integrated into one unit.
  • the above-mentioned integrated unit can be implemented in the form of hardware or in the form of a software functional module.
  • the integrated unit is implemented in the form of a software function module and is not sold or used as an independent product, it can be stored in a computer-readable storage medium.
  • the technical solution of this embodiment is essentially or the part that contributes to the prior art or all or part of the technical solution can be embodied in the form of a software product.
  • the computer software product is stored in a storage medium, including several instructions for a computer device (which can be a personal computer, server, or network device, etc.) or a processor to perform all or part of the steps of the method described in this embodiment.
  • the aforementioned storage medium includes: U disk, mobile hard disk, read-only memory (ROM), random access memory (RAM), disk or optical disk, etc.
  • an embodiment of the present application provides a computer-readable storage medium, which is applied to the decoder 30.
  • the computer-readable storage medium stores a computer program, and when the computer program is executed by the first processor, the method described in any one of the above embodiments is implemented.
  • Figure 13 is a second schematic diagram of the composition structure of the decoder.
  • the decoder 30 may include: a second memory 32 and a second processor 33, a second communication interface 34 and a second bus system 35.
  • the second memory 32 and the second processor 33, and the second communication interface 34 are coupled together through the second bus system 35.
  • the second bus system 35 is used to realize the connection and communication between these components.
  • the second bus system 35 also includes a power bus, a control bus and a status signal bus.
  • various buses are marked as the second bus system 35 in Figure 13. Among them,
  • the second communication interface 34 is used for receiving and sending signals during the process of sending and receiving information with other external network elements;
  • the second memory 32 is used to store a computer program that can be run on the second processor
  • the second processor 33 is used to determine an index value when running the computer program; determine a decoding coefficient of a current point according to a context indicated by the index value; and determine an attribute coefficient of the current point according to the decoding coefficient.
  • the second memory 32 in the embodiment of the present application can be a volatile memory or a non-volatile memory, or can include both volatile and non-volatile memories.
  • the non-volatile memory can be a read-only memory (ROM), a programmable read-only memory (PROM), an erasable programmable read-only memory (EPROM), an electrically erasable programmable read-only memory (EEPROM), or a flash memory.
  • the volatile memory can be a random access memory (RAM), which is used as an external cache.
  • for example, many forms of RAM are available, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate synchronous DRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchronous link DRAM (SLDRAM), and direct RAM bus RAM (DR RAM).
  • the second processor 33 may be an integrated circuit chip with signal processing capabilities. In the implementation process, each step of the above method can be completed by the hardware integrated logic circuit or software instructions in the second processor 33.
  • the above-mentioned second processor 33 can be a general processor, a digital signal processor (Digital Signal Processor, DSP), an application specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field programmable gate array (Field Programmable Gate Array, FPGA) or other programmable logic devices, discrete gates or transistor logic devices, discrete hardware components.
  • the methods, steps and logic block diagrams disclosed in the embodiments of the present application can be implemented or executed.
  • the general processor can be a microprocessor or the processor can also be any conventional processor, etc.
  • the steps of the method disclosed in the embodiments of the present application can be directly embodied as a hardware decoding processor to execute, or the hardware and software modules in the decoding processor can be executed.
  • the software module can be located in a mature storage medium in the field such as a random access memory, a flash memory, a read-only memory, a programmable read-only memory or an electrically erasable programmable memory, a register, etc.
  • the storage medium is located in the second memory 32, and the second processor 33 reads the information in the second memory 32 and completes the steps of the above method in combination with its hardware.
  • the processing unit can be implemented in one or more application specific integrated circuits (Application Specific Integrated Circuits, ASIC), digital signal processors (Digital Signal Processing, DSP), digital signal processing devices (DSP Device, DSPD), programmable logic devices (Programmable Logic Device, PLD), field programmable gate arrays (Field-Programmable Gate Array, FPGA), general processors, controllers, microcontrollers, microprocessors, other electronic units for performing the functions described in this application or a combination thereof.
  • the technology described in this application can be implemented by a module (such as a process, function, etc.) that performs the functions described in this application.
  • the software code can be stored in a memory and executed by a processor.
  • the memory can be implemented in the processor or outside the processor.
  • the embodiment of the present application provides an encoder and a decoder.
  • in this way, the correlation between the already encoded/decoded attribute coefficients and the related parameters can be fully utilized to adaptively select different contexts for encoding and decoding; a variety of adaptive context modes can thus be introduced, the encoding and decoding of attribute information is no longer limited to a fixed context, and the encoding and decoding performance of point cloud attributes is improved.
  • the embodiment of the present application also provides a code stream, which is generated by bit encoding based on the information to be encoded; wherein the information to be encoded includes at least: adaptive context identification information of the current point, geometric information of the current point, and zero run value corresponding to the current point.
  • the embodiment of the present application provides a point cloud encoding and decoding method, an encoder, a decoder, a bitstream and a storage medium, wherein the decoder determines an index value; determines a decoding coefficient of a current point according to the context indicated by the index value; and determines an attribute coefficient of the current point according to the decoding coefficient.
  • the encoder determines an index value; determines an encoding coefficient of the current point according to the context indicated by the index value; and determines an attribute coefficient of the current point according to the encoding coefficient.
  • in this way, the correlation between the attribute coefficients that have already been encoded/decoded and the related parameters can be fully utilized to adaptively select different contexts for encoding and decoding; a variety of adaptive context modes can thus be introduced, the encoding and decoding of attribute information is no longer limited to a fixed context, and the encoding and decoding performance of point cloud attributes is improved, as illustrated by the sketch below.
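To make the adaptive-context flow summarized above easier to follow, the short Python sketch below walks through the decoder side: derive an index value from already decoded attribute coefficients, pick the context indicated by that index value, entropy-decode the coefficient with that context, and take the attribute coefficient from the decoded coefficient. This is only an illustrative sketch, not the method of the embodiments: the rule for forming the index value (counting recent non-zero coefficients), the StubEntropyDecoder class, all function names, and the identity mapping from decoded coefficient to attribute coefficient are assumptions introduced here for illustration.

```python
# Illustrative sketch only: the index-value rule, the stub decoder and all
# names below are assumptions, not taken from the embodiments of the patent.

class StubEntropyDecoder:
    """Stand-in for an arithmetic decoder that returns pre-arranged symbols.

    A real decoder would use the probability model held by the selected
    context; here each context only records how often it was selected.
    """
    def __init__(self, symbols):
        self.symbols = iter(symbols)

    def decode_symbol(self, context):
        context["uses"] += 1
        return next(self.symbols)


def select_context_index(previous_coefficients, max_index=3):
    # Index value = number of recent non-zero decoded coefficients, clamped.
    # One possible way to exploit the correlation with already decoded data.
    non_zero = sum(1 for c in previous_coefficients[-max_index:] if c != 0)
    return min(non_zero, max_index)


def decode_attribute_coefficients(decoder, num_points):
    contexts = [{"id": i, "uses": 0} for i in range(4)]  # one context per index value
    previous, attributes = [], []
    for _ in range(num_points):
        index_value = select_context_index(previous)   # 1. determine an index value
        context = contexts[index_value]                 # 2. context indicated by the index value
        coefficient = decoder.decode_symbol(context)    # 3. decode the coefficient with that context
        attributes.append(coefficient)                  # 4. attribute coefficient (identity placeholder)
        previous.append(coefficient)
    return attributes


if __name__ == "__main__":
    decoder = StubEntropyDecoder([0, 5, 0, 3, 7])
    print(decode_attribute_coefficients(decoder, 5))  # -> [0, 5, 0, 3, 7]
```

In a real codec the stub would be replaced by the entropy decoder operating on the attribute bitstream, each context would carry a probability model updated as decoding proceeds, and the mapping from decoded coefficient to attribute coefficient would follow the reconstruction steps of the codec rather than the identity used above.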

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

Embodiments of the present application relate to a point cloud encoding method and a point cloud decoding method. A decoder determines an index value, determines a decoding coefficient of the current point according to a context indicated by the index value, and determines an attribute coefficient of the current point according to the decoding coefficient. An encoder determines an index value, determines an encoding coefficient of the current point according to a context indicated by the index value, and determines an attribute coefficient of the current point according to the encoding coefficient.
PCT/CN2023/071279 2022-11-16 2023-01-09 Procédé d'encodage de nuage de points, procédé de décodage de nuage de points, encodeur, décodeur, flux binaire, et support de stockage WO2024103513A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CNPCT/CN2022/132330 2022-11-16
PCT/CN2022/132330 WO2024103304A1 (fr) 2022-11-16 2022-11-16 Procédé d'encodage de nuage de points, procédé de décodage de nuage de points, encodeur, décodeur, flux de code, et support de stockage

Publications (1)

Publication Number Publication Date
WO2024103513A1 true WO2024103513A1 (fr) 2024-05-23

Family

ID=91083458

Family Applications (2)

Application Number Title Priority Date Filing Date
PCT/CN2022/132330 WO2024103304A1 (fr) 2022-11-16 2022-11-16 Procédé d'encodage de nuage de points, procédé de décodage de nuage de points, encodeur, décodeur, flux de code, et support de stockage
PCT/CN2023/071279 WO2024103513A1 (fr) 2022-11-16 2023-01-09 Procédé d'encodage de nuage de points, procédé de décodage de nuage de points, encodeur, décodeur, flux binaire, et support de stockage

Family Applications Before (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/132330 WO2024103304A1 (fr) 2022-11-16 2022-11-16 Procédé d'encodage de nuage de points, procédé de décodage de nuage de points, encodeur, décodeur, flux de code, et support de stockage

Country Status (1)

Country Link
WO (2) WO2024103304A1 (fr)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111095929A (zh) * 2017-09-14 2020-05-01 苹果公司 点云压缩
CN112352431A (zh) * 2019-09-30 2021-02-09 浙江大学 一种数据编码、解码方法、设备及存储介质
US20220005229A1 (en) * 2019-03-21 2022-01-06 SZ DJI Technology Co., Ltd. Point cloud attribute encoding method and device, and point cloud attribute decoding method and devcie
US20220030243A1 (en) * 2019-04-24 2022-01-27 Panasonic Intellectual Property Corporation Of America Encoder, decoder, encoding method, and decoding method
US20220337872A1 (en) * 2021-04-15 2022-10-20 Lg Electronics Inc. Point cloud data transmission device, point cloud data transmission method, point cloud data reception device, and point cloud data reception method

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113473127B (zh) * 2020-03-30 2022-09-23 鹏城实验室 一种点云几何编码方法、解码方法、编码设备及解码设备
CN113497936A (zh) * 2020-04-08 2021-10-12 Oppo广东移动通信有限公司 编码方法、解码方法、编码器、解码器以及存储介质
CN114793484A (zh) * 2020-11-24 2022-07-26 浙江大学 点云编码方法、点云解码方法、装置及存储介质
TW202234894A (zh) * 2020-12-29 2022-09-01 美商高通公司 用於幾何形狀譯碼的用於幀間和幀內預測的混合樹譯碼

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111095929A (zh) * 2017-09-14 2020-05-01 苹果公司 点云压缩
US20220005229A1 (en) * 2019-03-21 2022-01-06 SZ DJI Technology Co., Ltd. Point cloud attribute encoding method and device, and point cloud attribute decoding method and devcie
US20220030243A1 (en) * 2019-04-24 2022-01-27 Panasonic Intellectual Property Corporation Of America Encoder, decoder, encoding method, and decoding method
CN112352431A (zh) * 2019-09-30 2021-02-09 浙江大学 一种数据编码、解码方法、设备及存储介质
US20220337872A1 (en) * 2021-04-15 2022-10-20 Lg Electronics Inc. Point cloud data transmission device, point cloud data transmission method, point cloud data reception device, and point cloud data reception method

Also Published As

Publication number Publication date
WO2024103304A1 (fr) 2024-05-23

Similar Documents

Publication Publication Date Title
JP2022528528A (ja) 点群コーディングのための方法、装置、およびコンピュータプログラム
CN114930858A (zh) 用于基于几何的点云压缩的高级语法
WO2023130333A1 (fr) Procédé de codage et de décodage, codeur, décodeur, et support de stockage
TW202404359A (zh) 編解碼方法、編碼器、解碼器以及可讀儲存媒介
EP4258671A1 (fr) Procédé de prédiction d'attribut de nuage de points, codeur, décodeur et support d'enregistrement
US20230377208A1 (en) Geometry coordinate scaling for ai-based dynamic point cloud coding
JP2024505798A (ja) 点群符号化・復号化方法及びシステム、点群符号器並びに点群復号器
WO2024103513A1 (fr) Procédé d'encodage de nuage de points, procédé de décodage de nuage de points, encodeur, décodeur, flux binaire, et support de stockage
WO2022141461A1 (fr) Procédé de codage et de décodage de nuage de points, codeur, décodeur et support de stockage informatique
WO2024082152A1 (fr) Procédés et appareils de codage et de décodage, codeur et décodeur, flux de code, dispositif et support de stockage
WO2023240662A1 (fr) Procédé de codage, procédé de décodage, codeur, décodeur, et support de stockage
WO2024119420A1 (fr) Procédé de codage, procédé de décodage, flux de code, codeur, décodeur, et support de stockage
WO2024119419A1 (fr) Procédé de codage, procédé de décodage, flux binaire, codeur, décodeur et support de stockage
WO2022170511A1 (fr) Procédé de décodage de nuage de points, décodeur et support d'enregistrement informatique
WO2024065406A1 (fr) Procédés de codage et de décodage, train de bits, codeur, décodeur et support de stockage
WO2023240660A1 (fr) Procédé de décodage, procédé de codage, décodeur et codeur
WO2024007144A1 (fr) Procédé de codage, procédé de décodage, flux de code, codeurs, décodeurs et support de stockage
WO2024119518A1 (fr) Procédé de codage, procédé de décodage, décodeur, codeur, flux de code et support de stockage
WO2024011472A1 (fr) Procédés de codage et de décodage de nuage de points, codeur et décodeur, et support de stockage informatique
WO2024065408A1 (fr) Procédé de codage, procédé de décodage, flux de code, codeur, décodeur et support de stockage
WO2024065272A1 (fr) Procédé et appareil de codage de nuage de points, procédé et appareil de décodage de nuage de points, dispositif, et support de stockage
WO2023024842A1 (fr) Procédé, appareil et dispositif de codage/décodage de nuage de points, et support de stockage
WO2024082127A1 (fr) Procédé de codage, procédé de décodage, flux de code, codeur, décodeur et support de stockage
WO2023197338A1 (fr) Procédé et appareil de détermination d'indice, décodeur et codeur
RU2778377C1 (ru) Способ и устройство для кодирования облака точек