WO2024103304A1 - Point cloud encoding and decoding method, encoder, decoder, bitstream and storage medium - Google Patents

Point cloud encoding and decoding method, encoder, decoder, bitstream and storage medium

Info

Publication number
WO2024103304A1
Authority
WO
WIPO (PCT)
Prior art keywords
index value
value
coefficient
attribute
current point
Prior art date
Application number
PCT/CN2022/132330
Other languages
English (en)
French (fr)
Inventor
马闯
Original Assignee
Oppo广东移动通信有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Oppo广东移动通信有限公司 filed Critical Oppo广东移动通信有限公司
Priority to PCT/CN2022/132330 priority Critical patent/WO2024103304A1/zh
Priority to PCT/CN2023/071279 priority patent/WO2024103513A1/zh
Publication of WO2024103304A1 publication Critical patent/WO2024103304A1/zh

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/102 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/13 Adaptive entropy coding, e.g. adaptive variable length coding [AVLC] or context adaptive binary arithmetic coding [CABAC]

Definitions

  • the embodiments of the present application relate to the field of point cloud compression technology, and in particular to a point cloud encoding and decoding method, encoder, decoder, bit stream and storage medium.
  • the geometry information and attribute information of the point cloud are encoded separately. After the geometry encoding is completed, the geometry information is reconstructed, and the encoding of the attribute information will depend on the reconstructed geometry information.
  • the attribute information encoding is mainly aimed at the encoding of color information, which is converted into a YUV color space that is more in line with the visual characteristics of the human eye, and then the attribute information after preprocessing is attribute encoded, and finally a binary attribute code stream is generated.
  • the embodiments of the present application provide a point cloud encoding and decoding method, encoder, decoder, bit stream and storage medium, which can improve the encoding and decoding performance of point cloud attributes.
  • an embodiment of the present application provides a point cloud decoding method, which is applied to a decoder, and the method includes:
  • determining an index value; determining a decoding coefficient of the current point according to the context indicated by the index value; and determining the attribute coefficient of the current point according to the decoding coefficient.
  • an embodiment of the present application provides a point cloud encoding method, which is applied to an encoder, and the method includes:
  • determining an index value; determining a coding coefficient of the current point according to the context indicated by the index value; and determining the attribute coefficient of the current point according to the coding coefficient.
  • an encoder comprising a first determining unit, wherein:
  • the first determination unit is configured to determine an index value; determine a coding coefficient of a current point according to a context indicated by the index value; and determine an attribute coefficient of the current point according to the coding coefficient.
  • an embodiment of the present application provides an encoder, the encoder comprising a first memory and a first processor; wherein,
  • the first memory is used to store a computer program that can be run on the first processor
  • the first processor is used to execute the point cloud encoding method as described above when running the computer program.
  • an embodiment of the present application provides a decoder, wherein the decoder includes a second determining unit, wherein:
  • the second determination unit is configured to determine an index value; determine a decoding coefficient of a current point according to a context indicated by the index value; and determine an attribute coefficient of the current point according to the decoding coefficient.
  • an embodiment of the present application provides a decoder, the decoder comprising a second memory and a second processor; wherein:
  • the second memory is used to store a computer program that can be run on the second processor
  • the second processor is used to execute the point cloud decoding method as described above when running the computer program.
  • an embodiment of the present application provides a code stream, which is generated by bit encoding based on information to be encoded; wherein the information to be encoded includes at least: adaptive context identification information of the current point, geometric information of the current point, and a zero-run value corresponding to the current point.
  • an embodiment of the present application provides a computer storage medium, wherein the computer storage medium stores a computer program, and when the computer program is executed by a first processor, it implements the point cloud encoding method as described above, or, when the computer program is executed by a second processor, it implements the point cloud decoding method as described above.
  • the embodiment of the present application provides a point cloud encoding and decoding method, an encoder, a decoder, a bitstream and a storage medium, wherein the decoder determines an index value; determines a decoding coefficient of a current point according to the context indicated by the index value; and determines an attribute coefficient of the current point according to the decoding coefficient.
  • the encoder determines an index value; determines a coding coefficient of a current point according to the context indicated by the index value; and determines an attribute coefficient of the current point according to the coding coefficient.
  • the correlation between the attribute coefficients that have been encoded/decoded and the related parameters can be fully utilized to adaptively select different contexts for encoding and decoding, thereby being able to introduce a variety of different adaptive context modes, and no longer being limited to using a fixed context for encoding and decoding of attribute information, thereby improving the encoding and decoding performance of point cloud attributes.
  • FIG1A is a schematic diagram of a three-dimensional point cloud image provided in an embodiment of the present application.
  • FIG1B is a partially enlarged schematic diagram of a three-dimensional point cloud image provided in an embodiment of the present application.
  • FIG2A is a schematic diagram of a point cloud image at different viewing angles provided in an embodiment of the present application.
  • FIG2B is a schematic diagram of a data storage format corresponding to FIG2A provided in an embodiment of the present application.
  • FIG3 is a schematic diagram of a network architecture of point cloud encoding and decoding provided in an embodiment of the present application
  • FIG4 is a schematic diagram of the structure of a point cloud encoder provided in an embodiment of the present application.
  • FIG5 is a schematic diagram of the structure of a point cloud decoder provided in an embodiment of the present application.
  • FIG6 shows a schematic diagram of a composition framework of a point cloud encoder
  • FIG7 shows a schematic diagram of a composition framework of a point cloud decoder
  • FIG8 is a schematic diagram of an implementation flow of a point cloud decoding method proposed in an embodiment of the present application.
  • FIG9 is a schematic diagram of an implementation flow of a point cloud encoding method proposed in an embodiment of the present application.
  • FIG10 is a first schematic diagram of the structure of the encoder;
  • FIG11 is a second schematic diagram of the structure of the encoder;
  • FIG12 is a first schematic diagram of the structure of the decoder;
  • FIG13 is a second schematic diagram of the structure of the decoder.
  • “first/second/third” involved in the embodiments of the present application are only used to distinguish similar objects and do not represent a specific ordering of the objects. It can be understood that “first/second/third” can be interchanged in a specific order or sequence where permitted, so that the embodiments of the present application described here can be implemented in an order other than that illustrated or described here.
  • Point Cloud is a three-dimensional representation of the surface of an object.
  • Point cloud (data) on the surface of an object can be collected through acquisition equipment such as photoelectric radar, lidar, laser scanner, and multi-view camera.
  • a point cloud is a set of discrete points that are irregularly distributed in space and express the spatial structure and surface properties of a three-dimensional object or scene.
  • FIG1A shows a three-dimensional point cloud image
  • FIG1B shows a partial magnified view of the three-dimensional point cloud image. It can be seen that the point cloud surface is composed of densely distributed points.
  • Two-dimensional images have information expression at each pixel point, and the distribution is regular, so there is no need to record its position information additionally; however, the distribution of points in point clouds in three-dimensional space is random and irregular, so it is necessary to record the position of each point in space in order to fully express a point cloud.
  • In the image acquisition process, each position has corresponding attribute information, usually an RGB color value, which reflects the color of the object; for point clouds, in addition to color information, the attribute information corresponding to each point also commonly includes a reflectance value, which reflects the surface material of the object. Therefore, the points in the point cloud can include the position information of the point and the attribute information of the point.
  • the position information of the point can be the three-dimensional coordinate information (x, y, z) of the point.
  • the position information of the point can also be called the geometric information of the point.
  • the attribute information of the point can include color information (three-dimensional color information) and/or reflectance (one-dimensional reflectance information r), etc.
  • the color information can be information on any color space.
  • the color information can be RGB information. Among them, R represents red (Red, R), G represents green (Green, G), and B represents blue (Blue, B).
  • the color information may be luminance and chrominance (YCbCr, YUV) information, where Y represents brightness (Luma), Cb (U) represents blue color difference, and Cr (V) represents red color difference.
  • the points in the point cloud may include the three-dimensional coordinate information of the points and the reflectivity value of the points.
  • the points in the point cloud may include the three-dimensional coordinate information of the points and the three-dimensional color information of the points.
  • a point cloud obtained by combining the principles of laser measurement and photogrammetry may include the three-dimensional coordinate information of the points, the reflectivity value of the points and the three-dimensional color information of the points.
  • In Figure 2A and Figure 2B, a point cloud image and its corresponding data storage format are shown.
  • Figure 2A provides six viewing angles of the point cloud image
  • Figure 2B consists of a file header information part and a data part.
  • the header information includes the data format, data representation type, the total number of point cloud points, and the content represented by the point cloud.
  • the point cloud is in the ".ply" format, represented by ASCII code, with a total number of 207242 points, and each point has three-dimensional coordinate information (x, y, z) and three-dimensional color information (r, g, b).
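  • For reference, a file header of the kind described above would look roughly as follows (an illustrative .ply header consistent with the description; the exact property names and ordering in the actual test file are not reproduced here):

```
ply
format ascii 1.0
element vertex 207242
property float x
property float y
property float z
property uchar red
property uchar green
property uchar blue
end_header
```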
  • Point clouds can be divided into the following categories according to the way they are obtained:
  • The first category, static point cloud: the object is stationary, and the device that obtains the point cloud is also stationary;
  • The second category, dynamic point cloud: the object is moving, but the device that obtains the point cloud is stationary;
  • The third category, dynamically acquired point cloud: the device used to acquire the point cloud is in motion.
  • point clouds can be divided into two categories according to their usage:
  • Category 1: machine perception point cloud, which can be used in autonomous navigation systems, real-time inspection systems, geographic information systems, visual sorting robots, emergency rescue robots, etc.
  • Category 2: point cloud perceived by the human eye, which can be used in point cloud application scenarios such as digital cultural heritage, free viewpoint broadcasting, 3D immersive communication, and 3D immersive interaction.
  • Point clouds can flexibly and conveniently express the spatial structure and surface properties of three-dimensional objects or scenes. Point clouds are obtained by directly sampling real objects, so they can provide a strong sense of reality while ensuring accuracy. Therefore, they are widely used, including virtual reality games, computer-aided design, geographic information systems, automatic navigation systems, digital cultural heritage, free viewpoint broadcasting, three-dimensional immersive remote presentation, and three-dimensional reconstruction of biological tissues and organs.
  • Point clouds can be collected mainly through the following methods: computer generation, 3D laser scanning, 3D photogrammetry, etc.
  • Computers can generate point clouds of virtual three-dimensional objects and scenes; 3D laser scanning can obtain point clouds of static real-world three-dimensional objects or scenes, and can obtain millions of point clouds per second; 3D photogrammetry can obtain point clouds of dynamic real-world three-dimensional objects or scenes, and can obtain tens of millions of point clouds per second.
  • For example, the number of points in each point cloud frame may be 700,000, and each point has coordinate information xyz (float) and color information RGB (uchar).
  • Since a point cloud is a collection of a massive number of points, storing the point cloud not only consumes a large amount of memory but is also inconvenient for transmission; moreover, there is not enough bandwidth at the network layer to support direct transmission of the point cloud without compression. Therefore, the point cloud needs to be compressed.
  • the point cloud coding framework that can compress point clouds can be the geometry-based point cloud compression (G-PCC) codec framework or the video-based point cloud compression (V-PCC) codec framework provided by the Moving Picture Experts Group (MPEG), or the AVS-PCC codec framework provided by AVS.
  • the G-PCC codec framework can be used to compress the first type of static point clouds and the third type of dynamically acquired point clouds
  • the V-PCC codec framework can be used to compress the second type of dynamic point clouds.
  • FIG3 is a schematic diagram of a network architecture of a point cloud encoding and decoding provided by the embodiment of the present application.
  • the network architecture includes one or more electronic devices 13 to 1N and a communication network 01, wherein the electronic devices 13 to 1N can perform video interaction through the communication network 01.
  • the electronic device can be various types of devices with point cloud encoding and decoding functions.
  • the electronic device can include a mobile phone, a tablet computer, a personal computer, a personal digital assistant, a navigator, a digital phone, a video phone, a television, a sensor device, a server, etc., which is not limited by the embodiment of the present application.
  • the decoder or encoder in the embodiment of the present application can be the above-mentioned electronic device.
  • the electronic device in the embodiment of the present application has a point cloud encoding and decoding function, generally including a point cloud encoder (i.e., encoder) and a point cloud decoder (i.e., decoder).
  • point cloud compression generally adopts the method of compressing point cloud geometry information and attribute information separately.
  • the point cloud geometry information is first encoded in the geometry encoder, and then the reconstructed geometry information is input into the attribute encoder as additional information to assist in the compression of point cloud attributes;
  • the point cloud geometry information is first decoded in the geometry decoder, and then the decoded geometry information is input into the attribute decoder as additional information to assist in the decompression of point cloud attributes.
  • the entire codec consists of pre-processing/post-processing, geometry encoding/decoding, and attribute encoding/decoding.
  • the embodiment of the present application provides a point cloud encoder, as shown in FIG4, which is a reference framework for point cloud compression.
  • the point cloud encoder 11 includes a geometry encoder: a coordinate translation unit 111, a coordinate quantization unit 112, an octree construction unit 113, a geometry entropy encoder 114, and a geometry reconstruction unit 115;
  • and an attribute encoder: an attribute recoloring unit 116, a color space conversion unit 117, a first attribute prediction unit 118, a quantization unit 119, and an attribute entropy encoder 1110.
  • the original geometric information is first preprocessed, and the geometric origin is normalized to the minimum position in the point cloud space through the coordinate translation unit 111.
  • the geometric information is converted from floating point numbers to integers through the coordinate quantization unit 112 to facilitate subsequent regularization processing; then the regularized geometric information is geometrically encoded, and the point cloud space is recursively divided using an octree structure in the octree construction unit 113.
  • the current node is divided into eight sub-blocks of the same size, and the occupancy codeword of each sub-block is judged. When the sub-block does not contain a point, it is recorded as empty, otherwise it is recorded as non-empty.
  • the occupancy codeword information of all blocks is recorded in the last layer of the recursive division, and geometric encoding is performed; the geometric information expressed by the octree structure is input into the geometric entropy encoder 114 to form a geometric code stream on the one hand, and geometric reconstruction processing is performed in the geometric reconstruction unit 115 on the other hand, and the reconstructed geometric information is input into the attribute encoder as additional information.
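  • To make the octree occupancy coding above concrete, the following is a minimal Python sketch (not the reference software; the function name, the power-of-two cube size and the bit ordering of the child index are assumptions for illustration) that recursively splits integer voxel coordinates into eight sub-blocks and records one 8-bit occupancy codeword per non-leaf node:

```python
def octree_occupancy(points, origin, size):
    """Recursively split a cube of side `size` (a power of two) starting at
    `origin` into 8 children and emit one 8-bit occupancy codeword per node.
    `points` is a list of integer (x, y, z) voxel coordinates inside the cube."""
    codewords = []
    if size == 1 or not points:
        return codewords
    half = size // 2
    children = [[] for _ in range(8)]
    for x, y, z in points:
        # Child index: one bit per axis, 1 if the point lies in the upper half.
        idx = ((int(x - origin[0] >= half) << 2)
               | (int(y - origin[1] >= half) << 1)
               | int(z - origin[2] >= half))
        children[idx].append((x, y, z))
    # Occupancy codeword: bit i is 1 if child i is non-empty, 0 if it is empty.
    occupancy = 0
    for i, child in enumerate(children):
        if child:
            occupancy |= 1 << i
    codewords.append(occupancy)
    # Recurse into non-empty children until the leaf (unit voxel) level.
    for i, child in enumerate(children):
        if child:
            child_origin = (origin[0] + half * ((i >> 2) & 1),
                            origin[1] + half * ((i >> 1) & 1),
                            origin[2] + half * (i & 1))
            codewords.extend(octree_occupancy(child, child_origin, half))
    return codewords
```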
  • the original attribute information is first preprocessed. Since the geometric information changes after geometric encoding, the attribute value is reallocated to each point after geometric encoding through the attribute recoloring unit 116 to achieve attribute recoloring.
  • the processed attribute information is color information
  • the original color information needs to be transformed into a YUV color space that is more in line with the visual characteristics of the human eye through the color space conversion unit 117; then the preprocessed attribute information is attribute encoded through the first attribute prediction unit 118.
  • Attribute encoding first requires the point cloud to be reordered, and the reordering method is Morton code, so the traversal order of attribute encoding is Morton order.
  • the attribute prediction method is a single-point prediction based on the Morton order, that is, according to the Morton order, trace back one point from the current point to be encoded (current node), and the node found is the prediction reference point of the current point to be encoded, and then the attribute reconstruction value of the prediction reference point is used as the attribute prediction value, and the attribute residual value is the difference between the attribute original value and the attribute prediction value of the current point to be encoded; finally, the attribute residual value is quantized by the quantization unit 119, and the quantized residual information is input into the attribute entropy encoder 1110 to form an attribute code stream.
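  • The Morton reordering and single-point prediction described above can be sketched as follows (a simplified Python illustration, assuming 16-bit coordinates, a scalar attribute and a uniform quantization step qstep; the function names are not those of the reference implementation):

```python
def morton_code(x, y, z, bits=16):
    """Interleave the bits of (x, y, z) into a single Morton code."""
    code = 0
    for i in range(bits):
        code |= ((x >> i) & 1) << (3 * i + 2)
        code |= ((y >> i) & 1) << (3 * i + 1)
        code |= ((z >> i) & 1) << (3 * i)
    return code

def encode_attributes(points, attrs, qstep):
    """Single-point prediction along the Morton order: the attribute
    reconstruction of the previous point predicts the current point."""
    order = sorted(range(len(points)), key=lambda i: morton_code(*points[i]))
    prev_recon = 0                      # prediction used for the first point
    quantized = []
    for i in order:
        pred = prev_recon
        residual = attrs[i] - pred      # attribute residual value
        q = round(residual / qstep)     # quantization of the residual
        quantized.append(q)
        prev_recon = pred + q * qstep   # reconstruction used for the next point
    return order, quantized
```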
  • FIG5 is a schematic diagram of the structure of a point cloud decoder provided by the present application, which shows a reference framework for point cloud compression.
  • the point cloud decoder 12 includes a geometric decoder: a geometric entropy decoder 121, an octree reconstruction unit 122, a coordinate inverse quantization unit 123, and a coordinate inverse translation unit 124;
  • and an attribute decoder: an attribute entropy decoder 125, an inverse quantization unit 126, a second attribute prediction unit 127, and a color space inverse transformation unit 128.
  • the geometry bitstream is first entropy decoded by the geometry entropy decoder 121 to obtain the geometry information of each node, and then the octree structure is constructed by the octree reconstruction unit 122 in the same way as in geometry encoding, so that the geometry information expressed by the octree structure is reconstructed in combination with the decoded geometry information.
  • the reconstructed geometry information is then inverse quantized by the coordinate inverse quantization unit 123 and inverse translated by the coordinate inverse translation unit 124 to obtain the decoded geometry information, which is input into the attribute decoder as additional information.
  • the Morton order is constructed in the same way as the encoding end.
  • the attribute code stream is first entropy decoded by the attribute entropy decoder 125 to obtain the quantized residual information; then, the inverse quantization unit 126 performs inverse quantization to obtain the attribute residual value; similarly, in the same way as the attribute encoding, the attribute prediction value of the current point to be decoded is obtained by the second attribute prediction unit 127, and then the attribute prediction value is added to the attribute residual value to restore the attribute reconstruction value (for example, YUV attribute value) of the current point to be decoded; finally, the decoded attribute information is obtained by color space inverse transformation by the color space inverse transformation unit 128.
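  • The mirrored decoder-side attribute steps (entropy decoding and the color space inverse transformation are omitted; the single-predecessor prediction and the quantization step match the encoder sketch above and are likewise assumptions for illustration) would then be:

```python
def decode_attributes(quantized, qstep):
    """Inverse of the encoder sketch above: dequantize each residual and add it
    to the prediction (the reconstruction of the previous point in Morton order)."""
    recon = []
    prev_recon = 0
    for q in quantized:
        pred = prev_recon
        residual = q * qstep            # inverse quantization
        value = pred + residual         # attribute reconstruction value
        recon.append(value)
        prev_recon = value
    return recon
```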
  • Regarding test conditions: there are 4 general test conditions, which can include:
  • Condition 1: the geometric position is limitedly lossy and the attributes are lossy;
  • Condition 2: the geometric position is lossless and the attributes are lossy;
  • Condition 3: the geometric position is lossless and the attributes are limitedly lossy;
  • Condition 4: the geometric position and the attributes are both lossless.
  • the points in the point cloud are processed in a certain order (the original acquisition order of the point cloud, the Morton order, the Hilbert order, etc.), and the prediction algorithm is first used to obtain the attribute prediction value, and the attribute residual is obtained according to the attribute value and the attribute prediction value. Then, the attribute residual is quantized to generate a quantized residual, and finally the quantized residual is encoded;
  • the points in the point cloud are processed in a certain order (the original acquisition order of the point cloud, Morton order, Hilbert order, etc.).
  • the prediction algorithm is first used to obtain the attribute prediction value, and then the decoding is performed to obtain the quantized residual.
  • the quantized residual is then dequantized, and finally the attribute reconstruction value is obtained based on the attribute prediction value and the dequantized residual.
  • the points in the point cloud are processed in a certain order (the original acquisition order of the point cloud, the Morton order, the Hilbert order, etc.), and the entire point cloud is first divided into several small groups with a maximum length of Y (such as 2), and then these small groups are combined into several large groups (the number of points in each large group does not exceed X, such as 4096), and then the prediction algorithm is used to obtain the attribute prediction value, and the attribute residual is obtained according to the attribute value and the attribute prediction value.
  • the attribute residual is transformed by DCT in small groups to generate transformation coefficients, and then the transformation coefficients are quantized to generate quantized transformation coefficients, and finally the quantized transformation coefficients are encoded in large groups;
  • the points in the point cloud are processed in a certain order (the original acquisition order of the point cloud, Morton order, Hilbert order, etc.).
  • the entire point cloud is divided into several small groups with a maximum length of Y (such as 2), and then these small groups are combined into several large groups (the number of points in each large group does not exceed X, such as 4096).
  • the quantized transform coefficients are decoded in large groups, and then the prediction algorithm is used to obtain the attribute prediction value.
  • the quantized transform coefficients are dequantized and inversely transformed in small groups.
  • the attribute reconstruction value is obtained based on the attribute prediction value and the dequantized and inversely transformed coefficients.
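  • For the grouped transform described above, a maximum group length of Y = 2 reduces the DCT to a two-point butterfly. The sketch below is an illustrative approximation (the normalization and the handling of a trailing single point may differ from the actual codec):

```python
import math

def transform_groups(residuals, qstep, group_len=2):
    """DCT-transform attribute residuals in small groups (here length 2) and
    quantize the resulting coefficients. A trailing group of length 1 is passed
    through unchanged before quantization."""
    coeffs = []
    for start in range(0, len(residuals), group_len):
        group = residuals[start:start + group_len]
        if len(group) == 2:
            a, b = group
            # 2-point DCT-II (orthonormal): DC and AC coefficients.
            dc = (a + b) / math.sqrt(2)
            ac = (a - b) / math.sqrt(2)
            transformed = [dc, ac]
        else:
            transformed = group            # ungrouped leftover point
        coeffs.extend(round(c / qstep) for c in transformed)
    return coeffs
```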
  • the points in the point cloud are processed in a certain order (the original acquisition order of the point cloud, the Morton order, the Hilbert order, etc.).
  • the entire point cloud is divided into several small groups with a maximum length of Y (such as 2).
  • the prediction algorithm is used to obtain the attribute prediction value.
  • the attribute residual is obtained according to the attribute value and the attribute prediction value.
  • the attribute residual is transformed by DCT in small groups to generate transformation coefficients.
  • the transformation coefficients are quantized to generate quantized transformation coefficients.
  • the quantized transformation coefficients of the entire point cloud are encoded.
  • the points in the point cloud are processed in a certain order (the original acquisition order of the point cloud, Morton order, Hilbert order, etc.).
  • the entire point cloud is divided into several small groups with a maximum length of Y (such as 2), and the quantized transformation coefficients of the entire point cloud are obtained by decoding.
  • the prediction algorithm is used to obtain the attribute prediction value, and then the quantized transformation coefficients are dequantized and inversely transformed in groups.
  • the attribute reconstruction value is obtained based on the attribute prediction value and the dequantized and inversely transformed coefficients.
  • the entire point cloud is subjected to multi-layer wavelet transform to generate transform coefficients, which are then quantized to generate quantized transform coefficients, and finally the quantized transform coefficients of the entire point cloud are encoded;
  • decoding obtains the quantized transform coefficients of the entire point cloud, and then dequantizes and inversely transforms the quantized transform coefficients to obtain attribute reconstruction values.
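  • As a rough illustration of the wavelet branch, the sketch below applies a single-level Haar transform and its inverse; this stands in for the multi-layer wavelet actually used and is an assumption, not the codec's filter bank:

```python
import math

def haar_level(values):
    """One level of the Haar wavelet: split into low-pass (scaled sums) and
    high-pass (scaled differences) subbands. For an odd-length input the last
    sample is kept unchanged in the low-pass band."""
    low, high = [], []
    for i in range(0, len(values) - 1, 2):
        a, b = values[i], values[i + 1]
        low.append((a + b) / math.sqrt(2))
        high.append((a - b) / math.sqrt(2))
    if len(values) % 2:
        low.append(values[-1])
    return low, high

def haar_inverse(low, high, odd_tail):
    """Invert one Haar level to recover the original samples."""
    values = []
    for l, h in zip(low, high):
        values.append((l + h) / math.sqrt(2))
        values.append((l - h) / math.sqrt(2))
    if odd_tail:
        values.append(low[-1])
    return values
```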
  • in the above embodiment 1, the coefficients may be quantized residuals, and in the above embodiments 2, 3, and 4, the coefficients may be quantized transform coefficients.
  • the geometric information of the point cloud and the attribute information corresponding to each point are encoded separately.
  • the current reference attribute encoding framework can be divided into Pred branch-based, PredLift branch-based, and RAHT branch-based.
  • FIG6 shows a schematic diagram of the composition framework of a point cloud encoder.
  • the geometric information is transformed so that all the point clouds are contained in a bounding box (Bounding Box), and then quantized.
  • This quantization step mainly plays a role in scaling. Due to quantization rounding, the geometric information of some points becomes identical, so whether to remove duplicate points is determined based on parameters.
  • the process of quantization and removal of duplicate points is also called voxelization.
  • the Bounding Box is divided into octrees or a prediction tree is constructed.
  • arithmetic coding is performed on the points in the divided leaf nodes to generate a binary geometric bit stream; or, arithmetic coding is performed on the intersection points (Vertex) generated by the division (surface fitting is performed based on the intersection points) to generate a binary geometric bit stream.
  • color conversion is required first to convert the color information (i.e., attribute information) from the RGB color space to the YUV color space. Then, the point cloud is recolored using the reconstructed geometric information so that the unencoded attribute information corresponds to the reconstructed geometric information. Attribute encoding is mainly performed on color information.
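  • The color conversion step maps RGB to a luma/chroma space. One commonly used form is the BT.601 YCbCr matrix shown below; whether the codec uses exactly this matrix or an integer-reversible variant is not stated here, so the coefficients are an illustrative assumption:

```python
def rgb_to_ycbcr(r, g, b):
    """BT.601 full-range RGB -> YCbCr conversion (floating-point sketch)."""
    y  =  0.299 * r + 0.587 * g + 0.114 * b
    cb = -0.168736 * r - 0.331264 * g + 0.5 * b + 128
    cr =  0.5 * r - 0.418688 * g - 0.081312 * b + 128
    return y, cb, cr
```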
  • FIG7 shows a schematic diagram of the composition framework of a point cloud decoder.
  • the geometric bit stream and the attribute bit stream in the binary bit stream are first decoded independently.
  • the geometric information of the point cloud is obtained through arithmetic decoding, octree/prediction tree reconstruction, geometry reconstruction and inverse coordinate conversion;
  • the attribute information of the point cloud is obtained through arithmetic decoding, inverse quantization, LOD partitioning/RAHT and inverse color conversion, and the point cloud data (i.e., the output point cloud) is restored based on the geometric information and attribute information.
  • the current point cloud geometry encoding and decoding can be divided into octree-based geometry encoding and decoding (marked with a dotted box) and prediction tree-based geometry encoding and decoding (marked with a dotted box).
  • Regarding test conditions: there are 4 general test conditions, which can include:
  • Condition 1: the geometric position is lossless, but the attributes are lossy;
  • the attribute residual coefficients are obtained by using the prediction method of Pred, and entropy coding is performed on the attribute residual coefficients;
  • entropy decoding obtains the attribute residual coefficients, and the original values are restored using the Pred prediction method.
  • the attribute transformation coefficients are obtained by using the Predlift method, and entropy coding is performed on the attribute transformation coefficients;
  • entropy decoding obtains the attribute transformation coefficients, and Predlift's transformation method is used to restore the original values.
  • the attribute transformation coefficients are obtained by using the RAHT method, and entropy coding is performed on the attribute transformation coefficients;
  • entropy decoding is used to obtain the attribute transformation coefficients, and the RAHT method is used to restore the original values.
  • attribute entropy coding and decoding are performed.
  • an embodiment of the present application provides a point cloud encoding and decoding method.
  • the correlation between the already encoded/decoded attribute coefficients and related parameters can be fully utilized to adaptively select different contexts for encoding and decoding, thereby being able to introduce a variety of different adaptive context modes, and no longer limited to using fixed contexts for encoding and decoding of attribute information, thereby improving the encoding and decoding performance of point cloud attributes.
  • FIG8 is a schematic diagram of an implementation flow of the point cloud decoding method proposed in the embodiment of the present application. As shown in FIG8 , the following steps may be included when decoding the point cloud:
  • Step 101: Determine the index value.
  • the index value may be determined first.
  • the decoding method of the embodiment of the present application specifically refers to a point cloud decoding method, which can be applied to a point cloud decoder (also referred to as a "decoder" for short).
  • the point cloud to be processed includes at least one node.
  • For a node in the point cloud to be processed, when decoding the node, it can be regarded as the node to be decoded in the point cloud to be processed, and there are multiple decoded nodes around it.
  • the current node (current point) is the node that currently needs to be decoded among the at least one node.
  • each node in the point cloud to be processed corresponds to geometric information and attribute information; wherein the geometric information represents the spatial relationship of the point, and the attribute information represents the relevant information of the attribute of the point.
  • the attribute information may be color information, or reflectivity or other attributes, which is not specifically limited in the embodiments of the present application.
  • the attribute information may be color information in any color space.
  • the attribute information may be color information in an RGB space, or color information in a YUV space, or color information in a YCbCr space, etc., which is not specifically limited in the embodiments of the present application.
  • the decoder can arrange the at least one node in a preset decoding order so as to determine the index number corresponding to each node. In this way, according to the index number corresponding to each node, the decoder can process each node in the point cloud to be processed in the preset decoding order.
  • the preset decoding order may be one of the following: original order of point cloud, Morton order, Hilbert order, etc., which is not specifically limited in the embodiments of the present application.
  • the index value may be used to determine the context used by the attribute coefficient of the current point. If the attribute information of the current point is color information, different index values may be determined corresponding to different color components of the current point.
  • the index value may include at least one of a first index value, a second index value, and a third index value.
  • the first index value, the second index value, and the third index value may correspond to the three color components of the current point, respectively, that is, the first index value, the second index value, and the third index value may be used to determine the first context, the second context, and the third context used by the attribute coefficients of the three color components of the current point, respectively.
  • the first index value can be used to determine the first context used by the attribute coefficient of the R component of the current point
  • the second index value can be used to determine the second context used by the attribute coefficient of the G component of the current point
  • the third index value can be used to determine the third context used by the attribute coefficient of the B component of the current point.
  • the first index value can be used to determine the first context used by the attribute coefficient of the Y component of the current point
  • the second index value can be used to determine the second context used by the attribute coefficient of the U component of the current point
  • the third index value can be used to determine the third context used by the attribute coefficient of the V component of the current point.
  • At least one of the first index value, the second index value, and the third index value of the current point may be determined first.
  • When decoding the code stream, the adaptive context identification information of the current point can be determined first; if the adaptive context identification information indicates that the attribute coefficient of the current point is determined using the adaptive context, then the determination process of the first index value, and/or the second index value, and/or the third index value can be executed.
  • the adaptive context identification information can be understood as a flag indicating whether the adaptive context is used for the node in the point cloud.
  • the decoder decodes the bitstream and can determine a variable as the adaptive context identification information, so that the adaptive context identification information can be determined by the value of the variable.
  • Different values of the adaptive context identification information correspond to different methods of determining the context used for the attribute coefficient of the current point. Among them, whether to use the adaptive context to determine the attribute coefficients of some or all color components of the current point can be determined according to the adaptive context identification information.
  • If the value of the adaptive context identification information is 1, the adaptive context may be used to determine the attribute coefficient of the current point; if the value of the adaptive context identification information is 0, the adaptive context may not be used to determine the attribute coefficient of the current point.
  • the value of the adaptive context identification information may also be set to other values or parameters, and the present application does not impose any limitation thereto.
  • the index value used to indicate the context can be further determined. That is, after determining that the adaptive context identification information indicates that the adaptive context is used, the index value determination process is executed.
  • the adaptive context identification information determination process may not be performed. That is, it is possible to preset whether to use the adaptive context to determine the attribute coefficient of the current point, and it is also possible to preset whether to use the adaptive context to determine the attribute coefficients of one or more of the color components of the current point. That is, the decision of whether to use the adaptive context for some or all color components can be made independently, without relying on the value of the adaptive context identification information.
  • the adaptive context can be used for the attribute coefficients of some or all color components of the current point, or a pre-set context can be used for the attribute coefficients of some or all color components of the current point.
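  • The effect of the adaptive context identification information can be summarized by the following sketch (the argument names, the context-table layout and the fallback behaviour are illustrative assumptions rather than syntax defined by the codec):

```python
def select_context(adaptive_ctx_flag, index_value, adaptive_contexts, fixed_context):
    """Pick the context model used to entropy-decode the attribute coefficient
    of the current colour component.

    adaptive_ctx_flag : 1 -> use the context indicated by index_value,
                        0 -> fall back to a single fixed context.
    adaptive_contexts : list of context models, addressed by the index value.
    """
    if adaptive_ctx_flag == 1:
        return adaptive_contexts[index_value]
    return fixed_context
```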
  • the code stream can also be decoded to determine the geometric information of the current point and the zero-run value corresponding to the current point.
  • the geometric information of the current point may include the position coordinate information of the current point.
  • the geometric information of the current point may be the spatial coordinate information (x, y, z) corresponding to the current point.
  • the zero run value corresponding to the current point may include the zero run value of the current point, or the previous zero run value of the current point, or the previous non-zero zero run value of the current point.
  • the zero run value run_length can be used to indicate whether the attribute coefficients are 0. For the color attribute, if the zero run value run_length is not 0 (or is greater than 0), it can be determined that the attribute coefficients of all color components of the current point are all 0; if the zero run value run_length is 0, it can be determined that the attribute coefficients of all color components of the current point are not all 0.
  • the zero run value run_length indicates that the attribute coefficients of all color components of the current point are all 0, then there is no need to determine the attribute coefficients, but the zero run value can be first decremented by 1 to update the zero run value, and then the attribute coefficient of the next point is determined according to the zero run value. For the attribute coefficient of the next point, it is continued to be judged whether the attribute coefficients of all color components are all 0 according to the zero run value run_length to determine whether the attribute coefficients of all color components need to be determined.
  • For example, if the zero run value run_length of the current point determined by decoding the code stream is 3, which is greater than 0, it can be determined that the attribute coefficients of all color components of the current point are all 0, so there is no need to decode the attribute coefficients of the current point; the zero run value may first be decremented by 1, that is, the --run_length operation is performed, and then the attribute coefficient of the next point is determined based on the zero run value.
  • For the next point, the corresponding zero run value run_length is 2, which is greater than 0, so it can be determined that the attribute coefficients of all color components of that point are all 0; there is no need to decode the attribute coefficients of all color components of that point, and the zero run value continues to be decremented by 1, that is, the --run_length operation is performed.
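  • The zero-run handling in the example above corresponds to a decoding loop of roughly the following shape (a simplified sketch; decode_run_length and decode_coefficients are hypothetical stand-ins for the actual entropy-decoding calls, and the point at which a new run value is read is an assumption):

```python
def decode_point_coefficients(decode_run_length, decode_coefficients, num_points):
    """Decode per-point colour attribute coefficients driven by zero-run values.

    decode_run_length   : callable returning the next zero-run value (stand-in
                          for the actual entropy-decoding call).
    decode_coefficients : callable returning the coefficients of a point whose
                          run value is 0 (i.e. coefficients not all zero).
    """
    coeffs = []
    run_length = decode_run_length()
    for _ in range(num_points):
        if run_length > 0:
            # All colour components of this point are zero; consume one run step.
            coeffs.append((0, 0, 0))
            run_length -= 1                     # i.e. --run_length
        else:
            # run_length == 0: the coefficients are not all zero and are decoded,
            # then a new zero-run value is read for the following points.
            coeffs.append(decode_coefficients())
            run_length = decode_run_length()
    return coeffs
```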
  • an index value may be first determined based on geometric information and/or a zero-run value.
  • the first index value corresponding to the first color component may be determined according to geometric information and/or a zero-run value.
  • the index value can be determined based on at least one of the absolute value, geometric information and zero-run value of the first attribute coefficient.
  • the second index value corresponding to the second color component may be determined according to at least one of the absolute value of the first attribute coefficient, geometric information, and a zero-run value.
  • the index value can be determined based on at least one of the absolute value of the first attribute coefficient, the absolute value of the second attribute coefficient, geometric information, and the zero-run value.
  • a third index value corresponding to the third color component may be determined according to at least one of the absolute value of the first attribute coefficient, the absolute value of the second attribute coefficient, geometric information, and a zero-run value.
  • When using the zero-run value corresponding to the current point to determine the index value, that is, when determining the first index value, the second index value or the third index value based on the zero-run value, the zero-run value and the first value may first be added or subtracted to determine the first operation result; the index value is then determined based on the first operation result.
  • the first value may be any value.
  • the first value may be 1 or 3, and the present application does not specifically limit this.
  • When determining the index value according to the first operation result, the first operation result may be used as the index value, the absolute value of the first operation result may be used as the index value, or the index value may be derived from the first operation result.
  • This application does not make any specific limitation.
  • the zero-run value and the first numerical value can be first added or subtracted to determine the first operation result; then the first numerical range corresponding to the first operation result is determined; finally, the index value can be determined according to the first numerical range and the correspondence between the first preset index value and the numerical range.
  • the value of the zero-run value corresponding to the current point can be an integer greater than or equal to 0, and after performing addition or subtraction operation on the zero-run value and the first value, the first operation result obtained can be a value greater than, equal to, or less than 0. Therefore, the first numerical range corresponding to the first operation result can be any numerical range.
  • the correspondence between the first preset index value and the numerical range can represent the mapping relationship between the numerical range and the index value.
  • the corresponding index value can be determined.
  • Table 1 is the correspondence between the first preset index value and the numerical range, as shown in Table 1,
  • the first numerical range corresponding to the first operation result can be (0, 1], and accordingly, the index value determined based on the correspondence between the first preset index value and the numerical range is 2.
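  • The derivation just described, offsetting the zero-run value by the first value and mapping the result to an index through a range table such as Table 1, can be sketched as follows (the concrete range boundaries and the default index are placeholders; only the (0, 1] → 2 entry comes from the example above):

```python
def index_from_run_length(run_length, first_value=1,
                          range_table=((float('-inf'), 0, 1),   # (low, high] -> index
                                       (0, 1, 2),
                                       (1, float('inf'), 3))):
    """Offset the zero-run value by the first value, then look the result up in
    a (low, high] -> index table analogous to Table 1."""
    result = run_length - first_value          # first operation result
    for low, high, index in range_table:
        if low < result <= high:
            return index
    return 0                                    # default context index
```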
  • When using the zero run value corresponding to the current point to determine the index value, that is, when determining the first index value, the second index value or the third index value based on the zero run value, the second numerical range corresponding to the zero run value can also be determined first; the index value is then determined according to the second numerical range and the correspondence between the second preset index value and the numerical range.
  • the zero run value corresponding to the current point may be an integer greater than or equal to 0. Therefore, the second numerical range corresponding to the zero run value may be a numerical range including an integer greater than or equal to 0.
  • the correspondence between the second preset index value and the numerical range can represent the mapping relationship between the numerical range and the index value.
  • the corresponding index value can be determined.
  • Table 2 is the correspondence between the second preset index value and the numerical range, as shown in Table 2,
  • For example, the second numerical range corresponding to the zero-run value can be (1, 3], and accordingly, the index value determined based on the correspondence between the second preset index value and the numerical range is 2.
  • When using the geometric information of the current point to determine the index value, that is, when determining the first index value, the second index value or the third index value based on the geometric information, the position range corresponding to the geometric information may first be determined; the index value is then determined according to the position range and the correspondence between the preset position range and the index value.
  • the geometric information of the current point may include the position coordinate information of the current point, which may include different spatial components, such as the x component, the y component, and the z component
  • the range can be divided by referring to some or all of the different spatial components.
  • the position range corresponding to the geometric information of the current point may be determined only according to the x component, or the position range corresponding to the geometric information of the current point may be determined according to the y component and the z component, or the position range corresponding to the geometric information of the current point may be determined according to the x component, the y component, and the z component.
  • the correspondence between the preset position range and the index value can represent the mapping relationship between the position range and the index value.
  • corresponding index values can be determined.
  • Table 3 is the correspondence between the preset position range and the index value, as shown in Table 3,
    Position range      Index value
    Position range 1    1
    Position range 2    2
    Position range 3    3
    Position range 4    4
  • the corresponding position range is position range 3
  • the index value determined based on the correspondence between the preset position range and the index value is 3.
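  • A possible realization of the position-range mapping is sketched below (the choice of the x component alone, the range boundaries and the returned indices are illustrative assumptions; only the table-lookup structure mirrors Table 3):

```python
def index_from_position(x, splits=(64, 128, 192)):
    """Map the x component of the current point's geometry to one of four
    position ranges and return the corresponding index value (cf. Table 3)."""
    if x <= splits[0]:
        return 1    # position range 1
    if x <= splits[1]:
        return 2    # position range 2
    if x <= splits[2]:
        return 3    # position range 3
    return 4        # position range 4
```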
  • When using the absolute value of the first attribute coefficient of the current point to determine the index value, that is, when determining the first index value, the second index value or the third index value based on the absolute value of the first attribute coefficient, the absolute value of the first attribute coefficient may be directly set as the index value.
  • the index value may be determined to be 2 based on the absolute value of the first attribute coefficient.
  • When using the absolute value of the first attribute coefficient of the current point to determine the index value, that is, when determining the first index value, the second index value or the third index value based on the absolute value of the first attribute coefficient, the third numerical range corresponding to the absolute value of the first attribute coefficient may first be determined; the index value is then determined according to the third numerical range and the correspondence between the third preset index value and the numerical range.
  • the absolute value of the first attribute coefficient may be an integer greater than or equal to 0. Therefore, the third numerical range corresponding to the absolute value of the first attribute coefficient may be a numerical range including an integer greater than or equal to 0.
  • the correspondence between the third preset index value and the numerical range can represent the mapping relationship between the numerical range and the index value.
  • the corresponding index value can be determined.
  • Table 4 is the correspondence between the third preset index value and the numerical range, as shown in Table 4,
  • the third numerical range corresponding to the absolute value of the first attribute coefficient can be (2, 4], and accordingly, the index value determined based on the correspondence between the third preset index value and the numerical range is 3.
  • When using the absolute value of the first attribute coefficient of the current point to determine the index value, that is, when determining the first index value, the second index value or the third index value based on the absolute value of the first attribute coefficient, the absolute value of the first attribute coefficient and the second value may first be added or subtracted to determine the second operation result; the index value is then determined based on the second operation result.
  • the second value may be any value.
  • the second value may be -1 or 2, which is not specifically limited in the present application.
  • For example, the absolute value of the first attribute coefficient of the current point may be 1 and the second value may be -2.
  • When determining the index value according to the second operation result, the second operation result may be used as the index value, the absolute value of the second operation result may be used as the index value, or the index value may be derived from the second operation result.
  • This application does not make any specific limitation.
  • When using the absolute value of the first attribute coefficient of the current point to determine the index value, that is, when determining the first index value, the second index value or the third index value based on the absolute value of the first attribute coefficient, the absolute value of the first attribute coefficient and the second value may first be added or subtracted to determine the second operation result; the fourth numerical range corresponding to the second operation result is then determined; finally, the index value can be determined according to the fourth numerical range and the correspondence between the fourth preset index value and the numerical range.
  • the absolute value of the first attribute coefficient may be an integer greater than or equal to 0, and after performing an addition or subtraction operation on the absolute value of the first attribute coefficient and the second value, the second operation result obtained may be a value greater than, equal to, or less than 0. Therefore, the fourth value range corresponding to the second operation result may be any value range.
  • the correspondence between the fourth preset index value and the numerical range can represent the mapping relationship between the numerical range and the index value.
  • the corresponding index value can be determined.
  • Table 5 is the correspondence between the fourth preset index value and the numerical range, as shown in Table 5,
  • For example, the fourth numerical range corresponding to the second operation result can be (-1, 1], and accordingly, the index value determined based on the correspondence between the fourth preset index value and the numerical range is 2.
  • When using the absolute value of the second attribute coefficient of the current point to determine the index value, that is, when determining the first index value, the second index value or the third index value based on the absolute value of the second attribute coefficient, the absolute value of the second attribute coefficient may be directly set as the index value.
  • the index value may be determined to be 1 based on the absolute value of the second attribute coefficient.
  • When using the absolute value of the second attribute coefficient of the current point to determine the index value, that is, when determining the first index value, the second index value or the third index value based on the absolute value of the second attribute coefficient, the fifth numerical range corresponding to the absolute value of the second attribute coefficient may first be determined; the index value is then determined according to the fifth numerical range and the correspondence between the fifth preset index value and the numerical range.
  • the absolute value of the second attribute coefficient may be an integer greater than or equal to 0. Therefore, the fifth numerical range corresponding to the absolute value of the second attribute coefficient may be a numerical range including an integer greater than or equal to 0.
  • the correspondence between the fifth preset index value and the numerical range can represent the mapping relationship between the numerical range and the index value.
  • the corresponding index value can be determined.
  • Table 6 is the correspondence between the fifth preset index value and the numerical range, as shown in Table 6,
  • the fifth numerical range corresponding to the absolute value of the second attribute coefficient can be (3, 5], and accordingly, the index value determined based on the correspondence between the fifth preset index value and the numerical range is 4.
  • When using the absolute value of the second attribute coefficient of the current point to determine the index value, that is, when determining the first index value, the second index value or the third index value based on the absolute value of the second attribute coefficient, the absolute value of the second attribute coefficient and the third value may first be added or subtracted to determine the third operation result; the index value is then determined based on the third operation result.
  • the third value may be any value.
  • the third value may be 0 or 1, and the present application does not specifically limit it.
  • When determining the index value according to the third operation result, the third operation result may be used as the index value, the absolute value of the third operation result may be used as the index value, or the index value may be derived from the third operation result.
  • This application does not make any specific limitation.
  • when the absolute value of the second attribute coefficient of the current point is used to determine the index value, that is, when the first index value or the second index value or the third index value is determined based on the absolute value of the second attribute coefficient, you can choose to first add or subtract the absolute value of the second attribute coefficient and the third value to determine the third operation result; then determine the sixth numerical range corresponding to the third operation result; finally, the index value can be determined according to the sixth numerical range and the correspondence between the sixth preset index value and the numerical range.
  • the absolute value of the second attribute coefficient may be an integer greater than or equal to 0, and after performing addition or subtraction operation on the absolute value of the second attribute coefficient and the third value, the third operation result obtained may be a value greater than, equal to, or less than 0. Therefore, the sixth value range corresponding to the third operation result may be any value range.
  • the correspondence between the sixth preset index value and the numerical range can represent the mapping relationship between the numerical range and the index value.
  • the corresponding index value can be determined.
  • Table 7 is the correspondence between the sixth preset index value and the numerical range, as shown in Table 7,
  • the sixth numerical range corresponding to the third operation result can be (-1, 2], and accordingly, the index value determined based on the correspondence between the sixth preset index value and the numerical range is 2.
  • the index value determined based on the geometric information, or the index value determined based on the zero-run value can be determined as the first index value; or the index value determined based on the geometric information and the index value determined based on the zero-run value can be operated and processed to obtain the first index value.
  • the index value determined based on the geometric information is A1
  • the index value determined based on the zero run value is A2
  • A1 can be directly determined as the first index value
  • A2 can be directly determined as the first index value
  • A1 and A2 can be compared in size, and the larger or smaller value of the two can be determined as the first index value
  • A1 and A2 can be calculated by addition, subtraction, weighted average, etc., and the calculation result can be determined as the first index value.
  • when determining an index value based on at least one of the absolute value of the first attribute coefficient, the geometric information, and the zero-run value, you can choose to use the index value determined based on the absolute value of the first attribute coefficient, or the index value determined based on the geometric information, or the index value determined based on the zero-run value, or you can choose to perform calculations on the index value determined based on the absolute value of the first attribute coefficient, and/or the index value determined based on the geometric information, and/or the index value determined based on the zero-run value to obtain the final index value.
  • the second index value can be determined based on the index value determined based on the absolute value of the first attribute coefficient, or based on the index value determined based on the geometric information, or based on the index value determined based on the zero-run value; the index value determined based on the absolute value of the first attribute coefficient, and/or the index value determined based on the geometric information, and/or the index value determined based on the zero-run value can also be operated and processed to obtain the second index value.
  • the index value determined based on the absolute value of the first attribute coefficient is B1
  • the index value determined based on the geometric information is B2
  • the index value determined based on the zero run value is B3
  • B1 can be directly determined as the second index value
  • B2 can be directly determined as the second index value
  • B3 can be directly determined as the second index value
  • B1, B2 and B3 can be compared in size, and the largest or smallest value among the three can be determined as the second index value
  • B1, B2 and B3 can be calculated by addition, subtraction, weighted average, etc., and the calculation result can be determined as the second index value.
  • This application does not make specific limitations.
  • when determining an index value based on at least one of the absolute value of the first attribute coefficient, the absolute value of the second attribute coefficient, the geometric information, and the zero-run value, you can choose to use the index value determined based on the absolute value of the first attribute coefficient, or the index value determined based on the absolute value of the second attribute coefficient, or the index value determined based on the geometric information, or the index value determined based on the zero-run value, or you can choose to perform calculations on the index value determined based on the absolute value of the first attribute coefficient, and/or the index value determined based on the absolute value of the second attribute coefficient, and/or the index value determined based on the geometric information, and/or the index value determined based on the zero-run value to obtain the final index value.
  • the third index value can be determined based on the index value determined based on the absolute value of the first attribute coefficient, or the index value determined based on the absolute value of the second attribute coefficient, or the index value determined based on the geometric information, or the index value determined based on the zero-run value; the index value determined based on the absolute value of the first attribute coefficient, and/or the index value determined based on the absolute value of the second attribute coefficient, and/or the index value determined based on the geometric information, and/or the index value determined based on the zero-run value can also be operated and processed to obtain the third index value.
  • for example, assuming that the index value determined based on the absolute value of the first attribute coefficient is C1, the index value determined based on the absolute value of the second attribute coefficient is C2, the index value determined based on the geometric information is C3, and the index value determined based on the zero-run value is C4, then C1 can be directly determined as the third index value
  • C2 can be directly determined as the third index value
  • C3 can be directly determined as the third index value
  • C4 can be directly determined as the third index value
  • C1, C2, C3 and C4 can be compared in size, and the largest or smallest value among the four can be determined as the third index value
  • C1, C2, C3 and C4 can be calculated by addition, subtraction, weighted average, etc., and the calculation result can be determined as the third index value.
  • This application does not make specific limitations.
  • Step 102 Determine a decoding coefficient of the current point according to the context indicated by the index value.
  • the decoding coefficient of the current point may be further determined according to the context indicated by the index value.
  • the decoding coefficient may be a value obtained after decoding processing is performed using the context indicated by the index value.
  • the index value may include at least one of the first index value, the second index value, and the third index value of the current point
  • the context indicated by the index value corresponding to different color components may be used to determine the decoding coefficient of the corresponding color component.
  • the decoding coefficient may include at least one of a first decoding coefficient, a second decoding coefficient and a third decoding coefficient.
  • the first decoding coefficient, the second decoding coefficient and the third decoding coefficient may correspond to the three color components of the current point respectively, that is, the first decoding coefficient, the second decoding coefficient and the third decoding coefficient may be obtained by parsing using the first context, the second context and the third context respectively.
  • when determining the decoding coefficient of the current point according to the context indicated by the index value, the first decoding coefficient of the current point can be determined according to the first context indicated by the first index value, the second decoding coefficient of the current point can be determined according to the second context indicated by the second index value, and the third decoding coefficient of the current point can be determined according to the third context indicated by the third index value.
  • Step 103 Determine the attribute coefficient of the current point according to the decoded coefficient.
  • the attribute coefficient of the current point can be further determined according to the decoding coefficient.
  • the attribute coefficient may be a related value of the attribute information determined based on the decoding coefficient.
  • since the decoding coefficient may include at least one of the first decoding coefficient, the second decoding coefficient and the third decoding coefficient, the attribute coefficient of the corresponding color component may be determined using the decoding coefficients corresponding to different color components.
  • when determining the attribute coefficient of the current point based on the decoding coefficient, the first attribute coefficient of the current point can be determined based on the first decoding coefficient, the second attribute coefficient of the current point can be determined based on the second decoding coefficient, and the third attribute coefficient of the current point can be determined based on the third decoding coefficient.
  • the attribute coefficient of the current point may be a quantized residual or a quantized transform coefficient of the attribute information of the current point.
  • the attribute coefficient may be a quantized residual or a quantized transform coefficient.
  • the attribute coefficient of the current point may include attribute coefficients of all color components, that is, the attribute coefficient of the current point may include at least one of a first attribute coefficient, a second attribute coefficient, and a third attribute coefficient.
  • the attribute coefficient of the current point is the attribute coefficient of the color component
  • the first context indicated by the first index value can be used to determine the first decoding coefficient
  • the first decoding coefficient can be used to determine the first attribute coefficient
  • the second context indicated by the second index value can be used to determine the second decoding coefficient
  • the second attribute coefficient can be determined using the second decoding coefficient
  • the third context indicated by the third index value can be used to determine the third decoding coefficient, and then the third attribute coefficient can be determined using the third decoding coefficient.
  • an adaptive context may be used for the attribute coefficient of some or all color components of the current point, or a pre-set context may be used for the attribute coefficient of some or all color components of the current point. Therefore, the attribute coefficient of any color component of the current point may be determined by an adaptive context or by a pre-set context.
  • a first decoding coefficient can be determined according to a first preset context; and/or, a second decoding coefficient can be determined according to a second preset context; and/or, a third decoding coefficient can be determined according to a third preset context.
  • for the first color component of the current point, you can choose to determine the first decoding coefficient according to the first preset context and determine the first attribute coefficient according to the first decoding coefficient, or you can choose to determine the first index value, then determine the first decoding coefficient of the current point according to the first context indicated by the first index value, and determine the first attribute coefficient of the current point according to the first decoding coefficient;
  • for the second color component of the current point, you can choose to determine the second decoding coefficient according to the second preset context and determine the second attribute coefficient according to the second decoding coefficient, or you can choose to determine the second index value, then determine the second decoding coefficient of the current point according to the second context indicated by the second index value, and determine the second attribute coefficient of the current point according to the second decoding coefficient;
  • for the third color component of the current point, you can choose to determine the third decoding coefficient according to the third preset context and determine the third attribute coefficient according to the third decoding coefficient, or you can choose to determine the third index value, then determine the third decoding coefficient of the current point according to the third context indicated by the third index value, and determine the third attribute coefficient of the current point according to the third decoding coefficient.
  • a pre-set context or an adaptive context can be used for any color component of the current point.
  • the context can be adaptively selected based on an index value determined based on the geometric information of the current point, or based on an index value determined based on a zero-run value corresponding to the current point, or based on an index value determined based on attribute coefficients of other color components of the current point (such as a first attribute coefficient and/or a second attribute coefficient).
  • the present application does not make any specific limitation on this.
  • the methods of determining the context are independent of each other, that is, there is no restriction that the methods of determining the context used by different color components must be the same.
  • the context can be adaptively selected based on the index value determined by the zero-run value corresponding to the current point
  • the context can be adaptively selected based on the index value determined by the first attribute coefficient
  • a pre-set context can be used. This application does not make specific limitations on this.
  • the first color component, the second color component, and the third color component may be different color components among all color components of the current point.
  • the first color component may be a G component
  • the first color component may be a B component
  • the third color component may be an R component
  • the first color component may be a U component
  • the first color component may be a Y component
  • the third color component may be a V component.
  • the bitstream can be decoded to determine the sign of the non-zero attribute coefficient.
  • the code stream can continue to be decoded to determine the sign of the attribute coefficient corresponding to the color component for which the attribute coefficient is not 0.
  • the determination of the sign of the first attribute coefficient can be continued; if the second attribute coefficient is not 0, then the determination of the sign of the second attribute coefficient can be continued; if the third attribute coefficient is not 0, then the determination of the sign of the third attribute coefficient can be continued.
  • the correlation between the attribute coefficients that have been encoded/decoded and the related parameters can be fully utilized to adaptively select different contexts for encoding and decoding, so that a variety of different adaptive context modes can be introduced, thereby improving the encoding and decoding performance of the point cloud attribute.
  • a preset context such as a first preset context, a second preset context, and a third preset context
  • the first attribute coefficient may be decoded first; then the decoded first attribute coefficient may be used to adaptively select context to decode the second attribute coefficient; finally, the decoded first attribute coefficient and/or the second attribute coefficient may be used to adaptively select context to decode the third attribute coefficient.
  • the first preset context is used to decode the first attribute coefficient;
  • the context for decoding the second attribute coefficient is adaptively selected according to whether the decoded first attribute coefficient is greater than or equal to, less than or equal to, or equal to certain constants (i.e., according to the numerical range it falls into);
  • the context for decoding the third attribute coefficient is adaptively selected according to whether the decoded first attribute coefficient is greater than or equal to, less than or equal to, or equal to certain constants, and whether the decoded second attribute coefficient is greater than or equal to, less than or equal to, or equal to certain constants (i.e., according to the corresponding numerical ranges).
  • an adaptive context may be selected for the attribute coefficient of one color component of the current point, and a preset context may be selected for the attribute coefficients of the other two color components.
  • the first attribute coefficient may be decoded first, and then the second attribute coefficient may be decoded; finally, the decoded first attribute coefficient and/or the second attribute coefficient may be used to adaptively select a context to decode the third attribute coefficient.
  • the first preset context is used to decode the first attribute coefficient
  • the second preset context is used to decode the second attribute coefficient
  • the context for decoding the third attribute coefficient is adaptively selected using the relationship between the decoded first attribute coefficient (plus or minus a constant) and the decoded second attribute coefficient (plus or minus a constant).
  • for example, the zero-run value may be used as the reference, and adaptive contexts may be used for the attribute coefficients of all color components.
  • the context for decoding the first attribute coefficient may be adaptively selected using the decoded run-length information; then the context for decoding the second attribute coefficient may be adaptively selected using the decoded run-length information; and finally the context for decoding the third attribute coefficient may be adaptively selected using the decoded run-length information.
  • the context for decoding the first attribute coefficient is adaptively selected using the previous non-zero run-length value
  • the context for decoding the second attribute coefficient is adaptively selected using the previous non-zero run-length value
  • the context for decoding the third attribute coefficient is adaptively selected using the previous non-zero run-length value.
  • alternatively, the context for decoding the first attribute coefficient is adaptively selected using the previous run-length value
  • the context for decoding the second attribute coefficient is adaptively selected using the previous run-length value
  • the context for decoding the third attribute coefficient is adaptively selected using the previous run-length value.
  • for example, the geometric position may be used as the reference, and adaptive contexts may be used for the attribute coefficients of all color components.
  • the context for decoding the first attribute coefficient may be adaptively selected using the geometric information of the current point; then the context for decoding the second attribute coefficient may be adaptively selected using the geometric information of the current point; and finally the context for decoding the third attribute coefficient may be adaptively selected using the geometric information of the current point.
  • the context for decoding the first attribute coefficient is adaptively selected using the magnitude of the position given by the geometric information of the current point
  • the context for decoding the second attribute coefficient is adaptively selected using the magnitude of the position given by the geometric information of the current point
  • the context for decoding the third attribute coefficient is adaptively selected using the magnitude of the position given by the geometric information of the current point.
  • the point cloud encoding and decoding method proposed in the embodiment of the present application can obtain stable performance gains without increasing the time complexity, and can improve the performance of point cloud encoding and decoding.
  • the embodiment of the present application provides a point cloud decoding method, wherein the decoder determines an index value; determines a decoding coefficient of the current point according to the context indicated by the index value; and determines an attribute coefficient of the current point according to the decoding coefficient. That is, in the embodiment of the present application, when the attribute coefficient is determined using the context, the correlation between the attribute coefficients that have been encoded/decoded and the related parameters can be fully utilized to adaptively select different contexts for encoding and decoding, thereby being able to introduce a variety of different adaptive context modes, and no longer being limited to using a fixed context for encoding and decoding of attribute information, thereby improving the encoding and decoding performance of point cloud attributes.
  • FIG9 is a schematic diagram of an implementation flow of the point cloud encoding method proposed in the embodiment of the present application. As shown in FIG9 , when encoding the point cloud, the following steps may be included:
  • Step 201 Determine the index value.
  • the index value may be determined first.
  • the encoding method of the embodiment of the present application specifically refers to a point cloud encoding method, which can be applied to a point cloud encoder (also referred to as "encoder" for short).
  • the point cloud to be processed includes at least one node.
  • for a node in the point cloud to be processed, when encoding the node, it can be regarded as the node to be encoded in the point cloud to be processed, and there are multiple encoded nodes around it.
  • the current point is the node that currently needs to be encoded among the at least one node.
  • each node in the point cloud to be processed corresponds to geometric information and attribute information, where the geometric information represents the spatial position of the point and the attribute information represents an attribute value of the point.
  • the attribute information may be color information, or reflectivity or other attributes, which is not specifically limited in the embodiments of the present application.
  • the attribute information may be color information in any color space.
  • the attribute information may be color information in an RGB space, or color information in a YUV space, or color information in a YCbCr space, etc., which is not specifically limited in the embodiments of the present application.
  • the encoder can arrange the at least one node according to a preset coding order so as to determine the index number corresponding to each node. In this way, according to the index number corresponding to each node, the encoder can process each node in the point cloud to be processed according to the preset coding order.
  • the preset encoding order may be one of the following: original order of point cloud, Morton order, Hilbert order, etc., which is not specifically limited in the embodiments of the present application.
  • the index value may be used to determine the context used by the attribute coefficient of the current point. If the attribute information of the current point is color information, different index values may be determined corresponding to different color components of the current point.
  • the index value may include at least one of a first index value, a second index value, and a third index value.
  • the first index value, the second index value, and the third index value may correspond to the three color components of the current point, respectively, that is, the first index value, the second index value, and the third index value may be used to determine the first context, the second context, and the third context used by the attribute coefficients of the three color components of the current point, respectively.
  • the first index value can be used to determine the first context used by the attribute coefficient of the R component of the current point
  • the second index value can be used to determine the second context used by the attribute coefficient of the G component of the current point
  • the third index value can be used to determine the third context used by the attribute coefficient of the B component of the current point.
  • the first index value can be used to determine the first context used by the attribute coefficient of the Y component of the current point
  • the second index value can be used to determine the second context used by the attribute coefficient of the U component of the current point
  • the third index value can be used to determine the third context used by the attribute coefficient of the V component of the current point.
  • At least one of the first index value, the second index value, and the third index value of the current point may be determined first.
  • the adaptive context identification information of the current point can be set, and then the adaptive context identification information of the current point can be written into the bitstream.
  • the adaptive context identification information can be set to indicate that the attribute coefficient of the current point is determined using an adaptive context.
  • a process for determining the first index value, and/or the second index value, and/or the third index value may be executed.
  • the adaptive context identification information can be understood as a flag indicating whether the adaptive context is used for the node in the point cloud.
  • the encoder can determine a variable as the adaptive context identification information, so that the adaptive context identification information can be determined by the value of the variable.
  • the values of the adaptive context identification information are different, and the method of determining the context used for the attribute coefficient of the current point is also different. Among them, it can be determined whether to use the adaptive context to determine the attribute coefficients of some or all color components of the current point according to the adaptive context identification information.
  • the value of the adaptive context identification information is 1, it may indicate that the attribute coefficient of the current point is determined using the adaptive context; if the value of the adaptive context identification information is 0, it may indicate that the attribute coefficient of the current point is not determined using the adaptive context.
  • the value of the adaptive context identification information may also be set to other values or parameters, and the present application does not impose any limitation thereto.
  • the index value used to indicate the context can be further determined. That is, after determining to use the adaptive context, the index value determination process is performed.
  • the adaptive context identification information setting process may not be performed. That is, it is possible to preset whether to use the adaptive context to determine the attribute coefficient of the current point, and it is also possible to preset whether to use the adaptive context to determine the attribute coefficient of one or more color components of all color components of the current point. That is, whether to use the adaptive context for some or all color components can be independently executed without relying on the value of the adaptive context identification information.
  • the adaptive context can be used for the attribute coefficients of some or all color components of the current point, or a pre-set context can be used for the attribute coefficients of some or all color components of the current point.
  • the geometric information of the current point and/or the zero-run value corresponding to the current point can be referred to. Therefore, the geometric information of the current point and the zero-run value corresponding to the current point can also be determined.
  • the geometric information of the current point may include the position coordinate information of the current point.
  • the geometric information of the current point may be the spatial coordinate information (x, y, z) corresponding to the current point.
  • the zero run value corresponding to the current point may include the zero run value of the current point, or the previous zero run value of the current point, or the previous non-zero zero run value of the current point.
  • the zero run value run_length can be used to count whether the attribute coefficient is 0. For the color attribute, if the zero run value run_length is not 0 (or greater than 0), it can be determined that the attribute coefficients of all color components of the current point are all 0; if the zero run value run_length is 0, it can be determined that the attribute coefficients of all color components of the current point are not all 0.
  • the zero run value run_length indicates that the attribute coefficients of all color components of the current point are all 0, then there is no need to determine the attribute coefficients, but the zero run value can be first decremented by 1 to update the zero run value, and then the attribute coefficient of the next point is determined according to the zero run value. For the attribute coefficient of the next point, it is continued to be judged whether the attribute coefficients of all color components are all 0 according to the zero run value run_length to determine whether the attribute coefficients of all color components need to be determined.
  • the zero run value run_length of the current point is determined to be 3, which is greater than 0, then it can be determined that the attribute coefficients of all color components of the current point are all 0, so there is no need to encode the attribute coefficients of the current point, and you can choose to first decrement the zero run value by 1, that is, perform the --run_length operation, and then determine the attribute coefficient of the next point based on the zero run value.
  • the corresponding zero run value run_length is 2, which is greater than 0, then it can be determined that the attribute coefficients of all color components of the point are all 0, so there is no need to encode the attribute coefficients of all color components of the point, and continue to decrement the zero run value by 1, that is, perform the --run_length operation.
  • an index value may be first determined based on geometric information and/or a zero-run value.
  • the first index value corresponding to the first color component may be determined according to geometric information and/or a zero-run value.
  • the index value can be determined based on at least one of the absolute value, geometric information and zero-run value of the first attribute coefficient.
  • the second index value corresponding to the second color component may be determined according to at least one of the absolute value of the first attribute coefficient, geometric information, and a zero-run value.
  • the index value can be determined based on at least one of the absolute value of the first attribute coefficient, the absolute value of the second attribute coefficient, geometric information, and the zero-run value.
  • a third index value corresponding to the third color component may be determined according to at least one of the absolute value of the first attribute coefficient, the absolute value of the second attribute coefficient, geometric information, and a zero-run value.
  • when using the zero-run value corresponding to the current point to determine the index value, that is, when determining the first index value or the second index value or the third index value based on the zero-run value, you can choose to first add or subtract the zero-run value and the first value to determine the first operation result, and then determine the index value based on the first operation result.
  • the first value may be any value.
  • the first value may be 1 or 3, and the present application does not specifically limit this.
  • when determining the index value according to the first operation result, the first operation result may be selected as the index value, the absolute value of the first operation result may be selected as the index value, or the index value may be derived from the first operation result.
  • This application does not make any specific limitation.
  • the zero-run value and the first numerical value can be first added or subtracted to determine the first operation result; then the first numerical range corresponding to the first operation result is determined; finally, the index value can be determined according to the first numerical range and the correspondence between the first preset index value and the numerical range.
  • the value of the zero-run value corresponding to the current point can be an integer greater than or equal to 0, and after performing addition or subtraction operation on the zero-run value and the first value, the first operation result obtained can be a value greater than, equal to, or less than 0. Therefore, the first numerical range corresponding to the first operation result can be any numerical range.
  • the correspondence between the first preset index value and the numerical range can represent the mapping relationship between the numerical range and the index value.
  • the corresponding index value can be determined.
  • Table 1 is the correspondence between the first preset index value and the numerical range.
  • the first numerical range corresponding to the first operation result can be (0, 1], and accordingly, the index value determined based on the correspondence between the first preset index value and the numerical range is 2.
  • when using the zero run value corresponding to the current point to determine the index value, that is, when determining the first index value or the second index value or the third index value based on the zero run value, the second numerical range corresponding to the zero run value can be determined first; then the index value can be determined according to the second numerical range and the correspondence between the second preset index value and the numerical range.
  • the zero run value corresponding to the current point may be an integer greater than or equal to 0. Therefore, the second numerical range corresponding to the zero run value may be a numerical range including an integer greater than or equal to 0.
  • the correspondence between the second preset index value and the numerical range can represent the mapping relationship between the numerical range and the index value.
  • the corresponding index value can be determined.
  • Table 2 is the correspondence between the second preset index value and the numerical range.
  • the second numerical range corresponding to the zero-run value can be (1, 3], and accordingly, the index value determined based on the correspondence between the second preset index value and the numerical range is 2.
  • when using the geometric information of the current point to determine the index value, that is, when determining the first index value or the second index value or the third index value based on the geometric information, you can choose to first determine the position range corresponding to the geometric information, and then determine the index value according to the position range and the correspondence between the preset position range and the index value.
  • the geometric information of the current point may include the position coordinate information of the current point, which may include different spatial components, such as the x component, the y component, and the z component
  • the range can be divided by referring to some or all of the different spatial components.
  • the position range corresponding to the geometric information of the current point may be determined only according to the x component, or the position range corresponding to the geometric information of the current point may be determined according to the y component and the z component, or the position range corresponding to the geometric information of the current point may be determined according to the x component, the y component, and the z component.
  • the correspondence between the preset position range and the index value can represent the mapping relationship between the position range and the index value.
  • corresponding index values can be determined.
  • Table 3 is the correspondence between the preset position range and the index value.
  • the corresponding position range is position range 3
  • the index value determined based on the correspondence between the preset position range and the index value is 3.
  • when using the absolute value of the first attribute coefficient of the current point to determine the index value, that is, when determining the first index value or the second index value or the third index value based on the absolute value of the first attribute coefficient, you can choose to directly set the absolute value of the first attribute coefficient as the index value.
  • the index value may be determined to be 2 based on the absolute value of the first attribute coefficient.
  • when using the absolute value of the first attribute coefficient of the current point to determine the index value, that is, when determining the first index value or the second index value or the third index value based on the absolute value of the first attribute coefficient, you can choose to first determine the third numerical range corresponding to the absolute value of the first attribute coefficient, and then determine the index value according to the third numerical range and the correspondence between the third preset index value and the numerical range.
  • the absolute value of the first attribute coefficient may be an integer greater than or equal to 0. Therefore, the third numerical range corresponding to the absolute value of the first attribute coefficient may be a numerical range including an integer greater than or equal to 0.
  • the correspondence between the third preset index value and the numerical range can represent the mapping relationship between the numerical range and the index value.
  • the corresponding index value can be determined.
  • Table 4 is the correspondence between the third preset index value and the numerical range.
  • the third numerical range corresponding to the absolute value of the first attribute coefficient can be (2, 4], and accordingly, the index value determined based on the correspondence between the third preset index value and the numerical range is 3.
  • when using the absolute value of the first attribute coefficient of the current point to determine the index value, that is, when determining the first index value or the second index value or the third index value based on the absolute value of the first attribute coefficient, you can choose to first add or subtract the absolute value of the first attribute coefficient and the second value to determine the second operation result, and then determine the index value based on the second operation result.
  • the second value may be any value.
  • the second value may be -1 or 2, which is not specifically limited in the present application.
  • for example, the absolute value of the first attribute coefficient of the current point may be 1 and the second value may be -2, and the second operation result is then obtained by adding or subtracting the two.
  • when determining the index value according to the second operation result, the second operation result may be selected as the index value, the absolute value of the second operation result may be selected as the index value, or the index value may be derived from the second operation result.
  • This application does not make any specific limitation.
  • when the absolute value of the first attribute coefficient of the current point is used to determine the index value, that is, when the first index value or the second index value or the third index value is determined based on the absolute value of the first attribute coefficient, you can choose to first add or subtract the absolute value of the first attribute coefficient and the second value to determine the second operation result; then determine the fourth numerical range corresponding to the second operation result; finally, the index value can be determined according to the fourth numerical range and the correspondence between the fourth preset index value and the numerical range.
  • the absolute value of the first attribute coefficient may be an integer greater than or equal to 0, and after performing an addition or subtraction operation on the absolute value of the first attribute coefficient and the second value, the second operation result obtained may be a value greater than, equal to, or less than 0. Therefore, the fourth value range corresponding to the second operation result may be any value range.
  • the correspondence between the fourth preset index value and the numerical range can represent the mapping relationship between the numerical range and the index value.
  • the corresponding index value can be determined.
  • Table 5 is the correspondence between the fourth preset index value and the numerical range.
  • the fourth numerical range corresponding to the second operation result can be (-1, 1], and accordingly, the index value determined based on the correspondence between the fourth preset index value and the numerical range is 2.
  • when using the absolute value of the second attribute coefficient of the current point to determine the index value, that is, when determining the first index value or the second index value or the third index value based on the absolute value of the second attribute coefficient, you can choose to directly set the absolute value of the second attribute coefficient as the index value.
  • for example, if the absolute value of the second attribute coefficient of the current point is 1, the index value may be determined to be 1 based on the absolute value of the second attribute coefficient.
  • when using the absolute value of the second attribute coefficient of the current point to determine the index value, that is, when determining the first index value or the second index value or the third index value based on the absolute value of the second attribute coefficient, you can choose to first determine the fifth numerical range corresponding to the absolute value of the second attribute coefficient, and then determine the index value according to the fifth numerical range and the correspondence between the fifth preset index value and the numerical range.
  • the absolute value of the second attribute coefficient may be an integer greater than or equal to 0. Therefore, the fifth numerical range corresponding to the absolute value of the second attribute coefficient may be a numerical range including an integer greater than or equal to 0.
  • the correspondence between the fifth preset index value and the numerical range can represent the mapping relationship between the numerical range and the index value.
  • the corresponding index value can be determined.
  • Table 6 shows the correspondence between the fifth preset index value and the numerical range.
  • the fifth numerical range corresponding to the absolute value of the second attribute coefficient can be (3, 5], and accordingly, the index value determined based on the correspondence between the fifth preset index value and the numerical range is 4.
  • when using the absolute value of the second attribute coefficient of the current point to determine the index value, that is, when determining the first index value or the second index value or the third index value based on the absolute value of the second attribute coefficient, you can choose to first add or subtract the absolute value of the second attribute coefficient and the third value to determine the third operation result, and then determine the index value based on the third operation result.
  • the third value may be any value.
  • the third value may be 0 or 1, and the present application does not specifically limit it.
  • when determining the index value according to the third operation result, the third operation result may be selected as the index value, the absolute value of the third operation result may be selected as the index value, or the index value may be derived from the third operation result.
  • This application does not make any specific limitation.
  • when the absolute value of the second attribute coefficient of the current point is used to determine the index value, that is, when the first index value or the second index value or the third index value is determined based on the absolute value of the second attribute coefficient, you can choose to first add or subtract the absolute value of the second attribute coefficient and the third value to determine the third operation result; then determine the sixth numerical range corresponding to the third operation result; finally, the index value can be determined according to the sixth numerical range and the correspondence between the sixth preset index value and the numerical range.
  • the absolute value of the second attribute coefficient may be an integer greater than or equal to 0, and after performing addition or subtraction operation on the absolute value of the second attribute coefficient and the third value, the third operation result obtained may be a value greater than, equal to, or less than 0. Therefore, the sixth value range corresponding to the third operation result may be any value range.
  • the correspondence between the sixth preset index value and the numerical range can represent the mapping relationship between the numerical range and the index value.
  • the corresponding index value can be determined.
  • Table 7 is the correspondence between the sixth preset index value and the numerical range.
  • the sixth numerical range corresponding to the third operation result can be (-1, 2], and accordingly, the index value determined based on the correspondence between the sixth preset index value and the numerical range is 2.
  • the index value determined based on the geometric information, or the index value determined based on the zero-run value can be determined as the first index value; or the index value determined based on the geometric information and the index value determined based on the zero-run value can be operated and processed to obtain the first index value.
  • the index value determined based on the geometric information is A1
  • the index value determined based on the zero run value is A2
  • A1 can be directly determined as the first index value
  • A2 can be directly determined as the first index value
  • A1 and A2 can be compared in size, and the larger or smaller value of the two can be determined as the first index value
  • A1 and A2 can be calculated by addition, subtraction, weighted average, etc., and the calculation result can be determined as the first index value.
  • when determining an index value based on at least one of the absolute value of the first attribute coefficient, the geometric information, and the zero-run value, you can choose to use the index value determined based on the absolute value of the first attribute coefficient, or the index value determined based on the geometric information, or the index value determined based on the zero-run value, or you can choose to perform calculations on the index value determined based on the absolute value of the first attribute coefficient, and/or the index value determined based on the geometric information, and/or the index value determined based on the zero-run value to obtain the final index value.
  • the second index value can be determined based on the index value determined based on the absolute value of the first attribute coefficient, or based on the index value determined based on the geometric information, or based on the index value determined based on the zero-run value; the index value determined based on the absolute value of the first attribute coefficient, and/or the index value determined based on the geometric information, and/or the index value determined based on the zero-run value can also be operated and processed to obtain the second index value.
  • the index value determined based on the absolute value of the first attribute coefficient is B1
  • the index value determined based on the geometric information is B2
  • the index value determined based on the zero run value is B3
  • B1 can be directly determined as the second index value
  • B2 can be directly determined as the second index value
  • B3 can be directly determined as the second index value
  • B1, B2 and B3 can be compared in size, and the largest or smallest value among the three can be determined as the second index value
  • B1, B2 and B3 can be calculated by addition, subtraction, weighted average, etc., and the calculation result can be determined as the second index value.
  • This application does not make specific limitations.
  • when determining an index value based on at least one of the absolute value of the first attribute coefficient, the absolute value of the second attribute coefficient, the geometric information, and the zero-run value, you can choose to use the index value determined based on the absolute value of the first attribute coefficient, or the index value determined based on the absolute value of the second attribute coefficient, or the index value determined based on the geometric information, or the index value determined based on the zero-run value, or you can choose to perform calculations on the index value determined based on the absolute value of the first attribute coefficient, and/or the index value determined based on the absolute value of the second attribute coefficient, and/or the index value determined based on the geometric information, and/or the index value determined based on the zero-run value to obtain the final index value.
  • the third index value can be determined based on the index value determined based on the absolute value of the first attribute coefficient, or the index value determined based on the absolute value of the second attribute coefficient, or the index value determined based on the geometric information, or the index value determined based on the zero-run value; the index value determined based on the absolute value of the first attribute coefficient, and/or the index value determined based on the absolute value of the second attribute coefficient, and/or the index value determined based on the geometric information, and/or the index value determined based on the zero-run value can also be operated and processed to obtain the third index value.
  • for example, assuming that the index value determined based on the absolute value of the first attribute coefficient is C1, the index value determined based on the absolute value of the second attribute coefficient is C2, the index value determined based on the geometric information is C3, and the index value determined based on the zero-run value is C4, then C1 can be directly determined as the third index value
  • C2 can be directly determined as the third index value
  • C3 can be directly determined as the third index value
  • C4 can be directly determined as the third index value
  • C1, C2, C3 and C4 can be compared in size, and the largest or smallest value among the four can be determined as the third index value
  • C1, C2, C3 and C4 can be calculated by addition, subtraction, weighted average, etc., and the calculation result can be determined as the third index value.
  • This application does not make specific limitations.
  • Step 202 Determine the coding coefficient of the current point according to the context indicated by the index value.
  • the coding coefficient of the current point may be further determined according to the context indicated by the index value.
  • the coding coefficient may be a value obtained after coding processing is performed using the context indicated by the index value.
  • since the index value may include at least one of the first index value, the second index value, and the third index value of the current point, the context indicated by the index value corresponding to different color components may be used to determine the encoding coefficient of the corresponding color component.
  • the coding coefficient may include at least one of a first coding coefficient, a second coding coefficient and a third coding coefficient.
  • the first coding coefficient, the second coding coefficient and the third coding coefficient may correspond to the three color components of the current point respectively, that is, the first coding coefficient, the second coding coefficient and the third coding coefficient may be obtained by parsing using the first context, the second context and the third context respectively.
  • when determining the coding coefficient of the current point according to the context indicated by the index value, the first coding coefficient of the current point can be determined according to the first context indicated by the first index value, the second coding coefficient of the current point can be determined according to the second context indicated by the second index value, and the third coding coefficient of the current point can be determined according to the third context indicated by the third index value.
  • Step 203 Determine the attribute coefficient of the current point according to the encoding coefficient.
  • the attribute coefficient of the current point can be further determined according to the coding coefficient.
  • the attribute coefficient may be a related value of the attribute information determined based on the coding coefficient.
  • since the coding coefficient may include at least one of the first coding coefficient, the second coding coefficient and the third coding coefficient, the attribute coefficient of the corresponding color component may be determined using the coding coefficients corresponding to different color components.
  • when determining the attribute coefficient of the current point according to the coding coefficient, the first attribute coefficient of the current point can be determined according to the first coding coefficient, the second attribute coefficient of the current point can be determined according to the second coding coefficient, and the third attribute coefficient of the current point can be determined according to the third coding coefficient.
  • the attribute coefficient of the current point may be a quantized residual or a quantized transform coefficient of the attribute information of the current point.
  • the attribute coefficient may be a quantized residual or a quantized transform coefficient.
  • the attribute coefficient of the current point may include attribute coefficients of all color components, that is, the attribute coefficient of the current point may include at least one of a first attribute coefficient, a second attribute coefficient, and a third attribute coefficient.
  • When the attribute coefficient of the current point is the attribute coefficient of a color component: the first context indicated by the first index value can be used to determine the first coding coefficient, and the first attribute coefficient can then be determined from the first coding coefficient; the second context indicated by the second index value can be used to determine the second coding coefficient, and the second attribute coefficient can then be determined from the second coding coefficient; and the third context indicated by the third index value can be used to determine the third coding coefficient, and the third attribute coefficient can then be determined from the third coding coefficient.
  • an adaptive context may be used for the attribute coefficient of some or all color components of the current point, or a pre-set context may be used for the attribute coefficient of some or all color components of the current point. Therefore, the attribute coefficient of any color component of the current point may be determined by an adaptive context or by a pre-set context.
  • the first coding coefficient can be determined according to the first preset context; and/or, the second coding coefficient can be determined according to the second preset context; and/or, the third coding coefficient can be determined according to the third preset context.
  • For the first color component of the current point, one can choose to determine the first coding coefficient according to the first preset context and determine the first attribute coefficient according to the first coding coefficient, or choose to determine the first index value, then determine the first coding coefficient of the current point according to the first context indicated by the first index value, and determine the first attribute coefficient of the current point according to the first coding coefficient;
  • For the second color component of the current point, one can choose to determine the second coding coefficient according to the second preset context and determine the second attribute coefficient according to the second coding coefficient, or choose to determine the second index value, then determine the second coding coefficient of the current point according to the second context indicated by the second index value, and determine the second attribute coefficient of the current point according to the second coding coefficient;
  • For the third color component of the current point, one can choose to determine the third coding coefficient according to the third preset context and determine the third attribute coefficient according to the third coding coefficient, or choose to determine the third index value, then determine the third coding coefficient of the current point according to the third context indicated by the third index value, and determine the third attribute coefficient of the current point according to the third coding coefficient.
  • a pre-set context or an adaptive context can be used for any color component of the current point.
  • the context can be adaptively selected based on an index value determined based on the geometric information of the current point, or based on an index value determined based on a zero-run value corresponding to the current point, or based on an index value determined based on attribute coefficients of other color components of the current point (such as a first attribute coefficient and/or a second attribute coefficient).
  • the present application does not make any specific limitation on this.
  • The methods of determining the context are independent of each other; there is no restriction that different color components must use the same method.
  • For example, for one color component the context can be adaptively selected based on the index value determined by the zero-run value corresponding to the current point; for another color component the context can be adaptively selected based on the index value determined by the first attribute coefficient; and for yet another color component a pre-set context can be used. This application does not make specific limitations on this.
  • the first color component, the second color component, and the third color component may be different color components among all color components of the current point.
  • For example, the first color component may be a G component, the second color component may be a B component, and the third color component may be an R component; alternatively, the first color component may be a Y component, the second color component may be a U component, and the third color component may be a V component.
  • After determining the attribute coefficient of the current point, it is possible to further decide whether to determine the sign of the attribute coefficient according to whether the attribute coefficient of the current point is 0. If the attribute coefficients of the current point are not all 0, the sign of each non-zero attribute coefficient can be determined.
  • the sign of the attribute coefficient corresponding to the color component for which the attribute coefficient is not 0 may continue to be determined.
  • the determination of the sign of the first attribute coefficient can be continued; if the second attribute coefficient is not 0, then the determination of the sign of the second attribute coefficient can be continued; if the third attribute coefficient is not 0, then the determination of the sign of the third attribute coefficient can be continued.
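  • Purely as a hedged illustration of this sign-handling step (write_bit and the bit convention are assumptions, not taken from this application), a sign could be emitted only for the components whose attribute coefficient is non-zero:

```python
# Illustrative only: signs are signalled just for non-zero attribute coefficients.
# write_bit is an assumed sign/bypass writer of the attribute encoder.

def encode_signs(attr_coeffs, write_bit):
    for c in attr_coeffs:
        if c != 0:                         # zero coefficients carry no sign
            write_bit(1 if c < 0 else 0)   # 1 for negative, 0 for positive

# Example: collect the bits in a list instead of writing them to a bitstream.
bits = []
encode_signs((3, 0, -2), bits.append)
print(bits)  # [0, 1]
```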
  • the correlation between the attribute coefficients that have been encoded/decoded and the related parameters can be fully utilized to adaptively select different contexts for encoding and decoding, so that a variety of different adaptive context modes can be introduced, thereby improving the encoding and decoding performance of the point cloud attribute.
  • the first attribute coefficient may be encoded first; then the encoded first attribute coefficient may be used to adaptively select context to encode the second attribute coefficient; finally, the encoded first attribute coefficient and/or the second attribute coefficient may be used to adaptively select context to encode the third attribute coefficient.
  • the first preset context is used to encode the first attribute coefficient;
  • the context for encoding the second attribute coefficient is adaptively selected according to whether the encoded first attribute coefficient is greater than or equal to, less than or equal to, or equal to certain constants (i.e., according to the numerical range it falls in);
  • the context for encoding the third attribute coefficient is adaptively selected according to whether the encoded first attribute coefficient is greater than or equal to, less than or equal to, or equal to certain constants, and whether the encoded second attribute coefficient is greater than or equal to, less than or equal to, or equal to certain constants (i.e., according to the corresponding numerical ranges), as sketched below.
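  • The sketch below illustrates the thresholding idea under stated assumptions: the absolute value of an already coded coefficient is compared with a few placeholder constants to obtain a small context index.

```python
# Sketch of the "greater than / less than / equal to certain constants" criterion:
# the absolute value of an already coded coefficient is bucketed against a few
# illustrative thresholds. The thresholds are placeholders, not normative values.

def context_index_from_prev(prev_abs, thresholds=(0, 1, 2)):
    for i, t in enumerate(thresholds):
        if prev_abs <= t:          # coefficient falls into the i-th numerical range
            return i
    return len(thresholds)         # larger than all of the constants

print([context_index_from_prev(v) for v in (0, 1, 2, 5)])  # [0, 1, 2, 3]
```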
  • an adaptive context may be selected for the attribute coefficient of one color component of the current point, and a preset context may be selected for the attribute coefficients of the other two color components.
  • the first attribute coefficient may be encoded first, and then the second attribute coefficient may be encoded; and finally, the encoded first attribute coefficient and/or the second attribute coefficient may be used to adaptively select a context to encode the third attribute coefficient.
  • the first attribute coefficient is encoded using the first preset context
  • the second attribute coefficient is encoded using the second preset context
  • the third attribute coefficient is encoded by adaptively selecting a context based on the relationship between the encoded first attribute coefficient plus or minus a constant and the encoded second attribute coefficient plus or minus a constant.
  • a reference zero run value may be selected to use adaptive context for attribute coefficients of all color components.
  • the context for encoding the first attribute coefficient may be adaptively selected using the encoded run-length information; then the context for encoding the second attribute coefficient may be adaptively selected using the encoded run-length information; and finally the context for encoding the third attribute coefficient may be adaptively selected using the encoded run-length information.
  • the first attribute coefficient is adaptively selected to encode the context using the previous set of non-zero runlength values
  • the second attribute coefficient is adaptively selected to encode the context using the previous set of non-zero runlength values
  • the third attribute coefficient is adaptively selected to encode the context using the previous set of non-zero runlength values.
  • the first attribute coefficient is adaptively selected to encode the context using the previous set of runlength values
  • the second attribute coefficient is adaptively selected to encode the context using the previous set of runlength values
  • the third attribute coefficient is adaptively selected to encode the context using the previous set of runlength values.
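  • A possible (purely illustrative) way to turn run-length information into a context index is sketched below; the cut points, the averaging rule and the non-zero filtering are assumptions rather than values defined by this application.

```python
# Sketch of run-length-driven context selection: the previous group of zero-run
# values (optionally only the non-zero ones, as one variant above suggests) is
# summarized and bucketed against illustrative cut points.

def context_index_from_runs(prev_runs, cuts=(0, 2, 8), nonzero_only=True):
    runs = [r for r in prev_runs if r != 0] if nonzero_only else list(prev_runs)
    if not runs:
        return 0
    avg = sum(runs) / len(runs)
    for i, c in enumerate(cuts):
        if avg <= c:
            return i
    return len(cuts)

print(context_index_from_runs([0, 3, 1]))   # average of the non-zero runs is 2 -> index 1
```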
  • a reference geometric position may be selected to use adaptive context for attribute coefficients of all color components.
  • the context for encoding the first attribute coefficient may be adaptively selected using the geometric information of the current point; then the context for encoding the second attribute coefficient may be adaptively selected using the geometric information of the current point; and finally the context for encoding the third attribute coefficient may be adaptively selected using the geometric information of the current point.
  • the first attribute coefficient is adaptively selected to encode the context using the geometric information position size of the current point
  • the second attribute coefficient is adaptively selected to encode the context using the geometric information position size of the current point
  • the third attribute coefficient is adaptively selected to encode the context using the geometric information position size of the current point.
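  • The following sketch illustrates one assumed way of mapping the "position size" of the current point to a context index; the bucketing rule and the number of contexts are placeholders only.

```python
# Sketch of geometry-driven context selection: the position of the current point
# is mapped to a small context index. This crude rule is an assumption, not the
# criterion actually defined by this application.

def context_index_from_geometry(position, num_contexts=4):
    x, y, z = position
    return (x + y + z) % num_contexts      # stand-in for a position-size measure

print(context_index_from_geometry((10, 3, 5)))  # 18 % 4 == 2
```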
  • the point cloud encoding and decoding method proposed in the embodiment of the present application can obtain stable performance gains without increasing the time complexity, and can improve the performance of point cloud encoding and decoding.
  • the embodiment of the present application provides a point cloud encoding method, wherein the encoder determines an index value; determines the encoding coefficient of the current point according to the context indicated by the index value; and determines the attribute coefficient of the current point according to the encoding coefficient. That is to say, in the embodiment of the present application, when the attribute coefficient is determined using the context, the correlation between the attribute coefficients that have been encoded/decoded and the related parameters can be fully utilized to adaptively select different contexts for encoding and decoding, thereby being able to introduce a variety of different adaptive context modes, and no longer being limited to using a fixed context for encoding and decoding of attribute information, thereby improving the encoding and decoding performance of point cloud attributes.
  • In a point cloud encoding and decoding method, when encoding and decoding the attribute coefficients of the current point, the attribute coefficient of any color component of the current point can be determined by an adaptive context or by a pre-set context, as detailed in the examples below.
  • the correlation between the attribute coefficients that have been encoded/decoded, as well as the related parameters can be fully utilized to adaptively select different contexts for encoding and decoding, thereby being able to introduce a variety of different adaptive context modes, thereby improving the encoding and decoding performance of point cloud attributes.
  • the attribute coefficients of the three color components of the current point are value0, value1, value2 (such as the first color component, the second color component, and the third color component), and the zero run value corresponding to the current point is run length.
  • run length is used for encoding, i.e., run length is encoded; when value0, value1, and value2 are not all 0 at the same time, the following scheme is used for encoding:
  • the attribute encoder encodes the absolute value of value2 using a context adaptively selected based on the absolute value of value1;
  • the attribute encoder is used to reduce the absolute value of value0 by one (for example, the first coding coefficient is reduced by one to obtain the corresponding first attribute coefficient), and the absolute values of value1 and value2 are used to adaptively select the context for encoding;
  • the attribute encoder is used to encode the absolute value of value0 (that is, the first encoding coefficient is the same as the first attribute coefficient) using the absolute value of value1 and the absolute value of value2 to adaptively select the context.
  • the value of run length is decoded.
  • decoding is performed as follows:
  • the attribute decoder decodes the absolute value of value2 using a context adaptively selected based on the absolute value of value1;
  • the attribute decoder decodes the absolute value of value0 using a context adaptively selected based on the absolute value of value1 and the absolute value of value2;
  • the absolute value of value0 is equal to its decoded value plus one (for example, one is added to the first decoding coefficient to obtain the corresponding first attribute coefficient);
  • or the absolute value of value0 is equal to its decoded value (i.e., the first decoding coefficient is the same as the first attribute coefficient).
  • run length is used for encoding, that is, run length is encoded; when the three attribute coefficients are not all 0 at the same time, the following scheme is used for encoding (a non-normative sketch of this encoder-side flow is given after the decoding description below):
  • when the absolute values of the first and second attribute coefficients are both equal to 0, the absolute value of the third attribute coefficient minus 1 is encoded using a fixed context (such as a third preset context) (for example, the third attribute coefficient minus 1 gives the corresponding third coding coefficient);
  • when the absolute value of the first attribute coefficient is equal to 0 but the absolute value of the second attribute coefficient is not equal to 0, the absolute value of the second attribute coefficient minus 1 is encoded using a fixed context (such as a second preset context) (for example, the second attribute coefficient minus 1 gives the corresponding second coding coefficient), and the absolute value of the third attribute coefficient is then encoded using the fixed context (that is, the third coding coefficient is the same as the third attribute coefficient);
  • when the absolute value of the first attribute coefficient is not equal to 0, the absolute value of the first attribute coefficient minus 1 is encoded using a fixed context (such as a first preset context) (for example, the first attribute coefficient minus 1 gives the corresponding first coding coefficient), and the absolute value of the second attribute coefficient is then encoded using the fixed context (that is, the second coding coefficient is the same as the second attribute coefficient);
  • a context is then adaptively selected using the magnitude relationship between the absolute value of the first attribute coefficient minus one and the absolute value of the second attribute coefficient, and the absolute value of the third attribute coefficient is encoded using the adaptively selected context.
  • the value of run length is decoded.
  • decoding is performed as follows:
  • the absolute value of the third attribute coefficient is decoded using the fixed context (the third preset context), and the absolute value of the third attribute coefficient is its decoded value (the third decoded coefficient) plus one (for example, the third decoded coefficient is added by one to obtain the corresponding third attribute coefficient);
  • the absolute value of the first attribute coefficient is equal to 0 but the absolute value of the second attribute coefficient is not equal to 0, the absolute value of the second attribute coefficient is decoded using a fixed context (a second preset context), the absolute value of the second attribute coefficient is its decoded value (a second decoded coefficient) plus one (for example, the second decoded coefficient is added by one to obtain the corresponding second attribute coefficient), and the absolute value of the third attribute coefficient is decoded using the fixed context;
  • the absolute value of the first attribute coefficient is decoded using the fixed context (the first preset context)
  • the absolute value of the first attribute coefficient is its decoded value (the first decoded coefficient) plus one (for example, the first decoded coefficient is added by one to obtain the corresponding first attribute coefficient)
  • the absolute value of the second attribute coefficient is decoded continuously using the fixed context (the second preset context);
  • a context is adaptively selected using the magnitude relationship between the absolute value of the first attribute coefficient minus one and the absolute value of the second attribute coefficient, and the absolute value of the third attribute coefficient is decoded using the adaptively selected context.
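  • To make the flow above easier to follow, here is a non-normative Python sketch of the encoder-side logic for this variant (fixed contexts for the first two components, an adaptively selected context for the third, and the minus-one step when earlier components are known to be zero). The enc object, its code/code_sign methods, the preset contexts ctx1/ctx2/ctx3 and the adaptive_ctx table are hypothetical names; the decoder mirrors this flow with the corresponding plus-one steps:

```python
def encode_nonzero_point(c1, c2, c3, enc):
    """Called after the zero-run value has been signalled, i.e. when the three
    attribute coefficients c1, c2, c3 of the current point are not all zero.
    enc is an assumed attribute-encoder interface with code(symbol, ctx),
    code_sign(bit), preset contexts ctx1/ctx2/ctx3 and an adaptive table
    adaptive_ctx[]; all of these names are placeholders."""
    a1, a2, a3 = abs(c1), abs(c2), abs(c3)
    if a1 == 0 and a2 == 0:
        # third component must be non-zero: code |c3| - 1 with the fixed context
        enc.code(a3 - 1, enc.ctx3)
    elif a1 == 0:
        enc.code(a2 - 1, enc.ctx2)      # code |c2| - 1 with the fixed context
        enc.code(a3, enc.ctx3)          # then |c3| unchanged
    else:
        enc.code(a1 - 1, enc.ctx1)      # code |c1| - 1 with the fixed context
        enc.code(a2, enc.ctx2)          # then |c2| unchanged
        # adaptively pick the context for |c3| from the relation between |c1| - 1 and |c2|
        if a1 - 1 > a2:
            ctx = enc.adaptive_ctx[0]
        elif a1 - 1 == a2:
            ctx = enc.adaptive_ctx[1]
        else:
            ctx = enc.adaptive_ctx[2]
        enc.code(a3, ctx)
    for c in (c1, c2, c3):              # signs only for the non-zero coefficients
        if c != 0:
            enc.code_sign(c < 0)
```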
  • run length is used for encoding, i.e., run length is encoded; when value0, value1, and value2 are not all 0 at the same time, the following scheme is used for encoding:
  • the attribute encoder is used to reduce the absolute value of value0 by one (for example, the first coding coefficient is reduced by one to obtain the corresponding first attribute coefficient), and the run length information is used to adaptively select the context for encoding;
  • the attribute encoder is used to encode the absolute value of value0 (that is, the first encoding coefficient is the same as the first attribute coefficient) using the run length information to adaptively select the context.
  • the value of run length is decoded.
  • decoding is performed as follows:
  • the absolute value of value0 is equal to its decoded value plus one (for example, one is added to the first decoding coefficient to obtain the corresponding first attribute coefficient);
  • or the absolute value of value0 is equal to its decoded value (i.e., the first decoding coefficient is the same as the first attribute coefficient).
  • run length is used for encoding, that is, run length is encoded; when the three attribute coefficients are not all 0 at the same time, the following scheme is used for encoding:
  • the absolute value of the third attribute coefficient minus 1 is adaptively selected for context coding using the run length information (for example, the third coding coefficient is subtracted by 1 to obtain the corresponding third attribute coefficient);
  • the absolute value of the first attribute coefficient is equal to 0 but the absolute value of the second attribute coefficient is not equal to 0, the absolute value of the second attribute coefficient is subtracted from 1 by adaptively selecting context encoding using the run length information (for example, subtracting 1 from the second coding coefficient to obtain the corresponding second attribute coefficient), and the absolute value of the third attribute coefficient is continuously adaptively selected from context encoding using the run length information (that is, the third coding coefficient is the same as the third attribute coefficient);
  • the absolute value of the first attribute coefficient is not equal to 0, the absolute value of the first attribute coefficient is subtracted by 1 by adaptively selecting context encoding using the run length information (for example, the first coding coefficient is subtracted by 1 to obtain the corresponding first attribute coefficient), and the absolute value of the second attribute coefficient is continuously adaptively selected context encoding using the run length information (that is, the second coding coefficient is the same as the second attribute coefficient);
  • the run length information is used to adaptively select a context, and the absolute value of the third attribute coefficient is encoded using this adaptively selected context.
  • the value of run length is decoded.
  • decoding is performed as follows:
  • the absolute value of the third attribute coefficient is adaptively selected for context decoding using the run length information, and the absolute value of the third attribute coefficient is its decoded value (the third decoded coefficient) plus one (for example, the third decoded coefficient is added by one to obtain the corresponding third attribute coefficient);
  • when the absolute value of the first attribute coefficient is equal to 0 but the absolute value of the second attribute coefficient is not equal to 0, the absolute value of the second attribute coefficient is decoded using a context adaptively selected from the run length information, the absolute value of the second attribute coefficient is its decoded value (the second decoded coefficient) plus one (for example, the second decoded coefficient is added by one to obtain the corresponding second attribute coefficient), and the absolute value of the third attribute coefficient is decoded using a context adaptively selected from the run length information;
  • the absolute value of the first attribute coefficient is adaptively selected context decoding using the run length information
  • the absolute value of the first attribute coefficient is its decoded value (the first decoded coefficient) plus one (for example, the first decoded coefficient is added by one to obtain the corresponding first attribute coefficient)
  • the absolute value of the second attribute coefficient is adaptively selected context decoding using the run length information
  • the run length information is used to adaptively select a context, and the absolute value of the third attribute coefficient is decoded using this adaptively selected context.
  • run length is used for encoding, i.e., run length is encoded; when value0, value1, and value2 are not all 0 at the same time, the following scheme is used for encoding:
  • the attribute encoder encodes the absolute value of value1 using a context adaptively selected based on geometric information;
  • the attribute encoder encodes the absolute value of value2 using a context adaptively selected based on geometric information;
  • the attribute encoder is used to reduce the absolute value of value0 by one (for example, the first coding coefficient is reduced by one to obtain the corresponding first attribute coefficient), and the context is adaptively selected using geometric information for encoding;
  • the attribute encoder is used to encode the absolute value of value0 (that is, the first coding coefficient is the same as the first attribute coefficient) using the geometric information adaptive selection context.
  • the value of run length is decoded.
  • decoding is performed as follows:
  • the attribute decoder decodes the absolute value of value1 using a context adaptively selected based on geometric information;
  • the attribute decoder decodes the absolute value of value2 using a context adaptively selected based on geometric information;
  • the attribute decoder decodes the absolute value of value0 using a context adaptively selected based on geometric information;
  • the absolute value of value0 is equal to its decoded value plus one (for example, one is added to the first decoding coefficient to obtain the corresponding first attribute coefficient);
  • or the absolute value of value0 is equal to its decoded value (i.e., the first decoding coefficient is the same as the first attribute coefficient).
  • run length is used for encoding, that is, run length is encoded; when the three attribute coefficients are not all 0 at the same time, the following scheme is used for encoding:
  • the absolute value of the third attribute coefficient is subtracted by 1 by adaptively selecting the context encoding using geometric information (for example, subtracting 1 from the third encoding coefficient to obtain the corresponding third attribute coefficient);
  • the absolute value of the first attribute coefficient is equal to 0 but the absolute value of the second attribute coefficient is not equal to 0, the absolute value of the second attribute coefficient is subtracted by 1 by using the context adaptively selected by the geometric information (for example, the second coding coefficient is subtracted by 1 to obtain the corresponding second attribute coefficient), and the absolute value of the third attribute coefficient is continuously encoded by using the context adaptively selected by the geometric information (that is, the third coding coefficient is the same as the third attribute coefficient);
  • the absolute value of the first attribute coefficient is not equal to 0, the absolute value of the first attribute coefficient is subtracted by 1 by using the context encoding adaptively selected by the geometric information (for example, the first coding coefficient is subtracted by 1 to obtain the corresponding first attribute coefficient), and the absolute value of the second attribute coefficient is continuously encoded by using the context encoding adaptively selected by the geometric information (that is, the second coding coefficient is the same as the second attribute coefficient);
  • the context is adaptively selected using geometric information, and the absolute value of the third attribute coefficient is encoded using the adaptively selected context.
  • the value of run length is decoded.
  • decoding is performed as follows:
  • the absolute value of the third attribute coefficient is adaptively selected context-decoded using geometric information, and the absolute value of the third attribute coefficient is its decoded value (third decoded coefficient) plus one (for example, the third decoded coefficient is added by one to obtain the corresponding third attribute coefficient);
  • when the absolute value of the first attribute coefficient is equal to 0 but the absolute value of the second attribute coefficient is not equal to 0, the absolute value of the second attribute coefficient is decoded using a context adaptively selected from the geometric information, the absolute value of the second attribute coefficient is its decoded value (the second decoded coefficient) plus one (for example, the second decoded coefficient is added by one to obtain the corresponding second attribute coefficient), and the absolute value of the third attribute coefficient is decoded using a context adaptively selected from the geometric information;
  • the absolute value of the first attribute coefficient is decoded using a context adaptively selected from the geometric information, the absolute value of the first attribute coefficient is its decoded value (the first decoded coefficient) plus one (for example, the first decoded coefficient is added by one to obtain the corresponding first attribute coefficient), and the absolute value of the second attribute coefficient is then decoded using a context adaptively selected from the geometric information;
  • a context is adaptively selected using geometric information, and the absolute value of the third attribute coefficient is decoded using the adaptively selected context.
  • the attribute coefficient of the third color component can be encoded and decoded by adaptively selecting a context based on the absolute value of the first attribute coefficient and/or the absolute value of the second attribute coefficient.
  • For the encoding end: the first attribute coefficient is encoded using the attribute encoder; the second attribute coefficient is encoded using the attribute encoder; then a context is adaptively selected using the absolute value of the encoded first attribute coefficient plus a constant 1 and the absolute value of the encoded second attribute coefficient plus a constant 2, and the third attribute coefficient is encoded by the attribute encoder using the adaptively selected context.
  • For the decoding end: the first attribute coefficient is decoded using the attribute decoder; the second attribute coefficient is decoded using the attribute decoder; then a context is adaptively selected using the absolute value of the decoded first attribute coefficient plus the constant 1 and the absolute value of the decoded second attribute coefficient plus the constant 2, and the third attribute coefficient is decoded using the adaptively selected context.
  • the first attribute coefficient is encoded using an attribute encoder; the second attribute coefficient is encoded with a context adaptively selected according to whether the absolute value of the encoded first attribute coefficient is equal to a constant 1 and whether it is less than or equal to a constant 2; the third attribute coefficient is encoded with a context adaptively selected according to whether the absolute value of the encoded first attribute coefficient is equal to a constant 3 and whether it is less than or equal to a constant 4, and whether the absolute value of the encoded second attribute coefficient is equal to a constant 5 and whether it is less than or equal to a constant 6. For the decoding end: the first attribute coefficient is decoded using an attribute decoder; the second attribute coefficient is decoded with a context adaptively selected according to whether the absolute value of the decoded first attribute coefficient is equal to the constant 1 and whether it is less than or equal to the constant 2; the third attribute coefficient is decoded with a context adaptively selected according to whether the absolute value of the decoded first attribute coefficient is equal to the constant 3 and whether it is less than or equal to the constant 4, and whether the absolute value of the decoded second attribute coefficient is equal to the constant 5 and whether it is less than or equal to the constant 6.
  • the second attribute coefficient is encoded by using an attribute encoder; the third attribute coefficient is adaptively selected to encode the context by using whether the absolute value of the encoded second attribute coefficient is equal to constant 7 and is less than or equal to constant 8; for the decoding end: the second attribute coefficient is decoded by using an attribute decoder; the third attribute coefficient is adaptively selected to decode the context by using whether the absolute value of the decoded second attribute coefficient is equal to constant 7 and is less than or equal to constant 8.
  • the second attribute coefficient is encoded by using an attribute encoder; the third attribute coefficient is adaptively selected to encode the context by using whether the absolute value of the encoded second attribute coefficient is equal to constant 7 and whether it is greater than or equal to constant 8; for the decoding end: the second attribute coefficient is decoded by using an attribute decoder; the third attribute coefficient is adaptively selected to decode the context by using whether the absolute value of the decoded second attribute coefficient is equal to constant 7 and whether it is greater than or equal to constant 8.
  • the first attribute coefficient is encoded by using an attribute encoder
  • the second attribute coefficient is encoded by using an attribute encoder
  • the attribute encoder adaptively selects a context for encoding based on the relationship between the absolute value of the encoded first attribute coefficient minus a constant 9 and the absolute value of the encoded second attribute coefficient minus a constant 10
  • the first attribute coefficient is decoded by using an attribute decoder
  • the second attribute coefficient is decoded by using the attribute decoder; the attribute decoder adaptively selects the context for decoding based on the absolute value of the decoded first attribute coefficient minus the constant 9 and the absolute value of the decoded second attribute coefficient minus the constant 10.
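  • The sketch below shows one assumed way of packing the equality and threshold comparisons described above into a single context index; the constants are placeholders, not values from this application.

```python
# Sketch of the constant-comparison criteria above: equality / threshold tests on
# the already coded first and second coefficients are packed into a small context
# index. The constants c3..c6 are illustrative placeholders only.

def context_index_from_flags(abs_c1, abs_c2, c3=0, c4=1, c5=0, c6=1):
    bits = (abs_c1 == c3, abs_c1 <= c4, abs_c2 == c5, abs_c2 <= c6)
    idx = 0
    for b in bits:
        idx = (idx << 1) | int(b)          # combine the comparison results
    return idx                             # one of up to 16 contexts

print(context_index_from_flags(0, 2))      # 0b1100 == 12
```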
  • the index value determined based on the geometric information, or the index value determined based on the zero-run value can be determined as the first index value; or the index value determined based on the geometric information and the index value determined based on the zero-run value can be operated and processed to obtain the first index value.
  • the second index value can be determined based on the index value determined based on the absolute value of the first attribute coefficient, or based on the index value determined based on the geometric information, or based on the index value determined based on the zero-run value; the index value determined based on the absolute value of the first attribute coefficient, and/or the index value determined based on the geometric information, and/or the index value determined based on the zero-run value can also be operated and processed to obtain the second index value.
  • the third index value can be determined based on the index value determined based on the absolute value of the first attribute coefficient, or the index value determined based on the absolute value of the second attribute coefficient, or the index value determined based on the geometric information, or the index value determined based on the zero-run value; the index value determined based on the absolute value of the first attribute coefficient, and/or the index value determined based on the absolute value of the second attribute coefficient, and/or the index value determined based on the geometric information, and/or the index value determined based on the zero-run value can also be operated and processed to obtain the third index value.
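  • As an illustration of the "operated and processed" wording above (the weights and the number of contexts are assumptions), several candidate index values could be merged as follows:

```python
# Sketch of combining candidate index values (geometry-based, run-based,
# coefficient-based) into one index by a weighted sum clipped to the context count.

def combine_index_values(parts, weights=None, num_contexts=8):
    weights = weights if weights is not None else [1] * len(parts)
    combined = sum(w * p for w, p in zip(weights, parts))
    return min(combined, num_contexts - 1)

print(combine_index_values([2, 1, 3]))             # 6
print(combine_index_values([2, 1, 3], [2, 1, 1]))  # clipped to 7
```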
  • the coding coefficient (decoding coefficient) of the current point can be further determined according to the context indicated by the index value.
  • the coding coefficient (decoding coefficient) can be a value obtained after encoding and decoding processing using the context indicated by the index value.
  • the attribute coefficients of the current point can be further determined according to the coding coefficients (decoding coefficients).
  • the attribute coefficients can be related values of attribute information determined based on the coding coefficients (decoding coefficients).
  • the attribute coefficient of the current point may be a quantized residual or a quantized transform coefficient of the attribute information of the current point.
  • the attribute coefficient may be a quantized residual or a quantized transform coefficient.
  • a 1-bit flag (such as adaptive context identification information) can also be used to indicate whether the adaptive context selection method is enabled. This flag can be placed in the attribute header among the high-level syntax elements and is conditionally parsed only under certain specific conditions; if the flag does not appear in the bitstream, its value can default to a fixed value (a parsing sketch is given after this discussion).
  • the decoding end needs to decode the flag bit. If the flag bit does not appear in the bit stream, it will not be decoded.
  • the default value of the flag bit is a fixed value.
  • the adaptive context identification information can be understood as a flag indicating whether the adaptive context is used for the node in the point cloud.
  • the encoder can determine a variable as the adaptive context identification information, so that the adaptive context identification information can be determined by the value of the variable.
  • the value of the adaptive context identification information is 1, it may indicate that the attribute coefficient of the current point is determined using the adaptive context; if the value of the adaptive context identification information is 0, it may indicate that the attribute coefficient of the current point is not determined using the adaptive context.
  • the adaptive context identification information setting process may not be performed. That is, it is possible to preset whether to use the adaptive context to determine the attribute coefficient of the current point, and it is also possible to preset whether to use the adaptive context to determine the attribute coefficient of one or more color components of all color components of the current point. That is, whether to use the adaptive context for some or all color components can be independently executed without relying on the value of the adaptive context identification information.
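  • A hedged sketch of the conditional flag parsing described above is given below; read_bit, the enabling condition and the default value are placeholders for whatever the attribute-header syntax actually specifies:

```python
# Sketch of conditional parsing of the 1-bit adaptive-context flag in the attribute
# header: the bit is read only when the enabling condition holds, otherwise a fixed
# default is assumed.

def parse_adaptive_context_flag(read_bit, condition_met, default=0):
    return read_bit() if condition_met else default

print(parse_adaptive_context_flag(lambda: 1, condition_met=True))   # 1 (read from the stream)
print(parse_adaptive_context_flag(lambda: 1, condition_met=False))  # 0 (default used)
```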
  • the embodiment of the present application provides a point cloud encoding and decoding method, wherein the decoder determines an index value; determines a decoding coefficient of a current point according to the context indicated by the index value; and determines an attribute coefficient of the current point according to the decoding coefficient.
  • the encoder determines an index value; determines an encoding coefficient of a current point according to the context indicated by the index value; and determines an attribute coefficient of the current point according to the encoding coefficient.
  • the correlation between the attribute coefficients that have been encoded/decoded and the related parameters can be fully utilized to adaptively select different contexts for encoding and decoding, thereby being able to introduce a variety of different adaptive context modes, and no longer being limited to using a fixed context for encoding and decoding of attribute information, thereby improving the encoding and decoding performance of point cloud attributes.
  • FIG. 10 is a first schematic diagram of the composition structure of an encoder.
  • the encoder 20 may include: a first determining unit 21, wherein:
  • the first determination unit 21 is configured to determine an index value; determine a coding coefficient of a current point according to a context indicated by the index value; and determine an attribute coefficient of the current point according to the coding coefficient.
  • a "unit" can be a part of a circuit, a part of a processor, a part of a program or software, etc., and of course it can also be a module, or it can be non-modular.
  • the components in this embodiment can be integrated into a processing unit, or each unit can exist physically separately, or two or more units can be integrated into one unit.
  • the above-mentioned integrated unit can be implemented in the form of hardware or in the form of a software functional module.
  • the integrated unit is implemented in the form of a software function module and is not sold or used as an independent product, it can be stored in a computer-readable storage medium.
  • the technical solution of this embodiment is essentially or the part that contributes to the prior art or all or part of the technical solution can be embodied in the form of a software product.
  • the computer software product is stored in a storage medium, including several instructions for a computer device (which can be a personal computer, server, or network device, etc.) or a processor to perform all or part of the steps of the method described in this embodiment.
  • the aforementioned storage medium includes: U disk, mobile hard disk, read-only memory (ROM), random access memory (RAM), disk or optical disk, etc., various media that can store program codes.
  • an embodiment of the present application provides a computer-readable storage medium, which is applied to the encoder 20, and the computer-readable storage medium stores a computer program, and when the computer program is executed by the first processor, the method described in any one of the aforementioned embodiments is implemented.
  • Figure 11 is a second schematic diagram of the composition structure of the encoder.
  • the encoder 20 may include: a first memory 22 and a first processor 23, a first communication interface 24 and a first bus system 25.
  • the first memory 22, the first processor 23, and the first communication interface 24 are coupled together through the first bus system 25.
  • the first bus system 25 is used to achieve connection and communication between these components.
  • the first bus system 25 also includes a power bus, a control bus, and a status signal bus.
  • for clarity of illustration, the various buses are labeled as the first bus system 25 in FIG. 11. Among them,
  • the first communication interface 24 is used for receiving and sending signals during the process of sending and receiving information with other external network elements;
  • the first memory 22 is used to store a computer program that can be run on the first processor
  • the first processor 23 is used to determine an index value when running the computer program; determine a coding coefficient of a current point according to a context indicated by the index value; and determine an attribute coefficient of the current point according to the coding coefficient.
  • the first memory 22 in the embodiment of the present application can be a volatile memory or a non-volatile memory, or can include both volatile and non-volatile memories.
  • the non-volatile memory can be a read-only memory (ROM), a programmable read-only memory (PROM), an erasable programmable read-only memory (EPROM), an electrically erasable programmable read-only memory (EEPROM), or a flash memory.
  • the volatile memory can be a random access memory (RAM), which is used as an external cache.
  • Many forms of RAM are available, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchronous link DRAM (SLDRAM) and direct Rambus RAM (DR RAM).
  • the first processor 23 may be an integrated circuit chip with signal processing capabilities. In the implementation process, each step of the above method can be completed by the hardware integrated logic circuit in the first processor 23 or the instruction in the form of software.
  • the above-mentioned first processor 23 can be a general processor, a digital signal processor (Digital Signal Processor, DSP), an application specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field programmable gate array (Field Programmable Gate Array, FPGA) or other programmable logic devices, discrete gates or transistor logic devices, discrete hardware components.
  • the methods, steps and logic block diagrams disclosed in the embodiments of the present application can be implemented or executed.
  • the general processor can be a microprocessor or the processor can also be any conventional processor, etc.
  • the steps of the method disclosed in the embodiments of the present application can be directly embodied as a hardware decoding processor to execute, or the hardware and software modules in the decoding processor can be executed.
  • the software module can be located in a mature storage medium in the field such as a random access memory, a flash memory, a read-only memory, a programmable read-only memory or an electrically erasable programmable memory, a register, etc.
  • the storage medium is located in the first memory 22, and the first processor 23 reads the information in the first memory 22 and completes the steps of the above method in combination with its hardware.
  • the processing unit can be implemented in one or more application specific integrated circuits (Application Specific Integrated Circuits, ASIC), digital signal processors (Digital Signal Processing, DSP), digital signal processing devices (DSP Device, DSPD), programmable logic devices (Programmable Logic Device, PLD), field programmable gate arrays (Field-Programmable Gate Array, FPGA), general processors, controllers, microcontrollers, microprocessors, other electronic units for performing the functions described in this application or a combination thereof.
  • the technology described in this application can be implemented by a module (such as a process, function, etc.) that performs the functions described in this application.
  • the software code can be stored in a memory and executed by a processor.
  • the memory can be implemented in the processor or outside the processor.
  • the first processor 23 is further configured to execute any one of the methods described in the foregoing embodiments when running the computer program.
  • FIG. 12 is a first schematic diagram of the composition structure of a decoder.
  • the decoder 30 may include: a second determining unit 31; wherein:
  • the second determination unit 31 is configured to determine an index value; determine a decoding coefficient of a current point according to a context indicated by the index value; and determine an attribute coefficient of the current point according to the decoding coefficient.
  • a "unit" can be a part of a circuit, a part of a processor, a part of a program or software, etc., and of course it can also be a module, or it can be non-modular.
  • the components in this embodiment can be integrated into a processing unit, or each unit can exist physically separately, or two or more units can be integrated into one unit.
  • the above-mentioned integrated unit can be implemented in the form of hardware or in the form of a software functional module.
  • the integrated unit is implemented in the form of a software function module and is not sold or used as an independent product, it can be stored in a computer-readable storage medium.
  • the technical solution of this embodiment is essentially or the part that contributes to the prior art or all or part of the technical solution can be embodied in the form of a software product.
  • the computer software product is stored in a storage medium, including several instructions for a computer device (which can be a personal computer, server, or network device, etc.) or a processor to perform all or part of the steps of the method described in this embodiment.
  • the aforementioned storage medium includes: U disk, mobile hard disk, read-only memory (ROM), random access memory (RAM), disk or optical disk, etc., various media that can store program codes.
  • an embodiment of the present application provides a computer-readable storage medium, which is applied to the decoder 30.
  • the computer-readable storage medium stores a computer program, and when the computer program is executed by the first processor, the method described in any one of the above embodiments is implemented.
  • Figure 13 is a second schematic diagram of the composition structure of the decoder.
  • the decoder 30 may include: a second memory 32 and a second processor 33, a second communication interface 34 and a second bus system 35.
  • the second memory 32 and the second processor 33, and the second communication interface 34 are coupled together through the second bus system 35.
  • the second bus system 35 is used to realize the connection and communication between these components.
  • the second bus system 35 also includes a power bus, a control bus and a status signal bus.
  • for clarity of illustration, the various buses are marked as the second bus system 35 in FIG. 13. Among them,
  • the second communication interface 34 is used for receiving and sending signals during the process of sending and receiving information with other external network elements;
  • the second memory 32 is used to store a computer program that can be run on the second processor
  • the second processor 33 is used to determine an index value when running the computer program; determine a decoding coefficient of a current point according to a context indicated by the index value; and determine an attribute coefficient of the current point according to the decoding coefficient.
  • the second memory 32 in the embodiment of the present application can be a volatile memory or a non-volatile memory, or can include both volatile and non-volatile memories.
  • the non-volatile memory can be a read-only memory (ROM), a programmable read-only memory (PROM), an erasable programmable read-only memory (EPROM), an electrically erasable programmable read-only memory (EEPROM), or a flash memory.
  • the volatile memory can be a random access memory (RAM), which is used as an external cache.
  • Many forms of RAM are available, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchronous link DRAM (SLDRAM) and direct Rambus RAM (DR RAM).
  • the second processor 33 may be an integrated circuit chip with signal processing capabilities. In the implementation process, each step of the above method can be completed by the hardware integrated logic circuit or software instructions in the second processor 33.
  • the above-mentioned second processor 33 can be a general processor, a digital signal processor (Digital Signal Processor, DSP), an application specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field programmable gate array (Field Programmable Gate Array, FPGA) or other programmable logic devices, discrete gates or transistor logic devices, discrete hardware components.
  • the methods, steps and logic block diagrams disclosed in the embodiments of the present application can be implemented or executed.
  • the general processor can be a microprocessor or the processor can also be any conventional processor, etc.
  • the steps of the method disclosed in the embodiments of the present application can be directly embodied as a hardware decoding processor to execute, or the hardware and software modules in the decoding processor can be executed.
  • the software module can be located in a mature storage medium in the field such as a random access memory, a flash memory, a read-only memory, a programmable read-only memory or an electrically erasable programmable memory, a register, etc.
  • the storage medium is located in the second memory 32, and the second processor 33 reads the information in the second memory 32 and completes the steps of the above method in combination with its hardware.
  • the processing unit can be implemented in one or more application specific integrated circuits (Application Specific Integrated Circuits, ASIC), digital signal processors (Digital Signal Processing, DSP), digital signal processing devices (DSP Device, DSPD), programmable logic devices (Programmable Logic Device, PLD), field programmable gate arrays (Field-Programmable Gate Array, FPGA), general processors, controllers, microcontrollers, microprocessors, other electronic units for performing the functions described in this application or a combination thereof.
  • the technology described in this application can be implemented by a module (such as a process, function, etc.) that performs the functions described in this application.
  • the software code can be stored in a memory and executed by a processor.
  • the memory can be implemented in the processor or outside the processor.
  • the embodiment of the present application provides an encoder and a decoder.
  • the correlation between the already encoded/decoded attribute coefficients and the related parameters can be fully utilized to adaptively select different contexts for encoding and decoding, thereby being able to introduce a variety of different adaptive context modes, and no longer being limited to using a fixed context for encoding and decoding attribute information, thereby improving the encoding and decoding performance of point cloud attributes.
  • the embodiment of the present application also provides a code stream, which is generated by bit encoding based on the information to be encoded; wherein the information to be encoded includes at least: adaptive context identification information of the current point, geometric information of the current point, and zero run value corresponding to the current point.
  • the embodiment of the present application provides a point cloud encoding and decoding method, an encoder, a decoder, a bitstream and a storage medium, wherein the decoder determines an index value; determines a decoding coefficient of a current point according to the context indicated by the index value; and determines an attribute coefficient of the current point according to the decoding coefficient.
  • the encoder determines an index value; determines a coding coefficient of a current point according to the context indicated by the index value; and determines an attribute coefficient of the current point according to the coding coefficient.
  • the correlation between the attribute coefficients that have been encoded/decoded and the related parameters can be fully utilized to adaptively select different contexts for encoding and decoding, thereby being able to introduce a variety of different adaptive context modes, and no longer being limited to using a fixed context for encoding and decoding of attribute information, thereby improving the encoding and decoding performance of point cloud attributes.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

An embodiment of the present application provides a point cloud encoding and decoding method. A decoder determines an index value, determines a decoding coefficient of a current point according to the context indicated by the index value, and determines an attribute coefficient of the current point according to the decoding coefficient. An encoder determines an index value, determines an encoding coefficient of the current point according to the context indicated by the index value, and determines an attribute coefficient of the current point according to the encoding coefficient.

Description

Point Cloud Encoding and Decoding Method, Encoder, Decoder, Bitstream and Storage Medium
Technical Field
The embodiments of the present application relate to the field of point cloud compression technology, and in particular to a point cloud encoding and decoding method, an encoder, a decoder, a bitstream and a storage medium.
Background Art
In the geometry-based point cloud compression (G-PCC) codec framework or the video-based point cloud compression (V-PCC) codec framework provided by the Moving Picture Experts Group (MPEG), the geometry information and the attribute information of a point cloud are encoded separately. After geometry encoding is completed, the geometry information is reconstructed, and the encoding of the attribute information depends on the reconstructed geometry information. Attribute information encoding is mainly aimed at the encoding of color information: the color information is converted into a YUV color space that better matches the visual characteristics of the human eye, the preprocessed attribute information is then attribute-encoded, and finally a binary attribute bitstream is generated.
However, in the current process of encoding/decoding attribute coefficient values, no consideration is given to fully exploiting the correlation between already encoded/decoded attribute coefficients to adaptively select different contexts for encoding and decoding. As a result, a variety of different adaptive context modes cannot be introduced, which reduces the encoding and decoding performance of point cloud attributes.
Summary of the Invention
The embodiments of the present application provide a point cloud encoding and decoding method, an encoder, a decoder, a bitstream and a storage medium, which can improve the encoding and decoding performance of point cloud attributes.
In a first aspect, an embodiment of the present application provides a point cloud decoding method, applied to a decoder, the method including:
determining an index value;
determining a decoding coefficient of a current point according to the context indicated by the index value; and
determining an attribute coefficient of the current point according to the decoding coefficient.
In a second aspect, an embodiment of the present application provides a point cloud encoding method, applied to an encoder, the method including:
determining an index value;
determining an encoding coefficient of a current point according to the context indicated by the index value; and
determining an attribute coefficient of the current point according to the encoding coefficient.
In a third aspect, an embodiment of the present application provides an encoder, the encoder including a first determining unit, wherein
the first determining unit is configured to determine an index value; determine an encoding coefficient of a current point according to the context indicated by the index value; and determine an attribute coefficient of the current point according to the encoding coefficient.
In a fourth aspect, an embodiment of the present application provides an encoder, the encoder including a first memory and a first processor, wherein
the first memory is configured to store a computer program executable on the first processor; and
the first processor is configured to perform the point cloud encoding method described above when running the computer program.
In a fifth aspect, an embodiment of the present application provides a decoder, the decoder including a second determining unit, wherein
the second determining unit is configured to determine an index value; determine a decoding coefficient of a current point according to the context indicated by the index value; and determine an attribute coefficient of the current point according to the decoding coefficient.
In a sixth aspect, an embodiment of the present application provides a decoder, the decoder including a second memory and a second processor, wherein
the second memory is configured to store a computer program executable on the second processor; and
the second processor is configured to perform the point cloud decoding method described above when running the computer program.
In a seventh aspect, an embodiment of the present application provides a bitstream, the bitstream being generated by bit encoding based on information to be encoded, wherein the information to be encoded includes at least: adaptive context identification information of a current point, geometry information of the current point, and a zero-run value corresponding to the current point.
In an eighth aspect, an embodiment of the present application provides a computer storage medium, wherein the computer storage medium stores a computer program; when the computer program is executed by a first processor, the point cloud encoding method described above is implemented, or, when the computer program is executed by a second processor, the point cloud decoding method described above is implemented.
The embodiments of the present application provide a point cloud encoding and decoding method, an encoder, a decoder, a bitstream and a storage medium. The decoder determines an index value, determines a decoding coefficient of a current point according to the context indicated by the index value, and determines an attribute coefficient of the current point according to the decoding coefficient. The encoder determines an index value, determines an encoding coefficient of the current point according to the context indicated by the index value, and determines an attribute coefficient of the current point according to the encoding coefficient. That is to say, in the embodiments of the present application, when the attribute coefficients are determined using contexts, the correlation between already encoded/decoded attribute coefficients, as well as related parameters, can be fully utilized to adaptively select different contexts for encoding and decoding. A variety of different adaptive context modes can therefore be introduced, and the encoding and decoding of attribute information is no longer limited to the use of a fixed context, which improves the encoding and decoding performance of point cloud attributes.
Brief Description of the Drawings
FIG. 1A is a schematic diagram of a three-dimensional point cloud image provided by an embodiment of the present application;
FIG. 1B is a partially enlarged schematic diagram of a three-dimensional point cloud image provided by an embodiment of the present application;
FIG. 2A is a schematic diagram of a point cloud image under different viewing angles provided by an embodiment of the present application;
FIG. 2B is a schematic diagram of the data storage format corresponding to FIG. 2A provided by an embodiment of the present application;
FIG. 3 is a schematic diagram of a network architecture for point cloud encoding and decoding provided by an embodiment of the present application;
FIG. 4 is a schematic diagram of the composition structure of a point cloud encoder provided by an embodiment of the present application;
FIG. 5 is a schematic diagram of the composition structure of a point cloud decoder provided by an embodiment of the present application;
FIG. 6 shows a schematic diagram of the composition framework of a point cloud encoder;
FIG. 7 shows a schematic diagram of the composition framework of a point cloud decoder;
FIG. 8 is a schematic flowchart of the implementation of the point cloud decoding method proposed in an embodiment of the present application;
FIG. 9 is a schematic flowchart of the implementation of the point cloud encoding method proposed in an embodiment of the present application;
FIG. 10 is a first schematic diagram of the composition structure of the encoder;
FIG. 11 is a second schematic diagram of the composition structure of the encoder;
FIG. 12 is a first schematic diagram of the composition structure of the decoder;
FIG. 13 is a second schematic diagram of the composition structure of the decoder.
具体实施方式
为了能够更加详尽地了解本申请实施例的特点与技术内容,下面结合附图对本申请实施例的实现进行详细阐述,所附附图仅供参考说明之用,并非用来限定本申请实施例。
除非另有定义,本文所使用的所有的技术和科学术语与属于本申请的技术领域的技术人员通常理解的含义相同。本文中所使用的术语只是为了描述本申请实施例的目的,不是旨在限制本申请。
在以下的描述中,涉及到“一些实施例”,其描述了所有可能实施例的子集,但是可以理解,“一些实施例”可以是所有可能实施例的相同子集或不同子集,并且可以在不冲突的情况下相互结合。
还需要指出,本申请实施例所涉及的术语“第一\第二\第三”仅是用于区别类似的对象,不代表针对对象的特定排序,可以理解地,“第一\第二\第三”在允许的情况下可以互换特定的顺序或先后次序,以使这里描述的本申请实施例能够以除了在这里图示或描述的以外的顺序实施。
点云(Point Cloud)是物体表面的三维表现形式,通过光电雷达、激光雷达、激光扫描仪、多视角相机等采集设备,可以采集得到物体表面的点云(数据)。
点云是空间中一组无规则分布的、表达三维物体或场景的空间结构及表面属性的离散点集,图1A展示了三维点云图像和图1B展示了三维点云图像的局部放大图,可以看到点云表面是由分布稠密的点所组成的。
二维图像在每一个像素点均有信息表达,分布规则,因此不需要额外记录其位置信息;然而点云中的点在三维空间中的分布具有随机性和不规则性,因此需要记录每一个点在空间中的位置,才能完整地表达一幅点云。与二维图像类似,采集过程中每一个位置均有对应的属性信息,通常为RGB颜色值,颜色值反映物体的色彩;对于点云来说,每一个点所对应的属性信息除了颜色信息以外,还有比较常见的是反射率(reflectance)值,反射率值反映物体的表面材质。因此,点云中的点可以包括点的位置信息和点的属性信息。例如,点的位置信息可以是点的三维坐标信息(x,y,z)。点的位置信息也可称为点的几何信息。例如,点的属性信息可以包括颜色信息(三维颜色信息)和/或反射率(一维反射率信息r)等等。例如,颜色信息可以是任意一种色彩空间上的信息。例如,颜色信息可以是RGB信息。其中,R表示红色(Red,R),G表示绿色(Green,G),B表示蓝色(Blue,B)。再如,颜色信息可以是亮度色度(YCbCr,YUV)信息。其中,Y表示明亮度(Luma),Cb(U)表示蓝色色差,Cr(V)表示红色色差。
根据激光测量原理得到的点云,点云中的点可以包括点的三维坐标信息和点的反射率值。再如,根据摄影测量原理得到的点云,点云中的点可以可包括点的三维坐标信息和点的三维颜色信息。再如,结合激光测量和摄影测量原理得到点云,点云中的点可以可包括点的三维坐标信息、点的反射率值和点的三维颜色信息。
如图2A和图2B所示为一幅点云图像及其对应的数据存储格式。其中,图2A提供了点云图像的六个观看角度,图2B由文件头信息部分和数据部分组成,头信息包含了数据格式、数据表示类型、点云总点数、以及点云所表示的内容。例如,点云为“.ply”格式,由ASCII码表示,总点数为207242,每个点具有三维坐标信息(x,y,z)和三维颜色信息(r,g,b)。
点云可以按获取的途径分为:
静态点云:即物体是静止的,获取点云的设备也是静止的;
动态点云:物体是运动的,但获取点云的设备是静止的;
动态获取点云:获取点云的设备是运动的。
例如,按点云的用途分为两大类:
类别一:机器感知点云,其可以用于自主导航系统、实时巡检系统、地理信息系统、视觉分拣机器人、抢险救灾机器人等场景;
类别二:人眼感知点云,其可以用于数字文化遗产、自由视点广播、三维沉浸通信、三维沉浸交互等点云应用场景。
点云可以灵活方便地表达三维物体或场景的空间结构及表面属性,并且由于点云通过直接对真实物体采样获得,在保证精度的前提下能提供极强的真实感,因而应用广泛,其范围包括虚拟现实游戏、计算机辅助设计、地理信息系统、自动导航系统、数字文化遗产、自由视点广播、三维沉浸远程呈现、生物组织器官三维重建等。
点云的采集主要有以下途径:计算机生成、3D激光扫描、3D摄影测量等。计算机可以生成虚拟三维物体及场景的点云;3D激光扫描可以获得静态现实世界三维物体或场景的点云,每秒可以获取百万级点云;3D摄影测量可以获得动态现实世界三维物体或场景的点云,每秒可以获取千万级点云。这些技术降低了点云数据获取成本和时间周期,提高了数据的精度。点云数据获取方式的变革,使大量点云数据的获取成为可能,伴随着应用需求的增长,海量3D点云数据的处理遭遇存储空间和传输带宽限制的瓶颈。
示例性地,以帧率为30帧每秒(fps)的点云视频为例,每帧点云的点数为70万,每个点具有坐标信息xyz(float)和颜色信息RGB(uchar),则10s点云视频的数据量大约为0.7million×(4Byte×3+1Byte×3)×30fps×10s=3.15GB,其中, 1Byte为8bit,而YUV采样格式为4:2:0,帧率为24fps的1280×720二维视频,其10s的数据量约为1280×720×12bit×24fps×10s≈0.33GB,10s的两视角三维视频的数据量约为0.33×2=0.66GB。由此可见,点云视频的数据量远超过相同时长的二维视频和三维视频的数据量。因此,为更好地实现数据管理,节省服务器存储空间,降低服务器与客户端之间的传输流量及传输时间,点云压缩成为促进点云产业发展的关键问题。
也就是说,由于点云是海量点的集合,存储点云不仅会消耗大量的内存,而且不利于传输,也没有这么大的带宽可以支持将点云不经过压缩直接在网络层进行传输,因此,需要对点云进行压缩。
目前,可对点云进行压缩的点云编码框架可以是运动图像专家组(Moving Picture Experts Group,MPEG)提供的基于几何的点云压缩(Geometry-based Point Cloud Compression,G-PCC)编解码框架或基于视频的点云压缩(Video-based Point Cloud Compression,V-PCC)编解码框架,也可以是AVS提供的AVS-PCC编解码框架。G-PCC编解码框架可用于针对第一类静态点云和第三类动态获取点云进行压缩,V-PCC编解码框架可用于针对第二类动态点云进行压缩。
本申请实施例提供了一种包含点云解码方法和点云编码方法的点云编解码系统的网络架构,图3为本申请实施例提供的一种点云编解码的网络架构示意图。如图3所示,该网络架构包括一个或多个电子设备13至1N和通信网络01,其中,电子设备13至1N可以通过通信网络01进行视频交互。电子设备在实施的过程中可以为各种类型的具有点云编解码功能的设备,例如,所述电子设备可以包括手机、平板电脑、个人计算机、个人数字助理、导航仪、数字电话、视频电话、电视机、传感设备、服务器等,本申请实施例不作限制。其中,本申请实施例中的解码器或编码器就可以为上述电子设备。
其中,本申请实施例中的电子设备具有点云编解码功能,一般包括点云编码器(即编码器)和点云解码器(即解码器)。
下面以编解码框架为例进行点云压缩技术的说明。
可以理解,点云压缩一般采用点云几何信息和属性信息分别压缩的方式,在编码端,首先在几何编码器中编码点云几何信息,然后将重建几何信息作为附加信息输入到属性编码器中,辅助点云属性的压缩;在解码端,首先在几何解码器中解码点云几何信息,然后将解码后的几何信息作为附加信息输入到属性解码器中,辅助点云属性的压缩。整个编解码器由预处理/后处理、几何编码/解码、属性编码/解码几部分组成。
本申请实施例提供一种点云编码器,如图4所示为点云压缩的参考框架,该点云编码器11包括几何编码器:坐标平移单元111、坐标量化单元112、八叉树构建单元113、几何熵编码器114、几何重建单元115。属性编码器:属性重上色单元116、颜色空间变换单元117、第一属性预测单元118、量化单元119和属性熵编码器1110。
在编码端的几何编码部分,首先对原始几何信息进行预处理,通过坐标平移单元111将几何原点归一化到点云空间中的最小值位置,通过坐标量化单元112将几何信息从浮点数转化为整形,便于后续的规则化处理;然后对规则化的几何信息进行几何编码,在八叉树构建单元113中采用八叉树结构对点云空间进行递归划分,每次将当前节点划分成八个相同大小的子块,并判断每个子块的占有码字情况,当子块内不包含点时记为空,否则记为非空,在递归划分的最后一层记录所有块的占有码字信息,并进行几何编码;通过八叉树结构表达的几何信息一方面输入到几何熵编码器114中形成几何码流,另一方面在几何重建单元115进行几何重建处理,重建后的几何信息作为附加信息输入到属性编码器中。
在属性编码部分,首先对原始的属性信息进行预处理,由于几何信息在几何编码之后有所异动,因此,通过属性重上色单元116为几何编码后的每一个点重新分配属性值,实现属性重上色。此外,如果处理的属性信息为颜色信息,还需要将原始的颜色信息通过颜色空间变换单元117进行颜色空间变换,将其转变成更符合人眼视觉特性的YUV色彩空间;然后通过第一属性预测单元118对预处理后属性信息进行属性编码,属性编码首先需要将点云进行重排序,重排序的方式是莫顿码,因此属性编码的遍历顺序为莫顿顺序。属性预测方法为基于莫顿顺序的单点预测,即按照莫顿顺序从当前待编码点(当前节点)向前回溯一个点,找到的节点为当前待编码点的预测参考点,然后将预测参考点的属性重建值作为属性预测值,属性残差值为当前待编码点的属性原始值与属性预测值之间的差值;最后通过量化单元119对属性残差值进行量化,将量化后的残差信息输入到属性熵编码器1110中形成属性码流。
本申请实施例还提供一种点云解码器,图5为本申请实施例提供的一种点云解码器的组成结构示意图,如图5所示为点云压缩的参考框架,该点云解码器12包括几何解编码器:几何熵解码器121、八叉树重建单元122、坐标反量化单元123、坐标反平移单元124。属性解码器:属性熵解码器125、反量化单元126、第二属性预测单元127和颜色空间反变换单元128。
在解码端,同样采用几何和属性分别解码的方式。在几何解码部分,首先通过几何熵解码器121对几何码流进行熵解码,得到每个节点的几何信息,然后按照和几何编码相同的方式通过八叉树重建单元122构建八叉树结构,结合解码几何重建出坐标变换后的、通过八叉树结构表达的几何信息,一方面将该信息通过坐标反量化单元123进行坐标反量化和通过坐标反平移单元124进行反平移,得到解码几何信息。另一方面作为附加信息输入到属性解码器中。在属性解码部分,按照与编码端相同的方式构建莫顿顺序,先通过属性熵解码器125对属性码流进行熵解码,得到量化后的残差信息;然后通过反量化单元126进行反量化,得到属性残差值;类似的,按照与属性编码相同的方式,通过第二属性预测单元127获得当前待解码点的属性预测值,然后将属性预测值与属性残差值相加,可以恢复出当前待解码点的属性重建值(例如,YUV属性值);最后,经过颜色空间反变换单元128的颜色空间反变换得到解码属性信息。
还可以理解,对于编解码框架而言,可以分为基于Pred,基于Predtrans-资源受限,基于Predtrans-资源不受限,基于Trans。
通用测试条件共4种,具体可以包括:
条件1:几何位置有限度有损、属性有损;
条件2:几何位置无损、属性有损;
条件3:几何位置无损、属性有限度有损;
条件4:几何位置无损、属性无损。
技术路线共4种,以属性压缩所采用的算法进行区分。
技术路线1:Pred(预测)分支,属性压缩采用基于帧内预测的方法:
在编码端,按照一定的顺序(点云原始采集顺序、莫顿顺序、希尔伯特顺序等)处理点云中的点,先采用预测算法得到属性预测值,根据属性值和属性预测值得到属性残差,然后对属性残差进行量化,生成量化残差,最后对量化残差进行编码;
在解码端,按照一定的顺序(点云原始采集顺序、莫顿顺序、希尔伯特顺序等)处理点云中的点,先采用预测算法得到属性预测值,然后解码获取量化残差,再对量化残差进行反量化,最后根据属性预测值和反量化后的残差,获得属性重建值。
技术路线2:基于Predtrans-资源受限(基于预测变换分支—资源受限),属性压缩采用基于帧内预测和k元离散余弦变换(Discrete Cosine Transform,DCT)变换的方法,在编码量化后的变换系数时,有最大点数X(如4096)的限制,即最多每X点为一组进行编码:
在编码端,按照一定的顺序(点云原始采集顺序、莫顿顺序、希尔伯特顺序等)处理点云中的点,先将整个点云分成长度最大为Y(如2)的若干小组,然后将这若干个小组组合成若干个大组(每个大组中的点数不超过X,如4096),然后采用预测算法得到属性预测值,根据属性值和属性预测值得到属性残差,以小组为单位对属性残差进行DCT变换,生成变换系数,再对变换系数进行量化,生成量化后的变换系数,最后以大组为单位对量化后的变换系数进行编码;
在解码端,按照一定的顺序(点云原始采集顺序、莫顿顺序、希尔伯特顺序等)处理点云中的点,先将整个点云分成长度最大为Y(如2)的若干小组,然后将这若干个小组组合成若干个大组(每个大组中的点数不超过X,如4096),以大组为单位解码获取量化后的变换系数,然后采用预测算法得到属性预测值,再以小组为单位对量化后的变换系数进行反量化、反变换,最后根据属性预测值和反量化、反变换后的系数,获得属性重建值。
技术路线3:基于Predtrans-资源不受限(基于预测变换分支—资源不受限),属性压缩采用基于帧内预测和DCT变换的方法,在编码量化后的变换系数时,没有最大点数X的限制,即所有系数一起进行编码:
在编码端,按照一定的顺序(点云原始采集顺序、莫顿顺序、希尔伯特顺序等)处理点云中的点,先将整个点云分成长度最大为Y(如2)的若干小组,然后采用预测算法得到属性预测值,根据属性值和属性预测值得到属性残差,以小组为单位对属性残差进行DCT变换,生成变换系数,再对变换系数进行量化,生成量化后的变换系数,最后对整个点云的量化后的变换系数进行编码;
在解码端,按照一定的顺序(点云原始采集顺序、莫顿顺序、希尔伯特顺序等)处理点云中的点,先将整个点云分成长度最大为Y(如2)的若干小组,解码获取整个点云的量化后的变换系数,然后采用预测算法得到属性预测值,再以小组为单位对量化后的变换系数进行反量化、反变换,最后根据属性预测值和反量化、反变换后的系数,获得属性重建值。
技术路线4:基于Trans分支(多层变换分支),属性压缩采用基于多层小波变换的方法:
在编码端,对整个点云进行多层小波变换,生成变换系数,然后对变换系数进行量化,生成量化后的变换系数,最后对整个点云的量化后的变换系数进行编码;
在解码端,解码获取整个点云的量化后的变换系数,然后对量化后的变换系数进行反量化、反变换,获得属性重建值。
在技术路线1中，系数可以为量化残差，在上述技术路线2、3、4中，系数可以为量化后的变换系数。
可见,对于编码/解码端,并没有涉及自适应选择上下文的模式进行编解码的方案。
下面以属性编解码框架为例进行点云压缩技术的说明。
在点云编码器框架中,点云的几何信息和每点所对应的属性信息是分开进行编码的。目前的参考属性编码框架可以分为基于Pred分支,基于PredLift分支,基于RAHT分支。
图6示出了一种点云编码器的组成框架示意图。如图6所示,在几何编码过程中,对几何信息进行坐标转换,使点云全都包含在一个包围盒(Bounding Box)中,然后再进行量化,这一步量化主要起到缩放的作用,由于量化取整,使得一部分点云的几何信息相同,于是再基于参数来决定是否移除重复点,量化和移除重复点这一过程又被称为体素化过程。接着对Bounding Box进行八叉树划分或者预测树构建。在该过程中,针对划分的叶子结点中的点进行算术编码,生成二进制的几何比特流;或者,针对划分产生的交点(Vertex)进行算术编码(基于交点进行表面拟合),生成二进制的几何比特流。在属性编码过程中,几何编码完成,对几何信息进行重建后,需要先进行颜色转换,将颜色信息(即属性信息)从RGB颜色空间转换到YUV颜色空间。然后,利用重建的几何信息对点云重新着色,使得未编码的属性信息与重建的几何信息对应起来。属性编码主要针对颜色信息进行,在颜色信息编码过程中,主要有两种变换方法,一是依赖于细节层次(Level of Detail,LOD)划分的基于距离的提升变换,二是直接进行区域自适应分层变换(Region Adaptive Hierarchal Transform,RAHT),这两种方法都会将颜色信息从空间域转换到频域,通过变换得到高频系数和低频系数,最后对系数进行量化,再对量化系数进行算术编码,可以生成二进制的属性比特流。
图7示出了一种点云解码器的组成框架示意图。如图7所示,针对所获取的二进制比特流,首先对二进制比特流中的几何比特流和属性比特流分别进行独立解码。在对几何比特流的解码时,通过算术解码-重构八叉树/重构预测树-重建几何-坐标逆转换,得到点云的几何信息;在对属性比特流的解码时,通过算术解码-反量化-LOD划分/RAHT-颜色逆转换,得到点云的属性信息,基于几何信息和属性信息还原待编码的点云数据(即输出点云)。
需要说明的是,在如图6或图7所示,目前的点云几何编解码可以分为基于八叉树的几何编解码(用虚线框标识)和基于预测树的几何编解码(用点划线框标识)。
通用测试条件共4种,具体可以包括:
条件1:几何位置无损、属性有损;
条件2:几何位置有损、属性有损;
条件3:几何位置无损、属性无损;
条件4:几何位置无损、属性有限度有损。
技术路线共3种,以属性压缩所采用的算法进行区分。
技术路线1:Pred分支(仅针对条件3,条件4):
在编码端,利用Pred的预测方法得到属性残差系数,对属性残差系数进行熵编码;
在解码端,熵解码得到属性残差系数,利用Pred的预测方法还原原始值。
技术路线2:Predlift分支(仅针对条件1,条件2):
在编码端,利用Predlift的方法得到属性变换系数,对属性变换系数进行熵编码;
在解码端,熵解码得到属性变换系数,利用Predlift的变换方法还原原始值。
技术路线3:RAHT分支(仅针对条件1,条件2):
在编码端,利用RAHT方法得到属性变换系数,对属性变换系数进行熵编码;
在解码端,熵解码得到属性变换系数,利用RAHT方法还原原始值。
对于上述技术路线产生的属性残差系数(pred编码产生的属性残差系数)或者属性变换系数(predlift编码或者raht产生的属性变换系数),进行属性熵编解码。
对于上述点云属性压缩的算法,对产生的属性残差/变换系数进行编解码时,对于编码/解码端,仅仅采用了较为固定简单的上下文的选择方法来完成属性系数的编解码。
也就是说,目前在编码/解码属性系数值的过程中,没有考虑到充分利用已经编码/解码的属性系数之间的相关性来自适应选取不同的上下文进行编解码,因此无法引入多种不同的自适应上下文的模式,进而降低了点云属性的编解码性能。
为了解决上述问题,本申请实施例提供了一种点云编解码方法,在使用上下文对属性系数进行确定时,可以充分利用已经编码/解码的属性系数之间的相关性,以及相关参数来自适应选取不同的上下文进行编解码,从而能够引入多种不同的自适应上下文的模式,而不再仅局限于使用固定的上下文进行属性信息的编解码,从而可以提升点云属性的编解码性能。
下面将结合附图对本申请各实施例进行详细说明。
本申请的一个实施例提出的一种点云解码方法,图8为本申请实施例提出的点云解码方法的实现流程示意图,如图8所示,在对点云进行解码处理时可以包括以下步骤:
步骤101、确定索引值。
在本申请的实施例中,可以先确定索引值。
需要说明的是,本申请实施例的解码方法具体是指点云解码方法,该方法可以应用于点云解码器(也可简称为“解码器”)。
需要说明的是,在本申请实施例中,待处理点云包括至少一个节点。其中,对于待处理点云中的节点,在对该节点进行解码时,其可以作为待处理点云中的待解码节点,而且该节点的周围存在有多个已解码节点。在这里,当前节点(当前点)即是这至少一个节点中当前需要解码的待解码节点。
进一步地,在本申请实施例中,对于待处理点云中的每一个节点,其对应一个几何信息和一个属性信息;其中,几何信息表征该点的空间关系,属性信息表征该点的属性的相关信息。
在这里,属性信息可以为颜色信息,也可以是反射率或者其它属性,本申请实施例不作具体限定。其中,当属性信息为颜色信息时,具体可以为任意颜色空间的颜色信息。示例性地,属性信息可以为RGB空间的颜色信息,也可以为YUV空间的颜色信息,还可以为YCbCr空间的颜色信息等等,本申请实施例也不作具体限定。
还需要说明的是,在本申请实施例中,解码器可以按照预设解码顺序对这至少一个节点进行排列,以便确定每一节点对应的索引序号。这样,根据每一节点对应的索引序号,解码器能够按照预设解码顺序处理待处理点云中的每一节点。
在一些实施例中,预设解码顺序可以为下述其中之一:点云原始顺序、莫顿顺序、希尔伯特顺序等等,本申请实施例不作具体限定。
进一步地,在本申请的实施例中,索引值可以用于确定当前点的属性系数所使用的上下文。其中,如果当前点的属性信息为颜色信息,那么对应于当前点的不同的颜色分量,可以确定不同的索引值。
也就是说,在本申请的实施例中,索引值可以包括第一索引值、第二索引值以及第三索引值中的至少一个。其中,第一索引值、第二索引值以及第三索引值可以分别对应于当前点的三个颜色分量,即第一索引值、第二索引值以及第三索引值可以分别用于确定当前点的三个颜色分量的属性系数所使用的第一上下文、第二上下文、第三上下文。
示例性的,在本申请的实施例中,假设当前点的属性系数为颜色分量的属性系数,那么,第一索引值可以用于确定当前点的R分量的属性系数所使用的第一上下文,第二索引值可以用于确定当前点的G分量的属性系数所使用的第二上下文,第三索引值可以用于确定当前点的B分量的属性系数所使用的第三上下文。
示例性的,在本申请的实施例中,假设当前点的属性系数为颜色分量的属性系数,那么,第一索引值可以用于确定当前点的Y分量的属性系数所使用的第一上下文,第二索引值可以用于确定当前点的U分量的属性系数所使用的第二上下文,第三索引值可以用于确定当前点的V分量的属性系数所使用的第三上下文。
因此,在本申请的实施例中,可以先确定当前点的第一索引值、第二索引值以及第三索引值中的至少一个。
进一步地,在本申请的实施例中,解码码流,可以先确定当前点的自适应上下文标识信息;如果自适应上下文标识信息指示使用自适应上下文确定当前点的属性系数,那么可以执行第一索引值,和/或第二索引值,和/或第三索引值的确定流程。
需要说明的是,在本申请的实施例中,可以将自适应上下文标识信息理解为一个表明是否对点云中的节点使用自适应上下文的标志位。具体地,解码器解码码流,可以确定作为自适应上下文标识信息的一个变量,从而可以通过该变量的取值来实现自适应上下文标识信息的确定。
也就是说,在本申请的实施例中,自适应上下文标识信息的取值不同,确定当前点的属性系数所使用的上下文的方法也不同。其中,可以根据自适应上下文标识信息确定是否使用自适应上下文确定当前点的部分或者全部颜色分量的属性系数。
可以理解的是,在本申请的实施例中,基于自适应上下文标识信息的取值,在进行点云的解码时,可以选择使用自适应上下文确定当前点的属性系数,也可以选择不使用自适应上下文确定当前点的属性系数。
示例性的,在本申请的实施例中,如果自适应上下文标识信息的取值为1,那么可以选择使用自适应上下文确定当前点的属性系数;如果自适应上下文标识信息的取值为0,那么可以选择不使用自适应上下文确定当前点的属性系数。
需要说明的是,在本申请的实施例中,自适应上下文标识信息的取值也可以设置为其他数值或参数,本申请不作任何限定。
可以理解的是,在本申请的实施例中,在解码码流确定自适应上下文标识信息之后,如果自适应上下文标识信息指示使用自适应上下文确定当前点的属性系数,那么可以进一步确定用于指示上下文的索引值。即在确定自适应上下文标识信息指示使用自适应上下文之后,再执行索引值的确定流程。
当然,也可以不进行自适应上下文标识信息确定流程。也就是说,可以预先设置是否使用自适应上下文确定当前点的属性系数,也可以预先设置是否使用自适应上下文确定当前点的全部颜色分量中的一个或多个颜色分量的属性系数。即是否对部分或者全部颜色分量使用自适应上下文,可以不依赖于自适应上下文标识信息的取值而独立执行。
进一步地,在本申请的实施例中,由于可以根据自适应上下文标识信息确定是否使用自适应上下文确定当前点的部分或者全部颜色分量的属性系数,因此,在进行当前点的属性系数的确定时,可以对当前点的部分或者全部颜色分量的属性系数使用自适应上下文,也可以对当前点的部分或者全部颜色分量的属性系数使用预先设置的上下文。
进一步地,在本申请的实施例中,在确定当前点的索引值时,可以参考当前点的几何信息和/或当前点对应的零游程值,因此,还可以解码码流,确定当前点的几何信息和当前点对应的零游程值。
需要说明的是,在本申请的实施例中,当前点的几何信息可以包括当前点的位置坐标信息。例如,当前点的几何信息可以为当前点对应的空间坐标信息(x,y,z)。
可以理解的是,在本申请的实施例中,当前点对应的零游程值可以包括当前点的零游程值,或当前点的前一个零游程值,或当前点的前一个非0的零游程值。
需要说明的是，在本申请的实施例中，零游程值run_length可以用于对属性系数是否为0进行计数。其中，对于颜色属性，如果零游程值run_length不为0（或大于0），则可以确定当前点的全部颜色分量的属性系数均为0，如果零游程值run_length为0，则可以确定当前点的全部颜色分量的属性系数不全为0。
可以理解的是,在本申请的实施例中,对于当前点的全部颜色分量的属性系数,如果零游程值run_length指示当前点的全部颜色分量的属性系数均为0,那么便不需要再进行属性系数的确定,而是可以先对零游程值进行自减1处理,以更新零游程值,然后再根据零游程值确定下一个点的属性系数。对于下一个点的属性系数,继续根据零游程值run_length判断全部颜色分量的属性系数是否均为0,以确定是否需要对全部颜色分量的属性系数进行确定。
示例性的,在本申请的实施例中,如果解码码流确定的当前点的零游程值run_length为3,大于0,那么可以确定当前点的全部颜色分量的属性系数均为0,因此不需要解码当前点的属性系数,可以选择先对零游程值进行自减1处理,即执行--run_length操作,然后再根据零游程值确定下一个点的属性系数。对于下一个点的属性系数,对应的零游程值run_length为2,大于0,那么可以确定该点的全部颜色分量的属性系数均为0,因此不需要解码该点的全部颜色分量的属性系数,继续对零游程值进行自减1处理,即执行--run_length操作。
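作为对上述零游程值处理流程的补充说明，下面给出一段示意性的C++代码草图，其中 decodeRunLength、decodeNonZero 等回调均为本文为便于说明而假设的接口，并非对具体实现方式的限定：
```cpp
#include <cstdint>
#include <functional>
#include <vector>

struct Coeff3 { int32_t c[3] = {0, 0, 0}; };  // 当前点三个颜色分量的属性系数

// 示意：decodeRunLength、decodeNonZero 为假设的熵解码回调，由调用方提供
std::vector<Coeff3> decodeAttributeCoefficients(
    size_t pointCount,
    const std::function<int32_t()>& decodeRunLength,
    const std::function<Coeff3()>& decodeNonZero) {
  std::vector<Coeff3> coeffs(pointCount);
  int32_t run_length = 0;
  for (size_t i = 0; i < pointCount; ++i) {
    if (run_length > 0) {            // 零游程尚未结束：当前点属性系数均为0
      --run_length;                  // 仅对零游程值自减1
      continue;
    }
    run_length = decodeRunLength();  // 解码新的零游程值
    if (run_length > 0) {            // 从当前点起 run_length 个点的系数均为0
      --run_length;
      continue;
    }
    coeffs[i] = decodeNonZero();     // 零游程值为0：解码当前点的非零系数组
  }
  return coeffs;
}
```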
进一步地,在本申请的实施例中,对于当前点的一个颜色分量的属性系数,可以先根据几何信息和/或零游程值确定索引值。
示例性的,在本申请的实施例中,可以根据几何信息和/或零游程值确定第一颜色分量对应的第一索引值。
进一步地,在本申请的实施例中,在确定出当前点的一个颜色分量的属性系数之后,即确定出第一属性系数之后,可以根据第一属性系数的绝对值、几何信息以及零游程值中的至少一个,确定索引值。
示例性的,在本申请的实施例中,可以根据第一属性系数的绝对值、几何信息以及零游程值中的至少一个,确定第二颜色分量对应的第二索引值。
进一步地,在本申请的实施例中,在确定出当前点的两个颜色分量的属性系数之后,即确定出第一属性系数和第二属性系数之后,可以根据第一属性系数的绝对值、第二属性系数的绝对值、几何信息以及零游程值中的至少一个,确定索引值。
示例性的,在本申请的实施例中,可以根据第一属性系数的绝对值、第二属性系数的绝对值、几何信息以及零游程值中的至少一个,确定第三颜色分量对应的第三索引值。
进一步地,在本申请的实施例中,在使用当前点对应的零游程值进行索引值的确定时,即基于零游程值确定第一索引值或第二索引值或第三索引值时,可以选择先对零游程值和第一数值进行加法或者减法运算,确定第一运算结果;然后再根据第一运算结果确定索引值。
需要说明的是,在本申请的实施例中,第一数值可以为任意数值。例如,第一数值可以为1,也可以为3,本申请不进行具体限定。
示例性的,在本申请的实施例中,假设当前点对应的零游程值为2,第一数值为1,在基于零游程值进行索引值的确定时,可以对零游程值和第一数值进行减法运算,获得第一运算结果为2-1=1,然后便可以根据第一运算结果确定索引值。
可以理解的是,在本申请的实施例中,在根据第一运算结果确定索引值时,可以选择将第一运算结果确定为索引值,也可以选择将第一运算结果的绝对值确定为索引值,还可以选择利用第一运算结果推导出索引值。本申请不进行具体限定。
进一步地,在本申请的实施例中,在使用当前点对应的零游程值进行索引值的确定时,即基于零游程值确定第一索引值或第二索引值或第三索引值时,可以先对零游程值和第一数值进行加法或者减法运算,确定第一运算结果;然后确定第一运算结果对应的第一数值范围;最后可以按照第一数值范围,以及第一预设索引值与数值范围的对应关系,确定索引值。
可以理解的是,在本申请的实施例中,当前点对应的零游程值的取值可以为大于或者等于0的整数,在对零游程值和第一数值进行加法或者减法运算之后,获得的第一运算结果可以为大于、等于或者小于0的数值。因此,第一运算结果对应的第一数值范围可以为任意的数值范围。
需要说明的是,在本申请的实施例中,第一预设索引值与数值范围的对应关系可以表征数值范围与索引值的映射关系。其中,对于不同的数值范围,可以确定出对应的索引值。例如,表1为第一预设索引值与数值范围的对应关系,如表1所示,
表1
数值范围 索引值
(-3,0] 1
(0,1] 2
(1,2] 3
(2,3] 4
示例性的,在本申请的实施例中,假设第一运算结果为1,那么第一运算结果对应的第一数值范围可以为(0,1],相应的,基于第一预设索引值与数值范围的对应关系所确定的索引值为2。
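下面给出一段示意性的C++代码草图，按照表1所示的对应关系由零游程值推导索引值；其中第一数值取1、采用减法运算，以及对超出表1范围的数值并入端点索引的处理，均为示例性假设：
```cpp
#include <cstdint>

int32_t indexFromRunLength(int32_t runLength) {
  const int32_t kFirstValue = 1;                    // 第一数值（示例取1）
  const int32_t result = runLength - kFirstValue;   // 第一运算结果（此处采用减法）

  // 按表1划分数值范围：(-3,0]->1，(0,1]->2，(1,2]->3，(2,3]->4
  if (result <= 0) return 1;   // 含(-3,0]，更小的数值此处并入索引1（示例性兜底）
  if (result <= 1) return 2;
  if (result <= 2) return 3;
  return 4;                    // 含(2,3]，更大的数值并入索引4（示例性兜底）
}
```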
进一步地,在本申请的实施例中,在使用当前点对应的零游程值进行索引值的确定时,即基于零游程值确定第一索引值或第二索引值或第三索引值时,可以先确定零游程值对应的第二数值范围;然后可以按照第二数值范围,以及第二预设索引值与数值范围的对应关系,确定索引值。
可以理解的是,在本申请的实施例中,当前点对应的零游程值的取值可以为大于或者等于0的整数。因此,零游程值对应的第二数值范围可以为包括大于或者等于0的整数的数值范围。
需要说明的是,在本申请的实施例中,第二预设索引值与数值范围的对应关系可以表征数值范围与索引值的映射关系。其中,对于不同的数值范围,可以确定出对应的索引值。例如,表2为第二预设索引值与数值范围的对应关系,如表2所示,
表2
数值范围 索引值
(0,1] 1
(1,3] 2
(3,5] 3
(5,7] 4
示例性的，在本申请的实施例中，假设当前点对应的零游程值为2，那么零游程值对应的第二数值范围可以为(1,3]，相应的，基于第二预设索引值与数值范围的对应关系所确定的索引值为2。
进一步地,在本申请的实施例中,在使用当前点的几何信息进行索引值的确定时,即基于几何信息确定第一索引值或第二索引值或第三索引值时,可以选择先确定几何信息对应的位置范围;然后按照位置范围,以及预设位置范围与索引值的对应关系,确定索引值。
可以理解的是,在本申请的实施例中,由于当前点的几何信息可以包括当前点的位置坐标信息,其中可以包括不同的空间分量,如x分量、y分量、z分量,因此在利用当前点的几何信息进行位置范围的确定时,可以参考不同空间分量中的部分或者全部进行范围的划分。例如,可以仅按照x分量确定当前点的几何信息对应的位置范围,也可以按照y分量和z分量确定当前点的几何信息对应的位置范围,还可以按照x分量、y分量、z分量确定当前点的几何信息对应的位置范围。
需要说明的是,在本申请的实施例中,预设位置范围与索引值的对应关系可以表征位置范围与索引值的映射关系。其中,对于不同的位置范围,可以确定出对应的索引值。例如,表3为预设位置范围与索引值的对应关系,如表3所示,
表3
位置范围 索引值
位置范围1 1
位置范围2 2
位置范围3 3
位置范围4 4
示例性的,在本申请的实施例中,假设当前点的几何信息为(x,y,z),对应的位置范围为位置范围3,相应的,基于预设位置范围与索引值的对应关系所确定的索引值为3。
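作为示例，下面给出一段示意性的C++代码草图，按照当前点几何信息所落入的位置范围确定索引值；其中以坐标分量之和划分位置范围以及各阈值的取值均为示例性假设，并非对位置范围划分方式的限定：
```cpp
#include <cstdint>

// 示意：由当前点的几何信息 (x, y, z) 确定位置范围并映射为表3中的索引值1~4
int32_t indexFromGeometry(int32_t x, int32_t y, int32_t z) {
  const int64_t sum = static_cast<int64_t>(x) + y + z;
  if (sum < 256)  return 1;  // 位置范围1
  if (sum < 512)  return 2;  // 位置范围2
  if (sum < 1024) return 3;  // 位置范围3
  return 4;                  // 位置范围4
}
```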
进一步地,在本申请的实施例中,在使用当前点的第一属性系数的绝对值进行索引值的确定时,即基于第一属性系数的绝对值确定第一索引值或第二索引值或第三索引值时,可以选择直接将第一属性系数的绝对值设置为索引值。
示例性的,在本申请的实施例中,假设第一属性系数的绝对值为2,那么可以基于第一属性系数的绝对值确定索引值为2。
进一步地,在本申请的实施例中,在使用当前点的第一属性系数的绝对值进行索引值的确定时,即基于第一属性系数的绝对值确定第一索引值或第二索引值或第三索引值时,可以选择先确定第一属性系数的绝对值对应的第三数值范围;然后按照第三数值范围,以及第三预设索引值与数值范围的对应关系,确定索引值。
可以理解的是,在本申请的实施例中,第一属性系数的绝对值可以为大于或者等于0的整数。因此,第一属性系数的绝对值对应的第三数值范围可以为包括大于或者等于0的整数的数值范围。
需要说明的是,在本申请的实施例中,第三预设索引值与数值范围的对应关系可以表征数值范围与索引值的映射关系。其中,对于不同的数值范围,可以确定出对应的索引值。例如,表4为第三预设索引值与数值范围的对应关系,如表4所示,
表4
数值范围 索引值
(0,1] 1
(1,2] 2
(2,4] 3
(4,6] 4
示例性的,在本申请的实施例中,假设第一属性系数的绝对值为3,那么第一属性系数的绝对值对应的第三数值范围可以为(2,4],相应的,基于第三预设索引值与数值范围的对应关系所确定的索引值为3。
进一步地,在本申请的实施例中,在使用当前点的第一属性系数的绝对值进行索引值的确定时,即基于第一属性系数的绝对值确定第一索引值或第二索引值或第三索引值时,可以选择先对第一属性系数的绝对值和第二数值进行加法或者减法运算,确定第二运算结果;然后再根据第二运算结果确定索引值。
需要说明的是,在本申请的实施例中,第二数值可以为任意数值。例如,第二数值可以为-1,也可以为2,本申请不进行具体限定。
示例性的,在本申请的实施例中,假设当前点的第一属性系数的绝对值为1,第二数值为-2,在基于第一属性系数的绝对值进行索引值的确定时,可以对第一属性系数的绝对值和第二数值进行加法运算,获得第二运算结果为-2+1=-1,然后便可以根据第二运算结果确定索引值。
可以理解的是,在本申请的实施例中,在根据第二运算结果确定索引值时,可以选择将第二运算结果确定为索引值,也可以选择将第二运算结果的绝对值确定为索引值,还可以选择利用第二运算结果推导出索引值。本申请不进行具体限定。
进一步地,在本申请的实施例中,在使用当前点的第一属性系数的绝对值进行索引值的确定时,即基于第一属性系数的绝对值确定第一索引值或第二索引值或第三索引值时,可以选择先对第一属性系数的绝对值和第二数值进行加法或者减法运算,确定第二运算结果;然后再确定第二运算结果对应的第四数值范围;最后便可以按照第四数值范围,以及第四预设索引值与数值范围的对应关系,确定所索引值。
可以理解的是,在本申请的实施例中,第一属性系数的绝对值可以为大于或者等于0的整数,在对第一属性系数的绝对值和第二数值进行加法或者减法运算之后,获得的第二运算结果可以为大于、等于或者小于0的数值。因此,第二运算结果对应的第四数值范围可以为任意的数值范围。
需要说明的是,在本申请的实施例中,第四预设索引值与数值范围的对应关系可以表征数值范围与索引值的映射关系。其中,对于不同的数值范围,可以确定出对应的索引值。例如,表5为第四预设索引值与数值范围的对应关系,如表5所示,
表5
数值范围 索引值
(-3,-1] 1
(-1,1] 2
(1,2] 3
(2,3] 4
示例性的，在本申请的实施例中，假设第二运算结果为1，那么第二运算结果对应的第四数值范围可以为(-1,1]，相应的，基于第四预设索引值与数值范围的对应关系所确定的索引值为2。
进一步地,在本申请的实施例中,在使用当前点的第二属性系数的绝对值进行索引值的确定时,即基于第二属性系数的绝对值确定第一索引值或第二索引值或第三索引值时,可以选择直接将第二属性系数的绝对值设置为索引值。
示例性的，在本申请的实施例中，假设第二属性系数的绝对值为1，那么可以基于第二属性系数的绝对值确定索引值为1。
进一步地,在本申请的实施例中,在使用当前点的第二属性系数的绝对值进行索引值的确定时,即基于第二属性系数的绝对值确定第一索引值或第二索引值或第三索引值时,可以选择先确定第二属性系数的绝对值对应的第五数值范围;然后按照第五数值范围,以及第五预设索引值与数值范围的对应关系,确定索引值。
可以理解的是,在本申请的实施例中,第二属性系数的绝对值可以为大于或者等于0的整数。因此,第二属性系数的绝对值对应的第五数值范围可以为包括大于或者等于0的整数的数值范围。
需要说明的是,在本申请的实施例中,第五预设索引值与数值范围的对应关系可以表征数值范围与索引值的映射关系。其中,对于不同的数值范围,可以确定出对应的索引值。例如,表6为第五预设索引值与数值范围的对应关系,如表6所示,
表6
数值范围 索引值
(0,1] 1
(1,2] 2
(2,3] 3
(3,5] 4
示例性的,在本申请的实施例中,假设第二属性系数的绝对值为4,那么第二属性系数的绝对值对应的第五数值范围可以为(3,5],相应的,基于第五预设索引值与数值范围的对应关系所确定的索引值为4。
进一步地,在本申请的实施例中,在使用当前点的第二属性系数的绝对值进行索引值的确定时,即基于第二属性系数的绝对值确定第一索引值或第二索引值或第三索引值时,可以选择先对第二属性系数的绝对值和第三数值进行加法或者减法运算,确定第三运算结果;然后再根据第三运算结果确定索引值。
需要说明的是,在本申请的实施例中,第三数值可以为任意数值。例如,第三数值可以为0,也可以为1,本申请不进行具体限定。
示例性的，在本申请的实施例中，假设当前点的第二属性系数的绝对值为2，第三数值为1，在基于第二属性系数的绝对值进行索引值的确定时，可以对第二属性系数的绝对值和第三数值进行加法运算，获得第三运算结果为2+1=3，然后便可以根据第三运算结果确定索引值。
可以理解的是,在本申请的实施例中,在根据第三运算结果确定索引值时,可以选择将第三运算结果确定为索引值,也可以选择将第三运算结果的绝对值确定为索引值,还可以选择利用第三运算结果推导出索引值。本申请不进行具体限定。
进一步地,在本申请的实施例中,在使用当前点的第二属性系数的绝对值进行索引值的确定时,即基于第二属性系数的绝对值确定第一索引值或第二索引值或第三索引值时,可以选择先对第二属性系数的绝对值和第三数值进行加法或者减法运算,确定第三运算结果;然后再确定第三运算结果对应的第六数值范围;最后便可以按照第六数值范围,以及第六预设索引值与数值范围的对应关系,确定索引值。
可以理解的是,在本申请的实施例中,第二属性系数的绝对值可以为大于或者等于0的整数,在对第二属性系数的绝对值和第三数值进行加法或者减法运算之后,获得的第三运算结果可以为大于、等于或者小于0的数值。因此,第三运算结果对应的第六数值范围可以为任意的数值范围。
需要说明的是,在本申请的实施例中,第六预设索引值与数值范围的对应关系可以表征数值范围与索引值的映射关系。其中,对于不同的数值范围,可以确定出对应的索引值。例如,表7为第六预设索引值与数值范围的对应关系,如表7所示,
表7
数值范围 索引值
(-3,-1] 1
(-1,2] 2
(2,4] 3
(4,7] 4
示例性的，在本申请的实施例中，假设第三运算结果为1，那么第三运算结果对应的第六数值范围可以为(-1,2]，相应的，基于第六预设索引值与数值范围的对应关系所确定的索引值为2。
进一步地,在本申请的实施例中,在根据几何信息和/或零游程值确定索引值时,可以选择使用基于几何信息确定的索引值,也可以选择使用基于零游程值确定的索引值,还可以选择对基于几何信息确定的索引值和基于零游程值确定的索引值进行运算处理,以获得最终的索引值。
也就是说,在本申请的实施例中,对于当前点的一个颜色分量的属性系数,在根据几何信息和/或零游程值确定第一颜色分量对应的第一索引值时,可以将基于几何信息确定的索引值,或基于零游程值确定的索引值,确定为第一索引值;也可以对基于几何信息确定的索引值和基于零游程值确定的索引值进行运算处理,从而可以获得第一索引值。
示例性的,在本申请的实施例中,假设基于几何信息确定的索引值为A1,基于零游程值确定的索引值为A2,那么,可以直接将A1确定为第一索引值,也可以直接将A2确定为第一索引值,也可以对A1和A2进行大小比较,并将二者中的较大值或者较小值确定为第一索引值,还可以对A1和A2进行加法、减法、加权平均等计算,将计算结果确定为第一索引值。本申请不进行具体限定。
进一步地,在本申请的实施例中,在根据第一属性系数的绝对值、几何信息以及零游程值中的至少一个确定索引值时,可以选择使用基于第一属性系数的绝对值确定的索引值,也可以选择使用基于几何信息确定的索引值,也可以选择使用基于零游程值确定的索引值,还可以选择对基于第一属性系数的绝对值确定的索引值,和/或基于几何信息确定的索引值,和/或基于零游程值确定的索引值进行运算处理,以获得最终的索引值。
也就是说,在本申请的实施例中,在确定出当前点的一个颜色分量的属性系数之后,即确定出第一属性系数之后,在根据第一属性系数的绝对值、几何信息以及零游程值中的至少一个确定第二颜色分量对应的第二索引值时,可以基于第一属性系数的绝对值确定的索引值,或基于几何信息确定的索引值,或基于零游程值确定的索引值,确定为第二索引值;也可以对基于第一属性系数的绝对值确定的索引值,和/或基于几何信息确定的索引值,和/或基于零游程值确定的索引值进行运算处理,从而可以获得第二索引值。
示例性的,在本申请的实施例中,假设基于第一属性系数的绝对值确定的索引值为B1,基于几何信息确定的索引值为B2,基于零游程值确定的索引值为B3,那么,可以直接将B1确定为第二索引值,也可以直接将B2确定为第二索引值,也可以直接将B3确定为第二索引值,也可以对B1、B2和B3进行大小比较,并将三者中的较大值或者较小值确定为第二索引值,还可以对B1、B2和B3进行加法、减法、加权平均等计算,将计算结果确定为第二索引值。本申请不进行具体限定。
进一步地,在本申请的实施例中,在根据第一属性系数的绝对值、第二属性系数的绝对值、几何信息以及零游程值中的至少一个确定索引值时,可以选择使用基于第一属性系数的绝对值确定的索引值,也可以选择使用基于第二属性系数的绝对值确定的索引值,也可以选择使用基于几何信息确定的索引值,也可以选择使用基于零游程值确定的索引值,还可以选择对基于第一属性系数的绝对值确定的索引值,和/或基于第二属性系数的绝对值确定的索引值,和/或基于几何信息确定的索引值,和/或基于零游程值确定的索引值进行运算处理,以获得最终的索引值。
也就是说,在本申请的实施例中,在确定出当前点的两个颜色分量的属性系数之后,即确定出第一属性系数和第二属性系数之后,在根据第一属性系数的绝对值、第二属性系数的绝对值、几何信息以及零游程值中的至少一个确定第三颜色分量对应的第三索引值时,可以基于第一属性系数的绝对值确定的索引值,或基于第二属性系数的绝对值确定的索引值,或基于几何信息确定的索引值,或基于零游程值确定的索引值,确定为第三索引值;也可以对基于第一属性系数的绝对值确定的索引值,和/或基于第二属性系数的绝对值确定的索引值,和/或基于几何信息确定的索引值,和/或基于零游程值确定的索引值进行运算处理,从而可以获得第三索引值。
示例性的，在本申请的实施例中，假设基于第一属性系数的绝对值确定的索引值为C1，基于第二属性系数的绝对值确定的索引值为C2，基于几何信息确定的索引值为C3，基于零游程值确定的索引值为C4，那么，可以直接将C1确定为第三索引值，也可以直接将C2确定为第三索引值，也可以直接将C3确定为第三索引值，也可以直接将C4确定为第三索引值，也可以对C1、C2、C3和C4进行大小比较，并将四者中的较大值或者较小值确定为第三索引值，还可以对C1、C2、C3和C4进行加法、减法、加权平均等计算，将计算结果确定为第三索引值。本申请不进行具体限定。
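下面给出一段示意性的C++代码草图，示出将多个候选索引值合成为最终索引值的两种方式（取最大值、等权平均），其中权重取值为示例性假设：
```cpp
#include <algorithm>
#include <cstdint>
#include <initializer_list>

// 示意：取候选索引值中的最大值（假设候选列表非空）
int32_t combineByMax(std::initializer_list<int32_t> candidates) {
  return *std::max_element(candidates.begin(), candidates.end());
}

// 示意：对四个候选索引值做等权平均（整数运算向下取整）
int32_t combineByWeightedAverage(int32_t c1, int32_t c2, int32_t c3, int32_t c4) {
  return (c1 + c2 + c3 + c4) / 4;
}
```
例如，combineByMax({C1, C2, C3, C4}) 即对应于取四者中的较大值作为第三索引值的做法。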
步骤102、根据索引值所指示的上下文确定当前点的解码系数。
在本申请的实施例中,在确定索引值之后,可以进一步根据索引值所指示的上下文确定当前点的解码系数。
可以理解的是,在本申请的实施例中,解码系数可以为使用索引值指示的上下文进行解码处理后所获得的值。
需要说明的是,在本申请的实施例中,由于索引值可以包括当前点的第一索引值、第二索引值以及第三索引值中的至少一个。因此,可以使用不同的颜色分量对应的索引值所指示的上下文确定出相应的颜色分量的解码系数。
可以理解的是,在本申请的实施例中,解码系数可以包括第一解码系数、第二解码系数以及第三解码系数中的至少一个。其中,第一解码系数、第二解码系数以及第三解码系数可以分别对应于当前点的三个颜色分量,即第一解码系数、第二解码系数以及第三解码系数可以分别为使用第一上下文、第二上下文、第三上下文解析获得的。
也就是说,在本申请的实施例中,在根据索引值所指示的上下文确定当前点的解码系数时,可以根据第一索引值所指示的第一上下文确定当前点的第一解码系数,也可以根据第二索引值所指示的第二上下文确定当前点的第二解码系数,还可以根据第三索引值所指示的第三上下文确定当前点的第三解码系数。
步骤103、根据解码系数确定当前点的属性系数。
在本申请的实施例中,在根据索引值所指示的上下文确定当前点的解码系数之后,便可以进一步根据解码系数确定当前点的属性系数。
可以理解的是,在本申请的实施例中,属性系数可以为基于解码系数所确定的属性信息的相关值。
需要说明的是,在本申请的实施例中,由于解码系数可以包括第一解码系数、第二解码系数以及第三解码系数中的至少一个。因此,可以使用不同的颜色分量对应的解码系数确定出相应的颜色分量的属性系数。
也就是说,在本申请的实施例中,在根据解码系数确定当前点的属性系数时,可以根据第一解码系数确定当前点的第一属性系数,也可以根据第二解码系数确定当前点的第二属性系数,还可以根据第三解码系数确定当前点的第三属性系数。
可以理解的是,在本申请的实施例中,当前点的属性系数可以为当前点的属性信息的量化残差或量化后的变换系数。也就是说,属性系数可以为量化残差,也可以为量化后的变换系数。
进一步地,在本申请的实施例中,当前点的属性系数可以包括全部颜色分量的属性系数,即当前点的属性系数可以包括第一属性系数、第二属性系数以及第三属性系数中的至少一个。
示例性的,在本申请的实施例中,假设当前点的属性系数为颜色分量的属性系数,那么,对于R分量,可以使用第一索引值所指示的第一上下文可以确定第一解码系数,进而可以利用第一解码系数确定第一属性系数,对于G分量,可以使用第二索引值所指示的第二上下文可以确定第二解码系数,进而可以利用第二解码系数确定第二属性系数,对于B分量,可以使用第三索引值所指示的第三上下文可以确定第三解码系数,进而可以利用第三解码系数确定第三属性系数。
示例性的,在本申请的实施例中,假设当前点的属性系数为颜色分量的属性系数,那么,对于Y分量,可以使用第一索引值所指示的第一上下文可以确定第一解码系数,进而可以利用第一解码系数确定第一属性系数,对于U分量,可以使用第二索引值所指示的第二上下文可以确定第二解码系数,进而可以利用第二解码系数确定第二属性系数,对于V分量,可以使用第三索引值所指示的第三上下文可以确定第三解码系数,进而可以利用第三解码系数确定第三属性系数。
需要说明的是,在本申请的实施例中,由于在进行当前点的属性系数的确定时,可以对当前点的部分或者全部颜色分量的属性系数使用自适应上下文,也可以对当前点的部分或者全部颜色分量的属性系数使用预先设置的上下文。因此,当前点的任意一个颜色分量的属性系数可以是通过自适应上下文确定的,也可以是通过预先设置的上下文确定的。
可以理解的是,在本申请的实施例中,可以根据第一预设上下文确定第一解码系数;和/或,根据第二预设上下文确定第二解码系数;和/或,根据第三预设上下文确定第三解码系数。
示例性的,在本申请的实施例中,对于当前点的第一颜色分量,可以选择根据第一预设上下文确定第一解码系数,并根据第一解码系数确定第一属性系数,也可以选择确定第一索引值,然后根据第一索引值所指示的第一上下文确定当前点的第一解码系数,并根据第一解码系数确定当前点的第一属性系数;对于当前点的第二颜色分量,可以选择根据第二预设上下文确定第二解码系数,并根据第二解码系数确定第二属性系数,也可以选择确定第二索引值,然后根据第二索引值所指示的第二上下文确定当前点的第二解码系数,并根据第二解码系数确定当前点的第二属性系数;对于当前点的第三颜色分量,可以选择根据第三预设上下文确定第三解码系数,并根据第三解码系数确定第三属性系数,也可以选择确定第三索引值,然后根据第三索引值所指示的第三上下文确定当前点的第三解码系数,并根据第三解码系数确定当前点的第三属性系数。
也就是说,在本申请的实施例中,对于当前点的任意一个颜色分量,既可以使用预先设置的上下文,也可以选择使用自适应上下文。其中,在使用自适应上下文时,既可以基于当前点的几何信息确定的索引值来自适应选择上下文,也可以基于当前点对应的零游程值确定的索引值来自适应选择上下文,还可以基于当前点的其他颜色分量的属性系数(如第一属性系数和/或第二属性系数)确定的索引值来自适应选择上下文。对此本申请不进行具体限定。
可以理解的是,在本申请的实施例中,对于当前点的不同的颜色分量,确定上下文的方式是互相独立的,即不限制不同的颜色分量所使用的上下文的确定方式一定相同。例如,对于第一颜色分量,可以基于当前点对应的零游程值确定的索引值来自适应选择上下文,对于第二颜色分量,可以基于第一属性系数确定的索引值来自适应选择上下文,对于第三颜色分量,可以使用预先设置的上下文。对此本申请不进行具体限定。
需要说明的是，在本申请的实施例中，第一颜色分量、第二颜色分量以及第三颜色分量可以为当前点的全部颜色分量中的不同的颜色分量。例如，第一颜色分量可以为G分量，第二颜色分量可以为B分量，第三颜色分量可以为R分量；或者，第一颜色分量可以为U分量，第二颜色分量可以为Y分量，第三颜色分量可以为V分量。本申请不进行具体限定。
进一步地，在本申请的实施例中，在确定当前点的属性系数之后，可以继续根据当前点的属性系数是否为0来确定是否进行属性系数的符号的确定。其中，如果当前点的属性系数不全为0，那么可以解码码流，确定非0的属性系数的符号。
也就是说,在本申请的实施例中,对于不为0的颜色分量的属性系数,可以继续解码码流,确定该属性系数不为0的颜色分量所对应的属性系数的符号。
示例性的，在本申请的实施例中，如果第一属性系数不为0，那么可以继续进行第一属性系数的符号的确定；如果第二属性系数不为0，那么可以继续进行第二属性系数的符号的确定；如果第三属性系数不为0，那么可以继续进行第三属性系数的符号的确定。
综上所述,通过步骤101至步骤103所提出的点云解码方法,在使用上下文对属性系数进行确定时,可以充分利用已经编码/解码的属性系数之间的相关性,以及相关参数来自适应选取不同的上下文进行编解码,从而能够引入多种不同的自适应上下文的模式,从而可以提升点云属性的编解码性能。其中,对于当前点的任意一个颜色分量的属性系数,既可以选择使用预设上下文(如第一预设上下文、第二预设上下文、第三预设上下文)确定解码系数,进而确定对应的属性系数;也可以选择使用几何信息和/或零游程值确定索引值,再使用索引值指示的上下文确定解码系数,进而确定对应的属性系数;也可以选择使用第一属性系数的绝对值、几何信息以及零游程值中的至少一个确定索引值,再使用索引值指示的上下文确定解码系数,进而确定对应的属性系数;还可以选择使用第一属性系数的绝对值、第二属性系数的绝对值、几何信息以及零游程值中的至少一个确定索引值,再使用索引值指示的上下文确定解码系数,进而确定对应的属性系数。
示例性的,在本申请的实施例中,可以选择对当前点的两个颜色分量的属性系数使用自适应上下文,对另一个颜色分量的属性系数使用预设上下文。其中,可以先对第一个属性系数进行解码;然后利用已解码的第一个属性系数去自适应选择上下文对第二个属性系数进行解码;最后可以利用已解码的第一个属性系数和/或第二个属性系数去自适应选择上下文对第三个属性系数进行解码。
例如,使用第一预设上下文对第一属性系数进行解码;利用已解码的第一属性系数大于等于或者小于等于或者等于等于某几个常量(即确定对应的数值范围),对第二属性系数自适应选择上下文进行解码;利用已解码的第一属性系数大于等于或者小于等于或者等于等于某几个常量,以及已解码的第二属性系数大于等于或者小于等于或者等于等于某几个常量(即确定对应的数值范围),对第三个属性系数自适应选择上下文进行解码。
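下面给出一段示意性的C++代码草图，按照已解码的第一属性系数、第二属性系数的绝对值所落入的数值范围选择上下文序号；其中“等于0/等于1/大于等于2”的范围划分为示例性假设，实际可按前文表格所述的对应关系设定：
```cpp
#include <cstdint>

// 为第二属性系数选择上下文序号：依据 |第一属性系数| 所落入的数值范围
int selectCtxForSecond(int32_t absFirst) {
  return (absFirst == 0) ? 0 : (absFirst == 1 ? 1 : 2);
}

// 为第三属性系数选择上下文序号：依据 |第一属性系数| 与 |第二属性系数| 的组合
int selectCtxForThird(int32_t absFirst, int32_t absSecond) {
  const int i = (absFirst == 0) ? 0 : (absFirst == 1 ? 1 : 2);
  const int j = (absSecond == 0) ? 0 : (absSecond == 1 ? 1 : 2);
  return i * 3 + j;  // 共3×3=9个候选上下文中的一个
}
```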
示例性的,在本申请的实施例中,可以选择对当前点的一个颜色分量的属性系数使用自适应上下文,对另两个颜色分量的属性系数使用预设上下文。其中,可以先对第一个属性系数进行解码;然后对第二个属性系数进行解码;最后可以利用已解码的第一个属性系数和/或第二个属性系数去自适应选择上下文对第三个属性系数进行解码。
例如,使用第一预设上下文对第一属性系数进行解码;使用第二预设上下文对第二属性系数进行解码;最后利用已解码第一属性系数加或减一个常量,与已解码第二属性系数加或减一个常量的大小关系自适应选择上下文对第三属性系数进行解码。
示例性的,在本申请的实施例中,可以选择参考零游程值来对全部颜色分量的属性系数使用自适应上下文。其中,在确定先验信息零游程值run_length之后,可以先利用已解码的runlength的信息对第一个属性系数自适应选择上下文进行解码;然后可以利用已解码的runlength的信息对第二个属性系数自适应选择上下文进行解码;最后可以利用已解码的runlength的信息对第三个属性系数自适应选择上下文进行解码。
例如,利用前一组非0的runlength值对第一属性系数自适应选择上下文进行解码;利用前一组非0的runlength值对第二属性系数自适应选择上下文进行解码;利用前一组非0的runlength值对第三属性系数自适应选择上下文进行解码。
例如,利用前一组runlength值对第一个属性系数自适应选择上下文进行解码;利用前一组runlength值对第二个属性系数自适应选择上下文进行解码;利用前一组runlength值对第三个属性系数自适应选择上下文进行解码。
示例性的,在本申请的实施例中,可以选择参考几何位置(几何信息)来对全部颜色分量的属性系数使用自适应上下文。其中,在确定当前点的几何信息之后,可以先利用当前点的几何信息对第一个属性系数自适应选择上下文进行解码;然后可以利用当前点的几何信息对第二个属性系数自适应选择上下文进行解码;最后可以利用当前点的几何信息对第三个属性系数自适应选择上下文进行解码。
例如,利用当前点的几何信息位置大小对第一个属性系数自适应选择上下文进行解码;利用当前点的几何信息位置大小对第二个属性系数自适应选择上下文进行解码;利用当前点的几何信息位置大小对第三个属性系数自适应选择上下文进行解码。
表8（原文为图像形式的测试结果表，数据未能还原）
表9（原文为图像形式的测试结果表，数据未能还原）
如表8、9所示,本申请实施例提出的点云编解码方法,能够在不增加时间复杂度的情况下,获得稳定的性能增益,可以提升点云编解码的性能。
本申请实施例提供了一种点云解码方法,解码器确定索引值;根据索引值所指示的上下文确定当前点的解码系数;根据解码系数确定当前点的属性系数。也就是说,在本申请的实施例中,在使用上下文对属性系数进行确定时,可以充分利用已经编码/解码的属性系数之间的相关性,以及相关参数来自适应选取不同的上下文进行编解码,从而能够引入多种不同的自适应上下文的模式,而不再仅局限于使用固定的上下文进行属性信息的编解码,从而可以提升点云属性的编解码性能。
本申请的一个实施例提出的一种点云编码方法,图9为本申请实施例提出的点云编码方法的实现流程示意图,如图9所示,在对点云进行编码处理时可以包括以下步骤:
步骤201、确定索引值。
在本申请的实施例中,可以先确定索引值。
需要说明的是,本申请实施例的编码方法具体是指点云编码方法,该方法可以应用于点云编码器(也可简称为“编码器”)。
需要说明的是,在本申请实施例中,待处理点云包括至少一个节点。其中,对于待处理点云中的节点,在对该节点进行编码时,其可以作为待处理点云中的待编码节点,而且该节点的周围存在有多个已编码节点。在这里,当前节点(当前点)即是这至少一个节点中当前需要编码的待编码节点。
进一步地,在本申请实施例中,对于待处理点云中的每一个节点,其对应一个几何信息和一个属性信息;其中,几何信息表征该点的空间关系,属性信息表征该点的属性的信息。
在这里,属性信息可以为颜色信息,也可以是反射率或者其它属性,本申请实施例不作具体限定。其中,当属性信息为颜色信息时,具体可以为任意颜色空间的颜色信息。示例性地,属性信息可以为RGB空间的颜色信息,也可以为YUV空间的颜色信息,还可以为YCbCr空间的颜色信息等等,本申请实施例也不作具体限定。
还需要说明的是,在本申请实施例中,编码器可以按照预设编码顺序对这至少一个节点进行排列,以便确定每一节点对应的索引序号。这样,根据每一节点对应的索引序号,编码器能够按照预设编码顺序处理待处理点云中的每一节点。
在一些实施例中,预设编码顺序可以为下述其中之一:点云原始顺序、莫顿顺序、希尔伯特顺序等等,本申请实施例不作具体限定。
进一步地,在本申请的实施例中,索引值可以用于确定当前点的属性系数所使用的上下文。其中,如果当前点的属性信息为颜色信息,那么对应于当前点的不同的颜色分量,可以确定不同的索引值。
也就是说,在本申请的实施例中,索引值可以包括第一索引值、第二索引值以及第三索引值中的至少一个。其中,第一索引值、第二索引值以及第三索引值可以分别对应于当前点的三个颜色分量,即第一索引值、第二索引值以及第三索引值可以分别用于确定当前点的三个颜色分量的属性系数所使用的第一上下文、第二上下文、第三上下文。
示例性的,在本申请的实施例中,假设当前点的属性系数为颜色分量的属性系数,那么,第一索引值可以用于确定当前点的R分量的属性系数所使用的第一上下文,第二索引值可以用于确定当前点的G分量的属性系数所使用的第二上下文,第三索引值可以用于确定当前点的B分量的属性系数所使用的第三上下文。
示例性的,在本申请的实施例中,假设当前点的属性系数为颜色分量的属性系数,那么,第一索引值可以用于确定当前点的Y分量的属性系数所使用的第一上下文,第二索引值可以用于确定当前点的U分量的属性系数所使用的第二上下文,第三索引值可以用于确定当前点的V分量的属性系数所使用的第三上下文。
因此,在本申请的实施例中,可以先确定当前点的第一索引值、第二索引值以及第三索引值中的至少一个。
进一步地,在本申请的实施例中,如果确定使用自适应上下文确定当前点的属性系数,那么可以设置当前点的自适应上下文标识信息,然后可以将当前点的自适应上下文标识信息写入码流。其中,如果确定使用自适应上下文,可以将自适应上下文标识信息设置为指示使用自适应上下文确定当前点的属性系数。
相应的,在本申请的实施例中,在确定使用自适应上下文确定当前点的属性系数之后,可以执行第一索引值,和/或第二索引值,和/或第三索引值的确定流程。
需要说明的是,在本申请的实施例中,可以将自适应上下文标识信息理解为一个表明是否对点云中的节点使用自适应上下文的标志位。具体地,编码器可以确定作为自适应上下文标识信息的一个变量,从而可以通过该变量的取值来实现自适应上下文标识信息的确定。
也就是说,在本申请的实施例中,自适应上下文标识信息的取值不同,确定当前点的属性系数所使用的上下文的方法也不同。其中,可以根据自适应上下文标识信息确定是否使用自适应上下文确定当前点的部分或者全部颜色分量的属性系数。
可以理解的是,在本申请的实施例中,基于自适应上下文标识信息的取值,在进行点云的编码时,可以选择使用自适应上下文确定当前点的属性系数,也可以选择不使用自适应上下文确定当前点的属性系数。
示例性的,在本申请的实施例中,如果自适应上下文标识信息的取值为1,那么可以指示使用自适应上下文确定当前点的属性系数;如果自适应上下文标识信息的取值为0,那么可以指示不使用自适应上下文确定当前点的属性系数。
需要说明的是,在本申请的实施例中,自适应上下文标识信息的取值也可以设置为其他数值或参数,本申请不作任何限定。
可以理解的是,在本申请的实施例中,在确定使用自适应上下文确定当前点的属性系数之后,那么可以进一步确定用于指示上下文的索引值。即在确定使用自适应上下文之后,再执行索引值的确定流程。
当然,也可以不进行自适应上下文标识信息设置流程。也就是说,可以预先设置是否使用自适应上下文确定当前点的属性系数,也可以预先设置是否使用自适应上下文确定当前点的全部颜色分量中的一个或多个颜色分量的属性系数。 即是否对部分或者全部颜色分量使用自适应上下文,可以不依赖于自适应上下文标识信息的取值而独立执行。
进一步地,在本申请的实施例中,由于可以根据自适应上下文标识信息确定是否使用自适应上下文确定当前点的部分或者全部颜色分量的属性系数,因此,在进行当前点的属性系数的确定时,可以对当前点的部分或者全部颜色分量的属性系数使用自适应上下文,也可以对当前点的部分或者全部颜色分量的属性系数使用预先设置的上下文。
进一步地,在本申请的实施例中,在确定当前点的索引值时,可以参考当前点的几何信息和/或当前点对应的零游程值,因此,还可以确定当前点的几何信息和当前点对应的零游程值。
需要说明的是,在本申请的实施例中,当前点的几何信息可以包括当前点的位置坐标信息。例如,当前点的几何信息可以为当前点对应的空间坐标信息(x,y,z)。
可以理解的是,在本申请的实施例中,当前点对应的零游程值可以包括当前点的零游程值,或当前点的前一个零游程值,或当前点的前一个非0的零游程值。
需要说明的是，在本申请的实施例中，零游程值run_length可以用于对属性系数是否为0进行计数。其中，对于颜色属性，如果零游程值run_length不为0（或大于0），则可以确定当前点的全部颜色分量的属性系数均为0，如果零游程值run_length为0，则可以确定当前点的全部颜色分量的属性系数不全为0。
可以理解的是,在本申请的实施例中,对于当前点的全部颜色分量的属性系数,如果零游程值run_length指示当前点的全部颜色分量的属性系数均为0,那么便不需要再进行属性系数的确定,而是可以先对零游程值进行自减1处理,以更新零游程值,然后再根据零游程值确定下一个点的属性系数。对于下一个点的属性系数,继续根据零游程值run_length判断全部颜色分量的属性系数是否均为0,以确定是否需要对全部颜色分量的属性系数进行确定。
示例性的,在本申请的实施例中,如果确定的当前点的零游程值run_length为3,大于0,那么可以确定当前点的全部颜色分量的属性系数均为0,因此不需要编码当前点的属性系数,可以选择先对零游程值进行自减1处理,即执行--run_length操作,然后再根据零游程值确定下一个点的属性系数。对于下一个点的属性系数,对应的零游程值run_length为2,大于0,那么可以确定该点的全部颜色分量的属性系数均为0,因此不需要编码该点的全部颜色分量的属性系数,继续对零游程值进行自减1处理,即执行--run_length操作。
进一步地,在本申请的实施例中,对于当前点的一个颜色分量的属性系数,可以先根据几何信息和/或零游程值确定索引值。
示例性的,在本申请的实施例中,可以根据几何信息和/或零游程值确定第一颜色分量对应的第一索引值。
进一步地,在本申请的实施例中,在确定出当前点的一个颜色分量的属性系数之后,即确定出第一属性系数之后,可以根据第一属性系数的绝对值、几何信息以及零游程值中的至少一个,确定索引值。
示例性的,在本申请的实施例中,可以根据第一属性系数的绝对值、几何信息以及零游程值中的至少一个,确定第二颜色分量对应的第二索引值。
进一步地,在本申请的实施例中,在确定出当前点的两个颜色分量的属性系数之后,即确定出第一属性系数和第二属性系数之后,可以根据第一属性系数的绝对值、第二属性系数的绝对值、几何信息以及零游程值中的至少一个,确定索引值。
示例性的,在本申请的实施例中,可以根据第一属性系数的绝对值、第二属性系数的绝对值、几何信息以及零游程值中的至少一个,确定第三颜色分量对应的第三索引值。
进一步地,在本申请的实施例中,在使用当前点对应的零游程值进行索引值的确定时,即基于零游程值确定第一索引值或第二索引值或第三索引值时,可以选择先对零游程值和第一数值进行加法或者减法运算,确定第一运算结果;然后再根据第一运算结果确定索引值。
需要说明的是,在本申请的实施例中,第一数值可以为任意数值。例如,第一数值可以为1,也可以为3,本申请不进行具体限定。
示例性的,在本申请的实施例中,假设当前点对应的零游程值为2,第一数值为1,在基于零游程值进行索引值的确定时,可以对零游程值和第一数值进行减法运算,获得第一运算结果为2-1=1,然后便可以根据第一运算结果确定索引值。
可以理解的是,在本申请的实施例中,在根据第一运算结果确定索引值时,可以选择将第一运算结果确定为索引值,也可以选择将第一运算结果的绝对值确定为索引值,还可以选择利用第一运算结果推导出索引值。本申请不进行具体限定。
进一步地,在本申请的实施例中,在使用当前点对应的零游程值进行索引值的确定时,即基于零游程值确定第一索引值或第二索引值或第三索引值时,可以先对零游程值和第一数值进行加法或者减法运算,确定第一运算结果;然后确定第一运算结果对应的第一数值范围;最后可以按照第一数值范围,以及第一预设索引值与数值范围的对应关系,确定索引值。
可以理解的是,在本申请的实施例中,当前点对应的零游程值的取值可以为大于或者等于0的整数,在对零游程值和第一数值进行加法或者减法运算之后,获得的第一运算结果可以为大于、等于或者小于0的数值。因此,第一运算结果对应的第一数值范围可以为任意的数值范围。
需要说明的是,在本申请的实施例中,第一预设索引值与数值范围的对应关系可以表征数值范围与索引值的映射关系。其中,对于不同的数值范围,可以确定出对应的索引值。例如,表1为第一预设索引值与数值范围的对应关系。
示例性的,在本申请的实施例中,假设第一运算结果为1,那么第一运算结果对应的第一数值范围可以为(0,1],相应的,基于第一预设索引值与数值范围的对应关系所确定的索引值为2。
进一步地,在本申请的实施例中,在使用当前点对应的零游程值进行索引值的确定时,即基于零游程值确定第一索引值或第二索引值或第三索引值时,可以先确定零游程值对应的第二数值范围;然后可以按照第二数值范围,以及第二预设索引值与数值范围的对应关系,确定索引值。
可以理解的是,在本申请的实施例中,当前点对应的零游程值的取值可以为大于或者等于0的整数。因此,零游程值对应的第二数值范围可以为包括大于或者等于0的整数的数值范围。
需要说明的是,在本申请的实施例中,第二预设索引值与数值范围的对应关系可以表征数值范围与索引值的映射关系。其中,对于不同的数值范围,可以确定出对应的索引值。例如,表2为第二预设索引值与数值范围的对应关系。
示例性的，在本申请的实施例中，假设当前点对应的零游程值为2，那么零游程值对应的第二数值范围可以为(1,3]，相应的，基于第二预设索引值与数值范围的对应关系所确定的索引值为2。
进一步地,在本申请的实施例中,在使用当前点的几何信息进行索引值的确定时,即基于几何信息确定第一索引值或第二索引值或第三索引值时,可以选择先确定几何信息对应的位置范围;然后按照位置范围,以及预设位置范围与索引值的对应关系,确定索引值。
可以理解的是,在本申请的实施例中,由于当前点的几何信息可以包括当前点的位置坐标信息,其中可以包括不同的空间分量,如x分量、y分量、z分量,因此在利用当前点的几何信息进行位置范围的确定时,可以参考不同空间分量中的部分或者全部进行范围的划分。例如,可以仅按照x分量确定当前点的几何信息对应的位置范围,也可以按照y分量和z分量确定当前点的几何信息对应的位置范围,还可以按照x分量、y分量、z分量确定当前点的几何信息对应的位置范围。
需要说明的是,在本申请的实施例中,预设位置范围与索引值的对应关系可以表征位置范围与索引值的映射关系。其中,对于不同的位置范围,可以确定出对应的索引值。例如,表3为预设位置范围与索引值的对应关系。
示例性的,在本申请的实施例中,假设当前点的几何信息为(x,y,z),对应的位置范围为位置范围3,相应的,基于预设位置范围与索引值的对应关系所确定的索引值为3。
进一步地,在本申请的实施例中,在使用当前点的第一属性系数的绝对值进行索引值的确定时,即基于第一属性系数的绝对值确定第一索引值或第二索引值或第三索引值时,可以选择直接将第一属性系数的绝对值设置为索引值。
示例性的,在本申请的实施例中,假设第一属性系数的绝对值为2,那么可以基于第一属性系数的绝对值确定索引值为2。
进一步地,在本申请的实施例中,在使用当前点的第一属性系数的绝对值进行索引值的确定时,即基于第一属性系数的绝对值确定第一索引值或第二索引值或第三索引值时,可以选择先确定第一属性系数的绝对值对应的第三数值范围;然后按照第三数值范围,以及第三预设索引值与数值范围的对应关系,确定索引值。
可以理解的是,在本申请的实施例中,第一属性系数的绝对值可以为大于或者等于0的整数。因此,第一属性系数的绝对值对应的第三数值范围可以为包括大于或者等于0的整数的数值范围。
需要说明的是,在本申请的实施例中,第三预设索引值与数值范围的对应关系可以表征数值范围与索引值的映射关系。其中,对于不同的数值范围,可以确定出对应的索引值。例如,表4为第三预设索引值与数值范围的对应关系。
示例性的,在本申请的实施例中,假设第一属性系数的绝对值为3,那么第一属性系数的绝对值对应的第三数值范围可以为(2,4],相应的,基于第三预设索引值与数值范围的对应关系所确定的索引值为3。
进一步地,在本申请的实施例中,在使用当前点的第一属性系数的绝对值进行索引值的确定时,即基于第一属性系数的绝对值确定第一索引值或第二索引值或第三索引值时,可以选择先对第一属性系数的绝对值和第二数值进行加法或者减法运算,确定第二运算结果;然后再根据第二运算结果确定索引值。
需要说明的是,在本申请的实施例中,第二数值可以为任意数值。例如,第二数值可以为-1,也可以为2,本申请不进行具体限定。
示例性的,在本申请的实施例中,假设当前点的第一属性系数的绝对值为1,第二数值为-2,在基于第一属性系数的绝对值进行索引值的确定时,可以对第一属性系数的绝对值和第二数值进行加法运算,获得第二运算结果为-2+1=-1,然后便可以根据第二运算结果确定索引值。
可以理解的是,在本申请的实施例中,在根据第二运算结果确定索引值时,可以选择将第二运算结果确定为索引值,也可以选择将第二运算结果的绝对值确定为索引值,还可以选择利用第二运算结果推导出索引值。本申请不进行具体限定。
进一步地,在本申请的实施例中,在使用当前点的第一属性系数的绝对值进行索引值的确定时,即基于第一属性系数的绝对值确定第一索引值或第二索引值或第三索引值时,可以选择先对第一属性系数的绝对值和第二数值进行加法或者减法运算,确定第二运算结果;然后再确定第二运算结果对应的第四数值范围;最后便可以按照第四数值范围,以及第四预设索引值与数值范围的对应关系,确定所索引值。
可以理解的是,在本申请的实施例中,第一属性系数的绝对值可以为大于或者等于0的整数,在对第一属性系数的绝对值和第二数值进行加法或者减法运算之后,获得的第二运算结果可以为大于、等于或者小于0的数值。因此,第二运算结果对应的第四数值范围可以为任意的数值范围。
需要说明的是,在本申请的实施例中,第四预设索引值与数值范围的对应关系可以表征数值范围与索引值的映射关系。其中,对于不同的数值范围,可以确定出对应的索引值。例如,表5为第四预设索引值与数值范围的对应关系。
示例性的，在本申请的实施例中，假设第二运算结果为1，那么第二运算结果对应的第四数值范围可以为(-1,1]，相应的，基于第四预设索引值与数值范围的对应关系所确定的索引值为2。
进一步地,在本申请的实施例中,在使用当前点的第二属性系数的绝对值进行索引值的确定时,即基于第二属性系数的绝对值确定第一索引值或第二索引值或第三索引值时,可以选择直接将第二属性系数的绝对值设置为索引值。
示例性的，在本申请的实施例中，假设第二属性系数的绝对值为1，那么可以基于第二属性系数的绝对值确定索引值为1。
进一步地,在本申请的实施例中,在使用当前点的第二属性系数的绝对值进行索引值的确定时,即基于第二属性系数的绝对值确定第一索引值或第二索引值或第三索引值时,可以选择先确定第二属性系数的绝对值对应的第五数值范围;然后按照第五数值范围,以及第五预设索引值与数值范围的对应关系,确定索引值。
可以理解的是,在本申请的实施例中,第二属性系数的绝对值可以为大于或者等于0的整数。因此,第二属性系数的绝对值对应的第五数值范围可以为包括大于或者等于0的整数的数值范围。
需要说明的是，在本申请的实施例中，第五预设索引值与数值范围的对应关系可以表征数值范围与索引值的映射关系。其中，对于不同的数值范围，可以确定出对应的索引值。例如，表6为第五预设索引值与数值范围的对应关系。
示例性的,在本申请的实施例中,假设第二属性系数的绝对值为4,那么第二属性系数的绝对值对应的第五数值范围可以为(3,5],相应的,基于第五预设索引值与数值范围的对应关系所确定的索引值为4。
进一步地,在本申请的实施例中,在使用当前点的第二属性系数的绝对值进行索引值的确定时,即基于第二属性系数的绝对值确定第一索引值或第二索引值或第三索引值时,可以选择先对第二属性系数的绝对值和第三数值进行加法或者减法运算,确定第三运算结果;然后再根据第三运算结果确定索引值。
需要说明的是,在本申请的实施例中,第三数值可以为任意数值。例如,第三数值可以为0,也可以为1,本申请不进行具体限定。
示例性的,在本申请的实施例中,假设当前点的第二属性系数的绝对值为2,第三数值为1,在基于第二属性系数的绝对值进行索引值的确定时,可以对第二属性系数的绝对值和第三数值进行加法运算,获得第三运算结果为2+1=3,然后便可以根据第三运算结果确定索引值。
可以理解的是,在本申请的实施例中,在根据第三运算结果确定索引值时,可以选择将第三运算结果确定为索引值,也可以选择将第三运算结果的绝对值确定为索引值,还可以选择利用第三运算结果推导出索引值。本申请不进行具体限定。
进一步地,在本申请的实施例中,在使用当前点的第二属性系数的绝对值进行索引值的确定时,即基于第二属性系数的绝对值确定第一索引值或第二索引值或第三索引值时,可以选择先对第二属性系数的绝对值和第三数值进行加法或者减法运算,确定第三运算结果;然后再确定第三运算结果对应的第六数值范围;最后便可以按照第六数值范围,以及第六预设索引值与数值范围的对应关系,确定索引值。
可以理解的是,在本申请的实施例中,第二属性系数的绝对值可以为大于或者等于0的整数,在对第二属性系数的绝对值和第三数值进行加法或者减法运算之后,获得的第三运算结果可以为大于、等于或者小于0的数值。因此,第三运算结果对应的第六数值范围可以为任意的数值范围。
需要说明的是,在本申请的实施例中,第六预设索引值与数值范围的对应关系可以表征数值范围与索引值的映射关系。其中,对于不同的数值范围,可以确定出对应的索引值。例如,表7为第六预设索引值与数值范围的对应关系。
示例性的，在本申请的实施例中，假设第三运算结果为1，那么第三运算结果对应的第六数值范围可以为(-1,2]，相应的，基于第六预设索引值与数值范围的对应关系所确定的索引值为2。
进一步地,在本申请的实施例中,在根据几何信息和/或零游程值确定索引值时,可以选择使用基于几何信息确定的索引值,也可以选择使用基于零游程值确定的索引值,还可以选择对基于几何信息确定的索引值和基于零游程值确定的索引值进行运算处理,以获得最终的索引值。
也就是说,在本申请的实施例中,对于当前点的一个颜色分量的属性系数,在根据几何信息和/或零游程值确定第一颜色分量对应的第一索引值时,可以将基于几何信息确定的索引值,或基于零游程值确定的索引值,确定为第一索引值;也可以对基于几何信息确定的索引值和基于零游程值确定的索引值进行运算处理,从而可以获得第一索引值。
示例性的,在本申请的实施例中,假设基于几何信息确定的索引值为A1,基于零游程值确定的索引值为A2,那么,可以直接将A1确定为第一索引值,也可以直接将A2确定为第一索引值,也可以对A1和A2进行大小比较,并将二者中的较大值或者较小值确定为第一索引值,还可以对A1和A2进行加法、减法、加权平均等计算,将计算结果确定为第一索引值。本申请不进行具体限定。
进一步地,在本申请的实施例中,在根据第一属性系数的绝对值、几何信息以及零游程值中的至少一个确定索引值时,可以选择使用基于第一属性系数的绝对值确定的索引值,也可以选择使用基于几何信息确定的索引值,也可以选择使用基于零游程值确定的索引值,还可以选择对基于第一属性系数的绝对值确定的索引值,和/或基于几何信息确定的索引值,和/或基于零游程值确定的索引值进行运算处理,以获得最终的索引值。
也就是说,在本申请的实施例中,在确定出当前点的一个颜色分量的属性系数之后,即确定出第一属性系数之后,在根据第一属性系数的绝对值、几何信息以及零游程值中的至少一个确定第二颜色分量对应的第二索引值时,可以基于第一属性系数的绝对值确定的索引值,或基于几何信息确定的索引值,或基于零游程值确定的索引值,确定为第二索引值;也可以对基于第一属性系数的绝对值确定的索引值,和/或基于几何信息确定的索引值,和/或基于零游程值确定的索引值进行运算处理,从而可以获得第二索引值。
示例性的,在本申请的实施例中,假设基于第一属性系数的绝对值确定的索引值为B1,基于几何信息确定的索引值为B2,基于零游程值确定的索引值为B3,那么,可以直接将B1确定为第二索引值,也可以直接将B2确定为第二索引值,也可以直接将B3确定为第二索引值,也可以对B1、B2和B3进行大小比较,并将三者中的较大值或者较小值确定为第二索引值,还可以对B1、B2和B3进行加法、减法、加权平均等计算,将计算结果确定为第二索引值。本申请不进行具体限定。
进一步地,在本申请的实施例中,在根据第一属性系数的绝对值、第二属性系数的绝对值、几何信息以及零游程值中的至少一个确定索引值时,可以选择使用基于第一属性系数的绝对值确定的索引值,也可以选择使用基于第二属性系数的绝对值确定的索引值,也可以选择使用基于几何信息确定的索引值,也可以选择使用基于零游程值确定的索引值,还可以选择对基于第一属性系数的绝对值确定的索引值,和/或基于第二属性系数的绝对值确定的索引值,和/或基于几何信息确定的索引值,和/或基于零游程值确定的索引值进行运算处理,以获得最终的索引值。
也就是说,在本申请的实施例中,在确定出当前点的两个颜色分量的属性系数之后,即确定出第一属性系数和第二属性系数之后,在根据第一属性系数的绝对值、第二属性系数的绝对值、几何信息以及零游程值中的至少一个确定第三颜色分量对应的第三索引值时,可以基于第一属性系数的绝对值确定的索引值,或基于第二属性系数的绝对值确定的索引值,或基于几何信息确定的索引值,或基于零游程值确定的索引值,确定为第三索引值;也可以对基于第一属性系数的绝对值确定的索引值,和/或基于第二属性系数的绝对值确定的索引值,和/或基于几何信息确定的索引值,和/或基于零游程值确定的索引值进行运算处理,从而可以获得第三索引值。
示例性的，在本申请的实施例中，假设基于第一属性系数的绝对值确定的索引值为C1，基于第二属性系数的绝对值确定的索引值为C2，基于几何信息确定的索引值为C3，基于零游程值确定的索引值为C4，那么，可以直接将C1确定为第三索引值，也可以直接将C2确定为第三索引值，也可以直接将C3确定为第三索引值，也可以直接将C4确定为第三索引值，也可以对C1、C2、C3和C4进行大小比较，并将四者中的较大值或者较小值确定为第三索引值，还可以对C1、C2、C3和C4进行加法、减法、加权平均等计算，将计算结果确定为第三索引值。本申请不进行具体限定。
步骤202、根据索引值所指示的上下文确定当前点的编码系数。
在本申请的实施例中,在确定索引值之后,可以进一步根据索引值所指示的上下文确定当前点的编码系数。
可以理解的是,在本申请的实施例中,编码系数可以为使用索引值指示的上下文进行编码处理后所获得的值。
需要说明的是,在本申请的实施例中,由于索引值可以包括当前点的第一索引值、第二索引值以及第三索引值中的至少一个。因此,可以使用不同的颜色分量对应的索引值所指示的上下文确定出相应的颜色分量的编码系数。
可以理解的是,在本申请的实施例中,编码系数可以包括第一编码系数、第二编码系数以及第三编码系数中的至少一个。其中,第一编码系数、第二编码系数以及第三编码系数可以分别对应于当前点的三个颜色分量,即第一编码系数、第二编码系数以及第三编码系数可以分别为使用第一上下文、第二上下文、第三上下文解析获得的。
也就是说,在本申请的实施例中,在根据索引值所指示的上下文确定当前点的编码系数时,可以根据第一索引值所指示的第一上下文确定当前点的第一编码系数,也可以根据第二索引值所指示的第二上下文确定当前点的第二编码系数,还可以根据第三索引值所指示的第三上下文确定当前点的第三编码系数。
步骤203、根据编码系数确定当前点的属性系数。
在本申请的实施例中,在根据索引值所指示的上下文确定当前点的编码系数之后,便可以进一步根据编码系数确定当前点的属性系数。
可以理解的是,在本申请的实施例中,属性系数可以为基于编码系数所确定的属性信息的相关值。
需要说明的是,在本申请的实施例中,由于编码系数可以包括第一编码系数、第二编码系数以及第三编码系数中的至少一个。因此,可以使用不同的颜色分量对应的编码系数确定出相应的颜色分量的属性系数。
也就是说,在本申请的实施例中,在根据编码系数确定当前点的属性系数时,可以根据第一编码系数确定当前点的第一属性系数,也可以根据第二编码系数确定当前点的第二属性系数,还可以根据第三编码系数确定当前点的第三属性系数。
可以理解的是,在本申请的实施例中,当前点的属性系数可以为当前点的属性信息的量化残差或量化后的变换系数。也就是说,属性系数可以为量化残差,也可以为量化后的变换系数。
进一步地,在本申请的实施例中,当前点的属性系数可以包括全部颜色分量的属性系数,即当前点的属性系数可以包括第一属性系数、第二属性系数以及第三属性系数中的至少一个。
示例性的,在本申请的实施例中,假设当前点的属性系数为颜色分量的属性系数,那么,对于R分量,可以使用第一索引值所指示的第一上下文可以确定第一编码系数,进而可以利用第一编码系数确定第一属性系数,对于G分量,可以使用第二索引值所指示的第二上下文可以确定第二编码系数,进而可以利用第二编码系数确定第二属性系数,对于B分量,可以使用第三索引值所指示的第三上下文可以确定第三编码系数,进而可以利用第三编码系数确定第三属性系数。
示例性的,在本申请的实施例中,假设当前点的属性系数为颜色分量的属性系数,那么,对于Y分量,可以使用第一索引值所指示的第一上下文可以确定第一编码系数,进而可以利用第一编码系数确定第一属性系数,对于U分量,可以使用第二索引值所指示的第二上下文可以确定第二编码系数,进而可以利用第二编码系数确定第二属性系数,对于V分量,可以使用第三索引值所指示的第三上下文可以确定第三编码系数,进而可以利用第三编码系数确定第三属性系数。
需要说明的是,在本申请的实施例中,由于在进行当前点的属性系数的确定时,可以对当前点的部分或者全部颜色分量的属性系数使用自适应上下文,也可以对当前点的部分或者全部颜色分量的属性系数使用预先设置的上下文。因此,当前点的任意一个颜色分量的属性系数可以是通过自适应上下文确定的,也可以是通过预先设置的上下文确定的。
可以理解的是,在本申请的实施例中,可以根据第一预设上下文确定第一编码系数;和/或,根据第二预设上下文确定第二编码系数;和/或,根据第三预设上下文确定第三编码系数。
示例性的,在本申请的实施例中,对于当前点的第一颜色分量,可以选择根据第一预设上下文确定第一编码系数,并根据第一编码系数确定第一属性系数,也可以选择确定第一索引值,然后根据第一索引值所指示的第一上下文确定当前点的第一编码系数,并根据第一编码系数确定当前点的第一属性系数;对于当前点的第二颜色分量,可以选择根据第二预设上下文确定第二编码系数,并根据第二编码系数确定第二属性系数,也可以选择确定第二索引值,然后根据第二索引值所指示的第二上下文确定当前点的第二编码系数,并根据第二编码系数确定当前点的第二属性系数;对于当前点的第三颜色分量,可以选择根据第三预设上下文确定第三编码系数,并根据第三编码系数确定第三属性系数,也可以选择确定第三索引值,然后根据第三索引值所指示的第三上下文确定当前点的第三编码系数,并根据第三编码系数确定当前点的第三属性系数。
也就是说,在本申请的实施例中,对于当前点的任意一个颜色分量,既可以使用预先设置的上下文,也可以选择使用自适应上下文。其中,在使用自适应上下文时,既可以基于当前点的几何信息确定的索引值来自适应选择上下文,也可以基于当前点对应的零游程值确定的索引值来自适应选择上下文,还可以基于当前点的其他颜色分量的属性系数(如第一属性系数和/或第二属性系数)确定的索引值来自适应选择上下文。对此本申请不进行具体限定。
可以理解的是,在本申请的实施例中,对于当前点的不同的颜色分量,确定上下文的方式是互相独立的,即不限制不同的颜色分量所使用的上下文的确定方式一定相同。例如,对于第一颜色分量,可以基于当前点对应的零游程值确定的索引值来自适应选择上下文,对于第二颜色分量,可以基于第一属性系数确定的索引值来自适应选择上下文,对于第三颜色分量,可以使用预先设置的上下文。对此本申请不进行具体限定。
需要说明的是，在本申请的实施例中，第一颜色分量、第二颜色分量以及第三颜色分量可以为当前点的全部颜色分量中的不同的颜色分量。例如，第一颜色分量可以为G分量，第二颜色分量可以为B分量，第三颜色分量可以为R分量；或者，第一颜色分量可以为U分量，第二颜色分量可以为Y分量，第三颜色分量可以为V分量。本申请不进行具体限定。
进一步地,在本申请的实施例中,在确定当前点的属性系数之后,可以继续根据当前点的属性系数是否为0来确定是否进行属性系数的符号的确定。其中,如果当前点的属性系数不全为0,那么可以确定非0的属性系数的符号。
也就是说,在本申请的实施例中,对于不为0的颜色分量的属性系数,可以继续确定该属性系数不为0的颜色分量所对应的属性系数的符号。
示例性的，在本申请的实施例中，如果第一属性系数不为0，那么可以继续进行第一属性系数的符号的确定；如果第二属性系数不为0，那么可以继续进行第二属性系数的符号的确定；如果第三属性系数不为0，那么可以继续进行第三属性系数的符号的确定。
综上所述,通过步骤201至步骤203所提出的点云编码方法,在使用上下文对属性系数进行确定时,可以充分利用已经编码/解码的属性系数之间的相关性,以及相关参数来自适应选取不同的上下文进行编解码,从而能够引入多种不同的自适应上下文的模式,从而可以提升点云属性的编解码性能。其中,对于当前点的任意一个颜色分量的属性系数,既可以选择使用预设上下文(如第一预设上下文、第二预设上下文、第三预设上下文)确定编码系数,进而确定对应的属性系数;也可以选择使用几何信息和/或零游程值确定索引值,再使用索引值指示的上下文确定编码系数,进而确定对应的属性系数;也可以选择使用第一属性系数的绝对值、几何信息以及零游程值中的至少一个确定索引值,再使用索引值指示的上下文确定编码系数,进而确定对应的属性系数;还可以选择使用第一属性系数的绝对值、第二属性系数的绝对值、几何信息以及零游程值中的至少一个确定索引值,再使用索引值指示的上下文确定编码系数,进而确定对应的属性系数。
示例性的,在本申请的实施例中,可以选择对当前点的两个颜色分量的属性系数使用自适应上下文,对另一个颜色分量的属性系数使用预设上下文。其中,可以先对第一个属性系数进行编码;然后利用已编码的第一个属性系数去自适应选择上下文对第二个属性系数进行编码;最后可以利用已编码的第一个属性系数和/或第二个属性系数去自适应选择上下文对第三个属性系数进行编码。
例如,使用第一预设上下文对第一属性系数进行编码;利用已编码的第一属性系数大于等于或者小于等于或者等于等于某几个常量(即确定对应的数值范围),对第二属性系数自适应选择上下文进行编码;利用已编码的第一属性系数大于等于或者小于等于或者等于等于某几个常量,以及已编码的第二属性系数大于等于或者小于等于或者等于等于某几个常量(即确定对应的数值范围),对第三个属性系数自适应选择上下文进行编码。
示例性的,在本申请的实施例中,可以选择对当前点的一个颜色分量的属性系数使用自适应上下文,对另两个颜色分量的属性系数使用预设上下文。其中,可以先对第一个属性系数进行编码;然后对第二个属性系数进行编码;最后可以利用已编码的第一个属性系数和/或第二个属性系数去自适应选择上下文对第三个属性系数进行编码。
例如,使用第一预设上下文对第一属性系数进行编码;使用第二预设上下文对第二属性系数进行编码;最后利用已编码第一属性系数加或减一个常量,与已编码第二属性系数加或减一个常量的大小关系自适应选择上下文对第三属性系数进行编码。
示例性的,在本申请的实施例中,可以选择参考零游程值来对全部颜色分量的属性系数使用自适应上下文。其中,在确定先验信息零游程值run_length之后,可以先利用已编码的runlength的信息对第一个属性系数自适应选择上下文进行编码;然后可以利用已编码的runlength的信息对第二个属性系数自适应选择上下文进行编码;最后可以利用已编码的runlength的信息对第三个属性系数自适应选择上下文进行编码。
例如,利用前一组非0的runlength值对第一属性系数自适应选择上下文进行编码;利用前一组非0的runlength值对第二属性系数自适应选择上下文进行编码;利用前一组非0的runlength值对第三属性系数自适应选择上下文进行编码。
例如,利用前一组runlength值对第一个属性系数自适应选择上下文进行编码;利用前一组runlength值对第二个属性系数自适应选择上下文进行编码;利用前一组runlength值对第三个属性系数自适应选择上下文进行编码。
示例性的,在本申请的实施例中,可以选择参考几何位置(几何信息)来对全部颜色分量的属性系数使用自适应上下文。其中,在确定当前点的几何信息之后,可以先利用当前点的几何信息对第一个属性系数自适应选择上下文进行编码;然后可以利用当前点的几何信息对第二个属性系数自适应选择上下文进行编码;最后可以利用当前点的几何信息对第三个属性系数自适应选择上下文进行编码。
例如,利用当前点的几何信息位置大小对第一个属性系数自适应选择上下文进行编码;利用当前点的几何信息位置大小对第二个属性系数自适应选择上下文进行编码;利用当前点的几何信息位置大小对第三个属性系数自适应选择上下文进行编码。
进一步地,如表8、9所示,本申请实施例提出的点云编解码方法,能够在不增加时间复杂度的情况下,获得稳定的性能增益,可以提升点云编解码的性能。
本申请实施例提供了一种点云编码方法，编码器确定索引值；根据索引值所指示的上下文确定当前点的编码系数；根据编码系数确定当前点的属性系数。也就是说，在本申请的实施例中，在使用上下文对属性系数进行确定时，可以充分利用已经编码/解码的属性系数之间的相关性，以及相关参数来自适应选取不同的上下文进行编解码，从而能够引入多种不同的自适应上下文的模式，而不再仅局限于使用固定的上下文进行属性信息的编解码，从而可以提升点云属性的编解码性能。
基于上述实施例,本申请的再一实施例提出的点云编解码方法,在对当前点的属性系数进行编解码时,当前点的任意一个颜色分量的属性系数可以是通过自适应上下文确定的,也可以是通过预先设置的上下文确定的。具体地。在使用上下文对属性系数进行确定时,可以充分利用已经编码/解码的属性系数之间的相关性,以及相关参数来自适应选取不同的上下文进行编解码,从而能够引入多种不同的自适应上下文的模式,从而可以提升点云属性的编解码性能。其中,对于当前点的任意一个颜色分量的属性系数,既可以选择使用预设上下文(如第一预设上下文、第二预设上下文、第三预设上下文)确定解码系数,进而确定对应的属性系数;也可以选择使用几何信息和/或零游程值确定索引值,再使用索引值指示的上下文确定解码系数,进而确定对应的属性系数;也可以选择使用第一属性系数的绝对值、几何信息以及零游程值中的至少一个确定索引值,再使用索引值指示的上下文确定解码系数,进而确定对应的属性系数;还可以选择使用第一属性系数的绝对值、第二属性系数的绝对值、几何信息以及零游程值中的至少一个确定索引值,再使用索引值指示 的上下文确定解码系数,进而确定对应的属性系数。
可以理解的是,在本申请的实施例中,假设当前点的三个颜色分量的属性系数分别为value0、value1、value2(如第一颜色分量,第二颜色分量,第三颜色分量),当前点对应的零游程值为run length,下面,对自适应选择上下文进行编解码的方法进行示例性说明。
示例性的,在本申请的实施例中,在编码侧,当value0=value1=value2=0,利用run length进行编码,即编码run length;当value0,value1,value2不同时为0的时候,利用如下方案进行编码:
利用属性编码器对value1的绝对值,利用固定的上下文(如第二预设上下文)进行编码;
利用属性编码器对value2的绝对值,利用value1绝对值自适应的选取上下文进行编码;
当value1的绝对值和value2的绝对值同时等于0时候，利用属性编码器对value0的绝对值减一（例如，对第一属性系数的绝对值减一，获得对应的第一编码系数），利用value1绝对值和value2绝对值自适应的选取上下文进行编码；
当value1的绝对值和value2的绝对值不同时等于0时候,利用属性编码器对value0的绝对值(即第一编码系数与第一属性系数相同),利用value1绝对值和value2绝对值自适应的选取上下文进行编码。
如果value0的绝对值不为0,则编码其符号;
如果value1的绝对值不为0,则编码其符号;
如果value2的绝对值不为0,则编码其符号。
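下面给出与上述编码流程对应的一段示意性C++代码草图，其中 AttrEncoder 的 encodeAbs、encodeSign 为假设的熵编码接口，上下文序号的划分方式亦为示例性假设，仅用于说明流程：
```cpp
#include <cstdint>
#include <cstdlib>

// 示意：熵编码器接口（encodeAbs/encodeSign 为假设接口，ctxIdx 为上下文序号）
struct AttrEncoder {
  virtual void encodeAbs(int ctxIdx, int32_t absValue) = 0;  // 编码非负整数
  virtual void encodeSign(bool negative) = 0;                // 编码符号位
  virtual ~AttrEncoder() = default;
};

// 对不全为0的一组三个属性系数 (value0, value1, value2) 进行编码的流程草图
void encodeTriple(AttrEncoder& enc, int32_t v0, int32_t v1, int32_t v2) {
  const int32_t a0 = std::abs(v0), a1 = std::abs(v1), a2 = std::abs(v2);
  enc.encodeAbs(/*固定上下文*/ 0, a1);             // value1：第二预设上下文
  const int i1 = a1 > 2 ? 2 : a1;                  // 由|value1|选上下文（示例划分）
  enc.encodeAbs(1 + i1, a2);                       // value2：自适应上下文
  const int i2 = a2 > 2 ? 2 : a2;                  // 由|value2|选上下文（示例划分）
  const int ctx0 = 4 + i1 * 3 + i2;                // value0：由|value1|、|value2|联合选取
  if (a1 == 0 && a2 == 0)
    enc.encodeAbs(ctx0, a0 - 1);   // 三者不全为0，此时a0必不为0，编码a0-1
  else
    enc.encodeAbs(ctx0, a0);       // 否则直接编码a0
  if (a0 != 0) enc.encodeSign(v0 < 0);
  if (a1 != 0) enc.encodeSign(v1 < 0);
  if (a2 != 0) enc.encodeSign(v2 < 0);
}
```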
相应的,在解码侧,解码run length的值,当run length的值不为0的时候,即从当前点开始后续run length的值个点的value0=value1=value2都等于0。
当run length的值为0的时候,按照如下方法进行解码:
利用属性解码器对value1的绝对值,利用固定的上下文(如第二预设上下文)进行解码;
利用属性解码器对value2的绝对值,利用value1绝对值自适应的选取上下文进行解码;
利用属性解码器对value0的绝对值,利用value1绝对值和value2绝对值自适应的选取上下文进行解码;
当value1的绝对值和value2的绝对值同时等于0时候，value0的绝对值等于其解码值加一（例如，对第一解码系数加一，获得对应的第一属性系数）；
当value1的绝对值和value2的绝对值不同时等于0时候，value0的绝对值等于其解码值（即第一解码系数与第一属性系数相同）。
如果value0的绝对值不为0,则解码其符号;
如果value1的绝对值不为0,则解码其符号;
如果value2的绝对值不为0,则解码其符号。
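相应地，下面给出与上述解码流程对应的一段示意性C++代码草图（AttrDecoder 的 decodeAbs、decodeSign 为假设的熵解码接口，上下文序号的划分方式与前述编码端草图保持一致）：
```cpp
#include <cstdint>

// 示意：熵解码器接口（decodeAbs/decodeSign 为假设接口，ctxIdx 为上下文序号）
struct AttrDecoder {
  virtual int32_t decodeAbs(int ctxIdx) = 0;   // 解码非负整数
  virtual bool decodeSign() = 0;               // 解码符号位，true 表示负号
  virtual ~AttrDecoder() = default;
};

// 在零游程值为0时，解码一组三个属性系数 (value0, value1, value2) 的流程草图
void decodeTriple(AttrDecoder& dec, int32_t& v0, int32_t& v1, int32_t& v2) {
  const int32_t a1 = dec.decodeAbs(/*固定上下文*/ 0);  // value1：第二预设上下文
  const int i1 = a1 > 2 ? 2 : a1;
  const int32_t a2 = dec.decodeAbs(1 + i1);            // value2：按|value1|自适应
  const int i2 = a2 > 2 ? 2 : a2;
  int32_t a0 = dec.decodeAbs(4 + i1 * 3 + i2);         // value0：按|value1|、|value2|自适应
  if (a1 == 0 && a2 == 0) a0 += 1;                     // 属性系数 = 解码系数 + 1
  v0 = (a0 != 0 && dec.decodeSign()) ? -a0 : a0;       // 仅当绝对值不为0时解码符号
  v1 = (a1 != 0 && dec.decodeSign()) ? -a1 : a1;
  v2 = (a2 != 0 && dec.decodeSign()) ? -a2 : a2;
}
```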
示例性的,在本申请的实施例中,在编码侧,当三个属性系数value0=value1=value2=0,利用run length进行编码,即编码run length;当三个属性系数不同时为0的时候,利用如下方案进行编码:
1、利用固定上下文编码一个标志位代表第一属性系数绝对值是否等于0,
2、如果第一属性系数等于0,继续利用固定上下文编码一个标志位代表第二属性系数绝对值是否等于0;
1）如果第一属性系数绝对值以及第二属性系数绝对值都等于0，利用固定上下文编码第三属性系数的绝对值减1（例如，对第三属性系数的绝对值减一，获得对应的第三编码系数）；
2）如果第一属性系数绝对值等于0但第二属性系数绝对值不等于0，利用固定的上下文（如第二预设上下文）编码第二属性系数绝对值减1（例如，对第二属性系数的绝对值减一，获得对应的第二编码系数），继续利用上下文编码第三属性系数绝对值（即第三编码系数与第三属性系数相同）；
3、如果第一属性系数绝对值不等于0，利用固定上下文（如第一预设上下文）编码第一属性系数绝对值减1（例如，对第一属性系数的绝对值减一，获得对应的第一编码系数），继续利用固定的上下文编码第二属性系数绝对值（即第二编码系数与第二属性系数相同）；
利用第一属性系数绝对值减一的值和第二属性系数绝对值的大小关系自适应选择上下文,利用此自适应选择的上下文对第三属性系数的绝对值进行编码。
如果第一属性系数的绝对值不为0,则编码其符号;
如果第二属性系数的绝对值不为0,则编码其符号;
如果第三属性系数的绝对值不为0,则编码其符号。
相应的,在解码侧,解码run length的值,当run length的值不为0的时候,即从当前点开始后续run length的值个点的三个属性系数value0=value1=value2都等于0。
当run length的值为0的时候,按照如下方法进行解码:
1、利用固定上下文解码一个标志位代表第一属性系数绝对值是否等于0,
2、如果第一属性系数等于0,继续利用固定上下文解码一个标志位代表第二属性系数绝对值是否等于0;
1)如果第一属性系数绝对值以及第二属性系数绝对值都等于0,利用固定上下文(第三预设上下文)解码第三属性系数的绝对值,第三属性系数的绝对值为其解码值(第三解码系数)加一(例如,对第三解码系数加一,获得对应的第三属性系数);
2)如果第一属性系数绝对值等于0但第二属性系数绝对值不等于0,利用固定的上下文(第二预设上下文)解码第二属性系数绝对值,第二属性系数的绝对值为其解码值(第二解码系数)加一(例如,对第二解码系数加一,获得对应的第二属性系数),继续利用固定的上下文解码第三属性系数绝对值;
3、如果第一属性系数绝对值不等于0，利用固定上下文（第一预设上下文）解码第一属性系数绝对值，第一属性系数的绝对值为其解码值（第一解码系数）加一（例如，对第一解码系数加一，获得对应的第一属性系数），继续利用固定的上下文（第二预设上下文）解码第二属性系数绝对值；
利用第一属性系数绝对值减一的值和第二属性系数绝对值的大小关系自适应选择上下文,利用此自适应选择的上下文对第三属性系数的绝对值进行解码。
如果第一属性系数的绝对值不为0,则解码其符号;
如果第二属性系数的绝对值不为0,则解码其符号;
如果第三属性系数的绝对值不为0,则解码其符号。
示例性的,在本申请的实施例中,在编码侧,当value0=value1=value2=0,利用run length进行编码,即编码run length;当value0,value1,value2不同时为0的时候,利用如下方案进行编码:
利用属性编码器对value1的绝对值,利用run length信息自适应的选择上下文进行编码;
利用属性编码器对value2的绝对值,利用run length信息自适应的选择上下文进行编码;
当value1的绝对值和value2的绝对值同时等于0时候，利用属性编码器对value0的绝对值减一（例如，对第一属性系数的绝对值减一，获得对应的第一编码系数），利用run length信息自适应的选择上下文进行编码；
当value1的绝对值和value2的绝对值不同时等于0时候,利用属性编码器对value0的绝对值(即第一编码系数与第一属性系数相同),利用run length信息自适应的选择上下文进行编码。
如果value0的绝对值不为0,则编码其符号;
如果value1的绝对值不为0,则编码其符号;
如果value2的绝对值不为0,则编码其符号。
相应的,在解码侧,解码run length的值,当run length的值不为0的时候,即从当前点开始后续run length的值个点的value0=value1=value2都等于0。
当run length的值为0的时候,按照如下方法进行解码:
利用属性解码器对value1的绝对值,利用run length信息自适应的选择上下文进行解码;
利用属性解码器对value2的绝对值,利用run length信息自适应的选择上下文进行解码;
利用属性解码器对value0的绝对值,利用run length信息自适应的选择上下文进行解码;
当value1的绝对值和value2的绝对值同时等于0时候，value0的绝对值等于其解码值加一（例如，对第一解码系数加一，获得对应的第一属性系数）；
当value1的绝对值和value2的绝对值不同时等于0时候，value0的绝对值等于其解码值（即第一解码系数与第一属性系数相同）。
如果value0的绝对值不为0,则解码其符号;
如果value1的绝对值不为0,则解码其符号;
如果value2的绝对值不为0,则解码其符号;
示例性的,在本申请的实施例中,在编码侧,当三个属性系数value0=value1=value2=0,利用run length进行编码,即编码run length;当三个属性系数不同时为0的时候,利用如下方案进行编码:
1、利用固定上下文编码一个标志位代表第一属性系数绝对值是否等于0,
2、如果第一属性系数等于0,继续利用固定上下文编码一个标志位代表第二属性系数绝对值是否等于0;
1）如果第一属性系数绝对值以及第二属性系数绝对值都等于0，利用run length信息自适应的选择上下文编码第三属性系数的绝对值减1（例如，对第三属性系数的绝对值减一，获得对应的第三编码系数）；
2）如果第一属性系数绝对值等于0但第二属性系数绝对值不等于0，利用run length信息自适应的选择上下文编码第二属性系数绝对值减1（例如，对第二属性系数的绝对值减一，获得对应的第二编码系数），继续利用run length信息自适应的选择上下文编码第三属性系数绝对值（即第三编码系数与第三属性系数相同）；
3、如果第一属性系数绝对值不等于0，利用run length信息自适应的选择上下文编码第一属性系数绝对值减1（例如，对第一属性系数的绝对值减一，获得对应的第一编码系数），继续利用run length信息自适应的选择上下文编码第二属性系数绝对值（即第二编码系数与第二属性系数相同）；
利用run length信息自适应的选择上下文,利用此自适应选择的上下文对第三属性系数的绝对值进行编码。
如果第一属性系数的绝对值不为0,则编码其符号;
如果第二属性系数的绝对值不为0,则编码其符号;
如果第三属性系数的绝对值不为0,则编码其符号。
相应的,在解码侧,解码run length的值,当run length的值不为0的时候,即从当前点开始后续run length的值个点的三个属性系数value0=value1=value2都等于0。
当run length的值为0的时候,按照如下方法进行解码:
1、利用固定上下文解码一个标志位代表第一属性系数绝对值是否等于0,
2、如果第一属性系数等于0,继续利用固定上下文解码一个标志位代表第二属性系数绝对值是否等于0;
1)如果第一属性系数绝对值以及第二属性系数绝对值都等于0,利用run length信息自适应的选择上下文解码第三属性系数的绝对值,第三属性系数的绝对值为其解码值(第三解码系数)加一(例如,对第三解码系数加一,获得对应的第三属性系数);
2）如果第一属性系数绝对值等于0但第二属性系数绝对值不等于0，利用run length信息自适应的选择上下文解码第二属性系数绝对值，第二属性系数的绝对值为其解码值（第二解码系数）加一（例如，对第二解码系数加一，获得对应的第二属性系数），继续利用run length信息自适应的选择上下文解码第三属性系数绝对值；
3、如果第一属性系数绝对值不等于0，利用run length信息自适应的选择上下文解码第一属性系数绝对值，第一属性系数的绝对值为其解码值（第一解码系数）加一（例如，对第一解码系数加一，获得对应的第一属性系数），继续利用run length信息自适应的选择上下文解码第二属性系数绝对值；
利用run length信息自适应的选择上下文,利用此自适应选择的上下文对第三属性系数的绝对值进行解码。
如果第一属性系数的绝对值不为0,则解码其符号;
如果第二属性系数的绝对值不为0,则解码其符号;
如果第三属性系数的绝对值不为0,则解码其符号。
示例性的,在本申请的实施例中,在编码侧,当value0=value1=value2=0,利用run length进行编码,即编码run length;当value0,value1,value2不同时为0的时候,利用如下方案进行编码:
利用属性编码器对value1的绝对值,利用几何信息自适应的选择上下文进行编码;
利用属性编码器对value2的绝对值,利用几何信息自适应的选择上下文进行编码;
当value1的绝对值和value2的绝对值同时等于0时候，利用属性编码器对value0的绝对值减一（例如，对第一属性系数的绝对值减一，获得对应的第一编码系数），利用几何信息自适应的选择上下文进行编码；
当value1的绝对值和value2的绝对值不同时等于0时候,利用属性编码器对value0的绝对值(即第一编码系数与第一属性系数相同),利用几何信息自适应的选择上下文进行编码。
如果value0的绝对值不为0,则编码其符号;
如果value1的绝对值不为0,则编码其符号;
如果value2的绝对值不为0,则编码其符号。
相应的,在解码侧,解码run length的值,当run length的值不为0的时候,即从当前点开始后续run length的值个点的value0=value1=value2都等于0。
当run length的值为0的时候,按照如下方法进行解码:
利用属性解码器对value1的绝对值,利用几何信息自适应的选择上下文进行解码;
利用属性解码器对value2的绝对值,利用几何信息自适应的选择上下文进行解码;
利用属性解码器对value0的绝对值,利用几何信息自适应的选择上下文进行解码;
当value1的绝对值和value2的绝对值同时等于0时候，value0的绝对值等于其解码值加一（例如，对第一解码系数加一，获得对应的第一属性系数）；
当value1的绝对值和value2的绝对值不同时等于0时候，value0的绝对值等于其解码值（即第一解码系数与第一属性系数相同）。
如果value0的绝对值不为0,则解码其符号;
如果value1的绝对值不为0,则解码其符号;
如果value2的绝对值不为0,则解码其符号;
示例性的,在本申请的实施例中,在编码侧,当三个属性系数value0=value1=value2=0,利用run length进行编码,即编码run length;当三个属性系数不同时为0的时候,利用如下方案进行编码:
1、利用固定上下文编码一个标志位代表第一属性系数绝对值是否等于0,
2、如果第一属性系数等于0,继续利用固定上下文编码一个标志位代表第二属性系数绝对值是否等于0;
1）如果第一属性系数绝对值以及第二属性系数绝对值都等于0，利用几何信息自适应的选择上下文编码第三属性系数的绝对值减1（例如，对第三属性系数的绝对值减一，获得对应的第三编码系数）；
2）如果第一属性系数绝对值等于0但第二属性系数绝对值不等于0，利用几何信息自适应的选择上下文编码第二属性系数绝对值减1（例如，对第二属性系数的绝对值减一，获得对应的第二编码系数），继续利用几何信息自适应的选择上下文编码第三属性系数绝对值（即第三编码系数与第三属性系数相同）；
3、如果第一属性系数绝对值不等于0，利用几何信息自适应的选择上下文编码第一属性系数绝对值减1（例如，对第一属性系数的绝对值减一，获得对应的第一编码系数），继续利用几何信息自适应的选择上下文编码第二属性系数绝对值（即第二编码系数与第二属性系数相同）；
利用几何信息自适应的选择上下文,利用此自适应选择的上下文对第三属性系数的绝对值进行编码。
如果第一属性系数的绝对值不为0,则编码其符号;
如果第二属性系数的绝对值不为0,则编码其符号;
如果第三属性系数的绝对值不为0,则编码其符号。
相应的,在解码侧,解码run length的值,当run length的值不为0的时候,即从当前点开始后续run length的值个点的三个属性系数value0=value1=value2都等于0。
当run length的值为0的时候,按照如下方法进行解码:
1、利用固定上下文解码一个标志位代表第一属性系数绝对值是否等于0,
2、如果第一属性系数等于0,继续利用固定上下文解码一个标志位代表第二属性系数绝对值是否等于0;
1)如果第一属性系数绝对值以及第二属性系数绝对值都等于0,利用几何信息自适应的选择上下文解码第三属性系数的绝对值,第三属性系数的绝对值为其解码值(第三解码系数)加一(例如,对第三解码系数加一,获得对应的第三属性系数);
2）如果第一属性系数绝对值等于0但第二属性系数绝对值不等于0，利用几何信息自适应的选择上下文解码第二属性系数绝对值，第二属性系数的绝对值为其解码值（第二解码系数）加一（例如，对第二解码系数加一，获得对应的第二属性系数），继续利用几何信息自适应的选择上下文解码第三属性系数绝对值；
3、如果第一属性系数绝对值不等于0，利用几何信息自适应的选择上下文解码第一属性系数绝对值，第一属性系数的绝对值为其解码值（第一解码系数）加一（例如，对第一解码系数加一，获得对应的第一属性系数），继续利用几何信息自适应的选择上下文解码第二属性系数绝对值；
利用几何信息自适应的选择上下文,利用此自适应选择的上下文对第三属性系数的绝对值进行解码。
如果第一属性系数的绝对值不为0,则解码其符号;
如果第二属性系数的绝对值不为0,则解码其符号;
如果第三属性系数的绝对值不为0,则解码其符号。
进一步,在本申请的实施例中,当确定出当前点的两个颜色分量的属性系数之后,可以根据第一属性系数的绝对值和/或第二属性系数的绝对值自适应的选取上下文对第三个颜色分量的属性系数进行编解码。
示例性的,在本申请的实施例中,对于编码端:利用属性编码器对第一个属性系数进行编码;利用属性编码器对第 二个属性系数进行编码;利用属性编码器利用已编码的第一个属性系数的绝对值加上常量1,与已编码第二个属性系数的绝对值加上常量2的大小关系自适应选择上下文进行编码。对于解码端:利用属性解码器对第一个属性系数进行解码;利用属性解码器对第二个属性系数进行解码;利用属性解码器利用已解码第一个属性系数的绝对值加上常量1,与已解码第二个属性系数的绝对值加上常量2大小关系自适应选择上下文进行解码。其中,常量1和常量2的取值可以为任意数值,例如,常量1=1,常量2=0,或者,常量1=0,常量2=0。
示例性的,在本申请的实施例中,对于编码端:利用属性编码器对第一个属性系数进行编码;利用属性编码器对已编码的第一个属性系数的绝对值是否等于等于常量1和是否小于等于常量2对第二个属性系数自适应选择上下文进行编码;利用属性编码器对已编码的第一个属性系数的绝对值是否等于等于常量3和是否小于等于常量4,和已编码的第二个属性系数的绝对值是否等于等于常量5和是否小于等于常量6来对第三个属性系数自适应选择上下文进行编码;对于解码端:利用属性解码器对第一个属性系数进行解码;利用属性解码器对已解码的第一个属性系数的绝对值是否等于等于常量1和是否小于等于常量2对第二个属性系数自适应选择上下文进行解码;利用属性解码器对已解码的第一个属性系数的绝对值是否等于等于常量3和是否小于等于常量4,和已解码的第二个属性系数的绝对值是否等于等于常量5和是否小于等于常量6来对第三个属性系数自适应选择上下文进行解码。其中,常量1、常量2、常量3、常量4、常量5、常量6的取值可以为任意数值,例如,常量1=常量3=常量5=0,常量2=常量4=常量6=1;或者,常量1=常量3=1,常量2=常量4=2,常量5=0,常量6=1。
示例性的,在本申请的实施例中,对于编码端:对于编码端:利用属性编码器对第一个属性系数进行编码;利用属性编码器对已编码的第一个属性系数的绝对值是否等于等于常量1和是否大于等于常量2对第二个属性系数自适应选择上下文进行编码;利用属性编码器对已编码的第一个属性系数的绝对值是否等于等于常量3和是否大于等于常量4,和已编码的第二个属性系数的绝对值是否等于等于常量5和是否大于等于常量6来对第三个属性系数自适应选择上下文进行编码;对于解码端:利用属性解码器对第一个属性系数进行解码;利用属性解码器对已解码的第一个属性系数的绝对值是否等于等于常量1和是否大于等于常量2对第二个属性系数自适应选择上下文进行解码;利用属性解码器对已解码的第一个属性系数的绝对值是否等于等于常量3和是否大于等于常量4,和已解码的第二个属性系数的绝对值是否等于等于常量5和是否大于等于常量6来对第三个属性系数自适应选择上下文进行解码。其中,常量1、常量2、常量3、常量4、常量5、常量6的取值可以为任意数值,例如,常量1=常量3=常量5=0,常量2=常量4=常量6=1;或者,常量1=常量3=1,常量2=常量4=2,常量5=0,常量6=1。
示例性的,在本申请的实施例中,对于编码端:利用属性编码器对第二个属性系数进行编码;利用已编码的第二个属性系数的绝对值是否等于等于常量7和是否小于等于常量8来对第三个属性系数自适应选择上下文进行编码;对于解码端:利用属性解码器对第二个属性系数进行解码;利用已解码的第二个属性系数的绝对值是否等于等于常量7和是否小于等于常量8来对第三个属性系数自适应选择上下文进行解码。其中,常量7和常量8的取值可以为任意数值,例如,常量7=1;常量8=2;或者,常量7=0;常量8=1。
示例性的,在本申请的实施例中,对于编码端:利用属性编码器对第二个属性系数进行编码;利用已编码的第二个属性系数的绝对值是否等于等于常量7和是否大于等于常量8来对第三个属性系数自适应选择上下文进行编码;对于解码端:利用属性解码器对第二个属性系数进行解码;利用已解码的第二个属性系数的绝对值是否等于等于常量7和是否大于等于常量8来对第三个属性系数自适应选择上下文进行解码。其中,常量7和常量8的取值可以为任意数值,例如,常量7=1;常量8=2;或者,常量7=0;常量8=1。
示例性的,在本申请的实施例中,对于编码端:利用属性编码器对第一个属性系数进行编码;利用属性编码器对第二个属性系数进行编码;利用属性编码器利用已编码的第一个属性系数的绝对值减去常量9,与已编码第二个属性系数的绝对值减去常量10的大小关系自适应选择上下文进行编码;对于解码端:利用属性解码器对第一个属性系数进行解码;
利用属性解码器对第二个属性系数进行解码;利用属性解码器利用已解码第一个属性系数的绝对值减去常量9,与已解码第二个属性系数的绝对值减去常量10大小关系自适应选择上下文进行解码。其中,常量9和常量10的取值可以为任意数值,例如,常量9=1,常量10=0;或者,常量9=0,常量10=0。
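针对上述利用两个已编解码属性系数的绝对值加/减常量后的大小关系选择上下文的方式，下面给出一段示意性的C++代码草图（常量取值为示例性假设）：
```cpp
#include <cstdint>

// 示意：利用 |第一属性系数|-常量9 与 |第二属性系数|-常量10 的大小关系选择上下文序号
// 三个序号分别对应“小于/等于/大于”三个候选上下文
int selectCtxByComparison(int32_t absFirst, int32_t absSecond) {
  const int32_t kConst9 = 1, kConst10 = 0;   // 示例取值：常量9=1，常量10=0
  const int32_t lhs = absFirst - kConst9;
  const int32_t rhs = absSecond - kConst10;
  if (lhs < rhs)  return 0;   // 小于：选第一个候选上下文
  if (lhs == rhs) return 1;   // 等于：选第二个候选上下文
  return 2;                   // 大于：选第三个候选上下文
}
```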
可以理解的是,在本申请的实施例中,对于当前点的一个颜色分量的属性系数,在根据几何信息和/或零游程值确定第一颜色分量对应的第一索引值时,可以将基于几何信息确定的索引值,或基于零游程值确定的索引值,确定为第一索引值;也可以对基于几何信息确定的索引值和基于零游程值确定的索引值进行运算处理,从而可以获得第一索引值。
可以理解的是,在本申请的实施例中,在确定出当前点的一个颜色分量的属性系数之后,即确定出第一属性系数之后,在根据第一属性系数的绝对值、几何信息以及零游程值中的至少一个确定第二颜色分量对应的第二索引值时,可以基于第一属性系数的绝对值确定的索引值,或基于几何信息确定的索引值,或基于零游程值确定的索引值,确定为第二索引值;也可以对基于第一属性系数的绝对值确定的索引值,和/或基于几何信息确定的索引值,和/或基于零游程值确定的索引值进行运算处理,从而可以获得第二索引值。
可以理解的是,在本申请的实施例中,在确定出当前点的两个颜色分量的属性系数之后,即确定出第一属性系数和第二属性系数之后,在根据第一属性系数的绝对值、第二属性系数的绝对值、几何信息以及零游程值中的至少一个确定第三颜色分量对应的第三索引值时,可以基于第一属性系数的绝对值确定的索引值,或基于第二属性系数的绝对值确定的索引值,或基于几何信息确定的索引值,或基于零游程值确定的索引值,确定为第三索引值;也可以对基于第一属性系数的绝对值确定的索引值,和/或基于第二属性系数的绝对值确定的索引值,和/或基于几何信息确定的索引值,和/或基于零游程值确定的索引值进行运算处理,从而可以获得第三索引值。
进一步地,在本申请的实施例中,在确定索引值之后,可以进一步根据索引值所指示的上下文确定当前点的编码系数(解码系数)。其中,编码系数(解码系数)可以为使用索引值指示的上下文进行编解码处理后所获得的值。
进一步地,在本申请的实施例中,在根据索引值所指示的上下文确定当前点的编码系数(解码系数)数之后,便可以进一步根据编码系数(解码系数)确定当前点的属性系数。其中,属性系数可以为基于编码系数(解码系数)所确定的属性信息的相关值。
可以理解的是,在本申请的实施例中,当前点的属性系数可以为当前点的属性信息的量化残差或量化后的变换系数。也就是说,属性系数可以为量化残差,也可以为量化后的变换系数。
需要说明的是,在本申请的实施例中,还可以使用一个1比特标志位(如自适应上下文标识信息)来表示自适应选择上下文的方式是否开启,这个标志位可以被放在高层语法元素的头信息attribute header中,这个标志位在一些特定的条件下有条件的分析,如果这个标志位不出现在码流中,可以默认该标志位的值为一个固定的值。
相应的,解码端需要解码该标志位,如果这个标志位不出现在码流中,则不解码,默认该标志位的值为一个固定的值。
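下面给出一段示意性的C++代码草图，示出该标志位的条件解析过程；其中 readBit 回调、flagPresent 条件以及标志位的缺省值均为示例性假设，实际缺省值以具体语法规定为准：
```cpp
#include <cstdint>
#include <functional>

// 示意：属性头中的自适应上下文标志位
struct AttributeHeader {
  bool adaptiveContextEnabled = true;   // 假设的缺省值
};

void parseAdaptiveContextFlag(AttributeHeader& hdr,
                              bool flagPresent,
                              const std::function<uint32_t()>& readBit) {
  if (flagPresent) {
    hdr.adaptiveContextEnabled = (readBit() != 0);   // 解码1比特标志位
  }
  // 标志位未出现在码流中时，不解码，保持缺省值
}
```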
需要说明的是,在本申请的实施例中,可以将自适应上下文标识信息理解为一个表明是否对点云中的节点使用自适应上下文的标志位。具体地,编码器可以确定作为自适应上下文标识信息的一个变量,从而可以通过该变量的取值来实现自适应上下文标识信息的确定。
示例性的,在本申请的实施例中,如果自适应上下文标识信息的取值为1,那么可以指示使用自适应上下文确定当前点的属性系数;如果自适应上下文标识信息的取值为0,那么可以指示不使用自适应上下文确定当前点的属性系数。
当然,也可以不进行自适应上下文标识信息设置流程。也就是说,可以预先设置是否使用自适应上下文确定当前点的属性系数,也可以预先设置是否使用自适应上下文确定当前点的全部颜色分量中的一个或多个颜色分量的属性系数。即是否对部分或者全部颜色分量使用自适应上下文,可以不依赖于自适应上下文标识信息的取值而独立执行。
本申请实施例提供了一种点云编解码方法，解码器确定索引值；根据索引值所指示的上下文确定当前点的解码系数；根据解码系数确定当前点的属性系数。编码器确定索引值；根据索引值所指示的上下文确定当前点的编码系数；根据编码系数确定当前点的属性系数。也就是说，在本申请的实施例中，在使用上下文对属性系数进行确定时，可以充分利用已经编码/解码的属性系数之间的相关性，以及相关参数来自适应选取不同的上下文进行编解码，从而能够引入多种不同的自适应上下文的模式，而不再仅局限于使用固定的上下文进行属性信息的编解码，从而可以提升点云属性的编解码性能。
基于上述实施例,在本申请的再一实施例中,基于前述实施例相同的发明构思,图10为编码器的组成结构示意图一,如图10所示,编码器20可以包括:第一确定单元21,其中,
所述第一确定单元21,配置为确定索引值;根据所述索引值所指示的上下文确定当前点的编码系数;根据所述编码系数确定所述当前点的属性系数。
可以理解地,在本实施例中,“单元”可以是部分电路、部分处理器、部分程序或软件等等,当然也可以是模块,还可以是非模块化的。而且在本实施例中的各组成部分可以集成在一个处理单元中,也可以是各个单元单独物理存在,也可以两个或两个以上单元集成在一个单元中。上述集成的单元既可以采用硬件的形式实现,也可以采用软件功能模块的形式实现。
所述集成的单元如果以软件功能模块的形式实现并非作为独立的产品进行销售或使用时,可以存储在一个计算机可读取存储介质中,基于这样的理解,本实施例的技术方案本质上或者说对现有技术做出贡献的部分或者该技术方案的全部或部分可以以软件产品的形式体现出来,该计算机软件产品存储在一个存储介质中,包括若干指令用以使得一台计算机设备(可以是个人计算机,服务器,或者网络设备等)或processor(处理器)执行本实施例所述方法的全部或部分步骤。而前述的存储介质包括:U盘、移动硬盘、只读存储器(Read Only Memory,ROM)、随机存取存储器(Random Access Memory,RAM)、磁碟或者光盘等各种可以存储程序代码的介质。
因此,本申请实施例提供了一种计算机可读存储介质,应用于编码器20,该计算机可读存储介质存储有计算机程序,所述计算机程序被第一处理器执行时实现前述实施例中任一项所述的方法。
基于上述编码器20的组成以及计算机可读存储介质,图11为编码器的组成结构示意图二,如图11所示,编码器20可以包括:第一存储器22和第一处理器23,第一通信接口24和第一总线系统25。第一存储器22、第一处理器23、第一通信接口24通过第一总线系统25耦合在一起。可理解,第一总线系统25用于实现这些组件之间的连接通信。第一总线系统25除包括数据总线之外,还包括电源总线、控制总线和状态信号总线。但是为了清楚说明起见,在图9中将各种总线都标为第一总线系统25。其中,
第一通信接口24,用于在与其他外部网元之间进行收发信息过程中,信号的接收和发送;
所述第一存储器22,用于存储能够在所述第一处理器上运行的计算机程序;
所述第一处理器23,用于在运行所述计算机程序时,确定索引值;根据所述索引值所指示的上下文确定当前点的编码系数;根据所述编码系数确定所述当前点的属性系数。
可以理解,本申请实施例中的第一存储器22可以是易失性存储器或非易失性存储器,或可包括易失性和非易失性存储器两者。其中,非易失性存储器可以是只读存储器(Read-Only Memory,ROM)、可编程只读存储器(Programmable ROM,PROM)、可擦除可编程只读存储器(Erasable PROM,EPROM)、电可擦除可编程只读存储器(Electrically EPROM,EEPROM)或闪存。易失性存储器可以是随机存取存储器(Random Access Memory,RAM),其用作外部高速缓存。通过示例性但不是限制性说明,许多形式的RAM可用,例如静态随机存取存储器(Static RAM,SRAM)、动态随机存取存储器(Dynamic RAM,DRAM)、同步动态随机存取存储器(Synchronous DRAM,SDRAM)、双倍数据速率同步动态随机存取存储器(Double Data Rate SDRAM,DDRSDRAM)、增强型同步动态随机存取存储器(Enhanced SDRAM,ESDRAM)、同步连接动态随机存取存储器(Synchlink DRAM,SLDRAM)和直接内存总线随机存取存储器(Direct Rambus RAM,DRRAM)。本申请描述的系统和方法的第一存储器22旨在包括但不限于这些和任意其它适合类型的存储器。
而第一处理器23可能是一种集成电路芯片,具有信号的处理能力。在实现过程中,上述方法的各步骤可以通过第一处理器23中的硬件的集成逻辑电路或者软件形式的指令完成。上述的第一处理器23可以是通用处理器、数字信号处理器(Digital Signal Processor,DSP)、专用集成电路(Application Specific Integrated Circuit,ASIC)、现成可编程门阵列(Field Programmable Gate Array,FPGA)或者其他可编程逻辑器件、分立门或者晶体管逻辑器件、分立硬件组件。可以实现或者执行本申请实施例中的公开的各方法、步骤及逻辑框图。通用处理器可以是微处理器或者该处理器也可以是任何常规的处理器等。结合本申请实施例所公开的方法的步骤可以直接体现为硬件译码处理器执行完成,或者用译码处理 器中的硬件及软件模块组合执行完成。软件模块可以位于随机存储器,闪存、只读存储器,可编程只读存储器或者电可擦写可编程存储器、寄存器等本领域成熟的存储介质中。该存储介质位于第一存储器22,第一处理器23读取第一存储器22中的信息,结合其硬件完成上述方法的步骤。
可以理解的是,本申请描述的这些实施例可以用硬件、软件、固件、中间件、微码或其组合来实现。对于硬件实现,处理单元可以实现在一个或多个专用集成电路(Application Specific Integrated Circuits,ASIC)、数字信号处理器(Digital Signal Processing,DSP)、数字信号处理设备(DSP Device,DSPD)、可编程逻辑设备(Programmable Logic Device,PLD)、现场可编程门阵列(Field-Programmable Gate Array,FPGA)、通用处理器、控制器、微控制器、微处理器、用于执行本申请所述功能的其它电子单元或其组合中。对于软件实现,可通过执行本申请所述功能的模块(例如过程、函数等)来实现本申请所述的技术。软件代码可存储在存储器中并通过处理器执行。存储器可以在处理器中或在处理器外部实现。
可选地,作为另一个实施例,第一处理器23还配置为在运行所述计算机程序时,执行前述实施例中任一项所述的方法。
图12为解码器的组成结构示意图一,如图12所示,解码器30可以包括:第二确定单元31;其中,
所述第二确定单元31，配置为确定索引值；根据所述索引值所指示的上下文确定当前点的解码系数；根据所述解码系数确定所述当前点的属性系数。
可以理解地,在本实施例中,“单元”可以是部分电路、部分处理器、部分程序或软件等等,当然也可以是模块,还可以是非模块化的。而且在本实施例中的各组成部分可以集成在一个处理单元中,也可以是各个单元单独物理存在,也可以两个或两个以上单元集成在一个单元中。上述集成的单元既可以采用硬件的形式实现,也可以采用软件功能模块的形式实现。
所述集成的单元如果以软件功能模块的形式实现并非作为独立的产品进行销售或使用时,可以存储在一个计算机可读取存储介质中,基于这样的理解,本实施例的技术方案本质上或者说对现有技术做出贡献的部分或者该技术方案的全部或部分可以以软件产品的形式体现出来,该计算机软件产品存储在一个存储介质中,包括若干指令用以使得一台计算机设备(可以是个人计算机,服务器,或者网络设备等)或processor(处理器)执行本实施例所述方法的全部或部分步骤。而前述的存储介质包括:U盘、移动硬盘、只读存储器(Read Only Memory,ROM)、随机存取存储器(Random Access Memory,RAM)、磁碟或者光盘等各种可以存储程序代码的介质。
因此,本申请实施例提供了一种计算机可读存储介质,应用于解码器30,该计算机可读存储介质存储有计算机程序,所述计算机程序被第一处理器执行时实现前述实施例中任一项所述的方法。
基于上述解码器30的组成以及计算机可读存储介质,图13为解码器的组成结构示意图二,如图13所示,解码器30可以包括:第二存储器32和第二处理器33,第二通信接口34和第二总线系统35。第二存储器32和第二处理器33,第二通信接口34通过第二总线系统35耦合在一起。可理解,第二总线系统35用于实现这些组件之间的连接通信。第二总线系统35除包括数据总线之外,还包括电源总线、控制总线和状态信号总线。但是为了清楚说明起见,在图11中将各种总线都标为第二总线系统35。其中,
第二通信接口34,用于在与其他外部网元之间进行收发信息过程中,信号的接收和发送;
所述第二存储器32,用于存储能够在所述第二处理器上运行的计算机程序;
所述第二处理器33，用于在运行所述计算机程序时，确定索引值；根据所述索引值所指示的上下文确定当前点的解码系数；根据所述解码系数确定所述当前点的属性系数。
可以理解,本申请实施例中的第二存储器32可以是易失性存储器或非易失性存储器,或可包括易失性和非易失性存储器两者。其中,非易失性存储器可以是只读存储器(Read-Only Memory,ROM)、可编程只读存储器(Programmable ROM,PROM)、可擦除可编程只读存储器(Erasable PROM,EPROM)、电可擦除可编程只读存储器(Electrically EPROM,EEPROM)或闪存。易失性存储器可以是随机存取存储器(Random Access Memory,RAM),其用作外部高速缓存。通过示例性但不是限制性说明,许多形式的RAM可用,例如静态随机存取存储器(Static RAM,SRAM)、动态随机存取存储器(Dynamic RAM,DRAM)、同步动态随机存取存储器(Synchronous DRAM,SDRAM)、双倍数据速率同步动态随机存取存储器(Double Data Rate SDRAM,DDRSDRAM)、增强型同步动态随机存取存储器(Enhanced SDRAM,ESDRAM)、同步连接动态随机存取存储器(Synchlink DRAM,SLDRAM)和直接内存总线随机存取存储器(Direct Rambus RAM,DRRAM)。本申请描述的系统和方法的第二存储器32旨在包括但不限于这些和任意其它适合类型的存储器。
而第二处理器33可能是一种集成电路芯片,具有信号的处理能力。在实现过程中,上述方法的各步骤可以通过第二处理器33中的硬件的集成逻辑电路或者软件形式的指令完成。上述的第二处理器33可以是通用处理器、数字信号处理器(Digital Signal Processor,DSP)、专用集成电路(Application Specific Integrated Circuit,ASIC)、现成可编程门阵列(Field Programmable Gate Array,FPGA)或者其他可编程逻辑器件、分立门或者晶体管逻辑器件、分立硬件组件。可以实现或者执行本申请实施例中的公开的各方法、步骤及逻辑框图。通用处理器可以是微处理器或者该处理器也可以是任何常规的处理器等。结合本申请实施例所公开的方法的步骤可以直接体现为硬件译码处理器执行完成,或者用译码处理器中的硬件及软件模块组合执行完成。软件模块可以位于随机存储器,闪存、只读存储器,可编程只读存储器或者电可擦写可编程存储器、寄存器等本领域成熟的存储介质中。该存储介质位于第二存储器32,第二处理器33读取第二存储器32中的信息,结合其硬件完成上述方法的步骤。
可以理解的是,本申请描述的这些实施例可以用硬件、软件、固件、中间件、微码或其组合来实现。对于硬件实现,处理单元可以实现在一个或多个专用集成电路(Application Specific Integrated Circuits,ASIC)、数字信号处理器(Digital Signal Processing,DSP)、数字信号处理设备(DSP Device,DSPD)、可编程逻辑设备(Programmable Logic Device,PLD)、现场可编程门阵列(Field-Programmable Gate Array,FPGA)、通用处理器、控制器、微控制器、微处理器、用于执行本申请所述功能的其它电子单元或其组合中。对于软件实现,可通过执行本申请所述功能的模块(例如过程、函数等)来实现本申请所述的技术。软件代码可存储在存储器中并通过处理器执行。存储器可以在处理器中或在处理器外部实现。
本申请实施例提供了一种编码器和解码器,在使用上下文对属性系数进行确定时,可以充分利用已经编码/解码的属性系数之间的相关性,以及相关参数来自适应选取不同的上下文进行编解码,从而能够引入多种不同的自适应上下文的 模式,而不再仅局限于使用固定的上下文进行属性信息的编解码,从而可以提升点云属性的编解码性能。
在本申请的又一实施例中,本申请实施例还提供一种码流,该码流是根据待编码信息进行比特编码生成的;其中,待编码信息至少包括:当前点的自适应上下文标识信息、当前点的几何信息、当前点对应的零游程值。
需要说明的是,在本申请的实施例中,术语“包括”、“包含”或者其任何其他变体意在涵盖非排他性的包含,从而使得包括一系列要素的过程、方法、物品或者装置不仅包括那些要素,而且还包括没有明确列出的其他要素,或者是还包括为这种过程、方法、物品或者装置所固有的要素。在没有更多限制的情况下,由语句“包括一个……”限定的要素,并不排除在包括该要素的过程、方法、物品或者装置中还存在另外的相同要素。
上述本申请实施例序号仅仅为了描述,不代表实施例的优劣。
本申请所提供的几个方法实施例中所揭露的方法,在不冲突的情况下可以任意组合,得到新的方法实施例。
本申请所提供的几个产品实施例中所揭露的特征,在不冲突的情况下可以任意组合,得到新的产品实施例。
本申请所提供的几个方法或设备实施例中所揭露的特征,在不冲突的情况下可以任意组合,得到新的方法实施例或设备实施例。
以上所述,仅为本申请的具体实施方式,但本申请的保护范围并不局限于此,任何熟悉本技术领域的技术人员在本申请揭露的技术范围内,可轻易想到变化或替换,都应涵盖在本申请的保护范围之内。因此,本申请的保护范围应以所述权利要求的保护范围为准。
工业实用性
本申请实施例提供了一种点云编解码方法、编码器、解码器、码流及存储介质，解码器确定索引值；根据索引值所指示的上下文确定当前点的解码系数；根据解码系数确定当前点的属性系数。编码器确定索引值；根据索引值所指示的上下文确定当前点的编码系数；根据编码系数确定当前点的属性系数。也就是说，在本申请的实施例中，在使用上下文对属性系数进行确定时，可以充分利用已经编码/解码的属性系数之间的相关性，以及相关参数来自适应选取不同的上下文进行编解码，从而能够引入多种不同的自适应上下文的模式，而不再仅局限于使用固定的上下文进行属性信息的编解码，从而可以提升点云属性的编解码性能。

Claims (62)

  1. A point cloud decoding method, applied to a decoder, the method comprising:
    determining an index value;
    determining a decoded coefficient of a current point according to a context indicated by the index value; and
    determining an attribute coefficient of the current point according to the decoded coefficient.
  2. The method according to claim 1, wherein
    the index value comprises at least one of a first index value, a second index value, and a third index value.
  3. The method according to claim 2, wherein the method further comprises:
    determining the first index value;
    determining a first decoded coefficient of the current point according to a first context indicated by the first index value; and
    determining a first attribute coefficient of the current point according to the first decoded coefficient.
  4. The method according to any one of claims 2 to 3, wherein the method further comprises:
    determining the second index value;
    determining a second decoded coefficient of the current point according to a second context indicated by the second index value; and
    determining a second attribute coefficient of the current point according to the second decoded coefficient.
  5. The method according to any one of claims 2 to 4, wherein the method further comprises:
    determining the third index value;
    determining a third decoded coefficient of the current point according to a third context indicated by the third index value; and
    determining a third attribute coefficient of the current point according to the third decoded coefficient.
  6. The method according to claim 5, wherein the method further comprises:
    decoding a bit stream to determine geometry information of the current point and a zero-run value corresponding to the current point.
  7. The method according to claim 6, wherein the method further comprises:
    determining the index value according to the geometry information and/or the zero-run value.
  8. The method according to claim 6, wherein the method further comprises:
    determining the index value according to at least one of an absolute value of the first attribute coefficient, the geometry information, and the zero-run value.
  9. The method according to claim 6, wherein the method further comprises:
    determining the index value according to at least one of the absolute value of the first attribute coefficient, an absolute value of the second attribute coefficient, the geometry information, and the zero-run value.
  10. The method according to any one of claims 7 to 9, wherein the method further comprises:
    performing an addition or subtraction operation on the zero-run value and a first value to determine a first operation result; and
    determining the index value according to the first operation result.
  11. The method according to any one of claims 7 to 9, wherein the method further comprises:
    performing an addition or subtraction operation on the zero-run value and a first value to determine a first operation result;
    determining a first value range corresponding to the first operation result; and
    determining the index value according to the first value range and a correspondence between first preset index values and value ranges.
  12. The method according to any one of claims 7 to 9, wherein the method further comprises:
    determining a second value range corresponding to the zero-run value; and
    determining the index value according to the second value range and a correspondence between second preset index values and value ranges.
  13. The method according to any one of claims 7 to 9, wherein the method further comprises:
    determining a position range corresponding to the geometry information; and
    determining the index value according to the position range and a correspondence between preset position ranges and index values.
  14. The method according to claim 8 or 9, wherein the method further comprises:
    setting the absolute value of the first attribute coefficient as the index value.
  15. The method according to claim 8 or 9, wherein the method further comprises:
    determining a third value range corresponding to the absolute value of the first attribute coefficient; and
    determining the index value according to the third value range and a correspondence between third preset index values and value ranges.
  16. The method according to claim 8 or 9, wherein the method further comprises:
    performing an addition or subtraction operation on the absolute value of the first attribute coefficient and a second value to determine a second operation result; and
    determining the index value according to the second operation result.
  17. The method according to claim 8 or 9, wherein the method further comprises:
    performing an addition or subtraction operation on the absolute value of the first attribute coefficient and a second value to determine a second operation result;
    determining a fourth value range corresponding to the second operation result; and
    determining the index value according to the fourth value range and a correspondence between fourth preset index values and value ranges.
  18. The method according to claim 9, wherein the method further comprises:
    setting the absolute value of the second attribute coefficient as the index value.
  19. The method according to claim 9, wherein the method further comprises:
    determining a fifth value range corresponding to the absolute value of the second attribute coefficient; and
    determining the index value according to the fifth value range and a correspondence between fifth preset index values and value ranges.
  20. The method according to claim 9, wherein the method further comprises:
    performing an addition or subtraction operation on the absolute value of the second attribute coefficient and a third value to determine a third operation result; and
    determining the index value according to the third operation result.
  21. The method according to claim 9, wherein the method further comprises:
    performing an addition or subtraction operation on the absolute value of the second attribute coefficient and a third value to determine a third operation result;
    determining a sixth value range corresponding to the third operation result; and
    determining the index value according to the sixth value range and a correspondence between sixth preset index values and value ranges.
  22. The method according to any one of claims 10 to 13, wherein the method further comprises:
    determining, as the first index value, an index value determined based on the geometry information or an index value determined based on the zero-run value; or
    performing operation processing on the index value determined based on the geometry information and the index value determined based on the zero-run value to obtain the first index value.
  23. The method according to any one of claims 10 to 17, wherein the method further comprises:
    determining, as the second index value, an index value determined based on the absolute value of the first attribute coefficient, or an index value determined based on the geometry information, or an index value determined based on the zero-run value; or
    performing operation processing on the index value determined based on the absolute value of the first attribute coefficient, and/or the index value determined based on the geometry information, and/or the index value determined based on the zero-run value, to obtain the second index value.
  24. The method according to any one of claims 10 to 21, wherein the method further comprises:
    determining, as the third index value, an index value determined based on the absolute value of the first attribute coefficient, or an index value determined based on the absolute value of the second attribute coefficient, or an index value determined based on the geometry information, or an index value determined based on the zero-run value; or
    performing operation processing on the index value determined based on the absolute value of the first attribute coefficient, and/or the index value determined based on the absolute value of the second attribute coefficient, and/or the index value determined based on the geometry information, and/or the index value determined based on the zero-run value, to obtain the third index value.
  25. The method according to claim 5, wherein the method further comprises:
    decoding a bit stream to determine adaptive context identification information of the current point; and
    if the adaptive context identification information indicates that an adaptive context is used to determine the attribute coefficient of the current point, performing the determination process of the first index value, and/or the second index value, and/or the third index value.
  26. The method according to claim 5, wherein the method further comprises:
    determining the first decoded coefficient according to a first preset context; and/or
    determining the second decoded coefficient according to a second preset context; and/or
    determining the third decoded coefficient according to a third preset context.
  27. The method according to claim 5, wherein
    the zero-run value corresponding to the current point comprises: a zero-run value of the current point, or a previous zero-run value of the current point, or a previous non-zero zero-run value of the current point.
  28. The method according to any one of claims 1 to 5, wherein the method further comprises:
    if the attribute coefficients of the current point are not all 0, decoding a bit stream to determine a sign of a non-zero attribute coefficient.
  29. A point cloud encoding method, applied to an encoder, the method comprising:
    determining an index value;
    determining a coding coefficient of a current point according to a context indicated by the index value; and
    determining an attribute coefficient of the current point according to the coding coefficient.
  30. The method according to claim 29, wherein
    the index value comprises at least one of a first index value, a second index value, and a third index value.
  31. The method according to claim 30, wherein the method further comprises:
    determining the first index value;
    determining a first coding coefficient of the current point according to a first context indicated by the first index value; and
    determining a first attribute coefficient of the current point according to the first coding coefficient.
  32. The method according to any one of claims 30 to 31, wherein the method further comprises:
    determining the second index value;
    determining a second coding coefficient of the current point according to a second context indicated by the second index value; and
    determining a second attribute coefficient of the current point according to the second coding coefficient.
  33. The method according to any one of claims 30 to 32, wherein the method further comprises:
    determining the third index value;
    determining a third coding coefficient of the current point according to a third context indicated by the third index value; and
    determining a third attribute coefficient of the current point according to the third coding coefficient.
  34. The method according to claim 33, wherein the method further comprises:
    determining geometry information of the current point and a zero-run value corresponding to the current point.
  35. The method according to claim 34, wherein the method further comprises:
    determining the index value according to the geometry information and/or the zero-run value.
  36. The method according to claim 34, wherein the method further comprises:
    determining the index value according to at least one of an absolute value of the first attribute coefficient, the geometry information, and the zero-run value.
  37. The method according to claim 34, wherein the method further comprises:
    determining the index value according to at least one of the absolute value of the first attribute coefficient, an absolute value of the second attribute coefficient, the geometry information, and the zero-run value.
  38. The method according to any one of claims 35 to 37, wherein the method further comprises:
    performing an addition or subtraction operation on the zero-run value and a first value to determine a first operation result; and
    determining the index value according to the first operation result.
  39. The method according to any one of claims 35 to 37, wherein the method further comprises:
    performing an addition or subtraction operation on the zero-run value and a first value to determine a first operation result;
    determining a first value range corresponding to the first operation result; and
    determining the index value according to the first value range and a correspondence between first preset index values and value ranges.
  40. The method according to any one of claims 35 to 37, wherein the method further comprises:
    determining a second value range corresponding to the zero-run value; and
    determining the index value according to the second value range and a correspondence between second preset index values and value ranges.
  41. The method according to any one of claims 35 to 37, wherein the method further comprises:
    determining a position range corresponding to the geometry information; and
    determining the index value according to the position range and a correspondence between preset position ranges and index values.
  42. The method according to claim 36 or 37, wherein the method further comprises:
    setting the absolute value of the first attribute coefficient as the index value.
  43. The method according to claim 36 or 37, wherein the method further comprises:
    determining a third value range corresponding to the absolute value of the first attribute coefficient; and
    determining the index value according to the third value range and a correspondence between third preset index values and value ranges.
  44. The method according to claim 36 or 37, wherein the method further comprises:
    performing an addition or subtraction operation on the absolute value of the first attribute coefficient and a second value to determine a second operation result; and
    determining the second operation result as the index value.
  45. The method according to claim 36 or 37, wherein the method further comprises:
    performing an addition or subtraction operation on the absolute value of the first attribute coefficient and a second value to determine a second operation result;
    determining a fourth value range corresponding to the second operation result; and
    determining the index value according to the fourth value range and a correspondence between fourth preset index values and value ranges.
  46. The method according to claim 37, wherein the method further comprises:
    setting the absolute value of the second attribute coefficient as the index value.
  47. The method according to claim 37, wherein the method further comprises:
    determining a fifth value range corresponding to the absolute value of the second attribute coefficient; and
    determining the index value according to the fifth value range and a correspondence between fifth preset index values and value ranges.
  48. The method according to claim 37, wherein the method further comprises:
    performing an addition or subtraction operation on the absolute value of the second attribute coefficient and a third value to determine a third operation result; and
    determining the third operation result as the index value.
  49. The method according to claim 37, wherein the method further comprises:
    performing an addition or subtraction operation on the absolute value of the second attribute coefficient and a third value to determine a third operation result;
    determining a sixth value range corresponding to the third operation result; and
    determining the index value according to the sixth value range and a correspondence between sixth preset index values and value ranges.
  50. The method according to any one of claims 38 to 41, wherein the method further comprises:
    determining, as the first index value, an index value determined based on the geometry information or an index value determined based on the zero-run value; or
    performing operation processing on the index value determined based on the geometry information and the index value determined based on the zero-run value to obtain the first index value.
  51. The method according to any one of claims 38 to 45, wherein the method further comprises:
    determining, as the second index value, an index value determined based on the absolute value of the first attribute coefficient, or an index value determined based on the geometry information, or an index value determined based on the zero-run value; or
    performing operation processing on the index value determined based on the absolute value of the first attribute coefficient, and/or the index value determined based on the geometry information, and/or the index value determined based on the zero-run value, to obtain the second index value.
  52. The method according to any one of claims 38 to 49, wherein the method further comprises:
    determining, as the third index value, an index value determined based on the absolute value of the first attribute coefficient, or an index value determined based on the absolute value of the second attribute coefficient, or an index value determined based on the geometry information, or an index value determined based on the zero-run value; or
    performing operation processing on the index value determined based on the absolute value of the first attribute coefficient, and/or the index value determined based on the absolute value of the second attribute coefficient, and/or the index value determined based on the geometry information, and/or the index value determined based on the zero-run value, to obtain the third index value.
  53. The method according to claim 33, wherein the method further comprises:
    if it is determined that an adaptive context is used to determine the attribute coefficient of the current point, setting adaptive context identification information of the current point, and writing the adaptive context identification information of the current point into a bit stream.
  54. The method according to claim 33, wherein the method further comprises:
    determining the first coding coefficient according to a first preset context; and/or
    determining the second coding coefficient according to a second preset context; and/or
    determining the third coding coefficient according to a third preset context.
  55. The method according to claim 33, wherein
    the zero-run value corresponding to the current point comprises: a zero-run value of the current point, or a previous zero-run value of the current point, or a previous non-zero zero-run value of the current point.
  56. The method according to any one of claims 29 to 33, wherein the method further comprises:
    if the attribute coefficients of the current point are not all 0, determining a sign of a non-zero attribute coefficient.
  57. An encoder, comprising a first determining unit, wherein
    the first determining unit is configured to determine an index value; determine a coding coefficient of a current point according to a context indicated by the index value; and determine an attribute coefficient of the current point according to the coding coefficient.
  58. An encoder, comprising a first memory and a first processor; wherein
    the first memory is used for storing a computer program capable of running on the first processor; and
    the first processor is used for performing the method according to any one of claims 29 to 56 when running the computer program.
  59. A decoder, comprising a second determining unit, wherein
    the second determining unit is configured to determine an index value; determine a decoded coefficient of a current point according to a context indicated by the index value; and determine an attribute coefficient of the current point according to the decoded coefficient.
  60. A decoder, comprising a second memory and a second processor; wherein
    the second memory is used for storing a computer program capable of running on the second processor; and
    the second processor is used for performing the method according to any one of claims 1 to 28 when running the computer program.
  61. A bit stream, wherein the bit stream is generated by performing bit encoding on information to be encoded; wherein the information to be encoded at least comprises: adaptive context identification information of a current point, geometry information of the current point, and a zero-run value corresponding to the current point.
  62. A computer storage medium, wherein the computer storage medium stores a computer program, and the computer program, when executed by a first processor, implements the method according to any one of claims 29 to 56, or, when executed by a second processor, implements the method according to any one of claims 1 to 28.
PCT/CN2022/132330 2022-11-16 2022-11-16 点云编解码方法、编码器、解码器、码流及存储介质 WO2024103304A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/CN2022/132330 WO2024103304A1 (zh) 2022-11-16 2022-11-16 点云编解码方法、编码器、解码器、码流及存储介质
PCT/CN2023/071279 WO2024103513A1 (zh) 2022-11-16 2023-01-09 点云编解码方法、编码器、解码器、码流及存储介质

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2022/132330 WO2024103304A1 (zh) 2022-11-16 2022-11-16 点云编解码方法、编码器、解码器、码流及存储介质

Publications (1)

Publication Number Publication Date
WO2024103304A1 (zh)

Family

ID=91083458

Family Applications (2)

Application Number Title Priority Date Filing Date
PCT/CN2022/132330 WO2024103304A1 (zh) 2022-11-16 2022-11-16 点云编解码方法、编码器、解码器、码流及存储介质
PCT/CN2023/071279 WO2024103513A1 (zh) 2022-11-16 2023-01-09 点云编解码方法、编码器、解码器、码流及存储介质

Family Applications After (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/071279 WO2024103513A1 (zh) 2022-11-16 2023-01-09 点云编解码方法、编码器、解码器、码流及存储介质

Country Status (1)

Country Link
WO (2) WO2024103304A1 (zh)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112352431A (zh) * 2019-09-30 2021-02-09 浙江大学 一种数据编码、解码方法、设备及存储介质
CN113473127A (zh) * 2020-03-30 2021-10-01 鹏城实验室 一种点云几何编码方法、解码方法、编码设备及解码设备
WO2021203924A1 (zh) * 2020-04-08 2021-10-14 Oppo广东移动通信有限公司 编码方法、解码方法、编码器、解码器以及存储介质
WO2022147015A1 (en) * 2020-12-29 2022-07-07 Qualcomm Incorporated Hybrid-tree coding for inter and intra prediction for geometry coding
CN114793484A (zh) * 2020-11-24 2022-07-26 浙江大学 点云编码方法、点云解码方法、装置及存储介质

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10861196B2 (en) * 2017-09-14 2020-12-08 Apple Inc. Point cloud compression
CN112262578B (zh) * 2019-03-21 2023-07-25 深圳市大疆创新科技有限公司 点云属性编码方法和装置以及点云属性解码方法和装置
TW202046729A (zh) * 2019-04-24 2020-12-16 美商松下電器(美國)知識產權公司 編碼裝置、解碼裝置、編碼方法、及解碼方法
KR20230173094A (ko) * 2021-04-15 2023-12-26 엘지전자 주식회사 포인트 클라우드 데이터 전송 방법, 포인트 클라우드데이터 전송 장치, 포인트 클라우드 데이터 수신 방법 및 포인트 클라우드 데이터 수신 장치


Also Published As

Publication number Publication date
WO2024103513A1 (zh) 2024-05-23
