WO2023097694A1 - Decoding method, encoding method, decoder, and encoder - Google Patents

Decoding method, encoding method, decoder, and encoder

Info

Publication number
WO2023097694A1
Authority
WO
WIPO (PCT)
Prior art keywords
component
attribute
value
residual value
quantization
Prior art date
Application number
PCT/CN2021/135529
Other languages
French (fr)
Chinese (zh)
Inventor
魏红莲
Original Assignee
Oppo广东移动通信有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Oppo广东移动通信有限公司
Priority to PCT/CN2021/135529
Publication of WO2023097694A1

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 19/00: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N 19/10: ... using adaptive coding
    • H04N 19/169: ... using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N 19/186: ... the unit being a colour or a chrominance component
    • H04N 19/42: ... characterised by implementation details or hardware specially adapted for video compression or decompression, e.g. dedicated software implementation
    • H04N 19/90: ... using coding techniques not provided for in groups H04N19/10-H04N19/85, e.g. fractals
    • H04N 19/91: Entropy coding, e.g. variable length coding [VLC] or arithmetic coding
    • H04N 19/93: Run-length coding

Definitions

  • the embodiments of the present application relate to the technical field of encoding and decoding, and more specifically, relate to a decoding method, an encoding method, a decoder, and an encoder.
  • Point clouds have begun to be widely used in various fields, for example, virtual/augmented reality, robotics, geographic information systems, the medical field, and so on.
  • A large number of points on the surface of an object can be accurately acquired, and a single scene often corresponds to hundreds of thousands of points.
  • Such a large number of points also poses challenges for computer storage and transmission, so point cloud compression and decompression has become a hot research issue.
  • In point cloud encoding, the position information of the point cloud is first encoded with an octree; at the same time, according to the octree-encoded position information of the current point, points used to predict the attribute value of the current point are selected from the already-encoded points. The attribute information of the current point is then predicted based on the selected points, and the difference between the original attribute value and the attribute prediction value is encoded, thereby realizing encoding of the point cloud.
  • The decompression process is the reverse of the point cloud compression process.
  • Embodiments of the present application provide a decoding method, an encoding method, a decoder, and an encoder, which can improve decompression performance.
  • the present application provides a decoding method, including:
  • decoding, from the code stream of the current point cloud, the attribute quantized residual value of the first component of the current point, the attribute quantized residual value of the second component, and the attribute quantized residual value of the third component in sequence; wherein the first component is a U component, a V component, a G component or a B component;
  • acquiring the attribute reconstruction value of the current point based on the attribute quantized residual value of the first component, the attribute quantized residual value of the second component, and the attribute quantized residual value of the third component.
  • the present application provides an encoding method, including:
  • determining the attribute quantized residual value of the first component, the attribute quantized residual value of the second component, and the attribute quantized residual value of the third component of the current point to be encoded in the current point cloud; wherein the first component is a U component, a V component, a G component or a B component;
  • the attribute quantized residual value of the first component, the attribute quantized residual value of the second component, and the attribute quantized residual value of the third component are sequentially encoded to obtain the code stream of the current point cloud.
  • the present application provides a decoder, including:
  • the decoding unit is used to sequentially decode the attribute quantized residual value of the first component of the current point, the attribute quantized residual value of the second component, and the attribute quantized residual value of the third component from the code stream of the current point cloud;
  • the first component is a U component, a V component, a G component or a B component;
  • An acquiring unit configured to acquire the attribute reconstruction value of the current point based on the attribute quantized residual value of the first component, the attribute quantized residual value of the second component, and the attribute quantized residual value of the third component.
  • the present application provides an encoder, including:
  • a determination unit, configured to determine the attribute quantized residual value of the first component, the attribute quantized residual value of the second component, and the attribute quantized residual value of the third component of the current point to be encoded in the current point cloud; wherein the first component is a U component, a V component, a G component or a B component;
  • the encoding unit is configured to sequentially encode the attribute quantized residual value of the first component, the attribute quantized residual value of the second component, and the attribute quantized residual value of the third component to obtain the code stream of the current point cloud.
  • the present application provides a codec device, including:
  • a processor adapted to implement computer instructions
  • a computer-readable storage medium that stores computer instructions, the computer instructions being suitable for being loaded by the processor to execute the encoding or decoding method in any one of the above-mentioned first aspect to second aspect or the implementations thereof.
  • There are one or more processors, and one or more memories.
  • the computer-readable storage medium may be integrated with the processor, or the computer-readable storage medium may be provided separately from the processor.
  • The embodiment of the present application provides a computer-readable storage medium that stores computer instructions; when the computer instructions are read and executed by a processor of a computer device, the computer device executes the encoding or decoding method in any one of the above-mentioned first aspect to second aspect or the implementations thereof.
  • Designing the component decoded first at the current point (that is, the first component) as a U component, a V component, a G component or a B component can improve decompression performance.
  • Fig. 1 is an example of a point cloud image provided by an embodiment of the present application.
  • FIG. 2 is a partially enlarged view of the point cloud image shown in FIG. 1 .
  • Fig. 3 is an example of point cloud images with six viewing angles provided by the embodiment of the present application.
  • Fig. 4 is a schematic block diagram of a coding framework provided by an embodiment of the present application.
  • Fig. 5 is a schematic block diagram of a decoding framework provided by an embodiment of the present application.
  • Fig. 6 is an example of a bounding box provided by an embodiment of the present application.
  • FIG. 7 is an example of performing octree division on a bounding box provided by an embodiment of the present application.
  • Fig. 11 shows the encoding order of Morton codes in three-dimensional space.
  • Fig. 12 is a schematic flowchart of a decoding method provided by an embodiment of the present application.
  • Fig. 13 is a schematic flowchart of an encoding method provided by an embodiment of the present application.
  • Fig. 14 is a schematic block diagram of a decoder provided by an embodiment of the present application.
  • Fig. 15 is a schematic block diagram of an encoder provided by an embodiment of the present application.
  • Fig. 16 is a schematic block diagram of a codec device provided by an embodiment of the present application.
  • A point cloud is a set of discrete points randomly distributed in space that expresses the spatial structure and surface properties of a 3D object or 3D scene.
  • Figure 1 and Figure 2 show the 3D point cloud image and local enlarged view respectively, and it can be seen that the point cloud surface is composed of densely distributed points.
  • A two-dimensional image has information expressed at every pixel and the pixels are distributed regularly, so there is no need to additionally record their position information; the points of a point cloud, however, are distributed randomly and irregularly in three-dimensional space, so the position of each point in space must be recorded in order to completely express the point cloud.
  • In an image, each acquired position has corresponding attribute information, usually an RGB color value that reflects the color of the object; for a point cloud, besides color, another common attribute corresponding to each point is the reflectance value, which reflects the surface material of the object. Therefore, point cloud data usually includes geometric information composed of three-dimensional position information (x, y, z), and attribute information composed of three-dimensional color information (r, g, b) and one-dimensional reflectance information (r).
  • Each point in the point cloud can include geometric information and attribute information, where the geometric information of each point refers to the Cartesian three-dimensional coordinate data of the point, and the attribute information of each point can include, but is not limited to, at least one of the following: color information, material information, and laser reflection intensity information.
  • the color information can be information on any color space.
  • the color information may be Red Green Blue (RGB) information.
  • the color information may also be brightness and chrominance (YCbCr, YUV) information. Among them, Y represents brightness (Luma), Cb (U) represents a blue chroma component, and Cr (V) represents a red chroma component.
  • Each point in the point cloud has the same amount of attribute information.
  • each point in the point cloud has two attribute information, color information and laser reflection intensity.
  • each point in the point cloud has three attribute information: color information, material information and laser reflection intensity information.
  • A point cloud image can have multiple viewing angles; for example, the point cloud image shown in Figure 3 has six viewing angles. The data storage format corresponding to the point cloud image consists of a file header information part and a data part; the header information includes the data format, the data representation type, the total number of points in the point cloud, and the content represented by the point cloud.
  • the data storage format of a point cloud image can be implemented as the following format:
  • the data format is ".ply" format, represented by ASCII code, the total number of points is 207242, and each point has three-dimensional position information xyz and three-dimensional color information rgb.
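As a hedged illustration of such a storage format, the following minimal Python sketch (not taken from the patent; the file name and field names are illustrative) writes an ASCII ".ply" file whose header declares xyz positions and rgb colors for a given number of points:

```python
def write_ply(path, points):
    """points: list of (x, y, z, r, g, b) tuples, one per point."""
    with open(path, "w") as f:
        f.write("ply\n")
        f.write("format ascii 1.0\n")            # ASCII representation, as in the example
        f.write(f"element vertex {len(points)}\n")
        for name in ("x", "y", "z"):             # three-dimensional position information
            f.write(f"property float {name}\n")
        for name in ("red", "green", "blue"):    # three-dimensional color information
            f.write(f"property uchar {name}\n")
        f.write("end_header\n")
        for x, y, z, r, g, b in points:
            f.write(f"{x} {y} {z} {r} {g} {b}\n")

# e.g. write_ply("cloud.ply", [(0.0, 0.0, 0.0, 255, 0, 0)])
```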
  • Point cloud can flexibly and conveniently express the spatial structure and surface properties of three-dimensional objects or scenes, and because point cloud is obtained by directly sampling real objects, it can provide a strong sense of reality under the premise of ensuring accuracy, so it is widely used.
  • point clouds can be divided into two categories, namely, machine-perceived point clouds and human-eye-perceived point clouds.
  • the application scenarios of machine perception point cloud include but are not limited to: autonomous navigation system, real-time inspection system, geographic information system, visual sorting robot, emergency rescue robot and other point cloud application scenarios.
  • the application scenarios of point cloud perceived by the human eye include but are not limited to: digital cultural heritage, free viewpoint broadcasting, 3D immersive communication, 3D immersive interaction and other point cloud application scenarios.
  • Based on how they are acquired, point clouds can be divided into dense point clouds and sparse point clouds; point clouds can also be divided into static point clouds and dynamic point clouds, and more specifically into three types, namely, the first type of static point cloud, the second type of dynamic point cloud, and the third type of dynamically acquired point cloud.
  • For the first type of static point cloud, the object is stationary and the device acquiring the point cloud is also stationary;
  • for the second type of dynamic point cloud, the object is moving but the device acquiring the point cloud is stationary;
  • for the third type of dynamically acquired point cloud, the device acquiring the point cloud is in motion.
  • Point clouds are mainly collected in the following ways: computer generation, 3D laser scanning, 3D photogrammetry, and so on.
  • Computers can generate point clouds of virtual three-dimensional objects and scenes; 3D laser scanning can obtain point clouds of static real-world three-dimensional objects or scenes, acquiring millions of points per second; 3D photogrammetry can obtain point clouds of dynamic real-world three-dimensional objects or scenes, acquiring tens of millions of points per second.
  • the point cloud of the surface of the object can be collected through acquisition equipment such as photoelectric radar, lidar, laser scanner, and multi-view camera.
  • the point cloud obtained according to the laser measurement principle may include the three-dimensional coordinate information of the point and the laser reflection intensity (reflectance) of the point.
  • the point cloud obtained according to the principle of photogrammetry may include the three-dimensional coordinate information of the point and the color information of the point.
  • the point cloud is obtained by combining the principles of laser measurement and photogrammetry, which may include the three-dimensional coordinate information of the point, the laser reflection intensity (reflectance) of the point and the color information of the point.
  • These technologies reduce the cost and time period of point cloud data acquisition, and improve the accuracy of the data.
  • In the medical field, point clouds of biological tissues and organs can be obtained from magnetic resonance imaging (MRI), computed tomography (CT), and electromagnetic positioning information. These technologies reduce the acquisition cost and time of point clouds and improve the accuracy of the data.
  • the transformation of the point cloud data acquisition method has made it possible to acquire a large amount of point cloud data. With the growth of application requirements, the processing of massive 3D point cloud data encounters the bottleneck of storage space and transmission bandwidth limitations.
  • point cloud compression has become a key issue to promote the development of the point cloud industry.
  • Point cloud compression generally compresses the geometric information and the attribute information of the point cloud separately: at the encoding end, the geometric information is first compressed in the geometry encoder, and the reconstructed geometry is then input into the attribute encoder as additional information to assist the compression of the point cloud attributes; at the decoding end, the geometric information of the point cloud is first decoded in the geometry decoder, and the decoded geometric information is then input into the attribute decoder as additional information to assist the decompression of the point cloud attributes.
  • the entire codec consists of preprocessing/postprocessing, geometric encoding/decoding, and attribute encoding/decoding.
  • Point clouds can be encoded and decoded by various types of encoding frameworks and decoding frameworks, respectively.
  • For example, the codec framework may be the Geometry Point Cloud Compression (G-PCC) codec framework or the Video Point Cloud Compression (V-PCC) codec framework provided by the Moving Picture Experts Group (MPEG), or the AVS-PCC codec framework, that is, the Point Cloud Compression Reference Platform (PCRM) framework, provided by the Audio Video Standard (AVS) task force.
  • G-PCC codec framework can be used to compress the first static point cloud and the third type of dynamically acquired point cloud
  • the V-PCC codec framework can be used to compress the second type of dynamic point cloud.
  • the G-PCC codec framework is also called point cloud codec TMC13, and the V-PCC codec framework is also called point cloud codec TMC2. Both G-PCC and AVS-PCC are aimed at static sparse point clouds, and their coding frameworks are roughly the same.
  • the codec framework applicable to the embodiment of the present application will be described below by taking the PCRM framework as an example.
  • Fig. 4 is a schematic block diagram of a coding framework provided by an embodiment of the present application.
  • the geometric information of point cloud and the attribute information corresponding to each point are encoded separately.
  • First, the original geometric information is preprocessed: the geometric origin is normalized to the minimum position of the point cloud space through coordinate translation, and the geometric information is converted from floating-point numbers to integers through coordinate quantization, which facilitates the subsequent regularization processing. Because quantization and rounding make the geometric information of some points identical, it must be decided at this stage whether to remove duplicate points; quantization and the removal of duplicate points belong to the preprocessing process. Then, geometric coding is performed on the regularized geometric information, that is, the point cloud space is recursively divided using an octree structure: each time, the current block is divided into eight sub-blocks of the same size, and the occupancy of each sub-block is determined.
  • the geometric information is reconstructed, and the attribute information is encoded by using the reconstructed geometric information.
  • Attribute coding is mainly performed on color and reflectance information. First, it is determined whether to perform color space conversion; if the attribute information being processed is color information, the original color needs to be converted into the YUV color space, which better matches the visual characteristics of the human eye. Then, in the case of geometrically lossy encoding, since the geometric information changes after geometric encoding, attribute values need to be reassigned to each point after geometric encoding so that the attribute error between the reconstructed point cloud and the original point cloud is minimized; this step is called attribute interpolation or attribute recoloring.
  • Next, attribute encoding is performed on the preprocessed attribute information, which is divided into attribute prediction and attribute transformation; the attribute prediction process consists of reordering the point cloud and then performing attribute prediction.
  • The reordering methods include Morton reordering and Hilbert reordering; for example, the AVS coding framework uses the Hilbert code to reorder the point cloud. The sorted point cloud uses a differential method for attribute prediction: specifically, if the geometric information of the current point to be encoded is the same as that of the previous encoded point, that is, it is a repeated point, the reconstructed attribute value of the repeated point is used as the attribute prediction value of the current point to be encoded; otherwise, the m points preceding the current point in Hilbert order are selected as neighbor candidate points, the Manhattan distance between the geometric information of each candidate and that of the current point to be encoded is calculated, the n closest points are determined as the prediction points of the current point to be encoded, and the reciprocal of the distance is used as the weight to compute the weighted average of the attributes of the n neighbors as the attribute prediction value of the current point to be encoded.
  • the attribute prediction value of the current point to be encoded can be obtained in the following ways:
  • predR = (1/W1 × ref1 + 1/W2 × ref2 + 1/W3 × ref3) / (1/W1 + 1/W2 + 1/W3),
  • where W1, W2, W3 represent the geometric distances between prediction point 1, prediction point 2, prediction point 3 and the current point to be coded, and
  • ref1, ref2, ref3 represent the attribute reconstruction values of prediction point 1, prediction point 2, and prediction point 3.
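As a hedged illustration of this inverse-distance-weighted prediction (a minimal sketch; the function name and input layout are assumptions, and the repeated-point case handled separately above is omitted):

```python
def predict_attribute(neighbors):
    """neighbors: list of (W, ref) pairs, where W is the geometric distance of a
    prediction point to the current point (non-zero) and ref is its attribute
    reconstruction value. Returns predR = (sum 1/Wi * refi) / (sum 1/Wi)."""
    numerator = sum(ref / w for w, ref in neighbors)
    denominator = sum(1.0 / w for w, _ in neighbors)
    return numerator / denominator

# e.g. three prediction points at distances 1, 2 and 4 with attribute values 100, 104, 96
print(predict_attribute([(1, 100), (2, 104), (4, 96)]))
```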
  • Then, the residual value of the current point to be encoded is obtained based on its attribute prediction value, where the residual value is the difference between the original attribute value and the attribute prediction value of the current point to be encoded; finally, the residual value is quantized, and the quantized residual is input into the attribute entropy encoder to form the attribute code stream.
  • Fig. 5 is a schematic block diagram of a decoding framework provided by an embodiment of the present application.
  • the geometry and attributes are also decoded separately.
  • In the geometric decoding part, the geometry code stream is first entropy decoded to obtain the geometric information of each point; then the octree structure is constructed in the same way as in geometric encoding, and the geometry is reconstructed through coordinate transformation in combination with the decoded geometric information.
  • In the attribute decoding part, the Morton order is constructed in the same way as at the encoding side, and the attribute code stream is entropy decoded to obtain the quantized residual information; inverse quantization is then performed to obtain the point cloud residual. Similarly, the attribute prediction value of the current point to be decoded is obtained in the same manner as in attribute encoding, and the attribute prediction value is added to the residual value to restore the YUV attribute value of the current point to be decoded; finally, the decoded attribute information is obtained through an inverse color space transformation.
  • x_min = min(x_0, x_1, ..., x_{K-1});
  • y_min = min(y_0, y_1, ..., y_{K-1});
  • z_min = min(z_0, z_1, ..., z_{K-1});
  • x_max = max(x_0, x_1, ..., x_{K-1});
  • y_max = max(y_0, y_1, ..., y_{K-1});
  • z_max = max(z_0, z_1, ..., z_{K-1}).
  • The origin (x_origin, y_origin, z_origin) of the bounding box can be calculated as follows, where
  • floor() represents the rounding-down operation, and
  • int() represents the integer conversion operation.
  • the size of the bounding box in the x, y, and z directions can be calculated as follows:
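The exact origin and size formulas are not reproduced in the text above, so the following minimal Python sketch uses a conventional floor/ceil form as an assumption consistent with the floor()/int() operations mentioned:

```python
import math

def bounding_box(points):
    """points: list of (x, y, z) tuples. Returns the bounding-box origin and its
    size along the x, y and z directions (assumed formulation, not the patent's)."""
    xs, ys, zs = zip(*points)
    x_min, y_min, z_min = min(xs), min(ys), min(zs)
    x_max, y_max, z_max = max(xs), max(ys), max(zs)
    origin = (int(math.floor(x_min)), int(math.floor(y_min)), int(math.floor(z_min)))
    size = (int(math.ceil(x_max)) - origin[0] + 1,
            int(math.ceil(y_max)) - origin[1] + 1,
            int(math.ceil(z_max)) - origin[2] + 1)
    return origin, size

# e.g. bounding_box([(0.2, 1.5, 3.0), (4.7, 2.1, 0.4)])
```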
  • On the basis of the bounding box, the bounding box is first divided into an octree, obtaining eight sub-blocks each time; the non-empty sub-blocks (those containing points) are then further divided into octrees, and this division is performed recursively until a certain depth is reached. A non-empty sub-block of the final size is called a voxel; each voxel contains one or more points, the geometric positions of these points are normalized to the center point of the voxel, and the attribute value of the center point is the average of the attribute values of all points in the voxel.
  • Regularizing the point cloud into blocks in space facilitates describing the relationships between points in the point cloud, expressing a specific encoding order, and encoding each voxel in a determined order, that is, encoding the point (or "node") represented by each voxel; a commonly used encoding order is the cross-separated Morton order.
  • FIG. 8 to 10 show the encoding sequence of Morton codes in two-dimensional space.
  • Fig. 11 shows the encoding order of Morton codes in three-dimensional space. The order of the arrows indicates the encoding order of the points under the Morton order.
  • Figure 8 shows the "z"-shaped Morton coding sequence of 2*2 pixels in two-dimensional space
  • Figure 9 shows the "z"-shaped Morton coding sequence between four 2*2 blocks in two-dimensional space
  • FIG. 10 shows the Morton coding order of the "z” shape between four 4*4 blocks in a two-dimensional space, which constitutes the Morton coding order of the entire 8*8 block.
  • the Morton coding sequence extended to three-dimensional space is shown in Figure 11.
  • Figure 11 shows 16 points; the Morton coding order inside each "z" and between the "z" shapes is to encode first along the x-axis, then along the y-axis, and finally along the z-axis.
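A Morton code is obtained by interleaving the bits of the x, y and z coordinates; sorting points by this code produces the "z"-shaped traversal described above. The sketch below is a hedged illustration (bit-ordering conventions vary between codecs; here x is placed in the least significant position so that x varies fastest, then y, then z):

```python
def morton3d(x: int, y: int, z: int, bits: int = 10) -> int:
    """Interleave the bits of x, y and z into a single Morton code."""
    code = 0
    for i in range(bits):
        code |= ((x >> i) & 1) << (3 * i)        # x bit goes to the lowest position of the triple
        code |= ((y >> i) & 1) << (3 * i + 1)
        code |= ((z >> i) & 1) << (3 * i + 2)
    return code

# Sorting points by Morton code yields the traversal order used for encoding.
points = [(1, 1, 0), (0, 0, 0), (1, 0, 0), (0, 1, 0)]
print(sorted(points, key=lambda p: morton3d(*p)))
```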
  • For the color attribute, the attribute intra-frame prediction part of point cloud compression mainly refers to the points adjacent to the current point in order to predict the current point.
  • The difference information is transmitted to the decoding end; after the decoding end receives and parses the code stream, it obtains the residual information through steps such as inverse transformation and inverse quantization, derives the attribute prediction value through the same prediction process, and obtains the attribute reconstruction value of the current point by superimposing the prediction value on the residual information.
  • the C1 test conditions may refer to test conditions for limit-lossy geometry compression and lossy attributes compression.
  • the C2 test conditions may refer to test conditions for geometric lossless compression and lossy attributes compression.
  • the conversion formula for converting the color attribute of the input point cloud from RGB space to YUV space may be as follows:
  • V = 0.5*R - 0.454153*G - 0.045847*B + 128;
  • the inverse transformation formula for converting the color attribute of the input point cloud from YUV space to RGB space can be as follows:
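Only the V-component equation appears explicitly in the text above; the sketch below is a hedged illustration in which the Y and U coefficients and the inverse transform are the matching BT.709-style values (an assumption, not the patent's exact formulas):

```python
def rgb_to_yuv(r, g, b):
    """Forward conversion; only the V line mirrors the equation given in the text."""
    y = 0.2126 * r + 0.7152 * g + 0.0722 * b                 # assumed
    u = -0.114572 * r - 0.385428 * g + 0.5 * b + 128         # assumed
    v = 0.5 * r - 0.454153 * g - 0.045847 * b + 128          # as in the text
    return y, u, v

def yuv_to_rgb(y, u, v):
    """Assumed inverse of the conversion above."""
    r = y + 1.5748 * (v - 128)
    g = y - 0.187324 * (u - 128) - 0.468124 * (v - 128)
    b = y + 1.8556 * (u - 128)
    return r, g, b

# round-trip check: yuv_to_rgb(*rgb_to_yuv(200, 120, 40)) returns approximately (200, 120, 40)
```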
  • the point cloud in RGB format can be directly used as the input point cloud.
  • the C3 test conditions may refer to test conditions for geometrically lossless compression and limit-lossy attributes compression.
  • C4 test conditions may refer to test conditions for geometric lossless compression and attribute lossless compression.
  • C3 test conditions and C4 test conditions avoid errors in attribute information caused by color space conversion, and can improve compression performance.
  • the attribute information of the processed point cloud will be input into the point cloud attribute compression encoder in the order of YUV (or RGB), and each point will be predicted, quantized, and entropy encoded in sequence according to the index order of the points in the point cloud.
  • For example, if the format of the color information of the current point is the YUV format, the encoder can calculate the attribute prediction value of the Y component of the current point according to the weight value of each prediction point and the attribute reconstruction value of the Y component of each prediction point, calculate the attribute prediction value of the U component of the current point according to the weight value of each prediction point and the attribute reconstruction value of the U component of each prediction point, and calculate the attribute prediction value of the V component of the current point according to the weight value of each prediction point and the attribute reconstruction value of the V component of each prediction point.
  • For another example, if the format of the color information of the current point is the RGB format, the encoder can calculate the attribute prediction value of the R component of the current point according to the weight value of each prediction point and the attribute reconstruction value of the R component of each prediction point, calculate the attribute prediction value of the G component of the current point according to the weight value of each prediction point and the attribute reconstruction value of the G component of each prediction point, and calculate the attribute prediction value of the B component of the current point according to the weight value of each prediction point and the attribute reconstruction value of the B component of each prediction point.
  • The attribute prediction value of each component can be determined by the following formula (a weighted average over the N prediction points):
  • a_p = (w_1 × a_1 + w_2 × a_2 + ... + w_N × a_N) / (w_1 + w_2 + ... + w_N),
  • where a_p represents the attribute prediction value of the component,
  • w_i represents the weight value of the i-th prediction point among the N prediction points, and
  • a_i represents the attribute value of the component of the i-th prediction point among the N prediction points.
  • each component can include Y component, U component and V component; if the format of the color information of the current point is RGB format, then each component can include R component, G component and the B component.
  • the encoder calculates the attribute residual value of each component according to the attribute prediction value of each component and the cross-component prediction value of each component.
  • For example, if the color attribute of the current point is in the YUV space, the attribute residual value of each component of the current point can be calculated sequentially in the order of the Y component, the U component, and the V component; if the color attribute of the current point is in the RGB space, the attribute residual value of each component of the current point can be calculated sequentially in the order of the R component, the G component, and the B component.
  • The attribute residual value of each component can be calculated according to the following formula:
  • delta = currValue - predictor - residualPrevComponent,
  • where delta represents the attribute residual value of the component,
  • currValue represents the original (real) value of the component,
  • predictor represents the attribute prediction value of the component, and
  • residualPrevComponent represents the cross-component prediction value of the component.
  • each component can include Y component, U component and V component; if the format of the color information of the current point is RGB format, then each component can include R component, G component and the B component.
  • The cross-component prediction values of the Y component and the R component can be set to 0; for the U component, the V component, the G component, and the B component, the cross-component prediction value is determined according to the attribute residual reconstruction values obtained by inverse quantization of the attribute quantized residual values of the previously encoded components.
  • For example, the cross-component prediction value of the U component can be the attribute residual reconstruction value obtained by inverse quantization of the attribute quantized residual value of the Y component,
  • and the cross-component prediction value of the G component can be the attribute residual reconstruction value obtained by inverse quantization of the attribute quantized residual value of the R component.
  • The cross-component prediction value of the V component can be the sum of the attribute residual reconstruction value obtained by dequantizing the attribute quantized residual value of the Y component and the attribute residual reconstruction value obtained by dequantizing the attribute quantized residual value of the U component,
  • and the cross-component prediction value of the B component can be the sum of the attribute residual reconstruction value obtained by dequantizing the attribute quantized residual value of the R component and the attribute residual reconstruction value obtained by dequantizing the attribute quantized residual value of the G component.
  • After the encoder obtains the attribute residual value of each component, it performs a quantization operation on the attribute residual value of each component to obtain the attribute quantized residual value of each component of the current point. For example, if the format of the color information of the current point is the YUV format, the attribute residual values of the components of the current point are quantized sequentially in the order of the Y component, the U component, and the V component to obtain the attribute quantized residual value of each component. For another example, if the format of the color information of the current point is the RGB format, the attribute residual values of the components of the current point are quantized sequentially in the order of the R component, the G component, and the B component to obtain the attribute quantized residual value of each component.
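The following minimal Python sketch illustrates the residual calculation, cross-component prediction and quantization described above (the function names and the uniform scalar quantizer are placeholders and assumptions, not the patent's syntax):

```python
def quantize(value, qstep=1):
    """Placeholder uniform scalar quantizer (assumption)."""
    return round(value / qstep)

def dequantize(level, qstep=1):
    return level * qstep

def encode_residuals(orig, pred, qstep=1):
    """orig / pred: per-component dicts in coding order, e.g. {"Y": ..., "U": ..., "V": ...}.
    Implements delta = currValue - predictor - residualPrevComponent, where the
    cross-component prediction of a component is the sum of the dequantized
    residuals of the previously coded components (0 for the first component)."""
    quantized = {}
    recon_residuals = []
    for c in orig:
        cross = sum(recon_residuals)              # cross-component prediction value
        delta = orig[c] - pred[c] - cross         # attribute residual value
        quantized[c] = quantize(delta, qstep)     # attribute quantized residual value
        recon_residuals.append(dequantize(quantized[c], qstep))
    return quantized

print(encode_residuals({"Y": 120, "U": 60, "V": 200}, {"Y": 118, "U": 63, "V": 196}))
```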
  • the encoder performs entropy encoding on the attribute quantization residual value of each component of the current point to obtain the code stream of the point cloud.
  • entropy encoding is performed on the attribute quantization residual value of each component in the order of Y component, U component, and V component to obtain the code stream of the point cloud.
  • entropy encoding is performed on the attribute quantization residual value of each component in the order of R component, G component, and B component to obtain the code stream of the point cloud.
  • the entropy coding may be performed on the attribute quantization residual value of each component of the current point according to the following steps:
  • another flag flagyu_rg is introduced, which can be used to indicate whether the attribute residual quantization value of the Y component and the attribute residual quantization value of the U component are both 0 for points whose attribute information format is YUV format.
  • the encoder judges whether the attribute quantization residual value of the Y component (or R component) is 0, and if it is 0, executes the following 3), otherwise executes the following 4).
  • The encoder encodes the value of flagyu_rg and judges whether the attribute quantized residual value of the U component (or G component) is 0; if the attribute quantized residual value of the U component (or G component) is 0, only the attribute quantized residual value of the V component (or B component) is encoded, otherwise the attribute quantized residual values of the U component and the V component (or the G component and the B component) are encoded sequentially.
  • the encoder encodes the value of flagyu_rg, and judges whether the attribute quantization residual value of the U component is 0, if the attribute quantization residual value of the U component is 0 , then encode the attribute quantized residual value of the V component, otherwise encode the attribute residual quantized value of the U component and the attribute residual quantized value of the V component in sequence.
  • the encoder encodes the value of flagyu_rg and judges whether the attribute quantization residual value of the G component is 0. If the attribute quantization residual value of the G component is 0, the attribute quantization residual value of the B component is encoded, otherwise, the attribute residual quantization value of the G component and the attribute residual quantization value of the B component are encoded sequentially.
  • the encoder sequentially encodes the attribute quantized residual values of the Y component, U component, and V component (or R component, G component, and B component).
  • the encoder encodes the attribute quantization residual value of the Y component, the attribute quantization residual value of the U component, and the attribute quantization residual value of the V component in sequence.
  • the encoder sequentially encodes the attribute quantization residual value of the R component, the attribute quantization residual value of the G component, and the attribute quantization residual value of the B component.
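The flag-driven encoding decisions above can be summarized by the following hedged sketch (the write_flag/write_value hooks stand in for the entropy coder and are assumptions; the dict iteration order is the coding order):

```python
def encode_quantized_residuals(residuals, write_flag, write_value):
    """residuals: quantized residuals in coding order, e.g. {"Y": 0, "U": 0, "V": 3}.
    flag_r = 0 signals that the first component's residual is zero;
    flagyu_rg = 0 signals that the first two components' residuals are both zero."""
    first, second, third = residuals
    if residuals[first] != 0:
        write_flag("flag_r", 1)
        for c in (first, second, third):
            write_value(residuals[c])
    else:
        write_flag("flag_r", 0)
        if residuals[second] == 0:
            write_flag("flagyu_rg", 0)
            write_value(residuals[third])
        else:
            write_flag("flagyu_rg", 1)
            write_value(residuals[second])
            write_value(residuals[third])

# e.g. encode_quantized_residuals({"Y": 0, "U": 0, "V": 3}, print, print)
```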
  • For the attribute quantized residual value of each component, the encoder performs inverse quantization in the order of the Y component, the U component, and the V component (or the R component, the G component, and the B component), and then adds the attribute prediction value and the cross-component prediction value of the corresponding component to generate the attribute reconstruction value of the corresponding component of the current point.
  • For example, if the format of the color information of the current point is the YUV format, then for the attribute quantized residual value of each component, the encoder performs inverse quantization in the order of the Y component, the U component, and the V component, and then adds the attribute prediction value and the cross-component prediction value of the corresponding component to generate the attribute reconstruction value of the corresponding component of the current point. For another example, if the format of the color information of the current point is the RGB format, then for the attribute quantized residual value of each component, the encoder performs inverse quantization in the order of the R component, the G component, and the B component, and then adds the attribute prediction value and the cross-component prediction value of the corresponding component to generate the attribute reconstruction value of the corresponding component of the current point.
  • The attribute reconstruction value of each component of the current point can be generated according to the following formula:
  • attribute reconstruction value = attribute residual reconstruction value after inverse quantization + predictor + residualPrevComponent,
  • where predictor indicates the attribute prediction value of the component and residualPrevComponent indicates the cross-component prediction value of the component.
  • each component can include Y component, U component and V component; if the format of the color information of the current point is RGB format, then each component can include R component, G component and the B component.
  • In step c), the attribute quantized residual value of each component is dequantized to obtain the attribute residual reconstruction value of each component; therefore, the generation process of the attribute reconstruction value of each component in step e) can also be integrated into step c). In addition, if the generation process of the attribute reconstruction value of each component in step e) is integrated into step c), the attribute reconstruction values of the components can also be generated by performing inverse quantization in the order of the U component, the Y component, and the V component (or the G component, the R component, and the B component). This application does not specifically limit this.
  • For example, if the format of the color information of the current point is the YUV format, the encoder can calculate the attribute prediction value of the Y component of the current point according to the weight value of each prediction point and the attribute reconstruction value of the Y component of each prediction point, calculate the attribute prediction value of the U component of the current point according to the weight value of each prediction point and the attribute reconstruction value of the U component of each prediction point, and calculate the attribute prediction value of the V component of the current point according to the weight value of each prediction point and the attribute reconstruction value of the V component of each prediction point.
  • For another example, if the format of the color information of the current point is the RGB format, the encoder can calculate the attribute prediction value of the R component of the current point according to the weight value of each prediction point and the attribute reconstruction value of the R component of each prediction point, calculate the attribute prediction value of the G component of the current point according to the weight value of each prediction point and the attribute reconstruction value of the G component of each prediction point, and calculate the attribute prediction value of the B component of the current point according to the weight value of each prediction point and the attribute reconstruction value of the B component of each prediction point.
  • The attribute prediction value of each component can be determined by the following formula (a weighted average over the N prediction points):
  • a_p = (w_1 × a_1 + w_2 × a_2 + ... + w_N × a_N) / (w_1 + w_2 + ... + w_N),
  • where a_p represents the attribute prediction value of the component,
  • w_i represents the weight value of the i-th prediction point among the N prediction points, and
  • a_i represents the attribute value of the component of the i-th prediction point among the N prediction points.
  • each component can include Y component, U component and V component; if the format of the color information of the current point is RGB format, then each component can include R component, G component and the B component.
  • the decoder decodes the attribute quantization residual value of each component of the current point from the code stream in the order of Y component, U component, and V component.
  • If the format of the color information of the current point is the RGB format, the decoder decodes the attribute quantized residual value of each component of the current point from the code stream in the order of the R component, the G component, and the B component.
  • For example, if the color information format of the current point is the YUV format, the decoder decodes and obtains the flag bit flag_r from the code stream; if flag_r is 0, the attribute quantized residual value of the Y component is 0, and if flag_r is 1, the attribute quantized residual value of the Y component is not 0.
  • the decoder decodes and obtains the flag bit flag_r from the code stream. If flag_r is 0, the attribute quantization residual value of the R component is 0. If flag_r is 1, Then the attribute quantization residual value of the R component is not 0.
  • Further, the decoder decodes and obtains the flag bit flagyu_rg from the code stream; if flagyu_rg is 0, the attribute quantized residual value of the U component is 0, and the attribute quantized residual value of the V component is decoded from the code stream; otherwise, the attribute quantized residual value of the U component and the attribute quantized residual value of the V component are sequentially decoded from the code stream. For another example, if the color information format of the current point is the RGB format, the decoder decodes the flag bit flagyu_rg from the code stream.
  • the decoder decodes and obtains the flag bit flag_r from the code stream, and if flag_r is 1, the decoder sequentially decodes and obtains the attribute quantization residual value of the Y component from the code stream , the attribute quantized residual value of the U component and the attribute quantized residual value of the V component.
  • the decoder decodes from the code stream to obtain the flag bit flag_r, if flag_r is 1, the decoder sequentially decodes from the code stream to obtain the attribute quantization residual of the R component value, the attribute quantized residual value of the G component, and the attribute quantized residual value of the B component.
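As a hedged mirror image of the encoder-side sketch above (the reader hooks are placeholders; pass order=("R", "G", "B") for RGB input):

```python
def decode_quantized_residuals(read_flag, read_value, order=("Y", "U", "V")):
    """Returns the quantized residual of each component of the current point."""
    first, second, third = order
    out = {}
    if read_flag("flag_r") == 0:
        out[first] = 0                      # residual of the first component is zero
        if read_flag("flagyu_rg") == 0:
            out[second] = 0                 # residuals of the first two components are both zero
        else:
            out[second] = read_value()
        out[third] = read_value()
    else:
        out[first] = read_value()
        out[second] = read_value()
        out[third] = read_value()
    return out
```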
  • For the attribute quantized residual value of each component, the encoder performs inverse quantization in the order of the Y component, the U component, and the V component (or the R component, the G component, and the B component), and then adds the attribute prediction value and the cross-component prediction value of the corresponding component to generate the attribute reconstruction value of the corresponding component of the current point.
  • For example, if the format of the color information of the current point is the YUV format, then for the attribute quantized residual value of each component, the encoder performs inverse quantization in the order of the Y component, the U component, and the V component, and then adds the attribute prediction value and the cross-component prediction value of the corresponding component to generate the attribute reconstruction value of the corresponding component of the current point. For another example, if the format of the color information of the current point is the RGB format, then for the attribute quantized residual value of each component, the encoder performs inverse quantization in the order of the R component, the G component, and the B component, and then adds the attribute prediction value and the cross-component prediction value of the corresponding component to generate the attribute reconstruction value of the corresponding component of the current point.
  • The attribute reconstruction value of each component of the current point can be generated according to the following formula:
  • attribute reconstruction value = attribute residual reconstruction value after inverse quantization + predictor + residualPrevComponent,
  • where predictor indicates the attribute prediction value of the component and residualPrevComponent indicates the cross-component prediction value of the component.
  • each component can include Y component, U component and V component; if the format of the color information of the current point is RGB format, then each component can include R component, G component and the B component.
  • If the format of the color information of the current point is the YUV format, the encoder encodes the attribute quantized residual value of each component of the current point in the order of the Y component, the U component, and the V component; if the format of the color information of the current point is the RGB format, the encoder encodes the attribute quantized residual value of each component in the order of the R component, the G component, and the B component. Correspondingly, if the format of the color information of the current point is the YUV format, the decoder decodes the attribute quantized residual value of each component of the current point in the order of the Y component, the U component, and the V component; if the format of the color information of the current point is the RGB format, the decoder decodes the attribute quantized residual value of each component in the order of the R component, the G component, and the B component.
  • the decoder needs to decode two flag bits when decoding the attribute quantization residual value of each component of the current point.
  • Specifically, a flag bit flag_r is introduced: for a point whose attribute information format is the YUV format, it can be used to indicate whether the attribute quantized residual value of the Y component is 0, and for a point whose attribute information format is the RGB format, it can be used to indicate whether the attribute quantized residual value of the R component is 0.
  • another flag flagyu_rg is introduced, which can be used to indicate whether the attribute residual quantization value of the Y component and the attribute residual quantization value of the U component are both 0 for points whose attribute information format is YUV format.
  • In view of this, the present application provides a decoding method, which improves decoding efficiency by changing the decoding order of the components relative to the scheme described above.
  • Fig. 12 is a schematic flowchart of a decoding method 100 provided by an embodiment of the present application.
  • the method 100 can be executed by a decoder or a decoding framework, such as the decoding framework shown in FIG. 5 .
  • the decoding method 100 may include:
  • S110: decode, from the code stream of the current point cloud, the attribute quantized residual value of the first component of the current point, the attribute quantized residual value of the second component, and the attribute quantized residual value of the third component in sequence; wherein the first component is a U component, a V component, a G component or a B component;
  • S120: acquire the attribute reconstruction value of the current point based on the attribute quantized residual value of the first component, the attribute quantized residual value of the second component, and the attribute quantized residual value of the third component.
  • the first component to be decoded (that is, the first component) at the current point is designed as a U component, a V component, a G component or a B component, which can improve decompression performance.
  • For example, the component decoded first at the current point (that is, the first component) is designed as a U component or a V component.
  • That is, this application first decodes the U component or the V component and then decodes the Y component, which is equivalent to determining the cross-component prediction value of the Y component based on the attribute residual reconstruction value of the U component or the attribute residual reconstruction value of the V component, and then determining the attribute reconstruction value of the Y component based on that cross-component prediction value; in this way, the accuracy of the attribute reconstruction value of the Y component can be improved, thereby improving the decompression performance.
  • Table 1 is the BD-rate of each component of Cat1B, Cat1C and Cat3 in the case of limit-lossy geometry compression and lossy attributes compression.
  • Cat1B, Cat1C and Cat3 represent point clouds with different types of attribute information.
  • Table 2 is the BD-rate of each component of Cat1B, Cat1C and Cat3 in the case of lossless geometry compression and attribute lossy compression.
  • Table 3 is the BD-rate of each component of Cat1B, Cat1C and Cat3 in the case of geometric lossless compression and limit-lossy attributes compression.
  • Table 4 shows the bpip ratios of Cat1B, Cat1C and Cat3 in the case of geometric lossless compression and attribute lossless compression.
  • the first component is a U component
  • the second component may be a V component
  • the third component may be a Y component.
  • the first component is a U component
  • the second component may be a Y component
  • the third component may be a V component.
  • the first component is a V component
  • the second component may be a Y component
  • the third component may be a U component
  • the first component is a V component
  • the second component may be a U component
  • the third component may be a Y component.
  • the first component is a G component
  • the second component may be an R component
  • the third component may be a B component.
  • the first component is a G component
  • the second component may be a B component
  • the third component may be an R component
  • the first component is a B component
  • the second component may be an R component
  • the third component may be a G component
  • the first component is a B component
  • the second component may be a G component
  • the third component may be an R component
  • the S110 may include:
  • decoding the code stream to obtain a first identifier, where the value of the first identifier is used to indicate whether the attribute quantized residual value of the first component is zero;
  • the code stream is analyzed based on the value of the first identifier, and the attribute quantization residual value of the first component, the attribute quantization residual value of the second component, and the attribute quantization residual value of the third component are obtained.
  • For example, if the attribute information format of the current point is the YUV format, the decoder decodes the code stream to obtain the first identifier, the value of which is used to indicate whether the attribute quantized residual value of the U component is zero; after obtaining the first identifier, the decoder can parse the code stream based on the value of the first identifier to sequentially obtain the attribute quantized residual value of the U component, the attribute quantized residual value of the Y component, and the attribute quantized residual value of the V component.
  • For another example, if the attribute information format of the current point is the RGB format, the decoder decodes the code stream to obtain the first identifier, the value of which is used to indicate whether the attribute quantized residual value of the G component is zero; after obtaining the first identifier, the decoder can parse the code stream based on the value of the first identifier to sequentially obtain the attribute quantized residual value of the G component, the attribute quantized residual value of the R component, and the attribute quantized residual value of the B component.
  • If the value of the first identifier is a first value, the decoder determines that the attribute quantized residual value of the first component is zero, and obtains the attribute quantized residual value of the second component and the attribute quantized residual value of the third component based on a second identifier obtained by decoding the code stream; the value of the second identifier is used to indicate whether the attribute quantized residual value of the first component and the attribute quantized residual value of the second component are both zero.
  • the first value may be 0 or other values.
  • The decoder determines that the attribute quantized residual value of the U component is zero, and obtains the attribute quantized residual value of the Y component and the attribute quantized residual value of the V component based on the second identifier obtained by decoding the code stream.
  • Taking the attribute information format of the current point being the RGB format as an example,
  • the second component can be an R component
  • the third component can be a B component.
  • the decoder determines that the attribute quantized residual value of the G component is zero, and obtains the attribute quantized residual value of the R component and the attribute quantized residual value of the B component based on the second identifier obtained by decoding the code stream.
  • the code stream includes the first identifier and the second identifier
  • the first identifier can be used to determine whether the attribute quantization residual value of the first component is zero
  • the second identifier can be used to determine whether the attribute quantized residual value of the second component is zero. If the attribute quantized residual value of the first component and the attribute quantized residual value of the second component are both zero, then, among the attribute quantized residual values of the first component, the second component and the third component, the code stream only includes the result obtained by encoding the attribute quantized residual value of the third component.
  • The decoder can also sequentially obtain the attribute quantized residual value of the second component and the attribute quantized residual value of the third component.
  • the code stream may not include the second identifier, and the code stream includes the result obtained by sequentially encoding the attribute quantization residual value of the second component and the attribute quantization residual value of the third component. This is not specifically limited.
  • If the value of the second identifier is the first value, it is determined that the attribute quantized residual value of the second component is zero, and the attribute quantized residual value of the third component is obtained from the code stream;
  • if the value of the second identifier is a second value, the attribute quantized residual value of the second component and the attribute quantized residual value of the third component are sequentially acquired from the code stream.
  • the first value may be 0 or other values.
  • the second value may be 1 or other values.
  • if the value of the second identifier is the first value, the decoder determines that the attribute quantization residual value of the Y component is zero, and obtains the attribute quantization residual value of the V component from the code stream; if the value of the second identifier is the second value, the attribute quantization residual value of the Y component and the attribute quantization residual value of the V component are sequentially obtained from the code stream.
  • the second component can be an R component
  • the third component can be a B component. If the value of the second identifier is the first value, the decoder determines that the attribute quantization residual value of the R component is zero, and obtains the attribute quantization residual value of the B component from the code stream; if the value of the second identifier is the second value, the attribute quantization residual value of the R component and the attribute quantization residual value of the B component are sequentially obtained from the code stream.
  • if the value of the first identifier is the second value, the attribute quantization residual value of the first component, the attribute quantization residual value of the second component, and the attribute quantization residual value of the third component are sequentially acquired from the code stream.
  • the second value may be 1 or other values.
  • the decoder may sequentially obtain the attribute quantization residual value of the U component, the attribute quantization residual value of the Y component, and the attribute quantization residual value of the V component from the code stream.
  • taking the case where the attribute information format of the current point is the RGB format as an example, the second component can be an R component and the third component can be a B component. If the value of the first identifier is the second value, the decoder can sequentially obtain the attribute quantization residual value of the G component, the attribute quantization residual value of the R component, and the attribute quantization residual value of the B component from the code stream.
  • designing the first component to be decoded at the current point (that is, the first component) as a U component, a V component, a G component or a B component can increase the probability that the value of the first flag is the second value, which is equivalent to the decoding end not needing to decode the second identifier, and thus the decoding efficiency of the decoder can be improved.
  • the S120 may include:
  • for each of the first component, the second component, and the third component, perform inverse quantization processing on the attribute quantization residual value of each component to obtain the attribute residual reconstruction value of each component; obtain the attribute prediction value of each component; obtain the cross-component prediction value of each component; and determine the sum of the attribute residual reconstruction value of each component, the attribute prediction value of each component, and the cross-component prediction value of each component as the attribute reconstruction value of each component.
  • after the decoder obtains the attribute quantization residual value of the first component, the attribute quantization residual value of the second component, and the attribute quantization residual value of the third component, it can sequentially obtain the attribute prediction value of the Y component, the attribute prediction value of the U component, and the attribute prediction value of the V component.
  • the decoder determines the sum of the attribute residual value of the Y component, the attribute prediction value of the Y component, and the cross-component prediction value of the Y component as the attribute reconstruction value of the Y component; determines the sum of the attribute residual value of the U component, the attribute prediction value of the U component, and the cross-component prediction value of the U component as the attribute reconstruction value of the U component; and determines the sum of the attribute residual value of the V component, the attribute prediction value of the V component, and the cross-component prediction value of the V component as the attribute reconstruction value of the V component.
  • after the decoder obtains the attribute quantization residual value of the first component, the attribute quantization residual value of the second component, and the attribute quantization residual value of the third component, it can sequentially obtain the attribute prediction value of the R component, the attribute prediction value of the G component, and the attribute prediction value of the B component, and sequentially obtain the cross-component prediction value of the R component, the cross-component prediction value of the G component, and the cross-component prediction value of the B component;
  • the sum of the attribute residual value of the R component, the attribute prediction value of the R component, and the cross-component prediction value of the R component is determined as the attribute reconstruction value of the R component; the sum of the attribute residual value of the G component, the attribute prediction value of the G component, and the cross-component prediction value of the G component is determined as the attribute reconstruction value of the G component; and the sum of the attribute residual value of the B component, the attribute prediction value of the B component, and the cross-component prediction value of the B component is determined as the attribute reconstruction value of the B component.
  • the order in which the decoder obtains the attribute prediction values of each component of the current point may be consistent with the order in which the decoder obtains the attribute quantization residual values of each component of the current point, or may be inconsistent, and this application does not specifically limit this .
  • the cross-component predicted value of the first component is zero; the cross-component predicted value of the second component is the attribute residual reconstruction value of the first component;
  • the cross-component prediction value of the third component is the sum of the attribute residual reconstruction value of the first component and the attribute residual reconstruction value of the second component.
  • the attribute information format of the current point is YUV format as an example, assuming that the first component is the U component, the second component can be the Y component, the third component can be the V component, and the cross-component prediction value of the U component is zero; the cross-component predicted value of the Y component is the attribute residual reconstruction value of the U component, and the cross-component predicted value of the V component is the sum of the attribute residual reconstruction value of the U component and the attribute residual reconstruction value of the Y component.
  • taking the case where the attribute information format of the current point is the RGB format as an example, assuming that the first component is the G component, the second component can be the R component, and the third component can be the B component.
  • the cross-component prediction value of the G component is zero; the cross-component prediction value of the R component is the attribute residual reconstruction value of the G component; the cross-component prediction value of the B component is the sum of the attribute residual reconstruction value of the G component and the attribute residual reconstruction value of the R component. A minimal sketch of this computation is given below.
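  • A minimal sketch of the cross-component prediction values described above, assuming the components are processed in the order first, second, third (U/Y/V or G/R/B) and that the attribute residual reconstruction values of earlier components are already available; the function name and array layout are illustrative assumptions:

```cpp
#include <array>
#include <cstdint>

// residRec[0..2]: attribute residual reconstruction values of the first,
// second and third components (after inverse quantization).
// Returns the cross-component prediction value of each component.
std::array<int32_t, 3> crossComponentPrediction(const std::array<int32_t, 3>& residRec) {
    std::array<int32_t, 3> ccp{};
    ccp[0] = 0;                          // first component (U or G): zero
    ccp[1] = residRec[0];                // second component (Y or R): residual of the first
    ccp[2] = residRec[0] + residRec[1];  // third component (V or B): sum of the first two
    return ccp;
}
```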
  • the decoder can search for N prediction points for the current point within the range formed by the points that have been decoded before the current point; then, according to the geometric distances between the N prediction points and the current point, calculate the weight values corresponding to the N prediction points; finally, the decoder can obtain the attribute prediction value of each component based on the weight values corresponding to the N prediction points and the attribute reconstruction values of the N prediction points.
  • after the decoder obtains the weight values corresponding to the N prediction points, it can obtain the attribute prediction value of the Y component of the current point based on the weight values corresponding to the N prediction points and the attribute reconstruction values of the Y component of the N prediction points, obtain the attribute prediction value of the U component of the current point based on the weight values corresponding to the N prediction points and the attribute reconstruction values of the U component of the N prediction points, and obtain the attribute prediction value of the V component of the current point based on the weight values corresponding to the N prediction points and the attribute reconstruction values of the V component of the N prediction points.
  • after the decoder obtains the weight values corresponding to the N prediction points, it can obtain the attribute prediction value of the R component of the current point based on the weight values corresponding to the N prediction points and the attribute reconstruction values of the R component of the N prediction points, obtain the attribute prediction value of the G component of the current point based on the weight values corresponding to the N prediction points and the attribute reconstruction values of the G component of the N prediction points, and obtain the attribute prediction value of the B component of the current point based on the weight values corresponding to the N prediction points and the attribute reconstruction values of the B component of the N prediction points.
  • the decoding end sequentially decodes the attribute quantization residual value of the first component, the attribute quantization residual value of the second component, and the attribute quantization residual value of the third component of the current point from the code stream of the current point cloud.
  • the first component is a U component
  • the second component may be a Y component
  • the third component may be a V component.
  • the first component is a G component
  • the second component may be an R component
  • the third component may be a B component.
  • the execution process at the decoding end may include the following steps:
  • the decoder can calculate the attribute prediction value of the Y component of the current point according to the weight value of each prediction point and the attribute reconstruction value of the Y component of each prediction point, calculate the attribute prediction value of the U component of the current point according to the weight value of each prediction point and the attribute reconstruction value of the U component of each prediction point, and calculate the attribute prediction value of the V component of the current point according to the weight value of each prediction point and the attribute reconstruction value of the V component of each prediction point.
  • the decoder can calculate the attribute prediction value of the R component of the current point according to the weight value of each prediction point and the attribute reconstruction value of the R component of each prediction point, calculate the attribute prediction value of the G component of the current point according to the weight value of each prediction point and the attribute reconstruction value of the G component of each prediction point, and calculate the attribute prediction value of the B component of the current point according to the weight value of each prediction point and the attribute reconstruction value of the B component of each prediction point.
  • the attribute prediction value of each component can be determined by the following formula:
  • a_p = (∑_{i=1}^{N} w_i · a_i) / (∑_{i=1}^{N} w_i)
  • where a_p represents the attribute prediction value of each component, w_i represents the weight value of the i-th prediction point among the N prediction points, and a_i represents the attribute reconstruction value of each component of the i-th point among the N prediction points.
  • each component can include Y component, U component and V component; if the format of the color information of the current point is RGB format, then each component can include R component, G component and the B component.
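  • The weighted prediction above can be illustrated with the following sketch. Normalizing by the sum of the weights matches the formula as reconstructed here, and the floating-point form is for clarity only; a real codec would typically use a fixed-point equivalent, so this is a sketch rather than the reference implementation.

```cpp
#include <vector>

// attrRec[i]: attribute reconstruction value of one component of the i-th of
// the N prediction points; weight[i]: the corresponding weight value w_i.
// Returns the attribute prediction value a_p of that component.
double predictComponent(const std::vector<double>& attrRec,
                        const std::vector<double>& weight) {
    double num = 0.0, den = 0.0;
    for (size_t i = 0; i < attrRec.size(); ++i) {
        num += weight[i] * attrRec[i];   // weighted sum of reconstruction values
        den += weight[i];                // sum of weights
    }
    return den > 0.0 ? num / den : 0.0;  // a_p = sum(w_i * a_i) / sum(w_i)
}
```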
  • if the format of the color information of the current point is the YUV format, the decoder decodes the attribute quantization residual value of each component of the current point from the code stream in the order of U component, Y component, and V component.
  • if the format of the color information of the current point is the RGB format, the decoder decodes the attribute quantization residual value of each component of the current point from the code stream in the order of G component, R component, and B component.
  • the decoder decodes and obtains the flag bit flagu_g from the code stream. If flagu_g is 0, the attribute quantization residual value of the U component is 0; if flagu_g is 1, the attribute quantization residual value of the U component is not 0.
  • the decoder decodes and obtains the flag bit flagu_g from the code stream. If flagu_g is 0, the attribute quantization residual value of the G component is 0; if flagu_g is 1, the attribute quantization residual value of the G component is not 0.
  • the decoder decodes and obtains the flag bit flaguy_gr from the code stream; if flaguy_gr is 0, the attribute quantization residual value of the Y component is 0, and the attribute quantization residual value of the V component is decoded from the code stream; otherwise, the attribute quantization residual value of the Y component and the attribute quantization residual value of the V component are sequentially decoded from the code stream.
  • correspondingly, for the RGB format, the decoder decodes and obtains the flag bit flaguy_gr from the code stream; if flaguy_gr is 0, the attribute quantization residual value of the R component is 0, and the attribute quantization residual value of the B component is decoded from the code stream; otherwise, the attribute quantization residual value of the R component and the attribute quantization residual value of the B component are sequentially decoded from the code stream.
  • the decoder decodes and obtains the flag bit flagu_g from the code stream; if flagu_g is 1, the decoder sequentially decodes and obtains the attribute quantization residual value of the U component, the attribute quantization residual value of the Y component, and the attribute quantization residual value of the V component from the code stream.
  • the decoder decodes and obtains the flag bit flagu_g from the code stream; if flagu_g is 1, the decoder sequentially decodes and obtains the attribute quantization residual value of the G component, the attribute quantization residual value of the R component, and the attribute quantization residual value of the B component from the code stream. A simplified sketch of this flag-driven parsing is given after this step.
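  • The sketch below reads flagu_g and flaguy_gr and then the residuals in the coding order (first, second, third) = (U, Y, V) or (G, R, B), following the steps above. The BitstreamReader here is a toy stand-in (flags and residuals are taken from pre-filled lists) assumed purely for illustration, not the entropy decoder of the codec.

```cpp
#include <array>
#include <cstdint>
#include <vector>

// Toy stand-in for the entropy decoder: flags and residuals are taken from
// pre-filled lists instead of being decoded from a real arithmetic-coded stream.
struct BitstreamReader {
    std::vector<int> flags;      // flag bits in decoding order
    std::vector<int32_t> resid;  // residual values in decoding order
    size_t f = 0, r = 0;
    bool decodeFlag() { return flags[f++] != 0; }
    int32_t decodeResidual() { return resid[r++]; }
};

// Parse the three quantized attribute residuals of the current point.
std::array<int32_t, 3> parseAttributeResiduals(BitstreamReader& br) {
    std::array<int32_t, 3> res{0, 0, 0};

    bool flagu_g = br.decodeFlag();       // 0 => residual of the first component is zero
    if (!flagu_g) {
        bool flaguy_gr = br.decodeFlag(); // 0 => residuals of the first AND second are zero
        if (!flaguy_gr) {
            res[2] = br.decodeResidual(); // only the third residual is coded
        } else {
            res[1] = br.decodeResidual(); // first is zero; second and third are coded
            res[2] = br.decodeResidual();
        }
    } else {
        res[0] = br.decodeResidual();     // all three residuals are coded
        res[1] = br.decodeResidual();
        res[2] = br.decodeResidual();
    }
    return res;
}
```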
  • for the attribute quantization residual value of each component, the decoder performs inverse quantization in the order of Y component, U component, and V component (or R component, G component, and B component), and then adds the attribute prediction value and the cross-component prediction value of the corresponding component to generate the attribute reconstruction value of the corresponding component of the current point.
  • for example, if the format of the color information of the current point is the YUV format, then for the attribute quantization residual value of each component, the decoder performs inverse quantization in the order of Y component, U component, and V component, and then adds the attribute prediction value and the cross-component prediction value of the corresponding component to generate the attribute reconstruction value of the corresponding component of the current point; for another example, if the format of the color information of the current point is the RGB format, the decoder performs inverse quantization in the order of R component, G component, and B component, and then adds the attribute prediction value and the cross-component prediction value of the corresponding component to generate the attribute reconstruction value of the corresponding component of the current point.
  • the attribute reconstruction value of each component of the current point can be generated according to the following formula:
  • attribute reconstruction value of the component = attribute residual reconstruction value after inverse quantization of the component + predictor + residualPrevComponent
  • where predictor indicates the attribute prediction value of each component, and residualPrevComponent indicates the cross-component prediction value of each component.
  • each component can include Y component, U component and V component; if the format of the color information of the current point is RGB format, then each component can include R component, G component and the B component.
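  • A minimal sketch of the per-component reconstruction described above, assuming a simple uniform inverse quantization with an illustrative step size qs (the actual inverse quantization used by the codec may differ):

```cpp
#include <cstdint>

// Reconstruct one component of the current point from its quantized residual,
// its attribute prediction value and its cross-component prediction value.
int32_t reconstructComponent(int32_t quantResidual,            // decoded from the bitstream
                             int32_t qs,                       // assumed quantization step
                             int32_t predictor,                // attribute prediction value
                             int32_t residualPrevComponent) {  // cross-component prediction
    int32_t residualRec = quantResidual * qs;                  // inverse quantization (sketch)
    return residualRec + predictor + residualPrevComponent;
}
```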
  • if the format of the color information of the current point is the YUV format, the encoder encodes the attribute quantization residual value of each component of the current point in the order of U component, Y component, and V component; if the format of the color information of the current point is the RGB format, the encoder encodes the attribute quantization residual value of each component in the order of G component, R component, and B component. Correspondingly, if the format of the color information of the current point is the YUV format, the decoder decodes the attribute quantization residual value of each component of the current point in the order of U component, Y component, and V component; if the format of the color information of the current point is the RGB format, the decoder decodes the attribute quantization residual value of each component in the order of G component, R component, and B component.
  • the sequence numbers of the above-mentioned processes do not imply an order of execution; the order of execution of the processes should be determined by their functions and internal logic, and should not constitute any limitation on the implementation of the embodiments of the present application.
  • FIG. 13 is a schematic flowchart of an encoding-based method 200 provided by an embodiment of the present application.
  • the method 200 can be executed by an encoder or an encoding framework, such as the encoding framework shown in FIG. 4 .
  • the encoding method 200 may include:
  • S210 determine the attribute quantization residual value of the first component of the current point to be encoded in the current point cloud, the attribute quantization residual value of the second component, and the attribute quantization residual value of the third component; wherein, the first component is U component, V component, G component or B component;
  • the code stream includes a result obtained by encoding the value of the first identifier, and the value of the first identifier is used to indicate whether the attribute quantization residual value of the first component is zero.
  • if the value of the first flag is the first value, it means that the attribute quantization residual value of the first component is zero; if the value of the first flag is the second value, it means that the attribute quantization residual value of the first component is not zero.
  • the code stream further includes the result obtained by encoding the value of the second identifier, and the value of the second identifier is used to represent whether the attribute quantization residual value of the first component and the attribute quantization residual value of the second component are both zero.
  • if the value of the second flag is the first value, it means that the attribute quantization residual value of the first component and the attribute quantization residual value of the second component are both zero; if the value of the second flag is the second value, it means that the attribute quantization residual value of the first component and the attribute quantization residual value of the second component are not both zero.
  • the code stream further includes the result obtained by encoding the attribute quantization residual value of the third component.
  • the code stream further includes the result obtained by sequentially encoding the attribute quantization residual value of the second component and the attribute quantization residual value of the third component.
  • the code stream further includes the result obtained by sequentially encoding the attribute quantization residual value of the first component, the attribute quantization residual value of the second component, and the attribute quantization residual value of the third component.
  • the method 200 may also include:
  • each of the first component, the second component, and the third component perform inverse quantization processing on the attribute quantization residual value of each component to obtain the attribute residual reconstruction value of each component;
  • the sum of the attribute residual reconstruction value of each component, the attribute prediction value of each component, and the cross-component prediction value of each component is determined as the attribute reconstruction value of each component.
  • the cross-component predictive value of the first component is zero; the cross-component predictive value of the second component is the attribute residual reconstruction value of the first component; the cross-component predictive value of the third component is the The sum of the reconstructed attribute residual value of the first component and the reconstructed attribute residual value of the second component.
  • N prediction points are searched for the current point within the range formed by the points encoded before the current point; the weight values corresponding to the N prediction points are calculated according to the geometric distances between the N prediction points and the current point; and the attribute prediction value of each component is obtained based on the weight values corresponding to the N prediction points and the attribute reconstruction values of the N prediction points. The weight calculation can be sketched as follows.
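  • The description does not fix the exact weight formula; the sketch below uses inverse squared distance purely as an illustrative assumption for deriving weight values from the geometric distances between the prediction points and the current point.

```cpp
#include <vector>

struct Point3 { double x, y, z; };

// Compute one weight per prediction point from its geometric distance to the
// current point. Inverse squared distance is an assumed example; the codec may
// define the weights differently.
std::vector<double> distanceWeights(const Point3& cur, const std::vector<Point3>& preds) {
    std::vector<double> w;
    w.reserve(preds.size());
    for (const auto& p : preds) {
        double dx = p.x - cur.x, dy = p.y - cur.y, dz = p.z - cur.z;
        double d2 = dx * dx + dy * dy + dz * dz;
        w.push_back(1.0 / (d2 + 1e-9));  // small constant guards against zero distance
    }
    return w;
}
```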
  • the encoder sequentially encodes the attribute quantization residual value of the first component, the attribute quantization residual value of the second component, and the attribute quantization residual value of the third component to obtain the code stream of the current point cloud;
  • the first component is a U component
  • the second component may be a Y component
  • the third component may be a V component.
  • the first component is a G component
  • the second component may be an R component
  • the third component may be a B component.
  • the execution process at the encoding end may include the following steps:
  • the encoder can calculate the attribute prediction value of the Y component of the current point according to the weight value of each prediction point and the attribute reconstruction value of the Y component of each prediction point, calculate the attribute prediction value of the U component of the current point according to the weight value of each prediction point and the attribute reconstruction value of the U component of each prediction point, and calculate the attribute prediction value of the V component of the current point according to the weight value of each prediction point and the attribute reconstruction value of the V component of each prediction point.
  • the encoder can calculate the attribute prediction value of the R component of the current point according to the weight value of each prediction point and the attribute reconstruction value of the R component of each prediction point, calculate the attribute prediction value of the G component of the current point according to the weight value of each prediction point and the attribute reconstruction value of the G component of each prediction point, and calculate the attribute prediction value of the B component of the current point according to the weight value of each prediction point and the attribute reconstruction value of the B component of each prediction point.
  • the attribute prediction value of each component can be determined by the following formula:
  • a_p = (∑_{i=1}^{N} w_i · a_i) / (∑_{i=1}^{N} w_i)
  • where a_p represents the attribute prediction value of each component, w_i represents the weight value of the i-th prediction point among the N prediction points, and a_i represents the attribute reconstruction value of each component of the i-th point among the N prediction points.
  • each component can include Y component, U component and V component; if the format of the color information of the current point is RGB format, then each component can include R component, G component and the B component.
  • the encoder calculates the attribute residual value of each component according to the attribute prediction value of each component and the cross-component prediction value of each component.
  • if the color attribute of the current point is in the YUV space, the attribute residual value of each component of the current point can be calculated sequentially in the order of Y component, U component, and V component; if the color attribute of the current point is in the RGB space, the attribute residual value of each component of the current point can be calculated sequentially in the order of R component, G component, and B component.
  • the attribute residual value of each component can be calculated according to the following formula:
  • delta = currValue - predictor - residualPrevComponent
  • where delta represents the attribute residual value of each component, currValue represents the original value or real value of each component, predictor represents the attribute prediction value of each component, and residualPrevComponent represents the cross-component prediction value of each component.
  • each component can include Y component, U component and V component; if the format of the color information of the current point is RGB format, each component can include R component, G component and the B component.
  • the cross-component prediction value of the Y component and the R component can be set to 0, and for the U component, the V component, the G component, and the B component, the cross-component prediction value is determined according to the attribute residual value obtained through inverse quantization of the attribute quantization residual value of the previously encoded component.
  • the cross-component prediction value of the U component can be the attribute residual reconstruction value obtained by inverse quantization of the attribute quantization residual value of the Y component
  • the cross-component prediction value of the G component can be the attribute residual reconstruction value obtained by inverse quantization of the attribute quantization residual value of the R component.
  • the cross-component prediction value of the V component may be the sum of the attribute residual reconstruction value obtained by dequantizing the attribute quantization residual value of the Y component and the attribute residual reconstruction value obtained by dequantizing the attribute quantization residual value of the U component; the cross-component prediction value of the B component may be the sum of the attribute residual reconstruction value obtained by dequantizing the attribute quantization residual value of the R component and the attribute residual reconstruction value obtained by dequantizing the attribute quantization residual value of the G component. This chain of residuals is sketched below.
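  • The chain of residuals and cross-component predictions on the encoding side can be sketched as follows. The uniform quantizer with step qs and its rounding are illustrative assumptions; only the ordering and the reuse of previously reconstructed residuals follow the description above.

```cpp
#include <array>
#include <cstdint>

// orig[k]: original value of component k; pred[k]: attribute prediction value.
// Components are processed in the conventional order Y, U, V (or R, G, B) here,
// i.e. index 0 carries a zero cross-component prediction.
struct EncodedPoint {
    std::array<int32_t, 3> quantResidual;  // values to be entropy coded
    std::array<int32_t, 3> residualRec;    // inverse-quantized residuals
};

EncodedPoint encodeResiduals(const std::array<int32_t, 3>& orig,
                             const std::array<int32_t, 3>& pred,
                             int32_t qs) {
    EncodedPoint out{};
    int32_t residualPrevComponent = 0;     // zero for the first coded component
    for (int k = 0; k < 3; ++k) {
        // delta = currValue - predictor - residualPrevComponent
        int32_t delta = orig[k] - pred[k] - residualPrevComponent;
        // assumed uniform quantizer with rounding to nearest
        out.quantResidual[k] = (delta >= 0 ? delta + qs / 2 : delta - qs / 2) / qs;
        out.residualRec[k] = out.quantResidual[k] * qs;  // inverse quantization
        // the cross-component prediction of the next component accumulates the
        // residual reconstruction values of the components coded so far
        residualPrevComponent += out.residualRec[k];
    }
    return out;
}
```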
  • after the encoder obtains the attribute residual value of each component, it performs a quantization operation on the attribute residual value of each component to obtain the attribute quantized residual value of each component of the current point. For example, if the format of the color information of the current point is the YUV format, the attribute residual value of each component of the current point is sequentially quantized in the order of Y component, U component, and V component to obtain the attribute quantized residual value of each component; for another example, if the format of the color information of the current point is the RGB format, the attribute residual value of each component of the current point is sequentially quantized in the order of R component, G component, and B component to obtain the attribute quantized residual value of each component.
  • the encoder performs entropy encoding on the attribute quantization residual value of each component of the current point to obtain the code stream of the point cloud.
  • entropy encoding is performed on the attribute quantization residual value of each component in the order of U component, Y component, and V component to obtain the code stream of the point cloud.
  • entropy encoding is performed on the attribute quantization residual value of each component in the order of G component, R component, and B component to obtain the code stream of the point cloud.
  • the entropy coding may be performed on the attribute quantization residual value of each component of the current point according to the following steps:
  • another flag bit flaguy_gr is introduced, which can be used to indicate whether the attribute residual quantization value of the U component and the attribute residual quantization value of the Y component are both 0 for points whose attribute information format is YUV format.
  • the encoder judges whether the attribute quantization residual value of the U component (or G component) is 0, and if it is 0, executes the following 3), otherwise executes the following 4).
  • the encoder encodes the value of flaguy_gr and judges whether the attribute quantization residual value of the Y component (or R component) is 0; if the attribute quantization residual value of the Y component (or R component) is 0, then the attribute quantization residual value of the V component (or B component) is encoded, otherwise the attribute quantization residual value of the Y component and the attribute quantization residual value of the V component (or of the R component and the B component) are encoded sequentially.
  • the encoder encodes the value of flaguy_gr, and judges whether the attribute quantization residual value of the Y component is 0, if the attribute quantization residual value of the Y component is 0 , then encode the attribute quantized residual value of the V component, otherwise encode the attribute residual quantized value of the Y component and the attribute residual quantized value of the V component in sequence.
  • the encoder encodes the value of flaguy_gr and judges whether the attribute quantization residual value of the R component is 0. If the attribute quantization residual value of the R component is 0, the attribute quantization residual value of the B component is encoded, otherwise, the attribute residual quantization value of the R component and the attribute residual quantization value of the B component are encoded sequentially.
  • the encoder sequentially encodes the attribute quantized residual values of the U component, the Y component, and the V component (or G component, R component, and B component).
  • the encoder sequentially encodes the attribute quantization residual value of the U component, the attribute quantization residual value of the Y component, and the attribute quantization residual value of the V component.
  • the encoder sequentially encodes the attribute quantization residual value of the G component, the attribute quantization residual value of the R component, and the attribute quantization residual value of the B component.
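  • Mirroring the decoder-side parsing, the flag writing described in the steps above can be sketched as follows; the BitstreamWriter is a toy stand-in for the entropy encoder (symbols are appended to lists) and is an assumption of this sketch.

```cpp
#include <array>
#include <cstdint>
#include <vector>

// Toy stand-in for the entropy encoder: flags and residuals are appended to
// lists instead of being arithmetic-coded into a real bitstream.
struct BitstreamWriter {
    std::vector<int> flags;
    std::vector<int32_t> resid;
    void encodeFlag(bool b) { flags.push_back(b ? 1 : 0); }
    void encodeResidual(int32_t r) { resid.push_back(r); }
};

// res = {first, second, third} quantized residuals in the coding order
// (U, Y, V) or (G, R, B).
void writeAttributeResiduals(BitstreamWriter& bw, const std::array<int32_t, 3>& res) {
    bool firstZero = (res[0] == 0);
    bw.encodeFlag(!firstZero);             // flagu_g: 0 when the first residual is zero
    if (firstZero) {
        bool secondZero = (res[1] == 0);
        bw.encodeFlag(!secondZero);        // flaguy_gr: 0 when the first two are both zero
        if (secondZero) {
            bw.encodeResidual(res[2]);     // only the third residual is written
        } else {
            bw.encodeResidual(res[1]);
            bw.encodeResidual(res[2]);
        }
    } else {
        bw.encodeResidual(res[0]);         // all three residuals are written
        bw.encodeResidual(res[1]);
        bw.encodeResidual(res[2]);
    }
}
```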
  • for the attribute quantization residual value of each component, the encoder performs inverse quantization in the order of Y component, U component, and V component (or R component, G component, and B component), and then adds the attribute prediction value and the cross-component prediction value of the corresponding component to generate the attribute reconstruction value of the corresponding component of the current point.
  • for example, if the format of the color information of the current point is the YUV format, then for the attribute quantization residual value of each component, the encoder performs inverse quantization in the order of Y component, U component, and V component, and then adds the attribute prediction value and the cross-component prediction value of the corresponding component to generate the attribute reconstruction value of the corresponding component of the current point; for another example, if the format of the color information of the current point is the RGB format, the encoder performs inverse quantization in the order of R component, G component, and B component, and then adds the attribute prediction value and the cross-component prediction value of the corresponding component to generate the attribute reconstruction value of the corresponding component of the current point.
  • the attribute reconstruction value of each component of the current point can be generated according to the following formula:
  • attribute reconstruction value of the component = attribute residual reconstruction value after inverse quantization of the component + predictor + residualPrevComponent
  • where predictor indicates the attribute prediction value of each component, and residualPrevComponent indicates the cross-component prediction value of each component.
  • each component can include Y component, U component and V component; if the format of the color information of the current point is RGB format, then each component can include R component, G component and the B component.
  • in step c), the attribute quantization residual value of each component is dequantized to obtain the attribute residual reconstruction value of each component. Therefore, the generation process of the attribute reconstruction value of each component in step e) can also be integrated into step c). In addition, if the generation process of the attribute reconstruction value of each component in step e) is integrated into step c), the generation process of the attribute reconstruction value of each component can also perform inverse quantization in the order of U component, Y component, and V component (or G component, R component, and B component). This application does not specifically limit it.
  • Fig. 14 is a schematic block diagram of a decoder 300 provided by an embodiment of the present application.
  • the decoder 300 may include:
  • the decoding unit 310 is used to sequentially decode the attribute quantized residual value of the first component of the current point, the attribute quantized residual value of the second component, and the attribute quantized residual value of the third component from the code stream of the current point cloud; wherein , the first component is a U component, a V component, a G component or a B component;
  • the acquiring unit 320 is configured to acquire the attribute reconstruction value of the current point based on the attribute quantized residual value of the first component, the attribute quantized residual value of the second component, and the attribute quantized residual value of the third component.
  • the decoding unit 310 is specifically used for:
  • decoding the code stream to obtain a first identifier, where the value of the first identifier is used to indicate whether the attribute quantization residual value of the first component is zero;
  • the code stream is analyzed based on the value of the first identifier, and the attribute quantization residual value of the first component, the attribute quantization residual value of the second component, and the attribute quantization residual value of the third component are obtained.
  • the value of the first identifier is a first value; wherein, the decoding unit 310 is specifically configured to:
  • decode the code stream to obtain a second identifier, where the value of the second identifier is used to represent whether the attribute quantization residual value of the first component and the attribute quantization residual value of the second component are both zero.
  • the decoding unit 310 is specifically used for:
  • the value of the second identifier is the first value, determine that the attribute quantization residual value of the second component is zero, and acquire the attribute quantization residual value of the third component from the code stream;
  • if the value of the second identifier is the second value, the attribute quantization residual value of the second component and the attribute quantization residual value of the third component are sequentially obtained from the code stream.
  • the value of the first identifier is a second value; wherein, the decoding unit 310 is specifically configured to:
  • the attribute quantization residual value of the first component, the attribute quantization residual value of the second component, and the attribute quantization residual value of the third component are sequentially acquired from the code stream.
  • the acquiring unit 320 is specifically configured to:
  • each of the first component, the second component, and the third component perform inverse quantization processing on the attribute quantization residual value of each component to obtain the attribute residual reconstruction value of each component;
  • the sum of the attribute residual reconstruction value of each component, the attribute prediction value of each component, and the cross-component prediction value of each component is determined as the attribute reconstruction value of each component.
  • the cross-component predictive value of the first component is zero; the cross-component predictive value of the second component is the attribute residual reconstruction value of the first component; the cross-component predictive value of the third component is the The sum of the reconstructed attribute residual value of the first component and the reconstructed attribute residual value of the second component.
  • the acquiring unit 320 is specifically configured to:
  • search for N prediction points for the current point within the range formed by the points decoded before the current point; calculate the weight values corresponding to the N prediction points according to the geometric distances between the N prediction points and the current point; and obtain the attribute prediction value of each component based on the weight values corresponding to the N prediction points and the attribute reconstruction values of the N prediction points.
  • the decoder 300 can also be combined with the decoding framework shown in FIG. 5 , that is, units in the decoder 300 can be replaced or combined with relevant parts in the decoding framework.
  • the acquiring unit 320 can be used to implement the attribute prediction part in the decoding framework.
  • FIG. 15 is a schematic block diagram of an encoder 400 provided by an embodiment of the present application.
  • the encoder 400 may include:
  • a determination unit 410 configured to determine the attribute quantization residual value of the first component, the attribute quantization residual value of the second component, and the attribute quantization residual value of the third component of the current point to be encoded in the current point cloud; wherein, the The first component is a U component, a V component, a G component or a B component;
  • An encoding unit 420 configured to sequentially encode the attribute quantized residual value of the first component, the attribute quantized residual value of the second component, and the attribute quantized residual value of the third component to obtain the code stream of the current point cloud .
  • the code stream includes a result obtained by encoding the value of the first identifier, and the value of the first identifier is used to indicate whether the attribute quantization residual value of the first component is zero.
  • if the value of the first flag is the first value, it means that the attribute quantization residual value of the first component is zero; if the value of the first flag is the second value, it means that the attribute quantization residual value of the first component is not zero.
  • the code stream further includes the result obtained by encoding the value of the second identifier, and the value of the second identifier is used to represent whether the attribute quantization residual value of the first component and the attribute quantization residual value of the second component are both zero.
  • if the value of the second flag is the first value, it means that the attribute quantization residual value of the first component and the attribute quantization residual value of the second component are both zero; if the value of the second flag is the second value, it means that the attribute quantization residual value of the first component and the attribute quantization residual value of the second component are not both zero.
  • the code stream further includes the result obtained by encoding the attribute quantization residual value of the third component.
  • the code stream further includes the result obtained by sequentially encoding the attribute quantization residual value of the second component and the attribute quantization residual value of the third component.
  • the code stream further includes the result obtained by sequentially encoding the attribute quantization residual value of the first component, the attribute quantization residual value of the second component, and the attribute quantization residual value of the third component.
  • the determining unit 410 is also used to:
  • each of the first component, the second component, and the third component perform inverse quantization processing on the attribute quantization residual value of each component to obtain the attribute residual reconstruction value of each component;
  • the sum of the attribute residual reconstruction value of each component, the attribute prediction value of each component, and the cross-component prediction value of each component is determined as the attribute reconstruction value of each component.
  • the cross-component predictive value of the first component is zero; the cross-component predictive value of the second component is the attribute residual reconstruction value of the first component; the cross-component predictive value of the third component is the The sum of the reconstructed attribute residual value of the first component and the reconstructed attribute residual value of the second component.
  • the determining unit 410 is specifically configured to:
  • search for N prediction points for the current point within the range formed by the points encoded before the current point; calculate the weight values corresponding to the N prediction points according to the geometric distances between the N prediction points and the current point; and obtain the attribute prediction value of each component based on the weight values corresponding to the N prediction points and the attribute reconstruction values of the N prediction points.
  • the encoder 400 can also be combined with the encoding framework shown in FIG. 4 , that is, units in the encoder 400 can be replaced or combined with relevant parts in the encoding framework.
  • the determining unit 410 can be used to implement the attribute prediction part in the encoding framework.
  • the device embodiment and the method embodiment may correspond to each other, and similar descriptions may refer to the method embodiment. To avoid repetition, details are not repeated here.
  • the decoder 300 may correspond to the corresponding subject executing the method 100 of the embodiment of the present application, and each unit in the decoder 300 is configured to implement the corresponding process in the method 100; similarly, the encoder 400 may correspond to the corresponding subject executing the method 200 of the embodiment of the present application, and each unit in the encoder 400 is configured to implement the corresponding process in the method 200. For the sake of brevity, details are not repeated here.
  • the various units in the decoder 300 or the encoder 400 involved in the embodiments of the present application can be separately or wholly combined into one or several other units, or one (or some) of the units can be further divided into a plurality of functionally smaller units, which can realize the same operations without affecting the technical effects of the embodiments of the present application.
  • the above-mentioned units are divided based on logical functions. In practical applications, the functions of one unit may also be realized by multiple units, or the functions of multiple units may be realized by one unit. In other embodiments of the present application, the decoder 300 or the encoder 400 may also include other units. In practical applications, these functions may also be implemented with the assistance of other units, and may be implemented cooperatively by multiple units.
  • in other embodiments of the present application, the decoder 300 or the encoder 400 can also be constructed by running, on a general-purpose computing device such as a general-purpose computer that includes processing elements and storage elements such as a central processing unit (CPU), a random access storage medium (RAM), and a read-only storage medium (ROM), a computer program (including program code) capable of executing the steps involved in the corresponding method, so as to realize the encoding and decoding methods based on point cloud attribute prediction of the embodiments of the present application.
  • the computer program can be recorded in, for example, a computer-readable storage medium, and loaded on any electronic device with data processing capability through the computer-readable storage medium, and run in it to implement the corresponding method of the embodiment of the present application.
  • the units mentioned above can be implemented in the form of hardware, can also be implemented by instructions in the form of software, and can also be implemented in the form of a combination of software and hardware.
  • each step of the method embodiments in the embodiments of the present application can be completed by an integrated logic circuit of hardware in the processor and/or by instructions in the form of software, and the steps of the methods disclosed in the embodiments of the present application can be directly embodied as being executed and completed by a hardware decoding processor, or executed and completed by a combination of hardware and software in the decoding processor.
  • the software may be located in mature storage media in the field such as random access memory, flash memory, read-only memory, programmable read-only memory, electrically erasable programmable memory, and registers.
  • the storage medium is located in the memory, and the processor reads the information in the memory, and completes the steps in the above method embodiments in combination with its hardware.
  • FIG. 16 is a schematic structural diagram of a codec device 500 provided by an embodiment of the present application.
  • the codec device 500 includes at least a processor 510 and a computer-readable storage medium 520 .
  • the processor 510 and the computer-readable storage medium 520 may be connected through a bus or in other ways.
  • the computer-readable storage medium 520 is used for storing a computer program 521
  • the computer program 521 includes computer instructions
  • the processor 510 is used for executing the computer instructions stored in the computer-readable storage medium 520 .
  • the processor 510 is the computing core and the control core of the codec device 500, which is suitable for realizing one or more computer instructions, and is specifically suitable for loading and executing one or more computer instructions so as to realize corresponding method procedures or corresponding functions.
  • the processor 510 may also be called a central processing unit (Central Processing Unit, CPU).
  • the processor 510 may include but not limited to: a general-purpose processor, a digital signal processor (Digital Signal Processor, DSP), an application specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field programmable gate array (Field Programmable Gate Array, FPGA) Or other programmable logic devices, discrete gate or transistor logic devices, discrete hardware components, etc.
  • the computer-readable storage medium 520 can be a high-speed RAM memory, or a non-volatile memory (Non-Volatile Memory), such as at least one disk memory; optionally, it can also be at least one computer-readable storage medium located away from the aforementioned processor 510.
  • the computer-readable storage medium 520 includes, but is not limited to: volatile memory and/or non-volatile memory.
  • the non-volatile memory can be read-only memory (Read-Only Memory, ROM), programmable read-only memory (Programmable ROM, PROM), erasable programmable read-only memory (Erasable PROM, EPROM), electronically programmable Erase Programmable Read-Only Memory (Electrically EPROM, EEPROM) or Flash.
  • the volatile memory can be Random Access Memory (RAM), which acts as external cache memory.
  • by way of example but not limitation, many forms of RAM are available, such as Static Random Access Memory (SRAM), Dynamic Random Access Memory (DRAM), Synchronous Dynamic Random Access Memory (SDRAM), Double Data Rate Synchronous Dynamic Random Access Memory (DDR SDRAM), Enhanced Synchronous Dynamic Random Access Memory (ESDRAM), Synchlink Dynamic Random Access Memory (SLDRAM), and Direct Rambus Random Access Memory (DR RAM).
  • for example, the codec device 500 may be the encoding framework shown in FIG. 4 or the encoder 400 shown in FIG. 15; the computer-readable storage medium 520 stores first computer instructions; the processor 510 loads and executes the first computer instructions stored in the computer-readable storage medium 520 to implement the corresponding steps in the method embodiment shown in FIG. 13; to avoid repetition, the specific steps loaded and executed by the processor 510 are not repeated here.
  • for another example, the codec device 500 may be the decoding framework shown in FIG. 5 or the decoder 300 shown in FIG. 14; the computer-readable storage medium 520 stores second computer instructions; the processor 510 loads and executes the second computer instructions stored in the computer-readable storage medium 520 to implement the corresponding steps in the method embodiment shown in FIG. 12; to avoid repetition, the specific steps loaded and executed by the processor 510 are not repeated here.
  • the embodiment of the present application further provides a computer-readable storage medium (Memory).
  • the computer-readable storage medium is a memory device in the codec device 500 and is used to store programs and data.
  • computer readable storage medium 520 may include a built-in storage medium in the codec device 500 , and of course may also include an extended storage medium supported by the codec device 500 .
  • the computer-readable storage medium provides a storage space, and the storage space stores the operating system of the codec device 500 .
  • one or more computer instructions adapted to be loaded and executed by the processor 510 are also stored in the storage space, and these computer instructions may be one or more computer programs 521 (including program codes). These computer instructions are used for the computer to execute the coding and decoding methods based on point cloud attribute prediction provided in the various optional ways above.
  • a computer program product or computer program comprising computer instructions stored in a computer readable storage medium.
  • for example, the codec device 500 may be a computer; the processor 510 reads the computer instructions from the computer-readable storage medium 520, and the processor 510 executes the computer instructions, so that the computer executes the encoding and decoding methods based on point cloud attribute prediction provided in the various optional manners above.
  • the computer program product includes one or more computer instructions.
  • the computer can be a general purpose computer, a special purpose computer, a computer network, or other programmable device.
  • the computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another computer-readable storage medium, for example, transmitted from one website, computer, server, or data center to another website, computer, server, or data center by wired means (such as coaxial cable, optical fiber, or digital subscriber line (DSL)) or wireless means (such as infrared, radio, or microwave).


Abstract

Embodiments of the present application provide a decoding method, an encoding method, a decoder, and an encoder. The method comprises: sequentially decoding, from a code stream of a current point cloud, an attribute quantization residual value of a first component, an attribute quantization residual value of a second component, and an attribute quantization residual value of a third component of a current point, wherein the first component is a U component, a V component, a G component, or a B component; and obtaining an attribute reconstruction value of the current point on the basis of the attribute quantization residual value of the first component, the attribute quantization residual value of the second component, and the attribute quantization residual value of the third component. According to the present application, a first component to be decoded (i.e., the first component) of the current point is designed as a U component, a V component, a G component or a B component, so that the decompression performance can be improved.

Description

Decoding method, encoding method, decoder and encoder

Technical Field

The embodiments of the present application relate to the technical field of encoding and decoding, and more specifically, to a decoding method, an encoding method, a decoder, and an encoder.

Background

Point clouds have begun to spread into various fields, for example, virtual/augmented reality, robotics, geographic information systems, and the medical field. With the continuous improvement of the accuracy and speed of scanning equipment, a large number of points on the surface of an object can be acquired accurately, and a single scene can often correspond to hundreds of thousands of points. Such a large number of points also brings challenges to computer storage and transmission. Therefore, the compression and decompression of points has become a hot issue.

For the compression of a point cloud, it is mainly necessary to compress its position information and attribute information. Specifically, the position information of the point cloud is first encoded by octree coding; at the same time, according to the octree-coded position information of the current point, the points used for predicting the attribute prediction value of the current point are selected from the already encoded points, the attribute information of the current point is then predicted based on the selected points, and the attribute information is encoded by taking the difference from its original value, so as to realize the encoding of the point cloud. For the decompression of a point cloud, the process is the reverse of point cloud compression.

To date, how to improve the decompression performance of a decoder remains a technical problem that urgently needs to be solved in this field.
Summary

Embodiments of the present application provide a decoding method, an encoding method, a decoder, and an encoder, which can improve decompression performance.

In a first aspect, the present application provides a decoding method, including:

sequentially decoding, from a code stream of a current point cloud, an attribute quantization residual value of a first component, an attribute quantization residual value of a second component, and an attribute quantization residual value of a third component of a current point, wherein the first component is a U component, a V component, a G component, or a B component;

obtaining an attribute reconstruction value of the current point based on the attribute quantization residual value of the first component, the attribute quantization residual value of the second component, and the attribute quantization residual value of the third component.

In a second aspect, the present application provides an encoding method, including:

determining an attribute quantization residual value of a first component, an attribute quantization residual value of a second component, and an attribute quantization residual value of a third component of a current point to be encoded in a current point cloud, wherein the first component is a U component, a V component, a G component, or a B component;

sequentially encoding the attribute quantization residual value of the first component, the attribute quantization residual value of the second component, and the attribute quantization residual value of the third component to obtain a code stream of the current point cloud.
第三方面,本申请提供了一种解码器,包括:In a third aspect, the present application provides a decoder, including:
解码单元,用于依次从当前点云的码流中解码当前点的第一分量的属性量化残差值、第二分量的属性量化残差值以及第三分量的属性量化残差值;其中,该第一分量为U分量、V分量、G分量或B分量;The decoding unit is used to sequentially decode the attribute quantized residual value of the first component of the current point, the attribute quantized residual value of the second component, and the attribute quantized residual value of the third component from the code stream of the current point cloud; wherein, The first component is a U component, a V component, a G component or a B component;
获取单元,用于基于该第一分量的属性量化残差值、该第二分量的属性量化残差值以及该第三分量的属性量化残差值,获取该当前点的属性重建值。An acquiring unit, configured to acquire the attribute reconstruction value of the current point based on the attribute quantized residual value of the first component, the attribute quantized residual value of the second component, and the attribute quantized residual value of the third component.
第四方面,本申请提供了一种编码器,包括:In a fourth aspect, the present application provides an encoder, including:
确定单元,用于确定当前点云中待编码的当前点的第一分量的属性量化残差值、第二分量的属性量化残差值以及第三分量的属性量化残差值;其中,该第一分量为U分量、V分量、G分量或B分量;The determination unit is used to determine the attribute quantization residual value of the first component, the attribute quantization residual value of the second component and the attribute quantization residual value of the third component of the current point to be encoded in the current point cloud; wherein, the first One component is U component, V component, G component or B component;
编码单元,用于依次编码该第一分量的属性量化残差值、该第二分量的属性量化残差值以及该第三分量的属性量化残差值,以得到该当前点云的码流。The encoding unit is configured to sequentially encode the attribute quantized residual value of the first component, the attribute quantized residual value of the second component, and the attribute quantized residual value of the third component to obtain the code stream of the current point cloud.
第五方面,本申请提供了一种编解码设备,包括:In a fifth aspect, the present application provides a codec device, including:
处理器,适于实现计算机指令;以及,a processor adapted to implement computer instructions; and,
计算机可读存储介质,计算机可读存储介质存储有计算机指令,计算机指令适于由处理器加载并执行上述第一方面至第二方面中的任一方面或其各实现方式中的编解码方法。A computer-readable storage medium, the computer-readable storage medium stores computer instructions, and the computer instructions are suitable for being loaded by a processor and executing any one of the above-mentioned first to second aspects or the encoding and decoding methods in each implementation manner.
在一种实现方式中,该处理器为一个或多个,该存储器为一个或多个。In an implementation manner, there are one or more processors, and one or more memories.
在一种实现方式中,该计算机可读存储介质可以与该处理器集成在一起,或者该计算机可读存储介质与处理器分离设置。In an implementation manner, the computer-readable storage medium may be integrated with the processor, or the computer-readable storage medium may be provided separately from the processor.
第六方面，本申请实施例提供了一种计算机可读存储介质，该计算机可读存储介质存储有计算机指令，该计算机指令被计算机设备的处理器读取并执行时，使得计算机设备执行上述第一方面至第二方面中的任一方面或其各实现方式中的编解码方法。In a sixth aspect, an embodiment of the present application provides a computer-readable storage medium, where the computer-readable storage medium stores computer instructions, and when the computer instructions are read and executed by a processor of a computer device, the computer device is caused to perform the encoding/decoding method in any one of the above first aspect to the second aspect or the implementations thereof.
基于以上技术方案，将当前点的第一个待解码的分量（即第一分量）设计为U分量、V分量、G分量或B分量，能够提升解压缩性能。Based on the above technical solutions, designing the first component to be decoded of the current point (that is, the first component) as the U component, the V component, the G component or the B component can improve decompression performance.
附图说明Description of drawings
图1是本申请实施例提供的点云图像的示例。Fig. 1 is an example of a point cloud image provided by an embodiment of the present application.
图2是图1所示的点云图像的局部放大图。FIG. 2 is a partially enlarged view of the point cloud image shown in FIG. 1 .
图3是本申请实施例提供的具有六个观看角度的点云图像的示例。Fig. 3 is an example of a point cloud image with six viewing angles provided by an embodiment of the present application.
图4是本申请实施例提供的编码框架的示意性框图。Fig. 4 is a schematic block diagram of a coding framework provided by an embodiment of the present application.
图5是本申请实施例提供的解码框架的示意性框图。Fig. 5 is a schematic block diagram of a decoding framework provided by an embodiment of the present application.
图6是本申请实施例提供的包围盒的示例。Fig. 6 is an example of a bounding box provided by an embodiment of the present application.
图7是本申请实施例提供的对包围盒进行八叉树划分的示例。FIG. 7 is an example of performing octree division on a bounding box provided by an embodiment of the present application.
图8至图10示出了莫顿码在二维空间中的编码顺序。8 to 10 show the encoding sequence of Morton codes in two-dimensional space.
图11示出了莫顿码在三维空间中的编码顺序。Fig. 11 shows the encoding order of Morton codes in three-dimensional space.
图12是本申请实施例提供的解码方法的示意性流程图。Fig. 12 is a schematic flowchart of a decoding method provided by an embodiment of the present application.
图13是本申请实施例提供的编码方法的示意性流程图。Fig. 13 is a schematic flowchart of an encoding method provided by an embodiment of the present application.
图14是本申请实施例提供的解码器的示意性框图。Fig. 14 is a schematic block diagram of a decoder provided by an embodiment of the present application.
图15是本申请实施例提供的编码器的示意性框图。Fig. 15 is a schematic block diagram of an encoder provided by an embodiment of the present application.
图16是本申请实施例提供的编解码设备的示意性框图。Fig. 16 is a schematic block diagram of a codec device provided by an embodiment of the present application.
具体实施方式Detailed ways
下面将结合附图,对本申请实施例中的技术方案进行描述。The technical solutions in the embodiments of the present application will be described below with reference to the accompanying drawings.
点云（Point Cloud）是空间中一组无规则分布的、表达三维物体或三维场景的空间结构及表面属性的离散点集。图1和图2分别示出了三维点云图像和局部放大图，可以看到点云表面是由分布稠密的点所组成的。A point cloud is a set of discrete points irregularly distributed in space that expresses the spatial structure and surface attributes of a three-dimensional object or three-dimensional scene. Fig. 1 and Fig. 2 respectively show a three-dimensional point cloud image and a partially enlarged view, and it can be seen that the point cloud surface is composed of densely distributed points.
二维图像在每一个像素点均有信息表达，分布规则，因此不需要额外记录其位置信息；然而点云中的点在三维空间中的分布具有随机性和不规则性，因此需要记录每一个点在空间中的位置，才能完整地表达一幅点云。与二维图像类似，采集过程中每一个位置均有对应的属性信息，通常为RGB颜色值，颜色值反映物体的色彩；对于点云来说，每一个点所对应的属性信息除了颜色以外，还有比较常见的是反射率（reflectance）值，反射率值反映物体的表面材质。因此，点云数据通常包括三维位置信息所组成的几何信息(x,y,z)和三维颜色信息(r,g,b)、一维反射率信息(r)所组成的属性信息。A two-dimensional image has information expressed at every pixel and the pixels are regularly distributed, so there is no need to additionally record position information; however, the distribution of the points of a point cloud in three-dimensional space is random and irregular, so the position of every point in space has to be recorded before a point cloud can be fully expressed. Similar to two-dimensional images, each position has corresponding attribute information in the acquisition process, usually an RGB color value, and the color value reflects the color of the object; for a point cloud, besides color, another common piece of attribute information for each point is the reflectance value, which reflects the surface material of the object. Therefore, point cloud data usually includes geometry information (x, y, z) composed of three-dimensional position information, and attribute information composed of three-dimensional color information (r, g, b) and one-dimensional reflectance information (r).
换言之,点云中每个点可以包括几何信息和属性信息,其中,点云中每个点的几何信息是指该点的笛卡尔三维坐标数据,点云中每个点的属性信息可以包括但不限于以下至少一种:颜色信息、材质信息、激光反射强度信息。颜色信息可以是任意一种色彩空间上的信息。例如,颜色信息可以是红绿蓝(Red Green Blue,RGB)信息。再如,颜色信息还可以是亮度色度(YCbCr,YUV)信息。其中,Y表示明亮度(Luma),Cb(U)表示蓝色色度分量,Cr(V)表示红色色度分量。点云中的每个点都具有相同数量的属性信息。例如,点云中的每个点都具有颜色信息和激光反射强度两种属性信息。再如,点云中的每个点都具有颜色信息、材质信息和激光反射强度信息三种属性信息。In other words, each point in the point cloud can include geometric information and attribute information, wherein the geometric information of each point in the point cloud refers to the Cartesian three-dimensional coordinate data of the point, and the attribute information of each point in the point cloud can include but It is not limited to at least one of the following: color information, material information, and laser reflection intensity information. The color information can be information on any color space. For example, the color information may be Red Green Blue (RGB) information. For another example, the color information may also be brightness and chrominance (YCbCr, YUV) information. Among them, Y represents brightness (Luma), Cb (U) represents a blue chroma component, and Cr (V) represents a red chroma component. Each point in the point cloud has the same amount of attribute information. For example, each point in the point cloud has two attribute information, color information and laser reflection intensity. For another example, each point in the point cloud has three attribute information: color information, material information and laser reflection intensity information.
点云图像可具有多个观看角度，例如，如图3所示的点云图像可具有六个观看角度，点云图像对应的数据存储格式由文件头信息部分和数据部分组成，头信息包含了数据格式、数据表示类型、点云总点数、以及点云所表示的内容。A point cloud image may have multiple viewing angles; for example, the point cloud image shown in Fig. 3 may have six viewing angles. The data storage format corresponding to a point cloud image consists of a file header information part and a data part, where the header information includes the data format, the data representation type, the total number of points of the point cloud, and the content represented by the point cloud.
作为示例,点云图像的数据存储格式可实现为以下格式:As an example, the data storage format of a point cloud image can be implemented as the following format:
ply
format ascii 1.0
element vertex 207242
property float x
property float y
property float z
property uchar red
property uchar green
property uchar blue
75 318 0 0 142 0
75 319 0 0 143 0
75 319 1 1 9 9
75 315 0 1 9 9
针对上述点云图像的数据存储格式,其数据格式为“.ply”格式,由ASCII码表示,总点数为207242, 每个点具有三维位置信息xyz和三维颜色信息rgb。For the data storage format of the above point cloud image, the data format is ".ply" format, represented by ASCII code, the total number of points is 207242, and each point has three-dimensional position information xyz and three-dimensional color information rgb.
点云可以灵活方便地表达三维物体或场景的空间结构及表面属性,并且由于点云通过直接对真实物体采样获得,在保证精度的前提下能提供极强的真实感,因而应用广泛,其范围包括虚拟现实游戏、计算机辅助设计、地理信息系统、自动导航系统、数字文化遗产、自由视点广播、三维沉浸远程呈现、生物组织器官三维重建等。Point cloud can flexibly and conveniently express the spatial structure and surface properties of three-dimensional objects or scenes, and because point cloud is obtained by directly sampling real objects, it can provide a strong sense of reality under the premise of ensuring accuracy, so it is widely used. Including virtual reality games, computer-aided design, geographic information systems, automatic navigation systems, digital cultural heritage, free viewpoint broadcasting, 3D immersive telepresence, 3D reconstruction of biological tissues and organs, etc.
基于应用场景可以将点云划分为两大类别,即机器感知点云和人眼感知点云。机器感知点云的应用场景包括但不限于:自主导航系统、实时巡检系统、地理信息系统、视觉分拣机器人、抢险救灾机器人等点云应用场景。人眼感知点云的应用场景包括但不限于:数字文化遗产、自由视点广播、三维沉浸通信、三维沉浸交互等点云应用场景。相应的,可以基于点云的获取方式,将点云划分为密集型点云和稀疏型点云;也可基于点云的获取途径将点云划分为静态点云和动态点云,更具体可划分为三种类型的点云,即第一静态点云、第二类动态点云以及第三类动态获取点云。针对第一静态点云,物体是静止的,且获取点云的设备也是静止的;针对第二类动态点云,物体是运动的,但获取点云的设备是静止的;针对第三类动态获取点云,获取点云的设备是运动的。Based on application scenarios, point clouds can be divided into two categories, namely, machine-perceived point clouds and human-eye-perceived point clouds. The application scenarios of machine perception point cloud include but are not limited to: autonomous navigation system, real-time inspection system, geographic information system, visual sorting robot, emergency rescue robot and other point cloud application scenarios. The application scenarios of point cloud perceived by the human eye include but are not limited to: digital cultural heritage, free viewpoint broadcasting, 3D immersive communication, 3D immersive interaction and other point cloud application scenarios. Correspondingly, the point cloud can be divided into dense point cloud and sparse point cloud based on the acquisition method of the point cloud; the point cloud can also be divided into static point cloud and dynamic point cloud based on the acquisition method of the point cloud, more specifically, it can be It is divided into three types of point clouds, namely, the first static point cloud, the second type of dynamic point cloud and the third type of dynamically acquired point cloud. For the first static point cloud, the object is stationary, and the device for obtaining the point cloud is also stationary; for the second type of dynamic point cloud, the object is moving, but the device for obtaining the point cloud is stationary; for the third type of dynamic Obtaining point cloud, the equipment for obtaining point cloud is in motion.
点云的采集主要有以下途径:计算机生成、3D激光扫描、3D摄影测量等。计算机可以生成虚拟三维物体及场景的点云;3D激光扫描可以获得静态现实世界三维物体或场景的点云,每秒可以获取百万级点云;3D摄影测量可以获得动态现实世界三维物体或场景的点云,每秒可以获取千万级点云。具体而言,可通过光电雷达、激光雷达、激光扫描仪、多视角相机等采集设备,可以采集得到物体表面的点云。根据激光测量原理得到的点云,其可以包括点的三维坐标信息和点的激光反射强度(reflectance)。根据摄影测量原理得到的点云,其可以可包括点的三维坐标信息和点的颜色信息。结合激光测量和摄影测量原理得到点云,其可以可包括点的三维坐标信息、点的激光反射强度(reflectance)和点的颜色信息。这些技术降低了点云数据获取成本和时间周期,提高了数据的精度。例如,在医学领域,由磁共振成像(magnetic resonance imaging,MRI)、计算机断层摄影(computed tomography,CT)、电磁定位信息,可以获得生物组织器官的点云。这些技术降低了点云的获取成本和时间周期,提高了数据的精度。点云数据获取方式的变革,使大量点云数据的获取成为可能,伴随着应用需求的增长,海量3D点云数据的处理遭遇存储空间和传输带宽限制的瓶颈。The collection of point clouds mainly has the following methods: computer generation, 3D laser scanning, 3D photogrammetry, etc. Computers can generate point clouds of virtual three-dimensional objects and scenes; 3D laser scanning can obtain point clouds of static real-world three-dimensional objects or scenes, and can obtain millions of point clouds per second; 3D photogrammetry can obtain dynamic real-world three-dimensional objects or scenes The point cloud of tens of millions of points can be obtained per second. Specifically, the point cloud of the surface of the object can be collected through acquisition equipment such as photoelectric radar, lidar, laser scanner, and multi-view camera. The point cloud obtained according to the laser measurement principle may include the three-dimensional coordinate information of the point and the laser reflection intensity (reflectance) of the point. The point cloud obtained according to the principle of photogrammetry may include the three-dimensional coordinate information of the point and the color information of the point. The point cloud is obtained by combining the principles of laser measurement and photogrammetry, which may include the three-dimensional coordinate information of the point, the laser reflection intensity (reflectance) of the point and the color information of the point. These technologies reduce the cost and time period of point cloud data acquisition, and improve the accuracy of the data. For example, in the medical field, point clouds of biological tissues and organs can be obtained from magnetic resonance imaging (MRI), computed tomography (CT), and electromagnetic positioning information. These technologies reduce the acquisition cost and time period of point cloud, and improve the accuracy of data. The transformation of the point cloud data acquisition method has made it possible to acquire a large amount of point cloud data. With the growth of application requirements, the processing of massive 3D point cloud data encounters the bottleneck of storage space and transmission bandwidth limitations.
以帧率为30fps(帧每秒)的点云视频为例,每帧点云的点数为70万,其中,每帧点云中的每一个点具有坐标信息xyz(float)和颜色信息RGB(uchar),则10s点云视频的数据量大约为0.7million·(4Byte·3+1Byte·3)·30fps·10s=3.15GB,而YUV采样格式为4:2:0,帧率为24fps的1280×720二维视频,其10s的数据量约为1280·720·12bit·24frames·10s≈0.33GB,10s的两视角3D视频的数据量约为0.33·2=0.66GB。由此可见,点云视频的数据量远超过相同时长的二维视频和三维视频的数据量。因此,为更好地实现数据管理,节省服务器存储空间,降低服务器与客户端之间的传输流量及传输时间,点云压缩成为促进点云产业发展的关键问题。Taking a point cloud video with a frame rate of 30fps (frame per second) as an example, the number of points in each frame of point cloud is 700,000, and each point in each frame of point cloud has coordinate information xyz (float) and color information RGB ( uchar), the data volume of 10s point cloud video is about 0.7million·(4Byte·3+1Byte·3)·30fps·10s=3.15GB, while the YUV sampling format is 4:2:0, and the frame rate is 1280 at 24fps For ×720 two-dimensional video, the data volume of 10s is about 1280·720·12bit·24frames·10s≈0.33GB, and the data volume of 10s of two-view 3D video is about 0.33·2=0.66GB. It can be seen that the data volume of point cloud video far exceeds the data volume of 2D video and 3D video of the same duration. Therefore, in order to better realize data management, save server storage space, and reduce the transmission traffic and transmission time between the server and the client, point cloud compression has become a key issue to promote the development of the point cloud industry.
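For intuition, the data-volume comparison above can be reproduced with a few lines of arithmetic. The following sketch simply restates the calculation in Python using the constants quoted in the text (700,000 points per frame, 30 fps, 10 s, float32 coordinates and 8-bit RGB; 1280×720 4:2:0 video at 24 fps); it is an illustration only, not part of any codec.

point_cloud_bytes = 700_000 * (4 * 3 + 1 * 3) * 30 * 10   # xyz as float32, RGB as uchar, 30 fps, 10 s
video_bytes = 1280 * 720 * 12 // 8 * 24 * 10               # 4:2:0 video, 12 bits per pixel, 24 fps, 10 s
stereo_bytes = 2 * video_bytes                              # two-view 3D video
print(point_cloud_bytes / 1e9, video_bytes / 1e9, stereo_bytes / 1e9)   # ≈ 3.15 GB, 0.33 GB, 0.66 GB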
点云压缩一般采用点云几何信息和属性信息分别压缩的方式，在编码端，首先在几何编码器中编码点云几何信息，然后将重建几何信息作为附加信息输入到属性编码器中，辅助点云属性的压缩；在解码端，首先在几何解码器中解码点云几何信息，然后将解码后的几何信息作为附加信息输入到属性解码器中，辅助点云属性的压缩。整个编解码器由预处理/后处理、几何编码/解码、属性编码/解码几部分组成。Point cloud compression generally compresses the geometry information and the attribute information of the point cloud separately. At the encoding end, the point cloud geometry information is first encoded in a geometry encoder, and the reconstructed geometry information is then fed into the attribute encoder as additional information to assist the compression of the point cloud attributes; at the decoding end, the point cloud geometry information is first decoded in a geometry decoder, and the decoded geometry information is then fed into the attribute decoder as additional information to assist the compression of the point cloud attributes. The whole codec consists of pre-processing/post-processing, geometry encoding/decoding, and attribute encoding/decoding.
点云可通过各种类型的编码框架和解码框架分别进行编码和解码。作为示例,编解码框架可以是运动图象专家组(Moving Picture Experts Group,MPEG)提供的几何点云压缩(Geometry Point Cloud Compression,G-PCC)编解码框架或视频点云压缩(Video Point Cloud Compression,V-PCC)编解码框架,也可以是音视频编码标准(Audio Video Standard,AVS)专题组提供的AVS-PCC编解码框架或点云压缩参考平台(PCRM)框架。G-PCC编解码框架可用于针对第一静态点云和第三类动态获取点云进行压缩,V-PCC编解码框架可用于针对第二类动态点云进行压缩。G-PCC编解码框架也称为点云编解码器TMC13,V-PCC编解码框架也称为点云编解码器TMC2。G-PCC及AVS-PCC均针对静态的稀疏型点云,其编码框架大致相同。下面以PCRM框架为例对本申请实施例可适用的编解码框架进行说明。Point clouds can be encoded and decoded by various types of encoding frameworks and decoding frameworks, respectively. As an example, the codec framework may be the Geometry Point Cloud Compression (G-PCC) codec framework or Video Point Cloud Compression (Video Point Cloud Compression) provided by the Moving Picture Experts Group (MPEG). , V-PCC) codec framework, or the AVS-PCC codec framework or Point Cloud Compression Reference Platform (PCRM) framework provided by the Audio Video Standard (AVS) task force. The G-PCC codec framework can be used to compress the first static point cloud and the third type of dynamically acquired point cloud, and the V-PCC codec framework can be used to compress the second type of dynamic point cloud. The G-PCC codec framework is also called point cloud codec TMC13, and the V-PCC codec framework is also called point cloud codec TMC2. Both G-PCC and AVS-PCC are aimed at static sparse point clouds, and their coding frameworks are roughly the same. The codec framework applicable to the embodiment of the present application will be described below by taking the PCRM framework as an example.
图4是本申请实施例提供的编码框架的示意性框图。Fig. 4 is a schematic block diagram of a coding framework provided by an embodiment of the present application.
如图4所示,在编码框架中,点云的几何信息和每点所对应的属性信息是分开编码的。As shown in Figure 4, in the encoding framework, the geometric information of point cloud and the attribute information corresponding to each point are encoded separately.
在编码端的几何编码部分,首先,对原始几何信息进行预处理,即通过坐标平移将几何原点归一化到点云空间中的最小值位置,并通过坐标量化将几何信息从浮点数转化为整形,便于后续的规则化处理,由于量化取整使得一部分点的几何信息相同,此时需要决定是否移除重复点,量化和移除重复点属于预处理过程;然后,对规则化的几何信息进行几何编码,即采用八叉树结构对点云空间进行递归划分,每次将当前块划分成八个相同大小的子块,并判断每个子块的占有码字情况,当子块内不包含点时记为空,否则记为非空,在递归划分的最后一层记录所有块的占有码字信息,并进行编码;通过八叉树结构表达 的几何信息一方面输入到几何熵编码器中形成几何码流。In the geometric coding part of the encoding end, firstly, the original geometric information is preprocessed, that is, the geometric origin is normalized to the minimum value position in the point cloud space through coordinate translation, and the geometric information is converted from floating point numbers to integers through coordinate quantization , which is convenient for subsequent regularization processing. Since the geometric information of some points is the same due to quantization and rounding, it is necessary to decide whether to remove duplicate points at this time. Quantization and removal of duplicate points belong to the preprocessing process; then, the regularized geometric information is Geometric coding, that is, to use the octree structure to recursively divide the point cloud space. Each time the current block is divided into eight sub-blocks of the same size, and the occupancy of each sub-block is judged. When the sub-block does not contain points When it is recorded as empty, otherwise it is recorded as non-empty. In the last layer of recursive division, the occupied codeword information of all blocks is recorded and encoded; the geometric information expressed by the octree structure is input to the geometric entropy encoder on the one hand to form Geometry stream.
此外,几何编码完成后对几何信息进行重建,利用重建的几何信息来对属性信息进行编码。In addition, after the geometric encoding is completed, the geometric information is reconstructed, and the attribute information is encoded by using the reconstructed geometric information.
在属性编码部分,首先,属性编码主要针对颜色、反射率信息进行的编码。首先,判断是否进行颜色空间的转换,如果处理的属性信息为颜色信息,还需要将原始颜色进行颜色空间变换,将其转变成更符合人眼视觉特性的YUV色彩空间;然后,在几何有损编码的情况下,由于几何信息在几何编码之后有所异动,因此需要为几何编码后的每一个点重新分配属性值,使得重建点云和原始点云的属性误差最小,这个过程叫做属性插值或属性重上色;接着,对预处理后属性信息进行属性编码,在属性信息编码中分为属性预测与属性变换;其中属性预测过程指对点云进行重排序以及进行属性预测。重排序的方法有莫顿重排序和希尔伯特(Hilbert)重排序;例如,AVS编码框架中均采用希尔伯特码对点云进行重排序;排序之后的点云使用差分方式进行属性预测,具体地,若当前待编码点与前一个已编码点的几何信息相同,即为重复点,则利用重复点的重建属性值作为当前待编码点的属性预测值,否则对当前待编码点选择前希尔伯特顺序的m个点作为邻居候选点,然后分别计算它们同当前待编码点的几何信息的曼哈顿距离,确定距离最近的n个点作为当前待编码点的预测点,以距离的倒数作为权重,计算n个邻居的属性的加权平均值,作为当前待编码点的属性预测值。In the attribute coding part, firstly, the attribute coding is mainly for the coding of color and reflectance information. First, determine whether to perform color space conversion. If the processed attribute information is color information, the original color needs to be transformed into a YUV color space that is more in line with the visual characteristics of the human eye; then, the geometric lossy In the case of encoding, since the geometric information changes after geometric encoding, it is necessary to reassign attribute values for each point after geometric encoding, so that the attribute error between the reconstructed point cloud and the original point cloud is minimized. This process is called attribute interpolation or Attribute recoloring; then, attribute encoding is performed on the preprocessed attribute information, which is divided into attribute prediction and attribute transformation; the attribute prediction process refers to the reordering of point clouds and attribute prediction. The reordering methods include Morton reordering and Hilbert (Hilbert) reordering; for example, the Hilbert code is used in the AVS coding framework to reorder the point cloud; the sorted point cloud uses a differential method for attribute prediction , specifically, if the geometric information of the current point to be encoded is the same as that of the previous encoded point, that is, it is a repeated point, then use the reconstructed attribute value of the repeated point as the attribute prediction value of the current point to be encoded, otherwise select The m points of the previous Hilbert order are used as neighbor candidate points, and then the Manhattan distance between them and the geometric information of the current point to be encoded is calculated respectively, and the n points with the closest distance are determined as the prediction points of the current point to be encoded, and the distance The reciprocal is used as the weight, and the weighted average of the attributes of n neighbors is calculated as the attribute prediction value of the current point to be encoded.
例如,可通过以下方式得到当前待编码点的属性预测值:For example, the attribute prediction value of the current point to be encoded can be obtained in the following ways:
PredR = (1/W1×ref1 + 1/W2×ref2 + 1/W3×ref3) / (1/W1 + 1/W2 + 1/W3)
其中，W1、W2、W3分别表示预测点1、预测点2、预测点3与当前待编码点的几何距离，ref1、ref2、ref3分别表示预测点1、预测点2、预测点3的属性重建值。Among them, W1, W2 and W3 respectively represent the geometric distances between prediction point 1, prediction point 2, prediction point 3 and the current point to be encoded, and ref1, ref2 and ref3 respectively represent the attribute reconstruction values of prediction point 1, prediction point 2 and prediction point 3.
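To make the inverse-distance weighting above concrete, the following Python sketch computes an attribute prediction value from a set of already-coded neighbor points. The neighbor search and the handling of duplicate points are omitted, and the function name and data layout are illustrative assumptions rather than part of the described codec.

def predict_attribute(neighbors):
    # neighbors: list of (geometric_distance, reconstructed_attribute) pairs,
    # e.g. the n nearest previously coded points of the current point
    weighted_sum = 0.0
    weight_total = 0.0
    for dist, attr in neighbors:
        w = 1.0 / dist                    # weight = reciprocal of the geometric distance
        weighted_sum += w * attr
        weight_total += w
    return weighted_sum / weight_total

# With three prediction points this reproduces the PredR formula above:
pred_r = predict_attribute([(1.0, 120), (2.0, 118), (4.0, 110)])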
得到当前待编码点的属性预测值后,基于当前待编码点的属性预测值得到当前待编码点的残差值,残差值为当前待编码点的原始属性值与预测属性值之间的差值;最后对残差值进行量化,将量化残差输入到属性熵编码器中形成属性码流。After the attribute prediction value of the current point to be encoded is obtained, the residual value of the current point to be encoded is obtained based on the attribute prediction value of the current point to be encoded, and the residual value is the difference between the original attribute value and the predicted attribute value of the current point to be encoded value; finally, the residual value is quantized, and the quantized residual is input into the attribute entropy encoder to form an attribute code stream.
图5是本申请实施例提供的解码框架的示意性框图。Fig. 5 is a schematic block diagram of a decoding framework provided by an embodiment of the present application.
如图5所示,在解码端,同样采用几何和属性分别解码的方式。在几何解码部分,首先对几何码流进行熵解码,得到每个点的几何信息,然后按照和几何编码相同的方式构建八叉树结构,结合解码几何重建出坐标变换后的、通过八叉树结构表达的几何信息,一方面将该信息进行坐标反量化和反平移,得到解码几何信息,一方面作为附加信息输入到属性解码器中。在属性解码部分,按照与编码端相同的方式构建莫顿顺序,先对属性码流进行熵解码,得到量化后的残差信息;然后进行反量化,得到点云残差;类似的,按照与属性编码相同的方式,获得当前待解码点的属性预测值,然后将属性预测值与残差值相加,可以恢复出当前待解码点的YUV属性值;最后,经过颜色空间反变换得到解码属性信息。As shown in Figure 5, at the decoding end, the geometry and attributes are also decoded separately. In the geometric decoding part, the geometric code stream is first entropy decoded to obtain the geometric information of each point, and then the octree structure is constructed in the same way as the geometric encoding, and the coordinate transformation is reconstructed by combining the decoded geometry. The geometric information expressed by the structure, on the one hand, coordinate inverse quantization and inverse translation of the information to obtain the decoded geometric information, on the other hand, it is input into the attribute decoder as additional information. In the attribute decoding part, the Morton order is constructed in the same way as the encoding side, and the attribute code stream is entropy decoded to obtain the quantized residual information; then inverse quantization is performed to obtain the point cloud residual; similarly, according to the In the same manner as attribute encoding, the attribute prediction value of the current point to be decoded is obtained, and then the attribute prediction value is added to the residual value to restore the YUV attribute value of the current point to be decoded; finally, the decoded attribute is obtained through inverse color space transformation information.
为便于描述,下面对点云的规则化处理方法进行说明。For the convenience of description, the regularization processing method of point cloud is described below.
由于点云在空间中无规则分布的特性,给编码过程带来挑战,因此采用递归八叉树的结构,如图6所示,将点云中的点规则化地表达成立方体的中心。具体而言,首先将整幅点云放置在一个正方体包围盒内,点云中点的坐标表示为(x k,y k,z k),k=0,…,K-1,其中K是点云的总点数,点云在x、y、z方向上的边界值分别为: Due to the irregular distribution of point cloud in space, it brings challenges to the encoding process, so the recursive octree structure is adopted, as shown in Figure 6, to express the points in the point cloud as the center of the cube in a regular way. Specifically, the entire point cloud is first placed in a cube bounding box, and the coordinates of the point cloud midpoint are expressed as (x k , y k , z k ), k=0,...,K-1, where K is The total number of points in the point cloud, and the boundary values of the point cloud in the x, y, and z directions are:
x_min = min(x_0, x_1, …, x_(K-1));
y_min = min(y_0, y_1, …, y_(K-1));
z_min = min(z_0, z_1, …, z_(K-1));
x_max = max(x_0, x_1, …, x_(K-1));
y_max = max(y_0, y_1, …, y_(K-1));
z_max = max(z_0, z_1, …, z_(K-1));
则包围盒的原点(x origin,y origin,z origin)可以计算如下: Then the origin (x origin , y origin , z origin ) of the bounding box can be calculated as follows:
x_origin = int(floor(x_min));
y_origin = int(floor(y_min));
z_origin = int(floor(z_min));
其中,floor()表示向下取整计算或向下舍入计算。int()表示取整运算。Among them, floor() represents the calculation of rounding down or rounding down. int() means rounding operation.
基于边界值和原点的计算公式,可以计算包围盒在x、y、z方向上的尺寸如下:Based on the calculation formula of the boundary value and the origin, the size of the bounding box in the x, y, and z directions can be calculated as follows:
BoundingBoxSize_x = int(x_max - x_origin) + 1;
BoundingBoxSize_y = int(y_max - y_origin) + 1;
BoundingBoxSize_z = int(z_max - z_origin) + 1;
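The origin and size computation above can be sketched as follows; the function name and the example coordinates are illustrative only, and the per-axis logic mirrors the formulas given in the text.

import math

def bounding_box(points):
    # points: iterable of (x, y, z) floating-point coordinates
    xs, ys, zs = zip(*points)
    origin = (int(math.floor(min(xs))),
              int(math.floor(min(ys))),
              int(math.floor(min(zs))))
    size = (int(max(xs) - origin[0]) + 1,
            int(max(ys) - origin[1]) + 1,
            int(max(zs) - origin[2]) + 1)
    return origin, size

origin, size = bounding_box([(0.2, 1.7, 3.0), (5.9, 0.1, 2.4), (2.5, 4.8, 7.6)])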
如图7所示,得到包围盒在x、y、z方向上的尺寸后,首先对包围盒进行八叉树划分,每次得到八个子块,然后对子块中的非空块(包含点的块)进行再一次的八叉树划分,如此递归划分直到某个深度,将最终大小的非空子块称作体素(voxel),每个voxel中包含一个或多个点,将这些点的几何位置归一 化为voxel的中心点,该中心点的属性值取voxel中所有点的属性值的平均值。将点云规则化为空间中的块,有利于点云中点与点的关系描述,进而能够表达特定的编码顺序,确定一定的顺序编码每一个体素(voxel),即编码voxel所代表的点(或称“节点”),一种常用的编码顺序为交叉分离式的莫顿顺序。As shown in Figure 7, after obtaining the dimensions of the bounding box in the x, y, and z directions, the bounding box is first divided into octrees, and eight sub-blocks are obtained each time, and then the non-empty blocks in the sub-blocks (including points) block) to divide the octree again, so recursively divide until a certain depth, the non-empty sub-block of the final size is called a voxel (voxel), each voxel contains one or more points, and the points of these points The geometric position is normalized to the center point of the voxel, and the attribute value of the center point is the average value of the attribute values of all points in the voxel. Regularizing the point cloud into blocks in space is conducive to the description of the relationship between points and points in the point cloud, and then can express a specific encoding order, and determine a certain order to encode each voxel (voxel), that is, the encoded voxel represents Point (or "node"), a commonly used encoding order is the cross-separated Morton order.
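As a rough illustration of the voxelization described above (not the codec's actual recursive octree routine), the sketch below groups points into voxels of an assumed size, represents each non-empty voxel by its center, and averages the attributes of the points it contains.

from collections import defaultdict

def voxelize(points, attrs, voxel_size=1.0):
    # points: list of (x, y, z); attrs: list of scalar attribute values of the same length
    buckets = defaultdict(list)
    for (x, y, z), a in zip(points, attrs):
        key = (int(x // voxel_size), int(y // voxel_size), int(z // voxel_size))
        buckets[key].append(a)
    voxels = {}
    for (ix, iy, iz), values in buckets.items():
        center = ((ix + 0.5) * voxel_size, (iy + 0.5) * voxel_size, (iz + 0.5) * voxel_size)
        voxels[center] = sum(values) / len(values)   # attribute of the voxel = mean of its points
    return voxels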
图8至图10示出了莫顿码在二维空间中的编码顺序。图11示出了莫顿码在三维空间中的编码顺序。箭头的顺序表示莫顿顺序下点的编码顺序。图8示出了二维空间中2*2个像素的“z”字形莫顿编码顺序,图9示出了二维空间中4个2*2块之间的“z”字形莫顿编码顺序,图10示出了二维空间中4个4*4块之间的“z”字形莫顿编码顺序,组成为整个8*8块的莫顿编码顺序。扩展到三维空间中的莫顿编码顺序如图11所示,图11中展示了16个点,每个“z”字内部,每个“z”与“z”之间的莫顿编码顺序都是先沿x轴方向编码,再沿y轴,最后沿z轴。8 to 10 show the encoding sequence of Morton codes in two-dimensional space. Fig. 11 shows the encoding order of Morton codes in three-dimensional space. The order of the arrows indicates the encoding order of the points under the Morton order. Figure 8 shows the "z"-shaped Morton coding sequence of 2*2 pixels in two-dimensional space, and Figure 9 shows the "z"-shaped Morton coding sequence between four 2*2 blocks in two-dimensional space , FIG. 10 shows the Morton coding order of the "z" shape between four 4*4 blocks in a two-dimensional space, which constitutes the Morton coding order of the entire 8*8 block. The Morton coding sequence extended to three-dimensional space is shown in Figure 11. Figure 11 shows 16 points, and the Morton coding sequence between each "z" and "z" inside each "z" is It is encoded along the x-axis first, then along the y-axis, and finally along the z-axis.
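The Morton order illustrated in the figures can be generated by interleaving the bits of the x, y and z coordinates, with x occupying the lowest bit of each group so that x varies fastest, then y, then z. The following sketch is one common way to compute such a code; it is illustrative and not the exact routine of any particular reference software.

def morton3d(x: int, y: int, z: int, bits: int = 10) -> int:
    code = 0
    for i in range(bits):
        code |= ((x >> i) & 1) << (3 * i)        # x bit -> lowest position of each group
        code |= ((y >> i) & 1) << (3 * i + 1)
        code |= ((z >> i) & 1) << (3 * i + 2)
    return code

# Sorting voxel coordinates by their Morton code gives the traversal order shown in the figures.
voxels = [(1, 0, 0), (0, 0, 0), (0, 1, 1), (1, 1, 0)]
voxels.sort(key=lambda p: morton3d(*p))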
点云压缩中的属性帧内预测部分，对于颜色属性，主要参考当前点的相邻点对当前点进行预测，将属性预测值与当前点属性值计算残差信息后经量化等过程，将残差信息传输到解码端；解码端接收并解析码流后，经反变换与反量化等步骤得到残差信息，解码端以相同过程预测得到属性预测值，与残差信息叠加后得到当前点的属性重建值。In the attribute intra prediction part of point cloud compression, for the color attribute, the current point is mainly predicted with reference to its neighboring points; residual information is calculated from the attribute prediction value and the attribute value of the current point, and after quantization and other processes the residual information is transmitted to the decoding end; after receiving and parsing the code stream, the decoding end obtains the residual information through steps such as inverse transform and inverse quantization, derives the attribute prediction value through the same prediction process, and superimposes it with the residual information to obtain the attribute reconstruction value of the current point.
示例性地,对于不同测试条件,采用不同的颜色信息格式。Exemplarily, for different test conditions, different color information formats are used.
在C1测试条件下和C2测试条件下,可以将输入点云的颜色属性从RGB转换到YUV空间。C1测试条件可以指几何限制有损(limit-lossy geometry)压缩和属性有损(lossy attributes)压缩的测试条件。C2测试条件可以指几何无损压缩和属性有损(lossy attributes)压缩的测试条件。Under the C1 test condition and under the C2 test condition, the color attributes of the input point cloud can be converted from RGB to YUV space. The C1 test conditions may refer to test conditions for limit-lossy geometry compression and lossy attributes compression. The C2 test conditions may refer to test conditions for geometric lossless compression and lossy attributes compression.
示例性地,将输入点云的颜色属性从RGB空间转换到YUV空间的转换公式可以如下所示:Exemplarily, the conversion formula for converting the color attribute of the input point cloud from RGB space to YUV space may be as follows:
Y = 0.2126*R + 0.7152*G + 0.0722*B;
U = -0.114572*R - 0.385428*G + 0.5*B + 128;
V = 0.5*R - 0.454153*G - 0.045847*B + 128;
示例性地,将输入点云的颜色属性从YUV空间转换到RGB空间的逆变换公式可以如下所示:Exemplarily, the inverse transformation formula for converting the color attribute of the input point cloud from YUV space to RGB space can be as follows:
R = Y + 1.5748*(V - 128);
G = Y - 0.18733*(U - 128) - 0.46813*(V - 128);
B = Y + 1.85563*(U - 128);
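The conversion formulas above translate directly into code. The sketch below is a plain transcription of the given coefficients; in an actual encoder the results would typically also be rounded and clipped to the attribute bit depth, which is omitted here.

def rgb_to_yuv(r, g, b):
    y = 0.2126 * r + 0.7152 * g + 0.0722 * b
    u = -0.114572 * r - 0.385428 * g + 0.5 * b + 128
    v = 0.5 * r - 0.454153 * g - 0.045847 * b + 128
    return y, u, v

def yuv_to_rgb(y, u, v):
    r = y + 1.5748 * (v - 128)
    g = y - 0.18733 * (u - 128) - 0.46813 * (v - 128)
    b = y + 1.85563 * (u - 128)
    return r, g, b

# Approximate round-trip check on one RGB sample (0, 142, 0) from the ply listing above:
print(yuv_to_rgb(*rgb_to_yuv(0, 142, 0)))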
在C3测试条件和C4测试条件下,可以直接将RGB格式的点云作为输入点云。C3测试条件可以指几何无损压缩和属性限制有损(limit-lossy attributes)压缩的测试条件。C4测试条件可以指几何无损压缩和属性无损压缩的测试条件。C3测试条件和C4测试条件避免了由于颜色空间转换导致的属性信息出现的误差,能够提升压缩性能。Under the C3 test condition and C4 test condition, the point cloud in RGB format can be directly used as the input point cloud. The C3 test conditions may refer to test conditions for geometrically lossless compression and limit-lossy attributes compression. C4 test conditions may refer to test conditions for geometric lossless compression and attribute lossless compression. C3 test conditions and C4 test conditions avoid errors in attribute information caused by color space conversion, and can improve compression performance.
经过处理后的点云的属性信息将以YUV(或RGB)的顺序输入点云属性压缩编码器,按照点云中点的索引顺序,依次对每一个点进行预测、量化、熵编码处理,为便于理解本申请的方案,下面对编码器的编码过程进行示例性说明。具体的编码步骤描述如下:The attribute information of the processed point cloud will be input into the point cloud attribute compression encoder in the order of YUV (or RGB), and each point will be predicted, quantized, and entropy encoded in sequence according to the index order of the points in the point cloud. To facilitate the understanding of the solution of the present application, the encoding process of the encoder is described as an example below. The specific encoding steps are described as follows:
a)、在当前点之前编码完成的点所组成的范围内为当前点查找N个预测点。a) Find N prediction points for the current point within the range formed by the points that have been encoded before the current point.
b)、根据N个预测点与当前点之间的几何距离分别计算该N个预测点中的每一个预测点的权重值,根据每一个预测点的权重值和每一个点的属性重建值(包括Y分量、U分量以及V分量的属性重建值或R分量、G分量以及B分量的属性重建值),计算当前点的属性预测值(包括Y分量、U分量以及V分量的属性预测值或R分量、G分量以及B分量的属性预测值)。b), respectively calculate the weight value of each prediction point in the N prediction points according to the geometric distance between the N prediction points and the current point, and reconstruct the value according to the weight value of each prediction point and the attribute of each point ( Including the attribute reconstruction value of Y component, U component and V component or the attribute reconstruction value of R component, G component and B component), calculate the attribute prediction value of the current point (including the attribute prediction value of Y component, U component and V component or attribute prediction values of the R component, the G component, and the B component).
例如,若当前点的属性信息的格式为YUV格式,则编码器可以根据每一个预测点的权重值和每一个预测点的Y分量,计算当前点的Y分量的属性预测值,根据每一个预测点的权重值和每一个预测点的U分量,计算当前点的U分量的属性预测值,根据每一个预测点的权重值和每一个预测点的V分量,计算当前点的V分量的属性预测值。再如,若当前点的属性信息的格式为RGB格式,则编码器可以根据每一个预测点的权重值和每一个预测点的R分量的属性重建值,计算当前点的R分量的属性预测值,根据每一个预测点的权重值和每一个预测点的G分量的属性重建值,计算当前点的G分量的属性预测值,根据每一个预测点的权重值和每一个预测点的B分量的属性重建值,计算当前点的B分量的属性预测值。For example, if the format of the attribute information of the current point is YUV format, the encoder can calculate the attribute prediction value of the Y component of the current point according to the weight value of each prediction point and the Y component of each prediction point, and according to each prediction Calculate the attribute prediction value of the U component of the current point based on the weight value of the point and the U component of each prediction point, and calculate the attribute prediction of the V component of the current point according to the weight value of each prediction point and the V component of each prediction point value. For another example, if the attribute information format of the current point is in RGB format, the encoder can calculate the attribute prediction value of the R component of the current point according to the weight value of each prediction point and the attribute reconstruction value of the R component of each prediction point , according to the weight value of each prediction point and the attribute reconstruction value of the G component of each prediction point, calculate the attribute prediction value of the G component of the current point, according to the weight value of each prediction point and the B component of each prediction point Attribute reconstruction value, calculate the attribute prediction value of the B component of the current point.
示例性地,可通过以下公式确定各个分量的属性预测值:Exemplarily, the attribute prediction value of each component can be determined by the following formula:
A_p = (w_1×A_1 + w_2×A_2 + … + w_N×A_N) / (w_1 + w_2 + … + w_N)
其中，A_p表示当前点的各个分量的属性预测值，w_i表示N个预测点中的第i个预测点的权重值，A_i表示N个预测点中的第i个预测点的各个分量的属性重建值。例如，若当前点的颜色信息的格式为YUV格式，则各个分量可以包括Y分量、U分量以及V分量，若当前点的颜色信息的格式为RGB格式，则各个分量可以包括R分量、G分量以及B分量。Among them, A_p represents the attribute prediction value of each component of the current point, w_i represents the weight value of the i-th prediction point among the N prediction points, and A_i represents the attribute reconstruction value of each component of the i-th prediction point among the N prediction points. For example, if the format of the color information of the current point is the YUV format, the components may include the Y component, the U component and the V component; if the format of the color information of the current point is the RGB format, the components may include the R component, the G component and the B component.
c)、编码器根据各个分量的属性预测值和各个分量跨分量预测值,计算各个分量的属性残差值。c) The encoder calculates the attribute residual value of each component according to the attribute prediction value of each component and the cross-component prediction value of each component.
例如,若当前点的颜色属性位于YUV空间,则可以按照Y分量、U分量、V分量的顺序依次计算当前点的各个分量的属性残差值;若当前点的颜色属性位于RGB空间,则可以按照R分量、G分量、B分量的顺序依次计算当前点的各个分量的属性残差值。For example, if the color attribute of the current point is in the YUV space, the attribute residual value of each component of the current point can be calculated sequentially in the order of Y component, U component, and V component; if the color attribute of the current point is in the RGB space, then you can The attribute residual value of each component of the current point is calculated sequentially in the order of R component, G component, and B component.
示例性地,可以按照以下公式计算各个分量的属性残差值:Exemplarily, the attribute residual value of each component can be calculated according to the following formula:
delta = currValue - predictor - residualPrevComponent;
其中,delta表示各个分量的属性残差值,currValue表示各个分量的原始值或真实值,predictor表示各个分量的属性预测值,residualPrevComponent表示各个分量的跨分量预测值。例如,若当前点的颜色信息的格式为YUV格式,则各个分量可以包括Y分量、U分量以及V分量,若当前点的颜色信息的格式为RGB格式,则各个分量可以包括R分量、G分量以及B分量。Among them, delta represents the attribute residual value of each component, currValue represents the original value or real value of each component, predictor represents the attribute prediction value of each component, and residualPrevComponent represents the cross-component prediction value of each component. For example, if the format of the color information of the current point is YUV format, then each component can include Y component, U component and V component; if the format of the color information of the current point is RGB format, then each component can include R component, G component and the B component.
示例性地,对于Y分量和R分量,由于是最先编码的分量,因此,可以将Y分量和R分量的跨分量预测值为0,对于U分量、V分量、G分量以及B分量,其跨分量预测值根据前面编码的分量的属性量化残差值经过反量化得到的属性残差重建值确定。例如,U分量的跨分量预测值可以是对Y分量的属性量化残差值经过反量化得到的属性残差重建值,G分量的跨分量预测值可以是对R分量的属性量化残差值经过反量化得到的属性残差重建值。再如,V分量的跨分量预测值可以是对Y分量的属性量化残差值经过反量化得到的属性残差重建值与对U分量的属性量化残差值经过反量化得到的属性残差重建值的和,B分量的跨分量预测值可以是对R分量的属性量化残差值经过反量化得到的属性残差重建值与对G分量的属性量化残差值经过反量化得到的属性残差重建值的和。Exemplarily, for the Y component and the R component, since they are the first encoded components, the cross-component prediction value of the Y component and the R component can be set to 0, and for the U component, V component, G component, and B component, its The cross-component prediction value is determined according to the reconstruction value of the attribute residual obtained by inverse quantization of the attribute quantization residual value of the previously encoded component. For example, the cross-component prediction value of the U component can be the attribute residual reconstruction value obtained by inverse quantization of the attribute quantization residual value of the Y component, and the cross-component prediction value of the G component can be the attribute quantization residual value of the R component. The reconstructed value of the attribute residual obtained by inverse quantization. For another example, the cross-component prediction value of the V component may be the attribute residual reconstruction value obtained by dequantizing the attribute quantization residual value of the Y component and the attribute residual reconstruction value obtained by dequantizing the attribute quantization residual value of the U component. The sum of values, the cross-component prediction value of the B component can be the attribute residual reconstruction value obtained by dequantizing the attribute quantized residual value of the R component and the attribute residual obtained by dequantizing the attribute quantized residual value of the G component Sum of reconstructed values.
编码器获取各个分量的属性残差值后,对各个分量的属性残差值进行量化操作,得到当前点的各个分量的属性量化残差值。例如,若当前点的颜色信息的格式为YUV格式,则按照Y分量、U分量、V分量的顺序依次量化当前点的各个分量的属性残差值,得到各个分量的属性量化残差值。再如,若当前点的颜色信息的格式为RGB格式,则按照R分量、G分量、B分量的顺序依次量化当前点的各个分量的属性残差值,得到各个分量的属性量化残差值。After the encoder obtains the attribute residual value of each component, it performs a quantization operation on the attribute residual value of each component to obtain the attribute quantized residual value of each component of the current point. For example, if the format of the color information of the current point is YUV format, the attribute residual value of each component of the current point is sequentially quantized in the order of Y component, U component, and V component to obtain the attribute quantized residual value of each component. For another example, if the format of the color information of the current point is RGB format, the attribute residual value of each component of the current point is sequentially quantized in the order of R component, G component, and B component to obtain the attribute quantized residual value of each component.
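Step c) above (residual computation with cross-component prediction, followed by quantization) can be pictured with the following sketch for a point in YUV format. The uniform quantizer shown here is only a placeholder assumption (the text does not specify the quantizer), and the cross-component predictor follows the accumulating variant described above, where the V component uses the sum of the dequantized residuals of the Y and U components.

def quantize(value, step):            # placeholder uniform quantizer (assumed, not specified in the text)
    return round(value / step)

def dequantize(level, step):
    return level * step

def encode_color_residuals(orig, pred, step):
    # orig, pred: dicts with 'Y', 'U', 'V' entries (original and predicted attribute values)
    levels = {}
    residual_prev_component = 0        # cross-component predictor, 0 for the first coded component
    for comp in ('Y', 'U', 'V'):
        delta = orig[comp] - pred[comp] - residual_prev_component
        levels[comp] = quantize(delta, step)
        # the dequantized residual of this component feeds the next component's prediction
        residual_prev_component += dequantize(levels[comp], step)
    return levels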
d)、编码器对当前点的各个分量的属性量化残差值进行熵编码,得到点云的码流。d) The encoder performs entropy encoding on the attribute quantization residual value of each component of the current point to obtain the code stream of the point cloud.
例如,若当前点的颜色信息的格式为YUV格式,则按照Y分量、U分量、V分量的顺序对各个分量的属性量化残差值进行熵编码,以得到点云的码流。再如,若当前点的颜色信息的格式为RGB格式,则按照R分量、G分量、B分量顺序对各个分量的属性量化残差值进行熵编码,以得到点云的码流。For example, if the format of the color information of the current point is YUV format, entropy encoding is performed on the attribute quantization residual value of each component in the order of Y component, U component, and V component to obtain the code stream of the point cloud. For another example, if the format of the color information of the current point is RGB format, entropy encoding is performed on the attribute quantization residual value of each component in the order of R component, G component, and B component to obtain the code stream of the point cloud.
示例性地,可以按照以下步骤对当前点的各个分量的属性量化残差值进行熵编码:Exemplarily, the entropy coding may be performed on the attribute quantization residual value of each component of the current point according to the following steps:
1)、一方面，引入一个标志位flagy_r，针对属性信息的格式为YUV格式的点，其可用于表示Y分量的属性量化残差值是否为0，对于属性信息的格式为RGB格式的点，其可用于表示R分量的属性量化残差值是否为0。例如，若Y分量（或R分量）的属性量化残差值为0，则flagy_r=0，否则flagy_r=1，编码器对flagy_r的取值进行编码并写入码流。另一方面，引入另一个标志位flagyu_rg，针对属性信息的格式为YUV格式的点，其可用于表示Y分量的属性残差量化值和U分量的属性残差量化值是否都为0，对于属性信息的格式为RGB格式的点，其可用于表示R分量的属性残差量化值和G分量的属性残差量化值是否都为0。例如，若Y分量的属性残差量化值和U分量的属性残差量化值都为0，则flagyu_rg=0，否则flagyu_rg=1。再如，若R分量的属性残差量化值和G分量的属性残差量化值都为0，则flagyu_rg=0，否则flagyu_rg=1。1) On the one hand, a flag bit flagy_r is introduced; for a point whose attribute information is in the YUV format it can be used to indicate whether the attribute quantized residual value of the Y component is 0, and for a point whose attribute information is in the RGB format it can be used to indicate whether the attribute quantized residual value of the R component is 0. For example, if the attribute quantized residual value of the Y component (or R component) is 0, then flagy_r = 0, otherwise flagy_r = 1, and the encoder encodes the value of flagy_r and writes it into the code stream. On the other hand, another flag bit flagyu_rg is introduced; for a point whose attribute information is in the YUV format it can be used to indicate whether the attribute quantized residual values of the Y component and the U component are both 0, and for a point whose attribute information is in the RGB format it can be used to indicate whether the attribute quantized residual values of the R component and the G component are both 0. For example, if the attribute quantized residual values of the Y component and the U component are both 0, then flagyu_rg = 0, otherwise flagyu_rg = 1. As another example, if the attribute quantized residual values of the R component and the G component are both 0, then flagyu_rg = 0, otherwise flagyu_rg = 1.
2)、编码器判断Y分量(或R分量)的属性量化残差值是否为0,若为0,则执行下述3),否则执行下述4)。2) The encoder judges whether the attribute quantization residual value of the Y component (or R component) is 0, and if it is 0, executes the following 3), otherwise executes the following 4).
3)、编码器对flagyu_rg的取值进行编码,并判断U分量(或G分量)的属性量化残差值是否为0,若U分量(或G分量)的属性量化残差值为0,则编码V分量(或B分量)的属性量化残差值,否则依次编码U分量和V分量(或G分量和B分量)的属性残差量化值。3), the encoder encodes the value of flagyu_rg, and judges whether the attribute quantization residual value of the U component (or G component) is 0, if the attribute quantization residual value of the U component (or G component) is 0, then The attribute quantized residual value of the V component (or B component) is encoded, otherwise, the attribute residual quantized values of the U component and the V component (or the G component and the B component) are encoded sequentially.
例如,若当前点的颜色信息的格式为YUV格式,则编码器对flagyu_rg的取值进行编码,并判断U分量的属性量化残差值是否为0,若U分量的属性量化残差值为0,则编码V分量的属性量化残差值,否则依次编码U分量的属性残差量化值和V分量的属性残差量化值。再如,若当前点的颜色信息的格式为RGB格式,则编码器对flagyu_rg的取值进行编码,并判断G分量的属性量化残差值是否为0,若G分量的属性量化残差值为0,则编码B分量的属性量化残差值,否则依次编码G分量的属性残差量化值和B分量的属性残差量化值。For example, if the format of the color information of the current point is YUV format, the encoder encodes the value of flagyu_rg, and judges whether the attribute quantization residual value of the U component is 0, if the attribute quantization residual value of the U component is 0 , then encode the attribute quantized residual value of the V component, otherwise encode the attribute residual quantized value of the U component and the attribute residual quantized value of the V component in sequence. For another example, if the color information format of the current point is in RGB format, the encoder encodes the value of flagyu_rg and judges whether the attribute quantization residual value of the G component is 0. If the attribute quantization residual value of the G component is 0, the attribute quantization residual value of the B component is encoded, otherwise, the attribute residual quantization value of the G component and the attribute residual quantization value of the B component are encoded sequentially.
4)、编码器依次编码Y分量、U分量、V分量(或R分量、G分量、B分量)的属性量化残差值。4) The encoder sequentially encodes the attribute quantized residual values of the Y component, U component, and V component (or R component, G component, and B component).
例如,若当前点的颜色信息的格式为YUV格式,则编码器依次编码Y分量的属性量化残差值、U分量的属性量化残差值以及V分量的属性量化残差值。再如,若当前点的颜色信息的格式为RGB格式,则编码器依次编码R分量的属性量化残差值、G分量的属性量化残差值以及B分量的属性量化残差值。For example, if the format of the color information of the current point is YUV format, the encoder encodes the attribute quantization residual value of the Y component, the attribute quantization residual value of the U component, and the attribute quantization residual value of the V component in sequence. For another example, if the format of the color information of the current point is the RGB format, the encoder sequentially encodes the attribute quantization residual value of the R component, the attribute quantization residual value of the G component, and the attribute quantization residual value of the B component.
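The signalling logic in sub-steps 1) to 4) above can be summarized as follows for a YUV point (the RGB case is identical with R, G, B in place of Y, U, V). The write_flag and write_residual callables stand in for the entropy coder, which is not modeled here; only the decision logic on zero residuals is illustrated.

def encode_point_residuals(res_y, res_u, res_v, write_flag, write_residual):
    flagy_r = 0 if res_y == 0 else 1
    write_flag(flagy_r)
    if flagy_r == 0:
        # Y residual is zero: flagyu_rg tells whether U is also zero
        flagyu_rg = 0 if res_u == 0 else 1
        write_flag(flagyu_rg)
        if flagyu_rg == 0:
            write_residual(res_v)                 # only V needs to be coded
        else:
            write_residual(res_u)
            write_residual(res_v)
    else:
        # Y residual is non-zero: code Y, U and V directly
        write_residual(res_y)
        write_residual(res_u)
        write_residual(res_v)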
e)、对于各个分量的属性量化残差值,编码器按照Y分量、U分量、V分量(或R分量、G分量、 B分量)的顺序进行反量化、然后加上对应分量的属性预测值和跨分量预测值,生成当前点的对应分量的属性重建值。e) For the attribute quantization residual value of each component, the encoder performs inverse quantization in the order of Y component, U component, V component (or R component, G component, B component), and then adds the attribute prediction value of the corresponding component and cross-component prediction values to generate attribute reconstruction values for the corresponding components of the current point.
例如,若当前点的颜色信息的格式为YUV格式,则对于各个分量的属性量化残差值,编码器按照Y分量、U分量、V分量的顺序进行反量化、然后加上对应分量的属性预测值和跨分量预测值,生成当前点的对应分量的属性重建值。再如,若当前点的颜色信息的格式为RGB格式,则对于各个分量的属性量化残差值,编码器按照R分量、G分量、B分量的顺序进行反量化、然后加上对应分量的属性预测值和跨分量预测值,生成当前点的对应分量的属性重建值。For example, if the format of the color information of the current point is YUV format, then for the attribute quantization residual value of each component, the encoder performs inverse quantization in the order of Y component, U component, and V component, and then adds the attribute prediction of the corresponding component Values and cross-component predictions to generate attribute reconstruction values for the corresponding components of the current point. For another example, if the format of the color information of the current point is RGB format, then for the attribute quantization residual value of each component, the encoder performs inverse quantization in the order of R component, G component, and B component, and then adds the attribute of the corresponding component Predicted value and cross-component predicted value, generate the attribute reconstruction value of the corresponding component of the current point.
示例性地,可以按照以下公式生成当前点的各个分量的属性重建值:Exemplarily, the attribute reconstruction value of each component of the current point can be generated according to the following formula:
各个分量的属性重建值 = 各个分量反量化后的属性残差重建值 + predictor + residualPrevComponent
其中，predictor表示各个分量的属性预测值，residualPrevComponent表示各个分量的跨分量预测值。例如，若当前点的颜色信息的格式为YUV格式，则各个分量可以包括Y分量、U分量以及V分量，若当前点的颜色信息的格式为RGB格式，则各个分量可以包括R分量、G分量以及B分量。That is, the attribute reconstruction value of each component of the current point is equal to the attribute residual reconstruction value of that component after inverse quantization plus predictor and residualPrevComponent, where predictor represents the attribute prediction value of each component and residualPrevComponent represents the cross-component prediction value of each component. For example, if the format of the color information of the current point is the YUV format, the components may include the Y component, the U component and the V component; if the format of the color information of the current point is the RGB format, the components may include the R component, the G component and the B component.
应当理解,由于步骤c)中会对各个分量的属性量化残差值进行反量化,得到各个分量的属性残差重建值,因此,也可以将步骤e)中各个分量的属性重建值的生成过程集成在步骤c)中。此外,若将步骤e)中各个分量的属性重建值的生成过程集成在步骤c)中,则各个分量的属性重建值的生成过程也可以按照U分量、Y分量、V分量(或G分量、R分量、B分量)的顺序进行反量化。本申请对此不作具体限定。It should be understood that in step c), the attribute quantization residual value of each component will be dequantized to obtain the attribute residual reconstruction value of each component. Therefore, the generation process of the attribute reconstruction value of each component in step e) can also be integrated in step c). In addition, if the generation process of the attribute reconstruction value of each component in step e) is integrated in step c), then the generation process of the attribute reconstruction value of each component can also follow the U component, Y component, V component (or G component, G component, The order of R component, B component) is dequantized. This application does not specifically limit it.
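The reconstruction in step e) mirrors the encoder-side sketch given earlier: each component's quantized residual is inverse-quantized and added to that component's attribute prediction value and cross-component prediction value. Again, the uniform inverse quantizer is a placeholder assumption, and the accumulating cross-component variant is used.

def reconstruct_color(levels, pred, step):
    # levels: quantized residuals per component; pred: attribute prediction values per component
    rec = {}
    residual_prev_component = 0          # cross-component predictor, 0 for the first component
    for comp in ('Y', 'U', 'V'):
        residual_rec = levels[comp] * step          # inverse quantization (placeholder)
        rec[comp] = residual_rec + pred[comp] + residual_prev_component
        residual_prev_component += residual_rec
    return rec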
相应的,解码端的具体执行步骤描述如下:Correspondingly, the specific execution steps of the decoding end are described as follows:
a)、在当前点之前解码完成的点所组成的范围内为当前点查找N个预测点；a) Find N prediction points for the current point within the range formed by the points that have been decoded before the current point;
b)、根据N个预测点与当前点之间的几何距离分别计算该N个预测点中的每一个预测点的权重值,根据每一个预测点的权重值和每一个的属性重建值(包括Y分量、U分量以及V分量的属性重建值或R分量、G分量以及B分量的属性重建值),计算当前点的属性预测值(包括Y分量、U分量以及V分量的属性预测值或R分量、G分量以及B分量的属性预测值)。b), respectively calculate the weight value of each prediction point in the N prediction points according to the geometric distance between the N prediction points and the current point, and reconstruct the value according to the weight value of each prediction point and each attribute (including The attribute reconstruction value of Y component, U component and V component or the attribute reconstruction value of R component, G component and B component), calculate the attribute prediction value of the current point (including the attribute prediction value of Y component, U component and V component or R component, G component, and attribute prediction value of B component).
例如,若当前点的属性信息的格式为YUV格式,则编码器可以根据每一个预测点的权重值和每一个预测点的Y分量,计算当前点的Y分量的属性预测值,根据每一个预测点的权重值和每一个预测点的U分量,计算当前点的U分量的属性预测值,根据每一个预测点的权重值和每一个预测点的V分量,计算当前点的V分量的属性预测值。再如,若当前点的属性信息的格式为RGB格式,则编码器可以根据每一个预测点的权重值和每一个预测点的R分量的属性重建值,计算当前点的R分量的属性预测值,根据每一个预测点的权重值和每一个预测点的G分量的属性重建值,计算当前点的G分量的属性预测值,根据每一个预测点的权重值和每一个预测点的B分量的属性重建值,计算当前点的B分量的属性预测值。For example, if the format of the attribute information of the current point is YUV format, the encoder can calculate the attribute prediction value of the Y component of the current point according to the weight value of each prediction point and the Y component of each prediction point, and according to each prediction Calculate the attribute prediction value of the U component of the current point based on the weight value of the point and the U component of each prediction point, and calculate the attribute prediction of the V component of the current point according to the weight value of each prediction point and the V component of each prediction point value. For another example, if the attribute information format of the current point is in RGB format, the encoder can calculate the attribute prediction value of the R component of the current point according to the weight value of each prediction point and the attribute reconstruction value of the R component of each prediction point , according to the weight value of each prediction point and the attribute reconstruction value of the G component of each prediction point, calculate the attribute prediction value of the G component of the current point, according to the weight value of each prediction point and the B component of each prediction point Attribute reconstruction value, calculate the attribute prediction value of the B component of the current point.
示例性地,可通过以下公式确定各个分量的属性预测值:Exemplarily, the attribute prediction value of each component can be determined by the following formula:
$$A_p = \frac{\sum_{i=1}^{N} w_i A_i}{\sum_{i=1}^{N} w_i}$$

where $A_p$ denotes the attribute prediction value of each component of the current point, $w_i$ denotes the weight value of the i-th prediction point among the N prediction points, and $A_i$ denotes the attribute reconstruction value of the corresponding component of the i-th prediction point among the N prediction points. For example, if the color information of the current point is in YUV format, the components include the Y component, the U component and the V component; if the color information of the current point is in RGB format, the components include the R component, the G component and the B component.
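The following is a minimal Python sketch of this weighted prediction. Inverse-distance weights and a Manhattan distance are illustrative assumptions, as are all function and variable names; the sketch is not the normative definition.

```python
def predict_attribute(current_pos, neighbours):
    """Weighted average of the neighbours' attribute reconstruction values.

    neighbours: list of (position, reconstructed_attribute) pairs, where
    reconstructed_attribute is a 3-tuple such as (Y, U, V) or (R, G, B).
    """
    def distance(p, q):
        # Manhattan distance, used here purely for illustration.
        return sum(abs(a - b) for a, b in zip(p, q)) or 1

    weights = [1.0 / distance(current_pos, pos) for pos, _ in neighbours]
    total = sum(weights)
    predicted = [0.0, 0.0, 0.0]
    for w, (_, attr) in zip(weights, neighbours):
        for c in range(3):
            predicted[c] += w * attr[c]
    return [p / total for p in predicted]


# Three already-decoded neighbours of a point located at (1, 1, 1).
neighbours = [((0, 1, 1), (120, 60, 200)),
              ((1, 0, 1), (118, 62, 198)),
              ((1, 1, 0), (121, 59, 201))]
print(predict_attribute((1, 1, 1), neighbours))   # per-component weighted averages
```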
c)、按照Y分量、U分量、V分量(或R分量、G分量、B分量)的顺序从码流中解码获取当前点的各个分量的属性量化残差值。c) According to the order of Y component, U component, V component (or R component, G component, B component), decode from the code stream to obtain the attribute quantization residual value of each component at the current point.
For example, if the color information of the current point is in YUV format, the decoder decodes the attribute quantization residual value of each component of the current point from the code stream in the order of the Y component, the U component and the V component. For another example, if the color information of the current point is in RGB format, the decoder decodes the attribute quantization residual value of each component of the current point from the code stream in the order of the R component, the G component and the B component.
示例性地,具体执行步骤包括:Exemplarily, the specific execution steps include:
1) Decode the flag flagy_r from the code stream. If flagy_r is 0, the attribute quantization residual value of the Y component (or R component) is 0; if flagy_r is 1, the attribute quantization residual value of the Y component (or R component) is not 0.
For example, if the color information of the current point is in YUV format, the decoder decodes the flag flagy_r from the code stream; if flagy_r is 0, the attribute quantization residual value of the Y component is 0, and if flagy_r is 1, the attribute quantization residual value of the Y component is not 0. For another example, if the color information of the current point is in RGB format, the decoder decodes the flag flagy_r from the code stream; if flagy_r is 0, the attribute quantization residual value of the R component is 0, and if flagy_r is 1, the attribute quantization residual value of the R component is not 0.
2) Determine whether the attribute quantization residual value of the Y component (or R component) is 0; if it is 0, perform 3), otherwise perform 4).
3) Decode the flag flagyu_rg from the code stream. If flagyu_rg is 0, the attribute quantization residual value of the U component (or G component) is 0, and the attribute quantization residual value of the V component (or B component) is decoded from the code stream; otherwise, the attribute quantization residual values of the U component and the V component (or the G component and the B component) are decoded from the code stream in turn.
For example, if the color information of the current point is in YUV format, the decoder decodes the flag flagyu_rg from the code stream; if flagyu_rg is 0, the attribute quantization residual value of the U component is 0 and the attribute quantization residual value of the V component is decoded from the code stream; otherwise, the attribute quantization residual values of the U component and the V component are decoded from the code stream in turn. For another example, if the color information of the current point is in RGB format, the decoder decodes the flag flagyu_rg from the code stream; if flagyu_rg is 0, the attribute quantization residual value of the G component is 0 and the attribute quantization residual value of the B component is decoded from the code stream; otherwise, the attribute quantization residual values of the G component and the B component are decoded from the code stream in turn.
4)、依次从码流中解码获取Y分量、U分量、V分量(或R分量、G分量、B分量)的属性量化残差值。4) Decoding and obtaining attribute quantization residual values of the Y component, U component, and V component (or R component, G component, and B component) from the code stream in sequence.
For example, if the color information of the current point is in YUV format, the decoder decodes the flag flagy_r from the code stream; if flagy_r is 1, the decoder decodes the attribute quantization residual value of the Y component, the attribute quantization residual value of the U component and the attribute quantization residual value of the V component from the code stream in turn. For another example, if the color information of the current point is in RGB format, the decoder decodes the flag flagy_r from the code stream; if flagy_r is 1, the decoder decodes the attribute quantization residual value of the R component, the attribute quantization residual value of the G component and the attribute quantization residual value of the B component from the code stream in turn.
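To make the parsing flow of steps 1) to 4) concrete, here is a minimal Python sketch. It assumes the entropy decoding has already produced a queue of symbols (flags and residual values) in code-stream order; the function and variable names are illustrative and not part of the standard text.

```python
from collections import deque

def parse_residuals_luma_first(symbols):
    """Parse the three attribute quantization residual values in the order
    Y, U, V (or R, G, B).  symbols: deque of already entropy-decoded values
    (flags and residuals) in code-stream order."""
    flagy_r = symbols.popleft()
    if flagy_r == 0:                   # first (Y or R) residual is zero
        first = 0
        flagyu_rg = symbols.popleft()  # the second flag is only parsed in this branch
        if flagyu_rg == 0:             # second (U or G) residual is also zero
            second = 0
        else:
            second = symbols.popleft()
        third = symbols.popleft()
    else:                              # all three residuals are coded in turn
        first = symbols.popleft()
        second = symbols.popleft()
        third = symbols.popleft()
    return first, second, third

# Example: flagy_r = 0, flagyu_rg = 1, then the U and V residuals 3 and -2.
print(parse_residuals_luma_first(deque([0, 1, 3, -2])))   # -> (0, 3, -2)
```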
d) For the attribute quantization residual value of each component, the decoder performs inverse quantization in the order of the Y component, the U component and the V component (or the R component, the G component and the B component), and then adds the attribute prediction value and the cross-component prediction value of the corresponding component to generate the attribute reconstruction value of the corresponding component of the current point.
For example, if the color information of the current point is in YUV format, the decoder performs inverse quantization on the attribute quantization residual value of each component in the order of the Y component, the U component and the V component, and then adds the attribute prediction value and the cross-component prediction value of the corresponding component to generate the attribute reconstruction value of the corresponding component of the current point. For another example, if the color information of the current point is in RGB format, the decoder performs inverse quantization on the attribute quantization residual value of each component in the order of the R component, the G component and the B component, and then adds the attribute prediction value and the cross-component prediction value of the corresponding component to generate the attribute reconstruction value of the corresponding component of the current point.
示例性地,可以按照以下公式生成当前点的各个分量的属性重建值:Exemplarily, the attribute reconstruction value of each component of the current point can be generated according to the following formula:
$$\hat{A} = \tilde{R} + \text{predictor} + \text{residualPrevComponent}$$

where $\hat{A}$ denotes the attribute reconstruction value of each component of the current point, $\tilde{R}$ denotes the attribute residual reconstruction value of each component after inverse quantization, predictor denotes the attribute prediction value of each component, and residualPrevComponent denotes the cross-component prediction value of each component. For example, if the color information of the current point is in YUV format, the components include the Y component, the U component and the V component; if the color information of the current point is in RGB format, the components include the R component, the G component and the B component.
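As an illustration of step d) and the formula above, here is a minimal Python sketch of the per-component reconstruction. It assumes, for illustration only, a uniform quantization step and that each component's cross-component prediction value is the sum of the residual reconstruction values of the components processed before it; all names are illustrative.

```python
def reconstruct_components(quantized_residuals, predictors, qp_step=1):
    """quantized_residuals / predictors: 3-element sequences in coding order,
    e.g. (Y, U, V) or (R, G, B).  qp_step: illustrative uniform quantization step."""
    reconstructed = []
    residual_prev_component = 0          # cross-component prediction value
    for q, predictor in zip(quantized_residuals, predictors):
        residual = q * qp_step           # inverse quantization (simplified)
        reconstructed.append(residual + predictor + residual_prev_component)
        # later components use the sum of the earlier residual reconstruction values
        residual_prev_component += residual
    return reconstructed

# Example: quantized residuals (2, 0, -1), attribute predictions (118, 60, 199).
print(reconstruct_components((2, 0, -1), (118, 60, 199)))   # -> [120, 62, 200]
```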
In the above scheme, if the color information of the current point is in YUV format, the encoder encodes the attribute quantization residual value of each component of the current point in the order of the Y component, the U component and the V component, and if the color information of the current point is in RGB format, the encoder encodes the attribute quantization residual value of each component in the order of the R component, the G component and the B component. Correspondingly, if the color information of the current point is in YUV format, the decoder decodes the attribute quantization residual value of each component of the current point in the order of the Y component, the U component and the V component, and if the color information of the current point is in RGB format, the decoder decodes the attribute quantization residual value of each component in the order of the R component, the G component and the B component.
It should be noted that, in the above scheme, the decoding end needs to decode two flags when decoding the attribute quantization residual values of the components of the current point. On the one hand, a flag flagy_r is introduced: for a point whose attribute information is in YUV format it indicates whether the attribute quantization residual value of the Y component is 0, and for a point whose attribute information is in RGB format it indicates whether the attribute quantization residual value of the R component is 0. For example, if the attribute quantization residual value of the Y component (or R component) is 0, then flagy_r = 0, otherwise flagy_r = 1; the encoder encodes the value of flagy_r and writes it into the code stream. On the other hand, another flag flagyu_rg is introduced: for a point whose attribute information is in YUV format it indicates whether the attribute quantization residual values of the Y component and the U component are both 0, and for a point whose attribute information is in RGB format it indicates whether the attribute quantization residual values of the R component and the G component are both 0. For example, if the attribute quantization residual values of the Y component and the U component are both 0, then flagyu_rg = 0, otherwise flagyu_rg = 1. Similarly, if the attribute quantization residual values of the R component and the G component are both 0, then flagyu_rg = 0, otherwise flagyu_rg = 1.
In the specific decoding process, it is determined whether the attribute quantization residual value of the Y component (or R component) is 0; if it is 0, step 3) is performed, otherwise step 4) is performed. That is to say, when flagy_r is 0, the decoding end needs to decode both flagy_r and flagyu_rg, regardless of whether flagyu_rg is 0; when flagy_r is not 0, the decoding end does not need to decode flagyu_rg. In other words, from a statistical point of view, the likelihood that each component's residual is not 0 affects whether flagy_r is 0, which in turn affects whether the decoding end has to decode flagyu_rg, and this ultimately affects the decoding efficiency of the decoder.
In view of this, the present application provides a decoding method that improves decoding efficiency by changing the order in which the components of the current point are decoded.
图12是本申请实施例提供的解码方法100的示意性流程图。该方法100可由解码器或解码框架执行,例如图5所示的解码框架。Fig. 12 is a schematic flowchart of a decoding method 100 provided by an embodiment of the present application. The method 100 can be executed by a decoder or a decoding framework, such as the decoding framework shown in FIG. 5 .
如图12所示,该解码方法100可包括:As shown in Figure 12, the decoding method 100 may include:
S110: sequentially decode, from the code stream of the current point cloud, the attribute quantization residual value of the first component, the attribute quantization residual value of the second component and the attribute quantization residual value of the third component of the current point, where the first component is a U component, a V component, a G component or a B component;
S120,基于该第一分量的属性量化残差值、该第二分量的属性量化残差值以及该第三分量的属性量化残差值,获取该当前点的属性重建值。S120. Acquire an attribute reconstruction value of the current point based on the attribute quantization residual value of the first component, the attribute quantization residual value of the second component, and the attribute quantization residual value of the third component.
In this embodiment, the first component to be decoded for the current point (that is, the first component) is designed to be a U component, a V component, a G component or a B component, which can improve decompression performance.
In addition, when the first component to be decoded for the current point (that is, the first component) is designed to be a U component or a V component, and considering that the luminance component carries more visual information, the present application decodes the U component or the V component first and then decodes the Y component. This is equivalent to being able to determine the cross-component prediction value of the Y component based on the attribute residual reconstruction value of the U component or of the V component; in turn, when the attribute reconstruction value of the Y component is determined based on the cross-component prediction value of the Y component, the accuracy of the attribute reconstruction value of the Y component can be improved, thereby improving decompression performance.
申请人对本申请提供的方案在点云压缩平台上进行了测试,测试结果如表1至表4所示。The applicant tested the solution provided by this application on the point cloud compression platform, and the test results are shown in Table 1 to Table 4.
Table 1 shows the BD-rate of each component of Cat1B, Cat1C and Cat3 under limit-lossy geometry compression and lossy attributes compression, where Cat1B, Cat1C and Cat3 denote point clouds containing different types of attribute information. Table 2 shows the BD-rate of each component of Cat1B, Cat1C and Cat3 under lossless geometry compression and lossy attributes compression. Table 3 shows the BD-rate of each component of Cat1B, Cat1C and Cat3 under lossless geometry compression and limit-lossy attributes compression. Table 4 shows the bpip ratios of Cat1B, Cat1C and Cat3 under lossless geometry compression and lossless attributes compression.
Table 1 (image PCTCN2021135529-appb-000009): BD-rate of each component of Cat1B, Cat1C and Cat3 under limit-lossy geometry and lossy attributes compression.
Table 2 (image PCTCN2021135529-appb-000010): BD-rate of each component of Cat1B, Cat1C and Cat3 under lossless geometry and lossy attributes compression.
Table 3 (image PCTCN2021135529-appb-000011): BD-rate of each component of Cat1B, Cat1C and Cat3 under lossless geometry and limit-lossy attributes compression.
Table 4

Test sequence    bpip ratio
Cat1B            99.9%
Cat1C            100.0%
Cat3             99.4%
As shown in Table 1 to Table 3, "-" indicates a decrease in BD-rate; BD-rate represents the bit-rate difference at the same Peak Signal to Noise Ratio (PSNR), and a smaller BD-rate indicates better performance of the coding algorithm. As shown in Table 4, a smaller bpip ratio likewise indicates better performance of the coding algorithm. It can be seen from Table 1 to Table 4 that the decoding method provided by the present application yields a clear performance improvement.
当然,本申请对第二分量和第三分量的具体实现方式不作限定。Of course, the present application does not limit the specific implementation manners of the second component and the third component.
例如,该第一分量为U分量,该第二分量可以是V分量,该第三分量可以是Y分量。For example, the first component is a U component, the second component may be a V component, and the third component may be a Y component.
再如,该第一分量为U分量,该第二分量可以是Y分量,该第三分量可以是V分量。For another example, the first component is a U component, the second component may be a Y component, and the third component may be a V component.
再如,该第一分量为V分量,该第二分量可以是Y分量,该第三分量可以是U分量。For another example, the first component is a V component, the second component may be a Y component, and the third component may be a U component.
再如,该第一分量为V分量,该第二分量可以是U分量,该第三分量可以是Y分量。For another example, the first component is a V component, the second component may be a U component, and the third component may be a Y component.
再如,该第一分量为G分量,该第二分量可以是R分量,该第三分量可以是B分量。For another example, the first component is a G component, the second component may be an R component, and the third component may be a B component.
再如,该第一分量为G分量,该第二分量可以是B分量,该第三分量可以是R分量。For another example, the first component is a G component, the second component may be a B component, and the third component may be an R component.
再如,该第一分量为B分量,该第二分量可以是R分量,该第三分量可以是G分量。For another example, the first component is a B component, the second component may be an R component, and the third component may be a G component.
再如,该第一分量为B分量,该第二分量可以是G分量,该第三分量可以是R分量。For another example, the first component is a B component, the second component may be a G component, and the third component may be an R component.
在一些实施例中,该S110可包括:In some embodiments, the S110 may include:
对该码流进行解码,得到第一标识,该第一标识的取值用于表示该第一分量的属性量化残差值是否为零;Decoding the code stream to obtain a first identifier, the value of the first identifier is used to indicate whether the attribute quantization residual value of the first component is zero;
基于该第一标识的取值对该码流进行解析,获取该第一分量的属性量化残差值、该第二分量的属性量化残差值以及该第三分量的属性量化残差值。The code stream is analyzed based on the value of the first identifier, and the attribute quantization residual value of the first component, the attribute quantization residual value of the second component, and the attribute quantization residual value of the third component are obtained.
For example, if the attribute information of the current point is in YUV format, and assuming that the first component is the U component, the second component is the Y component and the third component is the V component, the decoder decodes the code stream to obtain the first identifier, whose value indicates whether the attribute quantization residual value of the U component is zero; after obtaining the first identifier, the decoder can parse the code stream based on its value and obtain, in turn, the attribute quantization residual value of the U component, the attribute quantization residual value of the Y component and the attribute quantization residual value of the V component. For another example, if the attribute information of the current point is in RGB format, and assuming that the first component is the G component, the second component is the R component and the third component is the B component, the decoder decodes the code stream to obtain the first identifier, whose value indicates whether the attribute quantization residual value of the G component is zero; after obtaining the first identifier, the decoder can parse the code stream based on its value and obtain, in turn, the attribute quantization residual value of the G component, the attribute quantization residual value of the R component and the attribute quantization residual value of the B component.
In some embodiments, if the value of the first identifier is the first value, the decoder determines that the attribute quantization residual value of the first component is zero, and obtains the attribute residual value of the second component and the attribute residual value of the third component based on a second identifier obtained by decoding the code stream; the value of the second identifier indicates whether the attribute quantization residual value of the first component and the attribute quantization residual value of the second component are both zero. For example, the first value may be 0 or another value.
For example, taking the case in which the attribute information of the current point is in YUV format, and assuming that the first component is the U component, the second component is the Y component and the third component is the V component: if the value of the first identifier is the first value, the decoder determines that the attribute quantization residual value of the U component is zero, and obtains the attribute residual value of the Y component and the attribute residual value of the V component based on the second identifier obtained by decoding the code stream. For another example, taking the case in which the attribute information of the current point is in RGB format, and assuming that the first component is the G component, the second component is the R component and the third component is the B component: if the value of the first identifier is the first value, the decoder determines that the attribute quantization residual value of the G component is zero, and obtains the attribute residual value of the R component and the attribute residual value of the B component based on the second identifier obtained by decoding the code stream.
In this embodiment, the code stream includes the first identifier and the second identifier; the first identifier can be used to determine whether the attribute quantization residual value of the first component is zero, and the second identifier can be used to determine whether the attribute quantization residual value of the second component is zero. If the attribute quantization residual value of the first component and the attribute quantization residual value of the second component are both zero, then, of the attribute quantization residual values of the first, second and third components, the code stream only includes the result obtained by encoding the attribute quantization residual value of the third component.
Of course, in other alternative embodiments, if the value of the first identifier is the first value, the attribute quantization residual value of the first component is zero, and the decoder may also obtain the attribute quantization residual value of the second component and the attribute quantization residual value of the third component from the code stream in turn. In other words, the code stream may not include the second identifier; instead, the code stream includes the result obtained by encoding the attribute quantization residual value of the second component and the attribute quantization residual value of the third component in turn. This application does not specifically limit this.
In some embodiments, if the value of the second identifier is the first value, it is determined that the attribute quantization residual value of the second component is zero, and the attribute quantization residual value of the third component is obtained from the code stream; if the value of the second identifier is a second value, the attribute quantization residual value of the second component and the attribute quantization residual value of the third component are obtained from the code stream in turn. For example, the first value may be 0 or another value, and the second value may be 1 or another value.
For example, taking the case in which the attribute information of the current point is in YUV format, and assuming that the first component is the U component, the second component is the Y component and the third component is the V component: if the value of the second identifier is the first value, the decoder determines that the attribute quantization residual value of the Y component is zero and obtains the attribute quantization residual value of the V component from the code stream; if the value of the second identifier is the second value, the decoder obtains the attribute quantization residual value of the Y component and the attribute quantization residual value of the V component from the code stream in turn. For another example, taking the case in which the attribute information of the current point is in RGB format, and assuming that the first component is the G component, the second component is the R component and the third component is the B component: if the value of the second identifier is the first value, the decoder determines that the attribute quantization residual value of the R component is zero and obtains the attribute quantization residual value of the B component from the code stream; if the value of the second identifier is the second value, the decoder obtains the attribute quantization residual value of the R component and the attribute quantization residual value of the B component from the code stream in turn.
In some embodiments, the value of the first identifier is the second value, and the attribute quantization residual value of the first component, the attribute quantization residual value of the second component and the attribute quantization residual value of the third component are obtained from the code stream in turn. For example, the second value may be 1 or another value.
For example, taking the case in which the attribute information of the current point is in YUV format, and assuming that the first component is the U component, the second component is the Y component and the third component is the V component: if the value of the first identifier is the second value, the decoder can obtain the attribute quantization residual value of the U component, the attribute quantization residual value of the Y component and the attribute quantization residual value of the V component from the code stream in turn. For another example, taking the case in which the attribute information of the current point is in RGB format, and assuming that the first component is the G component, the second component is the R component and the third component is the B component: if the value of the first identifier is the second value, the decoder can obtain the attribute quantization residual value of the G component, the attribute quantization residual value of the R component and the attribute quantization residual value of the B component from the code stream in turn.
In this embodiment, designing the first component to be decoded for the current point (that is, the first component) as a U component, a V component, a G component or a B component increases the probability that the value of the first identifier is the second value, in which case the decoding end does not need to decode the second identifier, and this in turn improves the decoding efficiency of the decoder.
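A minimal Python sketch of this first-identifier / second-identifier parsing for method 100 is given below. It assumes that entropy decoding already yields a queue of decoded symbols in code-stream order, takes the first value as 0 and the second value as 1, and uses illustrative names that are not part of the standard text.

```python
from collections import deque

def parse_residuals_chroma_first(symbols, fmt="YUV"):
    """Parse the three attribute quantization residual values with the first
    decoded component being U (YUV) or G (RGB), as in method 100.
    symbols: deque of already entropy-decoded values in code-stream order."""
    order = ("U", "Y", "V") if fmt == "YUV" else ("G", "R", "B")
    residuals = {}
    first_identifier = symbols.popleft()
    if first_identifier == 0:             # first component residual is zero
        residuals[order[0]] = 0
        second_identifier = symbols.popleft()
        if second_identifier == 0:        # second component residual is zero too
            residuals[order[1]] = 0
        else:
            residuals[order[1]] = symbols.popleft()
        residuals[order[2]] = symbols.popleft()
    else:                                 # the second identifier is not parsed at all
        for comp in order:
            residuals[comp] = symbols.popleft()
    return residuals

# Example: the first identifier is 1, so the three residuals follow directly.
print(parse_residuals_chroma_first(deque([1, 4, -1, 2])))  # {'U': 4, 'Y': -1, 'V': 2}
```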
在一些实施例中,该S120可包括:In some embodiments, the S120 may include:
For each of the first component, the second component and the third component: perform inverse quantization on the attribute quantization residual value of that component to obtain the attribute residual reconstruction value of that component; obtain the attribute prediction value of that component; obtain the cross-component prediction value of that component; and determine the sum of the attribute residual reconstruction value, the attribute prediction value and the cross-component prediction value of that component as the attribute reconstruction value of that component.
For example, taking the case in which the attribute information of the current point is in YUV format, and assuming that the first component is the U component, the second component is the Y component and the third component is the V component: after obtaining the attribute quantization residual values of the first, second and third components, the decoder can obtain, in turn, the attribute prediction values of the Y, U and V components, and obtain, in turn, the cross-component prediction values of the Y, U and V components; the decoder then determines the sum of the attribute residual reconstruction value, the attribute prediction value and the cross-component prediction value of the Y component as the attribute reconstruction value of the Y component, determines the sum of the attribute residual reconstruction value, the attribute prediction value and the cross-component prediction value of the U component as the attribute reconstruction value of the U component, and determines the sum of the attribute residual reconstruction value, the attribute prediction value and the cross-component prediction value of the V component as the attribute reconstruction value of the V component.
For another example, taking the case in which the attribute information of the current point is in RGB format, and assuming that the first component is the G component, the second component is the R component and the third component is the B component: after obtaining the attribute quantization residual values of the first, second and third components, the decoder can obtain, in turn, the attribute prediction values of the R, G and B components, and obtain, in turn, the cross-component prediction values of the R, G and B components; the decoder then determines the sum of the attribute residual reconstruction value, the attribute prediction value and the cross-component prediction value of the R component as the attribute reconstruction value of the R component, determines the corresponding sum for the G component as the attribute reconstruction value of the G component, and determines the corresponding sum for the B component as the attribute reconstruction value of the B component.
应当理解,解码器获取当前点的各个分量的属性预测值的顺序,可以和解码器获取当前点的各个分量的属性量化残差值的顺序保持一致,也可以不一致,本申请对此不作具体限定。It should be understood that the order in which the decoder obtains the attribute prediction values of each component of the current point may be consistent with the order in which the decoder obtains the attribute quantization residual values of each component of the current point, or may be inconsistent, and this application does not specifically limit this .
In some embodiments, the cross-component prediction value of the first component is zero; the cross-component prediction value of the second component is the attribute residual reconstruction value of the first component; and the cross-component prediction value of the third component is the sum of the attribute residual reconstruction value of the first component and the attribute residual reconstruction value of the second component.
For example, taking the case in which the attribute information of the current point is in YUV format, and assuming that the first component is the U component, the second component is the Y component and the third component is the V component: the cross-component prediction value of the U component is zero, the cross-component prediction value of the Y component is the attribute residual reconstruction value of the U component, and the cross-component prediction value of the V component is the sum of the attribute residual reconstruction value of the U component and the attribute residual reconstruction value of the Y component. For another example, taking the case in which the attribute information of the current point is in RGB format, and assuming that the first component is the G component, the second component is the R component and the third component is the B component: the cross-component prediction value of the G component is zero, the cross-component prediction value of the R component is the attribute residual reconstruction value of the G component, and the cross-component prediction value of the B component is the sum of the attribute residual reconstruction value of the G component and the attribute residual reconstruction value of the R component.
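The cross-component prediction values just described, combined with the per-component sums of S120, can be sketched in a few lines of Python. The dequantized residuals and the per-component attribute prediction values are assumed to be already available; the function names are illustrative.

```python
def cross_component_predictions(residual_recon):
    """Cross-component prediction values for the (first, second, third)
    components in decoding order: zero, the first component's residual
    reconstruction value, then the sum of the first two."""
    first, second, _ = residual_recon
    return (0, first, first + second)

def reconstruct_point(residual_recon, predictors):
    """Per-component sum of residual reconstruction value, attribute
    prediction value and cross-component prediction value (S120)."""
    cross = cross_component_predictions(residual_recon)
    return tuple(r + p + c for r, p, c in zip(residual_recon, predictors, cross))

# Decoding order (U, Y, V): dequantized residuals and attribute prediction values.
print(reconstruct_point((2, -1, 0), (60, 118, 199)))   # -> (62, 119, 200)
```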
In some embodiments, the decoder first searches for N prediction points for the current point within the range formed by the points that have been decoded before the current point; the decoder then computes the weight values corresponding to the N prediction points according to the geometric distances between the N prediction points and the current point; finally, the decoder obtains the attribute prediction value of each component based on the weight values corresponding to the N prediction points and the attribute reconstruction values of the N prediction points.
For example, taking the case in which the attribute information of the current point is in YUV format: after obtaining the weight values corresponding to the N prediction points, the decoder can obtain the attribute prediction value of the Y component of the current point based on those weight values and the attribute reconstruction values of the Y components of the N prediction points, obtain the attribute prediction value of the U component of the current point based on those weight values and the attribute reconstruction values of the U components of the N prediction points, and obtain the attribute prediction value of the V component of the current point based on those weight values and the attribute reconstruction values of the V components of the N prediction points. For another example, taking the case in which the attribute information of the current point is in RGB format: after obtaining the weight values corresponding to the N prediction points, the decoder can obtain the attribute prediction values of the R, G and B components of the current point in the same way, based on those weight values and the attribute reconstruction values of the R, G and B components of the N prediction points.
下面结合具体实施例对解码端的具体执行步骤进行示例性说明。The specific execution steps of the decoding end are exemplarily described below in conjunction with specific embodiments.
实施例1:Example 1:
本实施例中,解码端依次从当前点云的码流中解码当前点的第一分量的属性量化残差值、第二分量的属性量化残差值以及第三分量的属性量化残差值。作为一个示例,该第一分量为U分量,该第二分量可以是Y分量,该第三分量可以是V分量。作为另一个示例,该第一分量为G分量,该第二分量可以是R分量,该第三分量可以是B分量。In this embodiment, the decoding end sequentially decodes the attribute quantization residual value of the first component, the attribute quantization residual value of the second component, and the attribute quantization residual value of the third component of the current point from the code stream of the current point cloud. As an example, the first component is a U component, the second component may be a Y component, and the third component may be a V component. As another example, the first component is a G component, the second component may be an R component, and the third component may be a B component.
示例性地,解码端的执行过程可包括以下步骤:Exemplarily, the execution process at the decoding end may include the following steps:
a) Search for N prediction points for the current point within the range formed by the points that have been decoded before the current point.
b) Compute the weight value of each of the N prediction points according to the geometric distance between that prediction point and the current point, and compute the attribute prediction value of the current point (including the attribute prediction values of the Y, U and V components or of the R, G and B components) from the weight value and the attribute reconstruction value of each prediction point (including the attribute reconstruction values of the Y, U and V components or of the R, G and B components).
For example, if the attribute information of the current point is in YUV format, the decoder can compute the attribute prediction value of the Y component of the current point from the weight value of each prediction point and the attribute reconstruction value of the Y component of each prediction point, compute the attribute prediction value of the U component of the current point from the weight value of each prediction point and the attribute reconstruction value of the U component of each prediction point, and compute the attribute prediction value of the V component of the current point from the weight value of each prediction point and the attribute reconstruction value of the V component of each prediction point. For another example, if the attribute information of the current point is in RGB format, the decoder can compute the attribute prediction values of the R, G and B components of the current point in the same way, from the weight value of each prediction point and the attribute reconstruction values of the R, G and B components of each prediction point.
示例性地,可通过以下公式确定各个分量的属性预测值:Exemplarily, the attribute prediction value of each component can be determined by the following formula:
$$A_p = \frac{\sum_{i=1}^{N} w_i A_i}{\sum_{i=1}^{N} w_i}$$

where $A_p$ denotes the attribute prediction value of each component of the current point, $w_i$ denotes the weight value of the i-th prediction point among the N prediction points, and $A_i$ denotes the attribute reconstruction value of the corresponding component of the i-th prediction point among the N prediction points. For example, if the color information of the current point is in YUV format, the components include the Y component, the U component and the V component; if the color information of the current point is in RGB format, the components include the R component, the G component and the B component.
c)、按照U分量、Y分量、V分量(或G分量、R分量、B分量)的顺序从码流中解码获取当前点的各个分量的属性量化残差值。c) Decoding and obtaining attribute quantization residual values of each component at the current point from the code stream in the order of U component, Y component, and V component (or G component, R component, and B component).
For example, if the color information of the current point is in YUV format, the decoder decodes the attribute quantization residual value of each component of the current point from the code stream in the order of the U component, the Y component and the V component. For another example, if the color information of the current point is in RGB format, the decoder decodes the attribute quantization residual value of each component of the current point from the code stream in the order of the G component, the R component and the B component.
示例性地,具体执行步骤包括:Exemplarily, the specific execution steps include:
1) Decode the flag flagu_g from the code stream. If flagu_g is 0, the attribute quantization residual value of the U component (or G component) is 0; if flagu_g is 1, the attribute quantization residual value of the U component (or G component) is not 0.
For example, if the color information of the current point is in YUV format, the decoder decodes the flag flagu_g from the code stream; if flagu_g is 0, the attribute quantization residual value of the U component is 0, and if flagu_g is 1, the attribute quantization residual value of the U component is not 0. For another example, if the color information of the current point is in RGB format, the decoder decodes the flag flagu_g from the code stream; if flagu_g is 0, the attribute quantization residual value of the G component is 0, and if flagu_g is 1, the attribute quantization residual value of the G component is not 0.
2) Determine whether the attribute quantization residual value of the U component (or G component) is 0; if it is 0, perform 3), otherwise perform 4).
3) Decode the flag flaguy_gr from the code stream. If flaguy_gr is 0, the attribute quantization residual value of the Y component (or R component) is 0, and the attribute quantization residual value of the V component (or B component) is decoded from the code stream; otherwise, the attribute quantization residual values of the Y component and the V component (or the R component and the B component) are decoded from the code stream in turn.
For example, if the color information of the current point is in YUV format, the decoder decodes the flag flaguy_gr from the code stream; if flaguy_gr is 0, the attribute quantization residual value of the Y component is 0 and the attribute quantization residual value of the V component is decoded from the code stream; otherwise, the attribute quantization residual values of the Y component and the V component are decoded from the code stream in turn. For another example, if the color information of the current point is in RGB format, the decoder decodes the flag flaguy_gr from the code stream; if flaguy_gr is 0, the attribute quantization residual value of the R component is 0 and the attribute quantization residual value of the B component is decoded from the code stream; otherwise, the attribute quantization residual values of the R component and the B component are decoded from the code stream in turn.
4)、依次从码流中解码获取U分量、Y分量、V分量(或G分量、R分量、B分量)的属性量化残差值。4) Decoding in sequence from the code stream to obtain attribute quantization residual values of the U component, the Y component, and the V component (or G component, R component, and B component).
For example, if the color information of the current point is in YUV format, the decoder decodes the flag flagu_g from the code stream; if flagu_g is 1, the decoder decodes the attribute quantization residual value of the U component, the attribute quantization residual value of the Y component and the attribute quantization residual value of the V component from the code stream in turn. For another example, if the color information of the current point is in RGB format, the decoder decodes the flag flagu_g from the code stream; if flagu_g is 1, the decoder decodes the attribute quantization residual value of the G component, the attribute quantization residual value of the R component and the attribute quantization residual value of the B component from the code stream in turn.
d) For the attribute quantization residual value of each component, the decoder performs inverse quantization in the order of the Y component, the U component and the V component (or the R component, the G component and the B component), and then adds the attribute prediction value and the cross-component prediction value of the corresponding component to generate the attribute reconstruction value of the corresponding component of the current point.
For example, if the color information of the current point is in YUV format, the decoder performs inverse quantization on the attribute quantization residual value of each component in the order of the Y component, the U component and the V component, and then adds the attribute prediction value and the cross-component prediction value of the corresponding component to generate the attribute reconstruction value of the corresponding component of the current point. For another example, if the color information of the current point is in RGB format, the decoder performs inverse quantization on the attribute quantization residual value of each component in the order of the R component, the G component and the B component, and then adds the attribute prediction value and the cross-component prediction value of the corresponding component to generate the attribute reconstruction value of the corresponding component of the current point.
示例性地,可以按照以下公式生成当前点的各个分量的属性重建值:Exemplarily, the attribute reconstruction value of each component of the current point can be generated according to the following formula:
$$\hat{A} = \tilde{R} + \text{predictor} + \text{residualPrevComponent}$$

where $\hat{A}$ denotes the attribute reconstruction value of each component of the current point, $\tilde{R}$ denotes the attribute residual reconstruction value of each component after inverse quantization, predictor denotes the attribute prediction value of each component, and residualPrevComponent denotes the cross-component prediction value of each component. For example, if the color information of the current point is in YUV format, the components include the Y component, the U component and the V component; if the color information of the current point is in RGB format, the components include the R component, the G component and the B component.
In the above scheme, if the color information of the current point is in YUV format, the encoder encodes the attribute quantization residual value of each component of the current point in the order of the U component, the Y component and the V component, and if the color information of the current point is in RGB format, the encoder encodes the attribute quantization residual value of each component in the order of the G component, the R component and the B component. Correspondingly, if the color information of the current point is in YUV format, the decoder decodes the attribute quantization residual value of each component of the current point in the order of the U component, the Y component and the V component, and if the color information of the current point is in RGB format, the decoder decodes the attribute quantization residual value of each component in the order of the G component, the R component and the B component.
以上结合附图详细描述了本申请的优选实施方式，但是，本申请并不限于上述实施方式中的具体细节，在本申请的技术构思范围内，可以对本申请的技术方案进行多种简单变型，这些简单变型均属于本申请的保护范围。例如，在上述具体实施方式中所描述的各个具体技术特征，在不矛盾的情况下，可以通过任何合适的方式进行组合，为了避免不必要的重复，本申请对各种可能的组合方式不再另行说明。又例如，本申请的各种不同的实施方式之间也可以进行任意组合，只要其不违背本申请的思想，其同样应当视为本申请所公开的内容。还应理解，在本申请的各种方法实施例中，上述各过程的序号的大小并不意味着执行顺序的先后，各过程的执行顺序应以其功能和内在逻辑确定，而不应对本申请实施例的实施过程构成任何限定。The preferred embodiments of the present application have been described in detail above with reference to the accompanying drawings. However, the present application is not limited to the specific details of the above embodiments. Within the scope of the technical concept of the present application, various simple modifications can be made to the technical solutions of the present application, and these simple modifications all fall within the protection scope of the present application. For example, the specific technical features described in the above specific implementations can be combined in any suitable manner without contradiction; to avoid unnecessary repetition, the various possible combinations are not described separately in the present application. As another example, the various implementations of the present application can also be combined arbitrarily, and as long as such combinations do not depart from the idea of the present application, they should likewise be regarded as content disclosed in the present application. It should also be understood that, in the various method embodiments of the present application, the sequence numbers of the above processes do not imply an order of execution; the execution order of the processes should be determined by their functions and internal logic, and should not constitute any limitation on the implementation of the embodiments of the present application.
图13是本申请实施例提供的基于编码方法200的示意性流程图。该方法200可由编码器或编码框架执行,例如图4所示的编码框架。FIG. 13 is a schematic flowchart of an encoding-based method 200 provided by an embodiment of the present application. The method 200 can be executed by an encoder or an encoding framework, such as the encoding framework shown in FIG. 4 .
如图13所示,该编码方法200可包括:As shown in Figure 13, the encoding method 200 may include:
S210,确定当前点云中待编码的当前点的第一分量的属性量化残差值、第二分量的属性量化残差值以及第三分量的属性量化残差值;其中,该第一分量为U分量、V分量、G分量或B分量;S210, determine the attribute quantization residual value of the first component of the current point to be encoded in the current point cloud, the attribute quantization residual value of the second component, and the attribute quantization residual value of the third component; wherein, the first component is U component, V component, G component or B component;
S220,依次编码该第一分量的属性量化残差值、该第二分量的属性量化残差值以及该第三分量的属性量化残差值,以得到该当前点云的码流。S220. Encode the attribute quantized residual value of the first component, the attribute quantized residual value of the second component, and the attribute quantized residual value of the third component in sequence to obtain a code stream of the current point cloud.
在一些实施例中,该码流包括对第一标识的取值进行编码得到的结果,该第一标识的取值用于表示该第一分量的属性量化残差值是否为零。In some embodiments, the code stream includes a result obtained by encoding the value of the first identifier, and the value of the first identifier is used to indicate whether the attribute quantization residual value of the first component is zero.
在一些实施例中，若该第一标识的取值为第一数值，则表示该第一分量的属性量化残差值为零；若该第一标识的取值为第二数值，则表示该第一分量的属性量化残差值不为零。In some embodiments, if the value of the first identifier is the first value, it indicates that the attribute quantization residual value of the first component is zero; if the value of the first identifier is the second value, it indicates that the attribute quantization residual value of the first component is not zero.
在一些实施例中，若该第一分量的属性量化残差值为零，则该码流还包括对第二标识的取值进行编码得到的结果，该第二标识的取值用于表示该第一分量的属性量化残差值和该第二分量的属性量化残差值是否都为零。In some embodiments, if the attribute quantization residual value of the first component is zero, the code stream further includes a result obtained by encoding the value of a second identifier, and the value of the second identifier is used to indicate whether the attribute quantization residual value of the first component and the attribute quantization residual value of the second component are both zero.
在一些实施例中，若该第二标识的取值为第一数值，则表示该第一分量的属性量化残差值和该第二分量的属性量化残差值都为零；若该第二标识的取值为第二数值，则表示该第一分量的属性量化残差值和该第二分量的属性量化残差值不都为零。In some embodiments, if the value of the second identifier is the first value, it indicates that the attribute quantization residual value of the first component and the attribute quantization residual value of the second component are both zero; if the value of the second identifier is the second value, it indicates that the attribute quantization residual value of the first component and the attribute quantization residual value of the second component are not both zero.
在一些实施例中，若该第一分量的属性量化残差值和该第二分量的属性量化残差值都为零，该码流还包括对该第三分量的属性量化残差值进行编码得到的结果。In some embodiments, if the attribute quantization residual value of the first component and the attribute quantization residual value of the second component are both zero, the code stream further includes a result obtained by encoding the attribute quantization residual value of the third component.
在一些实施例中，若该第一分量的属性量化残差值和该第二分量的属性量化残差值不都为零，该码流还包括依次对该第二分量的属性量化残差值和该第三分量的属性量化残差值进行编码得到的结果。In some embodiments, if the attribute quantization residual value of the first component and the attribute quantization residual value of the second component are not both zero, the code stream further includes a result obtained by sequentially encoding the attribute quantization residual value of the second component and the attribute quantization residual value of the third component.
在一些实施例中，若该第一分量的属性量化残差值不为零，则该码流还包括依次对该第一分量的属性量化残差值、该第二分量的属性量化残差值以及该第三分量的属性量化残差值进行编码得到的结果。In some embodiments, if the attribute quantization residual value of the first component is not zero, the code stream further includes a result obtained by sequentially encoding the attribute quantization residual value of the first component, the attribute quantization residual value of the second component, and the attribute quantization residual value of the third component.
在一些实施例中,该方法200还可包括:In some embodiments, the method 200 may also include:
针对该第一分量、该第二分量和该第三分量中的每一个分量，对该每一个分量的属性量化残差值进行反量化处理，得到该每一个分量的属性残差重建值；For each of the first component, the second component, and the third component, perform inverse quantization processing on the attribute quantization residual value of each component to obtain the attribute residual reconstruction value of each component;
获取该每一个分量的属性残差重建值;Get the attribute residual reconstruction value of each component;
获取该每一个分量的属性预测值;Obtain the attribute prediction value of each component;
获取该每一个分量的跨分量预测值;Obtain the cross-component prediction value of each component;
将该每一个分量的属性残差重建值、该每一个分量的属性预测值、以及该每一个分量的跨分量预测值的和,确定为该每一个分量的属性重建值。The sum of the attribute residual reconstruction value of each component, the attribute prediction value of each component, and the cross-component prediction value of each component is determined as the attribute reconstruction value of each component.
在一些实施例中，该第一分量的跨分量预测值为零；该第二分量的跨分量预测值为该第一分量的属性残差重建值；该第三分量的跨分量预测值为该第一分量的属性残差重建值与该第二分量的属性残差重建值的和。In some embodiments, the cross-component prediction value of the first component is zero; the cross-component prediction value of the second component is the attribute residual reconstruction value of the first component; and the cross-component prediction value of the third component is the sum of the attribute residual reconstruction value of the first component and the attribute residual reconstruction value of the second component.
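The cross-component prediction values described in this embodiment can be sketched as follows; the array layout and the function name are illustrative assumptions, with indices 0, 1 and 2 standing for the first, second and third components respectively.

```cpp
#include <array>
#include <cstdint>

// recResidual[0..2] are the dequantized attribute residuals of the first,
// second and third components of the current point. Returns the
// cross-component prediction value used for component k (0-based).
inline int32_t crossComponentPrediction(const std::array<int32_t, 3>& recResidual, int k) {
    if (k == 0) return 0;                    // first component: no earlier component available
    if (k == 1) return recResidual[0];       // second component: residual of the first component
    return recResidual[0] + recResidual[1];  // third component: sum of the first two residuals
}
```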
在一些实施例中，在该当前点之前编码完成的点所组成的范围内为该当前点查找N个预测点；根据该N个预测点与该当前点之间的几何距离，计算该N个预测点对应的权重值；基于该N个预测点对应的权重值和该N个预测点的属性重建值，获取该每一个分量的属性预测值。In some embodiments, N prediction points are searched for the current point within the range formed by the points that have been encoded before the current point; the weight values corresponding to the N prediction points are calculated according to the geometric distances between the N prediction points and the current point; and the attribute prediction value of each component is obtained based on the weight values corresponding to the N prediction points and the attribute reconstruction values of the N prediction points.
下面结合具体实施例对编码端的具体执行步骤进行示例性说明。The specific execution steps of the encoding end will be illustrated below in combination with specific embodiments.
实施例2:Example 2:
本实施例中，编码器依次编码该第一分量的属性量化残差值、该第二分量的属性量化残差值以及该第三分量的属性量化残差值，以得到该当前点云的码流；作为一个示例，该第一分量为U分量，该第二分量可以是Y分量，该第三分量可以是V分量。作为另一个示例，该第一分量为G分量，该第二分量可以是R分量，该第三分量可以是B分量。In this embodiment, the encoder sequentially encodes the attribute quantization residual value of the first component, the attribute quantization residual value of the second component, and the attribute quantization residual value of the third component to obtain the code stream of the current point cloud. As an example, the first component is a U component, the second component may be a Y component, and the third component may be a V component. As another example, the first component is a G component, the second component may be an R component, and the third component may be a B component.
本实施例中,编码端的执行过程可包括以下步骤:In this embodiment, the execution process at the encoding end may include the following steps:
a)、在当前点之前编码完成的点所组成的范围内为当前点查找N个预测点。a) Find N prediction points for the current point within the range formed by the points that have been encoded before the current point.
b)、根据N个预测点与当前点之间的几何距离分别计算该N个预测点中的每一个预测点的权重值，根据每一个预测点的权重值和每一个预测点的属性重建值(包括Y分量、U分量以及V分量的属性重建值或R分量、G分量以及B分量的属性重建值)，计算当前点的属性预测值(包括Y分量、U分量以及V分量的属性预测值或R分量、G分量以及B分量的属性预测值)。b) Calculate the weight value of each of the N prediction points according to the geometric distance between that prediction point and the current point, and calculate the attribute prediction value of the current point (including the attribute prediction values of the Y component, U component and V component, or of the R component, G component and B component) according to the weight value of each prediction point and the attribute reconstruction value of each prediction point (including the attribute reconstruction values of the Y component, U component and V component, or of the R component, G component and B component).
例如，若当前点的属性信息的格式为YUV格式，则编码器可以根据每一个预测点的权重值和每一个预测点的Y分量的属性重建值，计算当前点的Y分量的属性预测值，根据每一个预测点的权重值和每一个预测点的U分量的属性重建值，计算当前点的U分量的属性预测值，根据每一个预测点的权重值和每一个预测点的V分量的属性重建值，计算当前点的V分量的属性预测值。再如，若当前点的属性信息的格式为RGB格式，则编码器可以根据每一个预测点的权重值和每一个预测点的R分量的属性重建值，计算当前点的R分量的属性预测值，根据每一个预测点的权重值和每一个预测点的G分量的属性重建值，计算当前点的G分量的属性预测值，根据每一个预测点的权重值和每一个预测点的B分量的属性重建值，计算当前点的B分量的属性预测值。For example, if the format of the attribute information of the current point is the YUV format, the encoder may calculate the attribute prediction value of the Y component of the current point according to the weight value of each prediction point and the attribute reconstruction value of the Y component of each prediction point, calculate the attribute prediction value of the U component of the current point according to the weight value of each prediction point and the attribute reconstruction value of the U component of each prediction point, and calculate the attribute prediction value of the V component of the current point according to the weight value of each prediction point and the attribute reconstruction value of the V component of each prediction point. For another example, if the format of the attribute information of the current point is the RGB format, the encoder may calculate the attribute prediction value of the R component of the current point according to the weight value of each prediction point and the attribute reconstruction value of the R component of each prediction point, calculate the attribute prediction value of the G component of the current point according to the weight value of each prediction point and the attribute reconstruction value of the G component of each prediction point, and calculate the attribute prediction value of the B component of the current point according to the weight value of each prediction point and the attribute reconstruction value of the B component of each prediction point.
示例性地,可通过以下公式确定各个分量的属性预测值:Exemplarily, the attribute prediction value of each component can be determined by the following formula:
A_p = (w_1×A_1 + w_2×A_2 + … + w_N×A_N) / (w_1 + w_2 + … + w_N)
其中，A_p表示当前点的各个分量的属性预测值，w_i表示N个预测点中的第i个预测点的权重值，A_i表示N个预测点中的第i个预测点的对应分量的属性重建值。例如，若当前点的颜色信息的格式为YUV格式，则各个分量可以包括Y分量、U分量以及V分量，若当前点的颜色信息的格式为RGB格式，则各个分量可以包括R分量、G分量以及B分量。where A_p denotes the attribute prediction value of each component of the current point, w_i denotes the weight value of the i-th prediction point among the N prediction points, and A_i denotes the attribute reconstruction value of the corresponding component of the i-th prediction point among the N prediction points. For example, if the format of the color information of the current point is the YUV format, the components may include the Y component, U component and V component; if the format of the color information of the current point is the RGB format, the components may include the R component, G component and B component.
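A small C++ sketch of this weighted prediction is given below. The inverse squared geometric distance is used as the weight w_i here; the text only states that the weights are derived from the geometric distances, so this particular weight formula, like the type and function names, is an assumption for illustration.

```cpp
#include <vector>

struct PredPoint {
    double distance;  // geometric distance between the prediction point and the current point
    double attrRec;   // attribute reconstruction value of one component of the prediction point
};

// Weighted prediction of one component of the current point from its N
// prediction points: A_p = sum(w_i * A_i) / sum(w_i).
double predictComponent(const std::vector<PredPoint>& neighbours) {
    double weightedSum = 0.0;
    double weightSum = 0.0;
    for (const PredPoint& p : neighbours) {
        const double w = 1.0 / (p.distance * p.distance + 1e-12);  // assumed inverse-distance weight
        weightedSum += w * p.attrRec;
        weightSum += w;
    }
    return weightSum > 0.0 ? weightedSum / weightSum : 0.0;
}
```

The function is applied once per colour component, reusing the same weights for the Y, U and V (or R, G and B) components.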
c)、编码器根据各个分量的属性预测值和各个分量跨分量预测值,计算各个分量的属性残差值。c) The encoder calculates the attribute residual value of each component according to the attribute prediction value of each component and the cross-component prediction value of each component.
例如，若当前点的颜色属性位于YUV空间，则可以按照Y分量、U分量、V分量的顺序依次计算当前点的各个分量的属性残差值；若当前点的颜色属性位于RGB空间，则可以按照R分量、G分量、B分量的顺序依次计算当前点的各个分量的属性残差值。For example, if the color attribute of the current point is in the YUV space, the attribute residual value of each component of the current point can be calculated sequentially in the order of the Y component, U component and V component; if the color attribute of the current point is in the RGB space, the attribute residual value of each component of the current point can be calculated sequentially in the order of the R component, G component and B component.
示例性地,可以按照以下公式计算各个分量的属性残差值:Exemplarily, the attribute residual value of each component can be calculated according to the following formula:
delta=currValue-predictor-residualPrevComponent;delta = currValue-predictor-residualPrevComponent;
其中,delta表示各个分量的属性残差值,currValue表示各个分量的原始值或真实值,predictor表示各个分量的属性预测值,residualPrevComponent表示各个分量的跨分量预测值。例如,若当前点的颜色信息的格式为YUV格式,则各个分量可以包括Y分量、U分量以及V分量,若当前点的颜色信息的格式为RGB格式,则各个分量可以包括R分量、G分量以及B分量。Among them, delta represents the attribute residual value of each component, currValue represents the original value or real value of each component, predictor represents the attribute prediction value of each component, and residualPrevComponent represents the cross-component prediction value of each component. For example, if the format of the color information of the current point is YUV format, each component can include Y component, U component and V component; if the format of the color information of the current point is RGB format, each component can include R component, G component and the B component.
示例性地，对于Y分量和R分量，由于是最先编码的分量，因此，可以将Y分量和R分量的跨分量预测值设为0，对于U分量、V分量、G分量以及B分量，其跨分量预测值根据前面编码的分量的属性量化残差值经过反量化得到的属性残差值确定。例如，U分量的跨分量预测值可以是对Y分量的属性量化残差值经过反量化得到的属性残差重建值，G分量的跨分量预测值可以是对R分量的属性量化残差值经过反量化得到的属性残差重建值。再如，V分量的跨分量预测值可以是对Y分量的属性量化残差值经过反量化得到的属性残差重建值与对U分量的属性量化残差值经过反量化得到的属性残差重建值的和，B分量的跨分量预测值可以是对R分量的属性量化残差值经过反量化得到的属性残差重建值与对G分量的属性量化残差值经过反量化得到的属性残差重建值的和。Exemplarily, for the Y component and the R component, since they are the first coded components, the cross-component prediction values of the Y component and the R component can be set to 0; for the U component, V component, G component and B component, the cross-component prediction value is determined according to the attribute residual value obtained by inverse quantization of the attribute quantization residual value of the previously coded component. For example, the cross-component prediction value of the U component may be the attribute residual reconstruction value obtained by inverse quantization of the attribute quantization residual value of the Y component, and the cross-component prediction value of the G component may be the attribute residual reconstruction value obtained by inverse quantization of the attribute quantization residual value of the R component. For another example, the cross-component prediction value of the V component may be the sum of the attribute residual reconstruction value obtained by inverse quantization of the attribute quantization residual value of the Y component and the attribute residual reconstruction value obtained by inverse quantization of the attribute quantization residual value of the U component, and the cross-component prediction value of the B component may be the sum of the attribute residual reconstruction value obtained by inverse quantization of the attribute quantization residual value of the R component and the attribute residual reconstruction value obtained by inverse quantization of the attribute quantization residual value of the G component.
编码器获取各个分量的属性残差值后,对各个分量的属性残差值进行量化操作,得到当前点的各个分量的属性量化残差值。例如,若当前点的颜色信息的格式为YUV格式,则按照Y分量、U分量、V分量的顺序依次量化当前点的各个分量的属性残差值,得到各个分量的属性量化残差值。再如,若当前点的颜色信息的格式为RGB格式,则按照R分量、G分量、B分量的顺序依次量化当前点的各个分量的属性残差值,得到各个分量的属性量化残差值。After the encoder obtains the attribute residual value of each component, it performs a quantization operation on the attribute residual value of each component to obtain the attribute quantized residual value of each component of the current point. For example, if the format of the color information of the current point is YUV format, the attribute residual value of each component of the current point is sequentially quantized in the order of Y component, U component, and V component to obtain the attribute quantized residual value of each component. For another example, if the format of the color information of the current point is RGB format, the attribute residual value of each component of the current point is sequentially quantized in the order of R component, G component, and B component to obtain the attribute quantized residual value of each component.
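Step c) and the quantization just described can be folded into one pass per point, as in the C++ sketch below. A uniform scalar quantizer with step size qs (qs >= 1) is assumed purely for illustration, and the function and variable names are not taken from any reference software; the sketch follows the residual-computation order Y, U, V (or R, G, B) described above.

```cpp
#include <array>
#include <cstdint>

// currValue and predictor hold the original values and the attribute
// prediction values of the three components in Y,U,V (or R,G,B) order.
std::array<int32_t, 3> computeQuantizedResiduals(
        const std::array<int32_t, 3>& currValue,
        const std::array<int32_t, 3>& predictor,
        int32_t qs) {
    std::array<int32_t, 3> quantized{};
    int32_t residualPrevComponent = 0;  // zero for the first processed component
    for (int c = 0; c < 3; ++c) {
        const int32_t delta = currValue[c] - predictor[c] - residualPrevComponent;
        quantized[c] = delta / qs;                      // quantization (illustrative)
        const int32_t recResidual = quantized[c] * qs;  // inverse quantization
        // The cross-component prediction of the next component accumulates the
        // dequantized residuals of the components processed before it.
        residualPrevComponent += recResidual;
    }
    return quantized;
}
```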
d)、编码器对当前点的各个分量的属性量化残差值进行熵编码,得到点云的码流。d) The encoder performs entropy encoding on the attribute quantization residual value of each component of the current point to obtain the code stream of the point cloud.
例如,若当前点的颜色信息的格式为YUV格式,则按照U分量、Y分量、V分量的顺序对各个分量的属性量化残差值进行熵编码,以得到点云的码流。再如,若当前点的颜色信息的格式为RGB格式,则按照G分量、R分量、B分量顺序对各个分量的属性量化残差值进行熵编码,以得到点云的码流。For example, if the format of the color information of the current point is YUV format, entropy encoding is performed on the attribute quantization residual value of each component in the order of U component, Y component, and V component to obtain the code stream of the point cloud. For another example, if the format of the color information of the current point is RGB format, entropy encoding is performed on the attribute quantization residual value of each component in the order of G component, R component, and B component to obtain the code stream of the point cloud.
示例性地,可以按照以下步骤对当前点的各个分量的属性量化残差值进行熵编码:Exemplarily, the entropy coding may be performed on the attribute quantization residual value of each component of the current point according to the following steps:
1)、一方面，引入一个标志位flagu_g，针对属性信息的格式为YUV格式的点，其可用于表示U分量的属性量化残差值是否为0，对于属性信息的格式为RGB格式的点，其可用于表示G分量的属性量化残差值是否为0。例如，若U分量(或G分量)的属性量化残差值为0，则flagu_g=0，否则flagu_g=1，编码器对flagu_g的取值进行编码并写入码流。另一方面，引入另一个标志位flaguy_gr，针对属性信息的格式为YUV格式的点，其可用于表示U分量的属性残差量化值和Y分量的属性残差量化值是否都为0，对于属性信息的格式为RGB格式的点，其可用于表示G分量的属性残差量化值和R分量的属性残差量化值是否都为0。例如，若U分量的属性残差量化值和Y分量的属性残差量化值都为0，则flaguy_gr=0，否则flaguy_gr=1。再如，若G分量的属性残差量化值和R分量的属性残差量化值都为0，则flaguy_gr=0，否则flaguy_gr=1。1) On the one hand, a flag bit flagu_g is introduced; for points whose attribute information is in the YUV format, it can be used to indicate whether the attribute quantization residual value of the U component is 0, and for points whose attribute information is in the RGB format, it can be used to indicate whether the attribute quantization residual value of the G component is 0. For example, if the attribute quantization residual value of the U component (or G component) is 0, then flagu_g=0, otherwise flagu_g=1; the encoder encodes the value of flagu_g and writes it into the code stream. On the other hand, another flag bit flaguy_gr is introduced; for points whose attribute information is in the YUV format, it can be used to indicate whether the attribute residual quantization value of the U component and the attribute residual quantization value of the Y component are both 0, and for points whose attribute information is in the RGB format, it can be used to indicate whether the attribute residual quantization value of the G component and the attribute residual quantization value of the R component are both 0. For example, if the attribute residual quantization value of the U component and the attribute residual quantization value of the Y component are both 0, then flaguy_gr=0, otherwise flaguy_gr=1. For another example, if the attribute residual quantization value of the G component and the attribute residual quantization value of the R component are both 0, then flaguy_gr=0, otherwise flaguy_gr=1.
2)、编码器判断U分量(或G分量)的属性量化残差值是否为0,若为0,则执行下述3),否则执行下述4)。2) The encoder judges whether the attribute quantization residual value of the U component (or G component) is 0, and if it is 0, executes the following 3), otherwise executes the following 4).
3)、编码器对flaguy_gr的取值进行编码，并判断Y分量(或R分量)的属性量化残差值是否为0，若Y分量(或R分量)的属性量化残差值为0，则编码V分量(或B分量)的属性量化残差值，否则依次编码Y分量和V分量(或R分量和B分量)的属性残差量化值。3) The encoder encodes the value of flaguy_gr and judges whether the attribute quantization residual value of the Y component (or R component) is 0; if the attribute quantization residual value of the Y component (or R component) is 0, the attribute quantization residual value of the V component (or B component) is encoded; otherwise, the attribute residual quantization values of the Y component and the V component (or the R component and the B component) are encoded in sequence.
例如,若当前点的颜色信息的格式为YUV格式,则编码器对flaguy_gr的取值进行编码,并判断Y分量的属性量化残差值是否为0,若Y分量的属性量化残差值为0,则编码V分量的属性量化残差值,否则依次编码Y分量的属性残差量化值和V分量的属性残差量化值。再如,若当前点的颜色信息的格式为RGB格式,则编码器对flaguy_gr的取值进行编码,并判断R分量的属性量化残差值是否为0,若R分量的属性量化残差值为0,则编码B分量的属性量化残差值,否则依次编码R分量的属性残差量化值和B分量的属性残差量化值。For example, if the format of the color information of the current point is YUV format, the encoder encodes the value of flaguy_gr, and judges whether the attribute quantization residual value of the Y component is 0, if the attribute quantization residual value of the Y component is 0 , then encode the attribute quantized residual value of the V component, otherwise encode the attribute residual quantized value of the Y component and the attribute residual quantized value of the V component in sequence. For another example, if the color information format of the current point is in RGB format, the encoder encodes the value of flaguy_gr and judges whether the attribute quantization residual value of the R component is 0. If the attribute quantization residual value of the R component is 0, the attribute quantization residual value of the B component is encoded, otherwise, the attribute residual quantization value of the R component and the attribute residual quantization value of the B component are encoded sequentially.
4)、编码器依次编码U分量、Y分量、V分量(或G分量、R分量、B分量)的属性量化残差值。4) The encoder sequentially encodes the attribute quantized residual values of the U component, the Y component, and the V component (or G component, R component, and B component).
例如,若当前点的颜色信息的格式为YUV格式,则编码器依次编码U分量的属性量化残差值、Y分量的属性量化残差值以及V分量的属性量化残差值。再如,若当前点的颜色信息的格式为RGB格式,则编码器依次编码G分量的属性量化残差值、R分量的属性量化残差值以及B分量的属性量化残差值。For example, if the format of the color information of the current point is YUV format, the encoder sequentially encodes the attribute quantization residual value of the U component, the attribute quantization residual value of the Y component, and the attribute quantization residual value of the V component. For another example, if the color information of the current point is in the RGB format, the encoder sequentially encodes the attribute quantization residual value of the G component, the attribute quantization residual value of the R component, and the attribute quantization residual value of the B component.
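Putting steps 1) to 4) together, the flag-driven entropy coding of one point can be sketched as follows. The EntropyEncoder type is a hypothetical stand-in that simply collects symbols; the real codec writes flags and residuals with its own arithmetic/context coder, so this only illustrates the ordering logic.

```cpp
#include <cstdint>
#include <vector>

// Hypothetical stand-in for the codec's entropy coder.
struct EntropyEncoder {
    std::vector<int32_t> symbols;
    void encodeFlag(int flag) { symbols.push_back(flag); }
    void encodeResidual(int32_t v) { symbols.push_back(v); }
};

// Step d) for one point. res[0], res[1], res[2] are the attribute quantization
// residual values of the U, Y, V components (or G, R, B components).
void encodePointResiduals(EntropyEncoder& ec, const int32_t res[3]) {
    const int flagu_g = (res[0] == 0) ? 0 : 1;        // is the U (or G) residual zero?
    ec.encodeFlag(flagu_g);
    if (flagu_g == 0) {
        const int flaguy_gr = (res[1] == 0) ? 0 : 1;  // is the Y (or R) residual also zero?
        ec.encodeFlag(flaguy_gr);
        if (res[1] == 0) {
            ec.encodeResidual(res[2]);                // only the V (or B) residual remains
        } else {
            ec.encodeResidual(res[1]);                // Y (or R)
            ec.encodeResidual(res[2]);                // V (or B)
        }
    } else {
        ec.encodeResidual(res[0]);                    // U (or G)
        ec.encodeResidual(res[1]);                    // Y (or R)
        ec.encodeResidual(res[2]);                    // V (or B)
    }
}
```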
e)、对于各个分量的属性量化残差值，编码器按照Y分量、U分量、V分量(或R分量、G分量、B分量)的顺序进行反量化、然后加上对应分量的属性预测值和跨分量预测值，生成当前点的对应分量的属性重建值。e) For the attribute quantization residual value of each component, the encoder performs inverse quantization in the order of the Y component, U component and V component (or the R component, G component and B component), and then adds the attribute prediction value and the cross-component prediction value of the corresponding component to generate the attribute reconstruction value of the corresponding component of the current point.
例如，若当前点的颜色信息的格式为YUV格式，则对于各个分量的属性量化残差值，编码器按照Y分量、U分量、V分量的顺序进行反量化、然后加上对应分量的属性预测值和跨分量预测值，生成当前点的对应分量的属性重建值。再如，若当前点的颜色信息的格式为RGB格式，则对于各个分量的属性量化残差值，编码器按照R分量、G分量、B分量的顺序进行反量化、然后加上对应分量的属性预测值和跨分量预测值，生成当前点的对应分量的属性重建值。For example, if the format of the color information of the current point is the YUV format, then for the attribute quantization residual value of each component, the encoder performs inverse quantization in the order of the Y component, U component and V component, and then adds the attribute prediction value and the cross-component prediction value of the corresponding component to generate the attribute reconstruction value of the corresponding component of the current point. For another example, if the format of the color information of the current point is the RGB format, then for the attribute quantization residual value of each component, the encoder performs inverse quantization in the order of the R component, G component and B component, and then adds the attribute prediction value and the cross-component prediction value of the corresponding component to generate the attribute reconstruction value of the corresponding component of the current point.
示例性地,可以按照以下公式生成当前点的各个分量的属性重建值:Exemplarily, the attribute reconstruction value of each component of the current point can be generated according to the following formula:
reconstructedValue = recResidual + predictor + residualPrevComponent
其中，reconstructedValue表示当前点的各个分量的属性重建值，recResidual表示各个分量反量化后的属性残差重建值，predictor表示各个分量的属性预测值，residualPrevComponent表示各个分量的跨分量预测值。例如，若当前点的颜色信息的格式为YUV格式，则各个分量可以包括Y分量、U分量以及V分量，若当前点的颜色信息的格式为RGB格式，则各个分量可以包括R分量、G分量以及B分量。where reconstructedValue denotes the attribute reconstruction value of each component of the current point, recResidual denotes the attribute residual reconstruction value of each component after inverse quantization, predictor denotes the attribute prediction value of each component, and residualPrevComponent denotes the cross-component prediction value of each component. For example, if the format of the color information of the current point is the YUV format, the components may include the Y component, U component and V component; if the format of the color information of the current point is the RGB format, the components may include the R component, G component and B component.
应当理解，由于步骤c)中会对各个分量的属性量化残差值进行反量化，得到各个分量的属性残差重建值，因此，也可以将步骤e)中各个分量的属性重建值的生成过程集成在步骤c)中。此外，若将步骤e)中各个分量的属性重建值的生成过程集成在步骤c)中，则各个分量的属性重建值的生成过程也可以按照U分量、Y分量、V分量(或G分量、R分量、B分量)的顺序进行反量化。本申请对此不作具体限定。It should be understood that, since the attribute quantization residual value of each component is inversely quantized in step c) to obtain the attribute residual reconstruction value of each component, the generation of the attribute reconstruction value of each component in step e) may also be integrated into step c). In addition, if the generation of the attribute reconstruction value of each component in step e) is integrated into step c), the inverse quantization in the generation of the attribute reconstruction values may also be performed in the order of the U component, Y component and V component (or the G component, R component and B component). This application does not specifically limit this.
下面将结合附图对本申请实施例提供的编码器或解码器进行说明。The encoder or decoder provided by the embodiments of the present application will be described below with reference to the accompanying drawings.
图14是本申请实施例提供的解码器300的示意性框图。Fig. 14 is a schematic block diagram of a decoder 300 provided by an embodiment of the present application.
如图14所示,该解码器300可包括:As shown in Figure 14, the decoder 300 may include:
解码单元310,用于依次从当前点云的码流中解码当前点的第一分量的属性量化残差值、第二分量的属性量化残差值以及第三分量的属性量化残差值;其中,该第一分量为U分量、V分量、G分量或B分量;The decoding unit 310 is used to sequentially decode the attribute quantized residual value of the first component of the current point, the attribute quantized residual value of the second component, and the attribute quantized residual value of the third component from the code stream of the current point cloud; wherein , the first component is a U component, a V component, a G component or a B component;
获取单元320,用于基于该第一分量的属性量化残差值、该第二分量的属性量化残差值以及该第三分量的属性量化残差值,获取该当前点的属性重建值。The acquiring unit 320 is configured to acquire the attribute reconstruction value of the current point based on the attribute quantized residual value of the first component, the attribute quantized residual value of the second component, and the attribute quantized residual value of the third component.
在一些实施例中,该解码单元310具体用于:In some embodiments, the decoding unit 310 is specifically used for:
对该码流进行解码,得到第一标识,该第一标识的取值用于表示该第一分量的属性量化残差值是否为零;Decoding the code stream to obtain a first identifier, the value of the first identifier is used to indicate whether the attribute quantization residual value of the first component is zero;
基于该第一标识的取值对该码流进行解析,获取该第一分量的属性量化残差值、该第二分量的属性量化残差值以及该第三分量的属性量化残差值。The code stream is analyzed based on the value of the first identifier, and the attribute quantization residual value of the first component, the attribute quantization residual value of the second component, and the attribute quantization residual value of the third component are obtained.
在一些实施例中,该第一标识的取值为第一数值;其中,该解码单元310具体用于:In some embodiments, the value of the first identifier is a first value; wherein, the decoding unit 310 is specifically configured to:
确定该第一分量的属性量化残差值为零;determining that the attribute quantization residual value of the first component is zero;
基于对该码流进行解码得到的第二标识，获取该第二分量的属性残差值和该第三分量的属性残差值；该第二标识的取值用于表示该第一分量的属性量化残差值和该第二分量的属性量化残差值是否都为零。Based on the second identifier obtained by decoding the code stream, obtain the attribute residual value of the second component and the attribute residual value of the third component; the value of the second identifier is used to indicate whether the attribute quantization residual value of the first component and the attribute quantization residual value of the second component are both zero.
在一些实施例中,该解码单元310具体用于:In some embodiments, the decoding unit 310 is specifically used for:
若该第二标识的取值为该第一数值,则确定该第二分量的属性量化残差值为零,并从该码流中获取该第三分量的属性量化残差值;If the value of the second identifier is the first value, determine that the attribute quantization residual value of the second component is zero, and acquire the attribute quantization residual value of the third component from the code stream;
若该第二标识的取值为第二数值,则依次从该码流中获取该第二分量的属性量化残差值和该第三分量的属性量化残差值。If the value of the second identifier is the second value, the attribute quantization residual value of the second component and the attribute quantization residual value of the third component are sequentially obtained from the code stream.
在一些实施例中,该第一标识的取值为第二数值;其中,该解码单元310具体用于:In some embodiments, the value of the first identifier is a second value; wherein, the decoding unit 310 is specifically configured to:
依次从该码流中获取该第一分量的属性量化残差值、该第二分量的属性量化残差值以及该第三分量的属性量化残差值。The attribute quantization residual value of the first component, the attribute quantization residual value of the second component, and the attribute quantization residual value of the third component are sequentially acquired from the code stream.
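A decoder-side C++ sketch of this flag-driven parsing is given below, assuming, as in Embodiment 2, that the first value is 0 and the second value is 1, and that the three components are parsed in the coding order (e.g. U, Y, V or G, R, B). The EntropyDecoder type is a hypothetical stand-in for the codec's actual bitstream parser.

```cpp
#include <array>
#include <cstddef>
#include <cstdint>
#include <vector>

// Hypothetical stand-in for the codec's entropy decoder.
struct EntropyDecoder {
    std::vector<int32_t> symbols;
    std::size_t pos = 0;
    int decodeFlag() { return static_cast<int>(symbols[pos++]); }
    int32_t decodeResidual() { return symbols[pos++]; }
};

// Parses the attribute quantization residual values of the first, second and
// third components of one point.
std::array<int32_t, 3> decodePointResiduals(EntropyDecoder& dec) {
    std::array<int32_t, 3> res{0, 0, 0};
    if (dec.decodeFlag() == 0) {      // first identifier: first-component residual is zero
        if (dec.decodeFlag() == 0) {  // second identifier: second-component residual is also zero
            res[2] = dec.decodeResidual();
        } else {
            res[1] = dec.decodeResidual();
            res[2] = dec.decodeResidual();
        }
    } else {                          // first-component residual is non-zero
        res[0] = dec.decodeResidual();
        res[1] = dec.decodeResidual();
        res[2] = dec.decodeResidual();
    }
    return res;
}
```

This mirrors the encoder-side ordering: the second identifier is only present in the code stream when the first-component residual is zero.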
在一些实施例中,该获取单元320具体用于:In some embodiments, the acquiring unit 320 is specifically configured to:
针对该第一分量、该第二分量和该第三分量中的每一个分量，对该每一个分量的属性量化残差值进行反量化处理，得到该每一个分量的属性残差重建值；For each of the first component, the second component, and the third component, perform inverse quantization processing on the attribute quantization residual value of each component to obtain the attribute residual reconstruction value of each component;
获取该每一个分量的属性残差重建值;Get the attribute residual reconstruction value of each component;
获取该每一个分量的属性预测值;Obtain the attribute prediction value of each component;
获取该每一个分量的跨分量预测值;Obtain the cross-component prediction value of each component;
将该每一个分量的属性残差重建值、该每一个分量的属性预测值、以及该每一个分量的跨分量预测值的和,确定为该每一个分量的属性重建值。The sum of the attribute residual reconstruction value of each component, the attribute prediction value of each component, and the cross-component prediction value of each component is determined as the attribute reconstruction value of each component.
在一些实施例中，该第一分量的跨分量预测值为零；该第二分量的跨分量预测值为该第一分量的属性残差重建值；该第三分量的跨分量预测值为该第一分量的属性残差重建值与该第二分量的属性残差重建值的和。In some embodiments, the cross-component prediction value of the first component is zero; the cross-component prediction value of the second component is the attribute residual reconstruction value of the first component; and the cross-component prediction value of the third component is the sum of the attribute residual reconstruction value of the first component and the attribute residual reconstruction value of the second component.
在一些实施例中,该获取单元320具体用于:In some embodiments, the acquiring unit 320 is specifically configured to:
在该当前点之前解码完成的点所组成的范围内为该当前点查找N个预测点;Find N predicted points for the current point within the range formed by the points that have been decoded before the current point;
根据该N个预测点与该当前点之间的几何距离,计算该N个预测点对应的权重值;Calculating weight values corresponding to the N predicted points according to the geometric distance between the N predicted points and the current point;
基于该N个预测点对应的权重值和该N个预测点的属性重建值,获取该每一个分量的属性预测值。Based on the weight values corresponding to the N prediction points and the attribute reconstruction values of the N prediction points, the attribute prediction value of each component is obtained.
需要说明的是,该解码器300也可以结合至图5所示的解码框架,即可将该解码器300中的单元替换或结合至解码框架中的相关部分。例如,该获取单元320可用于实现解码框架中的属性预测部分。It should be noted that the decoder 300 can also be combined with the decoding framework shown in FIG. 5 , that is, units in the decoder 300 can be replaced or combined with relevant parts in the decoding framework. For example, the acquiring unit 320 can be used to implement the attribute prediction part in the decoding framework.
图15是本申请实施例提供的编码器400的示意性框图。FIG. 15 is a schematic block diagram of an encoder 400 provided by an embodiment of the present application.
如图15所示,该编码器400可包括:As shown in Figure 15, the encoder 400 may include:
确定单元410,用于确定当前点云中待编码的当前点的第一分量的属性量化残差值、第二分量的属性量化残差值以及第三分量的属性量化残差值;其中,该第一分量为U分量、V分量、G分量或B分量;A determination unit 410, configured to determine the attribute quantization residual value of the first component, the attribute quantization residual value of the second component, and the attribute quantization residual value of the third component of the current point to be encoded in the current point cloud; wherein, the The first component is a U component, a V component, a G component or a B component;
编码单元420,用于依次编码该第一分量的属性量化残差值、该第二分量的属性量化残差值以及该第三分量的属性量化残差值,以得到该当前点云的码流。An encoding unit 420, configured to sequentially encode the attribute quantized residual value of the first component, the attribute quantized residual value of the second component, and the attribute quantized residual value of the third component to obtain the code stream of the current point cloud .
在一些实施例中,该码流包括对第一标识的取值进行编码得到的结果,该第一标识的取值用于表示该第一分量的属性量化残差值是否为零。In some embodiments, the code stream includes a result obtained by encoding the value of the first identifier, and the value of the first identifier is used to indicate whether the attribute quantization residual value of the first component is zero.
在一些实施例中，若该第一标识的取值为第一数值，则表示该第一分量的属性量化残差值为零；若该第一标识的取值为第二数值，则表示该第一分量的属性量化残差值不为零。In some embodiments, if the value of the first identifier is the first value, it indicates that the attribute quantization residual value of the first component is zero; if the value of the first identifier is the second value, it indicates that the attribute quantization residual value of the first component is not zero.
在一些实施例中，若该第一分量的属性量化残差值为零，则该码流还包括对第二标识的取值进行编码得到的结果，该第二标识的取值用于表示该第一分量的属性量化残差值和该第二分量的属性量化残差值是否都为零。In some embodiments, if the attribute quantization residual value of the first component is zero, the code stream further includes a result obtained by encoding the value of a second identifier, and the value of the second identifier is used to indicate whether the attribute quantization residual value of the first component and the attribute quantization residual value of the second component are both zero.
在一些实施例中，若该第二标识的取值为第一数值，则表示该第一分量的属性量化残差值和该第二分量的属性量化残差值都为零；若该第二标识的取值为第二数值，则表示该第一分量的属性量化残差值和该第二分量的属性量化残差值不都为零。In some embodiments, if the value of the second identifier is the first value, it indicates that the attribute quantization residual value of the first component and the attribute quantization residual value of the second component are both zero; if the value of the second identifier is the second value, it indicates that the attribute quantization residual value of the first component and the attribute quantization residual value of the second component are not both zero.
在一些实施例中，若该第一分量的属性量化残差值和该第二分量的属性量化残差值都为零，该码流还包括对该第三分量的属性量化残差值进行编码得到的结果。In some embodiments, if the attribute quantization residual value of the first component and the attribute quantization residual value of the second component are both zero, the code stream further includes a result obtained by encoding the attribute quantization residual value of the third component.
在一些实施例中，若该第一分量的属性量化残差值和该第二分量的属性量化残差值不都为零，该码流还包括依次对该第二分量的属性量化残差值和该第三分量的属性量化残差值进行编码得到的结果。In some embodiments, if the attribute quantization residual value of the first component and the attribute quantization residual value of the second component are not both zero, the code stream further includes a result obtained by sequentially encoding the attribute quantization residual value of the second component and the attribute quantization residual value of the third component.
在一些实施例中，若该第一分量的属性量化残差值不为零，则该码流还包括依次对该第一分量的属性量化残差值、该第二分量的属性量化残差值以及该第三分量的属性量化残差值进行编码得到的结果。In some embodiments, if the attribute quantization residual value of the first component is not zero, the code stream further includes a result obtained by sequentially encoding the attribute quantization residual value of the first component, the attribute quantization residual value of the second component, and the attribute quantization residual value of the third component.
在一些实施例中,该确定单元410还用于:In some embodiments, the determining unit 410 is also used to:
针对该第一分量、该第二分量和该第三分量中的每一个分量，对该每一个分量的属性量化残差值进行反量化处理，得到该每一个分量的属性残差重建值；For each of the first component, the second component, and the third component, perform inverse quantization processing on the attribute quantization residual value of each component to obtain the attribute residual reconstruction value of each component;
获取该每一个分量的属性残差重建值;Get the attribute residual reconstruction value of each component;
获取该每一个分量的属性预测值;Obtain the attribute prediction value of each component;
获取该每一个分量的跨分量预测值;Obtain the cross-component prediction value of each component;
将该每一个分量的属性残差重建值、该每一个分量的属性预测值、以及该每一个分量的跨分量预测值的和,确定为该每一个分量的属性重建值。The sum of the attribute residual reconstruction value of each component, the attribute prediction value of each component, and the cross-component prediction value of each component is determined as the attribute reconstruction value of each component.
在一些实施例中，该第一分量的跨分量预测值为零；该第二分量的跨分量预测值为该第一分量的属性残差重建值；该第三分量的跨分量预测值为该第一分量的属性残差重建值与该第二分量的属性残差重建值的和。In some embodiments, the cross-component prediction value of the first component is zero; the cross-component prediction value of the second component is the attribute residual reconstruction value of the first component; and the cross-component prediction value of the third component is the sum of the attribute residual reconstruction value of the first component and the attribute residual reconstruction value of the second component.
在一些实施例中,该确定单元410具体用于:In some embodiments, the determining unit 410 is specifically configured to:
在该当前点之前编码完成的点所组成的范围内为该当前点查找N个预测点;Find N predicted points for the current point within the range formed by the encoded points before the current point;
根据该N个预测点与该当前点之间的几何距离,计算该N个预测点对应的权重值;Calculating weight values corresponding to the N predicted points according to the geometric distance between the N predicted points and the current point;
基于该N个预测点对应的权重值和该N个预测点的属性重建值,获取该每一个分量的属性预测值。Based on the weight values corresponding to the N prediction points and the attribute reconstruction values of the N prediction points, the attribute prediction value of each component is obtained.
需要说明的是,该编码器400也可以结合至图4所示的编码框架,即可将该编码器400中的单元替换或结合至编码框架中的相关部分。例如,该确定单元410可用于实现编码框架中的属性预测部分。It should be noted that the encoder 400 can also be combined with the encoding framework shown in FIG. 4 , that is, units in the encoder 400 can be replaced or combined with relevant parts in the encoding framework. For example, the determining unit 410 can be used to implement the attribute prediction part in the encoding framework.
应理解,装置实施例与方法实施例可以相互对应,类似的描述可以参照方法实施例。为避免重复,此处不再赘述。具体地,解码器300可以对应于执行本申请实施例的方法100中的相应主体,并且解码器300中的各个单元分别为了实现方法100中的相应流程,类似的,编码器400可以对应于执行本申请实施例的方法200中的相应主体,并且编码器400中的各个单元分别为了实现方法200中的相应流程,为了简洁,在此不再赘述。It should be understood that the device embodiment and the method embodiment may correspond to each other, and similar descriptions may refer to the method embodiment. To avoid repetition, details are not repeated here. Specifically, the decoder 300 may correspond to the corresponding subject in the method 100 of the embodiment of the present application, and each unit in the decoder 300 is to realize the corresponding process in the method 100, similarly, the encoder 400 may correspond to the execution The corresponding subjects in the method 200 in the embodiment of the present application, and each unit in the encoder 400 are to implement the corresponding processes in the method 200 respectively, and for the sake of brevity, details are not repeated here.
还应当理解,本申请实施例涉及的解码器300或编码器400中的各个单元可以分别或全部合并为一个或若干个另外的单元来构成,或者其中的某个(些)单元还可以再拆分为功能上更小的多个单元来构成,这可以实现同样的操作,而不影响本申请的实施例的技术效果的实现。上述单元是基于逻辑功能划分的,在实际应用中,一个单元的功能也可以由多个单元来实现,或者多个单元的功能由一个单元实现。在本申请的其它实施例中,该解码器300或编码器400也可以包括其它单元,在实际应用中,这些功能也可以由其它单元协助实现,并且可以由多个单元协作实现。根据本申请的另一个实施例,可以通过在包括例如中央处理单元(CPU)、随机存取存储介质(RAM)、只读存储介质(ROM)等处理元件和存储元件的通用计算机的通用计算设备上运行能够执行相应方法所涉及的各步骤的计算机程序(包括程序代码),来构造本申请实施例涉及的解码器300或编码器400,以及来实现本申请实施例的基于点云属性预测的编解码方法。计算机程序可以记载于例如计算机可读存储介质上,并通过计算机可读存储介质装载于任意具有数据处理能力的电子设备,并在其中运行,来实现本申请实施例的相应方法。It should also be understood that the various units in the decoder 300 or the encoder 400 involved in the embodiment of the present application can be respectively or all combined into one or several other units to form, or some (some) units can be further disassembled. Divided into a plurality of functionally smaller units, this can achieve the same operation without affecting the realization of the technical effects of the embodiments of the present application. The above-mentioned units are divided based on logical functions. In practical applications, the functions of one unit may also be realized by multiple units, or the functions of multiple units may be realized by one unit. In other embodiments of the present application, the decoder 300 or the encoder 400 may also include other units. In practical applications, these functions may also be implemented with the assistance of other units, and may be implemented cooperatively by multiple units. According to another embodiment of the present application, a general-purpose computing device including a general-purpose computer such as a central processing unit (CPU), a random access storage medium (RAM), a read-only storage medium (ROM) and a storage element Run a computer program (including program code) capable of executing the steps involved in the corresponding method to construct the decoder 300 or encoder 400 involved in the embodiment of the present application, and realize the point cloud attribute prediction based on the embodiment of the present application Codec method. The computer program can be recorded in, for example, a computer-readable storage medium, and loaded on any electronic device with data processing capability through the computer-readable storage medium, and run in it to implement the corresponding method of the embodiment of the present application.
换言之,上文涉及的单元可以通过硬件形式实现,也可以通过软件形式的指令实现,还可以通过软硬件结合的形式实现。具体地,本申请实施例中的方法实施例的各步骤可以通过处理器中的硬件的集成逻辑电路和/或软件形式的指令完成,结合本申请实施例公开的方法的步骤可以直接体现为硬件译码处理器执行完成,或者用译码处理器中的硬件及软件组合执行完成。可选地,软件可以位于随机存储器,闪存、只读存储器、可编程只读存储器、电可擦写可编程存储器、寄存器等本领域的成熟的存储介质中。该存储介质位于存储器,处理器读取存储器中的信息,结合其硬件完成上述方法实施例中的步骤。In other words, the units mentioned above can be implemented in the form of hardware, can also be implemented by instructions in the form of software, and can also be implemented in the form of a combination of software and hardware. Specifically, each step of the method embodiment in the embodiment of the present application can be completed by an integrated logic circuit of the hardware in the processor and/or instructions in the form of software, and the steps of the method disclosed in the embodiment of the present application can be directly embodied as hardware The execution of the decoding processor is completed, or the combination of hardware and software in the decoding processor is used to complete the execution. Optionally, the software may be located in mature storage media in the field such as random access memory, flash memory, read-only memory, programmable read-only memory, electrically erasable programmable memory, and registers. The storage medium is located in the memory, and the processor reads the information in the memory, and completes the steps in the above method embodiments in combination with its hardware.
图16是本申请实施例提供的编解码设备500的示意结构图。FIG. 16 is a schematic structural diagram of a codec device 500 provided by an embodiment of the present application.
如图16所示,该编解码设备500至少包括处理器510以及计算机可读存储介质520。其中,处理器510以及计算机可读存储介质520可通过总线或者其它方式连接。计算机可读存储介质520用于存储计算机程序521,计算机程序521包括计算机指令,处理器510用于执行计算机可读存储介质520存储的计算机指令。处理器510是编解码设备500的计算核心以及控制核心,其适于实现一条或多条计算机指令,具体适于加载并执行一条或多条计算机指令从而实现相应方法流程或相应功能。As shown in FIG. 16 , the codec device 500 includes at least a processor 510 and a computer-readable storage medium 520 . Wherein, the processor 510 and the computer-readable storage medium 520 may be connected through a bus or in other ways. The computer-readable storage medium 520 is used for storing a computer program 521 , and the computer program 521 includes computer instructions, and the processor 510 is used for executing the computer instructions stored in the computer-readable storage medium 520 . The processor 510 is the computing core and the control core of the codec device 500, which is suitable for realizing one or more computer instructions, and is specifically suitable for loading and executing one or more computer instructions so as to realize corresponding method procedures or corresponding functions.
作为示例,处理器510也可称为中央处理器(CentralProcessingUnit,CPU)。处理器510可以包括但不限于:通用处理器、数字信号处理器(Digital Signal Processor,DSP)、专用集成电路(Application Specific Integrated Circuit,ASIC)、现场可编程门阵列(Field Programmable Gate Array,FPGA)或者其他可编程逻辑器件、分立门或者晶体管逻辑器件、分立硬件组件等等。As an example, the processor 510 may also be called a central processing unit (Central Processing Unit, CPU). The processor 510 may include but not limited to: a general-purpose processor, a digital signal processor (Digital Signal Processor, DSP), an application specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field programmable gate array (Field Programmable Gate Array, FPGA) Or other programmable logic devices, discrete gate or transistor logic devices, discrete hardware components, etc.
作为示例,计算机可读存储介质520可以是高速RAM存储器,也可以是非不稳定的存储器(Non-VolatileMemory),例如至少一个磁盘存储器;可选的,还可以是至少一个位于远离前述处理器510的计算机可读存储介质。具体而言,计算机可读存储介质520包括但不限于:易失性存储器和/或非易失性存储器。其中,非易失性存储器可以是只读存储器(Read-Only Memory,ROM)、可编程只读存储器(Programmable ROM,PROM)、可擦除可编程只读存储器(Erasable PROM,EPROM)、电可擦除可编程只读存储器(Electrically EPROM,EEPROM)或闪存。易失性存储器可以是随机存取存储器(Random Access Memory,RAM),其用作外部高速缓存。通过示例性但不是限制性说明,许多形式的RAM可用,例如静态随机存取存储器(Static RAM,SRAM)、动态随机存取存储器(Dynamic RAM,DRAM)、同步动态随机存取存储器(Synchronous DRAM,SDRAM)、双倍数据速率同步动态随机存取存储器(Double Data Rate SDRAM,DDR SDRAM)、增强型同步动态随机存取存储器(Enhanced SDRAM,ESDRAM)、同步连接动态随机存取存储器(synch link DRAM,SLDRAM)和直接内存总线随机存取存储器(Direct Rambus RAM,DR RAM)。As an example, the computer-readable storage medium 520 can be a high-speed RAM memory, or a non-volatile memory (Non-VolatileMemory), such as at least one disk memory; computer readable storage medium. Specifically, the computer-readable storage medium 520 includes, but is not limited to: volatile memory and/or non-volatile memory. Among them, the non-volatile memory can be read-only memory (Read-Only Memory, ROM), programmable read-only memory (Programmable ROM, PROM), erasable programmable read-only memory (Erasable PROM, EPROM), electronically programmable Erase Programmable Read-Only Memory (Electrically EPROM, EEPROM) or Flash. The volatile memory can be Random Access Memory (RAM), which acts as external cache memory. By way of illustration and not limitation, many forms of RAM are available, such as Static Random Access Memory (Static RAM, SRAM), Dynamic Random Access Memory (Dynamic RAM, DRAM), Synchronous Dynamic Random Access Memory (Synchronous DRAM, SDRAM), double data rate synchronous dynamic random access memory (Double Data Rate SDRAM, DDR SDRAM), enhanced synchronous dynamic random access memory (Enhanced SDRAM, ESDRAM), synchronous connection dynamic random access memory (synch link DRAM, SLDRAM) and Direct Memory Bus Random Access Memory (Direct Rambus RAM, DR RAM).
在一种实现方式中,该编解码设备500可以是图4所示的编码框架或图15所示的编码器400;该计算机可读存储介质520中存储有第一计算机指令;由处理器510加载并执行计算机可读存储介质520中存放的第一计算机指令,以实现图13所示方法实施例中的相应步骤;具体实现中,计算机可读存储介质520中的第一计算机指令由处理器510加载并执行相应步骤,为避免重复,此处不再赘述。在一种实现方式中,该编解码设备500可以是图5所示的解码框架或图14所示的解码器300;该计算机可读存储介质520中存储有第二计算机指令;由处理器510加载并执行计算机可读存储介质520中存放的第二计算机指令,以实现图12所示方法实施例中的相应步骤;具体实现中,计算机可读存储介质520中的第二计算机指令由处理器510加载并执行相应步骤,为避免重复,此处不再赘述。In one implementation, the codec device 500 may be the encoding framework shown in FIG. 4 or the encoder 400 shown in FIG. 15; the first computer instruction is stored in the computer-readable storage medium 520; the processor 510 Load and execute the first computer instruction stored in the computer-readable storage medium 520, to realize the corresponding steps in the method embodiment shown in FIG. 13; Step 510 loads and executes corresponding steps, which will not be repeated here to avoid repetition. In one implementation, the codec device 500 may be the decoding framework shown in FIG. 5 or the decoder 300 shown in FIG. 14 ; second computer instructions are stored in the computer-readable storage medium 520 ; Load and execute the second computer instructions stored in the computer-readable storage medium 520 to implement the corresponding steps in the method embodiment shown in FIG. 12; Step 510 loads and executes corresponding steps, which will not be repeated here to avoid repetition.
根据本申请的另一方面,本申请实施例还提供了一种计算机可读存储介质(Memory),计算机可读存储介质是编解码设备500中的记忆设备,用于存放程序和数据。例如,计算机可读存储介质520。可以理解的是,此处的计算机可读存储介质520既可以包括编解码设备500中的内置存储介质,当然也可以包括编解码设备500所支持的扩展存储介质。计算机可读存储介质提供存储空间,该存储空间存储了编解码设备500的操作系统。并且,在该存储空间中还存放了适于被处理器510加载并执行的一条或 多条的计算机指令,这些计算机指令可以是一个或多个的计算机程序521(包括程序代码)。这些计算机指令指令用于计算机执行上述各种可选方式中提供的基于点云属性预测的编解码方法。According to another aspect of the present application, the embodiment of the present application further provides a computer-readable storage medium (Memory). The computer-readable storage medium is a memory device in the codec device 500 and is used to store programs and data. For example, computer readable storage medium 520 . It can be understood that the computer-readable storage medium 520 here may include a built-in storage medium in the codec device 500 , and of course may also include an extended storage medium supported by the codec device 500 . The computer-readable storage medium provides a storage space, and the storage space stores the operating system of the codec device 500 . Moreover, one or more computer instructions adapted to be loaded and executed by the processor 510 are also stored in the storage space, and these computer instructions may be one or more computer programs 521 (including program codes). These computer instructions are used for the computer to execute the coding and decoding methods based on point cloud attribute prediction provided in the various optional ways above.
根据本申请的另一方面,提供了一种计算机程序产品或计算机程序,该计算机程序产品或计算机程序包括计算机指令,该计算机指令存储在计算机可读存储介质中。例如,计算机程序521。此时,编解码设备500可以是计算机,处理器510从计算机可读存储介质520读取该计算机指令,处理器510执行该计算机指令,使得该计算机执行上述各种可选方式中提供的基于点云属性预测的编解码方法。According to another aspect of the present application, there is provided a computer program product or computer program comprising computer instructions stored in a computer readable storage medium. For example, computer program 521 . At this point, the codec device 500 may be a computer, the processor 510 reads the computer instructions from the computer-readable storage medium 520, and the processor 510 executes the computer instructions, so that the computer executes the point-based Encoding and decoding methods for cloud property prediction.
换言之,当使用软件实现时,可以全部或部分地以计算机程序产品的形式实现。该计算机程序产品包括一个或多个计算机指令。在计算机上加载和执行该计算机程序指令时,全部或部分地运行本申请实施例的流程或实现本申请实施例的功能。该计算机可以是通用计算机、专用计算机、计算机网络、或者其他可编程装置。该计算机指令可以存储在计算机可读存储介质中,或者从一个计算机可读存储介质向另一个计算机可读存储介质进行传输,例如,该计算机指令可以从一个网站站点、计算机、服务器或数据中心通过有线(例如同轴电缆、光纤、数字用户线(digital subscriber line,DSL))或无线(例如红外、无线、微波等)方式向另一个网站站点、计算机、服务器或数据中心进行传输。In other words, when implemented using software, it may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on the computer, the procedures of the embodiments of the present application are run in whole or in part or the functions of the embodiments of the present application are realized. The computer can be a general purpose computer, a special purpose computer, a computer network, or other programmable device. The computer instructions may be stored in or transmitted from one computer-readable storage medium to another computer-readable storage medium, for example, from a website, computer, server, or data center via Wired (such as coaxial cable, optical fiber, digital subscriber line (DSL)) or wireless (such as infrared, wireless, microwave, etc.) transmission to another website site, computer, server or data center.
本领域普通技术人员可以意识到,结合本文中所公开的实施例描述的各示例的单元以及流程步骤,能够以电子硬件、或者计算机软件和电子硬件的结合来实现。这些功能究竟以硬件还是软件方式来执行,取决于技术方案的特定应用和设计约束条件。专业技术人员可以对每个特定的应用来使用不同方法来实现所描述的功能,但是这种实现不应认为超出本申请的范围。Those skilled in the art can appreciate that the units and process steps of the examples described in conjunction with the embodiments disclosed herein can be implemented by electronic hardware, or a combination of computer software and electronic hardware. Whether these functions are executed by hardware or software depends on the specific application and design constraints of the technical solution. Skilled artisans may use different methods to implement the described functions for each specific application, but such implementation should not be regarded as exceeding the scope of the present application.
最后需要说明的是,以上内容,仅为本申请的具体实施方式,但本申请的保护范围并不局限于此,任何熟悉本技术领域的技术人员在本申请揭露的技术范围内,可轻易想到变化或替换,都应涵盖在本申请的保护范围之内。因此,本申请的保护范围应以权利要求的保护范围为准。Finally, it should be noted that the above content is only a specific embodiment of the application, but the scope of protection of the application is not limited thereto. Anyone familiar with the technical field can easily think of Any changes or substitutions shall fall within the scope of protection of this application. Therefore, the protection scope of the present application should be based on the protection scope of the claims.

Claims (25)

  1. 一种解码方法,其特征在于,包括:A decoding method, characterized in that, comprising:
    依次从当前点云的码流中解码当前点的第一分量的属性量化残差值、第二分量的属性量化残差值以及第三分量的属性量化残差值;其中,所述第一分量为U分量、V分量、G分量或B分量;Decoding the attribute quantized residual value of the first component of the current point, the attribute quantized residual value of the second component, and the attribute quantized residual value of the third component from the code stream of the current point cloud in sequence; wherein, the first component It is U component, V component, G component or B component;
    基于所述第一分量的属性量化残差值、所述第二分量的属性量化残差值以及所述第三分量的属性量化残差值,获取所述当前点的属性重建值。Acquiring an attribute reconstruction value of the current point based on the attribute quantized residual value of the first component, the attribute quantized residual value of the second component, and the attribute quantized residual value of the third component.
  2. 根据权利要求1所述的方法，其特征在于，所述依次从当前点云的码流中解码当前点的第一分量的属性量化残差值、第二分量的属性量化残差值以及第三分量的属性量化残差值，包括：The method according to claim 1, wherein sequentially decoding, from the code stream of the current point cloud, the attribute quantization residual value of the first component of the current point, the attribute quantization residual value of the second component, and the attribute quantization residual value of the third component comprises:
    对所述码流进行解码,得到第一标识,所述第一标识的取值用于表示所述第一分量的属性量化残差值是否为零;Decoding the code stream to obtain a first identifier, the value of the first identifier is used to indicate whether the attribute quantization residual value of the first component is zero;
    基于所述第一标识的取值对所述码流进行解析,获取所述第一分量的属性量化残差值、所述第二分量的属性量化残差值以及所述第三分量的属性量化残差值。Analyze the code stream based on the value of the first identifier, and obtain the attribute quantization residual value of the first component, the attribute quantization residual value of the second component, and the attribute quantization value of the third component residual value.
  3. The method according to claim 2, wherein the value of the first flag is a first value; and
    the parsing the bitstream based on the value of the first flag to obtain the attribute quantized residual value of the first component, the attribute quantized residual value of the second component, and the attribute quantized residual value of the third component comprises:
    determining that the attribute quantized residual value of the first component is zero; and
    obtaining an attribute residual value of the second component and an attribute residual value of the third component based on a second flag obtained by decoding the bitstream, wherein a value of the second flag is used to indicate whether the attribute quantized residual value of the first component and the attribute quantized residual value of the second component are both zero.
  4. The method according to claim 3, wherein the obtaining the attribute residual value of the second component and the attribute residual value of the third component based on the second flag obtained by decoding the bitstream comprises:
    if the value of the second flag is the first value, determining that the attribute quantized residual value of the second component is zero, and obtaining the attribute quantized residual value of the third component from the bitstream; and
    if the value of the second flag is a second value, obtaining the attribute quantized residual value of the second component and the attribute quantized residual value of the third component from the bitstream in sequence.
  5. The method according to claim 2, wherein the value of the first flag is a second value; and
    the parsing the bitstream based on the value of the first flag to obtain the attribute quantized residual value of the first component, the attribute quantized residual value of the second component, and the attribute quantized residual value of the third component comprises:
    obtaining the attribute quantized residual value of the first component, the attribute quantized residual value of the second component, and the attribute quantized residual value of the third component from the bitstream in sequence.
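For illustration, a minimal Python sketch of the parsing order described in claims 2 to 5 follows. The bitstream reader interface (read_flag, read_residual) and the convention that the first value is 1 and the second value is 0 are assumptions made for this sketch and are not fixed by the claims.

```python
def parse_attribute_residuals(bs):
    """Parse the quantized attribute residuals of the three components of one
    point, following the flag scheme of claims 2-5.

    `bs` is an assumed reader exposing read_flag() -> int and
    read_residual() -> int; flag value 1 is taken as the "first value",
    0 as the "second value".
    """
    first_flag = bs.read_flag()          # claim 2: is the 1st component residual zero?
    if first_flag == 1:                  # claim 3: 1st component residual is zero
        res1 = 0
        second_flag = bs.read_flag()     # are the 1st AND 2nd component residuals both zero?
        if second_flag == 1:             # claim 4, first value: 2nd residual is also zero
            res2 = 0
            res3 = bs.read_residual()    # only the 3rd component residual is coded
        else:                            # claim 4, second value: 2nd then 3rd residual are coded
            res2 = bs.read_residual()
            res3 = bs.read_residual()
    else:                                # claim 5: all three residuals are coded in sequence
        res1 = bs.read_residual()
        res2 = bs.read_residual()
        res3 = bs.read_residual()
    return res1, res2, res3
```

The same branching, with reads replaced by writes, determines what the encoder emits; an encoder-side sketch follows claim 16.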
  6. The method according to any one of claims 1 to 5, wherein the obtaining the attribute reconstructed value of the current point based on the attribute quantized residual value of the first component, the attribute quantized residual value of the second component, and the attribute quantized residual value of the third component comprises:
    for each of the first component, the second component, and the third component, performing inverse quantization on the attribute quantized residual value of the component to obtain an attribute residual reconstructed value of the component;
    obtaining the attribute residual reconstructed value of each component;
    obtaining an attribute predicted value of each component;
    obtaining a cross-component predicted value of each component; and
    determining a sum of the attribute residual reconstructed value of each component, the attribute predicted value of each component, and the cross-component predicted value of each component as an attribute reconstructed value of each component.
  7. The method according to claim 6, wherein the cross-component predicted value of the first component is zero, the cross-component predicted value of the second component is the attribute residual reconstructed value of the first component, and the cross-component predicted value of the third component is a sum of the attribute residual reconstructed value of the first component and the attribute residual reconstructed value of the second component.
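For illustration, a minimal Python sketch of the reconstruction described in claims 6 and 7 follows. The use of a single scalar quantization step qs for inverse quantization is an assumption of this sketch, since the claims do not specify a particular inverse-quantization rule.

```python
def reconstruct_point(quant_res, pred, qs):
    """Reconstruct the three attribute components of one point (claims 6-7).

    quant_res : (q1, q2, q3) quantized attribute residuals of the three components
    pred      : (p1, p2, p3) attribute predicted values of the three components
    qs        : assumed scalar quantization step used for inverse quantization
    """
    # claim 6: inverse quantization gives the attribute residual reconstructed values
    r1, r2, r3 = (q * qs for q in quant_res)

    # claim 7: cross-component predicted values
    c1 = 0            # first component: zero
    c2 = r1           # second component: residual reconstruction of the first component
    c3 = r1 + r2      # third component: sum of the first and second residual reconstructions

    # claim 6: reconstruction = residual reconstruction + prediction + cross-component prediction
    return (r1 + pred[0] + c1,
            r2 + pred[1] + c2,
            r3 + pred[2] + c3)
```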
  8. The method according to claim 6, wherein the obtaining the attribute predicted value of each component comprises:
    searching for N prediction points for the current point within a range formed by points decoded before the current point;
    calculating weight values corresponding to the N prediction points according to geometric distances between the N prediction points and the current point; and
    obtaining the attribute predicted value of each component based on the weight values corresponding to the N prediction points and attribute reconstructed values of the N prediction points.
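For illustration, a minimal Python sketch of the distance-based prediction described in claim 8 follows. The nearest-neighbour search is assumed to have been performed already and its result passed in as neighbours; inverse Euclidean distance weighting and the eps guard against coincident points are also assumptions of this sketch, since the claims only state that the weights are computed from the geometric distances.

```python
import math

def predict_attribute(current_pos, neighbours, eps=1e-6):
    """Distance-weighted attribute prediction for one point (claim 8).

    current_pos : (x, y, z) of the current point
    neighbours  : list of N tuples (pos, attr), where pos = (x, y, z) of a
                  previously decoded point and attr = (a1, a2, a3) is its
                  reconstructed attribute value
    """
    weights = []
    for pos, _ in neighbours:
        d = math.dist(current_pos, pos)      # geometric (Euclidean) distance
        weights.append(1.0 / (d + eps))      # assumed inverse-distance weighting
    total = sum(weights)

    # weighted average of the neighbours' reconstructed attributes, per component
    return tuple(
        sum(w * attr[k] for w, (_, attr) in zip(weights, neighbours)) / total
        for k in range(3)
    )
```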
  9. An encoding method, characterized by comprising:
    determining an attribute quantized residual value of a first component, an attribute quantized residual value of a second component, and an attribute quantized residual value of a third component of a current point to be encoded in a current point cloud; wherein the first component is a U component, a V component, a G component, or a B component; and
    encoding the attribute quantized residual value of the first component, the attribute quantized residual value of the second component, and the attribute quantized residual value of the third component in sequence to obtain a bitstream of the current point cloud.
  10. The method according to claim 9, wherein the bitstream comprises a result obtained by encoding a value of a first flag, the value of the first flag being used to indicate whether the attribute quantized residual value of the first component is zero.
  11. The method according to claim 10, wherein if the value of the first flag is a first value, it indicates that the attribute quantized residual value of the first component is zero; and if the value of the first flag is a second value, it indicates that the attribute quantized residual value of the first component is not zero.
  12. The method according to claim 10 or 11, wherein if the attribute quantized residual value of the first component is zero, the bitstream further comprises a result obtained by encoding a value of a second flag, the value of the second flag being used to indicate whether the attribute quantized residual value of the first component and the attribute quantized residual value of the second component are both zero.
  13. The method according to claim 12, wherein if the value of the second flag is a first value, it indicates that the attribute quantized residual value of the first component and the attribute quantized residual value of the second component are both zero; and if the value of the second flag is a second value, it indicates that the attribute quantized residual value of the first component and the attribute quantized residual value of the second component are not both zero.
  14. The method according to claim 12, wherein if the attribute quantized residual value of the first component and the attribute quantized residual value of the second component are both zero, the bitstream further comprises a result obtained by encoding the attribute quantized residual value of the third component.
  15. The method according to claim 12, wherein if the attribute quantized residual value of the first component and the attribute quantized residual value of the second component are not both zero, the bitstream further comprises a result obtained by encoding the attribute quantized residual value of the second component and the attribute quantized residual value of the third component in sequence.
  16. The method according to claim 10 or 11, wherein if the attribute quantized residual value of the first component is not zero, the bitstream further comprises a result obtained by encoding the attribute quantized residual value of the first component, the attribute quantized residual value of the second component, and the attribute quantized residual value of the third component in sequence.
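For illustration, a minimal Python sketch of the encoder-side signalling described in claims 10 to 16 follows, mirroring the decoder-side sketch after claim 5. The writer interface (write_flag, write_residual) and the flag-value convention (1 as the first value, 0 as the second value) are the same assumptions as in that sketch.

```python
def write_attribute_residuals(bw, res1, res2, res3):
    """Write the quantized attribute residuals of one point (claims 10-16).

    `bw` is an assumed writer exposing write_flag(v) and write_residual(v);
    flag value 1 is taken as the "first value", 0 as the "second value".
    """
    if res1 == 0:
        bw.write_flag(1)                 # claim 11: first flag = first value (residual is zero)
        if res2 == 0:
            bw.write_flag(1)             # claim 13: first and second residuals are both zero
            bw.write_residual(res3)      # claim 14: only the third residual is coded
        else:
            bw.write_flag(0)             # claim 13: not both zero
            bw.write_residual(res2)      # claim 15: second then third residual
            bw.write_residual(res3)
    else:
        bw.write_flag(0)                 # claim 11: first flag = second value (residual non-zero)
        bw.write_residual(res1)          # claim 16: all three residuals in sequence
        bw.write_residual(res2)
        bw.write_residual(res3)
```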
  17. The method according to any one of claims 9 to 16, further comprising:
    for each of the first component, the second component, and the third component, performing inverse quantization on the attribute quantized residual value of the component to obtain an attribute residual reconstructed value of the component;
    obtaining the attribute residual reconstructed value of each component;
    obtaining an attribute predicted value of each component;
    obtaining a cross-component predicted value of each component; and
    determining a sum of the attribute residual reconstructed value of each component, the attribute predicted value of each component, and the cross-component predicted value of each component as an attribute reconstructed value of each component.
  18. The method according to claim 17, wherein the cross-component predicted value of the first component is zero, the cross-component predicted value of the second component is the attribute residual reconstructed value of the first component, and the cross-component predicted value of the third component is a sum of the attribute residual reconstructed value of the first component and the attribute residual reconstructed value of the second component.
  19. The method according to claim 17, wherein the obtaining the attribute predicted value of each component comprises:
    searching for N prediction points for the current point within a range formed by points encoded before the current point;
    calculating weight values corresponding to the N prediction points according to geometric distances between the N prediction points and the current point; and
    obtaining the attribute predicted value of each component based on the weight values corresponding to the N prediction points and attribute reconstructed values of the N prediction points.
  20. A decoder, characterized by comprising:
    a decoding unit, configured to decode, from a bitstream of a current point cloud, an attribute quantized residual value of a first component, an attribute quantized residual value of a second component, and an attribute quantized residual value of a third component of a current point in sequence; wherein the first component is a U component, a V component, a G component, or a B component; and
    an obtaining unit, configured to obtain an attribute reconstructed value of the current point based on the attribute quantized residual value of the first component, the attribute quantized residual value of the second component, and the attribute quantized residual value of the third component.
  21. An encoder, characterized by comprising:
    a determining unit, configured to determine an attribute quantized residual value of a first component, an attribute quantized residual value of a second component, and an attribute quantized residual value of a third component of a current point to be encoded in a current point cloud; wherein the first component is a U component, a V component, a G component, or a B component; and
    an encoding unit, configured to encode the attribute quantized residual value of the first component, the attribute quantized residual value of the second component, and the attribute quantized residual value of the third component in sequence to obtain a bitstream of the current point cloud.
  22. A decoding device, characterized by comprising:
    a processor, adapted to execute a computer program; and
    a computer-readable storage medium storing a computer program, wherein when the computer program is executed by the processor, the decoding method according to any one of claims 1 to 8 is implemented.
  23. An encoding device, characterized by comprising:
    a processor, adapted to execute a computer program; and
    a computer-readable storage medium storing a computer program, wherein when the computer program is executed by the processor, the encoding method according to any one of claims 9 to 19 is implemented.
  24. A computer-readable storage medium, configured to store a computer program, wherein the computer program causes a computer to execute the decoding method according to any one of claims 1 to 8.
  25. A computer-readable storage medium, configured to store a computer program, wherein the computer program causes a computer to execute the encoding method according to any one of claims 9 to 19.
PCT/CN2021/135529 2021-12-03 2021-12-03 Decoding method, encoding method, decoder, and encoder WO2023097694A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/CN2021/135529 WO2023097694A1 (en) 2021-12-03 2021-12-03 Decoding method, encoding method, decoder, and encoder


Publications (1)

Publication Number Publication Date
WO2023097694A1 true WO2023097694A1 (en) 2023-06-08

Family

ID=86611444

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/135529 WO2023097694A1 (en) 2021-12-03 2021-12-03 Decoding method, encoding method, decoder, and encoder

Country Status (1)

Country Link
WO (1) WO2023097694A1 (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110708560A (en) * 2018-07-10 2020-01-17 Tencent America LLC Point cloud data processing method and device
CN113396584A (en) * 2018-12-07 2021-09-14 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Encoder, decoder and method for enhancing robustness of computation of cross-component linear model parameters
CN113055671A (en) * 2019-01-10 2021-06-29 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Image decoding method, decoder and computer storage medium
CN110418135A (en) * 2019-08-05 2019-11-05 Peking University Shenzhen Graduate School Point cloud intra-frame prediction method and device based on neighbor weight optimization
WO2021062768A1 (en) * 2019-09-30 2021-04-08 Zhejiang University Data encoding method, data decoding method, device, and storage medium
CN112995662A (en) * 2021-03-12 2021-06-18 Peking University Shenzhen Graduate School Method and device for attribute entropy coding and entropy decoding of point cloud

Similar Documents

Publication Publication Date Title
US20230342985A1 (en) Point cloud encoding and decoding method and point cloud decoder
CN114598883A (en) Point cloud attribute prediction method, encoder, decoder and storage medium
WO2023097694A1 (en) Decoding method, encoding method, decoder, and encoder
WO2023015530A1 (en) Point cloud encoding and decoding methods, encoder, decoder, and computer readable storage medium
WO2022257155A1 (en) Decoding method, encoding method, decoder, encoder, encoding device and decoding device
WO2022067776A1 (en) Point cloud decoding and encoding method, and decoder, encoder and encoding and decoding system
WO2023023918A1 (en) Decoding method, encoding method, decoder and encoder
WO2023159428A1 (en) Encoding method, encoder, and storage medium
WO2023240455A1 (en) Point cloud encoding method and apparatus, encoding device, and storage medium
WO2023240660A1 (en) Decoding method, encoding method, decoder, and encoder
WO2023197337A1 (en) Index determining method and apparatus, decoder, and encoder
WO2022257143A1 (en) Intra-frame prediction method and apparatus, encoding method and apparatus, decoding method and apparatus, codec, device and medium
WO2023123284A1 (en) Decoding method, encoding method, decoder, encoder, and storage medium
WO2023197338A1 (en) Index determination method and apparatus, decoder, and encoder
WO2022257145A1 (en) Point cloud attribute prediction method and apparatus, and codec
US20230051431A1 (en) Method and apparatus for selecting neighbor point in point cloud, encoder, and decoder
WO2024077548A1 (en) Point cloud decoding method, point cloud encoding method, decoder, and encoder
WO2022133752A1 (en) Point cloud encoding method and decoding method, and encoder and decoder
WO2022217472A1 (en) Point cloud encoding and decoding methods, encoder, decoder, and computer readable storage medium
WO2022257150A1 (en) Point cloud encoding and decoding methods and apparatus, point cloud codec, and storage medium
WO2023240662A1 (en) Encoding method, decoding method, encoder, decoder, and storage medium
WO2023023914A1 (en) Intra-frame prediction method and apparatus, encoding method and apparatus, decoding method and apparatus, and encoder, decoder, device and medium
WO2023173238A1 (en) Encoding method, decoding method, code stream, encoder, decoder, and storage medium
WO2023173237A1 (en) Encoding method, decoding method, bit stream, encoder, decoder, and storage medium
WO2022140937A1 (en) Point cloud encoding method and system, point cloud decoding method and system, point cloud encoder, and point cloud decoder

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21966120

Country of ref document: EP

Kind code of ref document: A1