WO2022062369A1 - Point cloud encoding and decoding method and system, and point cloud encoder and point cloud decoder - Google Patents


Info

Publication number
WO2022062369A1
Authority
WO
WIPO (PCT)
Prior art keywords
quantization
point
current point
attribute information
target
Prior art date
Application number
PCT/CN2021/087064
Other languages
English (en)
French (fr)
Inventor
元辉
王晓辉
李明
王璐
刘祺
Original Assignee
Oppo广东移动通信有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from PCT/CN2020/117941 external-priority patent/WO2022061785A1/zh
Priority claimed from PCT/CN2020/138421 external-priority patent/WO2022133752A1/zh
Priority claimed from PCT/CN2020/138423 external-priority patent/WO2022133753A1/zh
Application filed by Oppo广东移动通信有限公司 filed Critical Oppo广东移动通信有限公司
Priority to JP2023518709A priority Critical patent/JP2023543752A/ja
Priority to KR1020237009716A priority patent/KR20230075426A/ko
Priority to CN202180064277.9A priority patent/CN116325731A/zh
Priority to EP21870758.6A priority patent/EP4221207A4/en
Publication of WO2022062369A1 publication Critical patent/WO2022062369A1/zh
Priority to US18/125,276 priority patent/US20230232004A1/en

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/102Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/124Quantisation
    • H04N19/126Details of normalisation or weighting functions, e.g. normalisation matrices or variable uniform quantisers
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/102Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/124Quantisation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T9/00Image coding
    • G06T9/001Model-based coding, e.g. wire frame
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T9/00Image coding
    • G06T9/004Predictors, e.g. intraframe, interframe coding
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/102Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/119Adaptive subdivision aspects, e.g. subdivision of a picture into rectangular or non-rectangular coding blocks
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/134Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
    • H04N19/136Incoming video signal characteristics or properties
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/134Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
    • H04N19/157Assigned coding mode, i.e. the coding mode being predefined or preselected to be further used for selection of another element or parameter
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/189Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the adaptation method, adaptation tool or adaptation type used for the adaptive coding
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/50Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/597Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding specially adapted for multi-view video sequence encoding
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/70Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by syntax aspects related to video coding, e.g. related to compression standards

Definitions

  • the present application relates to the technical field of point cloud encoding and decoding, and in particular, to a point cloud encoding and decoding method and system, as well as a point cloud encoder and a point cloud decoder.
  • the surface of an object is sampled by acquisition equipment to form point cloud data, which may include hundreds of thousands of points or more.
  • the point cloud data is transmitted between the point cloud encoding device and the point cloud decoding device in the form of point cloud media files.
  • the point cloud encoding equipment needs to compress the point cloud data before transmission.
  • the compression of point cloud data mainly includes the compression of geometric information and the compression of attribute information.
  • when the attribute information is compressed, the redundant information in the point cloud data is reduced or eliminated through quantization.
  • Embodiments of the present application provide a point cloud encoding and decoding method and system, as well as a point cloud encoder and a point cloud decoder, so as to improve the quantization effect of points in the point cloud.
  • the present application provides a point cloud encoding method, including:
  • the residual value of the attribute information of the current point is quantized by using a target quantization method to obtain the quantized residual value of the attribute information of the current point;
  • the target quantization method includes at least two of the following quantization methods: a first quantization method, a second quantization method and a third quantization method, where the first quantization method sets a quantization parameter increment for the quantization parameter of at least one point in the point cloud, the second quantization method performs weighting processing on the residual values of points in the point cloud, and the third quantization method losslessly encodes the residual value of the attribute information of at least one point in the point cloud.
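The three quantization methods above can be sketched as a single helper. This is an illustrative sketch, not the patent's actual procedure: the function name and the QP-to-step mapping `2**(qp/6)` (a common video-codec-style convention) are assumptions.

```python
def quantize_residual(residual, qp, qp_delta=0, weight=1.0, lossless=False):
    """Quantize one attribute residual.

    Method 1: apply a quantization parameter increment (qp_delta).
    Method 2: apply weighting to the residual before quantization.
    Method 3: pass the residual through losslessly.
    """
    if lossless:                        # method 3: residual is coded losslessly
        return residual
    step = 2 ** ((qp + qp_delta) / 6)   # illustrative QP-to-step-size mapping
    weighted = residual * weight        # method 2: weight the residual
    return round(weighted / step)       # uniform scalar quantization
```

For example, with `qp=12` the step size is 4, so a residual of 100 quantizes to 25; adding `qp_delta=6` doubles the step size for the selected points.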
  • an embodiment of the present application provides a point cloud decoding method, including:
  • Inverse quantization is performed on the quantized residual value of the attribute information of the current point by using the target inverse quantization method to obtain the reconstructed residual value of the attribute information of the current point;
  • the target inverse quantization method includes at least two of the following inverse quantization methods: a first inverse quantization method, a second inverse quantization method and a third inverse quantization method, where the first inverse quantization method sets an inverse quantization parameter increment for the inverse quantization parameter of at least one point in the point cloud, the second inverse quantization method performs de-weighting processing on the residual values of points in the point cloud, and the third inverse quantization method losslessly decodes the residual value of the attribute information of at least one point in the point cloud.
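The decoder-side mirror of the three methods can be sketched in the same style (again an assumption-laden illustration; the step-size mapping must simply match whatever the encoder used):

```python
def inverse_quantize_residual(q_residual, qp, qp_delta=0, weight=1.0, lossless=False):
    """Reconstruct an attribute residual from its quantized value.

    Method 1: apply the signalled inverse quantization parameter increment.
    Method 2: de-weight the scaled residual.
    Method 3: pass the losslessly decoded residual through unchanged.
    """
    if lossless:                        # method 3: residual was coded losslessly
        return q_residual
    step = 2 ** ((qp + qp_delta) / 6)   # must match the encoder's mapping
    return (q_residual * step) / weight  # scale back, then de-weight
```

With `qp=12`, a quantized residual of 25 reconstructs to 100.0, the inverse of the encoder-side example.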
  • the present application provides a point cloud encoder for executing the method in the first aspect or each of its implementations.
  • the encoder includes a functional unit for executing the method in the above-mentioned first aspect or each of its implementations.
  • the present application provides a point cloud decoder for executing the method in the second aspect or each of its implementations.
  • the decoder includes functional units for performing the methods in the second aspect or the respective implementations thereof.
  • a point cloud encoder including a processor and a memory.
  • the memory is used for storing a computer program
  • the processor is used for calling and running the computer program stored in the memory, so as to execute the method in the above-mentioned first aspect or each implementation manner thereof.
  • a point cloud decoder including a processor and a memory.
  • the memory is used for storing a computer program
  • the processor is used for calling and running the computer program stored in the memory, so as to execute the method in the above-mentioned second aspect or each implementation manner thereof.
  • a point cloud encoding and decoding system including a point cloud encoder and a point cloud decoder.
  • the point cloud encoder is used to perform the method in the above first aspect or each of its implementations
  • the point cloud decoder is used to perform the above-mentioned method in the second aspect or each of its implementations.
  • a chip for implementing any one of the above-mentioned first aspect to the second aspect or the method in each implementation manner thereof.
  • the chip includes: a processor for invoking and running a computer program from a memory, so that a device on which the chip is installed executes the method in any one of the above-mentioned first to second aspects or each of their implementations.
  • a computer-readable storage medium for storing a computer program, the computer program causing a computer to execute the method in any one of the above-mentioned first aspect to the second aspect or each of its implementations.
  • a computer program product comprising computer program instructions, the computer program instructions causing a computer to perform the method in any one of the above-mentioned first to second aspects or the implementations thereof.
  • a computer program which, when run on a computer, causes the computer to perform the method in any one of the above-mentioned first to second aspects or the respective implementations thereof.
  • a twelfth aspect provides a code stream obtained by the method described in the first aspect.
  • in the embodiments of the present application, the attribute information of the current point in the point cloud is obtained; the attribute information of the current point is processed to obtain the residual value of the attribute information of the current point; and the residual value of the attribute information of the current point is quantized by using the target quantization method.
  • the target quantization method includes at least two of the following quantization methods: a first quantization method, a second quantization method and a third quantization method, where the first quantization method sets a quantization parameter increment for the quantization parameter of at least one point in the point cloud, the second quantization method performs weighting processing on the residual values of points in the point cloud, and the third quantization method losslessly encodes the residual value of the attribute information of at least one point in the point cloud, thereby improving the quantization effect of the attribute information of the points in the point cloud.
  • FIG. 1 is a schematic block diagram of a point cloud encoding and decoding system according to an embodiment of the application
  • FIG. 2 is a schematic block diagram of a point cloud encoder provided by an embodiment of the present application.
  • FIG. 3 is a schematic block diagram of a decoder provided by an embodiment of the present application.
  • FIG. 5 is a partial block diagram of an attribute decoding module involved in an embodiment of the application.
  • FIG. 6 is a schematic flowchart of a point cloud encoding method provided by an embodiment of the present application.
  • FIG. 7 is a schematic flowchart of a point cloud encoding method provided by an embodiment of the present application.
  • FIG. 8 is a schematic diagram of LOD division of some point clouds
  • FIG. 9 is a schematic diagram of LOD division of some point clouds.
  • FIG. 10 is a schematic flowchart of a point cloud encoding method provided by an embodiment of the present application.
  • FIG. 11 is a schematic diagram of another LOD division involved in an embodiment of the application.
  • FIG. 12A is a schematic diagram of another LOD division involved in an embodiment of the present application.
  • FIG. 12B is a schematic diagram of another LOD division involved in an embodiment of the present application.
  • FIG. 13 is a schematic flowchart of a point cloud encoding method provided by an embodiment of the present application.
  • FIG. 14 is a schematic flowchart of a point cloud decoding method provided by an embodiment of the present application.
  • FIG. 15 is a schematic flowchart of a point cloud decoding method provided by an embodiment of the application.
  • FIG. 16 is a schematic flowchart of a point cloud decoding method provided by an embodiment of the present application.
  • FIG. 17 is a schematic flowchart of a point cloud decoding method provided by an embodiment of the present application.
  • FIG. 18 is a schematic block diagram of a point cloud encoder provided by an embodiment of the present application.
  • FIG. 19 is a schematic block diagram of a point cloud decoder provided by an embodiment of the present application.
  • FIG. 20 is a schematic block diagram of an electronic device provided by an embodiment of the present application.
  • FIG. 21 is a schematic block diagram of a point cloud encoding and decoding system provided by an embodiment of the present application.
  • the present application can be applied to the technical field of point cloud compression.
  • Point cloud refers to a set of discrete points in space that are irregularly distributed and express the spatial structure and surface properties of 3D objects or 3D scenes.
  • Point cloud data is a specific record form of point cloud, and the points in the point cloud can include point location information and point attribute information.
  • the position information of the point may be three-dimensional coordinate information of the point.
  • the position information of the point may also be referred to as the geometric information of the point.
  • the attribute information of the points may include color information and/or reflectivity, among others.
  • the color information may be information in any color space.
  • the color information may be RGB information.
  • the color information may be luminance-chrominance (YCbCr, YUV) information, where Y represents luminance (Luma), Cb (U) represents the blue color difference, Cr (V) represents the red color difference, and U and V represent chrominance (Chroma) for describing color difference information.
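An RGB-to-YCbCr conversion such as the color space transform unit performs can be sketched with the BT.601 full-range matrix. This is one common choice of matrix, assumed here for illustration; the codec may use a different one.

```python
def rgb_to_ycbcr(r, g, b):
    """Convert 8-bit RGB to YCbCr (BT.601 full-range coefficients)."""
    y  =  0.299    * r + 0.587    * g + 0.114    * b          # luminance
    cb = -0.168736 * r - 0.331264 * g + 0.5      * b + 128    # blue difference
    cr =  0.5      * r - 0.418688 * g - 0.081312 * b + 128    # red difference
    return y, cb, cr
```

For white, `rgb_to_ycbcr(255, 255, 255)` gives Y = 255 with both chroma components at the neutral value 128.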
  • a point cloud obtained according to the principle of laser measurement the points in the point cloud may include three-dimensional coordinate information of the point and laser reflection intensity (reflectance) of the point.
  • a point cloud obtained according to the principle of photogrammetry the points in the point cloud may include three-dimensional coordinate information of the point and color information of the point.
  • a point cloud is obtained by combining the principles of laser measurement and photogrammetry, and the points in the point cloud may include three-dimensional coordinate information of the point, laser reflection intensity (reflectance) of the point, and color information of the point.
  • the acquisition approach of point cloud data may include, but is not limited to, at least one of the following: (1) Generated by computer equipment.
  • the computer device can generate point cloud data according to the virtual three-dimensional object and the virtual three-dimensional scene.
  • the visual scene of the real world is acquired through a 3D photography device (i.e., a set of cameras, or a camera device with multiple lenses and sensors) to obtain point cloud data of the real-world visual scene, and point cloud data of dynamic real-world three-dimensional objects can be obtained through 3D photography.
  • point cloud data of biological tissues and organs can be obtained by medical equipment such as Magnetic Resonance Imaging (MRI), Computed Tomography (CT), and electromagnetic positioning information.
  • according to the acquisition method, point clouds can be divided into dense point clouds and sparse point clouds.
  • according to the timing of acquisition, the point cloud is divided into:
  • the first type, static point cloud: the object is static, and the device that acquires the point cloud is also static;
  • the second type, dynamic point cloud: the object is moving, but the device that acquires the point cloud is stationary;
  • the third type, dynamically acquired point cloud: the device that acquires the point cloud is moving.
  • according to the use of point clouds, they are divided into two categories:
  • Category 1: machine perception point clouds, which can be used in scenarios such as autonomous navigation systems, real-time inspection systems, geographic information systems, visual sorting robots, and rescue and disaster relief robots;
  • Category 2: human eye perception point clouds, which can be used in point cloud application scenarios such as digital cultural heritage, free viewpoint broadcasting, 3D immersive communication, and 3D immersive interaction.
  • FIG. 1 is a schematic block diagram of a point cloud encoding and decoding system according to an embodiment of the present application. It should be noted that FIG. 1 is only an example, and the point cloud encoding and decoding system in the embodiment of the present application includes but is not limited to that shown in FIG. 1 .
  • the point cloud encoding and decoding system 100 includes an encoding device 110 and a decoding device 120 .
  • the encoding device is used to encode the point cloud data (which can be understood as compression) to generate a code stream, and transmit the code stream to the decoding device.
  • the decoding device decodes the code stream encoded by the encoding device to obtain decoded point cloud data.
  • the encoding device 110 in the embodiment of the present application can be understood as a device with a point cloud encoding function,
  • and the decoding device 120 can be understood as a device with a point cloud decoding function; that is, the encoding device 110 and the decoding device 120 cover a wide range of devices, including, for example, smartphones, desktop computers, mobile computing devices, notebook (e.g., laptop) computers, tablet computers, set-top boxes, televisions, cameras, display devices, digital media players, point cloud game consoles, in-vehicle computers, and the like.
  • the encoding device 110 may transmit the encoded point cloud data (eg, a code stream) to the decoding device 120 via the channel 130 .
  • Channel 130 may include one or more media and/or devices capable of transmitting encoded point cloud data from encoding device 110 to decoding device 120 .
  • channel 130 includes one or more communication media that enables encoding device 110 to transmit encoded point cloud data directly to decoding device 120 in real-time.
  • encoding apparatus 110 may modulate the encoded point cloud data according to a communication standard, and transmit the modulated point cloud data to decoding apparatus 120 .
  • the communication medium includes a wireless communication medium, such as a radio frequency spectrum, optionally, the communication medium may also include a wired communication medium, such as one or more physical transmission lines.
  • channel 130 includes a storage medium that can store point cloud data encoded by encoding device 110.
  • Storage media include a variety of locally accessible data storage media such as optical discs, DVDs, flash memory, and the like.
  • the decoding device 120 may obtain the encoded point cloud data from the storage medium.
  • channel 130 may include a storage server that may store point cloud data encoded by encoding device 110 .
  • the decoding device 120 may download the stored encoded point cloud data from the storage server.
  • the storage server may store the encoded point cloud data and may transmit the encoded point cloud data to the decoding device 120; the storage server may be, for example, a web server (e.g., for a website), a file transfer protocol (FTP) server, or the like.
  • the encoding device 110 includes a point cloud encoder 112 and an output interface 113 .
  • the output interface 113 may include a modulator/demodulator (modem) and/or a transmitter.
  • the encoding device 110 may include a point cloud source 111 in addition to the point cloud encoder 112 and the output interface 113 .
  • the point cloud source 111 may include at least one of: a point cloud acquisition device (e.g., a scanner), a point cloud archive, a point cloud input interface for receiving point cloud data from a point cloud content provider, and a computer graphics system for generating point cloud data.
  • the point cloud encoder 112 encodes the point cloud data from the point cloud source 111 to generate a code stream.
  • the point cloud encoder 112 directly transmits the encoded point cloud data to the decoding device 120 via the output interface 113 .
  • the encoded point cloud data may also be stored on a storage medium or a storage server for subsequent reading by the decoding device 120 .
  • decoding device 120 includes input interface 121 and point cloud decoder 122 .
  • the decoding device 120 may include a display device 123 in addition to the input interface 121 and the point cloud decoder 122 .
  • the input interface 121 includes a receiver and/or a modem.
  • the input interface 121 can receive the encoded point cloud data through the channel 130 .
  • the point cloud decoder 122 is configured to decode the encoded point cloud data, obtain the decoded point cloud data, and transmit the decoded point cloud data to the display device 123 .
  • the display device 123 displays the decoded point cloud data.
  • the display device 123 may be integrated with the decoding apparatus 120 or external to the decoding apparatus 120 .
  • the display device 123 may include various display devices, such as a liquid crystal display (LCD), a plasma display, an organic light emitting diode (OLED) display, or other types of display devices.
  • FIG. 1 is only an example, and the technical solutions of the embodiments of the present application are not limited to FIG. 1 .
  • the technology of the present application may also be applied to point cloud encoding only or point cloud decoding only.
  • the current point cloud encoder can use the Geometry-based Point Cloud Compression (G-PCC) codec framework or the Video-based Point Cloud Compression (V-PCC) codec framework provided by the Moving Picture Experts Group (MPEG), or the AVS-PCC codec framework provided by the Audio Video Standard (AVS) working group. Both G-PCC and AVS-PCC are aimed at static sparse point clouds, and their coding frameworks are roughly the same.
  • the G-PCC codec framework can be used to compress the first static point cloud and the third type of dynamically acquired point cloud, and the V-PCC codec framework can be used to compress the second type of dynamic point cloud.
  • the G-PCC codec framework is also called point cloud codec TMC13, and the V-PCC codec framework is also called point cloud codec TMC2.
  • the following takes the G-PCC encoding and decoding framework as an example to describe the applicable point cloud encoder and point cloud decoder in the embodiments of the present application.
  • FIG. 2 is a schematic block diagram of a point cloud encoder provided by an embodiment of the present application.
  • the points in the point cloud may include the position information of the point and the attribute information of the point. Therefore, the encoding of the point in the point cloud mainly includes the position encoding and the attribute encoding.
  • the position information of the point in the point cloud is also called geometric information, and the position encoding of the corresponding point in the point cloud may also be called geometric encoding.
  • the process of position encoding includes: preprocessing the points in the point cloud, such as coordinate transformation, quantization, and removal of duplicate points; then geometrically encoding the preprocessed point cloud, for example by constructing an octree and performing geometry encoding based on the constructed octree to form a geometry code stream.
  • the position information of each point in the point cloud data is reconstructed, and the reconstructed value of the position information of each point is obtained.
  • the attribute coding process includes: given the reconstructed information of the position information of the input point cloud and the original values of the attribute information, selecting one of the three prediction modes for point cloud prediction, quantizing the prediction result, and performing arithmetic coding to form an attribute code stream.
  • position encoding can be achieved by the following units:
  • Coordinate transformation (Transform coordinates) unit 201, quantize and remove duplicate points (Quantize and remove points) unit 202, octree analysis (Analyze octree) unit 203, geometric reconstruction (Reconstruct geometry) unit 204, and first arithmetic coding (Arithmetic encode) unit 205.
  • the coordinate transformation unit 201 can be used to transform the world coordinates of the points in the point cloud into relative coordinates; for example, the minimum values of the x, y and z coordinate axes are respectively subtracted from the geometric coordinates of the points, which is equivalent to a DC-removal operation, so as to convert the coordinates of the points in the point cloud from world coordinates to relative coordinates.
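The coordinate transformation described above (subtracting the per-axis minimum) can be sketched as follows; the function name is an illustrative assumption:

```python
def to_relative_coordinates(points):
    """Shift world coordinates so the minimum of each axis becomes 0,
    as done by the coordinate transformation unit (a minimal sketch)."""
    mins = [min(p[i] for p in points) for i in range(3)]  # per-axis minimum
    return [tuple(p[i] - mins[i] for i in range(3)) for p in points]
```

For example, `to_relative_coordinates([(3, 5, 7), (4, 5, 9)])` returns `[(0, 0, 0), (1, 0, 2)]`.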
  • the quantization and duplicate point removal unit 202 can reduce the number of coordinates through quantization; after quantization, points that were originally different may be assigned the same coordinates, and based on this, duplicate points can be deleted through a deduplication operation; for example, multiple points with the same quantized position but different attribute information can be merged into one point through attribute transformation.
  • the quantization and removal of duplicate points unit 202 is an optional unit module.
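A minimal sketch of position quantization followed by duplicate-point merging. The attribute merge here is a plain average for illustration only; the codec's actual attribute transfer is more involved.

```python
def quantize_and_dedupe(points, attrs, scale):
    """Quantize positions by `scale` and merge points that land on the
    same quantized position, averaging their attributes (illustrative)."""
    merged = {}
    for p, a in zip(points, attrs):
        q = tuple(int(c * scale) for c in p)   # quantized position
        merged.setdefault(q, []).append(a)     # group duplicates
    # one point per quantized position, with the mean attribute
    return [(q, sum(v) / len(v)) for q, v in merged.items()]
```

For example, two points at (0.1, 0, 0) and (0.2, 0, 0) with attributes 10 and 20 merge into a single point at (0, 0, 0) with attribute 15.0 at `scale=1`.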
  • the octree analysis unit 203 may encode the position information of the quantized points using an octree encoding method.
  • the point cloud is divided in the form of an octree, so that the position of the point can be in a one-to-one correspondence with the position of the octree.
  • if a divided node (sub-cube) is not empty, its flag is recorded as 1, and the flags are used for geometry encoding.
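The per-node occupancy flags can be sketched as an 8-bit occupancy code, one bit per child sub-cube. This is a simplified illustration; real G-PCC occupancy coding additionally uses context-adaptive arithmetic coding.

```python
def occupancy_byte(points, origin, half):
    """8-bit occupancy code of one octree node: bit i is set to 1 if
    child sub-cube i contains at least one point."""
    code = 0
    for x, y, z in points:
        # child index from which half of the node each coordinate falls in
        i = ((x >= origin[0] + half) << 2) | \
            ((y >= origin[1] + half) << 1) | \
            ((z >= origin[2] + half))
        code |= 1 << i
    return code
```

For an 8-wide node at the origin (`half=4`), a point at (0, 0, 0) yields code 1 (only child 0 occupied), while a point at (4, 4, 4) yields code 128 (only child 7 occupied).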
  • the geometric reconstruction unit 204 may perform position reconstruction based on the position information output by the octree analysis unit 203 to obtain a reconstructed value of the position information of each point in the point cloud data.
  • the first arithmetic coding unit 205 can use entropy coding to perform arithmetic coding on the position information output by the octree analysis unit 203, i.e., generate a geometric code stream from the position information output by the octree analysis unit 203 by using arithmetic coding; the geometric code stream may also be called a geometry bitstream.
  • Attribute encoding can be achieved through the following units:
  • Color space transform (Transform colors) unit 210, attribute transform (Transfer attributes) unit 211, Region Adaptive Hierarchical Transform (RAHT) unit 212, predicting transform unit 213, lifting transform unit 214, quantize coefficients (Quantize coefficients) unit 215, and second arithmetic coding unit 216.
  • point cloud encoder 200 may include more, less or different functional components than those shown in FIG. 2 .
  • the color space conversion unit 210 may be used to convert the RGB color space of the points in the point cloud to YCbCr format or other formats.
  • the attribute transformation unit 211 can be used to transform attribute information of points in the point cloud to minimize attribute distortion.
  • the attribute conversion unit 211 may be used to obtain the original value of the attribute information of the point.
  • the attribute information may be color information of points.
  • any prediction unit can be selected to predict the point in the point cloud.
  • the prediction unit may include: RAHT 212 , a predicting transform unit 213 and a lifting transform unit 214 .
  • any one of the RAHT unit 212, the predicting transform unit 213 and the lifting transform unit 214 can be used to predict the attribute information of a point in the point cloud, so as to obtain the predicted value of the attribute information of the point; further, the residual value of the attribute information of the point is obtained based on the predicted value of the attribute information of the point.
  • the residual value of the attribute information of the point may be the original value of the attribute information of the point minus the predicted value of the attribute information of the point.
  • the predictive transform unit 213 may also be used to generate a level of detail (LOD).
  • LOD generation process includes: obtaining the Euclidean distance between points according to the position information of the points in the point cloud; dividing the points into different detail expression layers according to the Euclidean distance.
  • different ranges of Euclidean distances may be assigned to different detail expression layers. For example, a point can be randomly selected as the first detail expression layer; the Euclidean distances between the remaining points and that point are then calculated, and the points whose Euclidean distance meets the first threshold are classified into the second detail expression layer.
  • the point cloud can be directly divided into one or more detail expression layers, or the point cloud can first be divided into multiple point cloud slices, and each point cloud slice can then be divided into one or more LOD layers.
  • the point cloud can be divided into multiple point cloud slices, and the number of points in each point cloud slice can be between 550,000 and 1.1 million.
  • Each point cloud slice can be seen as a separate point cloud.
  • Each point cloud slice can be divided into multiple detail expression layers, and each detail expression layer includes multiple points.
  • the detail expression layer can be divided according to the Euclidean distance between points.
  • the quantization unit 215 may be used to quantize residual values of attribute information of points. For example, if the quantization unit 215 is connected to the prediction transformation unit 213 , the quantization unit can be used to quantize the residual value of the attribute information of the point output by the prediction transformation unit 213 .
  • the residual value of the attribute information of the point output by the predictive transformation unit 213 is quantized using a quantization step size, so as to improve the system performance.
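As an illustration of quantization with a quantization step size, the following is a minimal sketch; the function names and the round-to-nearest convention are assumptions for illustration, not taken from the G-PCC specification.

```python
def quantize(residual: int, qstep: int) -> int:
    # Sign-preserving uniform quantization: round |residual| / qstep to nearest.
    sign = 1 if residual >= 0 else -1
    return sign * ((abs(residual) + qstep // 2) // qstep)

def dequantize(level: int, qstep: int) -> int:
    # Inverse quantization scales the quantized level back by the step size.
    return level * qstep

# A residual of 10 with step size 4 quantizes to level 3 and
# reconstructs to 12, illustrating the lossy round trip.
```

A larger step size removes more detail, which is how the quantization step trades fidelity against bitrate.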
  • the second arithmetic coding unit 216 may perform entropy coding on the residual value of the attribute information of the point by using zero run length coding, so as to obtain the attribute code stream.
  • the attribute code stream may be bit stream information.
  • FIG. 3 is a schematic block diagram of a decoder provided by an embodiment of the present application.
  • the decoding framework 300 can obtain the code stream of the point cloud from the encoding device, and obtain the position information and attribute information of the points in the point cloud by parsing the code stream.
  • the decoding of point cloud includes position decoding and attribute decoding.
  • the process of position decoding includes: performing arithmetic decoding on the geometric code stream; constructing an octree and merging, and reconstructing the position information of the points to obtain reconstruction information of the position information of the points; and performing coordinate transformation on the reconstruction information of the position information of the points to obtain the position information of the points.
  • the position information of the point may also be referred to as the geometric information of the point.
  • the attribute decoding process includes: obtaining the quantized residual value of the attribute information of the points in the point cloud by parsing the attribute code stream; performing inverse quantization on the quantized residual value to obtain the inverse-quantized residual value of the attribute information of the points; based on the reconstruction information of the position information of the points obtained in the position decoding process, selecting one of the following three prediction modes, RAHT, predicting transform and lifting transform, to predict the point cloud and obtain the predicted value.
  • the predicted value is added to the residual value to obtain the reconstructed value of the attribute information of the point; color space inverse transformation is then performed on the reconstructed value of the attribute information of the point to obtain the decoded point cloud.
  • position decoding can be achieved by the following units:
  • a first arithmetic decoding unit 301, an octree analysis (synthesize octree) unit 302, a geometric reconstruction (reconstruct geometry) unit 304 and a coordinate inverse transform (inverse transform coordinates) unit 305.
  • Attribute decoding can be achieved through the following units:
  • a second arithmetic decoding unit 310, an inverse quantize unit 311, a RAHT unit 312, a predicting transform unit 313, a lifting transform unit 314, and an inverse transform colors unit 315.
  • decompression is an inverse process of compression
  • the functions of each unit in the decoder 300 may refer to the functions of the corresponding units in the encoder 200 .
  • point cloud decoder 300 may include more, fewer, or different functional components than FIG. 3 .
  • the decoder 300 may divide the point cloud into a plurality of LODs according to the Euclidean distance between the points in the point cloud, and then sequentially decode the attribute information of the points in each LOD: it parses the number of zeros (zero_cnt) to decode the residual based on zero_cnt; then, the decoder 300 may perform inverse quantization on the decoded residual value, and add the inverse-quantized residual value to the predicted value of the current point to obtain the reconstructed value of the current point, until all points in the point cloud have been decoded.
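To illustrate the role of zero_cnt, a hedged sketch of run-length decoding of residuals is given below; the (zero_cnt, value) pair layout is an assumption for illustration, not the actual attribute bitstream syntax.

```python
def zero_run_decode(pairs, n):
    # Each (zero_cnt, value) pair contributes zero_cnt zero residuals
    # followed by one nonzero residual value; trailing zeros fill up to n.
    residuals = []
    for zero_cnt, value in pairs:
        residuals.extend([0] * zero_cnt)
        residuals.append(value)
    residuals.extend([0] * (n - len(residuals)))
    return residuals

# Two pairs expand to six residuals: [0, 0, 5, 0, 3, 0].
decoded = zero_run_decode([(2, 5), (1, 3)], 6)
```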
  • the current point will then serve as a nearest neighbor of points in subsequent LODs, and the reconstructed value of the current point will be used to predict the attribute information of subsequent points.
  • the point cloud encoder 200 mainly includes two parts functionally: a position encoding module and an attribute encoding module, wherein the position encoding module is used to encode the position information of the point cloud to form a geometric code stream, and the attribute encoding module is used to encode the attribute information of the point cloud to form an attribute code stream.
  • the present application mainly relates to the encoding of attribute information. The following describes the attribute encoding module in the point cloud encoder involved in the present application with reference to FIG. 4 .
  • FIG. 4 is a partial block diagram of an attribute encoding module involved in an embodiment of the present application.
  • the attribute encoding module 400 can be understood as a unit for implementing attribute information encoding in the point cloud encoder 200 shown in FIG. 2 above.
  • the attribute encoding module 400 includes: a preprocessing unit 410, a residual unit 420, a quantization unit 430, a prediction unit 440, an inverse quantization unit 450, a reconstruction unit 460, a filtering unit 470, a decoding buffer unit 480, and an encoding unit 490.
  • the attribute encoding module 400 may further include more, less or different functional components.
  • the preprocessing unit 410 may include the color space conversion unit 210 and the attribute conversion unit 211 shown in FIG. 2 .
  • the quantization unit 430 may be understood as the quantization coefficient unit 215 in the above-mentioned FIG. 2
  • the encoding unit 490 may be understood as the second arithmetic encoding unit 216 in the above-mentioned FIG. 2 .
  • the prediction unit 440 may include the RAHT 212, the prediction change unit 213, and the boost change unit 214 shown in FIG. 2 .
  • the prediction unit 440 is specifically used to obtain the reconstruction information of the position information of the points in the point cloud, and, based on that reconstruction information, to select any one of the RAHT unit 212, the predicting transform unit 213 and the lifting transform unit 214 to predict the attribute information of the points in the point cloud, so as to obtain the predicted value of the attribute information of the points.
  • the residual unit 420 can obtain the residual value of the attribute information of a point in the point cloud based on the original value and the predicted value of the attribute information of the point; for example, the predicted value of the attribute information is subtracted from the original value of the attribute information of the point to obtain the residual value of the attribute information of the point.
  • the quantization unit 430 may quantize the residual value of the attribute information, specifically, the quantization unit 430 may quantize the residual value of the attribute information of the point based on a quantization parameter (QP) value associated with the point cloud.
  • the point cloud encoder can adjust the degree of quantization applied to the points by adjusting the QP value associated with the point cloud.
  • the inverse quantization unit 450 may respectively apply inverse quantization to the residual values of the quantized attribute information to reconstruct the residual values of the attribute information from the residual values of the quantized attribute information.
  • the reconstruction unit 460 may add the residual value of the reconstructed attribute information to the predicted value generated by the prediction unit 440 to generate a reconstructed value of the attribute information of the point in the point cloud.
  • Filtering unit 470 may remove or reduce noise in reconstruction operations.
  • the decoding buffer unit 480 may store reconstructed values of attribute information of points in the point cloud.
  • the prediction unit 440 may use the reconstructed value of the attribute information of the point to predict the attribute information of other points.
  • the point cloud decoder 300 mainly includes two parts functionally: a position decoding module and an attribute decoding module, wherein the position decoding module is used to decode the geometric code stream of the point cloud to obtain the position information of the points, and the attribute decoding module is used to decode the attribute code stream of the point cloud to obtain the attribute information of the points.
  • the attribute decoding module in the point cloud decoder involved in the present application will be introduced below with reference to FIG. 5 .
  • FIG. 5 is a partial block diagram of an attribute decoding module involved in an embodiment of the present application.
  • the attribute decoding module 500 can be understood as a unit for implementing attribute code stream decoding in the point cloud decoder 300 shown in FIG. 3 above.
  • the attribute decoding module 500 includes: a decoding unit 510 , a prediction unit 520 , an inverse quantization unit 530 , a reconstruction unit 540 , a filtering unit 550 and a decoding buffer unit 560 .
  • the attribute decoding module 500 may include more, less or different functional components.
  • the attribute decoding module 500 can receive the attribute code stream.
  • Decoding unit 510 may parse the attribute codestream to extract syntax elements from the attribute codestream. As part of parsing the attribute codestream, decoding unit 510 may parse the encoded syntax elements in the attribute codestream.
  • the prediction unit 520, the inverse quantization unit 530, the reconstruction unit 540, and the filtering unit 550 may decode attribute information according to syntax elements extracted from the attribute codestream.
  • the prediction unit 520 may determine the prediction mode of the point according to one or more syntax elements parsed from the codestream, and use the determined prediction mode to predict the attribute information of the point.
  • the inverse quantization unit 530 may inverse quantize (ie, dequantize) the residual value of the quantized attribute information associated with the point in the point cloud to obtain the residual value of the attribute information of the point. Inverse quantization unit 530 may use the QP value associated with the point cloud to determine the degree of quantization.
  • the reconstruction unit 540 uses the residual value of the attribute information of the point in the point cloud and the predicted value of the attribute information of the point in the point cloud to reconstruct the attribute information of the point in the point cloud. For example, the reconstruction unit 540 may add the residual value of the attribute information of the point in the point cloud to the predicted value of the attribute information of the point to obtain the reconstructed value of the attribute information of the point.
  • Filtering unit 550 may remove or reduce noise in reconstruction operations.
  • the attribute decoding module 500 may store the reconstructed value of the attribute information of the point in the point cloud in the decoding buffer unit 560 .
  • the attribute decoding module 500 may use the reconstructed value of the attribute information in the decoding buffer unit 560 as a reference point for subsequent prediction, or transmit the reconstructed value of the attribute information to the display device for presentation.
  • the basic process of encoding and decoding the attribute information of the point cloud is as follows: at the encoding end, the attribute information of the point cloud data is preprocessed, and the original value of the attribute information of the point in the point cloud is obtained.
  • the prediction unit 440 selects one of the above three prediction methods based on the reconstructed value of the position information of the points in the point cloud, predicts the attribute information of the points in the point cloud, and obtains the predicted value of the attribute information.
  • the residual unit 420 may calculate the residual value of the attribute information based on the original value of the attribute information of the points in the point cloud and the predicted value of the attribute information; that is, the difference between the original value of the attribute information of the points and the predicted value of the attribute information is used as the residual value of the attribute information of the points in the point cloud.
  • the residual value is quantized by the quantization unit 430, which can remove information insensitive to human eyes, so as to eliminate visual redundancy.
  • the encoding unit 490 receives the residual value of the quantized attribute information output by the quantization unit 430, and can encode the residual value of the quantized attribute information, and output an attribute code stream.
  • the inverse quantization unit 450 may also receive the residual value of the quantized attribute information output by the quantization unit 430, and perform inverse quantization on the residual value of the quantized attribute information to obtain the residual value of the attribute information of the points in the point cloud. difference.
  • the reconstruction unit 460 obtains the residual value of the attribute information of the points in the point cloud output by the inverse quantization unit 450 and the predicted value of the attribute information of the points output by the prediction unit 440, and adds the residual value and the predicted value to obtain the reconstructed value of the attribute information of the points.
  • the reconstructed value of the attribute information of the point is filtered by the filtering unit 470 and then buffered in the decoding buffer unit 480 for use in the subsequent prediction process of other points.
  • the decoding unit 510 can parse the attribute code stream to obtain the quantized residual value, prediction information, quantization coefficients, etc. of the attribute information of the points in the point cloud, and the prediction unit 520 predicts the attribute information of the points based on the prediction information to generate the predicted value of the attribute information of the points.
  • the inverse quantization unit 530 performs inverse quantization on the residual value of the quantized attribute information of the point by using the quantization coefficient obtained from the attribute code stream, so as to obtain the residual value of the attribute information of the point.
  • the reconstruction unit 540 adds the predicted value of the attribute information of the point and the residual value to obtain the reconstructed value of the attribute information of the point.
  • the filtering unit 550 filters the reconstructed value of the attribute information of the point to obtain the decoded attribute information.
  • the mode information or parameter information such as prediction, quantization, encoding, filtering, etc., determined during the encoding of the attribute information at the encoding end is carried in the attribute code stream when necessary.
  • the decoding end determines the same prediction, quantization, coding, filtering and other mode or parameter information as the encoding end by parsing the attribute code stream and analyzing the existing information, so as to ensure that the reconstructed value of the attribute information obtained by the encoding end is the same as the reconstructed value of the attribute information obtained by the decoding end.
  • the above is the basic process of the point cloud codec based on the G-PCC codec framework. With the development of technology, some modules or steps of the framework or process may be optimized. This application is applicable to the basic process of the point cloud codec based on the G-PCC codec framework, but is not limited to this framework and process.
  • the encoding end will be introduced below with reference to FIG. 6 .
  • FIG. 6 is a schematic flowchart of a point cloud encoding method provided by an embodiment of the present application, and the embodiment of the present application is applied to the point cloud encoder shown in FIG. 1 , FIG. 2 , and FIG. 4 .
  • the method of the embodiment of the present application includes:
  • the current point is also referred to as the point to be processed, the point being processed, the point to be encoded, or the like.
  • the point cloud includes a plurality of points, and each point can include the geometric information of the point and the attribute information of the point.
  • the geometric information of the point may also be referred to as the position information of the point, and the position information of the point may be three-dimensional coordinate information of the point.
  • the attribute information of the point may include color information and/or reflectivity and the like.
  • the point cloud encoder may obtain the original attribute information of the point in the point cloud as the original value of the attribute information of the point.
  • after obtaining the original attribute information of the points in the point cloud, the point cloud encoder performs color space conversion on the original attribute information, for example converting the RGB color space of the points into the YCbCr format or other formats; attribute transformation is then performed on the color-space-converted points to minimize attribute distortion, and the original value of the attribute information of the points is obtained.
  • the predicted value of the attribute information of the current point in the point cloud is determined according to the geometric information of the point in the point cloud.
  • the geometry of the points in the point cloud is encoded to obtain the reconstructed value of the geometric information of the current point in the point cloud; the predicted value of the attribute information of the current point is determined according to the reconstructed value of the geometric information of the current point; and the residual value of the attribute information of the current point is determined according to the predicted value and the original value of the attribute information of the current point.
  • the original value of the attribute information is also referred to as the true value of the attribute information.
  • after the point cloud encoder completes encoding the geometric information of the points in the point cloud, it then encodes the attribute information of the points in the point cloud.
  • the first arithmetic encoding unit 205 encodes the geometric information of the points processed by the octree unit 203 to form a geometric code stream, and the geometric reconstruction unit 204 reconstructs the geometric information of the points processed by the octree unit 203 to obtain the reconstructed value of the geometric information.
  • the prediction unit can obtain the reconstructed value of the geometric information output by the geometric reconstruction unit 204 .
  • the points in the point cloud are sorted according to the reconstructed value of the geometric information of the points in the point cloud, and the sorted point cloud is obtained.
  • the Morton code of each point is determined, and the points in the point cloud are sorted according to the Morton codes to obtain the Morton order of the points.
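A Morton code interleaves the bits of a point's x, y and z coordinates, so that sorting by the code yields the Morton (z-order) order; the sketch below is illustrative, with the helper name and bit width chosen arbitrarily.

```python
def morton_code(x: int, y: int, z: int, bits: int = 10) -> int:
    # Interleave one bit from each coordinate per level of the z-order curve.
    code = 0
    for i in range(bits):
        code |= ((x >> i) & 1) << (3 * i)
        code |= ((y >> i) & 1) << (3 * i + 1)
        code |= ((z >> i) & 1) << (3 * i + 2)
    return code

# Sorting points by their Morton code gives the Morton order of the cloud.
points = [(3, 1, 2), (0, 0, 0), (1, 1, 1)]
morton_order = sorted(points, key=lambda p: morton_code(*p))
```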
  • For the current point in the sorted point cloud, at least one adjacent point of the current point is obtained from the points whose attribute information has been encoded, and the predicted value of the attribute information of the current point is predicted according to the reconstructed value of the attribute information of the at least one adjacent point.
  • the methods of predicting the predicted value of the attribute information of the current point according to the reconstructed value of the attribute information of at least one adjacent point of the current point include but are not limited to the following:
  • Method 1: the average value of the reconstructed values of the attribute information of the at least one adjacent point is used as the predicted value of the attribute information of the current point.
  • Method 2: assuming that the at least one adjacent point is K adjacent points, the reconstructed value of the attribute information of each of the K adjacent points is used as a prediction reference value of the current point, obtaining K prediction reference values. In addition, the average value of the reconstructed values of the attribute information of the K adjacent points is used as another prediction reference value, so that the current point has K+1 prediction reference values in total. The rate-distortion optimization (RDO) cost corresponding to each of the K+1 prediction reference values is calculated, and the prediction reference value with the smallest RDO cost is used as the predicted value of the attribute information of the current point.
  • the residual value of the attribute information of the point in the point cloud is determined.
  • the difference between the original value of the attribute information of the current point and the predicted value of the attribute information is determined as the residual value of the attribute information of the current point.
  • the predicted value and the original value of the attribute information of the current point can be obtained according to the above steps, and the difference between the original value of the attribute information of the current point and the predicted value of the attribute information is taken as the residual value of the attribute information of the current point:
  • attrResidual = attrValue - attrPredValue  (1)
  • where attrResidual is the residual value of the attribute information, attrValue is the original value of the attribute information, and attrPredValue is the predicted value of the attribute information.
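Equation (1) and the matching decoder-side reconstruction can be written as a small sketch (the function names are illustrative, not taken from the specification):

```python
def attr_residual(attr_value: int, attr_pred_value: int) -> int:
    # Equation (1): attrResidual = attrValue - attrPredValue
    return attr_value - attr_pred_value

def attr_reconstruct(attr_pred_value: int, residual: int) -> int:
    # Decoder side: reconstructed value = predicted value + residual value
    return attr_pred_value + residual
```

Without quantization the round trip is exact; with quantization, the decoder adds the inverse-quantized residual instead, so the reconstruction is approximate.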
  • the residual unit 420 can calculate the residual value of the attribute information based on the original value of the attribute information of the point in the point cloud and the predicted value of the attribute information.
  • the residual value is quantized by the quantization unit 430, which can remove information insensitive to human eyes, so as to eliminate visual redundancy.
  • the inverse quantization unit 450 may also receive the residual value of the quantized attribute information output by the quantization unit 430, and perform inverse quantization on the residual value of the quantized attribute information to obtain the residual value of the attribute information of the point in the point cloud. .
  • the reconstruction unit 460 obtains the residual value of the attribute information of the point in the point cloud output by the inverse quantization unit 450, and the predicted value of the attribute information of the point in the point cloud output by the prediction unit 410, and converts the residual value of the attribute information of the point in the point cloud. The value and the predicted value are added to obtain the reconstructed value of the attribute information of the point. The reconstructed value of the attribute information of the point is buffered in the decoding buffer unit 480 and used for the subsequent prediction process of other points.
  • the embodiment of the present application uses a target quantization method to quantize the residual value of the attribute information of the current point, and obtains the quantized residual value of the attribute information of the current point.
  • the target quantization method includes at least two of the following quantization methods: a first quantization method, a second quantization method and a third quantization method, wherein the first quantization method sets a quantization parameter increment for the quantization parameter of at least one point in the point cloud, the second quantization method performs weighting processing on the residual values of the points in the point cloud, and the third quantization method performs lossless encoding on the residual value of the attribute information of at least one point in the point cloud.
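The three quantization methods can be contrasted in a rough sketch; the function names, arguments and rounding below are assumptions for illustration only, not the codec's actual interfaces.

```python
def first_method_qp(base_qp: int, qp_delta: int) -> int:
    # First method: a quantization parameter increment is set on top of the
    # base QP for at least one point (e.g., per LOD layer).
    return base_qp + qp_delta

def second_method_residual(residual: float, weight: float) -> float:
    # Second method: the residual of a point is weighted before quantization.
    return residual * weight

def third_method_code(residual: int, lossless: bool, qstep: int) -> int:
    # Third method: selected points skip quantization, i.e., their residuals
    # are carried losslessly; the remaining points are quantized as usual.
    if lossless:
        return residual
    return round(residual / qstep)
```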
  • the first quantization manner may also be referred to as a progressive quantization manner
  • the second quantization manner may also be referred to as an adaptive quantization manner
  • the third quantization manner may also be referred to as an equal interval non-quantization manner.
  • the point cloud encoding process in this embodiment of the present application is shown in FIG. 7 .
  • FIG. 7 is a schematic flowchart of a point cloud encoding method provided by an embodiment of the present application, including:
  • each LOD layer includes at least one detail expression layer, and each detail expression layer includes at least one point.
  • the Morton codes of the points are obtained, and the Morton order of the point cloud is obtained by sorting according to the Morton codes; LOD partitioning is then performed based on the Morton order of the point cloud.
  • the original order of the point cloud is obtained by sorting the point cloud according to the geometric information of the points in the point cloud. LOD partitioning is performed based on the original order of the point cloud.
  • LOD division is performed on the sorted point cloud, for example, a point is randomly selected from the sorted point cloud as the first detail expression layer. Then, according to the geometric information, calculate the Euclidean distance between the remaining points and the point, and classify the points whose Euclidean distance meets the requirements of the first threshold as the second detail expression layer. Obtain the centroid of the point in the second detail expression layer, calculate the Euclidean distance between the points other than the first and second detail expression layers and the centroid, and classify the point whose Euclidean distance meets the second threshold as the third detail expression layer. And so on, all the points are assigned to the detail expression layer. The points in each level of detail expression are sorted in the detail expression layer according to the size of the reconstructed value of the attribute information of the point.
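The threshold-based division described above can be approximated by a greedy distance-subsampling sketch; the acceptance rule below (keep a point in the current layer only if it is at least the threshold away from every point already kept) is an illustrative assumption, not the exact division rule.

```python
import math

def lod_divide(points, thresholds):
    # Each threshold produces one detail expression layer; points that
    # fail the distance test are passed on to later layers.
    remaining = list(points)
    layers = []
    for t in thresholds:
        layer, rest = [], []
        for p in remaining:
            if all(math.dist(p, q) >= t for q in layer):
                layer.append(p)
            else:
                rest.append(p)
        layers.append(layer)
        remaining = rest
    if remaining:
        layers.append(remaining)  # leftover points form the final layer
    return layers
```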
  • LOD division manner is only an example, and the LOD division manner may also adopt other manners, which are not limited in this application.
  • Figure 8 is a schematic diagram of the LOD division of some point clouds.
  • the upper part of Figure 8 shows the 10 points p0, p1, ..., p9 in the original point cloud. According to the geometric information of these 10 points, the 10 points are sorted to obtain the original order of points p0, p1, ..., p9. Based on the original order of points p0, p1, ..., p9, LOD division is performed to obtain three detail expression layers, which do not overlap each other.
  • the first detail expression layer R0 includes p0, p5, p4 and p2, the second detail expression layer R1 includes p1, p6 and p3, and the third detail expression layer R2 includes p9, p8 and p7.
  • the first detail expression layer R0 constitutes the first LOD layer, denoted as L0; the first LOD layer and the second detail expression layer R1 constitute the second LOD layer, denoted as L1; and the second LOD layer and the third detail expression layer R2 constitute the third LOD layer, denoted as L2.
  • the number of points included in the LOD layer increases layer by layer.
  • the multi-layer detail expression layers obtained by dividing the LOD can be sorted according to the number of layers from low to high to obtain the LOD order of the point cloud.
  • the multi-layer detail expression layers may also be sorted in descending order according to the number of layers, so as to obtain the LOD order of the point cloud.
  • the multi-layer detail expression layers of the point cloud are sorted from low to high by layer number to obtain the LOD order of the point cloud, and the predicted value of the attribute information of the points in the point cloud is determined based on the LOD order of the point cloud.
  • the above S203 includes but is not limited to the following ways:
  • Method 1: based on the LOD order of the point cloud, at least one adjacent point of the current point is obtained from the encoded points; for example, the three nearest points of the current point are found from the encoded points according to the KNN algorithm, and the weighted average of the reconstructed values of the attribute information of the three nearest points is used as the predicted value of the attribute information of the current point.
• Method 2: based on the LOD order of the point cloud, obtain at least one adjacent point of the current point from the encoded point cloud; for example, find the three points closest to the current point in the encoded point cloud according to the KNN algorithm, and use the reconstructed value of the attribute information of each of the three nearest neighbor points as a prediction reference value of the current point, obtaining three prediction reference values. In addition, the weighted average of the reconstructed values of the attribute information of the three adjacent points is used as another prediction reference value of the current point, so that the current point has 3+1 prediction reference values in total. The rate-distortion optimization (RDO) cost corresponding to each of the 3+1 prediction reference values is calculated, and the prediction reference value with the smallest rate-distortion optimization cost is used as the predicted value of the attribute information of the current point.
• RDO: rate-distortion optimization.
• the reciprocal of the distance (e.g., Euclidean distance) between the adjacent point and the current point may be used as the weight of the adjacent point.
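Method 1 above can be sketched as follows (an illustrative sketch assuming Euclidean distance and reciprocal-of-distance weights; `predict_attribute` is a hypothetical helper name):

```python
# Illustrative sketch of Method 1: the predicted attribute value is the
# average of the neighbors' reconstructed attribute values, weighted by
# the reciprocal of each neighbor's Euclidean distance to the current point.
import math

def predict_attribute(current_pos, neighbors):
    """neighbors: list of (position, reconstructed_attribute_value) pairs."""
    weights = [1.0 / math.dist(current_pos, pos) for pos, _ in neighbors]
    total = sum(weights)
    return sum(w * attr for w, (_, attr) in zip(weights, neighbors)) / total
```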
  • the prediction indexes corresponding to the above-mentioned 3+1 prediction reference values may be determined according to the following Table 1:
• the three adjacent points closest to the point p2 are p4, p5 and p1, where p4 is the point closest to p2 among the three adjacent points of the current point p2, denoted as the first adjacent point, that is, the 1st nearest point; as shown in Table 1, its corresponding prediction index is 1.
• p5 is the second closest point to the current point p2 among the three adjacent points of the current point p2, and is recorded as the second adjacent point, that is, the 2nd nearest point. As shown in Table 1, its corresponding prediction index is 2.
• p1 is the point farthest from the current point p2 among the three adjacent points of the current point p2, and is recorded as the third adjacent point, that is, the 3rd nearest point. As shown in Table 1, its corresponding prediction index is 3. Also as shown in Table 1, the prediction index corresponding to the weighted average of the reconstructed values of the attribute information of p4, p5 and p1 is 0.
• calculate the rate-distortion optimization (RDO) cost corresponding to each of the 3+1 prediction reference values, namely the reconstructed values of the attribute information of p4, p5 and p1, and the weighted average of the attribute information of p4, p5 and p1.
• the prediction reference value with the smallest rate-distortion optimization cost is used as the predicted value of the attribute information of the current point; for example, suppose the prediction reference value with the smallest rate-distortion optimization cost is the reconstructed value of the attribute information of the point p5.
  • the point cloud encoder may carry the prediction index 2 corresponding to the point p5 in the subsequently formed attribute code stream.
• the decoding end can directly parse the prediction index 2 from the attribute code stream, and use the reconstructed value of the attribute information of the point p5 corresponding to the prediction index 2 to predict the attribute information of the point p2, obtaining the predicted value of the attribute information of the point p2.
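The 3+1 candidate selection of Method 2 and Table 1 can be sketched as follows (the squared-error cost below is a placeholder for the encoder's true rate-distortion cost, and `choose_predictor` is a hypothetical helper):

```python
# Illustrative sketch of Method 2 / Table 1: the 3+1 candidates are the
# reconstructed attribute values of the three nearest neighbors (prediction
# indexes 1..3) plus their weighted average (prediction index 0). The
# squared-error cost is a placeholder for the real rate-distortion cost.

def choose_predictor(original, neighbor_recon, weights):
    avg = sum(w * v for w, v in zip(weights, neighbor_recon)) / sum(weights)
    candidates = {0: avg,                  # index 0: weighted average
                  1: neighbor_recon[0],    # index 1: 1st nearest point
                  2: neighbor_recon[1],    # index 2: 2nd nearest point
                  3: neighbor_recon[2]}    # index 3: 3rd nearest point
    cost = lambda pred: (original - pred) ** 2   # placeholder RDO cost
    best = min(candidates, key=lambda i: cost(candidates[i]))
    return best, candidates[best]
```

The chosen prediction index (e.g., 2 for p5) is what the encoder writes into the attribute code stream for the decoder to parse.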
  • S204 Determine the residual value of the attribute information of the current point according to the predicted value of the attribute information of the current point and the original value of the attribute information. For example, the difference between the original value of the attribute information of the current point and the predicted value is determined as the residual value of the attribute information of the current point.
  • S205 Determine the target LOD layer where the current point is located according to the geometric information of the current point.
  • the target LOD layer where the current point is located can be determined according to the geometric information of the current point.
• the methods for determining the target quantization step size adapted to the target LOD layer include but are not limited to the following:
  • determining the target quantization step size adapted to the target LOD layer includes the following steps A1, A2 and A3:
  • Step A1 determine the quantization parameter in the coding parameter of the current point
  • Step A2 Obtain the hierarchical index of the target LOD layer, and determine the quantization parameter increment of the target LOD layer according to the hierarchical index of the target LOD layer;
  • Step A3 Determine the target quantization step size corresponding to the target LOD layer according to the quantization parameter and the quantization parameter increment of the target LOD layer.
• the quantization parameter increment DeltaQP values of the first 7 LOD layers are shown in Table 2:
• R denotes the encoding bit rate.
• Table 2 shows the DeltaQP values of the first 7 LOD layers under 5 bit rates. It should be noted that the above Table 2 is only an example, and the DeltaQP values corresponding to the first seven LOD layers in the embodiments of the present application include but are not limited to those shown in Table 2.
  • the DeltaQP values in the foregoing Table 2 may all be set to -10.
  • R1 to R5 represent five encoding bit rates recommended by the MPEG public test environment, and the QP values of the preset bit rate points in the public test environment are shown in Table 3.
  • the real QP values of the first 7 layers of LOD under 5 code rates are shown in Table 4,
  • the DeltaQP value under each code rate in Table 2 is added to the QP under the corresponding code rate shown in Table 3, and the real QP value under the corresponding code rate in Table 4 is obtained.
• the quantization parameter corresponding to the current point can be determined from the coding parameters of the current point, such as QPi in Table 3, and the quantization parameter increment corresponding to the target LOD layer can be determined from the above Table 2 according to the hierarchical index of the target LOD layer where the current point is located, such as DeltaQPi in Table 2. According to the quantization parameter and the quantization parameter increment of the target LOD layer, the target quantization step size corresponding to the target LOD layer is determined: for example, the QPi value corresponding to the current point is added to the DeltaQPi value of the target LOD layer to obtain QPi+DeltaQPi, and the quantization step size corresponding to QPi+DeltaQPi is determined as the target quantization step size corresponding to the target LOD layer.
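Steps A1 to A3 can be sketched as follows. The base-QP and DeltaQP literals below are hypothetical stand-ins, since the actual values are those of Tables 2 and 3; the sketch only demonstrates how the real per-layer QP of Table 4 is derived as base QP plus DeltaQP:

```python
# Illustrative sketch of steps A1-A3 (literal values are hypothetical):
# the real QP of a layer is the base QP of the rate point plus the DeltaQP
# of the LOD layer, where only the first 7 LOD layers carry a DeltaQP.

BASE_QP = {"R1": 10, "R2": 16, "R3": 22, "R4": 28, "R5": 34}      # hypothetical
DELTA_QP = {"R1": -6, "R2": -6, "R3": -10, "R4": -10, "R5": -10}  # hypothetical

def layer_qp(rate, lod_layer_index, num_shifted_layers=7):
    qp = BASE_QP[rate]
    if lod_layer_index < num_shifted_layers:   # only the first 7 LOD layers
        qp += DELTA_QP[rate]
    return qp
```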
• the above-mentioned determining of the quantization parameter increment of the target LOD layer according to the hierarchical index of the target LOD layer includes: if the target LOD layer belongs to the first N LOD layers of the point cloud, determining that the quantization parameter increment of the target LOD layer is j, where N is a positive integer less than or equal to the first threshold and j is an integer greater than 0 and less than or equal to the second threshold; if the target LOD layer does not belong to the first N LOD layers of the point cloud, determining that the quantization parameter increment of the target LOD layer is 0.
• if the quantization parameter is greater than or equal to a third threshold, the j is a first preset value; if the quantization parameter is less than the third threshold, the j is a second preset value.
  • the first threshold may be 14, and the second threshold may be 10.
  • j may be 6 or 10, for example.
  • N can be 6, 7 or 8.
  • the above-mentioned determination of the target quantization step size adapted to the target LOD layer includes the following steps B1 and B2:
  • Step B1 obtaining the hierarchical index of the target LOD layer
  • Step B2 according to the hierarchical index of the target LOD layer, query the target quantization step size corresponding to the target LOD layer in the quantization step size lookup table.
  • the quantization step size lookup table includes the correspondence between the LOD layer and the quantization step size.
  • the above-mentioned quantization step size lookup table is preset.
• the embodiments of the present application further include a process of constructing a quantization step size lookup table. Specifically, the hierarchical index and the quantization parameter offset parameter of the current image block are determined; the quantization step size Qstep corresponding to each level of detail (LOD) is determined according to the hierarchical index and the quantization parameter offset parameter of the current image block. The correspondence between detail levels and quantization step sizes is thereby obtained and pre-stored in a quantization step size look-up table for subsequent encoding or decoding steps to determine the quantization step size.
• a quantization step size look-up table
  • the level hierarchical index may be, for example, LodSplitIndex
  • the quantization parameter offset parameter may be, for example, QpShiftValue
• the above-mentioned manner of determining the hierarchical index and the quantization parameter offset parameter of the current image block may be: obtaining the encoding parameters of the current coding block, and reading the hierarchical index and the quantization parameter offset parameter from the encoding parameters.
  • the specific implementation manner of obtaining the coding parameters of the current coding block at least includes: determining the coding parameters of the current coding block by using rate-distortion optimization.
  • the encoding parameters may include parameters preset in the configuration file and/or parameters determined according to the data of the point cloud.
• the coding parameters of the current coding block are obtained first, and then the quantization parameter optimization enable flag, the hierarchical index and the quantization parameter offset parameter are read from the coding parameters, which is beneficial to improving the efficiency of determining the quantization parameter optimization enable flag, the hierarchical index and the quantization parameter offset parameter of the current coding block.
  • the code stream includes a parameter set code stream.
• the corresponding parameters can be directly determined for decoding according to the parameter set code stream in the code stream, which is beneficial to improving the efficiency of subsequent decoding.
  • the parameter set includes data for decoding point clouds at one or more different times; the data is attribute data, and the parameter set is an attribute parameter set.
• when the decoding end decodes the code stream, it can directly decode the data used for decoding point clouds at different times according to the parameter set in the code stream, which is beneficial to improving the efficiency of subsequent decoding.
• the above-mentioned determining of the quantization step size Qstep corresponding to each level of detail (LOD) according to the hierarchical index and the quantization parameter offset parameter of the current image block includes: determining the quantization parameter Qp in the encoding parameters of the current coding block; determining the quantization parameter offset of each LOD layer in the point cloud according to the hierarchical index and the quantization parameter offset parameter of the current coding block; and determining the quantization step size Qstep corresponding to each LOD layer in the point cloud according to the quantization parameter Qp of the current coding block and the quantization parameter offset of each LOD layer in the point cloud.
  • the above-mentioned quantization parameter offset is also referred to as a quantization parameter increment.
  • the quantization parameter Qp may be determined by the QP parameter provided by the attribute parameter set.
  • the relationship between the quantization parameter Qp of the current coding block and the quantization step size Qstep is as shown in formula (2):
• when quantizing the residuals, the residual values are likewise left-shifted by 8 bits to match.
  • the quantization parameter Qp in the coding parameters of the current coding block is first determined, and then the quantization parameter offset of each LOD layer is determined according to the hierarchical index of the current coding block and the quantization parameter offset parameter, and then the quantization parameter offset is determined according to the quantization parameter Qp is offset from the quantization parameter of each LOD layer to determine the quantization step size Qstep corresponding to each LOD layer, which is beneficial to improve the flexibility of determining the quantization step size.
• the determining of the quantization parameter offset of each LOD layer according to the hierarchical index and the quantization parameter offset parameter includes: judging whether the currently processed LOD layer belongs to the level range constrained by the hierarchical index, where the level range includes the first N levels among the multiple detail levels and N is a positive integer less than or equal to the first threshold; if so, the value j of the quantization parameter offset of the currently processed LOD layer is determined according to the quantization parameter offset parameter; if not, the quantization parameter offset of the currently processed LOD layer is determined to be 0.
  • the LOD layers of the point cloud are grouped, and each group of LOD layers corresponds to a quantization step size look-up table, wherein the quantization step size look-up table corresponding to the group includes the quantization step size corresponding to at least one LOD layer in the group .
• when searching for the target quantization step size corresponding to the target LOD layer where the current point is located, first determine the target LOD group to which the target LOD layer belongs, and then find the target quantization step size corresponding to the target LOD layer in the quantization step size lookup table corresponding to the target LOD group.
• the value range of the hierarchical index is 0 to the number of LOD layers (a positive integer); assuming that the hierarchical index is 6, the quantization step sizes of LOD0-LOD5 (that is, the quantization step sizes of the detail levels of the first 6 layers) are converted from QP-QpShiftValue (that is, the quantization parameter minus the quantization offset parameter).
• the detail levels may be further divided into multiple groups; for example, if the hierarchical index is the array 4 5 6, the detail levels can be divided into 4 groups according to the three division positions, namely LOD0-LOD3, LOD4, LOD5, and LOD6 and beyond.
  • the value j of the quantization parameter offset corresponding to different groups may be different.
• if j is an integer greater than 0 and less than or equal to the second threshold, the quantization parameters corresponding to the detail levels of the first N levels are smaller than those of the later levels. This is because points in the earlier, lower detail levels are used as reference points when predicting points in the subsequent, higher detail levels; if the quantization step size of an earlier detail level is relatively large, the corresponding reconstruction error will also be larger, and this reconstruction error is propagated to the subsequent levels, affecting the prediction of points in the subsequent LOD layers and making the prediction less accurate.
• therefore, the quantization step size corresponding to the smaller quantization parameter can be used for the earlier levels to reduce the reconstruction error. Since the number of points in the earlier levels is relatively small, using a smaller quantization step size there will not have a great impact on the size of the color code stream; and when the error of the earlier detail levels is small, the prediction effect of the subsequent detail levels will also be better, so the subsequent levels can achieve good results without a very small quantization step size, and a properly increased quantization step size there can reduce the size of the code stream and reduce the impact on coding efficiency.
• the first threshold may be 14, and the second threshold may be 10. This is because the total number of detail levels is generally 11-14, and the minimum Qp value of the five bit rates set by the public test environment CTC is 10, so j can be an integer greater than 0 and less than or equal to 10 to ensure that no negative number appears after j is subtracted.
• j can be 6 or 10, for example. The larger the value of j, the smaller Qp-j is, and the smaller the corresponding quantization step size; further, the smaller the distortion of the reconstructed point and the smaller the error of the reconstructed value, the more accurate the prediction. Therefore, when j is 10, the prediction result is more accurate, that is, the prediction effect is better.
• N can be 6 or 8, for example. Reducing the quantization step size reduces the error but also increases the size of the encoded code stream, which affects the encoding efficiency. Therefore, the value of N can be 6, which is roughly half of the total number of detail levels; the earlier levels contain relatively few points, so using a small quantization step size there reduces the error without causing too much increase in the code stream. Alternatively, if N is 8, using a smaller quantization step size to quantize the points in the detail levels of the first 8 layers is beneficial to reducing the reconstruction value error and improving the subsequent prediction accuracy, while limiting the impact on the code stream size.
• the value j of the quantization parameter offset of the currently processed LOD layer is determined according to the quantization parameter offset parameter, where j is an integer greater than 0 and less than or equal to the second threshold; if the currently processed LOD layer is not within the level range, the value of its quantization parameter offset is determined to be 0. In this way, the earlier detail levels are adapted to the quantization step sizes corresponding to the smaller quantization parameters, that is, the quantization step size corresponding to the earlier detail levels is smaller and the quantization step size corresponding to the later detail levels is larger, which is conducive to improving the prediction accuracy while reducing the impact on coding efficiency.
• if the quantization parameter Qp is greater than or equal to a third threshold, j is a first preset value; if the quantization parameter Qp is less than the third threshold, j is a second preset value.
  • the third threshold may be 30, the first preset value may be 10, and the second preset value may be 6.
  • the value of j can be determined in the form of a piecewise function according to the size of the quantization parameter Qp corresponding to the current coding block. For example, when the quantization parameter Qp is greater than or equal to 30, j is 10, and when Qp is less than 30, j is 6.
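The piecewise rule can be sketched as follows, using the example values from the text (third threshold 30, presets 10 and 6); the function name is illustrative:

```python
# Sketch of the piecewise determination of j from the quantization
# parameter Qp of the current coding block (example thresholds from the text).

def offset_value_j(qp, third_threshold=30, first_preset=10, second_preset=6):
    return first_preset if qp >= third_threshold else second_preset
```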
• the determining of the quantization parameter offset of the LOD layer according to the hierarchical index and the quantization parameter offset parameter includes: determining the level combination to which the currently processed LOD layer belongs; querying the hierarchical index to determine the hierarchical index of the currently processed LOD layer; and querying the quantization parameter offset parameter according to the hierarchical index of the currently processed LOD layer to determine the corresponding quantization parameter offset.
• the quantization parameter offset parameter can be an array, such as 3 5 6, that is, the quantization parameter offsets of the first to third groups are -3, -5 and -6, respectively, while the fourth group has no offset. If the determined quantization parameter is QP, the actual quantization parameters of the first to fourth groups are QP-3, QP-5, QP-6 and QP, respectively.
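The grouping and the grouped offsets above can be sketched together as follows (an illustrative sketch assuming the array semantics described in the text):

```python
# Sketch of the grouped-offset example: the split-index array [4, 5, 6]
# yields four groups (LOD0-LOD3, LOD4, LOD5, LOD6 and beyond) and the
# offset array [3, 5, 6] yields actual quantization parameters
# QP-3, QP-5, QP-6 and QP for the four groups.
import bisect

LOD_SPLIT_INDEX = [4, 5, 6]   # division positions between groups
QP_SHIFT_VALUE = [3, 5, 6]    # offsets of the first three groups

def actual_qp(base_qp, lod_layer_index):
    group = bisect.bisect_right(LOD_SPLIT_INDEX, lod_layer_index)
    if group < len(QP_SHIFT_VALUE):
        return base_qp - QP_SHIFT_VALUE[group]
    return base_qp                # last group keeps the base QP
```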
• any one level combination may include at least two adjacent levels; the multiple level combinations include a first level combination and a second level combination, where the levels in the first level combination precede the levels in the second level combination and the quantization parameter corresponding to the first level combination is smaller than the quantization parameter corresponding to the second level combination. Different level combinations correspond to different quantization parameters, which is beneficial to further subdividing the quantization step sizes corresponding to the detail levels of different levels and improving the flexibility of determining the quantization step size.
• the first level combination may include the first two levels among the multiple detail levels, and the quantization step size corresponding to the first level combination is 1. Lossless quantization can thus be used in the first two layers, which is beneficial to further reducing errors and improving the accuracy of subsequent predictions; and because the number of points in the first two layers is small, this will not have much impact on the size of the code stream.
  • the multiple level combinations may include the first level combination, the second level combination, the third level combination and the fourth level combination sorted from front to back, and any one level combination includes at least 2 adjacent levels before and after;
• the first level combination adopts 1/sqrt(4) of the original quantization step size as the quantization step size of the level, where the original quantization step size refers to the quantization step size determined according to the quantization parameter Qp; the second level combination adopts 1/sqrt(3) of the original quantization step size as the quantization step size of the level; the third level combination adopts 1/sqrt(2) of the original quantization step size as the quantization step size of the level; and the fourth level combination uses the original quantization step size as the quantization step size of the level.
• denoting the original quantization step size, that is, the quantization step size determined according to the quantization parameter Qp corresponding to the current coding block, as δ (a positive integer), the four level combinations adopt δ/sqrt(4), δ/sqrt(3), δ/sqrt(2) and δ, respectively, as the quantization step size of the level.
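Assuming the original quantization step size is denoted delta (the step size derived from Qp), the four level combinations above can be sketched as:

```python
# Sketch of the four level combinations: the combinations use
# delta/sqrt(4), delta/sqrt(3), delta/sqrt(2) and delta, so the later
# the combination, the larger the quantization step size.
import math

def combination_step(delta, combination_index):
    divisors = [math.sqrt(4), math.sqrt(3), math.sqrt(2), 1.0]
    return delta / divisors[combination_index]
```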
• the later the level combination, the larger the corresponding quantization step size.
  • Different layers in the layer combination use the same quantization step size.
  • the quantization step size corresponding to the detail levels of different levels is further subdivided to improve the flexibility of determining the quantization step size.
• when determining the quantization parameter offset, the level combination corresponding to the detail level is first determined, the hierarchical index corresponding to the detail level in the level combination is then determined, and the quantization parameter offset parameter is queried according to that hierarchical index to determine the corresponding quantization parameter offset. Different level combinations correspond to different quantization parameter offsets, which is beneficial to further subdividing the quantization step sizes corresponding to the detail levels of different levels and improving the flexibility of determining the quantization step size.
  • the present application further includes a quantization parameter optimization enable flag, which is used to indicate whether the above-mentioned first quantization method can be used for quantization.
• the quantization parameter optimization enable flag can be, for example, enableProgressiveQp, which can take two values, 0 or 1.
• for ease of description, the value before the prediction residual is processed based on the quantization weight of the current point and the quantization step size of the current point is referred to as the prediction residual value; the value after the prediction residual has been processed based on the quantization weight of the current point but before it is processed based on the quantization step size of the current point is referred to as the weighted residual value; and the value obtained by quantizing the prediction residual based on both the quantization weight of the current point and the quantization step size of the current point is referred to as the quantized residual value.
  • the quantized residual value may also be referred to as a weighted quantized residual value, or may even be simply referred to as a residual value.
  • the above S207 includes: determining the index of the current point; and determining the quantization weight corresponding to the index of the current point as the first quantization weight of the current point.
  • the encoder can obtain the first quantization weight of the point based on the index of the point.
  • the quantization weight of the point cloud is stored as an array, and the dimension of the array is the same as the number of points in the point cloud.
  • QuantWeight[index] represents the quantization weight whose point index is index.
• QuantWeight[] can be understood as an array that stores the quantization weights of all points in the point cloud; the dimension of the array is the same as the number of points in the point cloud, and the quantization weight of a point can be queried through the index of the point.
  • the point cloud is divided into one or more LOD layers, and each LOD layer includes one or more points;
• the initial value of the first quantization weight of points in the first M LOD layers is greater than the initial value of the first quantization weight of points in the remaining LOD layers among the plurality of LOD layers.
  • M is an integer greater than 0.
  • the initial value of the first quantization weight of each point in the first seven LOD layers is set to 512
  • the initial value of the first quantization weight of each point in the remaining LODs is set to 256.
• the first quantization weight of the current point is updated based on the first quantization weights of the N nearest neighbors of the current point, where N is an integer greater than 0.
• the influence weight of the current point on each of the N nearest adjacent points is obtained, where the influence weight depends on the position information of the current point and the N nearest adjacent points; based on the first quantization weight of the current point and the influence weight of the current point on each of the N nearest neighbor points, the first quantization weights of the N nearest neighbor points are updated.
  • the attribute parameter set of the point cloud includes an influence weight of the current point on each of the N closest adjacent points; by querying the attribute parameter set, the The influence weight of the current point on each of the N nearest neighbors.
  • the initial value of the first quantization weight of each point in the point cloud is a preset value.
  • the specific numerical value of the initial value is not limited in the embodiments of the present application.
  • the initial value may be 256, 512 or other specific values.
  • Initializing to 256 means setting the value of the quantization weight of all points in the point cloud to 256.
• the first quantization weight of each point will be updated according to its importance in the process of predicting the attribute information of the point cloud; the more important points have higher quantization weight values.
• the first quantization weights of the N nearest neighbors are updated based on the following formula (4): w(Pi) ← w(Pi) + ((α(Pi, Q) × w(Q)) >> k), where:
  • Q represents the current point
• Pi represents the i-th nearest neighbor point to Q
• optionally, i = 1, 2, 3
  • w(Q) represents the first quantization weight of the current point
• α(Pi, Q) indicates the influence weight of the current point on the neighbor point Pi
• ">>" is a right shift operation
• "←" is an assignment operation; A ← B means assigning the value of B to A.
• for example, α(Pi, Q) = 2^(5-i).
• alternatively, α(Pi, Q) = 2^(6-i).
• the value of α(Pi, Q) decreases as i increases.
  • the initial value of the first quantization weight of all points in the point cloud is set to 256
• traverse each point in reverse coding order to update the first quantization weights of its three nearest neighbors; assuming that the index of the currently traversed point is index, the indices of the three closest points of the current point are indexN1, indexN2 and indexN3, respectively
• the first quantization weights of the three closest points P10, P3 and P11 of the current point P1 can be recorded as:
  • the first quantization weights of its three nearest neighbors are updated in the following manner:
  • the value of k is 8.
  • 16, 8, and 4 are the influence weights of the current point on the 1st, 2nd, and 3rd nearest neighbors, respectively.
• the influence weight can be defined as a syntax element in the attribute parameter set of the point cloud, and the value of the influence weight can be set through the attribute parameter set.
  • the encoder can activate or access the attribute parameter set in the process of encoding the attribute information, and then call the value of the influence weight of the point from the attribute parameter set.
  • the embodiments of the present application do not limit the specific values of k and the influence weight, and the above numbers are only illustrative, and should not be construed as limitations on the present application.
  • the influence weights of the 1st, 2nd, and 3rd nearest neighbor points may also be changed to 64, 32, and 16, respectively.
• for example, the first quantization weight of the current point is 256, and the first quantization weight of its nearest neighbor point 0 is also 256; the result of (32 × 256) >> 8 is 32, that is, the product is right-shifted by 8 bits, so the first quantization weight of nearest neighbor point 0 is updated to 256 + 32 = 288, and the updated first quantization weight 288 is in turn used when point 0 updates its own three nearest neighbors.
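The neighbor-weight update of formula (4) with k = 8 and the 64/32/16 influence-weight variant mentioned above can be sketched as follows (`update_neighbor_weights` is a hypothetical helper name):

```python
# Sketch of the neighbor-weight update: each traversed point adds
# (influence_weight * w(Q)) >> k to the first quantization weight of
# each of its three nearest neighbors.

K = 8
INFLUENCE = [64, 32, 16]   # 1st, 2nd, 3rd nearest neighbor

def update_neighbor_weights(quant_weight, current_index, neighbor_indices):
    wq = quant_weight[current_index]          # w(Q) of the current point
    for i, n in enumerate(neighbor_indices):
        quant_weight[n] += (INFLUENCE[i] * wq) >> K
```

With all weights initialized to 256, the neighbor with influence weight 32 receives (32 × 256) >> 8 = 32 and its weight becomes 256 + 32 = 288, matching the example above.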
  • attribute_parameter_set represents the attribute parameter set
• aps_chroma_qp_offset represents the chroma quantization parameter offset
• weightOfNearestNeighborsInAdaptiveQuant[i] represents the influence weight of the current point on the i-th nearest neighbor, where i takes the values 0, 1 and 2, respectively representing the current point's 1st, 2nd and 3rd nearest neighbors.
  • the first nearest neighbor point represents the nearest neighbor point to the current point
  • the second nearest neighbor point represents the second nearest neighbor point to the current point, and so on.
  • S208 Quantize the residual value of the attribute information of the current point according to the target quantization step size and the first quantization weight to obtain the quantized residual value of the attribute information of the current point.
  • the above S204 and S403 include the following steps C1 and C2:
  • Step C1 according to the first quantization weight of the current point, determine the second quantization weight of the current point;
  • Step C2 Perform inverse quantization on the quantized residual value of the attribute information of the current point according to the target quantization step size and the second quantization weight to obtain the reconstructed residual value.
  • the second quantization weight is less than or equal to the target quantization step size of the current point, where the target quantization step size of the current point is the target quantization step size of the target LOD layer where the current point is located.
  • the second quantization weight may be determined as: effectiveQuantWeight = min(w(Q) >> k, Qstep)
  • effectiveQuantWeight represents the second quantization weight of the current point
  • w(Q) represents the first quantization weight of the current point
  • k represents the number of bits by which w(Q) is right-shifted
  • Qstep represents the target quantization step size of the current point, and ">>" is the right shift operation.
  • the first quantization weight of the current point may exceed the target quantization step size.
  • the smaller value of the two is taken to obtain the second quantization weight, thereby ensuring that the encoder can still quantize the prediction residual value, that is, the encoding performance of the encoder is guaranteed.
  • the value of the second quantization weight is equal to an integer power of 2.
  • if the value of the first quantization weight of the current point is not equal to an integer power of 2, the integer power of 2 closest to the first quantization weight of the current point is determined as the second quantization weight.
  • for example, if the value of the first quantization weight of the current point is 18, it can be converted to the nearest integer power of 2 for the convenience of hardware implementation, that is, 16 or 32; for example, 18 is converted to 16, that is, 18 is replaced by 16. Assuming the value of the first quantization weight of the current point is 30, the nearest integer power of 2 is 32, and the first quantization weight of the current point is converted to 32. For an integer power of 2, the function of adaptive quantization can be realized by a binary shift operation, which is convenient for hardware implementation.
  • the weighted multiplication operation can be processed as a shift operation, which can improve the processing efficiency of the encoder, thereby improving the performance of the encoder.
  • optionally, the minimum of the first quantization weight of the current point and the target quantization step size of the current point may be taken first, and then the integer power of 2 closest to that minimum is determined as the second quantization weight.
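As a sketch of the options above, the following hypothetical helper clamps the first quantization weight to the target quantization step and then rounds to the nearest integer power of 2, so that the later weighting can be done with binary shifts; the tie-breaking rule (round down on ties) is an assumption.

```python
# Hypothetical helper for the second quantization weight: clamp the
# first quantization weight to the target quantization step, then round
# to the nearest integer power of 2. The tie-breaking rule is an
# assumption.

def second_quant_weight(first_weight, qstep):
    m = min(first_weight, qstep)
    lo = 1 << (m.bit_length() - 1)  # largest power of 2 <= m
    hi = lo << 1                    # smallest power of 2 > m
    return lo if m - lo <= hi - m else hi

# 18 -> 16 and 30 -> 32, matching the examples above.
```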
  • the second quantization weight may also be determined in other manners, which is not specifically limited in this embodiment of the present application.
  • the first quantization weight of the current point may be directly determined as the second quantization weight of the current point.
  • the step C2 may include: multiplying the prediction residual value by the second quantization weight to obtain a weighted residual value; the weighted residual value is then quantized with the target quantization step size to obtain the quantized residual value.
  • the encoder can obtain the predicted value of the attribute information of the current point through the prediction transformation. The real value of the attribute information of the current point is known, so the prediction residual value of the attribute information of the current point can be obtained by subtracting the predicted value from the real value. The prediction residual value is multiplied by the second quantization weight to obtain the weighted prediction residual value, and the quantization step size is used to quantize the weighted prediction residual value to obtain the quantized weighted prediction residual value, that is, the quantized residual value.
  • the quantized residual value of the point is entropy encoded and written into the code stream.
  • the decoder first calculates the first quantization weight of each point in the point cloud according to the reconstructed position information, and determines the second quantization weight of each point by comparing it with the target quantization step size. It then parses the code stream to obtain the quantized residual value of the current point, performs inverse quantization to obtain the weighted prediction residual value, and divides the weighted prediction residual value by the second quantization weight to obtain the prediction residual value. The decoder determines the predicted value of the attribute information of the current point through the prediction transformation, and then obtains the reconstructed value of the attribute information of the current point based on the predicted value and the prediction residual value. After obtaining the reconstructed value of the attribute information of the current point, the decoder traverses the next point in order to decode and reconstruct it.
  • the encoder multiplies the prediction residual value by the second quantization weight for weighting; after inverse quantization, the decoder divides the inverse-quantized weighted prediction residual value by the second quantization weight to remove the weighting effect and obtain the prediction residual value. It should be noted that since quantization is not lossless, the weighted prediction residual value obtained by the decoder is not necessarily equal to the weighted prediction residual value obtained by the encoder.
  • the step C2 may include:
  • attrResidualQuant2 = attrResidualQuant1 × effectiveQuantWeight / Qstep    (7)
  • attrResidualQuant2 represents the quantized residual value
  • attrResidualQuant1 represents the original value of the residual value
  • effectiveQuantWeight represents the second quantization weight of the current point
  • Qstep represents the quantization step size of the current point
  • "X" represents multiplication operation
  • "/" means division operation.
  • alternatively, the step C2 may include: updating the target quantization step size according to the second quantization weight of the current point, and quantizing the residual value of the attribute information of the current point according to the updated quantization step size.
  • updating the target quantization step size according to the second quantization weight of the current point includes: using the following formula (8) to update the target quantization step size of the current point: newQstep = Qstep / effectiveQuantWeight    (8)
  • effectiveQuantWeight represents the second quantization weight of the current point
  • newQstep represents the updated quantization step size of the current point based on the target quantization step size of the current point
  • Qstep represents the target quantization step size of the current point
  • "/" indicates a division operation
  • S209 Encode the quantized residual value of the current point to generate an attribute code stream.
  • the target LOD layer where the current point is located is determined according to the geometric information of the current point, and the target quantization step size corresponding to the target LOD layer is determined.
  • the target quantization step size is determined based on the increment of the quantization parameter, which improves the flexibility of determining the quantization step size.
  • this embodiment introduces a first quantization weight for weighting the quantization step size of the current point. Introducing the quantization weight of the current point is equivalent to revising the target quantization step size of the current point based on the first quantization weight of the current point; that is, the target quantization step size of the current point can be adaptively adjusted according to the importance of the current point, and the residual value of the current point is then quantized based on the adjusted target quantization step size.
  • a point with a high quantization weight is quantized with a smaller quantization step size to reduce its reconstruction error, so that for points located later in the coding order, the prediction accuracy can be improved and the coding effect can be improved.
  • the points in the cat1-A point cloud sequence include color attribute information and other attribute information, such as reflectance attribute information, and the points in the cat1-B point cloud sequence include color attribute information.
  • BD-AttrRate is one of the main parameters for evaluating the performance of video coding algorithms, indicating the change in bit rate and PSNR (Peak Signal to Noise Ratio) of the video encoded by the new algorithm (that is, the technical solution of the present application) relative to the original algorithm, that is, the change in code rate of the new algorithm and the original algorithm at the same signal-to-noise ratio.
  • "-" indicates performance improvement, such as bit rate and PSNR performance improvement.
  • the performance of the luminance component is improved by 0.8%
  • the performance of the chrominance component Cb is improved by 4.1%
  • the performance of the chrominance component Cr is improved by 5.4%.
  • Average represents the average of the performance improvement of the cat1-A point cloud sequence and the cat1-B point cloud sequence.
  • the point cloud encoding process in this embodiment of the present application is shown in FIG. 10 .
  • FIG. 10 is a schematic flowchart of a point cloud encoding method provided by an embodiment of the application, as shown in FIG. 10 , including:
  • each LOD layer includes at least one detail expression layer, and each detail expression layer includes at least one point.
  • S304 Determine the residual value of the attribute information of the current point according to the predicted value of the attribute information of the current point and the original value of the attribute information. For example, the difference between the original value of the attribute information of the current point and the predicted value is determined as the residual value of the attribute information of the current point.
  • S305 Determine the target LOD layer where the current point is located according to the geometric information of the current point.
  • S306 Determine whether the current point is a lossless-coded point; if it is determined that the current point is a lossy-coded point, execute the following S307 to S309, and if it is determined that the current point is a lossless-coded point, execute the following S310.
  • S307 Determine the target quantization step size adapted to the target LOD layer. For details, refer to the description of S206 above, which will not be repeated here.
  • S308 Quantize the residual value of the attribute information of the current point according to the target quantization step size to obtain the quantized residual value of the attribute information of the current point.
  • S309 Encode the quantized residual value of the current point to generate an attribute code stream.
  • the present application performs lossless coding on the residual value of the attribute information of at least one point in the point cloud, so as to reduce the influence of quantization on the reconstructed value of the attribute information, thereby improving the accuracy of attribute information prediction without greatly increasing the size of the attribute code stream, and thus improving the encoding effect of the attribute information.
  • the lossless encoding of the residual value of the attribute information of the point may also be referred to as the non-quantization of the residual value of the attribute information of the point.
  • This application does not limit the number of points for which the residual value of the attribute information is losslessly encoded.
  • the residual values of the attribute information of some points in the point cloud are quantized, while the residual values of the attribute information of the other points are not quantized (that is, lossless encoding is performed); or, the residual values of the attribute information of all points in the point cloud are not quantized (that is, lossless encoding).
  • the at least one point where the residual value of the attribute information is losslessly encoded may include N points.
  • the N is an integer multiple of 2, for example, lossless encoding is performed on residual values of attribute information of 2, 4, 16 or 24 points in the point cloud.
  • the above N points can be any N points in the point cloud, for example, N consecutive points in the sorted point cloud, N randomly selected points, N specified points, or N points selected according to a preset point-taking interval, where the point-taking interval may be a non-uniform interval.
  • the interval between each adjacent two points in the above N points is equal.
  • the above point cloud includes 1200 points.
  • N is 24, then the interval between these 24 points is equal, which is 50 points.
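The equal-interval selection above can be sketched as follows, assuming the selection starts from the first point (the text also permits other starting points).

```python
# Selecting N equally spaced lossless-coded points from a sorted point
# cloud, matching the example of 1200 points and N = 24 (interval 50).
# Starting the selection at the first point is an assumption.

def lossless_point_indices(total_points, n):
    interval = total_points // n
    return list(range(0, total_points, interval))[:n]

idx = lossless_point_indices(1200, 24)
```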
  • lossless encoding is performed on the residual value of the attribute information of the point at every preset interval in the point cloud.
  • the preset interval is 10
  • lossless encoding is performed on the residual value of the attribute information of the point at every 10 points in the sorted point cloud.
  • the first point among the 1200 points can be used as the first point whose attribute residual value is not quantized; with an interval of 10 points, the 11th point can be used as the second point whose attribute residual value is not quantized, and so on.
  • alternatively, the 11th point among the 1200 points can be used as the first point whose attribute residual value is losslessly encoded; with an interval of 10 points, the 21st point can be used as the second point whose attribute residual value is losslessly encoded, and so on.
  • the embodiments of the present application further include S3061: performing lossless encoding on the attribute residual value of at least one point in at least one detail expression layer of the multi-layer detail expression layers.
  • the above S3061 may include: performing lossless encoding on the attribute residual value of at least one point in one detail expression layer of the multi-layer detail expression layers; or, performing lossless encoding on the attribute residual value of at least one point in some of the detail expression layers, and quantizing the attribute residual value of each point in the remaining detail expression layers; or, performing lossless encoding on the attribute residual value of at least one point in each detail expression layer.
  • the above S3061 includes S3061-A1, S3061-A2 and S3061-A3:
  • S3061-A3 Perform lossless encoding on the residual value of the attribute information of at least one point in the second type of detail expression layer.
  • the number of points included in each layer in the multi-layer detail expression layer may be the same or different.
  • the multi-layer detail expression layers are divided into the first type of detail expression layer and the second type of detail expression layer, wherein the number of points included in a first-type detail expression layer is less than or equal to the first preset value, and the number of points included in a second-type detail expression layer is greater than the first preset value.
  • the point cloud is divided into LOD, and 14 layers of detail expression layers are obtained.
  • the numbers of points included in the detail expression layers are: 1, 6, 28, 114, 424, 1734, 10000, ...
  • assuming the first preset value is 24, as shown in FIG. 9, the first 2 detail expression layers (that is, the first detail expression layer including 1 point and the second detail expression layer including 6 points) are divided into the first type of detail expression layer, obtaining 2 first-type detail expression layers; the remaining detail expression layers (that is, the third detail expression layer and the detail expression layers after it) are divided into the second type of detail expression layer, obtaining 12 second-type detail expression layers.
  • the residual values of the attribute information of all points in the first type of detail expression layer are not quantized; for example, lossless encoding is performed on the residual values of the attribute information of all points in the first two of the above 14 detail expression layers.
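One plausible reading of the layer division above can be sketched as follows: the leading detail expression layers whose point counts do not exceed the first preset value are first-type, and all later layers are second-type. The function name and return format are hypothetical.

```python
# Hypothetical sketch of dividing detail expression layers into first
# and second type by comparing each layer's point count against the
# first preset value (24 in the example above).

def split_layers(points_per_layer, first_preset=24):
    k = 0
    while k < len(points_per_layer) and points_per_layer[k] <= first_preset:
        k += 1
    first_type = list(range(k))
    second_type = list(range(k, len(points_per_layer)))
    return first_type, second_type

# With counts 1, 6, 28, 114, ... the first 2 layers are first-type.
```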
  • the residual value of the attribute information of at least one point in the second type of detail expression layer is not quantized.
  • that is, the residual value of the attribute information of the at least one point is encoded losslessly.
  • different second-type detail expression layers can adopt different skip-quantization point selection methods, that is, each second-type detail expression layer may have its own point selection method.
  • alternatively, different second-type detail expression layers may adopt the same skip-quantization point selection method, that is, the point selection method of each second-type detail expression layer is the same.
  • the encoding end may carry the relevant information of the first type of detail expression layer and the second type of detail expression layer in the attribute code stream.
  • the decoding end can parse the relevant information of the first type of detail expression layer and the second type of detail expression layer from the attribute code stream, and perform reconstruction of the point attribute information according to the parsed information.
  • S3061-A3 includes S3061-A3-1:
  • S3061-A3-1 Perform lossless encoding on residual values of attribute information of M points in the second type of detail expression layer, where M is a positive integer multiple of 2, such as 2, 4, 24, 32, and the like.
  • the interval between two adjacent points among the M points in the second type of detail expression layer of the same layer is equal.
  • the second detail expression layer 1 includes 200 points
  • the second detail expression layer 2 includes 300 points.
  • M is equal to 10
  • the residual values of the attribute information of 20 points in the second detail expression layer 1 are encoded losslessly, namely the 1st, 11th, 21st, 31st, ..., 181st, and 191st points, with an interval of 10 points between two adjacent lossless-coded points.
  • similarly, the residual values of the attribute information of 30 points in the second detail expression layer 2 are encoded losslessly, in order: the 1st, 11th, 21st, 31st, ..., 291st points, with an interval of 10 points between two adjacent lossless-coded points.
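The per-layer equal-interval selection in this example can be sketched as follows (1-based point numbers, starting from the 1st point as in the example above).

```python
# Equal-interval point selection inside one second-type detail
# expression layer: a 200-point layer with an interval of 10 yields 20
# lossless-coded points, the 1st, 11th, ..., 191st (1-based numbering).

def layer_lossless_points(layer_size, interval):
    return [i + 1 for i in range(0, layer_size, interval)]

pts = layer_lossless_points(200, 10)
```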
  • the above-mentioned M can be added to the attribute parameter set of the encoding end according to the following procedure, so as to set, through the encoding parameter, the specific number of unquantized points selected at equal intervals in each second-type detail expression layer:
  • aps_equal_intervals_unquantized_num represents the number of unquantized points in the middle interval of each second type of detail expression layer, such as 24.
  • At least one second-type detail expression layer includes L second-type detail expression layers
  • the number of points for lossless encoding of residual values of attribute information in different second-type detail expression layers may be different.
  • L is a positive integer greater than or equal to 2
  • both P and Q are positive integers
  • the sum of P and Q is less than or equal to L
  • the P second-type detail expression layers and the Q second-type detail expression layers do not overlap, and the first quantity is different from the second quantity.
  • the above P second-type detail expression layers may be any P of the L second-type detail expression layers, and they may be continuous or discontinuous second-type detail expression layers.
  • the above Q second-type detail expression layers may be any Q of the L second-type detail expression layers other than the P second-type detail expression layers, and they may likewise be continuous or discontinuous second-type detail expression layers.
  • L is equal to 12
  • the P second-type detail expression layers are the first P second-type detail expression layers among the L second-type detail expression layers.
  • the Q second-type detail expression layers are the last Q second-type detail expression layers among the L second-type detail expression layers.
  • the P second-type detail expression layers are continuous with each other, and the Q second-type detail expression layers are continuous with each other.
  • P can be 7 or 8.
  • the last second type of detail expression layer in the P second type of detail expression layers is adjacent to the first second type of detail expression layer of the Q second type of detail expression layers.
  • for example, the last second-type detail expression layer among the P second-type detail expression layers is the 7th detail expression layer, the first second-type detail expression layer among the Q second-type detail expression layers is the 8th detail expression layer, and the 7th detail expression layer is adjacent to the 8th detail expression layer.
  • the L second-type detail expression layers are divided into P second-type detail expression layers and Q second-type detail expression layers. For each of the P second-type detail expression layers, the residual values of the attribute information of a first number of points in that layer are not quantized; for each of the Q second-type detail expression layers, the residual values of the attribute information of a second number of points in that layer are not quantized.
  • the first number is greater than the second number, for example, the first number is 24, 32 or 64, and the corresponding second number may be 8, 16 or 32.
  • the multi-layer detail expression layers are sorted from low to high to obtain the LOD order of the point cloud, and the attribute information is encoded according to the LOD order of the point cloud.
  • the points ranked earlier in the LOD order have a greater chance of being used as reference points in the subsequent prediction process.
  • therefore, in the earlier P second-type detail expression layers, the residual values of the attribute information of more points are not quantized, while in the later Q second-type detail expression layers, the residual values of the attribute information of fewer points are not quantized.
  • the first number is a positive integer multiple of the second number; for example, the first number is 3 times or 2 times the second number, for example, the first number is 24 and the second number is 8.
  • the interval between two adjacent points among the first number of points in each of the P second-type detail expression layers is equal.
  • the interval between two adjacent points among the second number of points in each of the Q second-type detail expression layers is equal.
  • the prediction results of the points in the first few LOD layers are more important, so for each of the first seven LOD layers (LOD0~LOD6), the number of points whose attribute residual values are not quantized at equal intervals is 32 (intermittent_unquantized_num), and the number of points not quantized at equal intervals in each subsequent LOD layer is 10 (intermittent_unquantized_num/3).
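The per-layer counts in this example can be expressed as the following sketch; the layer threshold of 7 and the division by 3 come from the example above, while the function itself is hypothetical.

```python
# Sketch of the per-layer count of equally spaced unquantized points:
# 32 for the first seven LOD layers (LOD0-LOD6) and 32 // 3 = 10 for
# every later layer. The constant name follows the text.

INTERMITTENT_UNQUANTIZED_NUM = 32

def unquantized_count(lod_index):
    if lod_index < 7:
        return INTERMITTENT_UNQUANTIZED_NUM
    return INTERMITTENT_UNQUANTIZED_NUM // 3
```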
  • the quantization may be performed in the following manner:
  • Manner 1: During the process of quantizing the residual values of the attribute information of the points in the point cloud, at least one point whose attribute residual value is losslessly encoded is skipped.
  • Manner 2: The quantization step size of at least one point whose attribute residual value is losslessly encoded is set to 1.
  • in this case, the residual value of the attribute information of the point is quantized according to the following formula (9): attrResidualQuant = attrResidual / Qstep    (9)
  • AttrResidualQuant is the residual value of the attribute information after quantization
  • Qstep is the quantization step size.
  • Manner 3: The quantization parameter QP of at least one point whose attribute residual value is losslessly encoded is set to the target value, where the target value is the QP value corresponding to a quantization step size of 1.
  • the QP value is usually pre-configured through the configuration file. Based on this, the QP can be set to the corresponding QP value when the quantization step size is 1.
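Manners 2 and 3 both reduce to a quantization step of 1, under which the quantization of formula (9) leaves the residual unchanged. A minimal sketch, assuming integer division:

```python
# Minimal sketch of Manners 2/3: setting the quantization step (or the
# QP mapped to it) to 1 makes the quantization of formula (9) an
# identity, so the residual is effectively coded losslessly. Integer
# division is an assumption about the rounding.

def quantize_residual(residual, qstep):
    # attrResidualQuant = attrResidual / Qstep, per formula (9)
    return residual // qstep

lossless = quantize_residual(37, 1)   # step 1: residual unchanged
lossy = quantize_residual(40, 8)
```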
  • the attribute code stream includes first information, where the first information is used to indicate a point at which the residual value of the attribute information is subjected to lossless encoding.
  • the first information includes identification information of the points whose attribute residual values are losslessly encoded
  • for example, the point cloud includes 100 points whose attribute residual values are losslessly encoded, and the identification information of these 100 points is carried in the attribute code stream.
  • the decoding end parses the attribute code stream to obtain the identification information of the points whose attribute residual values are losslessly encoded; after obtaining the residual values of the attribute information of these points, it performs no inverse quantization on the residual values of the attribute information of the points corresponding to the identification information, but directly uses these residual values to reconstruct the attribute information, so as to keep consistent with the encoding end.
  • the first information includes the total number of points where the residual value of the attribute information is losslessly encoded, such as the above N.
  • the first information includes the specific number (num) of points in each second-type detail expression layer whose attribute residual values are losslessly encoded at equal intervals, that is, num is carried in the attribute code stream.
  • optionally, the above-mentioned first information also needs to carry the identification information of the first point whose attribute residual value is losslessly encoded.
  • the above-mentioned first information further includes the first quantity and the second quantity, and the division information of the P second-type detail expression layers and the Q second-type detail expression layers.
  • the above division information includes the identification information of the first of the Q second-type detail expression layers; or, the identification information of the last of the P second-type detail expression layers; or, P and/or Q.
  • the decoding end can determine the P second-type detail expression layers and the Q second-type detail expression layers from the L second-type detail expression layers according to this information, and then determine, for each second-type detail expression layer, the points whose attribute residual values are losslessly encoded.
  • the first information may further include the identification information of the first point in each second-type detail expression layer whose attribute residual value is losslessly encoded.
  • the encoding process of the point cloud includes: the encoding end determines whether the current point belongs to the first 7 LOD layers, and determines, according to the third quantization method, whether the current point is an unquantized point.
  • the first quantization method and the third quantization method are combined. Specifically, according to the geometric information of the current point, the target LOD layer where the current point is located is determined; if it is determined that the current point is a lossy-coded point, the target quantization step size adapted to the target LOD layer is determined, and the residual value of the attribute information of the current point is quantized according to the target quantization step size to obtain the quantized residual value of the attribute information of the current point. If it is determined that the current point is a lossless-coded point, lossless encoding is performed on the residual value of the attribute information of the current point.
  • the target LOD layer where the current point is located is determined according to the geometric information of the current point, and the target quantization step size corresponding to the target LOD layer is determined, which improves the flexibility of determining the quantization step size.
  • the residual value of attribute information of at least one point in the point cloud is subjected to lossless encoding (ie, no quantization) to reduce the influence of quantization on the reconstructed value of attribute information, thereby improving the accuracy of attribute information prediction.
  • the points in the cat1-A point cloud sequence include color attribute information and other attribute information.
  • as shown in Table 7, for the cat1-A point cloud sequence, using the technical solution of the present application, the performance of the luminance component, the chrominance component Cb, and the chrominance component Cr are all improved compared with the original technology.
  • the target quantization mode includes the first quantization mode, the second quantization mode, and the third quantization mode
  • the encoding process in this embodiment of the present application is shown in FIG. 13 .
  • FIG. 13 is a schematic flowchart of a point cloud encoding method provided by an embodiment of the application, as shown in FIG. 13 , including:
  • each LOD layer includes at least one detail expression layer, and each detail expression layer includes at least one point.
  • S404 Determine the residual value of the attribute information of the current point according to the predicted value of the attribute information of the current point and the original value of the attribute information. For example, the difference between the original value of the attribute information of the current point and the predicted value is determined as the residual value of the attribute information of the current point.
  • S405. Determine the target LOD layer where the current point is located according to the geometric information of the current point.
  • S406 Determine whether the current point is a point of lossless encoding, if it is determined that the current point belongs to a point of lossy encoding, perform the following S407 to S410, and if it is determined that the current point is a point of lossless encoding, perform S411.
  • for the process of judging whether the current point is a lossless-coded point, reference may be made to the description of 306 above, which will not be repeated here.
  • S407 Determine the target quantization step size adapted to the target LOD layer. For details, refer to the description of S206 above, which will not be repeated here.
  • S409 Quantize the residual value of the attribute information of the current point according to the target quantization step size and the first quantization weight to obtain the quantized residual value of the attribute information of the current point.
  • the first quantization method is used to calculate the quantization weights of all points in the point cloud (ie, the first quantization weights). According to the second quantization method, it is judged whether the current point belongs to the first 7 LOD layers and whether it is an unquantized point.
  • if the current point belongs to the first 7 LOD layers and is not an unquantized point, the residual value of the current point is multiplied by the second quantization weight w(Q) to obtain a weighted residual value, and the weighted residual value is quantized according to the target quantization step size.
  • if the current point does not belong to the first 7 LOD layers and is not an unquantized point, the quantization weight is clamped as w(Q) = min(w(Q), Qstep).
  • for an unquantized point, Qstep = 1 (ie, the point does not require quantization).
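The branching described in the bullets above can be sketched in Python as follows. This is a minimal illustration, not the codec's actual implementation; the function and parameter names are hypothetical, while the layer count of 7, the clamping rule w(Q) = min(w(Q), Qstep), and the Qstep = 1 convention follow the description above.

```python
def quantize_residual(residual, qstep, w_q, lod_index, is_unquantized):
    """Quantize one point's attribute residual under the second quantization method.

    residual:       residual value of the point's attribute information
    qstep:          target quantization step size of the point's LOD layer
    w_q:            quantization weight w(Q) of the point
    lod_index:      0-based index of the LOD layer the point belongs to
    is_unquantized: True if the point is flagged as not requiring quantization
    """
    if is_unquantized:
        qstep = 1  # an unquantized point is treated as having Qstep = 1
    if lod_index >= 7:
        # points outside the first 7 LOD layers: clamp the weight first
        w_q = min(w_q, qstep)
    # weight the residual, then quantize with the target step size
    return round((residual * w_q) / qstep)
```

For example, with residual 10, Qstep 2 and weight 4, a point in LOD layer 0 yields round(40/2) = 20, while a point in layer 8 first clamps the weight to 2 and yields 10.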
  • the first quantization method, the second quantization method and the third quantization method are combined, and specifically, according to the geometric information of the current point, the target LOD layer where the current point is located is determined;
  • if the current point belongs to lossy coding, the target quantization step size adapted to the target LOD layer and the first quantization weight of the current point are determined; according to the target quantization step size and the first quantization weight, the residual value of the attribute information of the current point is quantized to obtain the quantized residual value of the attribute information of the current point. If it is determined that the current point belongs to lossless encoding, lossless encoding is performed on the residual value of the attribute information of the current point.
  • the residual value of the attribute information of at least one point in the point cloud is losslessly encoded (ie, not quantized), so as to reduce the influence of quantization on the reconstructed value of the attribute information, thereby improving the accuracy of attribute information prediction.
  • this embodiment introduces a first quantization weight for weighting the quantization step size of the current point. Introducing the quantization weight is equivalent to revising the target quantization step size of the current point based on its first quantization weight; that is, the target quantization step size can be adaptively adjusted according to the importance of the current point, and the residual value of the current point is then quantized based on the adjusted target quantization step size.
  • a point with a high quantization weight is quantized with a smaller quantization step size to reduce its reconstruction error; for points later in the coding order, this improves prediction accuracy and thus the coding effect.
  • the points in the cat1-A point cloud sequence include color attribute information and other attribute information.
  • As shown in Table 7, for the cat1-A point cloud sequence, the technical solution of the present application (the combination of the first, second, and third quantization methods) improves the performance of the luminance component, the chrominance component Cb, and the chrominance component Cr compared with the original technology.
  • FIG. 14 is a schematic flowchart of a point cloud decoding method provided by an embodiment of the present application. As shown in FIG. 14 , the method of the embodiment of the present application includes:
  • the target inverse quantization method includes at least two of the following inverse quantization methods: a first inverse quantization method, a second inverse quantization method, and a third inverse quantization method. The first inverse quantization method sets an inverse quantization parameter increment for the inverse quantization parameter of at least one point in the point cloud; the second inverse quantization method weights the residual values of points in the point cloud; and the third inverse quantization method performs lossless decoding on the residual value of the attribute information of at least one point in the point cloud.
  • inverse quantization can also be called dequantization, which can be understood as scaling.
  • the predicted value can be a color predicted value in the attribute predicted value.
  • after the decoding of the geometric information of the points in the point cloud is completed, the decoding of the attribute information is performed. After decoding the geometric code stream, the geometric information of the points in the point cloud can be obtained.
  • the attribute code stream of the point cloud is parsed to obtain the quantized residual value of the attribute information of the points in the point cloud.
  • Use the target inverse quantization method to inverse quantize the quantized residual value of the attribute information of the points in the point cloud, and obtain the reconstructed residual value of the attribute information of the points.
  • the target inverse quantization method includes the following at least two inverse quantization methods: a first inverse quantization method, a second inverse quantization method, and a third inverse quantization method.
  • the embodiment of the present application further includes determining the predicted value of the attribute information of the point in the point cloud according to the geometric information of the point in the point cloud.
  • the geometric information of the point in the point cloud is decoded to obtain the reconstructed value of the geometric information of the point in the point cloud, and the predicted value of the attribute information of the point in the point cloud is determined according to the reconstructed value of the geometric information of the point in the point cloud.
  • the reconstructed value of the attribute information of the point in the point cloud is obtained.
  • the decoding process is shown in FIG. 15 .
  • FIG. 15 is a schematic flowchart of a point cloud decoding method provided by an embodiment of the application, as shown in FIG. 15 , including:
  • Each LOD layer includes at least one detail expression layer, and each detail expression layer includes at least one point.
  • S603. Determine the target LOD layer where the current point is located according to the geometric information of the current point.
  • the points in the point cloud are sorted according to the multiple detail expression layers to obtain the LOD order; according to the geometric information of the point to be decoded, at least one decoded adjacent point of the point to be decoded is obtained in the LOD order, and the predicted value of the attribute information of the point to be decoded is determined according to the reconstructed value of the attribute information of the at least one decoded adjacent point.
  • manners of implementing S605 include but are not limited to the following:
  • Mode 1: Decode the code stream to obtain the quantization parameter in the encoding parameters of the current point; obtain the hierarchical index of the target LOD layer, and determine the quantization parameter increment of the target LOD layer according to the hierarchical index; then determine the target quantization step size corresponding to the target LOD layer according to the quantization parameter and the quantization parameter increment of the target LOD layer.
  • determining the quantization parameter increment of the target LOD layer according to the hierarchical index of the target LOD layer includes: if the target LOD layer belongs to the first N LOD layers of the point cloud, determining that the quantization parameter increment of the target LOD layer is j, where N is a positive integer less than or equal to the first threshold and j is an integer greater than 0 and less than or equal to the second threshold; if the target LOD layer does not belong to the first N LOD layers of the point cloud, determining that the quantization parameter increment of the target LOD layer is 0.
  • if the quantization parameter is greater than or equal to the third threshold, j is the first preset value; if the quantization parameter is less than the third threshold, j is the second preset value.
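Mode 1's per-layer increment rule can be sketched as follows. The function and parameter names are hypothetical; the threshold 30 and the preset values 10 and 6 are the example values given later in this description.

```python
def qp_increment_for_lod(layer_index, n, qp, third_threshold=30,
                         first_preset=10, second_preset=6):
    """Quantization parameter increment of one LOD layer under Mode 1.

    A layer among the first N LOD layers gets an increment j chosen by a
    piecewise rule on the quantization parameter; all other layers get 0.
    """
    if layer_index >= n:
        return 0
    # j is the first preset value when Qp reaches the third threshold,
    # otherwise the second preset value
    return first_preset if qp >= third_threshold else second_preset
```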
  • Mode 2: Obtain the hierarchical index of the target LOD layer; according to the hierarchical index, query the quantization step size look-up table for the target quantization step size corresponding to the target LOD layer, where the look-up table includes the correspondence between LOD layers and quantization step sizes.
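Mode 2 amounts to a table look-up. A minimal sketch follows; the table contents here are purely illustrative, since the text does not specify concrete step values.

```python
# Hypothetical look-up table: LOD layer index -> quantization step size.
QSTEP_TABLE = {0: 1, 1: 1, 2: 2, 3: 2, 4: 4, 5: 8}

def qstep_for_lod(layer_index, default_qstep=16):
    """Query the target quantization step size of an LOD layer (Mode 2)."""
    return QSTEP_TABLE.get(layer_index, default_qstep)
```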
  • the number of LOD layers of the level of detail is set by the common coding parameter CTC of the point cloud, and these parameters belong to the attribute parameter set of the point cloud.
  • different quantization step sizes Qstep are used to perform inverse quantization for the divided LODs, and the division of different LODs and the change values of the quantization step sizes may be preset.
  • the quantization step size adapted for the earlier levels of detail can be smaller, and that for the later levels can be larger. Since the earlier levels of detail contain fewer points and the later levels contain more, a level of detail with fewer points is suited to a smaller quantization step size and one with more points to a larger quantization step size, so the overall processing time during decoding will not grow too long.
  • the quantization step size Qstep_i is adapted to the level of detail at which the point P_i is located; performing inverse quantization in this way, instead of with a fixed quantization step size, is beneficial to improve the decoding efficiency.
  • before determining the quantization step size Qstep corresponding to the level of detail LOD according to the hierarchical index and the quantization parameter offset parameter of the current decoding block, the method further includes: determining the value of the quantization parameter optimization enable flag of the current decoding block; when the value of the flag is detected to be the first value, determining the hierarchical index and the quantization parameter offset parameter of the current decoding block.
  • the quantization parameter optimization enable flag can be 0 or 1, and one of the values is recorded as the first value; only when the value of the flag is the first value are the hierarchical index and the quantization parameter offset parameter determined, and the quantization step size Qstep corresponding to the level of detail further determined.
  • the value of the quantization parameter optimization enable flag of the current decoding block is first determined; when it is detected that the value is the first value, the hierarchical index and the quantization parameter offset parameter of the current decoding block are determined, and the quantization step size Qstep corresponding to the level of detail LOD is determined according to them, which is beneficial to improve the decoding efficiency.
  • determining the value of the quantization parameter optimization enable flag of the current decoding block includes: parsing the code stream, and obtaining the value of the quantization parameter optimization enable flag in the parameter set of the current decoding block.
  • the code stream may include a parameter set, and when parsing the code stream, the value of the quantization parameter optimization enable flag in the parameter set of the current decoding block may be obtained.
  • the value of the quantization parameter optimization enable flag in the parameter set of the current decoding block can be obtained by parsing the code stream, which is beneficial to improve decoding efficiency.
  • the parameter set is the attribute parameter set of the current coding block.
  • the value of the quantization parameter optimization enable flag in the attribute parameter set of the current decoding block can be obtained.
  • the value of the quantization parameter optimization enable flag in the attribute parameter set of the current decoding block can be obtained by parsing the code stream, which is beneficial to improve the decoding efficiency.
  • determining the level hierarchy index and the quantization parameter offset parameter of the current decoding block includes: reading the level hierarchy index and the quantization parameter offset parameter in the attribute parameter set.
  • the hierarchical index and quantization parameter offset parameter in the attribute parameter set can be directly read.
  • the hierarchical index and quantization parameter offset parameter in the attribute parameter set can be read, which is beneficial to improve decoding efficiency.
  • determining the quantization step size Qstep_i adapted to the level of detail LOD_i includes: querying the quantization step size look-up table according to the hierarchical index of the level LOD_i to obtain the quantization step size Qstep_i corresponding to the level LOD_i, where the look-up table includes the correspondence between the level of detail LOD and the quantization step size Qstep.
  • the quantization step size look-up table can be queried according to the hierarchical index corresponding to each level LOD_i. Since the look-up table includes the correspondence between the level of detail LOD and the quantization step size Qstep, the quantization step size Qstep_i adapted to a given level of detail LOD_i can be determined directly by table look-up.
  • the quantization step size look-up table is queried according to the hierarchical index of the level LODi to obtain the quantization step size Qstepi corresponding to the level LODi, and the quantization step size look-up table includes the correspondence between the level of detail LOD and the quantization step size Qstep
  • the quantization step size corresponding to different levels of detail levels is different, which is beneficial to improve the flexibility of determining the quantization step size.
  • determining the quantization step size Qstep corresponding to the level LOD of the level of detail according to the level hierarchical index and the quantization parameter offset parameter includes: determining the quantization parameter Qp in the coding parameters of the current coding block; according to the level hierarchical index and The quantization parameter offset parameter determines the quantization parameter offset of each level LOD; according to the quantization parameter Qp and the quantization parameter offset of each level LOD, the quantization step size Qstep corresponding to each level LOD is determined.
  • the quantization parameter Qp may be determined by the QP parameter provided by the attribute parameter set.
  • the quantization parameter offset of each level LOD can be determined according to the level hierarchical index and the quantization parameter offset parameter, and then according to the determined Qp and the quantization of each level LOD Parameter offset, corresponding to determine the quantization step size corresponding to each level.
  • the quantization parameter Qp in the coding parameters of the current coding block is first determined; the quantization parameter offset of each level LOD is then determined according to the hierarchical index and the quantization parameter offset parameter; and the quantization step size Qstep corresponding to each level LOD is determined according to the quantization parameter Qp and the quantization parameter offset of that level. The quantization step size thus differs across levels of detail, which is beneficial to improve the flexibility of determining the quantization step size.
  • determining the quantization parameter offset of each level LOD according to the hierarchical index and the quantization parameter offset parameter includes: judging whether the currently processed level of detail LOD belongs to the level range constrained by the hierarchical index, the level range including the first N levels among the multiple levels of detail, where N is a positive integer less than or equal to the first threshold; if so, determining the value j of the quantization parameter offset of the currently processed level of detail LOD according to the quantization parameter offset parameter, where j is an integer greater than 0 and less than or equal to the second threshold; if not, determining that the value of the quantization parameter offset of the currently processed level of detail LOD is 0.
  • the first threshold may be 14, and the second threshold may be 10.
  • the second threshold may be 10. This considers that the total number of levels of detail is generally 11-14, and that the minimum Qp value of the five code rates set by the common test environment CTC is 10; with j an integer greater than 0 and less than or equal to 10, the quantization parameters corresponding to the first N levels of detail are smaller than those of the later levels, and no negative numbers appear after the subtraction. The larger the quantization parameter, the larger the corresponding quantization step size.
  • the value range of the hierarchical index is 0 to the number of LODs (a positive integer). Assuming the hierarchical index is 6, the quantization step sizes of LOD0-LOD5 (that is, of the first 6 levels of detail) are converted from QP-QpShiftValue (ie, quantization parameter minus quantization offset parameter).
  • the levels of detail may be further divided into multiple groups. For example, if the hierarchical index is the array [4, 5, 6], the levels of detail can be divided into 4 groups according to the three division positions, namely LOD0-LOD3, LOD4, LOD5, and LOD6 together with the levels of detail beyond it.
  • the value j of the quantization parameter offset corresponding to different groups may be different.
  • the value of N can be 6, which is roughly half of the total number of levels of detail; since the number of points in the earlier levels is relatively small, processing them with a small quantization step size will not add too much decoding time.
  • N can be 8, and the points in the detail level of the first 8 layers are inverse quantized with a smaller quantization step size. Since the number of points in the detail level of the first 8 layers is relatively small, it is beneficial to improve the decoding efficiency.
  • the value j of the quantization parameter offset of the currently processed level of detail LOD is determined according to the quantization parameter offset parameter, where j is an integer greater than 0 and less than or equal to the second threshold; if the level does not fall in the range, the value of the quantization parameter offset is determined to be 0. In this way a smaller quantization parameter, and hence a smaller quantization step size, is adapted for the earlier levels of detail, while the later levels use a larger quantization step size than the earlier ones, which is beneficial to improve the decoding efficiency.
  • if the quantization parameter Qp is greater than or equal to the third threshold, then j is the first preset value; if the quantization parameter Qp is less than the third threshold, then j is the second preset value.
  • the third threshold may be 30, the first preset value may be 10, and the second preset value may be 6.
  • the value of j can be determined in the form of a piecewise function according to the size of the quantization parameter Qp corresponding to the current coding block. For example, when the quantization parameter Qp is greater than or equal to 30, j is 10, and when Qp is less than 30, j is 6.
  • determining the quantization parameter offset of each level LOD according to the hierarchical index and the quantization parameter offset parameter includes: judging the level combination corresponding to the currently processed level of detail LOD, querying the hierarchical index to determine the hierarchical index of the currently processed level of detail LOD, and then querying the quantization parameter offset parameter according to that hierarchical index to determine the corresponding quantization parameter offset.
  • the quantization parameter offset parameter can be an array, such as [3, 5, 6]; that is, the quantization parameter offsets of the first to third groups are -3, -5, and -6, respectively. If the determined quantization parameter is QP, the actual quantization parameters of the first to fourth groups are QP-3, QP-5, QP-6, and QP, respectively.
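The worked example above (offset array [3, 5, 6], four groups) can be checked with a short sketch; the function name is hypothetical.

```python
def actual_qp_per_group(qp, offset_params=(3, 5, 6)):
    """Actual quantization parameters of the four level groups.

    With offset parameters (3, 5, 6) the first three groups use
    QP-3, QP-5 and QP-6, and the fourth group uses QP unchanged.
    """
    return [qp - o for o in offset_params] + [qp]
```

For instance, actual_qp_per_group(30) gives [27, 25, 24, 30], matching the QP-3, QP-5, QP-6, QP pattern described above.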
  • any one level combination may include at least two adjacent levels. The multiple level combinations include a first level combination and a second level combination, where the levels in the first level combination precede those in the second level combination, and the quantization parameter corresponding to the first level combination is smaller than that corresponding to the second. Having different level combinations correspond to different quantization parameters is beneficial for further subdividing the quantization step size for different levels of detail, improving the flexibility of determining the quantization step size.
  • the first level combination may include the first two of the multiple levels of detail, with a quantization step size of 1. Since the number of points in the first two levels of detail is relatively small, a quantization step size of 1 will not have much impact on the decoding efficiency.
  • the multiple level combinations may include the first level combination, the second level combination, the third level combination and the fourth level combination sorted from front to back, and any one level combination includes at least 2 adjacent levels before and after;
  • the first level combination adopts 1/sqrt(4) of the original quantization step size as its quantization step size, where the original quantization step size refers to the step size determined according to the quantization parameter Qp; the second level combination adopts 1/sqrt(3) of the original quantization step size; the third level combination adopts 1/sqrt(2) of the original quantization step size; and the fourth level combination uses the original quantization step size as its quantization step size.
  • assuming the original quantization step size (that is, the quantization step size determined according to the quantization parameter Qp corresponding to the current coding block) is a positive integer Qstep, the four level combinations adopt Qstep/sqrt(4), Qstep/sqrt(3), Qstep/sqrt(2), and Qstep as the quantization step sizes of their levels.
  • the later the level combination, the larger the corresponding quantization step size.
  • Different layers in the layer combination use the same quantization step size.
  • the quantization step size corresponding to the detail levels of different levels is further subdivided to improve the flexibility of determining the quantization step size.
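The four sqrt-scaled step sizes described above can be sketched as follows; the helper name is hypothetical, and group_index 0-3 selects the level combination from front to back.

```python
import math

def group_qstep(original_qstep, group_index):
    """Quantization step size of the four level combinations.

    The combinations use Qstep/sqrt(4), Qstep/sqrt(3), Qstep/sqrt(2)
    and Qstep, so later combinations get larger step sizes.
    """
    divisors = (math.sqrt(4), math.sqrt(3), math.sqrt(2), 1.0)
    return original_qstep / divisors[group_index]
```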
  • when determining the quantization parameter offset, the level combination corresponding to the level of detail is determined first, then the hierarchical index corresponding to the level of detail in that combination, and the quantization parameter offset parameter is then queried according to the corresponding hierarchical index to determine the corresponding quantization parameter offset. Different level combinations correspond to different quantization parameter offsets, which is beneficial to further subdivide the quantization step size for different levels of detail and to improve the flexibility of determining the quantization step size.
  • the index of the current point is determined; the quantization weight corresponding to the index of the current point is determined as the first quantization weight of the current point.
  • S607 Perform inverse quantization on the quantized residual value of the attribute information of the current point according to the target quantization step size and the first quantization weight, to obtain the reconstructed residual value of the attribute information of the current point.
  • S607 includes S6071 and S6072:
  • the second quantization weight is less than or equal to the target quantization step size.
  • the above S6071 includes: determining the second quantization weight of the current point using the following formula (10):
  • effectiveQuantWeight = min(w(Q) >> k, Qstep)  (10)
  • effectiveQuantWeight represents the second quantization weight of the current point
  • w(Q) represents the first quantization weight of the current point
  • k represents the number of bits for right-shifting w(Q)
  • Qstep represents the target quantization step size.
  • the value of the second quantization weight is equal to an integer power of two.
  • if the value of the first quantization weight is not equal to an integer power of 2, then based on the value of the first quantization weight of the current point, the integer power of 2 closest to the first quantization weight of the current point is determined as the second quantization weight of the current point.
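A minimal sketch of S6071 follows, assuming formula (10) takes the form effectiveQuantWeight = min(w(Q) >> k, Qstep); this form is an assumption, chosen to be consistent with the variables listed above and with the constraint that the second quantization weight not exceed the target quantization step size.

```python
def effective_quant_weight(w_q, k, qstep):
    """Second quantization weight of the current point (assumed form of
    formula (10)): the first quantization weight right-shifted by k bits,
    capped at Qstep so it never exceeds the target quantization step size."""
    return min(w_q >> k, qstep)
```

For example, with w(Q) = 16, k = 2 and Qstep = 3 the result is min(4, 3) = 3.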
  • the above S6072 includes: performing inverse quantization on the quantization residual value using the target quantization step size of the current point to obtain a weighted residual value; dividing the weighted residual value by the second quantization weight to obtain a reconstructed residual value .
  • the above S6072 includes: performing inverse quantization on the quantization residual value using the following formula (11):
  • attrResidualQuant1 = (attrResidualQuant2 × Qstep) / effectiveQuantWeight  (11)
  • attrResidualQuant2 represents the quantization residual value
  • attrResidualQuant1 represents the reconstruction residual value
  • effectiveQuantWeight represents the second quantization weight of the current point
  • Qstep represents the target quantization step size of the current point
  • "×" represents the multiplication operation
  • "/" represents the division operation.
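Formula (11) can be sketched as follows; integer division is assumed here for the "/" operation, since the exact rounding behavior is not specified in the text.

```python
def dequantize_residual(attr_residual_quant2, qstep, effective_quant_weight):
    """Reconstruction residual per formula (11):
    attrResidualQuant1 = (attrResidualQuant2 * Qstep) / effectiveQuantWeight
    (integer division assumed)."""
    return (attr_residual_quant2 * qstep) // effective_quant_weight
```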
  • the above S6072 includes: updating the target quantization step size according to the second quantization weight of the current point; and performing inverse quantization on the quantization residual value of the attribute information of the current point according to the updated quantization step size.
  • the above-mentioned updating the target quantization step size according to the second quantization weight of the current point includes: updating the quantization step size of the current point using the following formula (12):
  • newQstep = ⌈Qstep / effectiveQuantWeight⌉  (12)
  • effectiveQuantWeight represents the second quantization weight of the current point
  • newQstep represents the quantization step size of the current point after the update based on the second quantization weight
  • Qstep represents the quantization step size of the current point before the update
  • ⌈·⌉ indicates a round-up operation, and "/" indicates a division operation.
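The step size update can be sketched as follows, assuming formula (12) is newQstep = ⌈Qstep / effectiveQuantWeight⌉; this form is an assumption consistent with the round-up and division operations described above.

```python
def updated_qstep(qstep, effective_quant_weight):
    """Updated quantization step size (assumed form of formula (12)):
    the target step divided by the second quantization weight, rounded up."""
    # integer ceiling division avoids floating point
    return (qstep + effective_quant_weight - 1) // effective_quant_weight
```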
  • the embodiments of the present application further include: traversing the points in the point cloud in the reverse order of the coding order of the point cloud, and updating, based on the first quantization weight of the current point, the first quantization weights of the N nearest neighbors of the current point, where N is an integer greater than 0.
  • the initial value of the first quantization weight of each point in the point cloud is a preset value.
  • the initial values of the first quantization weights of points in the first M LOD layers in the LOD layers of the point cloud are greater than the initial values of the first quantization weights of points in the remaining LOD layers, and M is a positive integer .
  • updating the first quantization weights of the N nearest neighbors of the current point based on the first quantization weight of the current point includes: obtaining an influence weight of the current point on each of the N nearest neighbors, the influence weight being related to the position information of the current point and the N nearest neighbors; and updating the first quantization weights of the N nearest neighbors based on the quantization weight of the current point and the influence weight of the current point on each of the N nearest neighbors.
  • the attribute parameter set of the point cloud includes the influence weight of the current point on each of the N nearest neighbors; obtaining the influence weight of the current point on each of the N nearest neighbors includes: obtaining the influence weight by querying the attribute parameter set.
  • updating the first quantization weight of the N nearest neighbors includes:
  • the first quantization weights of the N nearest neighbors are updated based on the following formula (13):
  • w(P_i) ← w(P_i) + ((α(P_i, Q) × w(Q)) >> k)  (13)
  • Q represents the current point
  • P_i represents the i-th nearest neighbor point to Q
  • i = 1, 2, ..., N
  • w(Q) represents the first quantization weight of the current point
  • α(P_i, Q) represents the influence weight of the current point on the neighbor point
  • w(P_i) represents the first quantization weight of the neighbor point P_i after the update
  • k represents the number of bits of the right-shift operation
  • ">>" is the right-shift operation
  • "←" is the assignment operation; "A ← B" means assigning the value of B to A.
  • the value of α(P_i, Q) decreases as i increases.
  • the quantization weight of the point cloud is stored as an array, and the dimension of the array is the same as the number of points in the point cloud.
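The reverse-order weight update of formula (13) above can be sketched as follows. This is an illustrative sketch: the function name, the neighbor/influence-weight data structures, and the default values of k and the preset initial weight are assumptions, not taken from this application.

```python
def update_quant_weights(order, neighbors, alpha, num_points, k=8, init=256):
    """Update first quantization weights per formula (13).

    order: point indices in coding order.
    neighbors: mapping point index -> list of its N nearest-neighbor
        indices, nearest first.
    alpha: alpha[i] is the influence weight of a point on its i-th
        nearest neighbor (decreasing in i).
    """
    # Quantization weights are stored as an array with one entry per
    # point; every entry starts from the same preset value.
    w = [init] * num_points
    # Traverse the points in the reverse of the coding order.
    for q in reversed(order):
        for i, p in enumerate(neighbors[q]):
            # w(P_i) <- w(P_i) + ((alpha(P_i, Q) * w(Q)) >> k)
            w[p] += (alpha[i] * w[q]) >> k
    return w
```

Because the traversal is in reverse coding order, points coded early (which influence many later predictions) accumulate larger weights.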
  • S608 Determine the reconstructed value of the attribute information of the current point according to the predicted value of the attribute information of the current point and the reconstructed residual value.
  • if the current point does not belong to the first 7 LOD layers, the QP value of the encoding parameter is read from the code stream, the QP is converted into the corresponding target quantization step size Qstep, and Qstep is used to inverse quantize the quantized residual obtained by decoding.
  • Let the quantization weight of the current point be w(Q) = min(w(Q), Qstep); after inverse quantization, the inverse-quantized residual is divided by w(Q) to remove the weighting influence.
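The weighted inverse quantization step just described can be sketched as below; the function name and the use of integer division when removing the weighting are assumptions of this sketch.

```python
def inverse_quantize_weighted(quant_residual, qstep, w_q):
    """Inverse quantize a decoded residual and remove the weighting."""
    # Clamp the quantization weight by the step size: w(Q) = min(w(Q), Qstep).
    w = min(w_q, qstep)
    # Inverse quantization (scaling) by the target quantization step size.
    dequant = quant_residual * qstep
    # Divide by w(Q) to remove the weighting influence (integer rounding
    # here is an assumption).
    return dequant // w
```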
  • the decoding process is shown in FIG. 16 .
  • FIG. 16 is a schematic flowchart of a point cloud decoding method provided by an embodiment of the application, as shown in FIG. 16 , including:
  • S702 Divide the point cloud into one or more level-of-detail (LOD) layers according to the geometric information of the points in the point cloud.
  • Each LOD layer includes at least one detail expression layer, and each detail expression layer includes at least one point.
  • S703. Determine the target LOD layer where the current point is located according to the geometric information of the current point.
  • the points in the point cloud are sorted according to the multi-layer detail expression layers to obtain the LOD order; according to the geometric information of the point to be decoded, at least one decoded adjacent point of the point to be decoded is obtained in the LOD order, and the predicted value of the attribute information of the point to be decoded is determined according to the reconstructed values of the attribute information of the at least one decoded adjacent point.
  • S705 Determine whether the current point belongs to a lossless encoding point, if not, execute S706 and S707, and if so, execute S708.
  • S708 Perform lossless decoding on the residual value of the attribute information of the current point to obtain the reconstructed residual value of the attribute information of the current point.
  • S709 Obtain a reconstructed value of the attribute information of the current point according to the reconstructed residual value and the predicted value of the attribute information of the current point.
  • lossless decoding of the residual value of the attribute information of a point may also be referred to as not quantizing the residual value of the attribute information of the point, or as not performing a scaling operation on the residual value of the attribute information of the point.
  • quantizing the residual value of the attribute information of a point is also referred to as performing a scaling operation on the residual value of the attribute information of the point.
  • determining whether the current point belongs to the lossless encoding point in the above S705 includes: S7051 and S7052.
  • S7051 Decode the code stream of the point cloud to obtain first information, where the first information is used to indicate a point where the residual value of the attribute information has undergone lossless encoding;
  • S7052 Determine, according to the first information, whether the residual value of the attribute information of the current point has undergone lossless encoding.
  • the residual information of the attribute information of the point in the point cloud and the first information can be obtained.
  • the first information is used to indicate the points whose residual values of the attribute information have been losslessly encoded (or not quantized).
  • the first information includes identification information of points whose residual values of attribute information have been losslessly coded.
  • the first information may include the number or index of the points whose residual values of the attribute information are not quantized.
  • the first information includes the total number of points whose residual values of the attribute information are losslessly encoded.
  • the implementation manner of determining whether the residual information of the attribute information of the current point has undergone lossless encoding differs with the content of the first information; the specific implementation process includes but is not limited to the following situations:
  • the first information includes N, where N is the total number of points in the point cloud whose residual values of attribute information have undergone lossless encoding.
  • N is an integer multiple of 2.
  • the interval between every two adjacent points among the N points is equal.
  • the above S7052 includes: if it is determined that the current point is one of the N points, determining that the residual information of the attribute information of the current point has undergone lossless encoding. For example, the points in the point cloud are sorted according to the geometric information of the points in the point cloud to obtain a sorted point cloud; for the sorted point cloud, the interval between two adjacent points among the N points is determined according to the total number of points and the value of N, and whether the current point is one of the N points is determined according to the interval.
  • for example, the current point is the 21st point in the sorted point cloud; if, starting from the first point in the point cloud, every 10th point is a point whose residual value of the attribute information has undergone lossless encoding, then the lossless encoding points are the 1st point, the 11th point, the 21st point, and so on, and it is thus determined that the current point is a point whose residual information of the attribute information has undergone lossless encoding.
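The interval-based test in this example can be sketched as follows, assuming the equal spacing described above; the function name and 0-based indexing are conventions of this sketch, not of the application.

```python
def is_lossless_point(idx, total_points, n_lossless):
    """Return True if the point at 0-based position idx in the sorted
    point cloud is one of the n_lossless losslessly coded points."""
    if n_lossless <= 0:
        return False
    # Equal interval between adjacent lossless points, derived from the
    # total number of points and N.
    interval = total_points // n_lossless
    # The 1st, (interval+1)-th, (2*interval+1)-th ... points are lossless.
    return idx % interval == 0
```

With 100 points and N = 10, the lossless points are the 1st, 11th, 21st, ... points, matching the example above.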
  • the preset interval is the interval between two adjacent lossless encoding points in the point cloud.
  • the above S7052 includes: if, according to a preset interval, it is determined that the interval between the current point and the previous point whose residual information of the attribute information has undergone lossless encoding is equal to the preset interval, determining that the residual information of the attribute information of the current point has undergone lossless encoding.
  • the first preset value is used to indicate that the detail expression layer whose total number of included points is less than or equal to the first preset value is divided into the first type of detail expression layer, and the detail expression layer whose total number of included points is greater than the first preset value is divided into the second type of detail expression layer.
  • S7052 includes: obtaining, according to the first preset value, at least one first type of detail expression layer in which the total number of points included in the multi-layer detail expression layer is less than or equal to the first preset value, and the at least one second-type detail expression layer whose total number of included points is greater than the first preset value;
  • if the current point belongs to the first type of detail expression layer, it is determined that the residual information of the attribute information of the current point has undergone lossless encoding.
  • M is the number of points in which the residual value of the attribute information in the second type of detail expression layer is losslessly encoded, and M is a positive integer multiple of 2.
  • the interval between two adjacent points among the M points is equal.
  • the above S7052 includes: if the current point is one of the M points, determining that the residual information of the attribute information of the current point has undergone lossless encoding.
  • the at least one second-type detail expression layer includes L second-type detail expression layers, where L is a positive integer greater than or equal to 2, and the first information includes a first number, a second number, and division information of the P second-type detail expression layers and the Q second-type detail expression layers.
  • the first number is greater than the second number; e.g., the first number is 24 and the second number is 8.
  • the first number is a positive integer multiple of the second number, for example, the first number is twice the second number, for example, the first number is 24 and the second number is 12.
  • the first number is 3 times the second number, for example, the first number is 24 and the second number is 8.
  • the spacing between adjacent ones of the first number of points is equal.
  • the intervals between adjacent ones of the second number of points are equal.
  • the above S7052 includes: dividing the L second-type detail expression layers according to the division information to obtain the P second-type detail expression layers and the Q second-type detail expression layers; if it is determined that the current point is one of the first number of points whose residual values of the attribute information in the P second-type detail expression layers have undergone lossless encoding, determining that the residual information of the attribute information of the current point has undergone lossless encoding; and if it is determined that the current point is one of the second number of points whose residual values of the attribute information in the Q second-type detail expression layers have undergone lossless encoding, determining that the residual information of the attribute information of the current point has undergone lossless encoding;
  • L is a positive integer greater than or equal to 2
  • P and Q are positive integers
  • the sum of P and Q is less than or equal to L
  • the P second-type detail expression layers and Q second-type detail expression layers do not overlap
  • the first quantity is different from the second quantity.
  • the P second-type detail expression layers are the first P second-type detail expression layers among the L second-type detail expression layers.
  • the Q second-type detail expression layers are the last Q second-type detail expression layers among the L second-type detail expression layers.
  • the last second type of detail expression layer in the P second type of detail expression layers is adjacent to the first second type of detail expression layer of the Q second type of detail expression layers.
  • the above-mentioned division information may include identification information of the first second-type detail expression layer of the Q second-type detail expression layers, or may include identification information of the last second-type detail expression layer of the P second-type detail expression layers.
  • the first information further includes: identification information of a point where the residual value of the first attribute information in the second type of detail expression layer is losslessly encoded.
  • the first information further includes: identification information of a point whose residual value of the attribute information is lossless encoded.
  • the residual information of the attribute information of the current point may be inversely quantized in the following manner:
  • Manner 1: In the process of inverse quantizing the residual information of the attribute information of the points in the point cloud, the current point is skipped.
  • the inverse quantization step size of the current point is set to 1.
  • the quantization parameter QP of the current point is set to a target value, where the target value is the QP value corresponding to an inverse quantization step size of 1.
  • the method further includes:
  • the reconstructed value of the attribute information of the current point is determined according to the following formula: reconstructedColor = attrResidual + attrPredValue;
  • where reconstructedColor is the reconstructed value of the attribute information of the current point, attrResidual is the residual value of the attribute information of the current point, and attrPredValue is the predicted value of the attribute information of the current point.
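The reconstruction of reconstructedColor from attrResidual and attrPredValue can be sketched as below; the clamping of the result to the attribute bit depth is an assumption of this sketch rather than something stated above.

```python
def reconstruct_attribute(attr_residual, attr_pred_value, bitdepth=8):
    """reconstructedColor = attrResidual + attrPredValue, clamped to
    the valid range of a bitdepth-bit attribute (clamping is assumed)."""
    value = attr_residual + attr_pred_value
    return max(0, min(value, (1 << bitdepth) - 1))
```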
  • the point cloud decoding process includes: the decoder determines whether the current point belongs to the first 7 LOD layers and whether the current point is an unquantized point.
  • the decoding process is shown in FIG. 17 .
  • FIG. 17 is a schematic flowchart of a point cloud decoding method provided by an embodiment of the application, as shown in FIG. 17 , including:
  • Each LOD layer includes at least one detail expression layer, and each detail expression layer includes at least one point.
  • S803. Determine the target LOD layer where the current point is located according to the geometric information of the current point.
  • the points in the point cloud are sorted according to the multi-layer detail expression layer to obtain the LOD order, and according to the geometric information of the point to be decoded, at least one decoded adjacent point of the point to be decoded is obtained in the LOD order, according to the at least one
  • the reconstructed value of the attribute information of the decoded adjacent point determines the predicted value of the attribute information of the point to be decoded.
  • the point cloud decoding process includes: the decoding end calculates the quantization weights of all points in the point cloud, and determines whether the current point belongs to the first 7 LOD layers and whether the current point is an unquantized point:
  • if the current point belongs to the first 7 LOD layers and the current point is not an unquantized point, the QP value of the encoding parameter is read from the code stream, the DeltaQP value of the current LOD layer is added, QP + DeltaQP is converted into the corresponding Qstep, and Qstep is used to inverse quantize the decoded quantized residual.
  • if the current point does not belong to the first 7 LOD layers and the current point is not an unquantized point, the QP value of the encoding parameter is read from the code stream, the QP is converted into the corresponding Qstep, and Qstep is used to inverse quantize the quantized residual obtained by decoding.
  • FIG. 6 to FIG. 17 are only examples of the present application, and should not be construed as limiting the present application.
  • the size of the sequence numbers of the above-mentioned processes does not imply an execution order; the execution order of each process should be determined by its functions and internal logic, and should not constitute any limitation on the implementation of the embodiments of the present application.
  • the term "and/or" is only an association relationship describing associated objects, indicating that three relationships may exist; specifically, A and/or B can represent three situations: A exists alone, both A and B exist, and B exists alone.
  • the character "/" in this document generally indicates that the related objects are an "or" relationship.
  • FIG. 18 is a schematic block diagram of a point cloud encoder 10 provided by an embodiment of the present application.
  • the point cloud encoder 10 includes:
  • Obtaining unit 11 for obtaining the attribute information of the current point in the point cloud
  • a processing unit 12 configured to process the attribute information of the current point to obtain a residual value of the attribute information of the current point
  • a quantization unit 13 configured to use a target quantization method to quantize the residual value of the attribute information of the current point to obtain the quantized residual value of the attribute information of the current point;
  • the target quantization method includes at least two of the following quantization methods: a first quantization method, a second quantization method, and a third quantization method, where the first quantization method is to set a quantization parameter increment for the quantization parameter of at least one point in the point cloud, the second quantization method is to perform weighting processing on the residual values of the points in the point cloud, and the third quantization method is to perform lossless encoding on the residual value of the attribute information of at least one point in the point cloud.
  • the obtaining unit 11 is further configured to obtain geometric information of the points in the point cloud; the processing unit 12 is configured to divide the point cloud into one or more level-of-detail (LOD) layers according to the geometric information of the points in the point cloud.
  • the target quantization method includes the first quantization method and the second quantization method
  • the quantization unit 13 is specifically configured to: determine, according to the geometric information of the current point, the target LOD layer where the current point is located; determine the target quantization step size adapted to the target LOD layer; determine the first quantization weight of the current point; and quantize the residual value of the attribute information of the current point according to the target quantization step size and the first quantization weight to obtain the quantized residual value of the attribute information of the current point.
  • the target quantization method includes the first quantization method and the third quantization method
  • the quantization unit 13 is specifically configured to: determine, according to the geometric information of the current point, the target LOD layer where the current point is located; if it is determined that the current point belongs to a lossy-coded point, determine the target quantization step size adapted to the target LOD layer; and quantize the residual value of the attribute information of the current point according to the target quantization step size to obtain the quantized residual value of the attribute information of the current point.
  • the target quantization mode includes the first quantization mode, the second quantization mode, and the third quantization mode
  • the quantization unit 13 is specifically configured to: determine, according to the geometric information of the current point, the target LOD layer where the current point is located; if it is determined that the current point belongs to a lossy-coded point, determine the target quantization step size adapted to the target LOD layer and determine the first quantization weight of the current point; and quantize the residual value of the attribute information of the current point according to the target quantization step size and the first quantization weight to obtain the quantized residual value of the attribute information of the current point.
  • the quantization unit 13 is further configured to perform lossless encoding on the residual value of the attribute information of the current point if it is determined that the current point belongs to a point of lossless encoding.
  • the quantization unit 13 is specifically configured to: obtain the hierarchical index of the target LOD layer; and query, according to the hierarchical index of the target LOD layer, a quantization step size lookup table for the target quantization step size corresponding to the target LOD layer, where the quantization step size lookup table includes the correspondence between LOD layers and quantization step sizes.
  • the quantization unit 13 is specifically configured to: determine the quantization parameter in the coding parameters of the current point; obtain the hierarchical index of the target LOD layer, and determine the quantization parameter increment of the target LOD layer according to the hierarchical index of the target LOD layer; and determine the target quantization step size corresponding to the target LOD layer according to the quantization parameter and the quantization parameter increment of the target LOD layer.
  • the quantization unit 13 is specifically configured to: if the target LOD layer belongs to the first N LOD layers of the point cloud, determine that the quantization parameter increment of the target LOD layer is j, where N is a positive integer less than or equal to a first threshold and j is an integer greater than 0 and less than or equal to a second threshold; and if the target LOD layer does not belong to the first N LOD layers of the point cloud, determine that the quantization parameter increment of the target LOD layer is 0.
  • the j is a first preset value
  • the j is a second preset value.
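The per-layer quantization parameter increment just described can be sketched as follows; the function name and 0-based layer indexing are assumptions of this sketch.

```python
def target_qp(base_qp, lod_index, n, j):
    """Return QP + DeltaQP for the target LOD layer.

    lod_index: 0-based hierarchical index of the target LOD layer.
    n: the first n LOD layers receive the increment.
    j: quantization parameter increment (an integer greater than 0).
    """
    # Layers in the first n LOD layers get increment j; the rest get 0.
    delta_qp = j if lod_index < n else 0
    return base_qp + delta_qp
```

The resulting QP + DeltaQP is then converted into the corresponding quantization step size Qstep, as described above.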
  • the quantization unit 13 is specifically configured to determine the index of the current point; and determine the quantization weight corresponding to the index of the current point as the first quantization weight of the current point.
  • the quantization unit 13 is specifically configured to determine the second quantization weight of the current point according to the first quantization weight of the current point; according to the target quantization step size and the second quantization weight, The residual value of the attribute information of the current point is quantized to obtain the quantized residual value.
  • the second quantization weight is less than or equal to the target quantization step size.
  • the quantization unit 13 is specifically configured to determine the second quantization weight of the current point using the following formula: effectiveQuantWeight = min(w(Q) >> k, Qstep);
  • where effectiveQuantWeight represents the second quantization weight of the current point, w(Q) represents the first quantization weight of the current point, k represents the number of bits by which w(Q) is right-shifted, and Qstep represents the target quantization step size.
  • the value of the second quantization weight is equal to an integer power of 2.
  • if the value of the first quantization weight is not equal to an integer power of 2, the quantization unit 13 is specifically configured to determine, based on the value of the first quantization weight of the current point, the integer power of 2 closest to the value of the first quantization weight of the current point as the second quantization weight of the current point.
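The two derivations of the second quantization weight described above can be sketched as follows; the function names, and the choice of the smaller power of 2 on a tie, are assumptions of this sketch.

```python
def effective_quant_weight(w_q, k, qstep):
    """effectiveQuantWeight = min(w(Q) >> k, Qstep), per the formula above."""
    return min(w_q >> k, qstep)

def nearest_power_of_two(w_q):
    """Integer power of 2 closest to w_q, used when w(Q) is not itself
    a power of 2 (tie handling here is an assumption)."""
    if w_q <= 1:
        return 1
    lo = 1 << (w_q.bit_length() - 1)  # largest power of 2 <= w_q
    hi = lo << 1                      # smallest power of 2 > w_q
    return lo if (w_q - lo) <= (hi - w_q) else hi
```

Keeping the second quantization weight at an integer power of 2 lets the weighting and its removal be implemented with shifts instead of multiplications and divisions.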
  • the quantization unit 13 is specifically configured to multiply the residual value of the attribute information of the current point by the second quantization weight to obtain a weighted residual value, and to quantize the weighted residual value using the target quantization step size to obtain the quantized residual value of the attribute information of the current point.
  • the quantization unit 13 is specifically configured to use the following formula to quantize the residual value of the attribute information of the current point:
  • attrResidualQuant2 = attrResidualQuant1 × effectiveQuantWeight / Qstep;
  • the attrResidualQuant2 represents the quantized residual value of the attribute information of the current point
  • attrResidualQuant1 represents the residual value of the attribute information of the current point
  • the effectiveQuantWeight represents the second quantization weight of the current point
  • the Qstep represents the target quantization step size.
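The weighted quantization formula above can be sketched as follows; the function name and the use of integer division (for a non-negative residual) are assumptions of this sketch.

```python
def quantize_weighted(attr_residual_quant1, effective_quant_weight, qstep):
    """attrResidualQuant2 = attrResidualQuant1 * effectiveQuantWeight / Qstep.

    Integer division is assumed here; for negative residuals an
    implementation would typically round toward zero instead.
    """
    return (attr_residual_quant1 * effective_quant_weight) // qstep
```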
  • the quantization unit 13 is specifically configured to update the target quantization step size according to the second quantization weight of the current point, and to quantize the residual value of the attribute information of the current point using the updated quantization step size.
  • the quantization unit 13 is specifically configured to update the target quantization step size using the following formula: newQstep = ⌈Qstep / effectiveQuantWeight⌉;
  • where effectiveQuantWeight represents the second quantization weight of the current point, newQstep represents the quantization step size of the current point updated based on the target quantization step size, and Qstep represents the target quantization step size.
  • the quantization unit 13 is further configured to traverse the points in the point cloud in the reverse order of the encoding order of the point cloud, and to update the first quantization weights of the N nearest neighbors of the current point based on the first quantization weight of the current point, where N is an integer greater than 0.
  • the initial value of the first quantization weight of each point in the point cloud is a preset value.
  • the initial value of the first quantization weight of the points in the first M LOD layers of the point cloud is greater than the initial value of the first quantization weight of the points in the remaining LOD layers, where M is a positive integer.
  • the quantization unit 13 is specifically configured to obtain an influence weight of the current point on each of the N nearest neighbors, where the influence weight is related to the position information of the current point and the N nearest neighbors, and to update the first quantization weights of the N nearest neighbors based on the quantization weight of the current point and the influence weight of the current point on each of the N nearest neighbors.
  • the attribute parameter set of the point cloud includes the influence weight of the current point on each of the N nearest neighbors; the quantization unit 13 is specifically configured to obtain the influence weight of the current point on each of the N nearest neighbors by querying the attribute parameter set.
  • the quantization unit 13 is specifically configured to update the first quantization weights of the N nearest neighbors based on the following formula: w(P_i) ← w(P_i) + ((α(P_i, Q) × w(Q)) >> k);
  • where Q represents the current point; P_i represents the i-th nearest neighbor of Q, i = 1, 2, ..., N; w(Q) represents the first quantization weight of the current point; α(P_i, Q) represents the influence weight of the current point on the neighbor point P_i; w(P_i) represents the updated first quantization weight of the neighbor point P_i; and k represents the number of bits of the right-shift operation.
  • the value of ⁇ (P i , Q) decreases as i increases.
  • the quantization weight of the point cloud is stored as an array, and the dimension of the array is the same as the number of points in the point cloud.
  • the encoder further includes an encoding unit 14, and the encoding unit 14 is configured to perform lossless encoding on the residual value of the attribute information of at least one point in the point cloud.
  • the at least one point includes N points, and the N is an integer multiple of 2.
  • the at least one point includes N points, and the interval between each adjacent two points in the N points is equal.
  • the encoding unit 14 is specifically configured to perform lossless encoding on the attribute residual value of at least one point in at least one detail expression layer of the multi-layer detail expression layers.
  • the encoding unit 14 is specifically configured to: obtain at least one first-type detail expression layer in which the total number of included points is less than or equal to a first preset value, and at least one second-type detail expression layer in which the total number of included points is greater than the first preset value; perform lossless encoding on the residual values of the attribute information of all points in the first-type detail expression layer; and perform lossless encoding on the residual value of the attribute information of at least one point in the second-type detail expression layer.
  • the encoding unit 14 is specifically configured to perform lossless encoding on residual values of attribute information of M points in the second type of detail expression layer, where M is a positive integer multiple of 2.
  • the at least one second-type detail expression layer includes L second-type detail expression layers, where L is a positive integer greater than or equal to 2, and the encoding unit 14 is specifically configured to: perform lossless encoding on the residual values of the attribute information of a first number of points in each of the P second-type detail expression layers; and perform lossless encoding on the residual values of the attribute information of a second number of points in each of the Q second-type detail expression layers;
  • where P and Q are positive integers, the sum of P and Q is less than or equal to L, the P second-type detail expression layers do not overlap the Q second-type detail expression layers, and the first number is different from the second number.
  • the P second type detail expression layers are the first P second type detail expression layers in the L second type detail expression layers.
  • the Q second-type detail expression layers are the last Q second-type detail expression layers in the L second-type detail expression layers.
  • the last second type detail expression layer in the P second type detail expression layers is adjacent to the first second type detail expression layer of the Q second type detail expression layers.
  • the first number is greater than the second number.
  • the first quantity is a positive integer multiple of the second quantity.
  • the intervals between two adjacent points in the first number of points are equal.
  • the interval between two adjacent points in the second number of points is equal.
  • the encoding unit 14 is further configured to determine the reconstructed value of the attribute information of the current point according to the residual value of the attribute information of the current point and the predicted value of the attribute information.
  • the encoding unit 14 is specifically configured to determine the reconstructed value of the attribute information of the current point according to the following formula: reconstructedColor = attrResidual + attrPredValue;
  • where reconstructedColor is the reconstructed value of the attribute information of the current point, attrResidual is the residual value of the attribute information of the current point, and attrPredValue is the predicted value of the attribute information of the current point.
  • the encoding unit 14 is further configured to generate a code stream, where the code stream includes first information, where the first information is used to indicate a point at which the residual value of the attribute information is subjected to lossless encoding.
  • the first information includes identification information of a point where the residual value of the attribute information is losslessly encoded.
  • the first information includes the number of points where the residual value of the attribute information is losslessly encoded.
  • the first information includes the first quantity, the second quantity, and the division information of the P second-type detail expression layers and the Q second-type detail expression layers.
  • the division information further includes identification information of the first second-type detail expression layer of the Q second-type detail expression layers, or includes identification information of the last second-type detail expression layer of the P second-type detail expression layers.
  • the first information further includes: identification information of the first point, in the second-type detail expression layers, whose residual value of attribute information is losslessly encoded.
  • the encoding unit 14 is specifically configured to skip, in the process of quantizing the residual values of the attribute information of the points in the point cloud, the at least one point whose residual value of attribute information is losslessly encoded; or to set the quantization parameter QP of the at least one point whose residual value of attribute information is losslessly encoded to a target value, where the target value is the QP value corresponding to a quantization step size of 1.
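The two encoder-side options above (skipping the lossless points, or giving them a QP whose quantization step size is 1) have the same effect: the residual of a lossless point reaches the code stream unquantized. A hedged sketch with a hypothetical `quantize_residuals` helper (floor division is an assumed rounding rule):

```python
def quantize_residuals(residuals, lossless_indices, qstep):
    """Quantize attribute residuals while leaving losslessly coded points untouched.

    A point in lossless_indices is skipped by the quantizer, which is
    equivalent to assigning it a QP whose quantization step size is 1.
    """
    lossless = set(lossless_indices)
    out = []
    for i, r in enumerate(residuals):
        if i in lossless:
            out.append(r)            # lossless: residual passes through unchanged
        else:
            out.append(r // qstep)   # lossy: uniform quantization (assumed floor rounding)
    return out
```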
  • the apparatus embodiments and the method embodiments may correspond to each other, and similar descriptions may refer to the method embodiments. To avoid repetition, details are not repeated here.
  • the point cloud encoder 10 shown in FIG. 18 can execute the methods of the embodiments of the present application, and the aforementioned and other operations and/or functions of the various units in the point cloud encoder 10 are for implementing the methods 100 to 400, respectively. For the sake of brevity, the corresponding processes in the method will not be repeated here.
  • FIG. 19 is a schematic block diagram of a point cloud decoder 20 provided by an embodiment of the present application.
  • the point cloud decoder 20 may include:
  • the decoding unit 21 is used for analyzing the code stream of the point cloud to obtain the quantized residual value of the attribute information of the current point of the point cloud;
  • an inverse quantization unit 22 configured to perform inverse quantization on the quantized residual value of the attribute information of the current point in a target inverse quantization manner, to obtain the reconstructed residual value of the attribute information of the current point;
  • the target inverse quantization method includes at least two of the following inverse quantization methods: a first inverse quantization method, a second inverse quantization method, and a third inverse quantization method, where the first inverse quantization method is to set an inverse quantization parameter increment for the inverse quantization parameter of at least one point in the point cloud, the second inverse quantization method is to perform de-weighting processing on the residual values of points in the point cloud, and the third inverse quantization method is to perform lossless decoding on the residual value of the attribute information of at least one point in the point cloud.
  • the decoding unit 21 is configured to obtain the geometric information of the point in the point cloud; according to the geometric information of the point in the point cloud, the point cloud is divided into one or more level of detail LOD layers.
  • the target inverse quantization method includes the first inverse quantization method and the second inverse quantization method
  • the inverse quantization unit 22 is specifically configured to determine, according to the geometric information of the current point, the target LOD layer where the current point is located; determine the target quantization step size adapted to the target LOD layer; determine the first quantization weight of the current point; and perform inverse quantization on the quantized residual value of the attribute information of the current point according to the target quantization step size and the first quantization weight.
  • the target inverse quantization method includes the first inverse quantization method and the third inverse quantization method.
  • the inverse quantization unit 22 is specifically configured to determine, according to the geometric information of the current point, the target LOD layer where the current point is located; if it is determined that the current point is a lossy-coded point, determine the target quantization step size adapted to the target LOD layer, and perform inverse quantization on the quantized residual value of the attribute information of the current point according to the target quantization step size.
  • the target inverse quantization method includes the first inverse quantization method, the second inverse quantization method, and the third inverse quantization method.
  • the inverse quantization unit 22 is specifically configured to determine, according to the geometric information of the current point, the target LOD layer where the current point is located; if it is determined that the current point is a lossy-coded point, determine the target quantization step size adapted to the target LOD layer and the first quantization weight of the current point, and perform inverse quantization on the quantized residual value of the attribute information of the current point according to the target quantization step size and the first quantization weight.
  • the decoding unit 21 is further configured to perform lossless decoding on the residual value of the attribute information of the current point if it is determined that the current point is a losslessly encoded point.
  • the inverse quantization unit 22 is specifically configured to obtain the hierarchical index of the target LOD layer, and to query, according to the hierarchical index of the target LOD layer, a quantization step size lookup table to obtain the target quantization step size corresponding to the target LOD layer, where the quantization step size lookup table includes the correspondence between LOD layers and quantization step sizes.
  • the inverse quantization unit 22 is specifically configured to decode the code stream to obtain the quantization parameter in the encoding parameters of the current point; determine the quantization parameter increment of the target LOD layer according to the hierarchical index of the target LOD layer; and determine the target quantization step size corresponding to the target LOD layer according to the quantization parameter and the quantization parameter increment of the target LOD layer.
  • the inverse quantization unit 22 is specifically configured to, if the target LOD layer belongs to the first N LOD layers of the point cloud, determine that the quantization parameter increment of the target LOD layer is j, and the N is a positive integer less than or equal to the first threshold, and the j is an integer greater than 0 and less than or equal to the second threshold; if the target LOD layer does not belong to the first N LOD layers of the point cloud, determine the The quantization parameter increment of the target LOD layer is 0.
  • if the quantization parameter is greater than or equal to the third threshold, the j is a first preset value; if the quantization parameter is less than the third threshold, the j is a second preset value.
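The two bullets above reduce to a small decision rule: the first N LOD layers receive a nonzero QP increment j, the remaining layers receive 0, and j itself is picked by comparing the quantization parameter against the third threshold. A sketch under those assumptions (the "greater than or equal" branch for the first preset value is inferred, since the text only states the "less than" branch):

```python
def choose_j(qp: int, third_threshold: int,
             first_preset: int, second_preset: int) -> int:
    """Pick the increment j from the quantization parameter.

    The comparison direction for the first preset value is an assumption.
    """
    return first_preset if qp >= third_threshold else second_preset

def qp_increment_for_lod(lod_index: int, n: int, j: int) -> int:
    """The first n LOD layers get increment j; the remaining layers get 0."""
    return j if lod_index < n else 0
```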
  • the inverse quantization unit 22 is specifically configured to determine the index of the current point; and determine the quantization weight corresponding to the index of the current point as the first quantization weight of the current point.
  • the inverse quantization unit 22 is specifically configured to determine the second quantization weight of the current point according to the first quantization weight of the current point; according to the target quantization step size and the second quantization weight , performing inverse quantization on the quantized residual value of the attribute information of the current point to obtain the reconstructed residual value.
  • the second quantization weight is less than or equal to the target quantization step size.
  • the inverse quantization unit 22 is specifically configured to use the following formula to determine the second quantization weight of the current point: effectiveQuantWeight = min(w(Q) >> k, Qstep), where effectiveQuantWeight represents the second quantization weight of the current point, w(Q) represents the first quantization weight of the current point, k represents the number of bits by which w(Q) is right-shifted, and Qstep represents the target quantization step size.
  • the value of the second quantization weight is equal to an integer power of 2.
  • the value of the first quantization weight is not equal to an integer power of 2.
  • the inverse quantization unit 22 is specifically configured to determine, as the second quantization weight of the current point, the integer power of 2 closest to the value of the first quantization weight of the current point.
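Two ways of deriving the second quantization weight appear above: capping a right-shifted first weight at the quantization step size (consistent with the constraint that the second weight never exceeds the target step size), and rounding the first weight to the nearest integer power of 2. A sketch of both, assuming effectiveQuantWeight = min(w(Q) >> k, Qstep) as the capping formula and round-half-down as the tie rule:

```python
def effective_quant_weight(w_q: int, k: int, qstep: int) -> int:
    """Right-shift the first quantization weight by k bits and cap it at Qstep,
    so the second quantization weight never exceeds the target step size."""
    return min(w_q >> k, qstep)

def nearest_power_of_two(w_q: int) -> int:
    """Round the first quantization weight to the closest integer power of 2
    (ties round down; the tie rule is an assumption)."""
    if w_q <= 1:
        return 1
    hi = 1 << w_q.bit_length()   # smallest power of 2 strictly greater than w_q
    lo = hi >> 1                 # largest power of 2 less than or equal to w_q
    return lo if w_q - lo <= hi - w_q else hi
```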
  • the inverse quantization unit 22 is specifically configured to perform inverse quantization on the quantized residual value using the target quantization step size of the current point to obtain a weighted residual value, and to divide the weighted residual value by the second quantization weight to obtain the reconstructed residual value.
  • the inverse quantization unit 22 is specifically configured to perform inverse quantization on the quantization residual value by using the following formula:
  • attrResidualQuant1 = (attrResidualQuant2 × Qstep) / effectiveQuantWeight;
  • attrResidualQuant2 represents the quantized residual value
  • attrResidualQuant1 represents the reconstructed residual value
  • effectiveQuantWeight represents the second quantization weight of the current point
  • Qstep represents the target quantization step size of the current point.
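Put together, the inverse quantization of a weighted residual is the formula attrResidualQuant1 = (attrResidualQuant2 × Qstep) / effectiveQuantWeight. A minimal sketch (integer floor division is an assumption; the rounding rule is not specified above):

```python
def inverse_quantize(attr_residual_quant2: int, qstep: int,
                     effective_quant_weight: int) -> int:
    """Scale the quantized residual back up by Qstep, then remove the
    quantization weight: (attrResidualQuant2 * Qstep) / effectiveQuantWeight."""
    return (attr_residual_quant2 * qstep) // effective_quant_weight
```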
  • the inverse quantization unit 22 is specifically configured to update the target quantization step size according to the second quantization weight of the current point, and to perform inverse quantization on the quantized residual value of the attribute information of the current point using the updated quantization step size.
  • the inverse quantization unit 22 is specifically configured to update the quantization step size of the current point by using the following formula: newQstep = Qstep / effectiveQuantWeight, where effectiveQuantWeight represents the second quantization weight of the current point, newQstep represents the updated quantization step size of the current point, and Qstep represents the quantization step size of the current point before the update.
  • the decoding unit 21 is further configured to traverse the points in the point cloud in the reverse order of the encoding order of the point cloud, and to update the first quantization weights of the N nearest neighbors of the current point based on the first quantization weight of the current point, where N is an integer greater than 0.
  • the initial value of the first quantization weight of each point in the point cloud is a preset value.
  • the initial value of the first quantization weight of the points in the first M LOD layers in the LOD layer of the point cloud is greater than the initial value of the first quantization weight of the points in the remaining LOD layers, and the M is positive integer.
  • the decoding unit 21 is specifically configured to obtain an influence weight of the current point on each of the N nearest neighbors, where the influence weight is related to the position information of the current point and the N nearest neighbors, and to update the first quantization weights of the N nearest neighbors based on the first quantization weight of the current point and the influence weight of the current point on each of the N nearest neighbors.
  • the attribute parameter set of the point cloud includes the influence weight of the current point on each of the N nearest neighbors; the decoding unit 21 is further configured to obtain, by querying the attribute parameter set, the influence weight of the current point on each of the N nearest neighbors.
  • the decoding unit 21 is further configured to update the first quantization weights of the N nearest neighbors based on the following formula: w(P i ) ← w(P i ) + (α(P i , Q) × w(Q)) >> k, where Q represents the current point, P i represents the i-th nearest neighbor of Q, i = 1, 2, ..., N, w(Q) represents the first quantization weight of the current point, α(P i , Q) represents the influence weight of the current point on the neighbor point P i , w(P i ) represents the updated first quantization weight of the neighbor point P i , and k represents the number of bits of the right-shift operation.
  • the value of ⁇ (P i , Q) decreases as i increases.
  • the quantization weight of the point cloud is stored as an array, and the dimension of the array is the same as the number of points in the point cloud.
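The neighbor-weight update described above can be sketched as a reverse traversal of the coding order, where each point adds a scaled share of its own weight to its N nearest neighbors. The update rule w(P_i) += (α(P_i, Q) × w(Q)) >> k is reconstructed from the symbol list and is therefore an assumption, as are the default initial weight and shift; storing the weights per point index mirrors the array storage noted above.

```python
def propagate_quant_weights(coding_order, neighbors, influence,
                            init_weight=256, k=8):
    """Traverse points in reverse coding order, updating each point's nearest
    neighbors: w(P_i) += (alpha(P_i, Q) * w(Q)) >> k.

    neighbors[q]       -> list of nearest-neighbor indices of point q
    influence[(q, p)]  -> integer influence weight alpha(P_i, Q)
    """
    w = {q: init_weight for q in coding_order}   # one quantization weight per point
    for q in reversed(coding_order):
        for p in neighbors.get(q, []):
            w[p] += (influence[(q, p)] * w[q]) >> k
    return w
```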
  • the decoding unit 21 is further configured to decode the code stream of the point cloud to obtain first information, where the first information is used to indicate the points whose residual values of attribute information have undergone lossless encoding, and to determine, according to the first information, whether the residual value of the attribute information of the current point has undergone lossless encoding.
  • the first information includes N, where N is the total number of points in the point cloud whose residual values of attribute information have undergone lossless encoding.
  • the N is an integer multiple of 2.
  • the interval between each adjacent two points in the N points is equal.
  • the decoding unit 21 is specifically configured to, if it is determined that the current point is one of the N points, determine that the residual information of the attribute information of the current point has undergone lossless encoding.
  • the decoding unit 21 is specifically configured to determine, according to a preset interval, whether the residual value of the attribute information of the current point has undergone lossless encoding; if the interval between the current point and the previous point whose residual value of attribute information was losslessly encoded is equal to the preset interval, it is determined that the residual value of the attribute information of the current point has undergone lossless encoding.
  • the decoding unit 21 is specifically configured to divide, according to a first preset value, the multiple detail expression layers into at least one first-type detail expression layer whose total number of included points is less than or equal to the first preset value and at least one second-type detail expression layer whose total number of included points is greater than the first preset value; if the current point belongs to a first-type detail expression layer, it is determined that the residual value of the attribute information of the current point has undergone lossless encoding.
  • the decoding unit 21 is specifically configured to, if the current point is one of the M points, determine that the residual information of the attribute information of the current point has undergone lossless encoding.
  • the at least one second-type detail expression layer includes L second-type detail expression layers, where L is a positive integer greater than or equal to 2; if the first information includes a first number, a second number, and the division information of the P second-type detail expression layers and the Q second-type detail expression layers, the decoding unit 21 is specifically configured to divide the L second-type detail expression layers according to the division information to obtain the P second-type detail expression layers and the Q second-type detail expression layers; if it is determined that the current point is one of the first number of points in the P second-type detail expression layers whose residual values of attribute information have undergone lossless encoding, it is determined that the residual value of the attribute information of the current point has undergone lossless encoding; and if it is determined that the current point is one of the second number of points in the Q second-type detail expression layers whose residual values of attribute information have undergone lossless encoding, it is determined that the residual value of the attribute information of the current point has undergone lossless encoding.
  • the P second type detail expression layers are the first P second type detail expression layers in the L second type detail expression layers.
  • the Q second-type detail expression layers are the last Q second-type detail expression layers in the L second-type detail expression layers.
  • the last second type detail expression layer in the P second type detail expression layers is adjacent to the first second type detail expression layer of the Q second type detail expression layers.
  • the division information further includes the identification information of the first second-type detail expression layer of the Q second-type detail expression layers, or the identification information of the last second-type detail expression layer of the P second-type detail expression layers.
  • the first information further includes: identification information of the first point, in the second-type detail expression layers, whose residual value of attribute information has been losslessly encoded.
  • the first information further includes: identification information of a point whose residual value of the attribute information is lossless encoded.
  • the first number is greater than the second number.
  • the first quantity is a positive integer multiple of the second quantity.
  • the intervals between two adjacent points in the first number of points are equal.
  • the interval between two adjacent points in the second number of points is equal.
  • the decoding unit 21 is specifically configured to determine the reconstructed value of the attribute information of the current point according to the following formula: reconstructedColor = attrResidual + attrPredValue, where reconstructedColor is the reconstructed value of the attribute information of the current point, attrResidual is the residual value of the attribute information of the current point, and attrPredValue is the predicted value of the attribute information of the current point.
  • the apparatus embodiments and the method embodiments may correspond to each other, and similar descriptions may refer to the method embodiments. To avoid repetition, details are not repeated here.
  • the point cloud decoder 20 shown in FIG. 19 may correspond to the corresponding subject in performing the methods 500 , 600 and/or 700 of the embodiments of the present application, and the aforementioned and other operations of the respective units in the point cloud decoder 20 and/or functions are respectively to implement the corresponding processes in the respective methods such as methods 500, 600 and/or 700, and are not repeated here for the sake of brevity.
  • the functional unit may be implemented in the form of hardware, may also be implemented by an instruction in the form of software, or may be implemented by a combination of hardware and software units.
  • the steps of the method embodiments in the embodiments of the present application may be completed by an integrated logic circuit of hardware in the processor and/or by instructions in the form of software, and the steps of the methods disclosed in combination with the embodiments of the present application may be directly embodied as being executed by a hardware decoding processor, or executed by a combination of hardware and software units in the decoding processor.
  • the software unit may be located in random access memory, flash memory, read-only memory, programmable read-only memory, electrically erasable programmable memory, registers, and other storage media mature in the art.
  • the storage medium is located in the memory, and the processor reads the information in the memory, and completes the steps in the above method embodiments in combination with its hardware.
  • FIG. 20 is a schematic block diagram of an electronic device 30 provided by an embodiment of the present application.
  • the electronic device 30 may be the point cloud encoder or the point cloud decoder described in this embodiment of the application, and the electronic device 30 may include:
  • the processor 32 can call and run the computer program 34 from the memory 33 to implement the methods in the embodiments of the present application.
  • the processor 32 may be configured to perform the steps of the method 200 described above according to instructions in the computer program 34 .
  • the processor 32 may include, but is not limited to:
  • a digital signal processor (DSP)
  • an application-specific integrated circuit (ASIC)
  • a field-programmable gate array (FPGA)
  • the memory 33 includes but is not limited to:
  • Non-volatile memory may be read-only memory (ROM), programmable read-only memory (PROM), erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), or flash memory. Volatile memory may be random access memory (RAM), which acts as an external cache.
  • By way of example but not limitation, many forms of RAM are available, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), and direct Rambus RAM (DR RAM).
  • the computer program 34 may be divided into one or more units, and the one or more units are stored in the memory 33 and executed by the processor 32 to complete the procedures provided by the present application.
  • the one or more units may be a series of computer program instruction segments capable of performing specific functions, and the instruction segments are used to describe the execution process of the computer program 34 in the electronic device 30 .
  • the electronic device 30 may further include:
  • a transceiver 33 which can be connected to the processor 32 or the memory 33 .
  • the processor 32 can control the transceiver 33 to communicate with other devices, specifically, can send information or data to other devices, or receive information or data sent by other devices.
  • the transceiver 33 may include a transmitter and a receiver.
  • the transceiver 33 may further include antennas, and the number of the antennas may be one or more.
  • each component in the electronic device 30 is connected through a bus system, wherein the bus system includes a power bus, a control bus and a status signal bus in addition to a data bus.
  • FIG. 21 is a schematic block diagram of a point cloud encoding and decoding system 40 provided by an embodiment of the present application.
  • the point cloud encoding and decoding system 40 may include a point cloud encoder 41 and a point cloud decoder 42, where the point cloud encoder 41 is configured to execute the point cloud encoding method involved in the embodiments of the present application, and the point cloud decoder 42 is configured to execute the point cloud decoding method involved in the embodiments of the present application.
  • the present application also provides a computer storage medium on which a computer program is stored, and when the computer program is executed by a computer, enables the computer to execute the methods of the above method embodiments.
  • the embodiments of the present application further provide a computer program product including instructions, and when the instructions are executed by a computer, the instructions cause the computer to execute the method of the above method embodiments.
  • An embodiment of the present application further provides a code stream, where the code stream is obtained through the encoding method shown in FIG. 6 , FIG. 7 , FIG. 10 or FIG. 13 .
  • the computer program product includes one or more computer instructions.
  • the computer may be a general purpose computer, a special purpose computer, a computer network, or other programmable device.
  • the computer instructions may be stored on a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by wired means (e.g., coaxial cable, optical fiber, digital subscriber line (DSL)) or wireless means (e.g., infrared, radio, microwave).
  • the computer-readable storage medium can be any available medium that can be accessed by a computer or a data storage device such as a server, data center, etc. that includes one or more available media integrated.
  • the available media may be magnetic media (e.g., floppy disk, hard disk, magnetic tape), optical media (e.g., digital video disc (DVD)), or semiconductor media (e.g., solid state disk (SSD)).
  • the disclosed system, apparatus and method may be implemented in other manners.
  • the device embodiments described above are only illustrative.
  • the division of units is only a logical functional division; in actual implementation, there may be other division manners. For example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented.
  • the shown or discussed mutual coupling or direct coupling or communication connection may be through some interfaces, indirect coupling or communication connection of devices or units, and may be in electrical, mechanical or other forms.
  • Units described as separate components may or may not be physically separated, and components shown as units may or may not be physical units, that is, may be located in one place, or may be distributed to multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution in this embodiment.
  • each functional unit in each embodiment of the present application may be integrated into one processing unit, or each unit may exist physically alone, or two or more units may be integrated into one unit.


Abstract

The present application provides a point cloud encoding and decoding method and system, and a point cloud encoder and a point cloud decoder. The method includes: obtaining attribute information of a current point in a point cloud; processing the attribute information of the current point to obtain a residual value of the attribute information of the current point; and quantizing the residual value of the attribute information of the current point using a target quantization method to obtain a quantized residual value of the attribute information of the current point. The target quantization method includes at least two of the following quantization methods: a first quantization method, a second quantization method, and a third quantization method, where the first quantization method sets a quantization parameter increment for the quantization parameter of at least one point in the point cloud, the second quantization method performs weighting processing on the residual values of points in the point cloud, and the third quantization method performs lossless encoding on the residual value of the attribute information of at least one point in the point cloud, thereby improving the quantization effect on the attribute information of the points in the point cloud.

Description

Point cloud encoding and decoding method and system, and point cloud encoder and point cloud decoder
This application claims priority to the following three patent applications: patent application No. PCT/CN2020/117941 filed on September 25, 2020, patent application No. PCT/CN2020/138423 filed on December 22, 2020, and patent application No. PCT/CN2020/138421 filed on December 22, 2020, the entire contents of which are incorporated herein by reference.
Technical Field
The present application relates to the technical field of point cloud encoding and decoding, and in particular, to a point cloud encoding and decoding method and system, and a point cloud encoder and a point cloud decoder.
Background
Point cloud data is formed by capturing the surface of an object with an acquisition device, and includes hundreds of thousands of points or even more. In video production, point cloud data is transmitted between a point cloud encoding device and a point cloud decoding device in the form of a point cloud media file. Such a huge number of points poses a challenge to transmission, so the point cloud encoding device needs to compress the point cloud data before transmission.
The compression of point cloud data mainly includes the compression of geometric information and the compression of attribute information. When compressing attribute information, quantization is used to reduce or eliminate redundant information in the point cloud data.
However, the quantization effect achieved when quantizing the points in a point cloud at present is poor.
Summary
Embodiments of the present application provide a point cloud encoding and decoding method and system, and a point cloud encoder and a point cloud decoder, so as to improve the quantization effect on points in a point cloud.
In a first aspect, the present application provides a point cloud encoding method, including:
obtaining attribute information of a current point in a point cloud;
processing the attribute information of the current point to obtain a residual value of the attribute information of the current point;
quantizing the residual value of the attribute information of the current point using a target quantization method to obtain a quantized residual value of the attribute information of the current point;
where the target quantization method includes at least two of the following quantization methods: a first quantization method, a second quantization method, and a third quantization method, the first quantization method being to set a quantization parameter increment for the quantization parameter of at least one point in the point cloud, the second quantization method being to perform weighting processing on the residual values of points in the point cloud, and the third quantization method being to perform lossless encoding on the residual value of the attribute information of at least one point in the point cloud.
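The first aspect combines up to three quantization methods. A minimal sketch of how they might interact for a single point (the hypothetical `qstep_of` mapping from QP to quantization step size, and floor rounding, are assumptions not specified by the text):

```python
def quantize_point(residual, qp_base, qp_increment, weight, lossless, qstep_of):
    """Apply the target quantization method to one point's attribute residual:
    lossless points pass through unchanged (third method); otherwise the QP
    increment (first method) sets the step size and the residual is weighted
    before uniform quantization (second method)."""
    if lossless:
        return residual
    qstep = qstep_of(qp_base + qp_increment)
    return (residual * weight) // qstep
```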
In a second aspect, an embodiment of the present application provides a point cloud decoding method, including:
parsing a code stream of a point cloud to obtain a quantized residual value of attribute information of a current point of the point cloud;
performing inverse quantization on the quantized residual value of the attribute information of the current point using a target inverse quantization method to obtain a reconstructed residual value of the attribute information of the current point;
where the target inverse quantization method includes at least two of the following inverse quantization methods: a first inverse quantization method, a second inverse quantization method, and a third inverse quantization method, the first inverse quantization method being to set an inverse quantization parameter increment for the inverse quantization parameter of at least one point in the point cloud, the second inverse quantization method being to perform de-weighting processing on the residual values of points in the point cloud, and the third inverse quantization method being to perform lossless decoding on the residual value of the attribute information of at least one point in the point cloud.
In a third aspect, the present application provides a point cloud encoder configured to execute the method in the first aspect or any of its implementations. Specifically, the encoder includes functional units configured to execute the method in the first aspect or any of its implementations.
In a fourth aspect, the present application provides a point cloud decoder configured to execute the method in the second aspect or any of its implementations. Specifically, the decoder includes functional units configured to execute the method in the second aspect or any of its implementations.
In a fifth aspect, a point cloud encoder is provided, including a processor and a memory. The memory is configured to store a computer program, and the processor is configured to call and run the computer program stored in the memory to execute the method in the first aspect or any of its implementations.
In a sixth aspect, a point cloud decoder is provided, including a processor and a memory. The memory is configured to store a computer program, and the processor is configured to call and run the computer program stored in the memory to execute the method in the second aspect or any of its implementations.
In a seventh aspect, a point cloud encoding and decoding system is provided, including a point cloud encoder and a point cloud decoder. The point cloud encoder is configured to execute the method in the first aspect or any of its implementations, and the point cloud decoder is configured to execute the method in the second aspect or any of its implementations.
In an eighth aspect, a chip is provided for implementing the method in any one of the first to second aspects or any of their implementations. Specifically, the chip includes a processor configured to call and run a computer program from a memory, so that a device on which the chip is installed executes the method in any one of the first to second aspects or any of their implementations.
In a ninth aspect, a computer-readable storage medium is provided for storing a computer program, where the computer program causes a computer to execute the method in any one of the first to second aspects or any of their implementations.
In a tenth aspect, a computer program product is provided, including computer program instructions that cause a computer to execute the method in any one of the first to second aspects or any of their implementations.
In an eleventh aspect, a computer program is provided which, when run on a computer, causes the computer to execute the method in any one of the first to second aspects or any of their implementations.
In a twelfth aspect, a code stream is provided, where the code stream is obtained by the method described in the first aspect.
Based on the above technical solutions, attribute information of a current point in a point cloud is obtained; the attribute information of the current point is processed to obtain a residual value of the attribute information of the current point; and the residual value of the attribute information of the current point is quantized using a target quantization method to obtain a quantized residual value of the attribute information of the current point. The target quantization method includes at least two of the following quantization methods: a first quantization method, a second quantization method, and a third quantization method, where the first quantization method sets a quantization parameter increment for the quantization parameter of at least one point in the point cloud, the second quantization method performs weighting processing on the residual values of points in the point cloud, and the third quantization method performs lossless encoding on the residual value of the attribute information of at least one point in the point cloud, thereby improving the quantization effect on the attribute information of the points in the point cloud.
Brief Description of the Drawings
FIG. 1 is a schematic block diagram of a point cloud encoding and decoding system involved in an embodiment of the present application;
FIG. 2 is a schematic block diagram of a point cloud encoder provided by an embodiment of the present application;
FIG. 3 is a schematic block diagram of a decoder provided by an embodiment of the present application;
FIG. 4 is a partial block diagram of an attribute encoding module involved in an embodiment of the present application;
FIG. 5 is a partial block diagram of an attribute decoding module involved in an embodiment of the present application;
FIG. 6 is a schematic flowchart of a point cloud encoding method provided by an embodiment of the present application;
FIG. 7 is a schematic flowchart of a point cloud encoding method provided by an embodiment of the present application;
FIG. 8 is a schematic diagram of LOD division of a partial point cloud;
FIG. 9 is a schematic diagram of LOD division of a partial point cloud;
FIG. 10 is a schematic flowchart of a point cloud encoding method provided by an embodiment of the present application;
FIG. 11 is a schematic diagram of another LOD division involved in an embodiment of the present application;
FIG. 12A is a schematic diagram of another LOD division involved in an embodiment of the present application;
FIG. 12B is a schematic diagram of another LOD division involved in an embodiment of the present application;
FIG. 13 is a schematic flowchart of a point cloud encoding method provided by an embodiment of the present application;
FIG. 14 is a schematic flowchart of a point cloud decoding method provided by an embodiment of the present application;
FIG. 15 is a schematic flowchart of a point cloud decoding method provided by an embodiment of the present application;
FIG. 16 is a schematic flowchart of a point cloud decoding method provided by an embodiment of the present application;
FIG. 17 is a schematic flowchart of a point cloud decoding method provided by an embodiment of the present application;
FIG. 18 is a schematic block diagram of a point cloud encoder provided by an embodiment of the present application;
FIG. 19 is a schematic block diagram of a point cloud decoder provided by an embodiment of the present application;
FIG. 20 is a schematic block diagram of an electronic device provided by an embodiment of the present application;
FIG. 21 is a schematic block diagram of a point cloud encoding and decoding system provided by an embodiment of the present application.
Detailed Description
The present application can be applied to the technical field of point cloud compression.
To facilitate understanding of the embodiments of the present application, the related concepts involved in the embodiments of the present application are briefly introduced as follows:
A point cloud is a set of discrete points, irregularly distributed in space, that expresses the spatial structure and surface attributes of a three-dimensional object or scene.
Point cloud data is the specific recording form of a point cloud. A point in a point cloud may include position information and attribute information. For example, the position information of a point may be its three-dimensional coordinate information; the position information of a point may also be called its geometric information. The attribute information of a point may include color information and/or reflectance, among others. The color information may be information in any color space; for example, it may be RGB information, or luminance-chrominance (YCbCr, YUV) information, where Y denotes luma, Cb (U) denotes the blue color difference, Cr (V) denotes red, and U and V denote chroma, used to describe color difference information. For example, in a point cloud obtained according to the laser measurement principle, the points may include three-dimensional coordinate information and laser reflection intensity (reflectance). In a point cloud obtained according to the photogrammetry principle, the points may include three-dimensional coordinate information and color information. In a point cloud obtained by combining laser measurement and photogrammetry, the points may include three-dimensional coordinate information, laser reflection intensity (reflectance), and color information.
Point cloud data may be acquired by, but not limited to, at least one of the following ways: (1) Generation by computer devices: a computer device can generate point cloud data of virtual three-dimensional objects and virtual three-dimensional scenes. (2) 3D laser scanning: point cloud data of static real-world three-dimensional objects or scenes can be obtained by 3D laser scanning, at a rate of up to millions of points per second. (3) 3D photogrammetry: point cloud data of real-world visual scenes is obtained by capturing them with 3D photographic equipment (i.e., a set of cameras or a camera device with multiple lenses and sensors); 3D photography can capture point cloud data of dynamic real-world three-dimensional objects or scenes. (4) Medical devices: in the medical field, point cloud data of biological tissues and organs can be obtained by medical devices such as magnetic resonance imaging (MRI), computed tomography (CT), and electromagnetic positioning information.
Point clouds can be divided, according to the way they are acquired, into dense point clouds and sparse point clouds.
Point clouds are divided, according to the temporal type of the data, into:
the first type, static point clouds: the object is stationary, and the device acquiring the point cloud is also stationary;
the second type, dynamic point clouds: the object is moving, but the device acquiring the point cloud is stationary;
the third type, dynamically acquired point clouds: the device acquiring the point cloud is moving.
According to their use, point clouds are divided into two major categories:
Category one: machine-perception point clouds, which can be used in scenarios such as autonomous navigation systems, real-time inspection systems, geographic information systems, visual sorting robots, and rescue and disaster-relief robots;
Category two: human-eye-perception point clouds, which can be used in point cloud application scenarios such as digital cultural heritage, free-viewpoint broadcasting, three-dimensional immersive communication, and three-dimensional immersive interaction.
FIG. 1 is a schematic block diagram of a point cloud encoding and decoding system involved in an embodiment of the present application. It should be noted that FIG. 1 is only an example, and the point cloud encoding and decoding system of the embodiments of the present application includes but is not limited to what is shown in FIG. 1. As shown in FIG. 1, the point cloud encoding and decoding system 100 includes an encoding device 110 and a decoding device 120. The encoding device is configured to encode (which can be understood as compressing) point cloud data to generate a code stream, and to transmit the code stream to the decoding device. The decoding device decodes the code stream generated by the encoding device to obtain decoded point cloud data.
The encoding device 110 of the embodiments of the present application can be understood as a device having a point cloud encoding function, and the decoding device 120 can be understood as a device having a point cloud decoding function; that is, the encoding device 110 and the decoding device 120 of the embodiments of the present application cover a wide range of apparatuses, including, for example, smartphones, desktop computers, mobile computing devices, notebook (e.g., laptop) computers, tablet computers, set-top boxes, televisions, cameras, display devices, digital media players, point cloud game consoles, and in-vehicle computers.
In some embodiments, the encoding device 110 may transmit the encoded point cloud data (e.g., a code stream) to the decoding device 120 via a channel 130. The channel 130 may include one or more media and/or devices capable of transmitting the encoded point cloud data from the encoding device 110 to the decoding device 120.
In one example, the channel 130 includes one or more communication media that enable the encoding device 110 to transmit the encoded point cloud data directly to the decoding device 120 in real time. In this example, the encoding device 110 may modulate the encoded point cloud data according to a communication standard and transmit the modulated point cloud data to the decoding device 120. The communication media include wireless communication media, such as the radio frequency spectrum; optionally, the communication media may also include wired communication media, such as one or more physical transmission lines.
In another example, the channel 130 includes a storage medium that can store the point cloud data encoded by the encoding device 110. The storage media include a variety of locally accessible data storage media, such as optical discs, DVDs, and flash memory. In this example, the decoding device 120 may obtain the encoded point cloud data from the storage medium.
In another example, the channel 130 may include a storage server that can store the point cloud data encoded by the encoding device 110. In this example, the decoding device 120 may download the stored encoded point cloud data from the storage server. Optionally, the storage server may store the encoded point cloud data and transmit it to the decoding device 120; examples include web servers (e.g., for websites) and file transfer protocol (FTP) servers.
In some embodiments, the encoding device 110 includes a point cloud encoder 112 and an output interface 113. The output interface 113 may include a modulator/demodulator (modem) and/or a transmitter.
In some embodiments, in addition to the point cloud encoder 112 and the output interface 113, the encoding device 110 may further include a point cloud source 111.
The point cloud source 111 may include at least one of a point cloud acquisition device (e.g., a scanner), a point cloud archive, a point cloud input interface, and a computer graphics system, where the point cloud input interface is configured to receive point cloud data from a point cloud content provider, and the computer graphics system is configured to generate point cloud data.
The point cloud encoder 112 encodes the point cloud data from the point cloud source 111 to generate a code stream. The point cloud encoder 112 transmits the encoded point cloud data directly to the decoding device 120 via the output interface 113. The encoded point cloud data may also be stored on a storage medium or a storage server for subsequent reading by the decoding device 120.
In some embodiments, the decoding device 120 includes an input interface 121 and a point cloud decoder 122.
In some embodiments, in addition to the input interface 121 and the point cloud decoder 122, the decoding device 120 may further include a display device 123.
The input interface 121 includes a receiver and/or a modem. The input interface 121 may receive the encoded point cloud data through the channel 130.
The point cloud decoder 122 is configured to decode the encoded point cloud data to obtain decoded point cloud data, and to transmit the decoded point cloud data to the display device 123.
The display device 123 displays the decoded point cloud data. The display device 123 may be integrated with the decoding device 120 or external to it. The display device 123 may include a variety of display devices, such as a liquid crystal display (LCD), a plasma display, an organic light-emitting diode (OLED) display, or other types of display devices.
In addition, FIG. 1 is only an example, and the technical solutions of the embodiments of the present application are not limited to FIG. 1; for example, the technology of the present application may also be applied to one-sided point cloud encoding or one-sided point cloud decoding.
目前的点云编码器可以采用运动图像专家组(Moving Picture Experts Group,MPEG)提供的基于几何的点云压缩(Geometry Point Cloud Compression,G-PCC)编解码框架或基于视频的点云压缩(Video Point Cloud Compression,V-PCC)编解码框架,也可以采用音视频编码标准(Audio Video Standard,AVS)提供的AVS-PCC编解码框架。G-PCC及AVS-PCC均针对静态的稀疏型点云,其编码框架大致相同。G-PCC编解码框架可用于针对第一类静态点云和第三类动态获取点云进行压缩,V-PCC编解码框架可用于针对第二类动态点云进行压缩。G-PCC编解码框架也称为点云编解码器TMC13,V-PCC编解码框架也称为点云编解码器TMC2。
下面以G-PCC编解码框架为例,对本申请实施例可适用的点云编码器和点云解码器 进行说明。
图2是本申请实施例提供的点云编码器的示意性框图。
由上述可知点云中的点可以包括点的位置信息和点的属性信息,因此,点云中的点的编码主要包括位置编码和属性编码。在一些示例中点云中点的位置信息又称为几何信息,对应的点云中点的位置编码也可以称为几何编码。
位置编码的过程包括:对点云中的点进行预处理,例如坐标变换、量化和移除重复点等;接着,对预处理后的点云进行几何编码,例如构建八叉树,基于构建的八叉树进行几何编码形成几何码流。同时,基于构建的八叉树输出的位置信息,对点云数据中各点的位置信息进行重建,得到各点的位置信息的重建值。
属性编码过程包括:通过给定输入点云的位置信息的重建信息和属性信息的原始值,选择三种预测模式的一种进行点云预测,对预测后的结果进行量化,并进行算术编码形成属性码流。
如图2所示,位置编码可通过以下单元实现:
坐标转换(Transform coordinates)单元201、量化和移除重复点(Quantize and remove points)单元202、八叉树分析(Analyze octree)单元203、几何重建(Reconstruct geometry)单元204以及第一算术编码(Arithmetic encode)单元205。
坐标转换单元201可用于将点云中点的世界坐标变换为相对坐标。例如,点的几何坐标分别减去xyz坐标轴的最小值,相当于去直流操作,以实现将点云中的点的坐标从世界坐标转换为相对坐标。
量化和移除重复点单元202可通过量化减少坐标的数目;量化后原先不同的点可能被赋予相同的坐标,基于此,可通过去重操作将重复的点删除;例如,具有相同量化位置和不同属性信息的多个点可通过属性转换合并到一个点中。在本申请的一些实施例中,量化和移除重复点单元202为可选的单元模块。
八叉树分析单元203可利用八叉树(octree)编码方式编码量化的点的位置信息。例如,将点云按照八叉树的形式进行划分,由此,点的位置可以和八叉树的位置一一对应,通过统计八叉树中有点的位置,并将其标识(flag)记为1,以进行几何编码。
几何重建单元204可以基于八叉树分析单元203输出的位置信息进行位置重建,得到点云数据中各点的位置信息的重建值。
第一算术编码单元205可以采用熵编码方式对八叉树分析单元203输出的位置信息进行算术编码,即将八叉树分析单元203输出的位置信息利用算术编码方式生成几何码流;几何码流也可称为几何比特流(geometry bitstream)。
属性编码可通过以下单元实现:
颜色空间转换(Transform colors)单元210、属性转化(Transfer attributes)单元211、区域自适应分层变换(Region Adaptive Hierarchical Transform,RAHT)单元212、预测变换(predicting transform)单元213、提升变换(lifting transform)单元214、量化系数(Quantize coefficients)单元215以及第二算术编码单元216。
需要说明的是,点云编码器200可包含比图2更多、更少或不同的功能组件。
颜色空间转换单元210可用于将点云中点的RGB色彩空间变换为YCbCr格式或其他格式。
属性转化单元211可用于转换点云中点的属性信息,以最小化属性失真。例如,属性转化单元211可用于得到点的属性信息的原始值。例如,所述属性信息可以是点的颜色信息。
经过属性转化单元211转换得到点的属性信息的原始值后,可选择任一种预测单元,对点云中的点进行预测。预测单元可包括:RAHT 212、预测变化(predicting transform)单元213以及提升变化(lifting transform)单元214。
换言之,RAHT 212、预测变化(predicting transform)单元213以及提升变化(lifting transform)单元214中的任一项可用于对点云中点的属性信息进行预测,以得到点的属性信息的预测值,进而基于点的属性信息的预测值得到点的属性信息的残差值。例如,点的属性信息的残差值可以是点的属性信息的原始值减去点的属性信息的预测值。
在本申请的一实施例中,预测变换单元213还可用于生成细节层(level of detail,LOD)。LOD的生成过程包括:根据点云中点的位置信息,获取点与点之间的欧式距离;根据欧式距离,将点分为不同的细节表达层。在一个实施例中,可以将欧式距离进行排序后,将不同范围的欧式距离划分为不同的细节表达层。例如,可以随机挑选一个点,作为第一细节表达层。然后计算剩余点与该点的欧式距离,并将欧式距离符合第一阈值要求的点,归为第二细节表达层。获取第二细节表达层中点的质心,计算除第一、第二细节表达层以外的点与该质心的欧式距离,并将欧式距离符合第二阈值的点,归为第三细节表达层。以此类推,将所有的点都归到细节表达层中。通过调整欧式距离的阈值,可以使得每层LOD层的点的数量是递增的。应理解,LOD划分的方式还可以采用其它方式,本申请对此不进行限制。
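上述按欧式距离阈值逐层划分细节表达层的过程,可以用如下假设性的Python示意说明(其中函数名lod_partition、阈值列表thresholds以及"距离小于等于阈值则归入该层"的判定方式均为示例性假设,并非本申请的规范实现):

```python
import math

def lod_partition(points, thresholds):
    # 简化示意:先取首个点作为第一细节表达层,
    # 再按与参考点(上一层质心)的欧式距离阈值逐层划分
    remaining = list(points)
    ref = remaining.pop(0)
    layers = [[ref]]
    for t in thresholds:
        layer = [p for p in remaining if math.dist(p, ref) <= t]
        remaining = [p for p in remaining if math.dist(p, ref) > t]
        layers.append(layer)
        if layer:  # 用该层的质心作为下一层划分的参考点
            ref = tuple(sum(c) / len(layer) for c in zip(*layer))
    if remaining:  # 剩余点并入最后一层,保证所有点都归入细节表达层
        layers[-1].extend(remaining)
    return layers
```

通过调整thresholds中各阈值的大小,即可控制每层细节表达层的点数,使每层LOD的点数递增。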
需要说明的是,可以直接将点云划分为一个或多个细节表达层,也可以先将点云划分为多个点云切块(slice),再将每一个点云切块划分为一个或多个LOD层。
例如,可将点云划分为多个点云切块,每个点云切块的点的个数可以在55万-110万之间。每个点云切块可看成单独的点云。每个点云切块又可以划分为多个细节表达层,每个细节表达层包括多个点。在一个实施例中,可根据点与点之间的欧式距离,进行细节表达层的划分。
量化单元215可用于量化点的属性信息的残差值。例如,若所述量化单元215和所述预测变换单元213相连,则所述量化单元可用于量化所述预测变换单元213输出的点的属性信息的残差值。
例如,对预测变换单元213输出的点的属性信息的残差值使用量化步长进行量化,以实现提升系统性能。
第二算术编码单元216可使用零行程编码(Zero run length coding)对点的属性信息的残差值进行熵编码,以得到属性码流。所述属性码流可以是比特流信息。
图3是本申请实施例提供的解码器的示意性框图。
如图3所示,解码框架300可以从编码设备获取点云的码流,通过解析码得到点云中的点的位置信息和属性信息。点云的解码包括位置解码和属性解码。
位置解码的过程包括:对几何码流进行算术解码;构建八叉树后进行合并,对点的位置信息进行重建,以得到点的位置信息的重建信息;对点的位置信息的重建信息进行坐标变换,得到点的位置信息。点的位置信息也可称为点的几何信息。
属性解码过程包括:通过解析属性码流,获取点云中点的属性信息的残差值;通过对点的属性信息的残差值进行反量化,得到反量化后的点的属性信息的残差值;基于位置解码过程中获取的点的位置信息的重建信息,选择如下RAHT、预测变化和提升变化三种预测模式中的一种进行点云预测,得到预测值,预测值与残差值相加得到点的属性信息的重建值;对点的属性信息的重建值进行颜色空间反转化,以得到解码点云。
如图3所示,位置解码可通过以下单元实现:
第一算数解码单元301、八叉树合成(synthesize octree)单元302、几何重建(Reconstruct geometry)单元304以及坐标反转换(inverse transform coordinates)单元305。
属性解码可通过以下单元实现:
第二算数解码单元310、反量化(inverse quantize)单元311、RAHT单元312、预测变换(predicting transform)单元313、提升变换(lifting transform)单元314以及颜色空间反转换(inverse transform colors)单元315。
需要说明的是,解压缩是压缩的逆过程,类似的,解码器300中的各个单元的功能可参见编码器200中相应的单元的功能。另外,点云解码器300可包含比图3更多、更少或不同的功能组件。
例如,解码器300可根据点云中点与点之间的欧式距离将点云划分为多个LOD;然后,依次对LOD中点的属性信息进行解码;例如,计算零行程编码技术中零的数量(zero_cnt),以基于zero_cnt对残差进行解码;接着,解码器300可基于解码出的残差值进行反量化,并基于反量化后的残差值与当前点的预测值相加得到该点云的重建值,直到解码完所有的点云。当前点将会作为后续LOD中点的最邻近点,并利用当前点的重建值对后续点的属性信息进行预测。
由上述图2可知,点云编码器200从功能上主要包括了两部分:位置编码模块和属性编码模块,其中位置编码模块用于实现点云的位置信息的编码,形成几何码流,属性编码模块用于实现点云的属性信息的编码,形成属性码流。本申请主要涉及属性信息的编码,下面结合图4,对本申请涉及的点云编码器中的属性编码模块进行介绍。
图4为本申请实施例涉及的属性编码模块的部分框图,该属性编码模块400可以理解为是上述图2所示的点云编码器200中用于实现属性信息编码的单元。如图4所示,该属性编码模块400包括:预处理单元410、残差单元420、量化单元430、预测单元440、反量化单元450、重建单元460、滤波单元470、解码缓存单元480和编码单元490。需要说明的是,属性编码模块400还可包含更多、更少或不同的功能组件。
在一些实施例中,预处理单元410可以包括图2所示的颜色空间转换单元210、属性转化单元211。
在一些实施例中,量化单元430可以理解为上述图2中的量化系数单元215,编码单元490可以理解为上述图2中的第二算数编码单元216。
在一些实施例中,预测单元440可以包括图2所示的RAHT 212、预测变化单元213以及提升变化单元214。预测单元440具体用于获得点云中点的位置信息的重建信息,并 基于点的位置信息的重建信息,从RAHT 212、预测变化单元213以及提升变化单元214中的任选一个对点云中点的属性信息进行预测,以得到点的属性信息的预测值。
残差单元420可基于点云中点的属性信息的原始值和属性信息的预测值,得到点云中点的属性信息的残差值,例如,点的属性信息的原始值减去属性信息的预测值,得到点的属性信息的残差值。
量化单元430可量化属性信息的残差值,具体是,量化单元430可基于与点云相关联的量化参数(QP)值来量化点的属性信息的残差值。点云编码器可通过调整与点云相关联的QP值来调整应用于点的量化程度。
反量化单元450可分别将反量化应用于量化后的属性信息的残差值,以从量化后的属性信息的残差值重建属性信息的残差值。
重建单元460可将重建后的属性信息的残差值加到预测单元440产生的预测值中,以产生点云中点的属性信息的重建值。
滤波单元470可消除或减少重建操作中的噪声。
解码缓存单元480可存储点云中点的属性信息的重建值。预测单元440可使用点的属性信息的重建值来对其它点的属性信息进行预测。
由上述图3可知,点云解码器300从功能上主要包括了两部分:位置解码模块和属性解码模块,其中位置解码模块用于实现点云的几何码流的解码,得到点的位置信息,属性解码模块用于实现点云的属性码流的解码,得到点的属性信息。下面结合图5,对本申请涉及的点云解码器中的属性解码模块进行介绍。
图5为本申请实施例涉及的属性解码模块的部分框图,该属性解码模块500可以理解为是上述图3所示的点云解码器300中用于实现属性码流解码的单元。如图5所示,该属性解码模块500包括:解码单元510、预测单元520、反量化单元530、重建单元540、滤波单元550及解码缓存单元560。需要说明的是,属性解码模块500可包含更多、更少或不同的功能组件。
属性解码模块500可接收属性码流。解码单元510可解析属性码流以从属性码流提取语法元素。作为解析属性码流的一部分,解码单元510可解析属性码流中的经编码后的语法元素。预测单元520、反量化单元530、重建单元540及滤波单元550可根据从属性码流中提取的语法元素来解码属性信息。
在一些实施例中,预测单元520可根据从码流解析的一个或多个语法元素来确定点的预测模式,并使用确定的预测模式对点的属性信息进行预测。
反量化单元530可反量化(即,解量化)与点云中点相关联的量化后的属性信息的残差值,得到点的属性信息的残差值。反量化单元530可使用与点云相关联的QP值来确定量化程度。
重建单元540使用点云中点的属性信息的残差值及点云中点的属性信息的预测值以重建点云中点的属性信息。例如,重建单元540可将点云中点的属性信息的残差值加到点的属性信息的预测值,得到点的属性信息的重建值。
滤波单元550可消除或减少重建操作中的噪声。
属性解码模块500可将点云中点的属性信息的重建值存储于解码缓存单元560中。 属性解码模块500可将解码缓存单元560中的属性信息的重建值作为参考点用于后续预测,或者,将属性信息的重建值传输给显示装置呈现。
点云的属性信息的编解码的基本流程如下:在编码端,对点云数据的属性信息进行预处理,得到点云中点的属性信息的原始值。预测单元440基于点云中点的位置信息的重建值,选择上述3种预测方式中的一种预测方式,对点云中点的属性信息进行预测,得到属性信息的预测值。残差单元420可基于点云中点的属性信息的原始值与属性信息的预测值,计算属性信息的残差值,即将点云中点的属性信息的原始值与属性信息的预测值的差值作为点云中点的属性信息的残差值。该残差值经由量化单元430量化,可以去除人眼不敏感的信息,以消除视觉冗余。编码单元490接收到量化单元430输出的量化后的属性信息的残差值,可对该量化后的属性信息的残差值进行编码,输出属性码流。
另外,反量化单元450也可以接收到量化单元430输出的量化后的属性信息的残差值,并对量化后的属性信息的残差值进行反量化,得到点云中点的属性信息的残差值。重建单元460得到反量化单元450输出的点云中点的属性信息的残差值,以及预测单元440输出的点云中点的属性信息的预测值,将点云中点的属性信息的残差值和预测值进行相加,得到点的属性信息的重建值。点的属性信息的重建值经过滤波单元470滤波后缓存在解码缓存单元480中,用于后续其他点的预测过程。
在解码端,解码单元510可解析属性码流得到点云中点量化后的属性信息的残差值、预测信息、量化系数等,预测单元520基于预测信息对点云中点的属性信息进行预测产生点的属性信息的预测值。反量化单元530使用从属性码流得到的量化系数,对点的量化后的属性信息的残差值进行反量化,得到点的属性信息的残差值。重建单元540将点的属性信息的预测值和残差值相加得到点的属性信息的重建值。滤波单元550对点的属性信息的重建值进行滤波,得到解码后的属性信息。
需要说明的是,编码端属性信息编码时确定的预测、量化、编码、滤波等模式信息或者参数信息等在必要时携带在属性码流中。解码端通过解析属性码流及根据已有信息进行分析确定与编码端相同的预测、量化、编码、滤波等模式信息或者参数信息,从而保证编码端获得的属性信息的重建值和解码端获得的属性信息的重建值相同。
上述是基于G-PCC编解码框架下的点云编解码器的基本流程,随着技术的发展,该框架或流程的一些模块或步骤可能会被优化,本申请适用于该基于G-PCC编解码框架下的点云编解码器的基本流程,但不限于该框架及流程。
上文对本申请实施例涉及的点云编码系统、点云编码器、点云解码器进行介绍。在此基础上,下面结合具体的实施例对本申请实施例提供的技术方案进行详细描述。
下面结合图6对编码端进行介绍。
实施例1
图6为本申请实施例提供的点云编码方法的一种流程示意图,本申请实施例应用于图1、图2和图4所示点云编码器。如图6所示,本申请实施例的方法包括:
S101、获取点云中当前点的属性信息。
在一些实施例中,当前点也称为待处理的点、正在处理的点或待编码的点等。
点云包括多个点,每个点可以包括点的几何信息和点的属性信息。其中,点的几何信息也可称为点的位置信息,点的位置信息可以是点的三维坐标信息。其中,点的属性信息可包括颜色信息和/或反射率等等。
在一种示例中,点云编码器可以将获得点云中点的原始属性信息,作为点的属性信息的原始值。
在另一种示例中,如图2所示,点云编码器获得点云中点的原始属性信息后,对该原始属性信息进行颜色空间转换,例如将点的RGB色彩空间变换为YCbCr格式或其他格式。对颜色空间转换后的点进行属性转化,以最小化属性失真,得到点的属性信息的原始值。
S102、对当前点的属性信息进行处理,得到当前点的属性信息的残差值。
在一些实施例中,根据点云中点的几何信息,确定点云中当前点的属性信息的预测值。例如,对点云中点的几何进行编码,得到点云中当前点的几何信息的重建值;根据当前点的几何信息的重建值,确定当前点的属性信息的预测值;根据当前点的属性信息的预测值和属性信息的原始值,确定当前点的属性信息的残差值。
在一些实施例中,属性信息的原始值也称为属性信息的真实值。
需要说明的是,点云编码器对点云中点的几何信息进行编码完成后,再对点云中点的属性信息进行编码。参照图2所示,在几何信息的编码过程中,除第一算数编码单元205对八叉树单元203处理后的点的几何信息进行编码形成几何码流外,几何重建单元204还对八叉树单元203处理后的点的几何信息进行重建,得到几何信息的重建值。预测单元可以获得几何重建单元204输出的几何信息的重建值。
例如,根据点云中点的几何信息的重建值,对点云中的点进行排序,得到排序后的点云。例如,根据点云中点的几何信息的重建值,确定点的莫顿码,根据莫顿码对点云中点进行排序,得到点的莫顿顺序。针对排序后的点云中的一个当前点,从属性信息已编码的点中获得该当前点的至少一个邻近点,根据该至少一个邻近点的属性信息的重建值,来预测该当前点的属性信息的预测值。
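根据几何信息计算莫顿码并据此排序的一种常见做法,是对点的x、y、z坐标的比特进行交错。下面给出一个假设性的Python示意(函数名morton3d、比特位数bits等均为示例性假设,并非本申请的规范实现):

```python
def morton3d(x, y, z, bits=10):
    # 将x、y、z坐标的各比特交错,得到莫顿码:...z1y1x1 z0y0x0
    code = 0
    for i in range(bits):
        code |= ((x >> i) & 1) << (3 * i)      # x的第i位放在3i位
        code |= ((y >> i) & 1) << (3 * i + 1)  # y的第i位放在3i+1位
        code |= ((z >> i) & 1) << (3 * i + 2)  # z的第i位放在3i+2位
    return code

# 按莫顿码对若干点排序,得到点云的莫顿顺序
points = [(3, 1, 0), (0, 0, 0), (1, 1, 1), (2, 0, 3)]
order = sorted(points, key=lambda p: morton3d(*p))
```

按莫顿码排序后,空间上相邻的点在序列中也趋于相邻,便于后续邻近点查找。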
其中,根据当前点的至少一个邻近点的属性信息的重建值,来预测该当前点的属性信息的预测值的方式包括但不限于如下几种:
方式一,将至少一个邻近点的属性信息的重建值的平均值,作为该当前点的属性信息的预测值。
方式二,假设至少一个邻近点为K个邻近点,将K个邻近点中每一个邻近点的属性信息的重建值作为当前点的预测参考值,得到K个预测参考值。另外,将K个邻近点的属性信息的重建值的平均值作为当前点的另一个预测参考值,这样当前点共有K+1个预测参考值。计算这K+1个预测参考值中每个预测参考值对应的率失真优化(RDO)代价,将率失真优化代价最小的预测参考值作为该当前点的属性信息的预测值。
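方式二中"K+1个预测参考值按率失真优化代价择优"的流程,可以用如下假设性的Python示意说明(这里仅用失真|原始值-预测值|近似代价,真实的率失真优化代价还应包含码率项;函数名choose_predictor及各参数均为示例性假设):

```python
def choose_predictor(orig, neighbor_recon, dists):
    # 候选0:各邻近点重建值按距离倒数加权的平均值;候选1..K:各邻近点的重建值
    weights = [1.0 / d for d in dists]
    wavg = sum(w * v for w, v in zip(weights, neighbor_recon)) / sum(weights)
    candidates = {0: wavg}
    for i, v in enumerate(neighbor_recon, start=1):
        candidates[i] = float(v)
    # 简化的代价:仅用失真近似(真实RDO代价 = 失真 + λ×码率)
    best_idx = min(candidates, key=lambda i: abs(orig - candidates[i]))
    return best_idx, candidates[best_idx]
```

返回的best_idx即可作为预测索引携带在属性码流中,供解码端选择同一预测参考值。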
接着,根据点云中点的属性信息的预测值,确定点云中点的属性信息的残差值。
例如,将当前点的属性信息的原始值与属性信息的预测值的差值,确定为当前点的属性信息的残差值。
例如,针对点云中的一个当前点,根据上述步骤可以获得该当前点的属性信息的预 测值和属性信息的原始值,将该当前点的属性信息的原始值与属性信息的预测值的差值,作为该当前点的属性信息的残差值。
例如,根据如下公式(1),确定当前点的属性信息的残差值:
attrResidual=attrValue-attrPredValue      (1)
其中,attrResidual为属性信息的残差值,attrValue为属性信息的原始值,attrPredValue为属性信息的预测值。
S103、使用目标量化方式,对当前点的属性信息的残差值进行量化,得到当前点的属性信息的量化残差值。
参照图4可知,目前在属性信息的编码过程中,残差单元420可基于点云中点的属性信息的原始值与属性信息的预测值,计算属性信息的残差值。该残差值经由量化单元430量化,可以去除人眼不敏感的信息,以消除视觉冗余。反量化单元450也可以接收到量化单元430输出的量化后的属性信息的残差值,并对量化后的属性信息的残差值进行反量化,得到点云中点的属性信息的残差值。重建单元460得到反量化单元450输出的点云中点的属性信息的残差值,以及预测单元410输出的点云中点的属性信息的预测值,将点云中点的属性信息的残差值和预测值进行相加,得到点的属性信息的重建值。点的属性信息的重建值缓存在解码缓存单元480中,用于后续其他点的预测过程。
由上述可知,目前在属性信息的编码过程中,对点的属性信息的残差值进行量化,但是目前的量化过程其量化效果差。
为了解决该技术问题,本申请实施例采用目标量化方式对当前点的属性信息的残差值进行量化,得到当前点的属性信息的量化残差值。其中,目标量化方式包括如下至少两种量化方式:第一量化方式、第二量化方式和第三量化方式,第一量化方式为对点云中至少一个点的量化参数设定量化参数增量,第二量化方式为对点云中点的残差值进行加权处理,第三量化方式为对点云中至少一个点的属性信息的残差值进行无损编码。
在一些实施例中,第一量化方式也可以称为渐进式量化方式,第二量化方式也可以称为自适应量化方式,第三种量化方式也可以称为等间隔不量化方式。
实施例2
若目标量化方式包括第一量化方式和第二量化方式,本申请实施例的点云编码过程如图7所示。
图7为本申请一实施例提供的点云编码方法的流程示意图,包括:
S201、获取点云中点的几何信息和属性信息。
S202、根据点云中每个点的几何信息,对点云进行LOD划分,得到点云的多层LOD层,其中,每个LOD层包括至少一个细节表达层,每一层细节表达层中包括至少一个点。
在一种示例中,根据点云中点的几何信息,获得点的莫顿码,按照莫顿码进行排序,得到点云的莫顿顺序。基于点云的莫顿顺序进行LOD划分。
在另一种示例中,根据点云中点的几何信息,对点云进行排序,得到点云的原始顺序。基于点云的原始顺序进行LOD划分。
对排序后的点云进行LOD划分,例如,从排序后的点云中随机挑选一个点,作为第一细节表达层。然后根据几何信息,计算剩余点与该点的欧式距离,并将欧式距离符合 第一阈值要求的点,归为第二细节表达层。获取第二细节表达层中点的质心,计算除第一、第二细节表达层以外的点与该质心的欧式距离,并将欧式距离符合第二阈值的点,归为第三细节表达层。以此类推,将所有的点都归到细节表达层中。每层细节表达中的点按照点的属性信息的重建值的大小在该细节表达层中排序。
需要说明的是,上述LOD划分的方式只是一种示例,LOD划分的方式还可以采用其它方式,本申请对此不进行限制。
图8为部分点云的LOD划分示意图,其中图8上部为原始点云中的10个点p0,p1,……,p9,根据这10个点的几何信息,对这10个点进行排序,得到点p0,p1,……,p9的原始顺序。基于点p0,p1,……,p9的原始顺序对这10个点进行LOD划分,得到3层细节表达层,这3层细节表达层互不重叠。例如,第一层细节表达层R0包括:p0、p5、p4和p2,第二层细节表达层R1包括:p1、p6和p3,第三层细节表达层R2包括:p9、p8和p7。第一层细节表达层R0构成第一层LOD,记为L0,第一层LOD和第二层细节表达层R1构成第二层LOD,记为L1,第二层LOD和第三层细节表达层R2构成第三层LOD,记为L2。LOD层包括的点数逐层递增。
参照图8所示,可以将LOD划分得到的多层细节表达层按照层数从低到高进行排序,得到点云的LOD顺序。在一些实施例中,还可以对多层细节表达层按照层数从高到低进行排序,得到点云的LOD顺序。
S203、确定点云中当前点的属性信息的预测值。
例如,根据点云的多层细节表达层,按照层数从低到高进行排序,得到点云的LOD顺序,基于点云的LOD顺序,确定点云中点的属性信息的预测值。
在一些实施例中,针对点云中待编码的一个当前点为例,上述S203包括但不限于如下几种方式:
方式一,基于点云的LOD顺序,从已编码的点云中获得当前点的至少一个邻近点,例如按照KNN算法从已编码的点云中寻找到该当前点的3个最邻近点,将这3个最邻近点的属性信息的重建值的加权平均值作为当前点的属性信息的预测值。
方式二,基于点云的LOD顺序,从已编码的点云中获得当前点的至少一个邻近点,例如按照KNN算法从已编码的点云中寻找到该当前点的3个最邻近点,将这3个最邻近点中每一个邻近点的属性信息的重建值作为当前点的预测参考值,得到3个预测参考值。另外,将3个邻近点的属性信息的重建值的平均值作为当前点的另一个预测参考值,这样当前点共有3+1个预测参考值。计算这3+1个预测参考值中每个预测参考值对应的率失真优化(RDO)代价,将率失真优化代价最小的预测参考值作为该当前点的属性信息的预测值。
在一种示例中,在确定率失真优化代价时,可以将邻近点与当前点之间的距离(例如欧氏距离)的倒数作为该邻近点的权重。
在一些实施例中,可以根据如下表1确定上述3+1个预测参考值对应的预测索引:
表1
预测索引 预测参考值
0 加权平均值
1 P4(第一邻近点)
2 P5(第二邻近点)
3 P0(第三邻近点)
举例说明,以当前点为图8中的点p2为例,距离点p2最近的3个邻近点为p4、p5和p1,其中,p4为当前点p2的3个邻近点中距离p2最近的一个点,记为第一邻近点,即1 st nearest point,如表1所示,其对应的预测索引为1。p5为当前点p2的3个邻近点中距离当前点p2仅次于p4的点,记为第二邻近点,即2 nd nearest point,如表1所示,其对应的预测索引为2。p1为当前点p2的3个邻近点中距离当前点p2最远的一个点,记为第三邻近点,即3 rd nearest point,如表1所示,其对应的预测索引为3。如表1所示,p4、p5和p1的属性信息的重建值的加权平均值对应的预测索引为0。
根据上述方法,计算p4、p5、p1、以及p4、p5和p1的属性信息的平均值这3+1个预测参考值中每个预测参考值对应的率失真优化(RDO)代价,将率失真优化代价最小的预测参考值作为该当前点的属性信息的预测值,例如率失真优化代价最小的预测参考值为点p5的属性信息的重建值。
可选的,点云编码器可以将该点p5对应的预测索引2携带在后续形成的属性码流中。这样,解码端直接可以从该属性码流中解析出预测索引2,并使用预测索引2对应的点p5的属性信息的重建值对点p2的属性信息进行预测,得到点p2的属性信息的预测值。
S204、根据当前点的属性信息的预测值和属性信息的原始值,确定当前点的属性信息的残差值。例如,将当前点的属性信息的原始值与预测值的差值,确定为当前点的属性信息的残差值。
S205、根据当前点的几何信息,确定当前点所处的目标LOD层。
由于LOD层是基于点云中点的几何信息划分的,因此,可以根据当前点的几何信息,确定出当前点所处的目标LOD层。
S206、确定适配目标LOD层的目标量化步长。
其中,确定适配所述目标LOD层的目标量化步长的方式包括但不限于如下几种:
方式一,确定适配所述目标LOD层的目标量化步长包括如下步骤A1、步骤A2和步骤A3:
步骤A1:确定当前点的编码参数中的量化参数;
步骤A2:获取目标LOD层的层级分级索引,并根据目标LOD层的层级分级索引,确定目标LOD层的量化参数增量;
步骤A3:根据量化参数和目标LOD层的量化参数增量,确定目标LOD层对应的目标量化步长。
在一些实施例中,前7层LOD的量化参数增量DeltaQP值如表2所示:
表2
LOD R1 R2 R3 R4 R5
0 -6 -12 -18 -24 -24
1 -6 -12 -18 -18 -18
2 -6 -12 -18 -18 -18
3 -6 -12 -12 -12 -12
4 -6 -12 -12 -12 -12
5 -6 -6 -6 -6 -6
6 -6 -6 -6 -6 -6
其中,R为编码码率,表2为前7层LOD在5种码率下的DeltaQP的值。需要说明的是,上述表2只是一种示例,本申请实施例前7层LOD层对应的DeltaQP值包括但不限于表2所示。
可选的,可以将上述表2中的DeltaQP值均设定为-10。
在一些实施例中,R1至R5表示MPEG公共测试环境推荐的5种编码码率,在公共测试环境下预设码率点的QP值如表3所示。
表3
R1下的QP R2下的QP R3下的QP R4下的QP R5下的QP
10 16 22 28 34
在一些实施例中,前7层LOD在5种码率下的真实QP的值如表4所示,
表4
LOD R1 R2 R3 R4 R5
0 4 4 4 4 10
1 4 4 4 10 16
2 4 4 4 10 16
3 4 4 10 16 22
4 4 4 10 16 22
5 4 10 16 22 28
6 4 10 16 22 28
其中,将表2中各码率下的DeltaQP值与表3所示的对应码率下的QP相加,得到表4中对应码率下的真实QP值。
需要说明的是,上述表4只是一种示例,本申请实施例前7层LOD层对应的真实QP的值包括但不限于表4所示。
需要说明的是,上述表2和表4只示出了前7层的QP值和DeltaQP值,其他层的QP值和DeltaQP值可以参照已有技术,在此不再赘述。
该方式一中,可以从当前点的编码参数中确定出当前点对应的量化参数,例如为表3中的QPi,并根据当前点所在的目标LOD层的层级分级索引,从上述表2中确定出目标LOD层对应的量化参数增量,例如为表2中的DeltaQPi,根据量化参数和所述目标LOD层的量化参数增量,确定所述目标LOD层对应的目标量化步长,例如在当前点对应的QPi值,加上该目标LOD层的DeltaQPi值,得到QPi+DeltaQPi,将QPi+DeltaQPi对应的量化步长确定为目标LOD层对应的目标量化步长。
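上述"QP+DeltaQP"的查表计算可以用如下假设性的Python示意说明(DELTA_QP与BASE_QP分别取自上文表2与表3;函数名effective_qp为示例性假设,并非本申请的规范实现):

```python
# 表2:前7层LOD在5种码率(R1..R5)下的DeltaQP
DELTA_QP = {
    0: [-6, -12, -18, -24, -24],
    1: [-6, -12, -18, -18, -18],
    2: [-6, -12, -18, -18, -18],
    3: [-6, -12, -12, -12, -12],
    4: [-6, -12, -12, -12, -12],
    5: [-6, -6, -6, -6, -6],
    6: [-6, -6, -6, -6, -6],
}
BASE_QP = [10, 16, 22, 28, 34]  # 表3:R1..R5码率点的QP

def effective_qp(lod_layer, rate_idx):
    # 前7层LOD在基础QP上叠加量化参数增量,其余LOD层直接使用基础QP
    qp = BASE_QP[rate_idx]
    if lod_layer in DELTA_QP:
        qp += DELTA_QP[lod_layer][rate_idx]
    return qp
```

得到的真实QP再转化为对应的量化步长Qstep,即为该LOD层的目标量化步长。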
在一些实施例中,上述根据所述目标LOD层的层级分级索引,确定所述目标LOD层的量化参数增量,包括:若所述目标LOD层属于所述点云的前N个LOD层,则确定所述目标LOD层的量化参数增量为j,所述N为小于或等于第一阈值的正整数,所述j为大于0且小于或等于第二阈值的整数;若所述目标LOD层不属于所述点云的前N个 LOD层,则确定所述目标LOD层的量化参数增量为0。
可选的,若所述量化参数大于或等于第三阈值,则所述j为第一预设数值;若所述量化参数小于所述第三阈值,则所述j为第二预设数值。
可选的,第一阈值可以为14,第二阈值可以为10。
可选的,j例如可以是6或10。
可选的,N可以是6、7或8。
方式二,上述确定适配所述目标LOD层的目标量化步长包括如下步骤B1和步骤B2:
步骤B1、获取所述目标LOD层的层级分级索引;
步骤B2、根据所述目标LOD层的层级分级索引,在量化步长查找表中查询所述目标LOD层对应的目标量化步长。
其中,量化步长查找表包括LOD层与量化步长之间的对应关系。
在一些实施例中,上述量化步长查找表是预设的。
在一些实施例中,本申请实施例还包括构建量化步长查找表的过程。具体是,确定当前图像块的层级分级索引和量化参数偏移参数;根据当前图像块的层级分级索引和量化参数偏移参数,确定细节等级的层级LOD对应的量化步长Qstep。进而获得了细节等级的层级与量化步长之间的对应关系,将该对应关系预存在量化步长查找表(look-up table)中,供后续编码步骤或者解码时查找确定量化步长。
可选的,层级分级索引例如可以是:LodSplitIndex,量化参数偏移参数例如可以是:QpShiftValue。
在一些实施例中,上述确定当前图像块的层级分级索引和量化参数偏移参数的方式可以是:获取当前编码块的编码参数;读取所述编码参数中的层级分级索引和量化参数偏移参数。
其中,所述获取当前编码块的编码参数的具体实现方式至少包括:使用率失真优化确定当前编码块的编码参数。
其中,编码参数可以包括配置文件中预设的参数和/或根据点云的数据确定的参数。
可见,本示例中,先获取当前编码块的编码参数,然后读取编码参数中的量化参数优化使能标识、层级分级索引和量化参数偏移参数,有利于提高确定当前编码块的量化参数优化使能标识、层级分级索引和量化参数偏移参数的效率。
在一个可能的示例中,所述码流包括参数集码流,后续对码流进行解码时可直接根据码流中的参数集码流确定相应的参数用于解码,有利于提高后续解码时的效率。
在一个可能的示例中,所述参数集包含用于解码一个或多个不同时刻的点云的数据;所述数据为属性数据,所述参数集为属性参数集。这样解码端对码流进行解码时,可直接根据码流中的参数集用于解码不同时刻的点云的数据进行解码,有利于提高后续解码时的效率。
在一些实施例中,上述根据当前图像块的层级分级索引和量化参数偏移参数,确定细节等级的层级LOD对应的量化步长Qstep,包括:确定所述当前编码块的编码参数中的量化参数Qp;根据当前编码块的所述层级分级索引和所述量化参数偏移参数,确定点云中每个LOD层的量化参数偏移;根据所述当前编码块的量化参数Qp和所述点云中每个 LOD层的量化参数偏移,确定所述点云中每个LOD层对应的量化步长Qstep。
其中,在一些实施例中,上述量化参数偏移也称为量化参数增量。
其中,量化参数Qp可以由属性参数集提供的QP参数确定。
在一些实施例中,当前编码块的量化参数Qp和量化步长Qstep的关系如公式(2)所示:
Qstep=2^((Qp-4)/6)      (2)
实际应用中,可使用如下公式(3)确定量化步长Qstep:
Qstep=Δ0[Qp%6]<<(Qp/6)      (3)
其中,Δ0={2^(-4/6),2^(-3/6),2^(-2/6),2^(-1/6),2^0,2^(1/6)}<<8={161,181,203,228,256,287}。
上述公式中的“X”为乘法运算,“<<”为左移运算,“%”是求余运算。
在对残差进行量化时也会将残差左移8位以进行匹配。
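上述查表计算可以用如下假设性的Python示意说明(函数名qp_to_qstep_scaled为示例性假设;返回值为左移8位(即放大256倍)后的量化步长,与同样左移8位的残差配合使用):

```python
# Δ0 = {2^((i-4)/6) << 8, i = 0..5},即2的分数次幂按256缩放后取整
DELTA0 = [161, 181, 203, 228, 256, 287]

def qp_to_qstep_scaled(qp):
    # Qstep(×256) = Δ0[Qp % 6] << (Qp / 6)
    return DELTA0[qp % 6] << (qp // 6)
```

例如Qp=4时返回256(即Qstep=1),Qp每增大6,量化步长翻倍。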
可见,本示例中,先确定当前编码块的编码参数中的量化参数Qp,再根据当前编码块的层级分级索引和量化参数偏移参数确定每个LOD层的量化参数偏移,然后根据量化参数Qp和每个LOD层的量化参数偏移,确定每个LOD层对应的量化步长Qstep,有利于提高量化步长确定的灵活性。
在一个可能的示例中,所述根据所述层级分级索引和所述量化参数偏移参数确定每个LOD层的量化参数偏移,包括:判断当前处理的细节等级的LOD层是否属于所述层级分级索引所约束的层级范围,所述层级范围包括多个细节等级中的前N个层级,N为小于或等于第一阈值的正整数;若是,则根据所述量化参数偏移参数确定当前处理的细节等级的LOD层的量化参数偏移的值j,j为大于0且小于或等于第二阈值的整数;若否,则确定当前处理的细节等级的LOD层的量化参数偏移的值为0。
也就是说,对点云的LOD层进行分组,每一组LOD层对应一个量化步长查找表,其中该组对应的量化步长查找表中包括该组内至少一个LOD层对应的量化步长。当查找当前点所在的目标LOD层对应的目标量化步长时,首先确定该目标LOD层所属的目标LOD分组,在该目标LOD分组对应的量化步长查找表中查找目标LOD层对应的目标量化步长。
其中,层级分级索引的取值范围为0-LOD的个数(正整数),假设层级分级索引为6,则LOD 0-LOD 5的量化步长(即前6层的细节等级的量化步长)由QP-QpShiftValue(即量化参数-量化偏移参数)转换而来。
另外,如果将细节等级进一步划分成了多个组,例如层级分级索引为数组4 5 6,对应可将细节等级按照这三个划分位置分成4组,分别为LOD 0-LOD 3、LOD 4、LOD 5、以及LOD 6及其以后的细节等级。不同组对应的量化参数偏移的值j可以是不同的。
具体实现中,量化参数越大,其对应的量化步长越大,压缩后的码流大小越小。j为大于0且小于或等于第二阈值的整数,则前N各层级的细节等级对应量化参数比后面层级的更小,这是考虑到后续高层级的细节等级中点,会利用前面低层级的细节等级中点的重建值进行预测变换,如果前面层级的细节等级的量化步长比较长,则其对应的重建误差也会较大,此时,重建误差传递到后续层级中,则会影响后续细节等级中点的预测,导致预测的准确降低。因此,在确定量化步长Qstepi时,对于低层级的细节等级,可以确定 其适配较小的量化参数对应的量化步长,以减小重建误差,同时由于低层级的细节等级中点的数量较小,采用较小的量化步长也不会对颜色码流的大小产生较大影响,在前面层级细节等级误差较小时,后续层级的细节等级的预测效果也会更好,此时无需再采用较小的量化步长也可取得较好的效果,适当增大的量化步长还可以降低码流大小,减少对编码效率的影响。
其中,第一阈值可以为14,第二阈值可以为10。这是考虑到,细节等级总层数一般为11-14;公共测试环境CTC设置的5种码率的最小Qp值为10,所以j可以为大于0且小于或等于10的整数,以确保减去该数值后不出现负数。
j例如可以是6或10。由于j的值越大,则Qp-j越小,对应的量化步长也就越小,进一步的,重建点的失真越小、重建值的误差越小,对后面层级的细节等级中点的预测也会更加准确,因此j为10时,预测的结果会比较准确即预测效果更好,当然,在对低层级的细节等级中点的残差进行量化时,也可以取6为j的值,以获得一个较小的量化步长对前N层的细节等级中点进行量化,有利于减少重建值误差,提高后续预测准确度的同时,减少对码流大小的影响。
N例如可以是6或8。由于量化步长减小在减少误差的同时也会增加编码的码流大小,影响编码效率,因此,N的值可以取6,基本为细节等级总层数的一半,且前面层级中点的数量相对较少,采用小量化步长在减少误差的同时也不会导致码流增加太多,或者N为8,对前8层的细节等级中的点采用较小量化步长进行量化,有利于减少重建值误差,提高后续预测准确度的同时,减少对码流大小的影响。
可见,本示例中,先判断当前处理的细节等级的LOD层是否属于层级分级索引所约束的层级范围,若是,则根据量化参数偏移参数确定当前处理的细节等级的LOD层的量化参数偏移的值j,j为大于0且小于或等于第二阈值的整数,若否,则确定当前处理的细节等级的LOD层的量化参数偏移的值为0,对于较前层级的细节等级适配较小量化参数对应的量化步长,即较前层级的细节等级对应较小的量化步长,较后层级的细节等级对应的量化步长比前面层级的大,有利于在提高预测准确性的同时,减少对编码效率的影响。
在一个可能的示例中,若所述量化参数Qp大于或等于第三阈值,则j为第一预设数值;若所述量化参数Qp小于所述第三阈值,则j为第二预设数值。
其中,所述第三阈值可以为30,所述第一预设数值可以为10,所述第二预设数值可以为6。
也就是说,可以根据当前编码块对应的量化参数Qp本身的大小采用分段函数的形式确定j的取值,例如当量化参数Qp大于或等于30时,j为10,Qp小于30时,j为6。
可见,本示例中,若量化参数Qp大于或等于30,则j为10,若量化参数Qp小于30,则j为6,根据量化参数Qp本身的值确定j的值,有利于提高确定j的值的灵活性。
在一个可能的示例中,所述根据所述层级分级索引和所述量化参数偏移参数确定LOD层的量化参数偏移,包括:判断当前处理的细节等级的LOD层所对应的层级组合,查询所述层级分级索引确定所述当前处理的细节等级的LOD层的层级分级索引;根据所述处理的细节等级的LOD层的层级分级索引查询所述量化参数偏移参数,确定对应的量化参数偏移。
其中,对于存在多个层级分组例如4个分组的情况,量化参数偏移参数可以为一个数组,例如3 5 6,即第一至第三组的量化参数偏移分别为-3、-5和-6,第四组的量化参数偏移为0,若确定出的量化参数为QP,则实际第一至四组的量化参数分别为QP-3、QP-5、QP-6、QP。
其中,层级组合可以有多个,任意一个层级组合可以包括前后相邻的至少2个层级,该多个层级组合包括该第一层级组合和该第二层级组合,该第一层级组合中的层级在该第二层级组合中的层级之前,该第一层级组合所对应的量化参数小于该第二层级组合所对应的量化参数,不同层级组合对应不同的量化参数,有利于对不同层级的细节等级对应的量化步长做进一步细分,提高量化步长确定的灵活性。
其中,第一层级组合可以包括多个细节等级中的前两个层级,该第一层级组合所对应的量化步长为1。前两个层级可以采用无损量化,有利于进一步减少误差,提高后续预测准确率,且由于前两各层级中点数量较少,也不会对码流大小造成太大影响。
其中,多个层级组合可以包括从前往后排序的第一层级组合、第二层级组合、第三层级组合和第四层级组合,且任意一个层级组合中包括前后相邻的至少2个层级;第一层级组合采用原始量化步长的1/sqrt(4)作为本层级的量化步长,原始量化步长是指根据量化参数Qp确定的量化步长;第二层级组合采用原始量化步长的1/sqrt(3)作为本层级的量化步长;第三层级组合采用原始量化步长的1/sqrt(2)作为本层级的量化步长;第四层级组合采用原始量化步长作为本层级的量化步长。
举例来说,若原始量化步长即根据当前编码块对应的量化参数Qp确定的量化步长为α(α为正整数),则第一层级组合、第二层级组合、第三层级组合和第四层级组合分别采用α/sqrt(4)、α/sqrt(3)、α/sqrt(2)、α作为该层级的量化步长,层级组合越靠后,对应的量化步长越大,同一层级组合中不同层级采用相同的量化步长。对不同的层级的细节等级对应的量化步长做进一步细分,提高量化步长确定的灵活性。
可见,本示例中,在确定量化参数偏移时,先判断出细节等级的层级所对应的的层级组合,然后在进一步确定出该层级组合中细节等级对应的层级分级索引,进而根据对应的层级分级索引查询量化参数偏移参数,确定出对应的量化参数偏移,不同层级组合对应不同的量化参数偏移,有利于对不同层级的细节等级对应的量化步长做进一步细分,提高量化步长确定的灵活性。
在一些实施例,本申请还包括量化参数优化使能标识,量化参数优化使能标识用于指示是否可以采用上述第一量化方式进行量化,例如量化参数优化使能标识可以是:enableProgressiveQp,可以取0或1两个值。
S207、确定当前点的第一量化权重。
在一些实施例中,为了便于描述,将基于所述当前点的量化权重和所述当前点的量化步长对所述预测残差值进行量化之前的残差值称为预测残差值,将基于所述当前点的量化权重对所述预测残差值进行处理后且基于所述当前点的量化步长进行处理前的残差值称为加权残差值,将基于所述当前点的量化权重和所述当前点的量化步长对所述预测残差值进行量化后的残差值称为量化残差值。当然,上述命名方式仅为本申请的示例,不应理解为对本申请的限制。在本申请的可替代实施例中,所述量化残差值也可称为加权量化残差值,甚至可以直接简称为残差值。
在一些实施例中,上述S207包括:确定所述当前点的索引;将所述当前点的索引所对应的量化权重,确定为所述当前点的第一量化权重。
简言之,编码器可基于点的索引获取点的第一量化权重。
可选的,所述点云的量化权重存储为数组,所述数组的维度和所述点云中点的个数相同。例如,QuantWeight[index]表示点索引为index的量化权重,此时,QuantWeight[]可以理解为是存储了所述点云中的所有点的量化权重的数组,所述数组的维度与所述点云中点的个数一致,通过点的索引便可以查询到点的量化权重。
在本申请的一些实施例中,将所述点云划分为一个或多个LOD层,每个LOD层包括一个或多个点;所述多个LOD层中的前M层LOD中的点的第一量化权重的初始值,大于所述多个LOD层中剩余LOD中的点的第一量化权重的初始值。M为大于0的整数。例如,前7层LOD中每个点的第一量化权重的初始值设为512,其余LOD中每个点的第一量化权重的初始值设为256。
在一种实现方式中,按照所述点云的编码顺序的倒序,通过遍历所述点云中的点,基于当前点的第一量化权重更新所述当前点的N个最邻近点的第一量化权重,N为大于0的整数。例如,针对当前点而言,基于所述当前点的第一量化权重,更新所述当前点的N个最邻近点中的每一个最邻近点的第一量化权重;N为大于0的整数。在一种实现方式中,获取所述当前点对所述N个最邻近点中的每一个最邻近点的影响权重,所述影响权重取决于所述当前点和所述N个最邻近点的位置信息;基于所述当前点的第一量化权重和所述当前点对所述N个最邻近点中的每一个最邻近点的影响权重,更新所述N个最邻近点的第一量化权重。在一种实现方式中,所述点云的属性参数集包括所述当前点对所述N个最邻近点中的每一个最邻近点的影响权重;通过查询所述属性参数集,获取所述当前点对所述N个最邻近点中的每一个最邻近点的影响权重。
可选的,所述点云中的每一个点的第一量化权重的初始值为预设值。
需要说明的是,本申请实施例对初始值的具体数值不作限定。作为示例,所述初始值可以是256、512或其他具体数值。初始化为256就是将所述点云中所有点的量化权重的值都设为256。
编码器在倒序遍历完所述点云中的每个点后,每个点的第一量化权重会根据其在对所述点云的点的属性信息进行预测的过程中的重要性得到更新,越重要的点其量化权重数值越大。
在一种实现方式中,基于以下公式(4)更新所述N个最邻近点的第一量化权重:
w(P_i)←w(P_i)+((α(P_i,Q)×w(Q))>>k)    (4);
其中,Q表示当前点,P_i表示距离Q第i近的邻居点,可选的i=1,2,3,w(Q)表示当前点的第一量化权重,α(P_i,Q)表示当前点对邻居点影响权重大小,">>"为右移运算,"←"为赋值运算,例如"A←B"表示将B的值赋给A。
可选的,α(P_i,Q)=2^(5-i)。
可选的,α(P_i,Q)=2^(6-i)。
可选的,α(P_i,Q)的值随i的增大而减小。
例如,图9所示,假设所述点云中的所有点的第一量化权重的初始值设为256,然后按编码顺序的倒序遍历每个点更新其三个最邻近点的第一量化权重,假设当前遍历到的点索引为index,当前点三个最邻近点的索引分别为indexN1,indexN2,indexN3,则当前点P1三个最邻近点P10、P3、P11的第一量化权重可以记为:
W[P10]=W[indexN1];
W[P11]=W[indexN2];
W[P3]=W[indexN3]。
利用当前点P1的第一量化权重按照以下方式对其三个最邻近点的第一量化权重进行更新:
w(P10)←w(P10)+((16×w(P1))>>k);
w(P11)←w(P11)+((8×w(P1))>>k);
w(P3)←w(P3)+((4×w(P1))>>k)。
可选的,k的值为8。
其中,16、8、4分别为当前点对第1、2、3最邻近点的影响权重,该影响权重可作为语法定义在点云的属性参数集中,即可通过属性参数集设置影响权重的值。编码器在编码属性信息的过程中可激活或访问所述属性参数集,继而从所述属性参数集中调用点的影响权重的值。本申请实施例对k以及影响权重的具体取值不作限定,上述数字仅为示例性说明,不应理解为对本申请的限制。例如,在本申请的可替代实施例中,还可以将第1、2、3最邻近点的影响权重分别改为64、32、16。假设当前点的第一量化权重为256,最邻近点0(即第1最邻近点)的第一量化权重也为256,(32×256)>>8的结果为32,即运算结果右移了8位,此时,最邻近点0的量化权重便更新为256+32=288,此结果可同时保存在包含了点云的所有点的量化权重的数组QuantWeight[]中,当遍历到最邻近点0时,便使用第一量化权重288对最邻近点0的三个邻居进行更新。
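上述"按编码顺序的倒序遍历、按公式(4)更新最邻近点量化权重"的过程,可以用如下假设性的Python示意说明(邻居索引表neighbors、函数名compute_quant_weights等均为示例性假设,并非本申请的规范实现):

```python
def compute_quant_weights(num_points, neighbors, init=256, k=8, alphas=(16, 8, 4)):
    # neighbors[i] 为点i的最多3个最邻近(已编码)点的索引,按由近到远排列
    w = [init] * num_points          # 所有点的第一量化权重初始化为init
    for q in reversed(range(num_points)):  # 按编码顺序的倒序遍历
        for a, p in zip(alphas, neighbors[q]):
            # w(P_i) ← w(P_i) + ((α(P_i,Q) × w(Q)) >> k)
            w[p] += (a * w[q]) >> k
    return w
```

遍历结束后,在预测中被引用越多(越重要)的点,其量化权重数值越大。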
下面结合表5对属性参数集的语法进行介绍。
表5属性参数集
(表5的属性参数集语法表原文为图示,其主要语法元素包括attribute_parameter_set中的aps_chroma_qp_offset以及weightOfNearestNeighborsInAdaptiveQuant[i](i=0,1,2)。)
如表5所示,attribute_parameter_set表示属性参数集,aps_chroma_qp_offset表示色度偏差,weightOfNearestNeighborsInAdaptiveQuant[i]表示当前点对第i最邻近点的影响权重,其中,i为0、1、2分别表示当前点的第1、2、3最邻近点。具体而言,第1最邻近点表示距离当前点最近的邻居点,第2最邻近点表示距离当前点第2近的邻居点,以此类推。
S208、根据目标量化步长和第一量化权重,对当前点的属性信息的残差值进行量化,得到当前点的属性信息的量化残差值。
在本申请的一些实施例中,上述S208包括如下步骤C1和步骤C2:
步骤C1、根据所述当前点的第一量化权重,确定所述当前点的第二量化权重;
步骤C2、根据所述目标量化步长和所述第二量化权重,对所述当前点的属性信息的残差值进行量化,得到所述当前点的属性信息的量化残差值。
可选的,所述第二量化权重小于或等于所述当前点的目标量化步长,其中当前点的目标量化步长即为当前点所在的目标LOD层的目标量化步长。
可选的,利用以下公式(5)确定所述当前点的第二量化权重:
effectiveQuantWeight=min(w(Q)>>k,Qstep)  (5);
其中,effectiveQuantWeight表示所述当前点的第二量化权重,w(Q)表示所述当前点的第一量化权重,k表示对所述w(Q)进行右移运算的位数,Qstep表示所述当前点的目标量化步长,“>>”为右移运算。
可选的,利用以下公式(6)确定所述当前点的第二量化权重:
effectiveQuantWeight=min(w(Q),Qstep)  (6)
本申请实施例中,当所述当前点的目标量化步长设置较小时,所述当前点的第一量化权重可能会超过目标量化步长,此时,需要对第一量化权重和目标量化步长取二者中的较小值得到第二量化权重,由此,保证编码器能够对预测残差值进行量化操作,即保证编码器的编码性能。
可选的,所述第二量化权重的数值等于2的整数次幂。
可选的,所述当前点的第一量化权重的数值不等于2的整数次幂,基于所述当前点的第一量化权重的数值,将最接近所述当前点的第一量化权重的2的整数次幂,确定为所述第二量化权重。
例如,假设所述当前点的第一量化权重的值为18,为了方便硬件实现,可以将18转换为距离其最近的2的整数次幂,即16或者32,例如将18转换为16,即将18替换为16。假设所述当前点的第一量化权重的值为30,距离其最近的2的整数次幂将变为了32,此时,所述当前点的第一量化权重将会转换为32;针对2的整数次幂,通过二进制移位操作即可实现自适应量化的功能,便于硬件实现。
本申请实施例中,通过将所述第二量化权重的数值构造为2的整数次幂,可将加权的乘法运算处理为移位运算,能够提升编码器的处理效率,进而提升编码器的性能。
需要说明的是,本申请的其他可替代实施例中,可以先在当前点的第一量化权重和当前点的目标量化步长中取最小值,然后将最接近所述最小值的2的整数次幂,确定为所述第二量化权重。当然,也可以通过其他方式确定第二量化权重,本申请实施例对此不作具体限定。例如可直接将所述当前点的第一量化权重确定为所述当前点的第二量化权重。
在一些实施例中,所述步骤C2可包括:利用所述第二量化权重乘以所述预测残差值,得到加权残差值;利用所述当前点的目标量化步长对所述加权残差值进行量化,得到所述量化残差值。
具体来说,编码器通过预测变换可以得到当前点的属性信息的预测值,已知当前点的属性信息的真实值,则通过真实值减去预测值可得到当前点的属性信息的预测残差值,预测残差值乘以第二量化权重得到加权预测残差值,利用量化步长对加权预测残差值进行量化,可以得到量化加权预测残差值,即量化残差值,随后对当前点的量化残差值进 行熵编码写进码流。
对应的,解码器首先根据重建的位置信息计算点云中每个点的第一量化权重,通过与目标量化步长比较确定每个点的第二量化权重,然后解析码流得到当前点的量化残差值,反量化得到加权预测残差值,加权预测残差值除以第二量化权重得到预测残差值,解码器通过预测变换确定当前点的属性信息的预测值,随后基于当前点的属性信息的预测值和预测残差值得到当前点的属性信息的重建值,解码器获取当前点的属性信息的重建值后,按顺序遍历下一个点进行解码和重建。
综上,本申请实施例中,编码器在量化前,对预测残差值乘以第二量化权重进行加权;解码器在反量化后,对反量化的加权预测残差值除以第二量化权重去除加权影响,得到预测残差值。需要说明的是,由于量化不是无损的,因此解码器得到的加权预测残差值不一定等于编码器得到的加权预测残差值。
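编码端加权量化与解码端去加权反量化的配合,可以用如下假设性的Python示意说明(这里用整数除法近似量化取整,effective_weight、quantize、dequantize等函数名均为示例性假设,并非本申请的规范实现):

```python
def effective_weight(w, qstep, k=0):
    # 第二量化权重:第一量化权重(可选右移k位)与目标量化步长取较小值,
    # 对应公式(5)/(6)的思想
    return min(w >> k, qstep)

def quantize(residual, w, qstep):
    # 公式(7)的示意:残差先乘第二量化权重加权,再按量化步长量化
    return residual * effective_weight(w, qstep) // qstep

def dequantize(qres, w, qstep):
    # 解码端:先按量化步长反量化,再除以第二量化权重去除加权影响
    return qres * qstep // effective_weight(w, qstep)
```

由于量化不是无损的,反量化得到的残差值不一定严格等于编码端的原始残差值;示例中选取的数值恰好整除,故能无损还原。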
在本申请的一些实施例中,所述步骤C2可包括:
利用以下公式(7)对所述预测残差值进行量化,得到所述量化残差值:
attrResidualQuant2=attrResidualQuant1×effectiveQuantWeight/Qstep   (7)
其中,attrResidualQuant2表示所述量化残差值,attrResidualQuant1表示所述残差值的原始值,effectiveQuantWeight表示所述当前点的第二量化权重,Qstep表示所述当前点的量化步长,“X”表示乘法运算,“/”表示除法运算。
在本申请的一些实施例中,所述步骤C2可包括:根据所述当前点的第二量化权重,对所述目标量化步长进行更新;根据所述更新后的量化步长,对所述当前点的属性信息的残差值进行量化。
可选的,根据所述当前点的第二量化权重,对所述目标量化步长进行更新包括:利用以下公式(8)更新所述当前点的目标量化步长:
newQstep=⌈Qstep/effectiveQuantWeight⌉    (8)
其中,effectiveQuantWeight表示所述当前点的第二量化权重,newQstep表示基于所述当前点的目标量化步长更新后的量化步长,Qstep表示所述当前点的目标量化步长,⌈·⌉表示向上取整运算,"/"表示除法运算。
S209、对当前点的量化残差值进行编码,生成属性码流。
在上述实施例的基础上,在本申请的一具体实施例提高的点云编码过程包括:编码端首先利用第一量化方式计算得到所有点的量化权重。确定当前点所属的目标LOD层,判断当前点是否属于前7层LOD;如果属于前7层LOD,读取当前编码参数的QP值,并加上该目标LOD层的DeltaQP值,将QP+DeltaQP转化为对应的Qstep,即目标量化步长,令当前点的量化权重w(Q)=min(w(Q),Qstep),在量化前,利用w(Q)乘以当前点的预测残差值进行加权。
如果当前点不属于前7层LOD,读取当前编码参数的QP值,将QP转化为对应的Qstep,令当前点的量化权重w(Q)=min(w(Q),Qstep),在量化前,利用w(Q)乘以当前点的预测残差进行加权。
本申请实施例,根据当前点的几何信息确定当前点所处的目标LOD层,并确定该目标LOD层对应的目标量化步长,该目标量化步长是基于量化参数增量确定的,提高了量化步长的确定灵活性。另外,本实施例引入了用于对所述当前点的量化步长进行加权的 第一量化权重,通过引入当前点的量化权重,相当于基于当前点的第一量化权重对当前点的目标量化步长进行了修正,即根据当前点的重要程度可自适应调整当前点的目标量化步长,进而基于调整后的目标量化步长对当前点的残差值进行量化,在对点云中的点的属性信息进行预测的过程中,针对在编码顺序中位置靠前的点,当其在预测中比较重要时,能够避免其量化步长过大,进而能够避免产生较大的重建误差,相当于,量化权重高的点采用较小的量化步长量化以降低其重建误差,针对在编码顺序中位置靠后的点,能够提升其预测准确性,提升编码效果。
基于本申请提供的技术方案,在G-PCC参考软件TMC13 V11.0上进行了测试,在CTC CY测试条件下对运动图像专家组(Moving Picture Experts Group,MPEG)要求的部分测试序列进行测试,测试结果如下述表6所示,下面结合表6对性能提升效果进行说明。
表6
(表6原文为图示,给出cat1-A点云序列、cat1-B点云序列及二者平均值在亮度分量、色度分量Cb和色度分量Cr上的BD-AttrRate测试结果,具体数值见下文说明。)
其中,cat1-A点云序列中的点包括颜色属性信息和其他属性信息,例如反射率属性信息,cat1-B点云序列中的点包括颜色属性信息。BD-AttrRate是评价视频编码算法性能的主要参数之一,表示新算法(即本申请技术方案)编码的视频相对于原来的算法在码率和PSNR(Peak Signal to Noise Ratio,峰值信噪比)上的变化情况,即新算法与原有算法在相同信噪比情况下码率的变化情况。“-”表示性能提升,例如码率和PSNR性能提升。如表6所示,对于cat1-A点云序列,采用本申请的技术方案相比于原有技术,其中亮度分量上性能提升了0.8%,色度分量Cb上性能提升了4.1%,色度分量Cr上性能提升了5.4%。“平均值”表示cat1-A点云序列和cat1-B点云序列的性能提升的平均值。
实施例3
若目标量化方式包括第一量化方式和第三量化方式,本申请实施例的点云编码过程如图10所示。
图10为本申请一实施例提供的点云编码方法的流程示意图,如图10所示,包括:
S301、获取点云中点的几何信息和属性信息。
S302、根据点云中每个点的几何信息,对点云进行LOD划分,得到点云的多层LOD层,其中,每个LOD层包括至少一个细节表达层,每一层细节表达层中包括至少一个点。
S303、确定点云中当前点的属性信息的预测值。
S304、根据当前点的属性信息的预测值和属性信息的原始值,确定当前点的属性信息的残差值。例如,将当前点的属性信息的原始值与预测值的差值,确定为当前点的属性信息的残差值。
其中,上述S301至S304的实现过程可以参照上述S201至S204的描述,在此不再赘述。
S305、根据当前点的几何信息,确定当前点所处的目标LOD层。
S306、判断当前点是否属于无损编码的点,若确定当前点属于有损编码的点,则执行如下S307至S309,若确定当前点属于无损编码的点,则执行如下S310。
S307、确定适配目标LOD层的目标量化步长。具体参照上述S206的描述,在此不再赘述。
S308、根据目标量化步长,对当前点的属性信息的残差值进行量化,得到当前点的属性信息的量化残差值。
S309、对当前点的量化残差值进行编码,生成属性码流。
S310、对当前点的残差值进行无损编码,生成属性码流。
由于量化会对属性信息的重建值造成误差,进而降低后续属性信息预测的准确性,从而降低整个属性信息的编码效果。为了解决该技术问题,本申请对点云中至少一个点的属性信息的残差值进行无损编码,以减少量化对属性信息的重建值的影响,进而提高属性信息预测的准确性,且不会对属性码流的大小产生较大影响,从而提高属性信息的编码效果。
下面对本申请涉及的无损编码过程进行介绍。
需要说明的是,本申请中对点的属性信息的残差值进行无损编码也可以称为对点的属性信息的残差值不量化。
本申请对属性信息的残差值进行无损编码的点的个数不做限制,例如对点云中部分点的属性信息的残差值进行量化,部分点的属性信息的残差值不进行量化(即进行无损编码);或者,对点云中所有点的属性信息的残差值不进行量化(即无损编码)。
在一种示例中,上述属性信息的残差值进行无损编码的至少一个点可以包括N个点。
可选的,该N为2的整数倍,例如,对点云中的2个、4个、16个或24个点的属性信息的残差值进行无损编码。
可选的,上述N个点可以为点云中任意的N个点,例如为排序后的点云中连续的N个点,或者为随机选取的N个点,或者为指定的N个点,或者为根据预设的取点间隔选取的N个点,其中取点间隔可以是不均匀间隔。
可选的,上述N个点中每相邻的两个点之间的间隔相等,例如,上述点云包括1200个点,假设N为24,则这24个点之间的间隔相等,均为50个点。
在一些实施例中,根据预设间隔,对点云中每隔预设间隔的点的属性信息的残差值进行无损编码。例如,上述点云包括1200个点,预设间隔为10,则对排序后的点云中每隔10个点的点的属性信息的残差值进行无损编码。可选的,可以将1200个点中的第一点作为第一个属性信息的残差值不量化的点,间隔10个点,将第11点作为第二个属性信息的残差值不量化的点,依次类推。可选的,可以将1200个点中的第11个点作为第一个属性信息的残差值进行无损编码的点,间隔10个点,将第21点作为第二个属性信息的残差值进行无损编码的点,依次类推。
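上述"按预设间隔选取属性信息的残差值进行无损编码(不量化)的点"的做法,可以用如下假设性的Python示意说明(函数名lossless_point_flags以及"从第1个点开始、每隔interval个点选一个"的取点规则均按上文示例假设):

```python
def lossless_point_flags(num_points, interval):
    # 从第1个点(索引0)开始,每隔interval个点标记一个不量化(无损编码)的点
    flags = [False] * num_points
    for i in range(0, num_points, interval):
        flags[i] = True
    return flags
```

编码时对flags为True的点跳过量化(或将其量化步长设置为1),其余点正常量化。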
在一些实施例中,本申请实施例还包括S3061、对多层细节表达层中的至少一层细节表达层中的至少一个点的属性残差值进行无损编码。
在一些实施例中,上述S3061可以包括:对多层细节表达层中的一层细节表达中的至少一个点的属性残差值进行无损编码;或者,对多层细节表达层中的部分细节表达层 中的至少一个点的属性残差值进行无损编码,对多层细节表达层中的部分细节表达层中的每个点的属性残差值进行量化;或者,对多层细节表达层中的每层细节表达层中的至少一个点的属性残差值进行无损编码。
在一些实施例中,上述S3061包括S3061-A1、S3061-A2和S3061-A3:
S3061-A1、获得多层细节表达层中所包括的点的总个数小于或等于第一预设值的至少一个第一类细节表达层,以及所包括的点的总个数大于第一预设值的至少一个第二类细节表达层;其中,第一预设值的具体取值根据实际需要进行确定,本申请对此不做限制;
S3061-A2、对第一类细节表达层中的所有点的属性信息的残差值进行无损编码;
S3061-A3、对第二类细节表达层中的至少一个点的属性信息的残差值进行无损编码。
由上述可知,多层细节表达层中各层所包括的点的数量可能相同也可以不同。基于此,根据多层细节表达层中每层细节表达层所包括的总个数,将多层细节表达层划分为第一类细节表达层和第二类细节表达层,其中第一类细节表达层所包括的点的总个数小于或等于第一预设值,第二类细节表达层包括的点的总个数大于第一预设值。例如,将点云进行LOD划分,得到14层细节表达层,假设随着层数从小到大,细节表达层包括的点数依次为:1,6,28,114,424,1734,10000……。假设第一预设值为24,如图9所示,将上述14层细节表达层中前2层细节表达层(即包括1个点的第一层细节表达层和包括6个点的第二层细节表达层)划分为第一类细节表达层,得到2个第一类细节表达层,将上述14层中剩余的12层细节表达层(即包括28个点的第三层细节表达层,以及第三层细节表达层之后的其他细节表达层)划分为第二类细节表达层,得到12个第二类细节表达层。
对第一类细节表达层中的所有点的属性信息的残差值不进行量化,例如,对上述14层细节表达层中的前2层细节表达层中的所有点的属性信息的残差值进行无损编码。
对第二类细节表达层中的至少一个点的属性信息的残差值不进行量化,例如,对上述14层细节表达层中的后12层中每一层的至少一个点的属性信息的残差值进行无损编码。在选择属性信息的残差值进行无损编码的点时,不同的第二类细节表达层可以采用不同的跳跃量化(skip quantization)选点方式,例如每一层第二类细节表达层有不同的选点方式。可选的,不同的第二类细节表达层可以采用相同的跳跃量化(skip quantization)选点方式,例如每一层第二类细节表达层的选点方式相同。
在一些实施例中,为了保持编码端和解码端的一致,则编码端可以将上述第一类细节表达层和第二细节表达层的相关信息携带在属性码流中。这样解码端可以从属性码流中解析出第一类细节表达层和第二细节表达层的相关信息,并根据解析出的第一类细节表达层和第二细节表达层的相关信息进行点的属性信息的重建。
在一些实施例中,每个第二类细节表达层中属性信息的残差值进行无损编码的点的数量相同,则上述S3061-A3包括S3061-A3-1:
S3061-A3-1、对第二类细节表达层中的M个点的属性信息的残差值进行无损编码,其中,M为2的正整数倍,例如为2、4、24、32等。
可选的,同一层第二类细节表达层中的M个点中相邻两个点之间的间隔相等。例如, 第二细节表达层1包括200个点,第二细节表达层2包括300个点,假设M等于10,则第二细节表达层1中20个点的属性信息的残差值进行无损编码,依次为:第1个点、第11个点、第21个点、第31个点……第181个点、第191个点,相邻的两个属性信息的残差值无损编码的点之间间隔10个点。第二细节表达层2中30个点的属性信息的残差值进行无损编码,依次为:第1个点、第11个点、第21个点、第31个点……第281个点、第291个点,相邻的两个属性信息的残差值无损编码的点之间间隔10个点。
在一些实施例中,可以根据如下程序,将上述M添加在编码端的属性参数集中,以通过编码参数来设置每个第二类细节表达层中等间隔不量化的点的具体数值:
(该程序原文为图示,其在attribute_parameter_set语法结构中增加了语法元素aps_equal_intervals_unquantized_num。)
其中,aps_equal_intervals_unquantized_num表示了每个第二类细节表达层中等间隔不量化的点数,例如24。
在一些实施例中,若至少一个第二类细节表达层包括L个第二类细节表达层,不同的第二类细节表达层中属性信息的残差值进行无损编码的点的数量可以不同,则上述S3061-A3包括S3061-A3-2和S3061-A3-3:
S3061-A3-2、对P个第二类细节表达层中每个第二类细节表达层中的第一数量个点的属性信息的残差值进行无损编码;
S3061-A3-3、对Q个第二类细节表达层中每个第二类细节表达层中的第二数量个点的属性信息的残差值进行无损编码;
其中,L为大于或等于2的正整数,P、Q均为正整数,且P与Q之和小于或等于L,P个第二类细节表达层与Q个第二类细节表达层不重叠,第一数量与第二数量不同。
上述P个第二类细节表达层可以为L个第二类细节表达层中任意P个第二类细节表达层,这P个第二细节表达层可以是连续的第二类细节表达层,也可以是不连续的第二类细节表达层。
上述Q个第二类细节表达层可以为L个第二类细节表达层中除P个第二类细节表达层之外的任意Q个第二类细节表达层,这Q个第二细节表达层可以是连续的第二类细节表达层,也可以是不连续的第二类细节表达层。
例如图11所示,L等于12,从这12个第二类细节表达层中任意选取P个(例如P=7)第二类细节表达层,从剩余的7个第二类细节表达层中任意选取Q个(例如Q=7)第二类细节表达层。
在一种示例中,P个第二类细节表达层为L个第二类细节表达层中的前P个第二类细节表达层。
在一种示例中,Q个第二类细节表达层为L个第二类细节表达层中的后Q个第二类细节表达层。
继续参照图11,将12个第二类细节表达层中前P个(例如P=7)第二类细节表达层,作为上述P个第二类细节表达层,将12个第二类细节表达层中的后Q个(例如Q=7)第二类细节表达层,作为Q个第二类细节表达层。其中,P个第二类细节表达层之间相互连续,Q个第二类细节表达层之间相互连续。
在一些实施例中,如图11若点云的多层细节表达层包括14层细节表达,则P可以取7或8。
在一种示例中,P个第二类细节表达层中的最后一个第二类细节表达层与Q个第二类细节表达层的第一个第二类细节表达层相邻。例如,图11所示,假设P=7,Q=7,P个第二类细节表达层中最后一个第二类细节表达层为第7层细节表达层,Q个第二类细节表达层中第一个第二类细节表达层为第8层细节表达层,第7层细节表达层与第8层细节表达层相邻。
根据上述方法将L个第二类细节表达层划分为P个第二类细节表达层和Q个第二类细节表达层,针对P个第二类细节表达层中的每个第二类细节表达层,对该第二类细节表达层中的第一数量个点的属性信息的残差值不进行量化。针对Q个第二类细节表达层中的每个第二类细节表达层,对该第二类细节表达层中的第二数量个点的属性信息的残差值不进行量化。
若P个第二类细节表达层位于Q个第二类细节表达层之前,则第一数量大于第二数量,例如,第一数量为24、32或64,对应的第二数量可以为8、16或32。这是由于在属性信息的预测过程中,例如图8所示,按照层数从低到高对多层细节表达层进行排序,得到点云的LOD顺序,根据点云的LOD顺序进行属性信息的编码。在预测的过程中,在LOD顺序中排列靠前的点被后续预测过程中用于作为参考点的机会较大,因此,为了降低量化对属性信息的重建值的影响,则将靠前的P个第二类细节表达层中的较多点的属性信息的残差值不进行量化。而为了消除冗余,对靠后的Q个第二类细节表达层中的较少点的属性信息的残差值不进行量化。
在一些实施例中,第一数量为第二数量的正整数倍,例如,第一数量为第二数量的3倍或2倍,例如第一数据为24,第二数量为8。
在一种示例中,P个第二类细节表达层中每个第二类细节表达层的第一数量个点中相邻两个点之间的间隔相等。
在一种示例中,Q个第二类细节表达层中每个第二类细节表达层的第二数量个点中相邻两个点之间的间隔相等。
在一些实施例中,由于前几层的LOD的点会影响后续LOD中点的预测结果,所以前几层LOD中的点的预测结果就更重要,因此设计前七层每一层LOD(LOD0~LOD6)等间隔不进行量化的属性残差值的点数为32(intermittent_unquantized_num),后续的每一层LOD的等间隔不进行量化的点数为10(intermittent_unquantized_num/3)。
以第3层LOD为例,如图12A所示,intermittent_unquantized_num=32。
以第8层LOD为例,如图12B所示,intermittent_unquantized_num/3=10。
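上述"前7层LOD每层等间隔选32个点不量化、其余层选32/3≈10个点不量化"的选点规则,可以用如下假设性的Python示意说明(函数名为示例性假设,层内等间隔取点方式为简化处理,并非本申请的规范实现):

```python
def unquantized_count(lod_layer, base=32):
    # 前7层(LOD0~LOD6)每层不量化点数为base,其余层为base的三分之一(取整)
    return base if lod_layer < 7 else base // 3

def unquantized_indices(layer_size, count):
    # 在该层内等间隔选出count个不量化点的索引(简化示意)
    if count >= layer_size:
        return list(range(layer_size))
    step = layer_size // count
    return [i * step for i in range(count)]
```

例如第3层LOD每层选32个点,第8层LOD每层选10个点,层内各不量化点之间间隔相等。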
本实施例在实际量化过程中,可以采用如下方式进行量化:
方式一,在对点云中的点的属性信息的残差值进行量化的过程中,跳过至少一个对属性信息的残差值进行无损编码的点。
方式二,将至少一个对属性信息的残差值进行无损编码的点的量化步长设置为1。
例如,目前根据如下公式(9)进行点的属性信息的残差值的量化:
attrResidualQuant=attrResidual/Qstep      (9)
其中,attrResidualQuant为量化后的属性信息的残差值,Qstep为量化步长。
在该方式中,对于属性信息的残差值进行无损编码的点,可则将其量化步长设置为1,即Qstep=1。
方式三,将至少一个对属性信息的残差值进行无损编码的点的量化参数QP设置为目标值,目标值为量化步长为1时对应的QP值。
由于量化步长是由量化参数QP值计算得到,QP值通常通过配置文件预先配置。基于此,可以将QP设置为量化步长为1时对应的QP值。
在一些实施例中,该属性码流包括第一信息,该第一信息用于指示属性信息的残差值进行无损编码的点。
在一些实施例中,第一信息包括属性信息的残差值进行无损编码的点的标识信息,例如点云中包括100个属性信息的残差值无损编码的点,这样在属性码流中携带这100个点的标识信息。解码端对属性码流解析,获得属性信息的残差值进行无损编码的点的标识信息,以及这些点的属性信息的残差值后,对该标识信息对应的点的属性信息的残差值不进行反量化,而是直接使用这些点的属性信息的残差值进行属性信息的重建,以保持与编码端一致。
在一些实施例中,第一信息包括属性信息的残差值进行无损编码的点的总数量,例如上述N。
在一些实施例中,若L个第二类细节表达层中每层所包括的属性信息的残差值进行无损编码的点数量相同,且这些点之间等间隔排列,则第一信息包括每个第二类细节表达层中属性信息的残差值等间隔无损编码的点的具体数量(num),即属性码流中携带num。
在该示例中,若第二类细节表达层中第一个属性信息的残差值无损编码的点不是该第二细节表达层中的第一个点,则上述第一信息还需要携带第一个属性信息的残差值无损编码的点的标识信息。
在一些实施例中,若P个第二类细节表达层和Q个第二类细节表达层中无损编码的点均等间隔排列,则上述第一信息还包括第一数量和第二数量,以及P个第二类细节表达层和Q个第二类细节表达层的划分信息。
在该示例中,若P个第二类细节表达层中的最后一个第二类细节表达层与Q个第二类细节表达层的第一个第二类细节表达层相邻,则上述划分信息还包括Q个第二类细节表达层的第一个第二类细节表达层的标识信息;或者,包括P个第二类细节表达层的最 后一个第二类细节表达层的标识信息;或者,包括P和/或Q。这样解码端可以根据这些信息,从L个第二类细节表达层中确定出P个第二类细节表达层和Q个第二类细节表达层,进而对P个第二类细节表达层中每层的第一数量个等间隔点的属性信息的残差值进行无损编码,对Q个第二类细节表达层中每层的第二数量个等间隔点的属性信息的残差值进行无损编码。
在该示例中,第一信息还可以包括每个第二类细节表达层中第一个属性信息的残差值量化无损编码的点的标识信息。
在上述实施例的基础上,在本申请一具体实施例中,点云的编码过程包括:编码端判断当前点是否属于前7层LOD并且根据第三量化方式判断该当前点是否是不进行量化的点。
如果当前点属于前7层LOD,并且该当前点不是不量化的点,读取当前编码参数的QP值,然后加上当前点所属的目标LOD层的DeltaQP值,将QP+DeltaQP转化为对应的Qstep;如果是不量化的点,则此点的Qstep=1(即不需要量化)。
如果当前点不属于前7层LOD,并且该当前点不是不量化的点,读取当前编码参数的QP值,将QP转化为对应的Qstep;如果当前点是不量化的点,则此点的Qstep=1(即不需要量化)。接着使用上述确定的量化步长对当前点的属性信息的残差值进行量化。
本申请实施例,将第一量化方式和第三量化方式进行结合,具体是根据所述当前点的几何信息,确定所述当前点所处的目标LOD层;若确定所述当前点属于有损编码的点,则确定适配所述目标LOD层的目标量化步长;根据所述目标量化步长,对所述当前点的属性信息的残差值进行量化,得到所述当前点的属性信息的量化残差值。若确定所述当前点属于无损编码的点,则对当前点的属性信息的残差值进行无损编码。本实施例根据当前点的几何信息确定当前点所处的目标LOD层,并确定该目标LOD层对应的目标量化步长,该目标量化步长是基于量化参数增量确定的,提高了量化步长的确定灵活性。另外,通过对点云中至少一个点的属性信息的残差值进行无损编码(即不进行量化),可以减少量化对属性信息的重建值的影响,进而提高属性信息预测的准确性。
本申请的技术方案在G-PCC参考软件TMC13V11.0上实现后,在通用测试配置CTC CY测试条件下对运动图像专家组(MPEG)要求的部分点云测试序列(cat1-A)进行测试,测试结果如表7所示:
表7
(表7原文为图示,给出cat1-A点云测试序列在亮度分量、色度分量Cb、色度分量Cr上的BD-AttrRate测试结果。)
其中,cat1-A点云序列中的点包括颜色属性信息和其他属性信息。如表7所示,对于cat1-A点云序列,采用本申请的技术方案相比于原有技术,其中亮度分量、色度分量Cb、色度分量Cr上性能均提升。
实施例4
若目标量化方式包括第一量化方式、第二量化方式和第三量化方式,本申请实施例的编码过程如图13所示。
图13为本申请一实施例提供的点云编码方法的流程示意图,如图13所示,包括:
S401、获取点云中点的几何信息和属性信息。
S402、根据点云中每个点的几何信息,对点云进行LOD划分,得到点云的多层LOD层,其中,每个LOD层包括至少一个细节表达层,每一层细节表达层中包括至少一个点。
S403、确定点云中当前点的属性信息的预测值。
S404、根据当前点的属性信息的预测值和属性信息的原始值,确定当前点的属性信息的残差值。例如,将当前点的属性信息的原始值与预测值的差值,确定为当前点的属性信息的残差值。
其中,上述S401至S404的实现过程可以参照上述S201至S204的描述,在此不再赘述。
S405、根据当前点的几何信息,确定当前点所处的目标LOD层。
S406、判断当前点是否为无损编码的点,若确定当前点属于有损编码的点,则执行如下S407至S410,若确定当前点属于无损编码的点,则执行S411。
其中，判断当前点是否为无损编码点的过程参照上述S306的描述，在此不再赘述。
S407、确定适配目标LOD层的目标量化步长。具体参照上述S206的描述,在此不再赘述。
S408、确定当前点的第一量化权重。具体参照上述S207的描述,在此不再赘述。
S409、根据目标量化步长和第一量化权重,对当前点的属性信息的残差值进行量化,得到当前点的属性信息的量化残差值。
S410、对当前点的属性信息的量化残差值进行编码,形成码流。
S411、对当前点的属性信息的残差值进行无损编码,形成码流。
在本申请的一具体实施例中，采用第二量化方式，计算得到点云中所有点的量化权重（即第一量化权重）。判断当前点是否属于前7层LOD并且根据第三量化方式判断该点是否是不进行量化的点。
如果确定当前点属于前7层LOD,并且该当前点不是不量化的点,读取当前编码参数的QP值,然后加上该层LOD的DeltaQP值,将QP+DeltaQP转化为对应的Qstep,即该Qstep为目标量化步长,令当前点的量化权重(即第二量化权重)w(Q)=min(w(Q),Qstep)。在量化前,利用第二量化权重w(Q)乘以当前点的残差值以进行加权,得到加权残差值,并根据目标量化步长对加权残差值进行量化。
如果确定该当前点是不量化的点,即执行当前点Qstep=1(即此点不需要量化)。
如果当前点不属于前7层LOD,并且该点不是不量化的点,读取当前编码参数的QP值,将QP转化为对应的Qstep,令当前点的量化权重w(Q)=min(w(Q),Qstep),在量化前,利用w(Q)乘以当前点的残差值进行加权,得到加权残差值。如果该当前点是不量化的点,则执行此点Qstep=1(即此点不需要量化)。
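上述"先用第二量化权重w(Q)=min(w(Q),Qstep)对残差加权、再按目标量化步长量化"的过程，可以用如下示意性的Python草图说明（非规范实现；取整方式round为本示例的假设性选择）：

```python
def quantize_weighted(residual, w, qstep):
    # 第二量化权重：effectiveQuantWeight = min(w(Q), Qstep)
    effective_w = min(w, qstep)
    # 先用权重对残差加权，再按目标量化步长量化（四舍五入为示例性选择）
    return round(residual * effective_w / qstep)

def dequantize_weighted(quantized, w, qstep):
    # 解码端对应操作：反量化后除以权重去除加权影响
    effective_w = min(w, qstep)
    return quantized * qstep / effective_w
```

可以看到，权重w越大（越接近Qstep），该点的有效量化步长越小，重建误差也越小；当w不小于Qstep时，有效量化步长退化为1。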
本申请实施例,将第一量化方式、第二量化方式和第三量化方式进行结合,具体是根据所述当前点的几何信息,确定所述当前点所处的目标LOD层;若确定所述当前点属于有损编码的点,则确定适配所述目标LOD层的目标量化步长以及确定当前点的第一量化权重;根据所述目标量化步长和第一量化权重,对所述当前点的属性信息的残差值进行量化,得到所述当前点的属性信息的量化残差值。若确定所述当前点属于无损编码的点,则对当前点的属性信息的残差值进行无损编码。本实施例通过对点云中至少一个点的属性信息的残差值进行无损编码(即不进行量化),以减少量化对属性信息的重建值的影响,进而提高属性信息预测的准确性。另外,本实施例引入了用于对所述当前点的量化步长进行加权的第一量化权重,通过引入当前点的量化权重,相当于基于当前点的第一量化权重对当前点的目标量化步长进行了修正,即根据当前点的重要程度可自适应调整当前点的目标量化步长,进而基于调整后的目标量化步长对当前点的残差值进行量化,在对点云中的点的属性信息进行预测的过程中,针对在编码顺序中位置靠前的点,当其在预测中比较重要时,能够避免其量化步长过大,进而能够避免产生较大的重建误差,相当于,量化权重高的点采用较小的量化步长量化以降低其重建误差,针对在编码顺序中位置靠后的点,能够提升其预测准确性,提升编码效果。
本申请的技术方案在G-PCC参考软件TMC13 V11.0上实现后,在通用测试配置CTC CY测试条件下对运动图像专家组(MPEG)要求的部分点云测试序列(cat1-A)进行测试,测试结果如表8所示:
表8
Figure PCTCN2021087064-appb-000012
Figure PCTCN2021087064-appb-000013
其中，cat1-A点云序列中的点包括颜色属性信息和其他属性信息。如表8所示，对于cat1-A点云序列，采用本申请的技术方案（第一量化方式、第二量化方式和第三量化方式的结合），相比于原有技术，其中亮度分量、色度分量Cb、色度分量Cr上性能均提升。
实施例5
上文对本申请实施例涉及的点云编码方法进行了描述,在此基础上,下面针对解码端,对本申请涉及的点云解码方法进行描述。
图14为本申请实施例提供的点云解码方法的一种流程示意图,如图14所示,本申请实施例的方法包括:
S501、对点云的码流进行解析,得到点云的当前点的属性信息的量化残差值。
S502、用目标反量化方式,对当前点的属性信息的量化残差值进行反量化,得到当前点的属性信息的重建残差值。
其中,目标反量化方式包括如下至少两种反量化方式:第一反量化方式、第二反量化方式和第三反量化方式,第一反量化方式为对点云中至少一个点的反量化参数设定反量化参数增量,第二反量化方式为对点云中点的残差值进行加权处理,第三反量化方式为对点云中至少一个点的属性信息的残差值进行无损解码。
其中,反量化也可以称为逆量化或解量化,可以理解为Scaling。预测值可以是属性预测值中的颜色预测值。
需要说明的是,点云中点的几何信息的解码完成后,执行属性信息的解码。在对几何码流解码完成后,可以得到点云中点的几何信息。
对点云的属性码流进行解析，得到点云中点的属性信息的量化残差值。使用目标反量化方式对点云中点的属性信息的量化残差值进行反量化，得到点的属性信息的重建残差值。其中目标反量化方式包括如下至少两种反量化方式：第一反量化方式、第二反量化方式和第三反量化方式。
本申请实施例还包括根据点云中点的几何信息,确定点云中点的属性信息的预测值。
具体是,对点云中点的几何信息进行解码,得到点云中点的几何信息的重建值,根据点云中点的几何信息的重建值,确定点云中点的属性信息的预测值。
根据点云中点的属性信息的预测值和重建残差值,得到点云中点的属性信息的重建值。
实施例6
若目标反量化方式包括第一反量化方式和第二反量化方式,则解码过程如图15所示。
图15为本申请一实施例提供的点云解码方法的流程示意图,如图15所示,包括:
S601、解码几何码流,得到点云中点的几何信息。
S602、根据点云中点的几何信息,将点云划分为一个或多个细节等级LOD层。其中每个LOD层包括至少一个细节表达层,每一层细节表达中包括至少一个点。
S603、根据当前点的几何信息,确定当前点所处的目标LOD层。
S604、确定当前点的属性信息的预测值。例如,根据多层细节表达层对点云中的点进行排序,得到LOD顺序,根据待解码点的几何信息,在该LOD顺序中获得待解码点的至少一个已解码的邻近点,根据至少一个已解码邻近点的属性信息的重建值,确定该待解码点的属性信息的预测值。
S605、确定适配目标LOD层的目标量化步长。
其中，S605的实现方式包括但不限于如下几种：
方式一,解码码流,得到当前点的编码参数中的量化参数;获取目标LOD层的层级分级索引,并根据目标LOD层的层级分级索引,确定目标LOD层的量化参数增量;根据量化参数和目标LOD层的量化参数增量,确定目标LOD层对应的目标量化步长。
在一些实施例中,根据目标LOD层的层级分级索引,确定目标LOD层的量化参数增量,包括:若目标LOD层属于点云的前N个LOD层,则确定目标LOD层的量化参数增量为j,N为小于或等于第一阈值的正整数,j为大于0且小于或等于第二阈值的整数;若目标LOD层不属于点云的前N个LOD层,则确定目标LOD层的量化参数增量为0。
可选的,若量化参数大于或等于第三阈值,则j为第一预设数值;若量化参数小于第三阈值,则j为第二预设数值。
方式二,获取目标LOD层的层级分级索引;根据目标LOD层的层级分级索引,在 量化步长查找表中查询目标LOD层对应的目标量化步长,量化步长查找表包括LOD层与量化步长之间的对应关系。
其中,细节等级LOD的数量是点云的公共编码参数CTC设置的,该部分参数属于点云的属性参数集。本申请实施例中对于已划分好的多个LOD采用不同量化步长Qstep进行反量化,而不同LOD的划分以及量化步长的变化值可以是预设的。
实际应用中,层级较前的细节等级适配的量化步长可以较小,层级靠后的细节等级适配的量化步长可以较大,而层级较前的细节等级中点的数量较少,层级靠后的细节等级中点的数量相对较多,点数量较少的细节等级适配较小的量化步长,点数量较多的细节等级适配较大的量化步长,解码时整体处理时长不会过长。
本申请方案中,对当前处理的点P i的残差进行反量化时,量化步长Qstep i是与该点P i所处的细节等级的层级适配的,而非采用固定的量化步长进行反量化,有利于提高解码的效率。
在一个可能的示例中,根据当前解码块的层级分级索引、量化参数偏移参数确定细节等级的层级LOD对应的量化步长Qstep之前,方法还包括:确定当前解码块的量化参数优化使能标识的值;检测到量化参数优化使能标识的值为第一数值,确定当前解码块的层级分级索引和量化参数偏移参数;根据层级分级索引、量化参数偏移参数确定细节等级的层级LOD对应的量化步长Qstep。
具体实现中，在确定量化步长时，首先需要确定量化参数优化使能标识的值，量化参数优化使能标识的值可以取0或1，将其中一个数值记为第一数值，只有量化参数优化使能标识的值为该第一数值时，才确定层级分级索引和量化参数偏移参数，并进一步确定细节等级对应的量化步长Qstep。
可见,本示例中,先确定当前解码块的量化参数优化使能标识的值,在检测到量化参数优化使能标识的值为第一数值时,确定当前解码块的层级分级索引和量化参数偏移参数,根据层级分级索引、量化参数偏移参数确定细节等级的层级LOD对应的量化步长Qstep,有利于提高解码效率。
在一个可能的示例中,确定当前解码块的量化参数优化使能标识的值,包括:解析码流,获取当前解码块的参数集中的量化参数优化使能标识的值。
具体实现中,码流中可以包括参数集,在解析码流时,则可以获取当前解码块的参数集中的量化参数优化使能标识的值。
可见,本示例中,可通过解析码流,获取当前解码块的参数集中的量化参数优化使能标识的值,有利于提高解码效率。
在一个可能的示例中，参数集为当前解码块的属性参数集。
具体实现中,在解析码流时,则可以获取当前解码块的属性参数集中的量化参数优化使能标识的值。
可见,本示例中,可通过解析码流,获取当前解码块的属性参数集中的量化参数优化使能标识的值,有利于提高解码效率。
在一个可能的示例中,确定当前解码块的层级分级索引和量化参数偏移参数,包括:读取属性参数集中的层级分级索引和量化参数偏移参数。
具体实现中,可以直接读取属性参数集中的层级分级索引和量化参数偏移参数。
可见，本示例中，确定当前解码块的层级分级索引和量化参数偏移参数时，可以直接读取属性参数集中的层级分级索引和量化参数偏移参数，有利于提高解码效率。
在一个可能的示例中,确定适配细节等级的层级LODi的量化步长Qstepi,包括:根据层级LODi的层级分级索引查询量化步长查找表,获取层级LODi对应的量化步长Qstepi,量化步长查找表包括细节等级的层级LOD与量化步长Qstep之间的对应关系。
具体实现中，可以根据每个层级LODi对应的层级分级索引，去查询量化步长查找表，由于量化步长查找表包括细节等级的层级LOD与量化步长Qstep之间的对应关系，可以直接通过查表的方式确定出适配某一细节等级层级LODi的量化步长Qstepi。
可见,本示例中,根据层级LODi的层级分级索引查询量化步长查找表,获取层级LODi对应的量化步长Qstepi,量化步长查找表包括细节等级的层级LOD与量化步长Qstep之间的对应关系,不同层级的细节等级对应的量化步长不同,有利于提高量化步长确定的灵活性。
在一个可能的示例中,根据层级分级索引、量化参数偏移参数确定细节等级的层级LOD对应的量化步长Qstep,包括:确定当前编码块的编码参数中的量化参数Qp;根据层级分级索引和量化参数偏移参数确定每个层级LOD的量化参数偏移;根据量化参数Qp和每个层级LOD的量化参数偏移,确定每个层级LOD对应的量化步长Qstep。
具体实现中,量化参数Qp可以由属性参数集提供的QP参数确定。
确定出当前编码块的编码参数中的量化参数Qp之后,可以根据层级分级索引和量化参数偏移参数确定每个层级LOD的量化参数偏移,进而根据确定出的Qp以及每个层级LOD的量化参数偏移,对应确定出每个层级对应的量化步长。
可见,本示例中,先确定当前编码块的编码参数中的量化参数Qp,然后根据层级分级索引和量化参数偏移参数确定每个层级LOD的量化参数偏移,再根据量化参数Qp和每个层级LOD的量化参数偏移,确定每个层级LOD对应的量化步长Qstep,不同层级的细节等级对应的量化步长不同,有利于提高量化步长确定的灵活性。
在一个可能的示例中,根据层级分级索引和量化参数偏移参数确定每个层级LOD的量化参数偏移,包括:判断当前处理的细节等级的层级LOD是否属于层级分级索引所约束的层级范围,层级范围包括多个细节等级中的前N个层级,N为小于或等于第一阈值的正整数;若是,则根据量化参数偏移参数确定当前处理的细节等级的层级LOD的量化参数偏移的值j,j为大于0且小于或等于第二阈值的整数;若否,则确定当前处理的细节等级的层级LOD的量化参数偏移的值为0。
具体实现中，第一阈值可以为14，第二阈值可以为10。这是考虑到，细节等级总层数一般为11-14；公共测试环境CTC设置的5种码率的最小Qp值为10，所以j可以为大于0且小于或等于10的整数，以确保减去该数值后不出现负数。由于量化参数越大，其对应的量化步长越长。j为大于0且小于或等于10的整数时，前N个层级的细节等级对应的量化参数比后面层级的更小，且减去该数值后不出现负数。
其中,层级分级索引的取值范围为0-LOD的个数(正整数),假设层级分级索引为6,则LOD0-LOD5的量化步长(即前6层的细节等级的量化步长)由QP-QpShiftValue(即量化参数-量化偏移参数)转换而来。
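上段"前6层的量化参数由QP-QpShiftValue转换而来"的分层规则，可以用如下示意性的Python草图说明（非规范实现，函数名为本示例自拟）：

```python
def layer_qp(lod_index, qp, qp_shift, level_index=6):
    """返回某一LOD层实际使用的量化参数（示意）。

    层级分级索引为6时，LOD0-LOD5（前6层）的量化参数为 QP - QpShiftValue，
    其余层级直接使用QP。
    """
    if lod_index < level_index:
        return qp - qp_shift  # 前level_index层：QP - QpShiftValue
    return qp
```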
另外，如果将细节等级进一步划分成了多个组，例如层级分级索引为数组4 5 6，对应可将细节等级按照这三个划分位置分成4组，分别为LOD0-LOD3、LOD4、LOD5、以及LOD6及其以后的细节等级。不同组对应的量化参数偏移的值j可以是不同的。
由于j的值越大,则量化参数-量化参数偏移即Qp-j越小,对应的量化步长也就越小。考虑到量化步长较小,则解码的时间会相对较长,影响解码效率,因此,N的值可以取6,基本为细节等级总层数的一半,且前面层级中点的数量相对较少,采用小量化步长处理时不会增加过多的解码时间。
在对前N层即较低层级的细节等级中点的残差进行反量化时,可以取6为j的值,以获得一个较小的量化步长对前N层的细节等级中点的残差进行反量化,提高解码效率。
N可以为8，对前8层的细节等级中的点采用较小量化步长进行反量化，由于前8层细节等级中点的数量相对较少，有利于提高解码效率。
可见,本示例中,先判断当前处理的细节等级的层级LOD是否属于层级分级索引所约束的层级范围,若是,则根据量化参数偏移参数确定当前处理的细节等级的层级LOD的量化参数偏移的值j,j为大于0且小于或等于第二阈值的整数,若否,则确定当前处理的细节等级的层级LOD的量化参数偏移的值为0,对于较前层级的细节等级适配较小量化参数对应的量化步长,即较前层级的细节等级对应较小的量化步长,较后层级的细节等级对应的量化步长比前面层级的大,有利于提高解码效率。
在一个可能的示例中,若量化参数Qp大于或等于第三阈值,则j为第一预设数值;若量化参数Qp小于第三阈值,则j为第二预设数值。
其中,第三阈值可以为30,第一预设数值可以为10,第二预设数值可以为6。
也就是说,可以根据当前编码块对应的量化参数Qp本身的大小采用分段函数的形式确定j的取值,例如当量化参数Qp大于或等于30时,j为10,Qp小于30时,j为6。
可见,本示例中,若量化参数Qp大于或等于30,则j为10,若量化参数Qp小于30,则j为6,根据量化参数Qp本身的值确定j的值,有利于提高确定j的值的灵活性。
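上述按Qp大小分段确定j的规则，可以写成如下示意性的Python草图（阈值30与取值10、6均为文中示例，函数名为本示例自拟）：

```python
def qp_offset_j(qp, threshold=30, j_high=10, j_low=6):
    # 分段函数：Qp >= 30 时 j = 10，否则 j = 6
    return j_high if qp >= threshold else j_low
```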
在一个可能的示例中,根据层级分级索引和量化参数偏移参数确定每个层级LOD的量化参数偏移,包括:判断当前处理的细节等级的层级LOD所对应的层级组合,查询层级分级索引确定当前处理的细节等级的层级LOD的层级分级索引;根据处理的细节等级的层级LOD的层级分级索引查询量化参数偏移参数,确定对应的量化参数偏移。
具体实现中，对于存在多个层级分组（例如4个分组）的情况，量化参数偏移参数可以为一个数组，例如3 5 6，即第一至第三组的量化参数偏移分别为-3、-5和-6，第四组的量化参数偏移为0。若确定出的量化参数为QP，则实际第一至四组的量化参数分别为QP-3、QP-5、QP-6、QP。
其中,层级组合可以有多个,任意一个层级组合可以包括前后相邻的至少2个层级,该多个层级组合包括该第一层级组合和该第二层级组合,该第一层级组合中的层级在该第二层级组合中的层级之前,该第一层级组合所对应的量化参数小于该第二层级组合所对应的量化参数,不同层级组合对应不同的量化参数,有利于对不同层级的细节等级对应的量化步长做进一步细分,提高量化步长确定的灵活性。
其中,第一层级组合可以包括多个细节等级中的前两个层级,该第一层级组合所对应的量化步长为1。由于细节等级中前两个层级中点的数量相对最少,量化步长为1也不会对解码效率带来过大的影响。
其中，多个层级组合可以包括从前往后排序的第一层级组合、第二层级组合、第三层级组合和第四层级组合，且任意一个层级组合中包括前后相邻的至少2个层级；第一层级组合采用原始量化步长的1/sqrt(4)作为本层级的量化步长，原始量化步长是指根据量化参数Qp确定的量化步长；第二层级组合采用原始量化步长的1/sqrt(3)作为本层级的量化步长；第三层级组合采用原始量化步长的1/sqrt(2)作为本层级的量化步长；第四层级组合采用原始量化步长作为本层级的量化步长。
举例来说,若原始量化步长即根据当前编码块对应的量化参数Qp确定的量化步长为α(α为正整数),则第一层级组合、第二层级组合、第三层级组合和第四层级组合分别采用α/sqrt(4)、α/sqrt(3)、α/sqrt(2)、α作为该层级的量化步长,层级组合越靠后,对应的量化步长越大,同一层级组合中不同层级采用相同的量化步长。对不同的层级的细节等级对应的量化步长做进一步细分,提高量化步长确定的灵活性。
可见,本示例中,在确定量化参数偏移时,先判断出细节等级的层级所对应的的层级组合,然后在进一步确定出该层级组合中细节等级对应的层级分级索引,进而根据对应的层级分级索引查询量化参数偏移参数,确定出对应的量化参数偏移,不同层级组合对应不同的量化参数偏移,有利于对不同层级的细节等级对应的量化步长做进一步细分,提高量化步长确定的灵活性。
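上述按层级组合对原始量化步长做1/sqrt(n)缩放的规则，可以用如下示意性的Python草图表示（非规范实现，函数名为本示例自拟）：

```python
import math

def group_qstep(group_index, base_qstep):
    """返回第group_index个层级组合的量化步长（示意）。

    第一至第四层级组合分别采用原始量化步长的
    1/sqrt(4)、1/sqrt(3)、1/sqrt(2)、1 作为本组量化步长。
    """
    divisors = [math.sqrt(4), math.sqrt(3), math.sqrt(2), 1.0]
    return base_qstep / divisors[group_index]
```

层级组合越靠后，除数越小，量化步长越大，与文中"层级组合越靠后，对应的量化步长越大"一致。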
S606、确定当前点的第一量化权重。
例如,确定当前点的索引;将当前点的索引所对应的量化权重,确定为当前点的第一量化权重。
S607、根据目标量化步长和第一量化权重,对当前点的属性信息的量化残差值进行反量化,得到当前点的属性信息的重建残差值。
在一些实施例中,S607包括S6071和S6072:
S6071、根据当前点的第一量化权重,确定当前点的第二量化权重;
S6072、根据目标量化步长和第二量化权重,对当前点的属性信息的量化残差值进行反量化,得到重建残差值。
在一些示例中,第二量化权重小于或等于目标量化步长。
在一些实施例中,上述S6071包括:利用以下公式(10)确定当前点的第二量化权重:
effectiveQuantWeight=min(w(Q)>>k,Qstep)   (10)
其中,effectiveQuantWeight表示当前点的第二量化权重,w(Q)表示当前点的第一量化权重,k表示对w(Q)进行右移运算的位数,Qstep表示目标量化步长。
在一些实施例中,第二量化权重的数值等于2的整数次幂。
在一些实施例中,第一量化权重的数值不等于2的整数次幂,基于当前点的第一量化权重的数值,将最接近当前点的第一量化权重的2的整数次幂,确定为当前点的第二量化权重。
在一些实施例中,上述S6072包括:利用当前点的目标量化步长对量化残差值进行反量化,得到加权残差值;利用加权残差值除以第二量化权重,得到重建残差值。
在一些实施例中,上述S6072包括:利用以下公式(11)对量化残差值进行反量化:
attrResidualQuant1=(attrResidualQuant2×Qstep)/effectiveQuantWeight   (11)
其中，attrResidualQuant2表示量化残差值，attrResidualQuant1表示重建残差值，effectiveQuantWeight表示当前点的第二量化权重，Qstep表示当前点的目标量化步长，“×”表示乘法运算，“/”表示除法运算。
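公式(11)所述的反量化运算可以直译为如下示意性的Python草图（非规范实现，仅表达公式本身）：

```python
def dequantize(attr_residual_quant2, qstep, effective_quant_weight):
    # 公式(11)：attrResidualQuant1 = (attrResidualQuant2 × Qstep) / effectiveQuantWeight
    return (attr_residual_quant2 * qstep) / effective_quant_weight
```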
在一些实施例中,上述S6072包括:根据当前点的第二量化权重,对目标量化步长进行更新;根据更新后的量化步长,对当前点的属性信息的量化残差值进行反量化。
在一些实施例中,上述根据当前点的第二量化权重,对目标量化步长进行更新包括:利用以下公式(12)更新当前点的量化步长:
newQstep=⌈Qstep/effectiveQuantWeight⌉   (12)
其中，effectiveQuantWeight表示当前点的第二量化权重，newQstep表示当前点的基于第二量化权重更新后的量化步长，Qstep表示更新前的目标量化步长，“⌈·⌉”表示向上取整运算，“/”表示除法运算。
在一些实施例中,本申请实施例还包括:按照点云的编码顺序的倒序,通过遍历点云中的点,基于当前点的第一量化权重更新当前点的N个最邻近点的第一量化权重,N为大于0的整数。
在一些实施例中,点云中的每一个点的第一量化权重的初始值为预设值。
在一些实施例中,点云的LOD层中的前M个LOD层中的点的第一量化权重的初始值,大于剩余LOD层中的点的第一量化权重的初始值,M为正整数。
在一些实施例中,基于当前点的第一量化权重更新当前点的N个最邻近点的第一量化权重,包括:获取当前点对N个最邻近点中的每一个最邻近点的影响权重,影响权重与当前点和N个最邻近点的位置信息相关;基于当前点的量化权重和当前点对N个最邻近点中的每一个最邻近点的影响权重,更新N个最邻近点的第一量化权重。
在一些实施例中,点云的属性参数集包括当前点对N个最邻近点中的每一个最邻近点的影响权重;获取当前点对N个最邻近点中的每一个最邻近点的影响权重,包括:通过查询属性参数集,获取当前点对N个最邻近点中的每一个最邻近点的影响权重。
在一些实施例中,基于当前点的量化权重和当前点对N个最邻近点中的每一个最邻近点的影响权重,更新N个最邻近点的第一量化权重,包括:
基于以下公式(13)更新N个最邻近点的第一量化权重:
w(P i)←w(P i)+((α(P i,Q)×w(Q))>>k)   (13)
其中,Q表示当前点,P i表示距离Q第i近的邻居点,i=1,2,…,N,w(Q)表示当前点的第一量化权重,α(P i,Q)表示当前点对邻居点影响权重大小,w(P i)表示邻居点P i更新后的第一量化权重,k表示右移运算的位数,“>>”为右移运算,“←”为赋值运算,例如“A←B”表示将B的值赋给A。
可选的,α(P i,Q)的值随i的增大而减小。
可选的,点云的量化权重存储为数组,数组的维度和点云中点的个数相同。
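公式(13)所述"按编码顺序倒序遍历、由当前点向其N个最邻近点传播量化权重"的更新，可以用如下示意性的Python草图表示（非规范实现；邻居索引、影响权重α的取值均为本示例假设）：

```python
def update_neighbor_weights(weights, neighbors, q_index, alphas, k):
    """公式(13)的示意实现：w(Pi) ← w(Pi) + ((α(Pi,Q) × w(Q)) >> k)。

    weights: 所有点的第一量化权重数组（整数）
    neighbors: 当前点Q的N个最邻近点的索引
    alphas: 当前点对各邻近点的影响权重α(Pi,Q)，随i增大而减小
    k: 右移运算的位数
    """
    wq = weights[q_index]
    for pi, alpha in zip(neighbors, alphas):
        weights[pi] += (alpha * wq) >> k  # 原地累加到邻近点的权重上
    return weights
```

实际流程中，该更新按点云编码顺序的倒序逐点执行，使得在预测中被较多后续点引用的点累积出较大的第一量化权重。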
S608、根据当前点的属性信息的预测值和重建残差值,确定当前点的属性信息的重建值。
在上述实施例的基础上,在本申请一具体实施例中,点云的解码过程包括:解码端,依据第二量化方式计算得到点云中所有点的量化权重。判断当前点是否属于前7层LOD;如果当前点属于前7层LOD,读取码流中编码参数的QP值,然后加上该当前点所在的目标LOD层的DeltaQP值,将QP+DeltaQP转化为对应的目标量化步长Qstep,利用Qstep 对解码得到的量化残差进行反量化。令当前点的量化权重w(Q)=min(w(Q),Qstep),在反量化后,将反量化残差除以w(Q)去除加权影响。
如果当前点不属于前7层LOD,读取码流中编码参数的QP值,将QP转化为对应的目标量化步长Qstep,利用Qstep对解码得到的量化残差进行反量化。令当前点的量化权重w(Q)=min(w(Q),Qstep),在反量化后,将反量化残差除以w(Q)去除加权影响。
实施例7
若目标反量化方式包括第一反量化方式和第三反量化方式,则解码过程如图16所示。
图16为本申请一实施例提供的点云解码方法的流程示意图,如图16所示,包括:
S701、解码几何码流,得到点云中点的几何信息。
S702、根据点云中点的几何信息,将点云划分为一个或多个细节等级LOD层。其中每个LOD层包括至少一个细节表达层,每一层细节表达中包括至少一个点。
S703、根据当前点的几何信息,确定当前点所处的目标LOD层。
S704、确定当前点的属性信息的预测值。例如,根据多层细节表达层对点云中的点进行排序,得到LOD顺序,根据待解码点的几何信息,在该LOD顺序中获得待解码点的至少一个已解码的邻近点,根据至少一个已解码邻近点的属性信息的重建值,确定该待解码点的属性信息的预测值。
S705、判断当前点是否属于无损编码点,若否,则执行S706和S707,若是则执行S708。
S706、确定适配目标LOD层的目标量化步长。具体过程参照上述实施例。
S707、根据目标量化步长,对当前点的属性信息的量化残差值进行反量化,得到当前点的属性信息的重建残差值。
S708、对当前点的属性信息的残差值进行无损编解码,得到当前点的属性信息的重建残差值。
S709、根据当前点的属性信息的重建残差值和预测值,得到当前点的属性信息的重建值。
下面对本申请实施例涉及的无损解码过程进行介绍。
在一些实施例中,对点的属性信息的残差值进行无损解码也可以称为对点的属性信息的残差值不量化,或者称为对点的属性信息的残差值不进行伸缩操作(scaling)。对点的属性信息的残差值进行量化,也称为对点的属性信息的残差值进行伸缩操作(scaling)。
在一些实施例中，上述S705中判断当前点是否属于无损编码点包括：S7051和S7052。
S7051、解码点云的码流,得到第一信息,第一信息用于指示属性信息的残差值经过无损编码的点;
S7052、根据第一信息,确定当前点的属性信息的残差值是否经过无损编码。
对点云的属性码流进行解码,可以得到点云中点的属性信息的残差信息以及第一信息,该第一信息用于指示属性信息的残差值经过无损编码(或未量化)的点。
在一种示例中,第一信息包括属性信息的残差值经过无损编码的点的标识信息,例如,编码端采用不同的取点模式选取属性信息的残差值不量化的点,对应的第一信息可以包括属性信息的残差值未量化的点的取点模式的编号或索引。
在另一种示例中,第一信息包括属性信息的残差值经过无损编码的点的总数量。
在一些实施例中,上述S7052中根据第一信息所携带的信息不同,确定当前点的属性信息的残差信息是否经过无损编码的实现方式不同,在具体实现过程包括但不限于如下几种情况:
情况1,第一信息包括N,N为点云中属性信息的残差值经过无损编码的点的总个数。
可选的,N为2的整数倍。
可选的,N个点中每相邻的两个点之间的间隔相等。
此时,上述S7052包括:若确定当前点为N个点中的一个点,则确定当前点的属性信息的残差信息经过无损编码。例如根据点云中点的几何信息对点云中点进行排序,得到排序后的点云。针对排序后的点云,根据点的总个数以及N的取值,确定N个点中相邻两点之间的间隔,并根据该间隔,判断当前点是否为这N个点中的一个点。例如,上述间隔为10,当前点为排序后的点云中的第21个点,从点云中的第一个点开始,每间隔10个点的点为属性信息的残差值经过无损编码的点,依次为第1个点、第11个点、第21个点……,进而确定当前点为属性信息的残差信息经过无损编码的点。
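上段按固定间隔判断某点是否为无损编码点的做法，可以用如下示意性的Python草图表示（非规范实现；此处以0起始的点索引表达"第1、11、21……个点"这一示例）：

```python
def is_lossless_point(point_index, interval):
    # 示例：间隔为10时，第1、11、21…个点（即索引0、10、20…）为无损编码点
    return point_index % interval == 0
```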
情况2,若第一信息包括预设间隔,该预设间隔为点云中相邻两个无损编码点之间的间隔。
在一些实施例中,上述S7052包括:根据预设间隔,若确定当前点与前一个属性信息的残差信息经过无损编码的点之间的间隔等于预设间隔,则确定当前点的属性信息的残差信息经过无损编码。
情况3,若第一信息包括第一预设值,第一预设值用于指示将所包括的点的总个数小于或等于第一预设值的细节表达层划分为第一类细节表达层,将所包括的点的总个数大于第一预设值的细节表达层划分为第二类细节表达层。
在该情况下,S7052包括:根据第一预设值,获得多层细节表达层中所包括的点的总个数小于或等于第一预设值的至少一个第一类细节表达层,以及所包括的点的总个数大于第一预设值的至少一个第二类细节表达层;
若当前点属于第一类细节表达层,则确定当前点的属性信息的残差信息经过无损编码。
情况4,若第一信息包括M,M为一层第二类细节表达层中属性信息的残差值经过无损编码的点的数量,M为2的正整数倍。可选的,这M个点中相邻两个点之间的间隔相等。
在该情况下,上述S7052包括:若当前点为M个点中的一个点,则确定当前点的属性信息的残差信息经过无损编码。
情况5,至少一个第二类细节表达层包括L个第二类细节表达层,L为大于或等于2的正整数,若第一信息包括第一数量、第二数量、以及P个第二类细节表达层和Q个第二类细节表达层的划分信息。
在一种示例中,第一数量大于第二数量,例如第一数量为24,第二数量为8。
在一种示例中,第一数量为第二数量的正整数倍,例如第一数量为第二数量的2倍,例如第一数量为24,第二数量为12。或者,第一数量为第二数量的3倍,例如第一数量 为24,第二数量为8。
在一种示例中,第一数量个点中相邻两个点之间的间隔相等。
在一种示例中,第二数量个点中相邻两个点之间的间隔相等。
在该情况下,上述S7052包括:根据划分信息,对L个第二类细节表达层进行划分,得到P个第二类细节表达层和Q个第二类细节表达层;若确定当前点为P个第二类细节表达层中属性信息的残差值经过无损编码的第一数量个点中的一个点,则确定当前点的属性信息的残差信息经过无损编码;若确定当前点为Q个第二类细节表达层中属性信息的残差值经过无损编码的第二数量个点中的一个点,则确定当前点的属性信息的残差信息经过无损编码;
其中,L为大于或等于2的正整数,P、Q为正整数,且P与Q之和小于或等于L,P个第二类细节表达层与Q个第二类细节表达层不重叠,第一数量与第二数量不同。
在一种示例中,P个第二类细节表达层为L个第二类细节表达层中的前P个第二类细节表达层。
在一种示例中,Q个第二类细节表达层为L个第二类细节表达层中的后Q个第二类细节表达层。
在一种示例中,P个第二类细节表达层中的最后一个第二类细节表达层与Q个第二类细节表达层的第一个第二类细节表达层相邻。
在该示例中,上述划分信息可以包括Q个第二类细节表达层的第一个第二类细节表达层的标识信息,或者,包括P个第二类细节表达层的最后一个第二类细节表达层的标识信息。
在一种示例中,第一信息还包括:第二类细节表达层中第一个属性信息的残差值经过无损编码的点的标识信息。
在一种示例中,第一信息还包括:属性信息的残差值经过无损编码的点的标识信息。
在一些实施例中，若确定当前点的属性信息的残差信息经过无损编码，可以通过如下方式对当前点的属性信息的残差值不进行反量化：
方式一,在对点云中的点的属性信息的残差信息进行反量化的过程中,跳过该当前点。
方式二,将该当前点的反量化步长设置为1。
方式三，将该当前点的量化参数QP设置为目标值，目标值为反量化步长为1时对应的QP值。
在一些实施例中,若当前点为无损编码的点,方法还包括:
根据如下公式(14),确定当前点的属性信息的重建值:
reconstructedColor=attrResidual+attrPredValue     (14)
其中，reconstructedColor为当前点的属性信息的重建值，attrResidual为当前点的属性信息的残差值，attrPredValue为当前点的属性信息的预测值。
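无损点跳过反量化、有损点先反量化再重建这一分支，可以用如下示意性的Python草图表示（非规范实现；有损分支中结合了第二反量化方式的权重去加权，参数均为本示例假设）：

```python
def reconstruct_attr(residual, pred, is_lossless, qstep=1, weight=1):
    """返回当前点属性信息的重建值（示意）。"""
    if is_lossless:
        # 无损编码的点：残差不反量化，按公式(14)直接重建
        return residual + pred  # reconstructedColor = attrResidual + attrPredValue
    # 有损点：先反量化（并除以有效量化权重），再与预测值相加
    effective_w = min(weight, qstep)
    return residual * qstep / effective_w + pred
```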
在上述实施例的基础上,在本申请的一具体实施例中,点云解码过程包括:解码器判断当前点是否属于前7层LOD并且判断该当前点是否是不进行量化的点。
如果当前点属于前7层LOD，并且该当前点不是不量化的点，读取码流中编码参数的QP值，然后加上该层LOD的DeltaQP值，将QP+DeltaQP转化为对应的Qstep；如果该当前点是不量化的点，则此点的Qstep=1（即不需要量化），利用Qstep对解码得到的量化残差进行反量化；
如果当前点不属于前7层LOD，并且该当前点不是不量化的点，读取码流中编码参数的QP值，将QP转化为对应的Qstep；如果该当前点是不量化的点，则此点的Qstep=1（即不需要量化）。利用Qstep对解码得到的量化残差进行反量化。
实施例8
若目标反量化方式包括第一反量化方式、第二反量化方式和第三反量化方式，则解码过程如图17所示。
图17为本申请一实施例提供的点云解码方法的流程示意图,如图17所示,包括:
S801、解码几何码流,得到点云中点的几何信息。
S802、根据点云中点的几何信息,将点云划分为一个或多个细节等级LOD层。其中每个LOD层包括至少一个细节表达层,每一层细节表达中包括至少一个点。
S803、根据当前点的几何信息,确定当前点所处的目标LOD层。
S804、确定当前点的属性信息的预测值。例如,根据多层细节表达层对点云中的点进行排序,得到LOD顺序,根据待解码点的几何信息,在该LOD顺序中获得待解码点的至少一个已解码的邻近点,根据至少一个已解码邻近点的属性信息的重建值,确定该待解码点的属性信息的预测值。
S805、判断当前点是否属于无损编码点，若否，则执行S806至S808，若是则执行S809。
S806、确定适配目标LOD层的目标量化步长;参照上述S605的描述。
S807、确定当前点的第一量化权重;参照上述S606的描述。
S808、根据目标量化步长和第一量化权重,对当前点的属性信息的量化残差值进行反量化,得到当前点的属性信息的重建残差值;参照上述S607的描述。
S809、对当前点的属性信息的残差值进行无损编解码,得到当前点的属性信息的重建残差值。
S810、根据当前点的属性信息的重建残差值和预测值,得到当前点的属性信息的重建值。
在本申请的一具体实施例中,点云解码过程包括:解码端计算得到点云中所有点的量化权重。判断当前点是否属于前7层LOD并且判断该当前点是否是不进行量化的点:
如果当前点属于前7层LOD,并且该当前点不是不量化的点,读取码流中编码参数的QP值,然后加上该层LOD的DeltaQP值,将QP+DeltaQP转化为对应的Qstep,利用Qstep对解码得到的量化残差进行反量化。令当前点的量化权重w(Q)=min(w(Q),Qstep),在反量化后,将反量化残差除以w(Q)去除加权影响。如果该当前点是不量化的点,将当前点Qstep=1(即此点不需要量化),利用Qstep对解码得到的量化残差进行反量化。
如果当前点不属于前7层LOD,并且该当前点不是不量化的点,读取码流中编码参数的QP值,将QP转化为对应的Qstep,利用Qstep对解码得到的量化残差进行反量化。令当前点的量化权重w(Q)=min(w(Q),Qstep),在反量化后,将反量化残差除以w(Q)去除加权影响;如果当前点是不量化的点,执行当前点Qstep=1(即此点不需要量化)。
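上述结合第一、第二、第三反量化方式的逐点解码流程，可以汇总为如下示意性的Python草图（非规范实现；QP到Qstep的指数换算2^(QP/6)为假设性近似）：

```python
def decode_point_residual(quant_residual, lod_index, is_lossless,
                          qp, delta_qp, w, n_front=7):
    """返回当前点属性信息的重建残差值（示意）。"""
    if is_lossless:
        # 不量化的点：Qstep = 1，量化残差即重建残差
        return quant_residual
    # 第一反量化方式：前7层LOD使用 QP + DeltaQP，其余层使用 QP
    qstep = 2 ** ((qp + delta_qp) / 6.0) if lod_index < n_front else 2 ** (qp / 6.0)
    # 第二反量化方式：w(Q) = min(w(Q), Qstep)，反量化后除以w(Q)去除加权影响
    wq = min(w, qstep)
    return quant_residual * qstep / wq
```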
应理解,图6至图17仅为本申请的示例,不应理解为对本申请的限制。
以上结合附图详细描述了本申请的优选实施方式,但是,本申请并不限于上述实施方式中的具体细节,在本申请的技术构思范围内,可以对本申请的技术方案进行多种简单变型,这些简单变型均属于本申请的保护范围。例如,在上述具体实施方式中所描述的各个具体技术特征,在不矛盾的情况下,可以通过任何合适的方式进行组合,为了避免不必要的重复,本申请对各种可能的组合方式不再另行说明。又例如,本申请的各种不同的实施方式之间也可以进行任意组合,只要其不违背本申请的思想,其同样应当视为本申请所公开的内容。
还应理解,在本申请的各种方法实施例中,上述各过程的序号的大小并不意味着执行顺序的先后,各过程的执行顺序应以其功能和内在逻辑确定,而不应对本申请实施例的实施过程构成任何限定。另外,本申请实施例中,术语“和/或”,仅仅是一种描述关联对象的关联关系,表示可以存在三种关系。具体地,A和/或B可以表示:单独存在A,同时存在A和B,单独存在B这三种情况。另外,本文中字符“/”,一般表示前后关联对象是一种“或”的关系。
上文结合图6至图17,详细描述了本申请的方法实施例,下文结合图18至图20,详细描述本申请的装置实施例。
图18是本申请实施例提供的点云编码器10的示意性框图。
如图18所示,点云编码器10包括:
获取单元11,用于获取点云中当前点的属性信息;
处理单元12,用于对所述当前点的属性信息进行处理,得到所述当前点的属性信息的残差值;
量化单元13,用于使用目标量化方式,对所述当前点的属性信息的残差值进行量化,得到所述当前点的属性信息的量化残差值;
其中,所述目标量化方式包括如下至少两种量化方式:第一量化方式、第二量化方式和第三量化方式,所述第一量化方式为对所述点云中至少一个点的量化参数设定量化参数增量,所述第二量化方式为对所述点云中点的残差值进行加权处理,所述第三量化方式为对所述点云中至少一个点的属性信息的残差值进行无损编码。
在一些实施例中,获取单元11还用于获取所述点云中点的几何信息;处理单元12用于根据所述点云中点的几何信息,将所述点云划分为一个或多个细节等级LOD层。
在一些实施例中,所述目标量化方式包括所述第一量化方式和所述第二量化方式,量化单元13,具体用于根据所述当前点的几何信息,确定所述当前点所处的目标LOD层;确定适配所述目标LOD层的目标量化步长;确定所述当前点的第一量化权重;根据所述目标量化步长和所述第一量化权重,对所述当前点的属性信息的残差值进行量化,得到所述当前点的属性信息的量化残差值。
在一些实施例中,所述目标量化方式包括所述第一量化方式和所述第三量化方式,量化单元13,具体用于根据所述当前点的几何信息,确定所述当前点所处的目标LOD层;若确定所述当前点属于有损编码的点,则确定适配所述目标LOD层的目标量化步长;根据所述目标量化步长,对所述当前点的属性信息的残差值进行量化,得到所述当前点的属性信息的量化残差值。
在一些实施例中,所述目标量化方式包括所述第一量化方式、所述第二量化方式和所述第三量化方式,量化单元13,具体用于根据所述当前点的几何信息,确定所述当前点所处的目标LOD层;若确定所述当前点属于有损编码的点,则确定适配所述目标LOD层的目标量化步长,以及确定所述当前点的第一量化权重;根据所述目标量化步长和所述第一量化权重,对所述当前点的属性信息的残差值进行量化,得到所述当前点的属性信息的量化残差值。
在一些实施例中,量化单元13,还用于若确定所述当前点属于无损编码的点,则对当前点的属性信息的残差值进行无损编码。
在一些实施例中,量化单元13,具体用于获取所述目标LOD层的层级分级索引;根据所述目标LOD层的层级分级索引,在量化步长查找表中查询所述目标LOD层对应的目标量化步长,所述量化步长查找表包括LOD层与量化步长之间的对应关系。
在一些实施例中,量化单元13,具体用于确定所述当前点的编码参数中的量化参数;获取所述目标LOD层的层级分级索引,并根据所述目标LOD层的层级分级索引,确定所述目标LOD层的量化参数增量;根据所述量化参数和所述目标LOD层的量化参数增量,确定所述目标LOD层对应的目标量化步长。
在一些实施例中,量化单元13,具体用于若所述目标LOD层属于所述点云的前N个LOD层,则确定所述目标LOD层的量化参数增量为j,所述N为小于或等于第一阈值的正整数,所述j为大于0且小于或等于第二阈值的整数;若所述目标LOD层不属于所述点云的前N个LOD层,则确定所述目标LOD层的量化参数增量为0。
可选的,若所述量化参数大于或等于第三阈值,则所述j为第一预设数值;
若所述量化参数小于所述第三阈值,则所述j为第二预设数值。
在一些实施例中,量化单元13,具体用于确定所述当前点的索引;将所述当前点的索引所对应的量化权重,确定为所述当前点的第一量化权重。
在一些实施例中,量化单元13,具体用于根据所述当前点的第一量化权重,确定所述当前点的第二量化权重;根据所述目标量化步长和所述第二量化权重,对所述当前点的属性信息的残差值进行量化,得到所述量化残差值。
可选的,所述第二量化权重小于或等于所述目标量化步长。
在一些实施例中,量化单元13,具体用于利用以下公式确定所述当前点的第二量化权重:
effectiveQuantWeight=min(w(Q)>>k,Qstep);
其中,effectiveQuantWeight表示所述当前点的第二量化权重,w(Q)表示所述当前点的第一量化权重,k表示对所述w(Q)进行右移运算的位数,Qstep表示所述目标量化步长。
可选的,所述第二量化权重的数值等于2的整数次幂。
可选的,所述第一量化权重的数值不等于2的整数次幂,量化单元13,具体用于基于所述当前点的第一量化权重的数值,将最接近所述当前点的第一量化权重的2的整数次幂,确定为所述当前点的第二量化权重。
在一些实施例中,量化单元13,具体用于利用所述第二量化权重乘以所述当前点的属性信息的残差值,得到加权残差值;利用所述目标量化步长对所述加权残差值进行量化,得到所述当前点的属性信息的量化残差值。
在一些实施例中,量化单元13,具体用于利用以下公式对所述当前点的属性信息的残差值进行量化:
attrResidualQuant2=attrResidualQuant1×effectiveQuantWeight/Qstep;
其中,所述attrResidualQuant2表示所述当前点的属性信息的量化残差值,attrResidualQuant1表示所述当前点的属性信息的残差值,所述effectiveQuantWeight表示所述当前点的第二量化权重,所述Qstep表示所述目标量化步长。
在一些实施例中,量化单元13,具体用于根据所述当前点的第二量化权重,对所述目标量化步长进行更新;根据所述更新后的量化步长,对所述当前点的属性信息的残差值进行量化。
在一些实施例中,量化单元13,具体用于利用以下公式更新所述目标量化步长:
newQstep=⌈Qstep/effectiveQuantWeight⌉；
其中,所述effectiveQuantWeight表示所述当前点的第二量化权重,所述newQstep表示所述当前点的基于所述目标量化步长更新后的量化步长,所述Qstep表示所述目标量化步长。
在一些实施例中,量化单元13,还用于按照所述点云的编码顺序的倒序,通过遍历所述点云中的点,基于当前点的第一量化权重更新所述当前点的N个最邻近点的第一量化权重,N为大于0的整数。
可选的,所述点云中的每一个点的第一量化权重的初始值为预设值。
可选的,所述点云的LOD层中的前M个LOD层中的点的第一量化权重的初始值,大于剩余LOD层中的点的第一量化权重的初始值,所述M为正整数。
在一些实施例中,量化单元13,具体用于获取所述当前点对所述N个最邻近点中的每一个最邻近点的影响权重,所述影响权重与所述当前点和所述N个最邻近点的位置信息相关;基于所述当前点的量化权重和所述当前点对所述N个最邻近点中的每一个最邻近点的影响权重,更新所述N个最邻近点的第一量化权重。
在一些实施例中,所述点云的属性参数集包括所述当前点对所述N个最邻近点中的每一个最邻近点的影响权重;量化单元13,具体用于通过查询所述属性参数集,获取所述当前点对所述N个最邻近点中的每一个最邻近点的影响权重。
在一些实施例中,量化单元13,具体用于基于以下公式更新所述N个最邻近点的第一量化权重:
w(P i)←w(P i)+((α(P i,Q)×w(Q))>>k);
其中,Q表示所述当前点,P i表示距离所述Q第i近的邻居点,i=1,2,…,N,w(Q)表示所述当前点的第一量化权重,α(P i,Q)表示所述当前点对邻居点影响权重大小,w(P i)表示邻居点P i更新后的第一量化权重,k表示右移运算的位数。
可选的,所述α(P i,Q)的值随i的增大而减小。
可选的,所述点云的量化权重存储为数组,所述数组的维度和所述点云中点的个数相同。
在一些实施例中,编码器还包括编码单元14,编码单元14用于对所述点云中至少一个点的属性信息的残差值进行无损编码。
可选的,所述至少一个点包括N个点,所述N为2的整数倍。
可选的,所述至少一个点包括N个点,所述N个点中每相邻的两个点之间的间隔相等。
在一些实施例中,编码单元14具体用于对所述多层细节表达层中的至少一层细节表达层中的至少一个点的属性残差值进行无损编码。
在一些实施例中,编码单元14具体用于获得所述多层细节表达层中所包括的点的总个数小于或等于第一预设值的至少一个第一类细节表达层,以及所包括的点的总个数大于所述第一预设值的至少一个第二类细节表达层;对所述第一类细节表达层中的所有点的属性信息的残差值进行无损编码;对所述第二类细节表达层中的至少一个点的属性信息的残差值进行无损编码。
在一些实施例中,编码单元14具体用于对所述第二类细节表达层中的M个点的属性信息的残差值进行无损编码,所述M为2的正整数倍。
在一些实施例中,所述至少一层第二类细节表达层包括L个第二类细节表达层,所述L为大于或等于2的正整数,编码单元14具体用于对P个第二类细节表达层中每个所述第二类细节表达层中的第一数量个点的属性信息的残差值进行无损编码;对Q个第二类细节表达层中每个所述第二类细节表达层中的第二数量个点的属性信息的残差值进行无损编码;
其中,所述P、Q为正整数,且所述P与所述Q之和小于或等于所述L,所述P个第二类细节表达层与所述Q个第二类细节表达层不重叠,所述第一数量与所述第二数量不同。
可选的,所述P个第二类细节表达层为所述L个第二类细节表达层中的前P个第二类细节表达层。
可选的,所述Q个第二类细节表达层为所述L个第二类细节表达层中的后Q个第二类细节表达层。
可选的,所述P个第二类细节表达层中的最后一个第二类细节表达层与所述Q个第二类细节表达层的第一个第二类细节表达层相邻。
可选的,所述第一数量大于所述第二数量。
可选的,所述第一数量为所述第二数量的正整数倍。
可选的,所述第一数量个点中相邻两个点之间的间隔相等。
可选的,所述第二数量个点中相邻两个点之间的间隔相等。
在一些实施例中,编码单元14还用于根据当前点的属性信息的残差值和属性信息的预测值,确定所述当前点的属性信息的重建值。
在一些实施例中,编码单元14具体用于根据如下公式,确定所述当前点的属性信息的重建值:
reconstructedColor=attrResidual+attrPredValue,
其中，所述reconstructedColor为所述当前点的属性信息的重建值，所述attrResidual为所述当前点的属性信息的残差值，所述attrPredValue为所述当前点的属性信息的预测值。
在一些实施例中,编码单元14还用于生成码流,所述码流包括第一信息,所述第一信息用于指示属性信息的残差值进行无损编码的点。
可选的,所述第一信息包括所述属性信息的残差值进行无损编码的点的标识信息。
可选的,所述第一信息包括所述属性信息的残差值进行无损编码的点的数量。
可选的,所述第一信息包括所述第一数量、所述第二数量,以及所述P个第二类细节表达层和所述Q个第二类细节表达层的划分信息。
可选的,若所述P个第二类细节表达层中的最后一个第二类细节表达层与所述Q个第二类细节表达层的第一个第二类细节表达层相邻,则所述划分信息还包括所述Q个第二类细节表达层的第一个第二类细节表达层的标识信息,或者,包括所述P个第二类细节表达层的最后一个第二类细节表达层的标识信息。
可选的,所述第一信息还包括:所述第二类细节表达层中第一个属性信息的残差值进行无损编码的点的标识信息。
在一些实施例中,编码单元14具体用于在对所述点云中的点的属性信息的残差值进行量化的过程中,跳过所述至少一个对属性信息的残差值进行无损编码的点;或者,
将所述至少一个对属性信息的残差值进行无损编码的点的量化步长设置为1;或者,
将所述至少一个对属性信息的残差值进行无损编码的点的量化参数QP设置为目标值,所述目标值为量化步长为1时对应的QP值。
应理解,装置实施例与方法实施例可以相互对应,类似的描述可以参照方法实施例。为避免重复,此处不再赘述。具体地,图18所示的点云编码器10可以执行本申请实施例的方法,并且点云编码器10中的各个单元的前述和其它操作和/或功能分别为了实现方法100至400等各个方法中的相应流程,为了简洁,在此不再赘述。
图19是本申请实施例提供的点云解码器20的示意性框图。
如图19所示,该点云解码器20可包括:
解码单元21,用于对点云的码流进行解析,得到所述点云的当前点的属性信息的量化残差值;
反量化单元22,用于用目标反量化方式,对所述当前点的属性信息的量化残差值进行反量化,得到所述当前点的属性信息的重建残差值;
其中,所述目标反量化方式包括如下至少两种反量化方式:第一反量化方式、第二反量化方式和第三反量化方式,所述第一反量化方式为对所述点云中至少一个点的反量化参数设定反量化参数增量,所述第二反量化方式为对所述点云中点的残差值进行去加权处理,所述第三反量化方式为对所述点云中至少一个点的属性信息的残差值进行无损解码。
在一些实施例中,解码单元21,用于获取所述点云中点的几何信息;根据所述点云中点的几何信息,将所述点云划分为一个或多个细节等级LOD层。
在一些实施例中,所述目标反量化方式包括所述第一反量化方式和所述第二反量化方式,反量化单元22,具体用于根据所述当前点的几何信息,确定所述当前点所处的目标LOD层;确定适配所述目标LOD层的目标量化步长;确定所述当前点的第一量化权重;根据所述目标量化步长和所述第一量化权重,对所述当前点的属性信息的量化残差值进行反量化。
在一些实施例中,所述目标反量化方式包括所述第一反量化方式和所述第三反量化 方式,反量化单元22,具体用于根据所述当前点的几何信息,确定所述当前点所处的目标LOD层;若确定所述当前点属于有损编码的点,则确定适配所述目标LOD层的目标量化步长;根据所述目标量化步长,对所述当前点的属性信息的量化残差值进行反量化。
在一些实施例中,所述目标反量化方式包括所述第一反量化方式、所述第二反量化方式和所述第三反量化方式,反量化单元22,具体用于根据所述当前点的几何信息,确定所述当前点所处的目标LOD层;若确定所述当前点属于有损编码的点,则确定适配所述目标LOD层的目标量化步长,以及确定所述当前点的第一量化权重;根据所述目标量化步长和所述第一量化权重,对所述当前点的属性信息的量化残差值进行反量化。
在一些实施例中,解码单元21,还用于若确定所述当前点属于无损编码的点,则对当前点的属性信息的残差值进行无损编解码。
在一些实施例中,反量化单元22,具体用于获取所述目标LOD层的层级分级索引;根据所述目标LOD层的层级分级索引,在量化步长查找表中查询所述目标LOD层对应的目标量化步长,所述量化步长查找表包括LOD层与量化步长之间的对应关系。
在一些实施例中,反量化单元22,具体用于解码码流,得到所述当前点的编码参数中的量化参数;获取所述目标LOD层的层级分级索引,并根据所述目标LOD层的层级分级索引,确定所述目标LOD层的量化参数增量;根据所述量化参数和所述目标LOD层的量化参数增量,确定所述目标LOD层对应的目标量化步长。
在一些实施例中,反量化单元22,具体用于若所述目标LOD层属于所述点云的前N个LOD层,则确定所述目标LOD层的量化参数增量为j,所述N为小于或等于第一阈值的正整数,所述j为大于0且小于或等于第二阈值的整数;若所述目标LOD层不属于所述点云的前N个LOD层,则确定所述目标LOD层的量化参数增量为0。
可选的,若所述量化参数大于或等于第三阈值,则所述j为第一预设数值;若所述量化参数小于所述第三阈值,则所述j为第二预设数值。
在一些实施例中,反量化单元22,具体用于确定所述当前点的索引;将所述当前点的索引所对应的量化权重,确定为所述当前点的第一量化权重。
在一些实施例中,反量化单元22,具体用于根据所述当前点的第一量化权重,确定所述当前点的第二量化权重;根据所述目标量化步长和所述第二量化权重,对所述当前点的属性信息的量化残差值进行反量化,得到所述重建残差值。
可选的,所述第二量化权重小于或等于所述目标量化步长。
在一些实施例中,反量化单元22,具体用于利用以下公式确定所述当前点的第二量化权重:
effectiveQuantWeight=min(w(Q)>>k,Qstep);
其中,effectiveQuantWeight表示所述当前点的第二量化权重,w(Q)表示所述当前点的第一量化权重,k表示对所述w(Q)进行右移运算的位数,Qstep表示所述目标量化步长。
可选的,所述第二量化权重的数值等于2的整数次幂。
可选的,所述第一量化权重的数值不等于2的整数次幂,在一些实施例中,反量化单元22,具体用于基于所述当前点的第一量化权重的数值,将最接近所述当前点的第一量化权重的2的整数次幂,确定为所述当前点的第二量化权重。
在一些实施例中,反量化单元22,具体用于利用所述当前点的目标量化步长对所述 量化残差值进行反量化,得到加权残差值;
利用所述加权残差值除以所述第二量化权重,得到所述重建残差值。
在一些实施例中,反量化单元22,具体用于利用以下公式对所述量化残差值进行反量化:
attrResidualQuant1=(attrResidualQuant2×Qstep)/effectiveQuantWeight;
其中,attrResidualQuant2表示所述量化残差值,attrResidualQuant1表示所述重建残差值,effectiveQuantWeight表示所述当前点的第二量化权重,Qstep表示所述当前点的目标量化步长。
在一些实施例中,反量化单元22,具体用于根据所述当前点的第二量化权重,对所述目标量化步长进行更新;根据所述更新后的量化步长,对所述当前点的属性信息的量化残差值进行反量化。
在一些实施例中,反量化单元22,具体用于利用以下公式更新所述当前点的量化步长:
newQstep=⌈Qstep/effectiveQuantWeight⌉；
其中，effectiveQuantWeight表示所述当前点的第二量化权重，newQstep表示所述当前点的基于第二量化权重更新后的量化步长，Qstep表示所述当前点的更新前的量化步长。
在一些实施例中,解码单元21,还用于按照所述点云的编码顺序的倒序,通过遍历所述点云中的点,基于当前点的第一量化权重更新所述当前点的N个最邻近点的第一量化权重,N为大于0的整数。
可选的,所述点云中的每一个点的第一量化权重的初始值为预设值。
可选的,所述点云的LOD层中的前M个LOD层中的点的第一量化权重的初始值,大于剩余LOD层中的点的第一量化权重的初始值,所述M为正整数。
在一些实施例中,解码单元21,具体用于获取所述当前点对所述N个最邻近点中的每一个最邻近点的影响权重,所述影响权重与所述当前点和所述N个最邻近点的位置信息相关;基于所述当前点的量化权重和所述当前点对所述N个最邻近点中的每一个最邻近点的影响权重,更新所述N个最邻近点的第一量化权重。
在一些实施例中,所述点云的属性参数集包括所述当前点对所述N个最邻近点中的每一个最邻近点的影响权重;解码单元21,还用于通过查询所述属性参数集,获取所述当前点对所述N个最邻近点中的每一个最邻近点的影响权重。
在一些实施例中,解码单元21,还用于基于以下公式更新所述N个最邻近点的第一量化权重:
w(P i)←w(P i)+((α(P i,Q)×w(Q))>>k);
其中,Q表示所述当前点,P i表示距离所述Q第i近的邻居点,i=1,2,…,N,w(Q)表示所述当前点的第一量化权重,α(P i,Q)表示所述当前点对邻居点影响权重大小,w(P i)表示邻居点P i更新后的第一量化权重,k表示右移运算的位数。
可选的,所述α(P i,Q)的值随i的增大而减小。
可选的,所述点云的量化权重存储为数组,所述数组的维度和所述点云中点的个数相同。
在一些实施例中，解码单元21，还用于解码所述点云的码流，得到第一信息，所述第一信息用于指示属性信息的残差值经过无损编码的点；根据所述第一信息，确定所述当前点的属性信息的残差值是否经过无损编码。
可选的,所述第一信息包括N,所述N为所述点云中属性信息的残差值经过无损编码的点的总个数。
可选的,所述N为2的整数倍。
可选的,所述N个点中每相邻的两个点之间的间隔相等。
在一些实施例中,解码单元21,具体用于若确定所述当前点为所述N个点中的一个点,则确定所述当前点的属性信息的残差信息经过无损编码。
在一些实施例中,若所述第一信息包括预设间隔,解码单元21,具体用于根据所述预设间隔,若确定所述当前点与前一个属性信息的残差信息经过无损编码的点之间的间隔等于所述预设间隔,则确定所述当前点的属性信息的残差信息经过无损编码。
在一些实施例中,若所述第一信息包括第一预设值,解码单元21,具体用于根据所述第一预设值,获得所述多层细节表达层中所包括的点的总个数小于或等于所述第一预设值的至少一个第一类细节表达层,以及所包括的点的总个数大于所述第一预设值的至少一个第二类细节表达层;若所述当前点属于所述第一类细节表达层,则确定所述当前点的属性信息的残差信息经过无损编码。
在一些实施例中,若所述第一信息包括M,所述M为一层所述第二类细节表达层中属性信息的残差值经过无损编码的点的数量,所述M为2的正整数倍,解码单元21,具体用于若所述当前点为所述M个点中的一个点,则确定所述当前点的属性信息的残差信息经过无损编码。
在一些实施例中,所述至少一个第二类细节表达层包括L个第二类细节表达层,所述L为大于或等于2的正整数,若所述第一信息包括第一数量、第二数量、以及P个第二类细节表达层和Q个第二类细节表达层的划分信息,解码单元21,具体用于根据所述划分信息,对所述L个第二类细节表达层进行划分,得到所述P个第二类细节表达层和所述Q个第二类细节表达层;若确定所述当前点为所述P个第二类细节表达层中属性信息的残差值经过无损编码的第一数量个点中的一个点,则确定所述当前点的属性信息的残差信息经过无损编码;若确定所述当前点为所述Q个第二类细节表达层中属性信息的残差值经过无损编码的第二数量个点中的一个点,则确定所述当前点的属性信息的残差信息经过无损编码;其中,所述P、Q为正整数,且所述P与所述Q之和小于或等于所述L,所述P个第二类细节表达层与所述Q个第二类细节表达层不重叠,所述第一数量与所述第二数量不同。
可选的,所述P个第二类细节表达层为所述L个第二类细节表达层中的前P个第二类细节表达层。
可选的,所述Q个第二类细节表达层为所述L个第二类细节表达层中的后Q个第二类细节表达层。
可选的,所述P个第二类细节表达层中的最后一个第二类细节表达层与所述Q个第二类细节表达层的第一个第二类细节表达层相邻。
可选的,若所述P个第二类细节表达层中的最后一个第二类细节表达层与所述Q个 第二类细节表达层中的第一个第二类细节表达层相邻,则所述划分信息还包括所述Q个第二类细节表达层的第一个第二类细节表达层的标识信息,或者,包括所述P个第二类细节表达层的最后一个第二类细节表达层的标识信息。
可选的,所述第一信息还包括:所述第二类细节表达层中第一个属性信息的残差值经过无损编码的点的标识信息。
可选的,所述第一信息还包括:属性信息的残差值经过无损编码的点的标识信息。
可选的,所述第一数量大于所述第二数量。
可选的,所述第一数量为所述第二数量的正整数倍。
可选的,所述第一数量个点中相邻两个点之间的间隔相等。
可选的,所述第二数量个点中相邻两个点之间的间隔相等。
在一些实施例中,解码单元21,具体用于根据如下公式,确定所述当前点的属性信息的重建值:
reconstructedColor=attrResidual+attrPredValue,
其中，所述reconstructedColor为所述当前点的属性信息的重建值，所述attrResidual为所述当前点的属性信息的残差值，所述attrPredValue为所述当前点的属性信息的预测值。
应理解,装置实施例与方法实施例可以相互对应,类似的描述可以参照方法实施例。为避免重复,此处不再赘述。具体地,图19所示的点云解码器20可以对应于执行本申请实施例的方法500、600和/或700中的相应主体,并且点云解码器20中的各个单元的前述和其它操作和/或功能分别为了实现方法500、600和/或700等各个方法中的相应流程,为了简洁,在此不再赘述。
上文中结合附图从功能单元的角度描述了本申请实施例的装置和系统。应理解,该功能单元可以通过硬件形式实现,也可以通过软件形式的指令实现,还可以通过硬件和软件单元组合实现。具体地,本申请实施例中的方法实施例的各步骤可以通过处理器中的硬件的集成逻辑电路和/或软件形式的指令完成,结合本申请实施例公开的方法的步骤可以直接体现为硬件译码处理器执行完成,或者用译码处理器中的硬件及软件单元组合执行完成。可选地,软件单元可以位于随机存储器,闪存、只读存储器、可编程只读存储器、电可擦写可编程存储器、寄存器等本领域的成熟的存储介质中。该存储介质位于存储器,处理器读取存储器中的信息,结合其硬件完成上述方法实施例中的步骤。
图20是本申请实施例提供的电子设备30的示意性框图。
如图20所示,该电子设备30可以为本申请实施例所述的点云编码器,或者点云解码器,该电子设备30可包括:
存储器33和处理器32,该存储器33用于存储计算机程序34,并将该程序代码34传输给该处理器32。换言之,该处理器32可以从存储器33中调用并运行计算机程序34,以实现本申请实施例中的方法。
例如,该处理器32可用于根据该计算机程序34中的指令执行上述方法200中的步骤。
在本申请的一些实施例中,该处理器32可以包括但不限于:
通用处理器、数字信号处理器(Digital Signal Processor,DSP)、专用集成电路(Application Specific Integrated Circuit,ASIC)、现场可编程门阵列(Field Programmable  Gate Array,FPGA)或者其他可编程逻辑器件、分立门或者晶体管逻辑器件、分立硬件组件等等。
在本申请的一些实施例中,该存储器33包括但不限于:
易失性存储器和/或非易失性存储器。其中,非易失性存储器可以是只读存储器(Read-Only Memory,ROM)、可编程只读存储器(Programmable ROM,PROM)、可擦除可编程只读存储器(Erasable PROM,EPROM)、电可擦除可编程只读存储器(Electrically EPROM,EEPROM)或闪存。易失性存储器可以是随机存取存储器(Random Access Memory,RAM),其用作外部高速缓存。通过示例性但不是限制性说明,许多形式的RAM可用,例如静态随机存取存储器(Static RAM,SRAM)、动态随机存取存储器(Dynamic RAM,DRAM)、同步动态随机存取存储器(Synchronous DRAM,SDRAM)、双倍数据速率同步动态随机存取存储器(Double Data Rate SDRAM,DDR SDRAM)、增强型同步动态随机存取存储器(Enhanced SDRAM,ESDRAM)、同步连接动态随机存取存储器(synch link DRAM,SLDRAM)和直接内存总线随机存取存储器(Direct Rambus RAM,DR RAM)。
在本申请的一些实施例中,该计算机程序34可以被分割成一个或多个单元,该一个或者多个单元被存储在该存储器33中,并由该处理器32执行,以完成本申请提供的方法。该一个或多个单元可以是能够完成特定功能的一系列计算机程序指令段,该指令段用于描述该计算机程序34在该电子设备30中的执行过程。
如图20所示,该电子设备30还可包括:
收发器33,该收发器33可连接至该处理器32或存储器33。
其中,处理器32可以控制该收发器33与其他设备进行通信,具体地,可以向其他设备发送信息或数据,或接收其他设备发送的信息或数据。收发器33可以包括发射机和接收机。收发器33还可以进一步包括天线,天线的数量可以为一个或多个。
应当理解,该电子设备30中的各个组件通过总线系统相连,其中,总线系统除包括数据总线之外,还包括电源总线、控制总线和状态信号总线。
图21是本申请实施例提供的点云编解码系统40的示意性框图。
如图21所示,该点云编解码系统40可包括:点云编码器41和点云解码器42,其中点云编码器41用于执行本申请实施例涉及的点云编码方法,点云解码器42用于执行本申请实施例涉及的点云解码方法。
本申请还提供了一种计算机存储介质,其上存储有计算机程序,该计算机程序被计算机执行时使得该计算机能够执行上述方法实施例的方法。
本申请实施例还提供一种包含指令的计算机程序产品,该指令被计算机执行时使得计算机执行上述方法实施例的方法。
本申请实施例还提供一种码流,该码流是经过上述图6、图7、图10或图13所示的编码方法得到的。
当使用软件实现时，可以全部或部分地以计算机程序产品的形式实现。该计算机程序产品包括一个或多个计算机指令。在计算机上加载和执行该计算机程序指令时，全部或部分地产生按照本申请实施例的流程或功能。该计算机可以是通用计算机、专用计算机、计算机网络、或者其他可编程装置。该计算机指令可以存储在计算机可读存储介质中，或者从一个计算机可读存储介质向另一个计算机可读存储介质传输，例如，该计算机指令可以从一个网站站点、计算机、服务器或数据中心通过有线（例如同轴电缆、光纤、数字用户线（digital subscriber line，DSL））或无线（例如红外、无线、微波等）方式向另一个网站站点、计算机、服务器或数据中心进行传输。该计算机可读存储介质可以是计算机能够存取的任何可用介质或者是包含一个或多个可用介质集成的服务器、数据中心等数据存储设备。该可用介质可以是磁性介质（例如，软盘、硬盘、磁带）、光介质（例如数字视频光盘（digital video disc，DVD））、或者半导体介质（例如固态硬盘（solid state disk，SSD））等。
本领域普通技术人员可以意识到,结合本文中所公开的实施例描述的各示例的单元及算法步骤,能够以电子硬件、或者计算机软件和电子硬件的结合来实现。这些功能究竟以硬件还是软件方式来执行,取决于技术方案的特定应用和设计约束条件。专业技术人员可以对每个特定的应用来使用不同方法来实现所描述的功能,但是这种实现不应认为超出本申请的范围。
在本申请所提供的几个实施例中,应该理解到,所揭露的系统、装置和方法,可以通过其它的方式实现。例如,以上所描述的装置实施例仅仅是示意性的,例如,该单元的划分,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式,例如多个单元或组件可以结合或者可以集成到另一个系统,或一些特征可以忽略,或不执行。另一点,所显示或讨论的相互之间的耦合或直接耦合或通信连接可以是通过一些接口,装置或单元的间接耦合或通信连接,可以是电性,机械或其它的形式。
作为分离部件说明的单元可以是或者也可以不是物理上分开的,作为单元显示的部件可以是或者也可以不是物理单元,即可以位于一个地方,或者也可以分布到多个网络单元上。可以根据实际的需要选择其中的部分或者全部单元来实现本实施例方案的目的。例如,在本申请各个实施例中的各功能单元可以集成在一个处理单元中,也可以是各个单元单独物理存在,也可以两个或两个以上单元集成在一个单元中。
以上所述，仅为本申请的具体实施方式，但本申请的保护范围并不局限于此，任何熟悉本技术领域的技术人员在本申请揭露的技术范围内，可轻易想到变化或替换，都应涵盖在本申请的保护范围之内。因此，本申请的保护范围应以权利要求的保护范围为准。

Claims (105)

  1. 一种点云编码方法,其特征在于,包括:
    获取点云中当前点的属性信息;
    对所述当前点的属性信息进行处理,得到所述当前点的属性信息的残差值;
    使用目标量化方式,对所述当前点的属性信息的残差值进行量化,得到所述当前点的属性信息的量化残差值;
    其中,所述目标量化方式包括如下至少两种量化方式:第一量化方式、第二量化方式和第三量化方式,所述第一量化方式为对所述点云中至少一个点的量化参数设定量化参数增量,所述第二量化方式为对所述点云中点的残差值进行加权处理,所述第三量化方式为对所述点云中至少一个点的属性信息的残差值进行无损编码。
  2. 根据权利要求1所述的方法,其特征在于,所述方法还包括:
    获取所述点云中点的几何信息;
    根据所述点云中点的几何信息，将所述点云划分为一个或多个细节等级LOD层，每个所述LOD层包括至少一层细节表达层，每一层细节表达层包括至少一个点。
  3. 根据权利要求2所述的方法,其特征在于,所述目标量化方式包括所述第一量化方式和所述第二量化方式,所述使用目标量化方式,对所述当前点的属性信息的残差值进行量化,得到所述当前点的属性信息的量化残差值,包括:
    根据所述当前点的几何信息,确定所述当前点所处的目标LOD层;
    确定适配所述目标LOD层的目标量化步长;
    确定所述当前点的第一量化权重;
    根据所述目标量化步长和所述第一量化权重,对所述当前点的属性信息的残差值进行量化,得到所述当前点的属性信息的量化残差值。
  4. 根据权利要求2所述的方法,其特征在于,所述目标量化方式包括所述第一量化方式和所述第三量化方式,所述使用目标量化方式,对所述当前点的属性信息的残差值进行量化,得到所述当前点的属性信息的量化残差值,包括:
    根据所述当前点的几何信息,确定所述当前点所处的目标LOD层;
    若确定所述当前点属于有损编码的点,则确定适配所述目标LOD层的目标量化步长;
    根据所述目标量化步长,对所述当前点的属性信息的残差值进行量化,得到所述当前点的属性信息的量化残差值。
  5. 根据权利要求2所述的方法,其特征在于,所述目标量化方式包括所述第一量化方式、所述第二量化方式和所述第三量化方式,所述使用目标量化方式,对所述当前点的属性信息的残差值进行量化,得到所述当前点的属性信息的量化残差值,包括:
    根据所述当前点的几何信息,确定所述当前点所处的目标LOD层;
    若确定所述当前点属于有损编码的点,则确定适配所述目标LOD层的目标量化步长,以及确定所述当前点的第一量化权重;
    根据所述目标量化步长和所述第一量化权重,对所述当前点的属性信息的残差值进行量化,得到所述当前点的属性信息的量化残差值。
  6. 根据权利要求4或5所述的方法,其特征在于,所述方法还包括:
    若确定所述当前点属于无损编码的点,则对当前点的属性信息的残差值进行无损编码。
  7. 根据权利要求3-5任一项所述的方法,其特征在于,所述确定适配所述目标LOD层的目标量化步长,包括:
    获取所述目标LOD层的层级分级索引;
    根据所述目标LOD层的层级分级索引,在量化步长查找表中查询所述目标LOD层对应的目标量化步长,所述量化步长查找表包括LOD层与量化步长之间的对应关系。
  8. 根据权利要求3-5任一项所述的方法,其特征在于,所述确定适配所述目标LOD层的目标量化步长,包括:
    确定所述当前点的编码参数中的量化参数;
    获取所述目标LOD层的层级分级索引,并根据所述目标LOD层的层级分级索引,确定所述目标LOD层的量化参数增量;
    根据所述量化参数和所述目标LOD层的量化参数增量,确定所述目标LOD层对应的目标量化步长。
  9. 根据权利要求8所述的方法,其特征在于,所述根据所述目标LOD层的层级分级索引,确定所述目标LOD层的量化参数增量,包括:
    若所述目标LOD层属于所述点云的前N个LOD层,则确定所述目标LOD层的量化参数增量为j,所述N为小于或等于第一阈值的正整数,所述j为大于0且小于或等于第二阈值的整数;
    若所述目标LOD层不属于所述点云的前N个LOD层,则确定所述目标LOD层的量化参数增量为0。
  10. 根据权利要求9所述的方法,其特征在于,若所述量化参数大于或等于第三阈值,则所述j为第一预设数值;
    若所述量化参数小于所述第三阈值,则所述j为第二预设数值。
  11. 根据权利要求3或5所述的方法，其特征在于，所述确定所述当前点的第一量化权重，包括：
    确定所述当前点的索引;
    将所述当前点的索引所对应的量化权重,确定为所述当前点的第一量化权重。
  12. 根据权利要求11所述的方法,其特征在于,所述根据所述目标量化步长和所述第一量化权重,对所述当前点的属性信息的残差值进行量化,得到所述当前点的属性信息的量化残差值,包括:
    根据所述当前点的第一量化权重,确定所述当前点的第二量化权重;
    根据所述目标量化步长和所述第二量化权重,对所述当前点的属性信息的残差值进行量化,得到所述量化残差值。
  13. 根据权利要求12所述的方法,其特征在于,所述第二量化权重小于或等于所述目标量化步长。
  14. 根据权利要求12所述的方法,其特征在于,所述根据所述当前点的第一量化权重,确定所述当前点的第二量化权重,包括:
    利用以下公式确定所述当前点的第二量化权重:
    effectiveQuantWeight=min(w(Q)>>k,Qstep);
    其中,effectiveQuantWeight表示所述当前点的第二量化权重,w(Q)表示所述当前点的第一量化权重,k表示对所述w(Q)进行右移运算的位数,Qstep表示所述目标量化步长。
  15. 根据权利要求12所述的方法,其特征在于,所述第二量化权重的数值等于2的整数次幂。
  16. 根据权利要求12所述的方法,其特征在于,所述第一量化权重的数值不等于2的整数次幂,根据所述当前点的第一量化权重,确定所述当前点的第二量化权重,包括:
    基于所述当前点的第一量化权重的数值,将最接近所述当前点的第一量化权重的2的整数次幂,确定为所述当前点的第二量化权重。
  17. 根据权利要求12所述的方法,其特征在于,所述根据所述目标量化步长和所述第二量化权重,对所述当前点的属性信息的残差值进行量化,包括:
    利用所述第二量化权重乘以所述当前点的属性信息的残差值,得到加权残差值;
    利用所述目标量化步长对所述加权残差值进行量化,得到所述当前点的属性信息的量化残差值。
  18. 根据权利要求12所述的方法,其特征在于,所述根据所述目标量化步长和所述第二量化权重,对所述当前点的属性信息的残差值进行量化,包括:
    利用以下公式对所述当前点的属性信息的残差值进行量化:
    attrResidualQuant2=attrResidualQuant1×effectiveQuantWeight/Qstep;
    其中,所述attrResidualQuant2表示所述当前点的属性信息的量化残差值,attrResidualQuant1表示所述当前点的属性信息的残差值,所述effectiveQuantWeight表示所述当前点的第二量化权重,所述Qstep表示所述目标量化步长。
  19. 根据权利要求12所述的方法,其特征在于,所述根据所述目标量化步长和所述第二量化权重,对所述当前点的属性信息的残差值进行量化,包括:
    根据所述当前点的第二量化权重,对所述目标量化步长进行更新;
    根据所述更新后的量化步长,对所述当前点的属性信息的残差值进行量化。
  20. 根据权利要求19所述的方法,其特征在于,所述根据所述当前点的第二量化权重,对所述目标量化步长进行更新,包括:
    利用以下公式更新所述目标量化步长:
    newQstep=⌈Qstep/effectiveQuantWeight⌉；
    其中,所述effectiveQuantWeight表示所述当前点的第二量化权重,所述newQstep表示所述当前点的基于所述目标量化步长更新后的量化步长,所述Qstep表示所述目标量化步长。
  21. The method according to claim 12, wherein the method further comprises:
    traversing the points in the point cloud in the reverse of the encoding order of the point cloud, and updating the first quantization weights of the N nearest neighbor points of the current point based on the first quantization weight of the current point, where N is an integer greater than 0.
  22. The method according to claim 21, wherein an initial value of the first quantization weight of each point in the point cloud is a preset value.
  23. The method according to claim 21, wherein initial values of the first quantization weights of the points in the first M LOD layers of the point cloud are greater than initial values of the first quantization weights of the points in the remaining LOD layers, where M is a positive integer.
  24. The method according to claim 21, wherein updating the first quantization weights of the N nearest neighbor points of the current point based on the first quantization weight of the current point comprises:
    obtaining an influence weight of the current point on each of the N nearest neighbor points, the influence weight being related to position information of the current point and the N nearest neighbor points;
    updating the first quantization weights of the N nearest neighbor points based on the quantization weight of the current point and the influence weight of the current point on each of the N nearest neighbor points.
  25. The method according to claim 24, wherein an attribute parameter set of the point cloud comprises the influence weight of the current point on each of the N nearest neighbor points, and obtaining the influence weight of the current point on each of the N nearest neighbor points comprises:
    obtaining the influence weight of the current point on each of the N nearest neighbor points by querying the attribute parameter set.
  26. The method according to claim 24, wherein updating the first quantization weights of the N nearest neighbor points based on the quantization weight of the current point and the influence weight of the current point on each of the N nearest neighbor points comprises:
    updating the first quantization weights of the N nearest neighbor points based on the following formula:
    w(P_i)←w(P_i)+((α(P_i,Q)×w(Q))>>k);
    where Q denotes the current point, P_i denotes the i-th nearest neighbor point of Q, i=1,2,…,N, w(Q) denotes the first quantization weight of the current point, α(P_i,Q) denotes the influence weight of the current point on the neighbor point, w(P_i) denotes the updated first quantization weight of the neighbor point P_i, and k denotes the number of bits of the right-shift operation.
  27. The method according to claim 26, wherein the value of α(P_i,Q) decreases as i increases.
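The reverse-order weight propagation of claims 21, 26 and 27 can be sketched as below; the container layout (lists plus a neighbor map) and the sample α values are assumptions of this sketch, not part of the claims:

```python
def propagate_quant_weights(weights, order, neighbors, alphas, k=8):
    """Traverse points in the reverse of the encoding order and apply
    w(P_i) <- w(P_i) + ((alpha(P_i, Q) * w(Q)) >> k) for each neighbor.

    weights:   first quantization weights, indexed by point index
    order:     point indices in encoding order
    neighbors: maps a point index Q to its N nearest neighbors, nearest first
    alphas:    influence weights alpha(P_i, Q), decreasing with i (claim 27)
    k:         number of bits of the right-shift operation
    """
    for q in reversed(order):
        for i, p in enumerate(neighbors.get(q, [])):
            weights[p] += (alphas[i] * weights[q]) >> k
    return weights
```

With all initial weights set to the same preset value (claim 22), points that many later points depend on accumulate larger weights and are therefore quantized more finely.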
  28. The method according to claim 12, wherein the quantization weights of the point cloud are stored as an array, and the dimension of the array is the same as the number of points in the point cloud.
  29. The method according to any one of claims 4 to 6, wherein the method further comprises:
    losslessly encoding the residual value of the attribute information of at least one point in the point cloud.
  30. The method according to claim 29, wherein the at least one point comprises N points, and N is an integer multiple of 2.
  31. The method according to claim 29, wherein the at least one point comprises N points, and the intervals between every two adjacent points of the N points are equal.
  32. The method according to claim 29, wherein losslessly encoding the attribute residual value of at least one point in the point cloud comprises:
    losslessly encoding the attribute residual value of at least one point in at least one of multiple detail expression layers of the point cloud.
  33. The method according to claim 32, wherein losslessly encoding the attribute residual value of at least one point in at least one of the multiple detail expression layers of the point cloud comprises:
    obtaining, from the multiple detail expression layers of the point cloud, at least one first-type detail expression layer whose total number of included points is less than or equal to a first preset value, and at least one second-type detail expression layer whose total number of included points is greater than the first preset value;
    losslessly encoding the residual values of the attribute information of all points in the first-type detail expression layer;
    losslessly encoding the residual value of the attribute information of at least one point in the second-type detail expression layer.
  34. The method according to claim 33, wherein losslessly encoding the residual value of the attribute information of at least one point in the second-type detail expression layer comprises:
    losslessly encoding the residual values of the attribute information of M points in the second-type detail expression layer, where M is a positive integer multiple of 2.
  35. The method according to claim 33, wherein the at least one second-type detail expression layer comprises L second-type detail expression layers, L being a positive integer greater than or equal to 2, and losslessly encoding the residual value of the attribute information of at least one point in the second-type detail expression layer comprises:
    losslessly encoding the residual values of the attribute information of a first number of points in each of P second-type detail expression layers;
    losslessly encoding the residual values of the attribute information of a second number of points in each of Q second-type detail expression layers;
    where P and Q are positive integers, the sum of P and Q is less than or equal to L, the P second-type detail expression layers do not overlap the Q second-type detail expression layers, and the first number is different from the second number.
  36. The method according to claim 35, wherein the P second-type detail expression layers are the first P second-type detail expression layers of the L second-type detail expression layers.
  37. The method according to claim 35, wherein the Q second-type detail expression layers are the last Q second-type detail expression layers of the L second-type detail expression layers.
  38. The method according to claim 35, wherein the last one of the P second-type detail expression layers is adjacent to the first one of the Q second-type detail expression layers.
  39. The method according to claim 35, wherein the first number is greater than the second number.
  40. The method according to claim 39, wherein the first number is a positive integer multiple of the second number.
  41. The method according to claim 35, wherein the intervals between adjacent points of the first number of points are equal.
  42. The method according to claim 35, wherein the intervals between adjacent points of the second number of points are equal.
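Claims 31, 41 and 42 only require equal gaps between adjacent losslessly coded points; one illustrative (not claimed) way to pick such points is a fixed stride:

```python
def pick_equally_spaced(num_points: int, count: int) -> list:
    """Pick `count` point indices separated by a constant stride.

    Any equal-interval rule satisfies the claims; this particular stride
    choice is an assumption of the sketch.
    """
    if count <= 0 or num_points <= 0:
        return []
    stride = max(num_points // count, 1)
    return [i * stride for i in range(count) if i * stride < num_points]
```

For a detail expression layer of 8 points with 4 lossless points, this yields indices 0, 2, 4, 6, so every adjacent pair of selected points is separated by the same interval.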
  43. The method according to claim 6, wherein if the current point is a losslessly encoded point, the method further comprises:
    determining the reconstructed value of the attribute information of the current point according to the residual value of the attribute information of the current point and the predicted value of the attribute information.
  44. The method according to claim 43, wherein determining the reconstructed value of the attribute information of the current point according to the residual value of the attribute information of the current point and the predicted value of the attribute information comprises:
    determining the reconstructed value of the attribute information of the current point according to the following formula:
    reconstructedColor=attrResidual+attrPredValue,
    where reconstructedColor is the reconstructed value of the attribute information of the current point, attrResidual is the residual value of the attribute information of the current point, and attrPredValue is the predicted value of the attribute information of the current point.
  45. The method according to any one of claims 35 to 42, wherein the method further comprises:
    generating a bitstream, the bitstream comprising first information, the first information being used to indicate the points whose residual values of attribute information are losslessly encoded.
  46. The method according to claim 45, wherein the first information comprises identification information of the points whose residual values of attribute information are losslessly encoded.
  47. The method according to claim 45, wherein the first information comprises the number of points whose residual values of attribute information are losslessly encoded.
  48. The method according to claim 45, wherein the first information comprises the first number, the second number, and partition information of the P second-type detail expression layers and the Q second-type detail expression layers.
  49. The method according to claim 48, wherein if the last one of the P second-type detail expression layers is adjacent to the first one of the Q second-type detail expression layers, the partition information further comprises identification information of the first one of the Q second-type detail expression layers, or comprises identification information of the last one of the P second-type detail expression layers.
  50. The method according to claim 45, wherein the first information further comprises: identification information of the first point, in the second-type detail expression layer, whose residual value of attribute information is losslessly encoded.
  51. The method according to claim 29, wherein losslessly encoding the residual value of the attribute information of at least one point in the point cloud comprises:
    skipping, in the process of quantizing the residual values of the attribute information of the points in the point cloud, the at least one point whose residual value of attribute information is losslessly encoded; or
    setting the quantization step size of the at least one point whose residual value of attribute information is losslessly encoded to 1; or
    setting the quantization parameter QP of the at least one point whose residual value of attribute information is losslessly encoded to a target value, the target value being the QP value corresponding to a quantization step size of 1.
  52. A point cloud decoding method, comprising:
    parsing a bitstream of a point cloud to obtain a quantized residual value of attribute information of a current point of the point cloud;
    inverse quantizing the quantized residual value of the attribute information of the current point using a target inverse quantization manner to obtain a reconstructed residual value of the attribute information of the current point;
    wherein the target inverse quantization manner comprises at least two of the following inverse quantization manners: a first inverse quantization manner, a second inverse quantization manner and a third inverse quantization manner, the first inverse quantization manner being setting an inverse quantization parameter increment for the inverse quantization parameter of at least one point in the point cloud, the second inverse quantization manner being performing de-weighting processing on the residual values of the points in the point cloud, and the third inverse quantization manner being losslessly decoding the residual value of the attribute information of at least one point in the point cloud.
  53. The method according to claim 52, wherein the method further comprises:
    obtaining geometry information of the points in the point cloud;
    partitioning the point cloud into one or more level of detail (LOD) layers according to the geometry information of the points in the point cloud, each LOD layer comprising at least one detail expression layer, and each detail expression layer comprising at least one point.
  54. The method according to claim 53, wherein the target inverse quantization manner comprises the first inverse quantization manner and the second inverse quantization manner, and inverse quantizing the quantized residual value of the attribute information of the current point using the target inverse quantization manner comprises:
    determining, according to the geometry information of the current point, the target LOD layer in which the current point is located;
    determining a target quantization step size adapted to the target LOD layer;
    determining a first quantization weight of the current point;
    inverse quantizing the quantized residual value of the attribute information of the current point according to the target quantization step size and the first quantization weight.
  55. The method according to claim 53, wherein the target inverse quantization manner comprises the first inverse quantization manner and the third inverse quantization manner, and inverse quantizing the quantized residual value of the attribute information of the current point using the target inverse quantization manner comprises:
    determining, according to the geometry information of the current point, the target LOD layer in which the current point is located;
    if it is determined that the current point is a lossily encoded point, determining a target quantization step size adapted to the target LOD layer;
    inverse quantizing the quantized residual value of the attribute information of the current point according to the target quantization step size.
  56. The method according to claim 53, wherein the target inverse quantization manner comprises the first inverse quantization manner, the second inverse quantization manner and the third inverse quantization manner, and inverse quantizing the quantized residual value of the attribute information of the current point using the target inverse quantization manner comprises:
    determining, according to the geometry information of the current point, the target LOD layer in which the current point is located;
    if it is determined that the current point is a lossily encoded point, determining a target quantization step size adapted to the target LOD layer, and determining a first quantization weight of the current point;
    inverse quantizing the quantized residual value of the attribute information of the current point according to the target quantization step size and the first quantization weight.
  57. The method according to claim 55 or 56, wherein the method further comprises:
    if it is determined that the current point is a losslessly encoded point, losslessly decoding the residual value of the attribute information of the current point.
  58. The method according to claim 54 or 56, wherein determining the target quantization step size adapted to the target LOD layer comprises:
    obtaining a level index of the target LOD layer;
    querying, according to the level index of the target LOD layer, a quantization step size lookup table for the target quantization step size corresponding to the target LOD layer, the quantization step size lookup table comprising correspondences between LOD layers and quantization step sizes.
  59. The method according to claim 54 or 56, wherein determining the target quantization step size adapted to the target LOD layer comprises:
    decoding the bitstream to obtain a quantization parameter in encoding parameters of the current point;
    obtaining a level index of the target LOD layer, and determining a quantization parameter increment of the target LOD layer according to the level index of the target LOD layer;
    determining the target quantization step size corresponding to the target LOD layer according to the quantization parameter and the quantization parameter increment of the target LOD layer.
  60. The method according to claim 59, wherein determining the quantization parameter increment of the target LOD layer according to the level index of the target LOD layer comprises:
    if the target LOD layer belongs to the first N LOD layers of the point cloud, determining that the quantization parameter increment of the target LOD layer is j, where N is a positive integer less than or equal to a first threshold, and j is an integer greater than 0 and less than or equal to a second threshold;
    if the target LOD layer does not belong to the first N LOD layers of the point cloud, determining that the quantization parameter increment of the target LOD layer is 0.
  61. The method according to claim 60, wherein if the quantization parameter is greater than or equal to a third threshold, j is a first preset value;
    if the quantization parameter is less than the third threshold, j is a second preset value.
  62. The method according to claim 54 or 56, wherein determining the first quantization weight of the current point comprises:
    determining an index of the current point;
    determining the quantization weight corresponding to the index of the current point as the first quantization weight of the current point.
  63. The method according to claim 54 or 56, wherein inverse quantizing the quantized residual value of the attribute information of the current point according to the target quantization step size and the first quantization weight comprises:
    determining a second quantization weight of the current point according to the first quantization weight of the current point;
    inverse quantizing the quantized residual value of the attribute information of the current point according to the target quantization step size and the second quantization weight to obtain the reconstructed residual value.
  64. The method according to claim 63, wherein the second quantization weight is less than or equal to the target quantization step size.
  65. The method according to claim 63, wherein determining the second quantization weight of the current point according to the first quantization weight of the current point comprises:
    determining the second quantization weight of the current point using the following formula:
    effectiveQuantWeight=min(w(Q)>>k, Qstep);
    where effectiveQuantWeight denotes the second quantization weight of the current point, w(Q) denotes the first quantization weight of the current point, k denotes the number of bits by which w(Q) is right-shifted, and Qstep denotes the target quantization step size.
  66. The method according to claim 63, wherein the value of the second quantization weight is equal to an integer power of 2.
  67. The method according to claim 63, wherein the value of the first quantization weight is not equal to an integer power of 2, and determining the second quantization weight of the current point according to the first quantization weight of the current point comprises:
    determining, based on the value of the first quantization weight of the current point, the integer power of 2 closest to the first quantization weight of the current point as the second quantization weight of the current point.
  68. The method according to claim 63, wherein inverse quantizing the quantized residual value of the attribute information of the current point according to the target quantization step size and the second quantization weight comprises:
    inverse quantizing the quantized residual value using the target quantization step size of the current point to obtain a weighted residual value;
    dividing the weighted residual value by the second quantization weight to obtain the reconstructed residual value.
  69. The method according to claim 63, wherein inverse quantizing the quantized residual value of the attribute information of the current point according to the target quantization step size and the second quantization weight comprises:
    inverse quantizing the quantized residual value using the following formula:
    attrResidualQuant1=(attrResidualQuant2×Qstep)/effectiveQuantWeight;
    where attrResidualQuant2 denotes the quantized residual value, attrResidualQuant1 denotes the reconstructed residual value, effectiveQuantWeight denotes the second quantization weight of the current point, and Qstep denotes the target quantization step size of the current point.
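Mirroring the encoder-side formula, the inverse operation of claim 69 can be illustrated as follows; truncating integer division is again an assumption of the sketch, since the claim does not fix a rounding rule:

```python
def dequantize_residual(quantized: int, effective_weight: int, qstep: int) -> int:
    """attrResidualQuant1 = (attrResidualQuant2 x Qstep) / effectiveQuantWeight.

    Scaling by Qstep undoes the quantization; dividing by the second
    quantization weight undoes the weighting applied at the encoder.
    """
    return quantized * qstep // effective_weight
```

For example, a quantized residual of 20 produced with effectiveQuantWeight = 64 and Qstep = 32 is reconstructed as 10, matching the encoder-side example.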
  70. The method according to claim 63, wherein inverse quantizing the quantized residual value of the attribute information of the current point according to the target quantization step size and the second quantization weight comprises:
    updating the target quantization step size according to the second quantization weight of the current point;
    inverse quantizing the quantized residual value of the attribute information of the current point according to the updated quantization step size.
  71. The method according to claim 70, wherein updating the target quantization step size according to the second quantization weight of the current point comprises:
    updating the quantization step size of the current point using the following formula:
    newQstep=⌈Qstep/effectiveQuantWeight⌉;
    where effectiveQuantWeight denotes the second quantization weight of the current point, newQstep denotes the quantization step size of the current point after being updated based on the second quantization weight, and Qstep denotes the quantization step size of the current point before being updated based on the second quantization weight.
  72. The method according to claim 63, wherein the method further comprises:
    traversing the points in the point cloud in the reverse of the encoding order of the point cloud, and updating the first quantization weights of the N nearest neighbor points of the current point based on the first quantization weight of the current point, where N is an integer greater than 0.
  73. The method according to claim 72, wherein an initial value of the first quantization weight of each point in the point cloud is a preset value.
  74. The method according to claim 72, wherein initial values of the first quantization weights of the points in the first M LOD layers of the point cloud are greater than initial values of the first quantization weights of the points in the remaining LOD layers, where M is a positive integer.
  75. The method according to claim 72, wherein updating the first quantization weights of the N nearest neighbor points of the current point based on the first quantization weight of the current point comprises:
    obtaining an influence weight of the current point on each of the N nearest neighbor points, the influence weight being related to position information of the current point and the N nearest neighbor points;
    updating the first quantization weights of the N nearest neighbor points based on the quantization weight of the current point and the influence weight of the current point on each of the N nearest neighbor points.
  76. The method according to claim 75, wherein an attribute parameter set of the point cloud comprises the influence weight of the current point on each of the N nearest neighbor points, and obtaining the influence weight of the current point on each of the N nearest neighbor points comprises:
    obtaining the influence weight of the current point on each of the N nearest neighbor points by querying the attribute parameter set.
  77. The method according to claim 75, wherein updating the first quantization weights of the N nearest neighbor points based on the quantization weight of the current point and the influence weight of the current point on each of the N nearest neighbor points comprises:
    updating the first quantization weights of the N nearest neighbor points based on the following formula:
    w(P_i)←w(P_i)+((α(P_i,Q)×w(Q))>>k);
    where Q denotes the current point, P_i denotes the i-th nearest neighbor point of Q, i=1,2,…,N, w(Q) denotes the first quantization weight of the current point, α(P_i,Q) denotes the influence weight of the current point on the neighbor point, w(P_i) denotes the updated first quantization weight of the neighbor point P_i, and k denotes the number of bits of the right-shift operation.
  78. The method according to claim 77, wherein the value of α(P_i,Q) decreases as i increases.
  79. The method according to claim 52, wherein the quantization weights of the point cloud are stored as an array, and the dimension of the array is the same as the number of points in the point cloud.
  80. The method according to any one of claims 55 to 57, wherein the method further comprises:
    decoding the bitstream of the point cloud to obtain first information, the first information being used to indicate the points whose residual values of attribute information are losslessly encoded;
    determining, according to the first information, whether the residual value of the attribute information of the current point is losslessly encoded.
  81. The method according to claim 80, wherein the first information comprises N, where N is the total number of points in the point cloud whose residual values of attribute information are losslessly encoded.
  82. The method according to claim 81, wherein N is an integer multiple of 2.
  83. The method according to claim 81, wherein the intervals between every two adjacent points of the N points are equal.
  84. The method according to claim 83, wherein determining, according to the first information, whether the residual value of the attribute information of the current point is losslessly encoded comprises:
    if it is determined that the current point is one of the N points, determining that the residual value of the attribute information of the current point is losslessly encoded.
  85. The method according to claim 80, wherein if the first information comprises a preset interval, determining, according to the first information, whether the residual value of the attribute information of the current point is losslessly encoded comprises:
    if it is determined, according to the preset interval, that the interval between the current point and the previous point whose residual value of attribute information is losslessly encoded equals the preset interval, determining that the residual value of the attribute information of the current point is losslessly encoded.
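The preset-interval test of claim 85 amounts to a gap comparison; the bookkeeping of the previous lossless point's index is an assumption of this sketch, not something the claim prescribes:

```python
def is_losslessly_coded(current_index: int, prev_lossless_index: int,
                        preset_interval: int) -> bool:
    """True when the gap between the current point and the previous
    losslessly coded point equals the preset interval signalled in the
    first information."""
    return current_index - prev_lossless_index == preset_interval
```

A decoder applying this rule would update `prev_lossless_index` to the current index each time the check succeeds, then skip inverse quantization for that point.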
  86. The method according to claim 80, wherein if the first information comprises a first preset value, determining, according to the first information, whether the residual value of the attribute information of the current point is losslessly encoded comprises:
    obtaining, according to the first preset value, from the multiple detail expression layers of the point cloud, at least one first-type detail expression layer whose total number of included points is less than or equal to the first preset value, and at least one second-type detail expression layer whose total number of included points is greater than the first preset value;
    if the current point belongs to the first-type detail expression layer, determining that the residual value of the attribute information of the current point is losslessly encoded.
  87. The method according to claim 86, wherein if the first information comprises M, where M is the number of points in one second-type detail expression layer whose residual values of attribute information are losslessly encoded, and M is a positive integer multiple of 2, determining, according to the first information, whether the residual value of the attribute information of the current point is losslessly encoded comprises:
    if the current point is one of the M points, determining that the residual value of the attribute information of the current point is losslessly encoded.
  88. The method according to claim 86, wherein the at least one second-type detail expression layer comprises L second-type detail expression layers, L being a positive integer greater than or equal to 2, and if the first information comprises a first number, a second number, and partition information of P second-type detail expression layers and Q second-type detail expression layers, determining, according to the first information, whether the residual value of the attribute information of the current point is losslessly encoded comprises:
    partitioning the L second-type detail expression layers according to the partition information to obtain the P second-type detail expression layers and the Q second-type detail expression layers;
    if it is determined that the current point is one of the first number of points in the P second-type detail expression layers whose residual values of attribute information are losslessly encoded, determining that the residual value of the attribute information of the current point is losslessly encoded;
    if it is determined that the current point is one of the second number of points in the Q second-type detail expression layers whose residual values of attribute information are losslessly encoded, determining that the residual value of the attribute information of the current point is losslessly encoded;
    where P and Q are positive integers, the sum of P and Q is less than or equal to L, the P second-type detail expression layers do not overlap the Q second-type detail expression layers, and the first number is different from the second number.
  89. The method according to claim 88, wherein the P second-type detail expression layers are the first P second-type detail expression layers of the L second-type detail expression layers.
  90. The method according to claim 88, wherein the Q second-type detail expression layers are the last Q second-type detail expression layers of the L second-type detail expression layers.
  91. The method according to claim 88, wherein the last one of the P second-type detail expression layers is adjacent to the first one of the Q second-type detail expression layers.
  92. The method according to claim 91, wherein if the last one of the P second-type detail expression layers is adjacent to the first one of the Q second-type detail expression layers, the partition information further comprises identification information of the first one of the Q second-type detail expression layers, or comprises identification information of the last one of the P second-type detail expression layers.
  93. The method according to claim 88, wherein the first information further comprises: identification information of the first point, in the second-type detail expression layer, whose residual value of attribute information is losslessly encoded.
  94. The method according to claim 88, wherein the first information further comprises: identification information of the points whose residual values of attribute information are losslessly encoded.
  95. The method according to claim 88, wherein the first number is greater than the second number.
  96. The method according to claim 95, wherein the first number is a positive integer multiple of the second number.
  97. The method according to claim 88, wherein the intervals between adjacent points of the first number of points are equal.
  98. The method according to claim 88, wherein the intervals between adjacent points of the second number of points are equal.
  99. The method according to claim 80, wherein if the current point is a losslessly encoded point, the method further comprises:
    determining the reconstructed value of the attribute information of the current point according to the following formula:
    reconstructedColor=attrResidual+attrPredValue,
    where reconstructedColor is the reconstructed value of the attribute information of the current point, attrResidual is the residual value of the attribute information of the current point, and attrPredValue is the predicted value of the attribute information of the current point.
  100. A point cloud encoder, comprising:
    an obtaining unit, configured to obtain attribute information of a current point in a point cloud;
    a processing unit, configured to process the attribute information of the current point to obtain a residual value of the attribute information of the current point;
    a quantization unit, configured to quantize the residual value of the attribute information of the current point using a target quantization manner to obtain a quantized residual value of the attribute information of the current point;
    wherein the target quantization manner comprises at least two of the following quantization manners: a first quantization manner, a second quantization manner and a third quantization manner, the first quantization manner being setting a quantization parameter increment for the quantization parameter of at least one point in the point cloud, the second quantization manner being performing weighting processing on the residual values of the points in the point cloud, and the third quantization manner being losslessly encoding the residual value of the attribute information of at least one point in the point cloud.
  101. A point cloud decoder, comprising:
    a decoding unit, configured to parse a bitstream of a point cloud to obtain a quantized residual value of attribute information of a current point of the point cloud;
    an inverse quantization unit, configured to inverse quantize the quantized residual value of the attribute information of the current point using a target inverse quantization manner to obtain a reconstructed residual value of the attribute information of the current point;
    wherein the target inverse quantization manner comprises at least two of the following inverse quantization manners: a first inverse quantization manner, a second inverse quantization manner and a third inverse quantization manner, the first inverse quantization manner being setting an inverse quantization parameter increment for the inverse quantization parameter of at least one point in the point cloud, the second inverse quantization manner being performing de-weighting processing on the residual values of the points in the point cloud, and the third inverse quantization manner being losslessly decoding the residual value of the attribute information of at least one point in the point cloud.
  102. A point cloud encoder, comprising:
    a processor, adapted to execute a computer program;
    a computer-readable storage medium storing a computer program which, when executed by the processor, implements the method according to any one of claims 1 to 51.
  103. A point cloud decoder, comprising:
    a processor, adapted to execute a computer program;
    a computer-readable storage medium storing a computer program which, when executed by the processor, implements the method according to any one of claims 52 to 98.
  104. A computer-readable storage medium, comprising computer instructions adapted to be loaded by a processor to perform the method according to any one of claims 1 to 98.
  105. A point cloud encoding and decoding system, comprising:
    the point cloud encoder according to claim 100 or 102;
    and the point cloud decoder according to claim 101 or 103.
PCT/CN2021/087064 2020-09-25 2021-04-13 Point cloud encoding and decoding method and system, and point cloud encoder and point cloud decoder WO2022062369A1 (zh)

Priority Applications (5)

Application Number Priority Date Filing Date Title
JP2023518709A JP2023543752A (ja) 2020-09-25 2021-04-13 Point cloud codec method and system, and point cloud encoder and point cloud decoder
KR1020237009716A KR20230075426A (ko) 2020-09-25 2021-04-13 Point cloud encoding and decoding method and system, and point cloud encoder and point cloud decoder
CN202180064277.9A CN116325731A (zh) 2020-09-25 2021-04-13 Point cloud encoding and decoding method and system, and point cloud encoder and point cloud decoder
EP21870758.6A EP4221207A4 (en) 2020-09-25 2021-04-13 POINT CLOUD ENCODING AND DECODING METHOD AND SYSTEM, AND POINT CLOUD ENCODER AND DECODER
US18/125,276 US20230232004A1 (en) 2020-09-25 2023-03-23 Point cloud encoding and decoding method and system, and point cloud encoder and point cloud decoder

Applications Claiming Priority (6)

Application Number Priority Date Filing Date Title
CNPCT/CN2020/117941 2020-09-25
PCT/CN2020/117941 WO2022061785A1 (zh) 2020-09-25 Point cloud encoding method, point cloud decoding method, and related apparatus
PCT/CN2020/138421 WO2022133752A1 (zh) 2020-12-22 Point cloud encoding method, decoding method, encoder and decoder
CNPCT/CN2020/138423 2020-12-22
CNPCT/CN2020/138421 2020-12-22
PCT/CN2020/138423 WO2022133753A1 (zh) 2020-12-22 Point cloud encoding and decoding method and system, and point cloud encoder and point cloud decoder

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US18/125,276 Continuation US20230232004A1 (en) 2020-09-25 2023-03-23 Point cloud encoding and decoding method and system, and point cloud encoder and point cloud decoder

Publications (1)

Publication Number Publication Date
WO2022062369A1 true WO2022062369A1 (zh) 2022-03-31

Family

ID=80844505

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/087064 WO2022062369A1 (zh) 2020-09-25 2021-04-13 点云编解码方法与系统、及点云编码器与点云解码器

Country Status (6)

Country Link
US (1) US20230232004A1 (zh)
EP (1) EP4221207A4 (zh)
JP (1) JP2023543752A (zh)
KR (1) KR20230075426A (zh)
CN (1) CN116325731A (zh)
WO (1) WO2022062369A1 (zh)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023213074A1 (zh) * 2022-05-06 2023-11-09 腾讯科技(深圳)有限公司 Point cloud processing method and apparatus, device, storage medium, and product
WO2024007253A1 (zh) * 2022-07-07 2024-01-11 Oppo广东移动通信有限公司 Point cloud rate-distortion optimization method, attribute compression method, apparatus, and storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2018078503A (ja) * 2016-11-11 2018-05-17 日本電信電話株式会社 Data encoding method, data encoding apparatus, and data encoding program
US20200021856A1 (en) * 2018-07-10 2020-01-16 Apple Inc. Hierarchical point cloud compression
CN110708560A (zh) * 2018-07-10 2020-01-17 腾讯美国有限责任公司 Point cloud data processing method and apparatus
WO2020162495A1 (ja) * 2019-02-05 2020-08-13 パナソニック インテレクチュアル プロパティ コーポレーション オブ アメリカ Three-dimensional data encoding method, three-dimensional data decoding method, three-dimensional data encoding device, and three-dimensional data decoding device
WO2020189709A1 (ja) * 2019-03-18 2020-09-24 パナソニック インテレクチュアル プロパティ コーポレーション オブ アメリカ Three-dimensional data encoding method, three-dimensional data decoding method, three-dimensional data encoding device, and three-dimensional data decoding device

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP4221207A4 *

Also Published As

Publication number Publication date
EP4221207A1 (en) 2023-08-02
EP4221207A4 (en) 2024-03-20
KR20230075426A (ko) 2023-05-31
US20230232004A1 (en) 2023-07-20
CN116325731A (zh) 2023-06-23
JP2023543752A (ja) 2023-10-18

Similar Documents

Publication Publication Date Title
US11671576B2 (en) Method and apparatus for inter-channel prediction and transform for point-cloud attribute coding
US11232599B2 (en) Method and apparatus for inter-channel prediction and transform for point cloud attribute coding
US20230232004A1 (en) Point cloud encoding and decoding method and system, and point cloud encoder and point cloud decoder
EP4042376A1 (en) Techniques and apparatus for inter-channel prediction and transform for point-cloud attribute coding
US11910017B2 (en) Method for predicting point cloud attribute, encoder, decoder, and storage medium
US20230342985A1 (en) Point cloud encoding and decoding method and point cloud decoder
CN116250008A (zh) Point cloud encoding and decoding methods, encoder, decoder, and encoding/decoding system
WO2022140937A1 (zh) Point cloud encoding and decoding method and system, and point cloud encoder and point cloud decoder
CN116325732A (zh) Point cloud decoding and encoding methods, decoder, encoder, and encoding/decoding system
WO2022217472A1 (zh) Point cloud encoding and decoding methods, encoder, decoder, and computer-readable storage medium
WO2023024842A1 (zh) Point cloud encoding and decoding method, apparatus, device, and storage medium
WO2022257528A1 (zh) Point cloud attribute prediction method and apparatus, and related device
WO2022133752A1 (zh) Point cloud encoding method, decoding method, encoder and decoder
WO2023103565A1 (zh) Encoding and decoding method, apparatus, device, and storage medium for point cloud attribute information
WO2023159428A1 (zh) Encoding method, encoder, and storage medium
WO2024026712A1 (zh) Point cloud encoding and decoding method, apparatus, device, and storage medium
WO2023240455A1 (zh) Point cloud encoding method, encoding apparatus, encoding device, and storage medium
WO2024065269A1 (zh) Point cloud encoding and decoding method, apparatus, device, and storage medium
EP4307687A1 (en) Method and apparatus for selecting neighbor point in point cloud, and codec
CN117321991A (zh) Point cloud attribute prediction method and apparatus, and codec

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21870758

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2023518709

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

ENP Entry into the national phase

Ref document number: 2021870758

Country of ref document: EP

Effective date: 20230425