WO2023123471A1 - Encoding and decoding method, code stream, encoder, decoder, and storage medium - Google Patents

Encoding and decoding method, code stream, encoder, decoder, and storage medium

Info

Publication number
WO2023123471A1
Authority
WO
WIPO (PCT)
Prior art keywords
point cloud
value
filter
identification information
reconstructed
Prior art date
Application number
PCT/CN2021/143955
Other languages
English (en)
French (fr)
Inventor
元辉 (Yuan Hui)
邢金睿 (Xing Jinrui)
王璐 (Wang Lu)
王婷婷 (Wang Tingting)
李明 (Li Ming)
Original Assignee
Oppo广东移动通信有限公司 (Guangdong Oppo Mobile Telecommunications Corp., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Oppo广东移动通信有限公司 (Guangdong Oppo Mobile Telecommunications Corp., Ltd.)
Priority to PCT/CN2021/143955 priority Critical patent/WO2023123471A1/zh
Publication of WO2023123471A1 publication Critical patent/WO2023123471A1/zh

Links

Images

Classifications

    • H — ELECTRICITY
    • H04 — ELECTRIC COMMUNICATION TECHNIQUE
    • H04N — PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 — Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10 — Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/102 — Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/117 — Filters, e.g. for pre-processing or post-processing
    • H — ELECTRICITY
    • H04 — ELECTRIC COMMUNICATION TECHNIQUE
    • H04N — PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 — Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/50 — Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/597 — Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding specially adapted for multi-view video sequence encoding
    • H — ELECTRICITY
    • H04 — ELECTRIC COMMUNICATION TECHNIQUE
    • H04N — PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 — Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/85 — Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using pre-processing or post-processing specially adapted for video compression

Definitions

  • the embodiments of the present application relate to the technical field of video encoding and decoding, and in particular, relate to an encoding and decoding method, a code stream, an encoder, a decoder, and a storage medium.
  • the attribute information encoding of point clouds is mainly aimed at encoding color information.
  • the color information is converted from the RGB color space to the YUV color space.
  • the point cloud is recolored with the reconstructed geometry information so that the unencoded attribute information corresponds to the reconstructed geometry information.
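The RGB-to-YUV conversion step above can be sketched as a fixed matrix multiplication. The patent does not specify which conversion matrix an implementation uses; the full-range BT.601 (JFIF-style) coefficients below are an illustrative choice only.

```python
def rgb_to_yuv(r, g, b):
    """Full-range BT.601 (JFIF-style) RGB -> YCbCr conversion.

    Illustrative sketch: the exact matrix used by a G-PCC
    implementation may differ from these coefficients.
    """
    y  =  0.299    * r + 0.587    * g + 0.114    * b
    cb = -0.168736 * r - 0.331264 * g + 0.5      * b + 128.0
    cr =  0.5      * r - 0.418688 * g - 0.081312 * b + 128.0
    return y, cb, cr
```

For example, pure white maps to maximum luminance with neutral chroma (Y = 255, Cb = Cr = 128).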
  • Three prediction/transform methods are mainly used, namely Predicting Transform, Lifting Transform, and Region Adaptive Hierarchical Transform (RAHT), to finally generate a binary code stream.
  • The existing G-PCC codec framework only performs basic reconstruction of the initial point cloud. In the case of attribute lossy coding, the difference between the reconstructed point cloud and the initial point cloud may be relatively large after reconstruction, and the distortion is serious, which affects the quality of the entire point cloud.
  • Embodiments of the present application provide a codec method, a code stream, a coder, a decoder, and a storage medium, which can not only improve the quality of point clouds, but also save code rates, thereby improving codec efficiency.
  • the embodiment of the present application provides an encoding method applied to an encoder, and the method includes:
  • determining an initial point cloud and a reconstructed point cloud corresponding to the initial point cloud, and determining a filter coefficient according to the initial point cloud and the reconstructed point cloud;
  • using the filter coefficient to filter K target points corresponding to a first point in the reconstructed point cloud, and determining a filtered point cloud corresponding to the reconstructed point cloud; wherein the K target points include the first point and (K-1) neighbor points adjacent to the first point, K is an integer greater than 1, and the first point represents any point in the reconstructed point cloud;
  • determining filter identification information according to the reconstructed point cloud and the filtered point cloud, wherein the filter identification information is used to determine whether to perform filtering processing on the reconstructed point cloud;
  • if the filter identification information indicates that the reconstructed point cloud is to be filtered, encoding the filter identification information and the filter coefficient, and writing the obtained encoded bits into the code stream.
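The coefficient-derivation step is not spelled out here beyond determining the filter coefficient according to the initial and reconstructed point clouds. A common realization, consistent with the MMSE abbreviation listed later in this document, is a least-squares (Wiener-style) fit of K taps mapping each reconstructed neighborhood to the original attribute value. The sketch below makes that assumption; all function and variable names are illustrative, not taken from the reference software.

```python
import numpy as np

def estimate_filter_coefficients(recon_neighborhoods, original_values):
    """Least-squares (MMSE/Wiener-style) estimate of K filter taps.

    recon_neighborhoods: (N, K) array; row i holds the reconstructed
        attribute values of point i and its K-1 nearest neighbors.
    original_values: (N,) array of the corresponding original values.
    """
    A = np.asarray(recon_neighborhoods, dtype=float)
    b = np.asarray(original_values, dtype=float)
    coeffs, *_ = np.linalg.lstsq(A, b, rcond=None)
    return coeffs

def apply_filter(recon_neighborhoods, coeffs):
    """Filtered value = weighted sum over the K target points."""
    return np.asarray(recon_neighborhoods, dtype=float) @ coeffs
```

When the original values really are a fixed linear combination of the neighborhoods, the solver recovers those taps exactly.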
  • The embodiment of the present application provides a code stream, which is generated by bit coding according to information to be encoded; wherein the information to be encoded includes at least one of the following: a residual value of attribute information of a point in the initial point cloud, filter identification information, and a filter coefficient.
  • the embodiment of the present application provides a decoding method, which is applied to a decoder, and the method includes:
  • decoding the code stream to determine filter identification information, wherein the filter identification information is used to determine whether to filter the reconstructed point cloud corresponding to the initial point cloud; and, if the filter identification information indicates that the reconstructed point cloud is to be filtered, decoding the code stream to determine the filter coefficient;
  • using the filter coefficient to filter K target points corresponding to a first point in the reconstructed point cloud, and determining a filtered point cloud corresponding to the reconstructed point cloud; wherein the K target points include the first point and (K-1) neighbor points adjacent to the first point, K is an integer greater than 1, and the first point represents any point in the reconstructed point cloud.
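The decoder-side filtering step above — gather the first point plus its (K-1) nearest neighbors, then apply the decoded taps — can be sketched with a brute-force neighbor search. Names are hypothetical, and a real decoder would use an accelerated nearest-neighbor structure rather than this O(N²) loop.

```python
import numpy as np

def filter_reconstructed_cloud(positions, attributes, coeffs):
    """Filter one attribute channel of a reconstructed point cloud.

    positions: (N, 3) geometry; attributes: (N,) attribute values;
    coeffs: (K,) decoded filter taps, first tap for the point itself.
    For each point, the point itself plus its K-1 nearest neighbors
    (Euclidean distance, brute force) are combined by a weighted sum.
    """
    positions = np.asarray(positions, dtype=float)
    attributes = np.asarray(attributes, dtype=float)
    k = len(coeffs)
    filtered = np.empty_like(attributes)
    for i, p in enumerate(positions):
        d2 = np.sum((positions - p) ** 2, axis=1)
        nearest = np.argsort(d2)[:k]  # index i itself has distance 0
        filtered[i] = attributes[nearest] @ coeffs
    return filtered
```

With the identity taps [1, 0, ..., 0] the output equals the input, which is a convenient sanity check.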
  • an encoder which includes a first determination unit, a first filtering unit, and an encoding unit; wherein,
  • the first determination unit is configured to determine the initial point cloud and the reconstructed point cloud corresponding to the initial point cloud; and determine the filter coefficient according to the initial point cloud and the reconstructed point cloud;
  • the first filtering unit is configured to filter the K target points corresponding to the first point in the reconstructed point cloud by using the filter coefficient, and determine the filtered point cloud corresponding to the reconstructed point cloud; wherein, the K target points include the first point and (K-1) neighbor points adjacent to the first point, K is an integer greater than 1, and the first point represents any point in the reconstructed point cloud;
  • the first determination unit is further configured to determine filter identification information according to the reconstructed point cloud and the filtered point cloud; wherein the filter identification information is used to determine whether to perform filtering processing on the reconstructed point cloud;
  • the encoding unit is configured to encode the filter identification information and filter coefficients if the filter identification information indicates that the reconstructed point cloud is to be filtered, and write the obtained encoded bits into the code stream.
  • the embodiment of the present application provides an encoder, where the encoder includes a first memory and a first processor; wherein,
  • a first memory for storing a computer program capable of running on the first processor
  • the first processor is configured to execute the method of the first aspect when running the computer program.
  • the embodiment of the present application provides a decoder, which includes a decoding unit and a second filtering unit; wherein,
  • the decoding unit is configured to decode the code stream and determine the filter identification information; wherein the filter identification information is used to determine whether to filter the reconstructed point cloud corresponding to the initial point cloud; and if the filter identification information indicates that the reconstructed point cloud is filtered, Then decode the code stream and determine the filter coefficient;
  • the second filtering unit is configured to filter the K target points corresponding to the first point in the reconstructed point cloud by using the filter coefficient, and determine the filtered point cloud corresponding to the reconstructed point cloud; wherein, the K target points include the first point and (K-1) neighbor points adjacent to the first point, K is an integer greater than 1, and the first point represents any point in the reconstructed point cloud.
  • the embodiment of the present application provides a decoder, where the decoder includes a second memory and a second processor; wherein,
  • a second memory for storing a computer program capable of running on the second processor
  • the second processor is configured to execute the method as described in the third aspect when running the computer program.
  • The embodiment of the present application provides a computer storage medium storing a computer program; when the computer program is executed by the first processor, the method as described in the first aspect is implemented, or when the computer program is executed by the second processor, the method as described in the third aspect is implemented.
  • the embodiment of the present application provides a codec method, code stream, encoder, decoder, and storage medium.
  • At the encoding end, the initial point cloud and the reconstructed point cloud corresponding to the initial point cloud are determined; the filter coefficient is determined according to the initial point cloud and the reconstructed point cloud; the filter coefficient is used to filter the K target points corresponding to the first point in the reconstructed point cloud, and the filtered point cloud corresponding to the reconstructed point cloud is determined, wherein the K target points include the first point and (K-1) neighbor points adjacent to the first point, K is an integer greater than 1, and the first point represents any point in the reconstructed point cloud; the filter identification information is determined according to the reconstructed point cloud and the filtered point cloud, wherein the filter identification information is used to determine whether to perform filtering processing on the reconstructed point cloud; if the filter identification information indicates that the reconstructed point cloud is to be filtered, the filter identification information and the filter coefficient are encoded, and the obtained encoded bits are written into the code stream.
  • At the decoding end, the code stream is decoded to determine the filter identification information, which is used to determine whether to filter the reconstructed point cloud corresponding to the initial point cloud; if the filter identification information indicates that the reconstructed point cloud is to be filtered, the code stream is decoded to determine the filter coefficient; the filter coefficient is used to filter the K target points corresponding to the first point in the reconstructed point cloud, and the filtered point cloud corresponding to the reconstructed point cloud is determined; wherein the K target points include the first point and (K-1) neighboring points adjacent to the first point, K is an integer greater than 1, and the first point represents any point in the reconstructed point cloud.
  • In this way, the encoder uses the initial point cloud and the reconstructed point cloud to calculate the filter coefficients for filtering processing, and after determining to filter the reconstructed point cloud, passes the corresponding filter coefficients to the decoder; correspondingly, the decoder can directly decode the filter coefficients and then use them to filter the reconstructed point cloud, thereby optimizing the reconstructed point cloud and improving the quality of the point cloud. Moreover, when the encoder uses the nearest neighbor points for filtering, the current point itself is also taken into account, so that the filtered value also depends on the attribute value of the current point itself; the quality of the point cloud is further improved, the bit rate can be saved, and the encoding and decoding efficiency is improved.
  • Fig. 1 is a schematic diagram of the composition framework of a G-PCC encoder;
  • Fig. 2 is a schematic diagram of the composition framework of a G-PCC decoder;
  • Fig. 3 is a schematic structural diagram of zero-run encoding;
  • FIG. 4 is a first schematic flow diagram of an encoding method provided by an embodiment of the present application.
  • FIG. 5 is a schematic flow diagram II of an encoding method provided by an embodiment of the present application.
  • FIG. 6 is a schematic diagram of a relationship between BD-Rate performance indicators and K values under different components provided by an embodiment of the present application;
  • Fig. 7 is a schematic diagram of a change relationship between encoding and decoding time and K value provided by the embodiment of the present application.
  • FIG. 8 is a first schematic flowchart of a decoding method provided by an embodiment of the present application.
  • FIG. 9 is a second schematic flow diagram of a decoding method provided by an embodiment of the present application.
  • FIG. 10 is a schematic diagram of an overall flow of encoding and decoding provided by an embodiment of the present application.
  • FIG. 11 is a schematic flowchart of an encoding end filtering process provided by an embodiment of the present application.
  • FIG. 12 is a schematic flow diagram of a filtering process at a decoding end provided by an embodiment of the present application.
  • FIG. 13 is a schematic diagram of a comparison of test results of predictive transformation under CY test conditions provided by the embodiment of the present application.
  • FIG. 14 is a schematic diagram of a comparison of test results of lifting transformation under C1 test conditions provided by the embodiment of the present application.
  • Fig. 15 is a schematic diagram of comparison of test results of RAHT transformation under C1 test conditions provided by the embodiment of the present application.
  • Fig. 16 is a schematic diagram of test results of lifting transform under the C2 test condition provided by the embodiment of the present application.
  • Fig. 17 is a schematic diagram of test results of RAHT transformation under C2 test conditions provided by the embodiment of the present application.
  • FIG. 18 is a schematic diagram of the composition and structure of an encoder provided by an embodiment of the present application.
  • FIG. 19 is a schematic diagram of a specific hardware structure of an encoder provided by an embodiment of the present application.
  • FIG. 20 is a schematic structural diagram of a decoder provided in an embodiment of the present application.
  • FIG. 21 is a schematic diagram of a specific hardware structure of a decoder provided in an embodiment of the present application.
  • FIG. 22 is a schematic diagram of the composition and structure of an encoding and decoding system provided by an embodiment of the present application.
  • References to “some embodiments” describe a subset of all possible embodiments, but it is understood that “some embodiments” may be the same subset or different subsets of all possible embodiments, and can be combined with each other without conflict.
  • first ⁇ second ⁇ third involved in the embodiment of the present application is only used to distinguish similar objects, and does not represent a specific ordering of objects. Understandably, “first ⁇ second ⁇ The specific order or sequence of "third” may be interchanged where permitted so that the embodiments of the application described herein can be implemented in an order other than that illustrated or described herein.
  • G-PCC: Geometry-based Point Cloud Compression
  • V-PCC (or VPCC): Video-based Point Cloud Compression
  • RAHT: Region Adaptive Hierarchical Transform
  • PSNR: Peak Signal to Noise Ratio
  • MMSE: Minimum Mean Squared Error
  • Luminance component: L or Y
  • Red chroma component: Cr
  • Point cloud is a three-dimensional representation of the surface of an object.
  • the point cloud (data) on the surface of an object can be collected through acquisition equipment such as photoelectric radar, laser radar, laser scanner, and multi-view camera.
  • Point cloud refers to a collection of massive three-dimensional points, and the points in the point cloud can include point location information and point attribute information.
  • the point position information may be three-dimensional coordinate information of the point.
  • the location information of a point may also be referred to as geometric information of a point.
  • the attribute information of a point may include color information and/or reflectivity and the like.
  • The color information may be information in any color space.
  • color information may be RGB information. Wherein, R represents red (Red, R), G represents green (Green, G), and B represents blue (Blue, B).
  • the color information may be luminance chrominance (YCbCr, YUV) information. Among them, Y represents brightness, Cb(U) represents blue chroma, and Cr(V) represents red chroma.
  • the points in the point cloud can include the three-dimensional coordinate information of the point and the laser reflection intensity (reflectance) of the point.
  • the points in the point cloud may include the three-dimensional coordinate information of the point and the color information of the point.
  • the points in the point cloud may include the three-dimensional coordinate information of the point, the laser reflection intensity (reflectance) of the point, and the color information of the point.
  • Point clouds can be divided in the following ways:
  • The first type, static point cloud: the object is stationary, and the device for obtaining the point cloud is also stationary;
  • The second type, dynamic point cloud: the object is moving, but the device for obtaining the point cloud is stationary;
  • The third type, dynamically acquired point cloud: the device for acquiring the point cloud is in motion.
  • For example, according to the purpose of the point cloud, it can be divided into two categories:
  • Category 1: machine perception point clouds, which can be used in scenarios such as autonomous navigation systems, real-time inspection systems, geographic information systems, visual sorting robots, and emergency rescue robots;
  • Category 2: human-eye perception point clouds, which can be used in point cloud application scenarios such as digital cultural heritage, free viewpoint broadcasting, 3D immersive communication, and 3D immersive interaction.
  • Since the point cloud is a collection of massive points, storing the point cloud not only consumes a large amount of memory, but is also not conducive to transmission, and there is no bandwidth large enough to support direct transmission of the point cloud at the network layer without compression. Therefore, it is necessary to compress the point cloud.
  • The point cloud coding framework that compresses the point cloud can be the G-PCC codec framework or the V-PCC codec framework provided by the Moving Picture Experts Group (MPEG), or the AVS-PCC codec framework provided by the Audio Video Standard (AVS).
  • the G-PCC codec framework can be used to compress the first type of static point cloud and the third type of dynamically acquired point cloud
  • the V-PCC codec framework can be used to compress the second type of dynamic point cloud.
  • the description here mainly focuses on the G-PCC codec framework.
  • each slice is independently encoded.
  • FIG. 1 is a schematic diagram of a composition framework of a G-PCC encoder. As shown in Figure 1, this G-PCC encoder is applied to a point cloud encoder.
  • the point cloud data is divided into multiple slices through slice division first.
  • the geometric information of the point cloud and the attribute information corresponding to each point cloud are encoded separately.
  • Coordinate transformation is performed on the geometric information so that the entire point cloud is contained in a bounding box, followed by quantization; this quantization step mainly plays a scaling role.
  • Due to the rounding of quantization, the geometric information of some points becomes identical, so parameters are used to decide whether to remove duplicate points; the process of quantizing and removing duplicate points is also called the voxelization process. Octree division is then performed on the bounding box. In the octree-based geometric information encoding process, the bounding box is divided into 8 sub-cubes, and the non-empty sub-cubes (those containing points of the point cloud) continue to be divided into 8 sub-cubes until the resulting leaf nodes are unit cubes.
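The octree division described above — split the bounding box into 8 sub-cubes and recurse into non-empty ones — can be sketched as follows. This toy version only counts occupied leaf voxels (so duplicate points collapse into one voxel) and omits the occupancy entropy coding a real encoder performs.

```python
def subdivide(points, origin, size):
    """Recursive octree occupancy sketch.

    points: list of integer (x, y, z) tuples inside the cubic node at
    `origin` with side length `size` (a power of two). Splits the node
    into 8 children and recurses into non-empty ones until unit-size
    leaves; returns the number of occupied leaf voxels.
    """
    if not points:
        return 0
    if size == 1:
        return 1
    half = size // 2
    count = 0
    for ox in (0, half):
        for oy in (0, half):
            for oz in (0, half):
                child_origin = (origin[0] + ox, origin[1] + oy, origin[2] + oz)
                child = [p for p in points
                         if all(child_origin[i] <= p[i] < child_origin[i] + half
                                for i in range(3))]
                count += subdivide(child, child_origin, half)
    return count
```

Note how two coincident points occupy a single leaf voxel, mirroring the duplicate-point removal of voxelization.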
  • In trisoup-based geometric information encoding, octree division is also performed first; but different from octree-based geometric information encoding, trisoup does not need to divide the point cloud step by step into unit cubes with a side length of 1×1×1, and instead stops dividing at sub-blocks (blocks) with a side length of W.
  • Based on the surface formed by the point cloud distribution in each block, at most twelve intersection points (vertices) generated by the surface and the twelve edges of the block are obtained; the vertices are arithmetically encoded (surface fitting based on the intersection points) to generate a binary geometric bit stream, that is, a geometric code stream. The vertices are also used in the geometric reconstruction process, and the reconstructed geometric information is used when encoding the attributes of the point cloud.
  • After geometric encoding is completed and the geometric information is reconstructed, color conversion is performed to convert the color information (that is, the attribute information) from the RGB color space to the YUV color space. Then, the point cloud is recolored with the reconstructed geometric information so that the unencoded attribute information corresponds to the reconstructed geometric information.
  • Attribute coding is mainly carried out for color information. In the process of color information coding, there are mainly two transformation methods, one is distance-based lifting transformation that relies on LOD division, and the other is direct RAHT transformation.
  • Both methods transform the color information from the spatial domain to the frequency domain, obtain high-frequency and low-frequency coefficients through the transformation, and finally quantize the coefficients (that is, produce quantized coefficients).
  • The geometric encoding data processed by octree division and surface fitting and the attribute encoding data processed by coefficient quantization are combined into slices, and then the vertex coordinates of each block are coded sequentially (that is, arithmetic coding) to generate a binary attribute bit stream, that is, an attribute code stream.
  • FIG. 2 is a schematic diagram of a composition framework of a G-PCC decoder. As shown in Fig. 2, this G-PCC decoder is applied to a point cloud decoder. In the G-PCC decoding framework, for the obtained binary code stream, the geometric bit stream and the attribute bit stream in the binary code stream are first independently decoded.
  • When decoding the geometric bit stream, the geometric information of the point cloud is obtained through arithmetic decoding, octree synthesis, surface fitting, geometry reconstruction, and inverse coordinate transformation; when decoding the attribute bit stream, the attribute information of the point cloud is obtained through arithmetic decoding, inverse quantization, LOD-based inverse lifting transform or RAHT-based inverse transform, and inverse color conversion; the 3D image model of the point cloud data to be encoded is then restored based on the geometric information and attribute information.
  • LOD division is mainly used in two ways: Predicting Transform and Lifting Transform in point cloud attribute transformation.
  • the process of LOD division is after the geometric reconstruction of the point cloud.
  • the geometric coordinate information of the point cloud can be directly obtained.
  • The point cloud is divided into multiple LODs according to the Euclidean distance between the point cloud points; the colors of the points in each LOD are decoded in turn, the number of zeros in the zero-run encoding technique (represented by zero_cnt) is calculated, and then the residual value is decoded according to the value of zero_cnt.
  • The decoding operation is performed according to the zero-run encoding method used at the encoder.
  • Specifically, the first zero_cnt in the code stream is decoded; if it is greater than 0, it means there are zero_cnt consecutive residuals equal to 0. If zero_cnt is equal to 0, it means the attribute residual of this point is not 0; the corresponding residual value is then decoded, dequantized, and added to the color prediction value of the current point to obtain the reconstruction value of the point. This operation continues until all points of the point cloud are decoded.
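The zero_cnt decoding loop described above can be sketched as follows, treating the arithmetic decoder as a plain iterator of already-decoded symbols (a simplifying assumption) and ignoring dequantization and prediction.

```python
def zero_run_decode(stream, num_points):
    """Expand zero-run-coded attribute residuals.

    Reads a zero_cnt symbol: if it is greater than 0, the next
    zero_cnt residuals are all zero; if it equals 0, the next symbol
    is an explicit non-zero residual. `stream` is an iterator of
    decoded symbols (a stand-in for the arithmetic decoder).
    """
    residuals = []
    while len(residuals) < num_points:
        zero_cnt = next(stream)
        if zero_cnt > 0:
            residuals.extend([0] * zero_cnt)
        else:
            residuals.append(next(stream))
    return residuals[:num_points]
```

For instance, the symbol sequence 3, 0, 7, 2 expands to three zeros, an explicit residual of 7, then two more zeros.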
  • FIG. 3 is a schematic structural diagram of a zero-run encoding.
  • the current point will be used as the nearest neighbor of the point in the subsequent LOD, and the attribute prediction of the subsequent point will be performed using the color reconstruction value of the current point.
  • The existing G-PCC codec framework only performs basic reconstruction of the point cloud sequence, and no further processing is performed after reconstruction to improve the quality of the reconstructed point cloud color attributes. This may cause a relatively large difference between the reconstructed point cloud and the original point cloud and serious distortion, which affects the quality of the entire point cloud.
  • the embodiment of the present application proposes a codec method, which can affect the arithmetic coding and subsequent parts in the G-PCC coding framework, and can also affect the part after attribute reconstruction in the G-PCC decoding framework.
  • the embodiment of the present application proposes a coding method, which can be applied to the arithmetic coding shown in FIG. 1 and subsequent parts.
  • the embodiment of the present application also proposes a decoding method, which can be applied to the part after attribute reconstruction as shown in FIG. 2 .
  • In this way, the encoding end uses the initial point cloud and the reconstructed point cloud to calculate the filter coefficients for filtering processing, and after determining to perform filtering processing on the reconstructed point cloud, passes the corresponding filter coefficients to the decoder; correspondingly, the decoding end can directly decode the filter coefficients and then use them to filter the reconstructed point cloud, thereby optimizing the reconstructed point cloud and improving the quality of the point cloud.
  • When the encoder uses the neighboring points for filtering processing, the current point itself is taken into account at the same time, so that the filtered value also depends on the attribute value of the current point itself. When determining whether to filter the reconstructed point cloud, not only the PSNR performance index but also the rate-distortion cost is considered for the trade-off. In addition, the determination of the correspondence between the reconstructed point cloud and the initial point cloud in the cases of geometry loss and attribute loss is also proposed here, which not only widens the scope of application, but also improves the quality of the point cloud, saves bit rate, and improves the encoding and decoding efficiency.
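The rate-distortion trade-off mentioned above, where enabling the filter must justify the bits spent signalling the coefficients rather than the PSNR gain alone, can be sketched as a standard Lagrangian cost comparison. The exact cost function used by the encoder is not given here, so this is illustrative only; all names are hypothetical.

```python
def decide_filtering(dist_recon, dist_filtered, coeff_bits, lmbda):
    """Rate-distortion decision sketch for the filter flag.

    dist_recon: distortion of the unfiltered reconstructed cloud;
    dist_filtered: distortion after filtering;
    coeff_bits: bits needed to signal the filter coefficients;
    lmbda: Lagrange multiplier trading rate against distortion.
    Returns True when filtering wins the rate-distortion comparison.
    """
    cost_off = dist_recon                       # no coefficients signalled
    cost_on = dist_filtered + lmbda * coeff_bits
    return cost_on < cost_off
```

A large distortion reduction justifies the coefficient bits; a marginal one does not.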
  • FIG. 4 shows a first schematic flowchart of an encoding method provided in an embodiment of the present application.
  • the method may include:
  • S401 Determine an initial point cloud and a reconstructed point cloud corresponding to the initial point cloud.
  • the encoding method described in the embodiment of the present application specifically refers to the point cloud encoding method, which can be applied to a point cloud encoder (in the embodiment of the present application, it may be simply referred to as "encoder").
  • the initial point cloud and the reconstructed point cloud corresponding to the initial point cloud may be determined first, and then the initial point cloud and the reconstructed point cloud are used to calculate the filter coefficient.
  • the initial point cloud and the reconstructed point cloud are used to calculate the filter coefficient.
  • For a point in the initial point cloud, when encoding the point, it can be regarded as the point to be encoded in the initial point cloud, and there are multiple encoded points around it.
  • A point in the initial point cloud corresponds to one piece of geometric information and one piece of attribute information; wherein the geometric information represents the spatial position of the point, and the attribute information represents the attribute value of the point (such as a color component value).
  • the attribute information may include color components, specifically color information of any color space.
  • the attribute information may be color information in RGB space, may also be color information in YUV space, may also be color information in YCbCr space, etc., which are not specifically limited in this embodiment of the present application.
  • the color components may include a first color component, a second color component, and a third color component.
  • If the color components conform to the RGB color space, the first, second, and third color components are, in turn, the R component, the G component, and the B component; if the color components conform to the YUV color space, they are, in turn, the Y component, the U component, and the V component; if the color components conform to the YCbCr color space, they are, in turn, the Y component, the Cb component, and the Cr component.
  • the attribute information of the point may be a color component, reflectivity or other attributes, which is not specifically limited in the embodiment of the present application.
  • For a point in the initial point cloud, the predicted value and the residual value of the attribute information of the point can be determined first, and then the predicted value and the residual value can be used to calculate the reconstruction value of the attribute information of the point, in order to construct the reconstructed point cloud.
  • Specifically, the geometric information and attribute information of multiple target neighbor points of the point can be used, combined with the geometric information of the point, to predict the attribute information of the point and obtain the corresponding predicted value, from which the corresponding reconstruction value can be determined.
  • Afterwards, the point can be used as a nearest neighbor of points in subsequent LODs, and the reconstruction value of the attribute information of the point can be used to predict the attributes of subsequent points, so that a reconstructed point cloud can be constructed.
  • the initial point cloud can be directly obtained through the codec program point cloud reading function, and the reconstructed point cloud is obtained after attribute encoding, attribute reconstruction and geometric compensation.
• the reconstructed point cloud in the embodiment of the present application can be the reconstructed point cloud output after decoding, or it can serve as a reference for decoding subsequent point clouds. In addition, the filter here can be used inside the prediction loop, that is, as an in-loop filter, in which case the filtered point cloud serves as a reference for decoding subsequent point clouds; or it can be used outside the prediction loop, that is, as a post filter, in which case it does not serve as a reference for decoding subsequent point clouds. This embodiment of the application does not specifically limit this.
• the embodiment of the present application is implemented in the case of attribute-lossy encoding, which can be divided into two cases: geometry-lossless with attribute-lossy encoding, and geometry-lossy with attribute-lossy encoding. Therefore, in a possible implementation manner, for S401, determining the initial point cloud and the reconstructed point cloud corresponding to the initial point cloud may include: if the first type of encoding method is used to encode and reconstruct the initial point cloud, obtaining a first reconstructed point cloud, and using the first reconstructed point cloud as the reconstructed point cloud.
  • the determining the initial point cloud and the reconstructed point cloud corresponding to the initial point cloud may include:
• if the second type of encoding method is used, a second reconstructed point cloud is obtained; geometry recovery processing is performed on the second reconstructed point cloud to obtain a restored reconstructed point cloud, and the restored reconstructed point cloud is used as the reconstructed point cloud.
• the first type of encoding is used to indicate geometry-lossless and attribute-lossy encoding of the initial point cloud;
• the second type of encoding is used to indicate geometry-lossy and attribute-lossy encoding of the initial point cloud.
  • performing geometry recovery processing on the second reconstructed point cloud to obtain the restored and reconstructed point cloud may include:
• the geometric coordinates of the reconstructed point cloud are first preprocessed; after geometric compensation processing, the geometric coordinates of each point are divided by the scaling scale in the configuration parameters, reversing the scaling applied during reconstruction.
• through this process, the reconstructed point cloud can be restored to the same bounding-box size as the original point cloud and to the same geometric position (that is, the offset of the geometric coordinates is eliminated).
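The scale-and-offset removal described above might be sketched as follows. The parameter names `scale` and `offset` are illustrative stand-ins for the configuration parameters, and the exact order of operations is an assumption, not a definitive codec implementation.

```python
import numpy as np

def restore_geometry(coords, scale, offset):
    """Undo the coding-side scaling and coordinate offset so the
    reconstructed cloud matches the original bounding box.

    coords: (N, 3) array of reconstructed geometric coordinates
    scale:  scalar scaling factor from the configuration parameters
    offset: per-axis offset applied before scaling at the encoder
    """
    return np.asarray(coords, dtype=float) / scale - offset
```

Under these assumptions, a point encoded as `(p + offset) * scale` is restored exactly to `p`.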
  • the method may further include: in the case of the first type of encoding method, determining the corresponding point of the point in the reconstructed point cloud in the initial point cloud according to the first preset search method, Establish the correspondence between the points in the reconstructed point cloud and the points in the initial point cloud.
  • the method may further include: in the case of the second type of encoding method, determining the corresponding point of the point in the reconstructed point cloud in the initial point cloud according to the second preset search method, and Construct a matching point cloud according to the determined corresponding points, use the matching point cloud as the initial point cloud, and establish the corresponding relationship between the points in the reconstructed point cloud and the points in the initial point cloud.
  • the determining the corresponding points of the points in the reconstructed point cloud in the initial point cloud according to the second preset search method may include:
• a second constant number of points is searched for in the initial point cloud using the second preset search method
  • the corresponding point of the current point is determined according to the multiple points corresponding to the minimum distance value.
  • determining the corresponding point of the current point according to the multiple points corresponding to the minimum distance value may include: randomly selecting a point from the multiple points corresponding to the minimum distance value, and the selected The point corresponding to the current point is determined as the corresponding point of the current point; or, the fusion process is performed on multiple points corresponding to the minimum distance value, and the fused point is determined as the corresponding point of the current point; this is not specifically limited here.
• the first preset search method is a K-nearest-neighbor search that searches for a first constant number of points;
• the second preset search method is a K-nearest-neighbor search that searches for a second constant number of points.
• for example, the first constant value may be equal to 1, and both searches may use the KNN search method, but this is not specifically limited here.
• in the geometry-lossless case, the number and coordinates of the points in the reconstructed point cloud have not changed, so the corresponding points in the initial point cloud can be found directly;
• in the geometry-lossy case, the number of points, their coordinates and even the bounding box of the entire reconstructed point cloud may change greatly depending on the set bit rate.
• in this way, a point cloud with the same number of points as the reconstructed point cloud, with a one-to-one correspondence between points, can be obtained and input to the filter together with the reconstructed point cloud as the effective initial point cloud.
  • S402 Determine a filter coefficient according to the initial point cloud and the reconstructed point cloud.
  • the points in the initial point cloud and the points in the reconstructed point cloud have a corresponding relationship.
  • the K target points corresponding to the points in the reconstructed point cloud may include the current point and (K-1) neighboring points adjacent to the current point, where K is an integer greater than 1.
• the determining the filter coefficients according to the initial point cloud and the reconstructed point cloud may include: determining the filter coefficients according to the initial point cloud and the K target points corresponding to the points in the reconstructed point cloud.
• the determination of the K target points corresponding to the points in the reconstructed point cloud may include:
• using the K-nearest-neighbor search method to search for a preset number of candidate points in the reconstructed point cloud;
• here, the first point can be any point in the reconstructed point cloud;
• selecting, from the candidate points, the (K-1) neighbor points closest to the first point, and determining the first point itself together with these (K-1) neighbor points as the final K target points. That is to say, the K target points include, in addition to the first point itself, the (K-1) neighbor points with the closest geometric distance to the first point; together they constitute the K target points corresponding to the first point in the reconstructed point cloud.
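The neighbor gathering above can be sketched with a brute-force NumPy search; `k_target_points` is an illustrative name, and a KD-tree would replace the O(N²) distance matrix for large clouds.

```python
import numpy as np

def k_target_points(points, K):
    """Return an (N, K) index array of target points per point:
    column 0 is the point itself, columns 1..K-1 are its (K-1)
    geometrically nearest neighbours."""
    points = np.asarray(points, dtype=float)
    diff = points[:, None, :] - points[None, :, :]
    d2 = np.sum(diff ** 2, axis=-1)              # (N, N) squared distances
    np.fill_diagonal(d2, np.inf)                 # exclude self from the neighbours
    nbrs = np.argsort(d2, axis=1)[:, :K - 1]     # (K-1) closest neighbours
    self_idx = np.arange(len(points))[:, None]
    return np.hstack([self_idx, nbrs])
```

The resulting index rows are what the later coefficient computation gathers reconstructed attribute values over.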
  • the filter here may be an adaptive filter, such as a neural network-based filter, a Wiener filter, etc., which are not specifically limited here.
  • the function of the Wiener filter is mainly to calculate filter coefficients, and at the same time determine whether the quality of the point cloud after the Wiener filter is improved. That is to say, the filter coefficients described in the embodiments of the present application may be coefficients processed by the Wiener filter, that is, the filter coefficients are coefficients output by the Wiener filter.
• the Wiener Filter (Wiener Filter) is a linear filter whose optimality criterion is minimization of the mean square error. Under certain constraints, the square of the difference between its output and a given function (usually called the expected output) reaches a minimum, and through mathematical operations the problem can finally be turned into solving a Toeplitz system of equations.
• the Wiener filter is also known as the least squares filter or the minimum mean square error filter.
  • Wiener filtering is a method of filtering a signal mixed with noise by using the correlation characteristics and spectral characteristics of a stationary random process, and it is currently one of the basic filtering methods.
  • the specific algorithm of Wiener filtering is as follows:
• the output when the filter length or order is M is as follows: y(n) = Σ_{k=0}^{M-1} h(k)·x(n−k),
  • M is the length or order of the filter
  • y(n) is the output signal
  • x(n) is a column of (mixed with noise) input signals.
• the error between the output signal and the desired signal d(n) can be calculated, represented by e(n), as follows: e(n) = d(n) − y(n),
• the Wiener filter takes the minimum mean square error as the objective function, so the objective function is expressed as follows: ξ = E[e²(n)],
• for the optimal coefficients, the derivative of the objective function with respect to the coefficients should be 0, namely ∂ξ/∂h(k) = 0, which leads to Rxx·H = Rxd,
• where Rxd is the cross-correlation vector between the input signal and the expected signal and Rxx is the autocorrelation matrix of the input signal. Therefore, computing the optimal solution according to the Wiener-Hopf equation, the filter coefficients H can be obtained: H = Rxx⁻¹·Rxd.
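The classical FIR Wiener solution above can be sketched directly from sample data. This is a minimal least-squares formulation assuming zero-padded input windows; it builds the sample correlation matrices rather than a Toeplitz estimate, for clarity.

```python
import numpy as np

def wiener_coefficients(x, d, M):
    """Order-M FIR Wiener filter coefficients for input x and desired d,
    via the normal equations Rxx H = Rxd."""
    x = np.asarray(x, dtype=float)
    N = len(x)
    # Each row is a length-M input window [x(n), x(n-1), ..., x(n-M+1)]
    X = np.array([[x[n - k] if n - k >= 0 else 0.0 for k in range(M)]
                  for n in range(N)])
    Rxx = X.T @ X            # sample autocorrelation matrix of the input
    Rxd = X.T @ np.asarray(d, dtype=float)  # sample cross-correlation vector
    return np.linalg.solve(Rxx, Rxd)
```

When the input already equals the desired signal, the solution collapses to an identity tap, as expected.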
  • the noisy signal and the expected signal are required; and in the embodiment of the present application, for the point cloud encoding and decoding framework, the two correspond to the reconstructed point cloud and the initial point cloud respectively, so that it can be determined Input to the Wiener filter.
• the determination of the filter coefficients according to the initial point cloud and the reconstructed point cloud may include: inputting the initial point cloud and the reconstructed point cloud into the Wiener filter for calculation, and outputting the filter coefficients.
  • the initial point cloud and the reconstructed point cloud are input into the Wiener filter for calculation, and the filter coefficients are output, which may include:
  • filter coefficients are determined.
  • the determining the filter coefficient based on the first attribute parameter and the second attribute parameter may include:
  • the coefficients are calculated according to the cross-correlation parameters and the auto-correlation parameters to obtain the filter coefficients.
• when determining the filter coefficients based on the initial point cloud and the reconstructed point cloud, the first attribute parameter and the second attribute parameter of a color component (such as the Y component, U component or V component) can be determined based on the original values and reconstructed values of that color component; the filter coefficients used for the filtering processing can then be further determined according to the first attribute parameter and the second attribute parameter, and output.
  • the first attribute parameter is determined according to the original value of the attribute information of at least one point in the initial point cloud; the second attribute parameter is reconstructed according to the attribute information of K target points corresponding to at least one point in the reconstructed point cloud The value is determined, where the K target points include the current point and (K-1) neighbor points adjacent to the current point.
  • the order of the Wiener filter is also involved.
  • the order of the Wiener filter may be set equal to M.
  • the values of M and K may be the same or different, which are not specifically limited here.
  • the filter type may be used to indicate the order of the filter, and/or the shape of the filter, and/or the dimension of the filter.
  • the shape of the filter includes rhombus, rectangle, etc.
  • the dimensionality of the filter includes one-dimensional, two-dimensional or even more dimensions.
  • different filter types can correspond to Wiener filters of different orders, for example, the values of orders can be 12, 32, 128 Wiener filters; different types can also be Filters corresponding to different dimensions, for example, one-dimensional filter, two-dimensional filter, etc., are not specifically limited here. That is to say, if a 16-order filter needs to be determined, then 16 points can be used to determine a 16-order asymmetric filter, or an 8-order one-dimensional symmetric filter, or other quantities (such as more special two-dimensional, three-dimensional filter, etc.), and the filter is not specifically limited here.
• when determining the filter coefficients according to the initial point cloud and the reconstructed point cloud, the first attribute parameter of a color component can first be determined according to the original values of that color component of the points in the initial point cloud; at the same time, the second attribute parameter of the color component can be determined according to the reconstructed values of that color component of the K target points corresponding to the points in the reconstructed point cloud; finally, the filter coefficient vector corresponding to the color component can be determined based on the first attribute parameter and the second attribute parameter.
  • the corresponding first attribute parameter and second attribute parameter are calculated for a color component, and the filter coefficient vector corresponding to the color component is determined by using the first attribute parameter and the second attribute parameter.
  • the filter coefficient vector of each color component can be obtained, so that the filter coefficient can be determined based on the filter coefficient vector of each color component.
• the cross-correlation parameter corresponding to the color component can be determined according to the first attribute parameter and the second attribute parameter corresponding to the color component; at the same time, the autocorrelation parameter corresponding to the color component can be determined according to the second attribute parameter; then the filter coefficient vector corresponding to the color component can be determined based on the cross-correlation parameter and the autocorrelation parameter; finally, all the color components can be traversed, and the filter coefficients determined by using the filter coefficient vectors corresponding to all the color components.
  • the order of the filter is K, that is, using the K target points corresponding to each point in the reconstructed point cloud to Calculate the optimal coefficient, that is, calculate the filter coefficient.
  • the K target points may include the point itself and (K-1) neighboring points adjacent to it.
• assume the number of points in the point cloud is n; use the vector X(n) to represent the original values of the Y component of all points in the initial point cloud, that is, X(n) is the first attribute parameter composed of the original values; use the matrix P(n,k) to represent the reconstructed values of the K target points corresponding to all points in the reconstructed point cloud under the same color component (such as the Y component), that is, P(n,k) is the second attribute parameter composed of the reconstructed values of the Y components of the K target points corresponding to all points in the reconstructed point cloud.
• the cross-correlation parameter C(k) and the autocorrelation parameter A(k,k) are obtained as follows: C(k) = P(n,k)ᵀ·X(n), A(k,k) = P(n,k)ᵀ·P(n,k).
• the optimal coefficients H(k) under the Y component, that is, the filter coefficient vector H(k) of the K-order Wiener filter under the Y component, are as follows: H(k) = A(k,k)⁻¹·C(k),
  • the U component and the V component can be traversed according to the above method, and finally the filter coefficient vector under the U component and the filter coefficient vector under the V component can be determined, and then the filter coefficient vectors under all color components can be used to determine the filter coefficient.
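The per-component coefficient computation can be sketched as a single linear solve. `component_filter_coeffs` is an illustrative name; `P` is the n-by-K matrix of reconstructed target-point values and `X` the length-n vector of original values for one color component, matching the notation above.

```python
import numpy as np

def component_filter_coeffs(P, X):
    """K-order Wiener coefficient vector for one colour component:
    H = A^{-1} C with A = P^T P (autocorrelation parameter) and
    C = P^T X (cross-correlation parameter)."""
    P = np.asarray(P, dtype=float)
    A = P.T @ P          # autocorrelation parameter A(k, k)
    C = P.T @ np.asarray(X, dtype=float)  # cross-correlation parameter C(k)
    return np.linalg.solve(A, C)
```

Calling this once per color component (Y, U, V) yields the three filter coefficient vectors that together form the filter coefficients.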
  • the process of determining the filter coefficients it is based on the YUV color space; if the initial point cloud or the reconstructed point cloud does not conform to the YUV color space (for example, RGB color space), then color space conversion is also required, so that it conforms to the YUV color space.
  • the method may further include: if the color components of the points in the initial point cloud conform to the RGB color space, performing color space conversion on the initial point cloud so that the color components of the points in the initial point cloud conform to the YUV color space; if the color components of the points in the reconstructed point cloud conform to the RGB color space, the color space conversion is performed on the reconstructed point cloud so that the color components of the points in the reconstructed point cloud conform to the YUV color space.
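A color-space conversion of this kind might look as follows. The patent does not fix a particular conversion matrix, so the full-range BT.601 RGB-to-YUV matrix used here is an assumption for illustration only.

```python
import numpy as np

# Full-range BT.601 RGB -> YUV matrix (an assumed choice; other matrices
# such as BT.709 are equally possible).
BT601 = np.array([[ 0.299,    0.587,    0.114  ],
                  [-0.14713, -0.28886,  0.436  ],
                  [ 0.615,   -0.51499, -0.10001]])

def rgb_to_yuv(rgb):
    """Convert an (N, 3) array of RGB values to YUV."""
    return np.asarray(rgb, dtype=float) @ BT601.T
```

For example, pure white maps to full luma with (near) zero chroma.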
• according to the first attribute parameter and the second attribute parameter, the filter coefficient vector corresponding to each color component can be further determined, and the filter coefficients are finally obtained by using the filter coefficient vectors of all the color components.
  • S403 Perform filtering processing on the K target points corresponding to the first point in the reconstructed point cloud by using the filter coefficient, and determine a filtered point cloud corresponding to the reconstructed point cloud.
  • the first point represents any point in the reconstructed point cloud.
  • the K target points may include the first point and (K-1) neighboring points adjacent to the first point, where K is an integer greater than 1.
  • the (K-1) neighbor points here specifically refer to the (K-1) neighbor points with the closest geometric distance to the first point.
  • the encoder may further use the filter coefficients to determine the filtered point cloud corresponding to the reconstructed point cloud. Specifically, in some embodiments, it is first necessary to determine K target points corresponding to the first point in the reconstructed point cloud, where the first point represents any point in the reconstructed point cloud. Then, filter the K target points corresponding to the first point in the reconstructed point cloud by using the filter coefficient, and determine the filtered point cloud corresponding to the reconstructed point cloud, which may include:
  • the filtered point cloud is determined according to the filter value of the attribute information of the at least one point.
  • determining K target points corresponding to the first point in the reconstructed point cloud may include:
• the K-nearest-neighbor search method is used to search for a preset number of candidate points in the reconstructed point cloud;
• that is, the K-nearest-neighbor search method can be used to search for a preset number of candidate points in the reconstructed point cloud, the distances between the first point and these candidate points are calculated, and the (K-1) neighbors closest to the first point are then selected from the candidates. In other words, in addition to the first point itself, the (K-1) neighbor points with the closest geometric distance to the first point are included, forming in total the K target points corresponding to the first point in the reconstructed point cloud.
• when using the filter coefficients to filter the K target points corresponding to the first point in the reconstructed point cloud, this may include: using the filter coefficient vector corresponding to a color component to filter the K target points corresponding to the first point in the reconstructed point cloud, obtaining the filtered value of that color component of the first point; and determining the filtered point cloud according to the filtered values of the color components of the points in the reconstructed point cloud.
  • the filter value of the color component of each point in the reconstructed point cloud can be determined according to the filter coefficient vector corresponding to the color component and the second attribute parameter; then based on the filter value of the color component of each point in the reconstructed point cloud, Get the filtered point cloud.
• when the Wiener filter is used to filter the reconstructed point cloud, a noisy signal and an expected signal are required; here the reconstructed point cloud can be used as the noisy signal and the initial point cloud as the expected signal. Therefore, the initial point cloud and the reconstructed point cloud can be input into the Wiener filter at the same time, that is, the input of the Wiener filter is the initial point cloud and the reconstructed point cloud, and the output of the Wiener filter is the filter coefficients; after the filter coefficients are obtained, the filtering of the reconstructed point cloud can be completed based on them, and the corresponding filtered point cloud obtained.
  • the filter coefficients are obtained based on the original point cloud and the reconstructed point cloud; therefore, applying the filter coefficients to the reconstructed point cloud can restore the original point cloud to the greatest extent.
• when filtering the reconstructed point cloud according to the filter coefficients, for each color component the filtered value corresponding to that color component can be determined.
• the filter coefficient vector H(k) under the Y component is applied to the reconstructed point cloud, that is, to the second attribute parameter P(n,k), and the filtered value R(n) under the Y component can be obtained as follows: R(n) = P(n,k)·H(k),
• the U component and V component can be traversed according to the above method, and finally the filtered value under the U component and the filtered value under the V component can be determined; the filtered values under all the color components can then be used to determine the filtered point cloud corresponding to the reconstructed point cloud.
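Applying the coefficients is a single matrix-vector product per component, following the R(n) = P(n,k)·H(k) notation above; the dictionary-of-components layout here is an illustrative choice, not the codec's data structure.

```python
import numpy as np

def apply_filter(P_by_component, H_by_component):
    """Filter each colour component: P is (n, K) reconstructed target-point
    values, H is the (K,) coefficient vector; returns a dict mapping each
    component name to its (n,) vector of filtered values."""
    return {c: np.asarray(P_by_component[c]) @ np.asarray(H_by_component[c])
            for c in P_by_component}
```

Traversing Y, U and V this way yields the filtered attribute values of the whole point cloud.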
  • S404 Determine filtering identification information according to the reconstructed point cloud and the filtered point cloud; wherein the filtering identification information is used to determine whether to perform filtering processing on the reconstructed point cloud.
  • the encoder may further determine the filter identification information corresponding to the initial point cloud according to the reconstructed point cloud and the filtered point cloud.
• the filter identification information can be used to determine whether to filter the reconstructed point cloud; further, the filter identification information can also be used to determine which one or several of the color components are filtered.
  • the color component may include at least one of the following: a first color component, a second color component, and a third color component; wherein, the to-be-processed component of the attribute information may be the first Any one of the color component, the second color component, and the third color component.
• referring to FIG. 5, it shows a second schematic flowchart of an encoding method provided by an embodiment of the present application. As shown in FIG. 5, the determination of the filter identification information may include:
• S501: Determine the first cost value of the to-be-processed component of the attribute information of the reconstructed point cloud, and determine the second cost value of the to-be-processed component of the attribute information of the filtered point cloud.
• the determination of the first cost value of the to-be-processed component of the attribute information of the reconstructed point cloud may include: calculating the cost value of the to-be-processed component of the attribute information of the reconstructed point cloud using the rate-distortion cost method, and using the obtained first rate-distortion value as the first cost value;
• the determining of the second cost value of the to-be-processed component of the attribute information of the filtered point cloud may include: calculating the cost value of the to-be-processed component of the attribute information of the filtered point cloud using the rate-distortion cost method, and using the obtained second rate-distortion value as the second cost value.
  • the filter identification information may be determined in a rate-distortion cost manner.
• the filter identification information of the component to be processed is determined according to the comparison result between the first cost value and the second cost value.
  • the cost value here may be a distortion value used for distortion measurement, or may be a rate-distortion cost result, etc., which is not specifically limited in this embodiment of the present application.
  • the embodiment of this application also performs a rate-distortion trade-off between the filtered point cloud and the reconstructed point cloud.
• the rate-distortion cost method can be used to calculate a rate-distortion value that comprehensively reflects the quality improvement and the increase of the bitstream.
  • the first rate-distortion value and the second rate-distortion value can respectively represent the respective rate-distortion cost results of the reconstructed point cloud and the filtered point cloud under the same color component, and are used to represent the compression efficiency of the filtered point cloud.
• the specific calculation formula is as follows: J = D + λ·R,
• where J is the rate-distortion value,
• D is the SSE between the initial point cloud and the reconstructed point cloud (or the filtered point cloud), that is, the sum of squared errors over corresponding points,
• R is the number of bits consumed, and λ is a quantity related to the quantization parameter QP.
• the first rate-distortion value of the to-be-processed component of the reconstructed point cloud and the second rate-distortion value of the to-be-processed component of the filtered point cloud can be calculated according to the above method and used as the first cost value and the second cost value respectively, and the filter identification information of the component to be processed is then determined according to the comparison result of the two.
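The rate-distortion comparison can be sketched as follows. The function names, the treatment of the coefficient bits as a single `extra_bits` input, and the externally supplied `lam` (the QP-dependent λ) are all illustrative assumptions.

```python
import numpy as np

def rd_cost(original, candidate, bits, lam):
    """J = D + lambda * R with D the SSE against the initial cloud."""
    D = float(np.sum((np.asarray(original) - np.asarray(candidate)) ** 2))
    return D + lam * bits

def should_filter(original, reconstructed, filtered, extra_bits, lam):
    """True if filtering lowers the rate-distortion cost; the filtered
    cloud pays extra bits for signalling the filter coefficients."""
    j_rec = rd_cost(original, reconstructed, 0.0, lam)
    j_flt = rd_cost(original, filtered, extra_bits, lam)
    return j_flt < j_rec
```

When the filtered values are much closer to the original than the reconstruction, the distortion saving outweighs the coefficient overhead and the comparison favors filtering.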
• the determination of the first cost value of the to-be-processed component of the attribute information of the reconstructed point cloud may include: calculating the cost value of the to-be-processed component of the attribute information of the reconstructed point cloud using the rate-distortion cost method to obtain the first rate-distortion value; calculating the performance value of the to-be-processed component of the attribute information of the reconstructed point cloud using a preset performance measurement index to obtain the first performance value; and determining the first cost value according to the first rate-distortion value and the first performance value.
• the determining of the second cost value of the to-be-processed component of the attribute information of the filtered point cloud may include: calculating the cost value of the to-be-processed component of the attribute information of the filtered point cloud using the rate-distortion cost method to obtain the second rate-distortion value; calculating the performance value of the to-be-processed component of the attribute information of the filtered point cloud using the preset performance measurement index to obtain a second performance value; and determining the second cost value according to the second rate-distortion value and the second performance value.
• the performance value may be, for example, a PSNR value.
  • the first performance value and the second performance value may represent respective encoding and decoding performances of the reconstructed point cloud and the filtered point cloud under the same color component.
  • the first performance value may be the PSNR value of the color component of the points in the reconstructed point cloud
  • the second performance value may be the PSNR value of the color components of the points in the filtered point cloud.
• not only can the PSNR value be considered to determine whether to perform filtering at the decoding end, but the rate-distortion trade-off before and after filtering can also be performed in a rate-distortion cost manner.
• the first performance value and the first rate-distortion value of the to-be-processed component of the reconstructed point cloud can be calculated according to the above method to obtain the first cost value; the second performance value and the second rate-distortion value of the to-be-processed component of the filtered point cloud are calculated to obtain the second cost value; and the filter identification information of the component to be processed is then determined according to the comparison result of the first cost value and the second cost value.
• S502: Determine the filter identification information of the component to be processed according to the first cost value and the second cost value.
• the determining the filter identification information of the component to be processed according to the first cost value and the second cost value may include:
• if the second cost value is less than the first cost value, determining that the value of the filter identification information of the component to be processed is the first value;
• if the second cost value is greater than the first cost value, determining that the value of the filter identification information of the component to be processed is the second value.
• if the second cost value is equal to the first cost value, it may be determined that the value of the filter identification information of the component to be processed is the first value; or it may also be determined to be the second value.
  • the filter identification information of the component to be processed indicates that the component to be processed of the attribute information of the reconstructed point cloud is to be filtered; or, If the value of the filtering identification information of the component to be processed is the second value, it may be determined that the filtering identification information of the component to be processed indicates that no filtering process is performed on the component to be processed of the attribute information of the reconstructed point cloud.
• the determining the filter identification information of the component to be processed according to the first cost value and the second cost value may include:
  • the second performance value is greater than the first performance value and the second rate-distortion value is less than the first rate-distortion value, then determine that the value of the filter identification information of the component to be processed is the first value;
  • the second performance value is smaller than the first performance value, it is determined that the value of the filter identification information of the component to be processed is a second value.
• the determining the filter identification information of the component to be processed according to the first cost value and the second cost value may include:
  • the second performance value is greater than the first performance value and the second rate-distortion value is less than the first rate-distortion value, then determine that the value of the filter identification information of the component to be processed is the first value;
  • the second rate-distortion value is greater than the first rate-distortion value, it is determined that the value of the filter identification information of the component to be processed is a second value.
  • If the second performance value is equal to the first performance value, or the second rate-distortion value is equal to the first rate-distortion value, it may be determined that the value of the filter identification information of the component to be processed is the first value; or, it may also be determined that the value of the filter identification information of the component to be processed is the second value.
  • Here, if the value of the filter identification information of the component to be processed is the first value, the filter identification information of the component to be processed indicates that the component to be processed of the attribute information of the reconstructed point cloud is to be filtered; or, if the value of the filter identification information of the component to be processed is the second value, the filter identification information of the component to be processed indicates that no filtering process is performed on the component to be processed of the attribute information of the reconstructed point cloud.
  • When the component to be processed is a color component, according to the first performance value, the first rate-distortion value, the second performance value and the second rate-distortion value, it can be determined whether, comparing the filtered point cloud with the reconstructed point cloud, the performance value of one or more of these color components increases while the rate-distortion cost decreases.
  • In some embodiments, the determining of the filter identification information of the component to be processed according to the first cost value and the second cost value may include:
  • if the second performance value is smaller than the first performance value, or the second rate-distortion value is greater than the first rate-distortion value, determining that the value of the filter identification information of the first color component is the second value.
  • In some embodiments, the determining of the filter identification information of the component to be processed according to the first cost value and the second cost value may include:
  • if the second performance value is smaller than the first performance value, or the second rate-distortion value is greater than the first rate-distortion value, determining that the value of the filter identification information of the second color component is the second value.
  • In some embodiments, the determining of the filter identification information of the component to be processed according to the first cost value and the second cost value may include:
  • if the second performance value is smaller than the first performance value, or the second rate-distortion value is greater than the first rate-distortion value, determining that the value of the filter identification information of the third color component is the second value.
  • Taking the PSNR and rate-distortion cost (Cost) of the Y component as an example: when determining the filter identification information of the Y component according to the first performance value, the first rate-distortion value, the second performance value and the second rate-distortion value, if the PSNR1 corresponding to the reconstructed point cloud is greater than the PSNR2 corresponding to the filtered point cloud, or the Cost1 corresponding to the reconstructed point cloud is less than the Cost2 corresponding to the filtered point cloud (that is, the PSNR of the Y component decreases after filtering, or the rate-distortion cost of the Y component increases after filtering), the filtering effect can be considered poor.
  • In this case, the filter identification information of the Y component is set to indicate that the Y component is not to be filtered; correspondingly, if the PSNR1 corresponding to the reconstructed point cloud is less than the PSNR2 corresponding to the filtered point cloud and the Cost1 corresponding to the reconstructed point cloud is greater than the Cost2 corresponding to the filtered point cloud (that is, the PSNR of the Y component improves after filtering and the rate-distortion cost decreases), the filtering effect can be considered good.
  • In this case, the filter identification information of the Y component is set to indicate that the Y component is to be filtered.
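To make the decision rule above concrete, the following is a minimal Python sketch of the per-component judgment. It is illustrative only: the function name and the sample PSNR/Cost numbers are assumptions, and an actual encoder would obtain these values from real encoding passes.

```python
def decide_filter_flag(psnr_recon, cost_recon, psnr_filtered, cost_filtered):
    """Return 1 (the first value: filter this component) only when filtering
    both raises the PSNR and lowers the rate-distortion cost; otherwise
    return 0 (the second value: do not filter this component)."""
    if psnr_filtered > psnr_recon and cost_filtered < cost_recon:
        return 1
    return 0

# Filtering improves the Y component here but hurts the U component.
y_flag = decide_filter_flag(psnr_recon=35.0, cost_recon=120.0,
                            psnr_filtered=35.4, cost_filtered=118.5)
u_flag = decide_filter_flag(psnr_recon=40.2, cost_recon=80.0,
                            psnr_filtered=40.0, cost_filtered=81.0)
```

Note that with this rule the equal-PSNR or equal-cost case falls through to 0; as noted above, the equal case may by convention be assigned either value.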
  • The embodiment of the present application can then traverse the U component and the V component according to the above method, and finally determine the filter identification information of the U component and the filter identification information of the V component, so that the final filter identification information can be determined using the filter identification information of all the color components.
  • In some embodiments, the determining of the filter identification information using the filter identification information of the component to be processed may include: when the component to be processed is a color component, determining the filter identification information of the first color component, the filter identification information of the second color component and the filter identification information of the third color component; and obtaining the filter identification information according to the filter identification information of the first color component, the filter identification information of the second color component and the filter identification information of the third color component.
  • The filter identification information can be in the form of an array, specifically a 1×3 array, composed of the filter identification information of the first color component, the filter identification information of the second color component and the filter identification information of the third color component.
  • Exemplarily, the filter identification information can be represented by [y u v], where y represents the filter identification information of the first color component, u represents the filter identification information of the second color component, and v represents the filter identification information of the third color component.
  • Since the filter identification information includes the filter identification information of each color component, the filter identification information can be used not only to determine whether to perform filtering processing on the reconstructed point cloud, but also to determine specifically which color component is filtered.
  • the method may also include:
  • if the value of the filter identification information of the first color component is the first value, determining to perform filtering processing on the first color component of the reconstructed point cloud; if the value of the filter identification information of the first color component is the second value, determining not to filter the first color component of the reconstructed point cloud; or,
  • if the value of the filter identification information of the second color component is the first value, determining to perform filtering processing on the second color component of the reconstructed point cloud; if the value of the filter identification information of the second color component is the second value, determining not to filter the second color component of the reconstructed point cloud; or,
  • if the value of the filter identification information of the third color component is the first value, determining to perform filtering processing on the third color component of the reconstructed point cloud; if the value of the filter identification information of the third color component is the second value, determining not to filter the third color component of the reconstructed point cloud.
  • That is, if the filter identification information corresponding to a certain color component is the first value, it can indicate that the color component is to be filtered; if the filter identification information corresponding to a certain color component is the second value, it can indicate that the color component is not to be filtered.
  • the method may also include:
  • if at least one of the filter identification information of the first color component, the filter identification information of the second color component and the filter identification information of the third color component has the first value, determining that the filter identification information indicates that the reconstructed point cloud is to be filtered;
  • if the filter identification information of the first color component, the filter identification information of the second color component and the filter identification information of the third color component all have the second value, determining that the filter identification information indicates that no filtering process is performed on the reconstructed point cloud.
  • That is, if all the filter identification information corresponding to these color components has the second value, it indicates that none of these color components is filtered, and it can be determined that the filter identification information indicates that no filtering process is performed on the reconstructed point cloud; correspondingly, if at least one of the filter identification information of these color components has the first value, it indicates that at least one color component is to be filtered, that is, it can be determined that the filter identification information indicates that the reconstructed point cloud is to be filtered.
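The combination rule just described is a logical OR over the three per-component flags; a small sketch (the function name is an assumption):

```python
def point_cloud_needs_filtering(flags):
    """`flags` is the [y, u, v] filter identification array: the reconstructed
    point cloud is filtered if at least one flag carries the first value (1);
    it is not filtered only if all three carry the second value (0)."""
    return any(f == 1 for f in flags)

# [1 0 1] -> filter the point cloud; [0 0 0] -> do not filter it.
```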
  • the first value and the second value are different, and the first value and the second value may be in the form of parameters or numbers.
  • the identification information corresponding to these color components may be a parameter written in a profile (profile), or a value of a flag (flag), which is not specifically limited here.
  • Exemplarily, the first value can be set to 1 and the second value to 0; or, the first value can be set to true and the second value to false; this is not specifically limited here.
  • If the filter identification information indicates that the reconstructed point cloud is to be filtered, the filter identification information can be written into the code stream, and the filter coefficients can optionally also be written into the code stream.
  • Taking the filter identification information [1 0 1] as an example: since the value of the filter identification information of the first color component is 1, the first color component of the reconstructed point cloud needs to be filtered;
  • the value of the filter identification information of the second color component is 0, so the second color component of the reconstructed point cloud does not need to be filtered;
  • and since the value of the filter identification information of the third color component is 1, the third color component needs to be filtered. At this time, only the filter coefficients corresponding to the first color component and the filter coefficients corresponding to the third color component need to be written into the code stream; there is no need to write the filter coefficients corresponding to the second color component into the code stream.
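The selective signalling in the [1 0 1] example can be sketched as follows; the function name and the placeholder coefficient lists are illustrative assumptions.

```python
def coefficients_to_write(flags, coeffs):
    """Keep only the filter coefficient sets of the color components whose
    flag is 1; coefficients of unfiltered components are not written into
    the code stream."""
    return [c for f, c in zip(flags, coeffs) if f == 1]

# With filter identification information [1 0 1], only the coefficients of
# the first and third color components are signalled.
written = coefficients_to_write([1, 0, 1],
                                [["y-coeffs"], ["u-coeffs"], ["v-coeffs"]])
```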
  • the method may further include: if the quantization parameter of the reconstructed point cloud is greater than a preset threshold value, determining that the filtering identification information indicates that no filtering process is performed on the reconstructed point cloud.
  • In some embodiments, if the filter identification information indicates that no filtering process is performed on the reconstructed point cloud, the method may further include: encoding only the filter identification information, and writing the obtained encoded bits into the code stream.
  • That is to say, the encoder may further determine the filter identification information according to the reconstructed point cloud and the filtered point cloud. If the filter identification information indicates that the reconstructed point cloud is to be filtered, the filter identification information and the filter coefficients can be written into the code stream; if the filter identification information indicates that the reconstructed point cloud is not to be filtered, there is no need to write the filter coefficients into the code stream, and only the filter identification information needs to be written into the code stream, so that it can later be transmitted to the decoder through the code stream.
  • In some embodiments, the method may also include: determining the predicted values of the attribute information of the points in the initial point cloud; determining the residual values of the attribute information of the points in the initial point cloud according to the original values and the predicted values of the attribute information of the color components of the points in the initial point cloud; and encoding the residual values of the attribute information of the points in the initial point cloud, writing the obtained encoded bits into the code stream. That is to say, after the encoder determines the residual values of the attribute information, it also needs to write the residual values of the attribute information into the code stream, so that they can subsequently be transmitted to the decoder through the code stream.
  • For the order of the Wiener filter, it is assumed here that the order is the same as the value of K, and K can be set equal to 16 or other values.
  • For different values of K, the BD-Rate gain between the pre-filtering point cloud and the filtered point cloud under the Lifting Transform method, as well as the running time, were tested; the test results are shown in Figure 6 and Figure 7.
  • In FIG. 6, (a) shows the variation of the BD-Rate performance index corresponding to the Y component with the K value, (b) shows the variation of the BD-Rate performance index corresponding to the U component with the K value, and (c) shows the variation of the BD-Rate performance index corresponding to the V component with the K value; it can be seen that the BD-Rate performance index changes slowly once the K value becomes large (for example, greater than about 20).
  • FIG. 7 shows the variation of the total encoding and decoding time over all code rates with the value of K, from which it can be seen that the encoding and decoding time is almost linear in the K value.
  • the K value can be set to 16, which can not only ensure the filtering effect, but also reduce the time complexity to a certain extent.
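The core filtering step, applying an order-K filter to one point over its K target points (the point itself plus its K-1 nearest neighbours), can be sketched as below. This is a toy illustration with K = 4 and hand-picked weights; in the described method K = 16 and the coefficients come from Wiener estimation against the initial point cloud.

```python
def filter_point(values, coeffs):
    """Filtered value of one point: a weighted sum over the attribute values
    of its K target points (the current point itself listed first, then the
    K-1 nearest neighbours)."""
    assert len(values) == len(coeffs)
    return sum(c * v for c, v in zip(coeffs, values))

values = [100.0, 104.0, 96.0, 102.0]   # current point + 3 neighbours
coeffs = [0.4, 0.2, 0.2, 0.2]          # placeholder filter coefficients
filtered = filter_point(values, coeffs)
```

Because the current point's own value carries a coefficient, the filtered value also depends on the point itself and not only on its neighbours, which is the property highlighted by this application.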
  • the method may also include:
  • the value of K is determined according to the neighborhood difference value of points in the reconstructed point cloud.
  • the neighborhood difference value may be calculated based on a component difference between a color component to be filtered of a point and a color component to be filtered of at least one neighboring point.
  • In some embodiments, determining the value of K according to the neighborhood difference values of the points in the reconstructed point cloud may also include: if the neighborhood difference values of the points in the reconstructed point cloud satisfy a first preset interval, selecting the first type of filter to perform filtering processing on the reconstructed point cloud; if the neighborhood difference values of the points in the reconstructed point cloud satisfy a second preset interval, selecting the second type of filter to perform filtering processing on the reconstructed point cloud; wherein the order of the first type of filter is different from the order of the second type of filter.
  • In addition, a possible implementation is to select a different K when filtering point clouds of different sizes. Specifically, a threshold can be set on the number of points for judgment, so as to realize adaptive selection of K: when the number of points in the point cloud is small, K can be appropriately increased to focus on improving quality; when the number of points in the point cloud is large, K can be appropriately reduced, giving priority to ensuring that memory is sufficient and that the running speed is not too slow.
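A threshold-based adaptive choice of K, as suggested above, might look like the following sketch. The thresholds and the concrete K values (32/16/8) are assumptions for illustration, not values mandated by the described method.

```python
def select_k(num_points, low=100_000, high=1_000_000):
    """Pick the filter order K from the point count: a larger K for small
    clouds (quality first), a smaller K for large clouds (memory and speed
    first), and the default K = 16 in between."""
    if num_points < low:
        return 32
    if num_points > high:
        return 8
    return 16
```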
  • The embodiment of the present application proposes an optimized technique for performing Wiener filtering on the YUV components of the color reconstruction values of the codec, so as to better enhance the quality of point clouds.
  • In this method, when filtering the color components (such as the Y component, U component and V component) of the color reconstruction values of a point cloud sequence, not only the geometry-lossless, attribute-lossy case but also the geometry-lossy, attribute-lossy case is considered.
  • The point cloud Wiener filtering post-processing method is optimized in terms of the judgment of quality improvement, the selection of neighboring points and the size of K, so that the quality of the reconstructed point cloud output by the decoder can be selectively enhanced.
  • the increase in the bit size of the code stream is small, and the overall compression performance is improved.
  • The encoding method proposed in the embodiment of the present application is applicable to any point cloud sequence, and has a particularly prominent optimization effect for sequences with densely distributed points and sequences at a medium code rate.
  • The encoding method proposed in the embodiment of the present application can operate with all three attribute transform (Predicting transform, Lifting transform, RAHT) encoding methods, and has universal applicability. For the very few sequences with a poor filtering effect, this method does not affect the reconstruction process at the decoding end; apart from a slight increase in the running time at the encoding end, the impact of the filter identification information on the size of the attribute stream is negligible.
  • It should be noted that the Wiener filter proposed in the embodiment of this application can be used in the prediction loop, that is, as an in-loop filter, in which case the filtered result can be used as a reference for decoding subsequent point clouds; it can also be used outside the prediction loop, that is, as a post-processing filter, in which case it is not used as a reference for decoding subsequent point clouds. This is not specifically limited here.
  • When the Wiener filter proposed in the embodiment of the present application is an in-loop filter, after it is determined to perform filtering processing, the parameter information indicating the filtering process, such as the filter identification information, needs to be written into the code stream,
  • and the filter coefficients also need to be written into the code stream; correspondingly, after it is determined not to perform filtering processing, the parameter information indicating the filtering process, such as the filter identification information, may not be written into the code stream, and at the same time the filter coefficients are not written into the code stream.
  • When the Wiener filter proposed in the embodiment of the present application is a post-processing filter,
  • the filter coefficients corresponding to the filter may be located in a separate auxiliary information data unit (for example, supplementary enhancement information, SEI).
  • In this case, if the decoder does not obtain the supplementary enhancement information, it will not filter the reconstructed point cloud.
  • That is, the filter coefficients corresponding to the filter and other related information are contained in an auxiliary information data unit.
  • In addition, after it is determined to perform filtering processing, the parameter information indicating the filtering process, such as the filter identification information, needs to be written into the code stream,
  • and the filter coefficients also need to be written into the code stream; correspondingly, after it is determined not to perform filtering processing, the parameter information indicating the filtering process, such as the filter identification information, may not be written into the code stream, and at the same time the filter coefficients are not written into the code stream.
  • It should also be noted that one or more parts of the reconstructed point cloud can be selected for filtering; that is, the control range of the filter identification information can be the entire reconstructed point cloud or a certain part of the reconstructed point cloud, which is not specifically limited here.
  • This embodiment provides an encoding method, which is applied to an encoder.
  • In the method, the initial point cloud and the reconstructed point cloud corresponding to the initial point cloud are determined; the filter coefficients are determined according to the initial point cloud and the reconstructed point cloud; the filter coefficients are used to filter the K target points corresponding to a first point in the reconstructed point cloud, and the filtered point cloud corresponding to the reconstructed point cloud is determined, where the K target points include the first point and (K-1) neighbor points adjacent to the first point, K is an integer greater than 1, and the first point represents any point in the reconstructed point cloud; the filter identification information is determined according to the reconstructed point cloud and the filtered point cloud, where the filter identification information is used to determine whether to filter the reconstructed point cloud; and if the filter identification information indicates that the reconstructed point cloud is to be filtered, the filter identification information and the filter coefficients are encoded, and the obtained encoded bits are written into the code stream.
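At a very high level, what the encoder places into the attribute part of the code stream under this scheme can be sketched as follows; the dictionary layout and the names are illustrative assumptions rather than an actual bitstream syntax.

```python
def build_attribute_payload(flags, coeffs, residuals):
    """Residual values and filter flags are always signalled; the filter
    coefficients are signalled only when the flags indicate that the
    reconstructed point cloud is filtered (and only for flagged components)."""
    payload = {"residuals": residuals, "filter_flags": list(flags)}
    if any(flags):
        payload["filter_coeffs"] = [c for f, c in zip(flags, coeffs) if f]
    return payload

with_filtering = build_attribute_payload([1, 0, 0], [[0.9], [0.8], [0.7]], [1, -2])
without_filtering = build_attribute_payload([0, 0, 0], [[0.9], [0.8], [0.7]], [1, -2])
```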
  • In this way, when filtering a point, the current point itself is also taken into account, so that the filtered value also depends on the attribute value of the current point itself; and when determining whether to filter the reconstructed point cloud, not only the PSNR performance index but also the rate-distortion cost is considered for the trade-off. In addition, the determination of the correspondence between the reconstructed point cloud and the initial point cloud under geometry-lossy, attribute-lossy conditions is proposed here, which not only broadens the scope of application but also realizes optimization of the reconstructed point cloud, further improves point cloud quality, and can save bit rate and improve encoding and decoding efficiency.
  • The embodiment of the present application also provides a code stream, which is generated by bit encoding according to the information to be encoded; wherein the information to be encoded includes at least one of the following: the residual values of the attribute information of the points in the initial point cloud, the filter identification information and the filter coefficients.
  • In this way, the decoder obtains the residual values of the attribute information of the points in the initial point cloud by decoding and then constructs the reconstructed point cloud; the decoder also obtains the filter identification information by decoding, and can thereby determine whether to filter the reconstructed point cloud. Then, when the reconstructed point cloud needs to be filtered, the filter coefficients are obtained directly by decoding, and the reconstructed point cloud is filtered according to the filter coefficients, thereby realizing optimization of the reconstructed point cloud and improving the quality of the point cloud.
  • FIG. 8 shows a first schematic flowchart of a decoding method provided in an embodiment of the present application. As shown in Figure 8, the method may include:
  • S801: Decode the code stream, and determine the filter identification information; wherein the filter identification information is used to determine whether to perform filtering processing on the reconstructed point cloud corresponding to the initial point cloud.
  • The decoding method described in the embodiment of the present application specifically refers to a point cloud decoding method, which can be applied to a point cloud decoder (in the embodiment of the present application, it may simply be referred to as a "decoder").
  • the decoder may decode the code stream first, so as to determine the filter identification information.
  • the filter identification information may be used to determine whether to filter the reconstructed point cloud corresponding to the initial point cloud. Specifically, the filter identification information may also be used to determine which color component or color components in the reconstructed point cloud are to be filtered.
  • For a point in the initial point cloud, when the point is being decoded, it can be regarded as the point to be decoded in the initial point cloud, and there are multiple decoded points around it.
  • a point in the initial point cloud corresponds to a geometric information and an attribute information; wherein, the geometric information represents the spatial position of the point, and the attribute information represents the attribute value of the point (such as color component value).
  • the attribute information may include color components, specifically color information of any color space.
  • the attribute information may be color information in RGB space, may also be color information in YUV space, may also be color information in YCbCr space, etc., which are not specifically limited in this embodiment of the present application.
  • the color components may include a first color component, a second color component, and a third color component.
  • If the color components conform to the RGB color space, the first color component, the second color component and the third color component are, in turn, the R component, the G component and the B component; if the color components conform to the YUV color space, the first color component, the second color component and the third color component are, in turn, the Y component, the U component and the V component; if the color components conform to the YCbCr color space, the first color component, the second color component and the third color component are, in turn, the Y component, the Cb component and the Cr component.
  • the attribute information of the point may be a color component, reflectivity or other attributes, which is not specifically limited in the embodiment of the present application.
  • In a specific embodiment, the decoder can determine the residual values of the attribute information of the points in the initial point cloud by decoding the code stream, so as to construct the reconstructed point cloud. Therefore, in some embodiments, the method may also include:
  • according to the residual values of the attribute information obtained by decoding, a reconstructed point cloud is constructed.
  • For a point in the point cloud, the predicted value and the residual value of the attribute information of the point can be determined first, and then the reconstruction value of the attribute information of the point can be obtained by further calculation using the predicted value and the residual value, in order to construct the reconstructed point cloud.
  • Here, when determining the predicted value, the geometric information and attribute information of multiple target neighbor points of the point can be used, combined with the geometric information of the point, to predict the attribute information of the point and obtain the corresponding predicted value, after which the corresponding reconstruction value can be determined.
  • After the reconstruction value of the attribute information of the point is obtained, the point can serve as the nearest neighbor of points in subsequent LODs, and the reconstruction value of its attribute information can be used for the attribute prediction of subsequent points, so that the reconstructed point cloud can be constructed.
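The attribute reconstruction described above reduces, for each point, to adding the decoded residual to the predicted value; a minimal sketch (inverse quantization and clipping are deliberately omitted here):

```python
def reconstruct_attribute(predicted, residual):
    """Reconstruction value of a point's attribute = predicted value +
    decoded residual value."""
    return predicted + residual

# Two points whose predicted attribute values are 100 and 98.
recon = [reconstruct_attribute(p, r) for p, r in zip([100, 98], [3, -2])]
```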
  • In short, the initial point cloud can be obtained directly through the point cloud reading function of the codec program, and the reconstructed point cloud is obtained after attribute encoding, attribute reconstruction and geometric compensation.
  • It should be noted that the reconstructed point cloud in the embodiment of the present application can be the reconstructed point cloud output after decoding, or it can serve as a reference for decoding subsequent point clouds. In addition, the reconstructed point cloud here can be inside the prediction loop, that is, the filter acts as an in-loop filter and the result can be used as a reference for decoding subsequent point clouds; it can also be outside the prediction loop, that is, the filter acts as a post-processing filter and the result is not used as a reference for decoding subsequent point clouds; this embodiment of the application does not specifically limit this.
  • the filtering identification information may include filtering identification information of the component to be processed of the attribute information.
  • In some embodiments, decoding the code stream and determining the filter identification information may include: decoding the code stream and determining the filter identification information of the component to be processed; where the filter identification information of the component to be processed is used to indicate whether to filter the component to be processed of the attribute information of the reconstructed point cloud.
  • Accordingly, determining the filter identification information may specifically include: if the value of the filter identification information of the component to be processed is the first value, determining to perform filtering processing on the component to be processed of the attribute information of the reconstructed point cloud; if the value of the filter identification information of the component to be processed is the second value, determining not to perform filtering processing on the component to be processed of the attribute information of the reconstructed point cloud.
  • It should be noted that the filter identification information corresponding to each color component may be used to indicate whether to perform filtering processing on that color component. Therefore, in some embodiments, decoding the code stream and determining the filter identification information of the component to be processed may include: decoding the code stream and determining the filter identification information of the first color component, the filter identification information of the second color component and the filter identification information of the third color component;
  • where the filter identification information of the first color component is used to indicate whether to filter the first color component of the attribute information of the reconstructed point cloud,
  • the filter identification information of the second color component is used to indicate whether to filter the second color component of the attribute information of the reconstructed point cloud,
  • and the filter identification information of the third color component is used to indicate whether to filter the third color component of the attribute information of the reconstructed point cloud.
  • The filter identification information can be in the form of an array, specifically a 1×3 array, composed of the filter identification information of the first color component, the filter identification information of the second color component and the filter identification information of the third color component.
  • Exemplarily, the filter identification information can be represented by [y u v], where y represents the filter identification information of the first color component, u represents the filter identification information of the second color component, and v represents the filter identification information of the third color component.
  • Since the filter identification information includes the filter identification information corresponding to each color component, the filter identification information can be used not only to determine whether to filter the reconstructed point cloud, but also to determine specifically which color component or components are filtered.
  • determining the filter identification information may specifically include:
  • if the value of the filter identification information of the first color component is the first value, it is determined to perform filtering processing on the first color component of the attribute information of the reconstructed point cloud; if the value of the filter identification information of the first color component is the second value, it is determined not to filter the first color component of the attribute information of the reconstructed point cloud; or,
  • if the value of the filter identification information of the second color component is the first value, it is determined to perform filtering processing on the second color component of the attribute information of the reconstructed point cloud; if the value of the filter identification information of the second color component is the second value, it is determined not to filter the second color component of the attribute information of the reconstructed point cloud; or,
  • if the value of the filter identification information of the third color component is the first value, it is determined to perform filtering processing on the third color component of the attribute information of the reconstructed point cloud; if the value of the filter identification information of the third color component is the second value, it is determined not to filter the third color component of the attribute information of the reconstructed point cloud.
  • if the filter identification information corresponding to a certain color component is the first value, it indicates that filtering processing is performed on that color component; if the filter identification information corresponding to a certain color component is the second value, it indicates that that color component is not filtered.
  • the method may also include:
  • if at least one of the filter identification information of the first color component, the filter identification information of the second color component, and the filter identification information of the third color component is the first value, it is determined that the filter identification information indicates that the reconstructed point cloud is filtered;
  • if the filter identification information of the first color component, the filter identification information of the second color component, and the filter identification information of the third color component all have the second value, it is determined that the filter identification information indicates that no filtering processing is performed on the reconstructed point cloud.
  • if the filter identification information corresponding to all of these color components is the second value, it indicates that none of these color components is filtered, and it can be determined that the filter identification information indicates that no filtering processing is performed on the reconstructed point cloud; correspondingly, if at least one of the filter identification information of these color components is the first value, it indicates that filtering processing is performed on at least one color component, that is, it can be determined that the filter identification information indicates that the reconstructed point cloud is filtered.
  • the first value and the second value are different, and the first value and the second value may be in the form of parameters or numbers.
  • the identification information corresponding to these color components may be a parameter written in a profile (profile), or a value of a flag (flag), which is not specifically limited here.
  • the first value can be set to 1 and the second value can be set to 0; or, the first value can be set to true and the second value set to false; this is not specifically limited here.
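The component-wise decision described above can be sketched as follows. This is a hypothetical illustration, not the patent's implementation; it assumes the numeric convention above (1 as the first value meaning "filter", 0 as the second value meaning "do not filter"), and the helper names are made up.

```python
# Hypothetical sketch: interpreting the 1x3 filter identification array
# [y, u, v], where 1 (the first value) means "filter this component" and
# 0 (the second value) means "do not filter".

def should_filter(decision):
    """The whole reconstructed cloud is filtered if any component flag is set."""
    return any(flag == 1 for flag in decision)

def components_to_filter(decision):
    """Map the decision array onto the color component names."""
    names = ("Y", "U", "V")
    return [n for n, flag in zip(names, decision) if flag == 1]

# [0, 1, 1]: filter only the U and V components of the reconstructed cloud.
decision = [0, 1, 1]
print(should_filter(decision))         # True
print(components_to_filter(decision))  # ['U', 'V']
```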
  • if the filter identification information indicates that the reconstructed point cloud should be filtered, the filter coefficient used for filtering can be further determined.
  • the filter may be an adaptive filter, such as a neural network-based filter, a Wiener filter, etc., which is not specifically limited here.
  • taking the Wiener filter as an example, the filter coefficients described in the embodiments of the present application may be used for Wiener filtering; that is, the filter coefficients are coefficients for Wiener filtering processing.
  • the Wiener filter is a linear filter whose optimality criterion is minimization of the mean square error. Under certain constraints, the square of the difference between its output and a given function (usually called the expected output) is minimized, and through mathematical manipulation the problem can finally be reduced to solving a Toeplitz equation.
  • the Wiener filter is also known as the least squares filter or the minimum mean square error filter.
  • the filter coefficient may refer to a filter coefficient corresponding to the component to be processed.
  • decoding the code stream and determining the filter coefficient may include: if the value of the filtering identification information of the component to be processed is the first value, decode the code stream, and determine the filter coefficient corresponding to the component to be processed.
  • the filter coefficient may correspond to the first color component, the second color component, or the third color component.
  • decoding the code stream and determining the filtering coefficient may include:
  • if the value of the filter identification information of the first color component is the first value, then decode the code stream and determine the filter coefficient corresponding to the first color component; or,
  • if the value of the filter identification information of the second color component is the first value, then decode the code stream and determine the filter coefficient corresponding to the second color component; or,
  • if the value of the filter identification information of the third color component is the first value, then decode the code stream and determine the filter coefficient corresponding to the third color component.
  • the method may further include: if the filter identification information indicates that the reconstructed point cloud is not to be filtered, then the steps of decoding the code stream and determining the filter coefficient are not performed.
  • the decoder when the filter identification information indicates to perform filter processing on the reconstructed point cloud, the decoder can decode the code stream and directly obtain the filter coefficients. However, after the decoder decodes the code stream and determines the filter identification information, if the filter identification information indicates that a certain color component of the reconstructed point cloud is not to be filtered, then the decoder does not need to decode to obtain the filter coefficient vector corresponding to the color component.
  • S803 Perform filtering processing on the K target points corresponding to the first point in the reconstructed point cloud by using the filter coefficient, and determine a filtered point cloud corresponding to the reconstructed point cloud.
  • the K target points include the first point and (K-1) neighbor points adjacent to the first point, K is an integer greater than 1, and the first point represents any point in the reconstructed point cloud.
  • the (K-1) neighbor points here specifically refer to the (K-1) neighbor points with the closest geometric distance to the first point.
  • the filter coefficient can be further used to filter the reconstructed point cloud, so that the filtered point cloud corresponding to the reconstructed point cloud can be obtained.
  • taking the color components as an example of the attribute information, each color component of the reconstructed point cloud can be filtered separately.
  • the filter coefficients corresponding to the Y component, the U component, and the V component can be determined separately.
  • these filter coefficients together constitute the filter coefficients of the reconstructed point cloud.
  • the filter coefficient can be determined by a K-order filter; in this way, when the decoder performs filtering, it can perform a KNN search for each point in the reconstructed point cloud to determine the K target points corresponding to that point, and then use the filter coefficient of that point to filter.
  • FIG. 9 shows a second schematic flowchart of a decoding method provided by an embodiment of the present application. As shown in Figure 9, for S803, this step may include:
  • S901 Perform filtering processing on K target points corresponding to the first point in the reconstructed point cloud by using a filter coefficient, and determine a filter value of attribute information of the first point in the reconstructed point cloud.
  • S902 After determining the filter value of the attribute information of at least one point in the reconstructed point cloud, determine a filtered point cloud according to the filter value of the attribute information of at least one point.
  • the decoder may further use the filter coefficients to determine the filtered point cloud corresponding to the reconstructed point cloud. Specifically, in some embodiments, it is first necessary to determine K target points corresponding to the first point in the reconstructed point cloud, where the first point represents any point in the reconstructed point cloud. Then, filter the K target points corresponding to the first point in the reconstructed point cloud by using the filter coefficient to determine the filtered point cloud corresponding to the reconstructed point cloud.
  • determining K target points corresponding to the first point in the reconstructed point cloud may include:
  • the K-nearest-neighbor search method can be used to search a preset number of candidate points in the reconstructed point cloud, calculate the distances between the first point and these candidate points, and then select from these candidate points the (K-1) neighbor points closest to the first point; that is to say, in addition to the first point itself, the (K-1) neighbor points with the closest geometric distance to the first point are included, together forming the K target points corresponding to the first point in the reconstructed point cloud.
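The target-point selection just described can be sketched in a few lines. This is a minimal pure-Python illustration under the assumptions above (Euclidean distance, the point itself included among the K targets); function names are illustrative, and a real codec would use an accelerated KNN structure rather than a full sort.

```python
# Hypothetical sketch of the KNN step: for a given point, gather the point
# itself plus its (K-1) geometrically closest neighbours, forming the K
# target points that the filter operates on.

def k_target_points(points, index, k):
    """Return indices of the K target points for points[index]:
    the point itself and its (K-1) nearest neighbours."""
    px, py, pz = points[index]

    def dist2(j):
        x, y, z = points[j]
        return (x - px) ** 2 + (y - py) ** 2 + (z - pz) ** 2

    # Sort candidates by squared distance; the point itself (distance 0) comes first.
    order = sorted(range(len(points)), key=dist2)
    return order[:k]

cloud = [(0, 0, 0), (1, 0, 0), (0, 2, 0), (5, 5, 5)]
print(k_target_points(cloud, 0, 3))  # [0, 1, 2]
```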
  • when using the filter coefficient to filter the K target points corresponding to the first point in the reconstructed point cloud, it may include: using the filter coefficient vector corresponding to a color component to filter the K target points corresponding to the first point in the reconstructed point cloud, obtaining the filtered value of that color component of the first point in the reconstructed point cloud; and obtaining the filtered point cloud according to the filtered values of the color components of the points in the reconstructed point cloud.
  • the filter value of the color component of each point in the reconstructed point cloud can be determined according to the filter coefficient vector corresponding to the color component and the second attribute parameter; then based on the filter value of the color component of each point in the reconstructed point cloud, Get the filtered point cloud.
  • the second attribute parameter corresponding to the attribute information can first be determined according to the order of the Wiener filter and the reconstruction values of the attribute information of the K target points corresponding to each point in the reconstructed point cloud; then the filter value of the attribute information is determined according to the filter coefficient and the second attribute parameter; finally, the filtered point cloud can be obtained based on the filter values of the attribute information.
  • assuming the attribute information is a color component in YUV space, the second attribute parameter (expressed as P(n,k)) can be determined based on the reconstruction values of the color component (such as the Y component, U component, or V component) combined with the order of the Wiener filter; that is, the second attribute parameter represents the reconstruction values of the color components of the K target points corresponding to each point in the reconstructed point cloud.
  • the filter type may be used to indicate the order of the filter, and/or the shape of the filter, and/or the dimension of the filter.
  • the shape of the filter includes rhombus, rectangle, etc.
  • the dimensionality of the filter includes one-dimensional, two-dimensional or even more dimensions.
  • different filter types may correspond to Wiener filters of different orders, for example, the order may take values such as 12, 32, or 128; different types may also correspond to filters of different dimensions, for example, a one-dimensional filter, a two-dimensional filter, etc., which are not specifically limited here. That is to say, if a 16-order filter needs to be determined, then 16 points can be used to determine a 16-order asymmetric filter, or an 8-order one-dimensional symmetric filter, or filters of other kinds (such as more special two-dimensional or three-dimensional filters); the filter is not specifically limited here.
  • the filter value calculation is performed for a certain color component, and the filter value of the color component can be determined by using the second attribute parameter of the color component and the filter coefficient vector corresponding to the color component. After traversing all the color components and obtaining the filtered value of each color component, the filtered point cloud can be obtained.
  • the reconstructed point cloud and the filter coefficients can be input into the Wiener filter at the same time; that is, the inputs of the Wiener filter are the filter coefficients and the reconstructed point cloud, so that filtering of the reconstructed point cloud can be completed based on the filter coefficients and the corresponding filtered point cloud obtained.
  • the filter coefficients are obtained based on the original point cloud and the reconstructed point cloud. Therefore, applying the filter coefficients to the reconstructed point cloud can restore the original point cloud to the maximum extent.
  • taking the color components of the YUV space in the point cloud sequence as an example for the filter coefficients, assuming that the order of the filter is K and the number of points in the point cloud sequence is n, the matrix P(n,k) is used to represent the reconstruction values of the K target points corresponding to all points in the reconstructed point cloud under the same color component (such as the Y component); that is, P(n,k) is composed of the Y-component reconstruction values of the K target points corresponding to all points in the reconstructed point cloud and constitutes the second attribute parameter.
  • the filter coefficient vector H(k) under the Y component is applied to the reconstructed point cloud, that is, to the second attribute parameter P(n,k), and the filter values of the attribute information under the Y component can be obtained.
  • the U component and the V component can be traversed according to the above method, and finally the filter values under the U component and the V component determined; the filter values under all color components can then be used to determine the filtered point cloud corresponding to the reconstructed point cloud.
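Applying H(k) to P(n,k) as described above amounts to one inner product per point. The sketch below is a hypothetical illustration of that step for a single color component (coefficient values and reconstruction values are invented for the example); it is not the patent's implementation.

```python
# Hypothetical sketch: applying a K-order filter coefficient vector H(k) to
# the second attribute parameter P(n, k), i.e. the reconstruction values of
# the K target points of every point n, for one color component.

def apply_filter(H, P):
    """filtered[n] = sum_k H[k] * P[n][k] for each point n."""
    return [sum(h * p for h, p in zip(H, row)) for row in P]

# Two points, with an illustrative 3-order filter that weights the point
# itself by 0.5 and its two neighbours by 0.25 each.
H = [0.5, 0.25, 0.25]
P = [[100, 96, 104],   # Y reconstruction values of point 0's target points
     [80, 72, 88]]     # Y reconstruction values of point 1's target points
print(apply_filter(H, P))  # [100.0, 80.0]
```

Repeating the same call with the U-component and V-component coefficient vectors and parameter matrices yields the filter values for all color components.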
  • the decoder may use the filtered point cloud to cover the reconstructed point cloud.
  • the quality of the filtered point cloud obtained after filtering processing is significantly enhanced; therefore, after obtaining the filtered point cloud, the filtered point cloud can be used to overwrite the original reconstructed point cloud, thereby completing the whole codec and quality enhancement operation.
  • the method may also include: if the color components of the filtered point cloud do not conform to the RGB color space (for example, they are in the YUV color space, the YCbCr color space, etc.), then color space conversion is performed on the filtered point cloud so that its color components conform to the RGB color space.
  • the decoder can skip the filtering process and use the reconstructed point cloud obtained according to the original program; that is, no update processing is performed on the reconstructed point cloud.
  • the embodiment of the present application can set the value of the order equal to 16, so that the filtering effect can be guaranteed while the time complexity is reduced to a certain extent.
  • although the encoder-side Wiener filter considers the neighborhood difference values, the attribute values at point cloud contours or boundaries change greatly, so it may be difficult for a single filter to cover the filtering of the entire point cloud, leaving room for improvement, especially for sparse point cloud sequences; therefore, points can be classified according to the neighborhood difference value and different filters applied to different categories, for example by setting multiple Wiener filters with different orders.
  • the encoder passes multiple sets of filter coefficients to the decoder.
  • the multiple sets of filter coefficients are then decoded at the decoding end to filter the reconstructed point cloud, realizing adaptive selection of filters and achieving a better effect.
  • the embodiment of the present application proposes an optimized technology for performing Wiener filter processing on the YUV component of the codec color reconstruction value to better enhance the quality of the point cloud.
  • decoding can be performed through steps S801 to S803.
  • when filtering the color components (such as the Y component, U component, and V component) of the color reconstruction values of the point cloud sequence, this method considers not only the geometry-lossless, attribute-lossy condition but also the geometry-lossy, attribute-lossy condition.
  • the judgment of quality improvement, the selection of adjacent points, and the size of K are optimized, so that the quality enhancement operation can be selectively performed on the reconstructed point cloud output by the decoder.
  • the increase in bit stream size is small and the overall compression performance is improved.
  • the decoding method proposed in the embodiment of the present application is applicable to any point cloud sequence, especially for sequences with densely distributed points and sequences with medium code rates, and has a relatively prominent optimization effect.
  • the decoding method proposed in the embodiment of the present application can operate on all three attribute transformation decoding modes (Predicting transform, Lifting transform, RAHT) and has universal applicability. For the very few sequences with poor filtering effect, this method will not affect the reconstruction process of the decoding end; apart from a slight increase in the running time of the encoding end, the impact of the filtering identification information on the size of the attribute stream is negligible.
  • the Wiener filter proposed in the embodiment of this application can be used in the prediction loop, that is, as an in-loop filter, in which case it can serve as a reference for decoding subsequent point clouds; it can also be used outside the prediction loop, that is, as a post-processing filter, in which case it is not used as a reference for decoding subsequent point clouds. This application does not specifically limit it.
  • if the Wiener filter proposed in the embodiment of the present application is an in-loop filter, then after it is determined that filtering processing is to be performed, the parameter information indicating the filtering process, such as the filter identification information, and the filter coefficients need to be written into the code stream; correspondingly, after it is determined that filtering processing is not to be performed, one can choose not to write the parameter information indicating the filtering process, such as the filter identification information, into the code stream, and likewise not write the filter coefficients into the code stream.
  • if the Wiener filter proposed in the embodiment of the present application is a post-processing filter, then the filter coefficients corresponding to the filter are located in a separate auxiliary information data unit (for example, supplemental enhancement information, SEI).
  • if the decoder does not obtain the supplemental enhancement information, it will not filter the reconstructed point cloud.
  • the filter coefficient corresponding to the filter and other information are contained in an auxiliary information data unit.
  • in this case, after it is determined that filtering processing is to be performed, the parameter information indicating the filtering process, such as the filter identification information, and the filter coefficients also need to be written into the code stream; correspondingly, after it is determined that filtering processing is not to be performed, one can choose not to write the parameter information indicating the filtering process, such as the filter identification information, into the code stream, and likewise not write the filter coefficients into the code stream.
  • one or more parts of the reconstructed point cloud can be selected for filtering; that is, the control range of the filter identification information can be the entire reconstructed point cloud or a certain part of the reconstructed point cloud, which is not specifically limited here.
  • This embodiment provides a decoding method, which is applied to a decoder: determine the filter identification information by decoding the code stream, wherein the filter identification information is used to determine whether to filter the reconstructed point cloud corresponding to the initial point cloud; if the filter identification information indicates that the reconstructed point cloud is filtered, decode the code stream and determine the filter coefficient; use the filter coefficient to filter the K target points corresponding to the first point in the reconstructed point cloud, and determine the filtered point cloud corresponding to the reconstructed point cloud; wherein the K target points include the first point and (K-1) neighbor points adjacent to the first point, K is an integer greater than 1, and the first point represents any point in the reconstructed point cloud.
  • the decoder can directly decode to obtain the filter coefficients, and then use the filter coefficients to filter the reconstructed point cloud, thereby realizing the optimization of the reconstructed point cloud and improving the quality of the point cloud.
  • the filtered value also depends on the attribute value of the current point itself; and when determining whether to filter the reconstructed point cloud, not only the PSNR performance index is considered, but also the rate-distortion cost value is considered for the trade-off; in addition, the encoding side also proposes the determination of the correspondence between the reconstructed point cloud and the initial point cloud in the case of geometric loss and attribute loss, so that the decoding end not only improves the scope of application and the quality of the point cloud, but also saves bit rate and improves encoding and decoding efficiency.
  • an optimized Wiener filtering process is proposed for the YUV components of the codec color reconstruction values to better enhance the quality of the point cloud. It mainly proposes a point cloud Wiener filter post-processing method for geometry-lossy and attribute-lossy encoding, and also optimizes the judgment of quality improvement, the selection of adjacent points, and the size of K.
  • FIG. 10 is a schematic flowchart of an overall coding and decoding process provided by an embodiment of the present application.
  • the reconstructed point cloud can be obtained; then the initial point cloud and the reconstructed point cloud can be used as the input of the Wiener filter, and the output of the Wiener filter is the filter coefficients; whether the reconstructed point cloud needs to be filtered is determined according to the judged effect of these filter coefficients; only when the reconstructed point cloud needs to be filtered will the corresponding filter coefficients be written into the code stream; in addition, the encoded attribute residual values of the initial point cloud will also be written into the code stream.
  • the decoder can obtain the attribute residual values by decoding the code stream, and then obtain the reconstructed point cloud after decoding and reconstruction processing; in addition, if there are filter coefficients in the code stream, they need to be decoded to obtain the corresponding filter coefficients; the filter coefficients and the reconstructed point cloud are then input to the Wiener filter, which finally outputs the filtered point cloud (that is, the output point cloud), with a quality enhancement effect.
  • FIG. 11 is a schematic flowchart of a filtering process at an encoding end provided by an embodiment of the present application.
  • the initial point cloud and the reconstructed point cloud are used as the input of the Wiener filter; the points in the initial point cloud and the points in the reconstructed point cloud are respectively aligned by position information, the attribute information is converted (for example, the color components in the attribute information are converted from RGB space to YUV space), and the color components of the points in the initial point cloud and the color components of the points in the reconstructed point cloud can then be used to calculate the filter coefficients.
  • the calculation of the autocorrelation parameters can be performed based on the color components of the points in the reconstructed point cloud, and the calculation of the cross-correlation parameters can be performed based on the color components of the points in the initial point cloud and the color components of the points in the reconstructed point cloud; finally, the optimal filter coefficients, i.e. the corresponding filter coefficients, can be determined by using the autocorrelation parameters and the cross-correlation parameters.
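The coefficient derivation above can be sketched numerically as a least-squares problem: from the matrix P of reconstruction values (one row of K target-point values per point) and the vector o of original attribute values, form the autocorrelation matrix R = PᵀP and the cross-correlation vector r = Pᵀo, then solve R·H = r. This is a minimal pure-Python sketch of that idea, not the patent's or TMC13's implementation; the toy data and function name are invented for the example.

```python
# Minimal least-squares sketch of encoder-side Wiener coefficient derivation.

def wiener_coefficients(P, o):
    k = len(P[0])
    # Autocorrelation matrix R = P^T P and cross-correlation vector r = P^T o.
    R = [[sum(row[i] * row[j] for row in P) for j in range(k)] for i in range(k)]
    r = [sum(row[i] * oi for row, oi in zip(P, o)) for i in range(k)]
    # Solve R H = r by Gaussian elimination with partial pivoting.
    for col in range(k):
        piv = max(range(col, k), key=lambda m: abs(R[m][col]))
        R[col], R[piv] = R[piv], R[col]
        r[col], r[piv] = r[piv], r[col]
        for m in range(col + 1, k):
            f = R[m][col] / R[col][col]
            for j in range(col, k):
                R[m][j] -= f * R[col][j]
            r[m] -= f * r[col]
    # Back-substitution.
    H = [0.0] * k
    for i in range(k - 1, -1, -1):
        H[i] = (r[i] - sum(R[i][j] * H[j] for j in range(i + 1, k))) / R[i][i]
    return H

# Toy data: each original value is exactly 0.5 * first + 0.5 * second
# target value, so the solved coefficients should recover [0.5, 0.5].
P = [[2.0, 4.0], [6.0, 2.0], [1.0, 5.0]]
o = [3.0, 4.0, 3.0]
print([round(h, 6) for h in wiener_coefficients(P, o)])  # [0.5, 0.5]
```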
  • the input of the Wiener filter needs to be obtained: the initial point cloud and the reconstructed point cloud.
  • the initial point cloud can be directly obtained through the point cloud reading function of the codec program, and the reconstructed point cloud is obtained after attribute encoding, attribute reconstruction and geometric compensation.
  • the Wiener filtering link can be entered.
  • in the geometry-lossy and attribute-lossy encoding method, the number and coordinates of points and the bounding box of the entire point cloud will change greatly depending on the set bit rate of the reconstructed point cloud, so a way to more accurately find the point-to-point correspondence is provided here.
  • the geometric coordinates of the reconstructed point cloud are first preprocessed.
  • using the scaling scale in the configuration parameters, the geometric coordinates of each point are divided by the scaling scale in accordance with the reconstruction process.
  • the reconstructed point cloud can be restored to a size comparable to the original point cloud and the offset of the coordinates is eliminated.
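The descaling step described above is simple enough to show directly. The sketch below is an illustrative assumption of that preprocessing (uniform division of every coordinate by the configured scale); the function name is hypothetical and any coordinate offset handling is omitted.

```python
# Hypothetical sketch of the geometry preprocessing step: divide each
# reconstructed coordinate by the scaling scale from the configuration
# parameters, restoring the cloud to a size comparable to the original.

def descale(points, scale):
    return [tuple(c / scale for c in p) for p in points]

scaled = [(10, 20, 30), (40, 50, 60)]
print(descale(scaled, 2.0))  # [(5.0, 10.0, 15.0), (20.0, 25.0, 30.0)]
```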
  • a point cloud with the same number of points as the reconstructed point cloud and one-to-one correspondence with each point can be obtained, which can be input to the Wiener filter as the real initial point cloud.
  • the function of the Wiener filter at the encoding end is mainly to calculate the filter coefficients, and at the same time determine whether the quality of the point cloud is improved after the Wiener filter.
  • after the optimal coefficients of the Wiener filter (i.e., the filter coefficients) are obtained, the encoding end will filter all three YUV components (to obtain the filtering performance on these three components); under this condition, after obtaining the filter coefficients, the filtered point cloud can be obtained (there is no need to overwrite the reconstructed point cloud at this time).
  • the scheme calculates the PSNR value of each component of the filtered point cloud and the reconstructed point cloud in YUV.
  • the scheme also performs a rate-distortion trade-off between the filtered point cloud and the reconstructed point cloud, and uses the rate-distortion cost function to calculate the cost value after the comprehensive quality improvement and the code stream increase.
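The two encoder-side checks just described, PSNR comparison and a rate-distortion trade-off, can be sketched as follows. This is a hypothetical illustration: the cost form J = D + λ·R is a standard rate-distortion cost, but the λ value, bit counts, and sample values are invented for the example and are not from the patent.

```python
import math

def psnr(ref, test, peak=255.0):
    """PSNR between two equal-length attribute value lists."""
    mse = sum((a - b) ** 2 for a, b in zip(ref, test)) / len(ref)
    return float("inf") if mse == 0 else 10.0 * math.log10(peak ** 2 / mse)

def rd_cost(distortion, bits, lam):
    """Rate-distortion cost J = D + lambda * R."""
    return distortion + lam * bits

orig = [100, 102, 98, 101]
recon = [96, 106, 94, 105]   # one component, before filtering
filt = [99, 103, 97, 102]    # same component, after filtering

print(psnr(orig, recon) < psnr(orig, filt))  # True: filtering improved quality
# Filtering is worthwhile only if the quality gain outweighs the extra bits
# needed to transmit the coefficients (illustrative numbers).
lam = 0.1
print(rd_cost(16.0, 0, lam) > rd_cost(1.0, 64, lam))  # True -> choose filtering
```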
  • the decision array with a size of 1×3 (i.e., the filter identification information in the foregoing embodiments) is used to determine whether post-processing, that is, the filtering operation, needs to be performed at the decoding end and on which components; for example, [0,1,1] indicates that the U component and the V component are filtered.
  • FIG. 12 is a schematic flow chart of a filtering process at a decoding end provided by an embodiment of the present application.
  • the decoder performs Wiener filtering processing
  • the reconstructed point cloud and the filter coefficients can be used as the input of the Wiener filter at the decoding end; after the conversion of the attribute information of the points in the reconstructed point cloud (such as converting the color components in the attribute information from RGB space to YUV space), the filter coefficients can be used for filtering processing, and the filtered, quality-enhanced point cloud is obtained through calculation; after the conversion of the attribute information of the filtered point cloud is completed (from YUV space back to RGB space), the filtered point cloud is output.
  • after G-PCC decodes the attribute residual values, the 1×3 decision array can be decoded immediately. If, after judgment, it is determined that one or more components need to be filtered, this means that filter coefficients were transmitted by the encoding end and Wiener filtering will increase the quality of the reconstructed point cloud; otherwise, decoding of the coefficients will not continue, the Wiener filtering part is skipped, and point cloud reconstruction is performed according to the original program to obtain the reconstructed point cloud.
  • the filter coefficients can be passed to the point cloud reconstruction.
  • the reconstructed point cloud and the filter coefficients can be directly input to the Wiener filter at the decoding end, and the filtered, quality-enhanced point cloud can be obtained through calculation; the filtered point cloud is then used to overwrite the reconstructed point cloud to complete the entire codec and quality enhancement operations.
  • FIG. 13 is a schematic diagram of the test results of a related technology and the present technical solution under the CY test condition of the predicting transformation provided by the embodiment of the present application;
  • FIG. 14 is a schematic diagram of a test result provided in the embodiment of the present application;
  • FIG. 15 is a schematic diagram of the test results of a related technology and the present technical solution under the C1 test condition of the RAHT transformation provided by the embodiment of the present application;
  • FIG. 16 is a schematic diagram of the test results of the lifting transformation under the C2 test conditions provided by the embodiment of the present application;
  • FIG. 17 is a schematic diagram of the test results of the RAHT transformation under the C2 test conditions provided by the embodiment of the present application.
  • the codec method proposed in the embodiment of the present application is implemented on the G-PCC reference software TMC13 v12.0, and the partial test sequences required by MPEG (cat1-A & cat1-B) are tested under the CTC CY, C1, and C2 test conditions; the final test results under the attribute lossless coding method are also compared with related technologies.
  • the CY condition is the lossless-geometry, lossy-attribute coding method;
  • the C1 condition is the lossless-geometry, near-lossless-attribute coding method;
  • the C2 condition is the lossy-geometry, lossy-attribute coding method.
  • End-to-End BD-AttrRate indicates the BD-Rate of the end-to-end attribute value for the attribute code stream.
  • BD-Rate reflects the difference between the PSNR curves in the two cases (with and without filtering). A decrease in BD-Rate means that, at equal PSNR, the bit rate decreases and performance improves; otherwise, performance degrades. That is, the more the BD-Rate drops, the better the compression performance.
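The document does not give the BD-Rate formula. The sketch below approximates the Bjøntegaard delta-rate by piecewise-linear interpolation of log(rate) as a function of PSNR (the reference metric uses a cubic fit), so the values are indicative only; all names are ours:

```python
import math

def bd_rate(anchor, test):
    """Approximate BD-Rate between two rate-PSNR curves.

    anchor, test : lists of (rate, psnr) pairs with PSNR strictly increasing.
    Returns the average rate difference in percent (negative = test better).
    """
    def interp(curve, q):
        # linear interpolation of log(rate) at PSNR q
        for (r0, q0), (r1, q1) in zip(curve, curve[1:]):
            if q0 <= q <= q1:
                t = (q - q0) / (q1 - q0)
                return math.log(r0) + t * (math.log(r1) - math.log(r0))
        raise ValueError("PSNR out of range")

    lo = max(anchor[0][1], test[0][1])
    hi = min(anchor[-1][1], test[-1][1])
    # trapezoidal integration of the log-rate difference over the overlap
    n = 100
    total = 0.0
    for i in range(n):
        q0 = lo + (hi - lo) * i / n
        q1 = lo + (hi - lo) * (i + 1) / n
        d0 = interp(test, q0) - interp(anchor, q0)
        d1 = interp(test, q1) - interp(anchor, q1)
        total += 0.5 * (d0 + d1) * (q1 - q0)
    avg = total / (hi - lo)
    return (math.exp(avg) - 1.0) * 100.0

anchor = [(100, 30.0), (200, 35.0), (400, 40.0)]
test = [(50, 30.0), (100, 35.0), (200, 40.0)]   # same PSNR at half the rate
print(round(bd_rate(anchor, test), 2))          # ≈ -50.0
```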
  • Cat1-A average and Cat1-B average respectively represent the average values of the point cloud sequence test results of the two data sets.
  • Overall average is the average over all sequence test results.
  • the encoding and decoding method proposed in the embodiment of the present application is essentially an optimized attribute Wiener-filtering quality enhancement technology for the G-PCC point cloud decoder, which can selectively perform a quality enhancement operation on the reconstructed point cloud output by the decoder; the optimization effect is especially prominent for sequences with dense point distribution and for medium-bit-rate sequences.
  • the number of bits in the stream does not increase significantly, and the compression performance is improved; at the same time, this technology can operate on all three attribute transform (Predicting transform, Lifting transform, RAHT) encoding methods, and is therefore universal. For the very few sequences with a poor filtering effect, this technology does not affect the reconstruction process at the decoding end: apart from a slight increase in the running time of the encoding end, the impact of the filtering identification information on the size of the attribute stream is negligible.
  • the current point itself is added to the calculation, that is, the filtered value of a point also depends on its own attribute value.
  • the corresponding entry in the optimal coefficients is relatively large, which means the attribute value of the point itself carries a large weight in the filtering process; it is therefore necessary to take the current point into account. The test results show that the final BD-Rate improves considerably, which confirms that this design choice is correct.
  • the method for judging whether the overall performance after filtering is improved is optimized.
  • the embodiment of the present application proposes using a cost function to calculate the cost of each channel (Y, U, V) before and after filtering for a rate-distortion trade-off; this method considers not only the attribute distortion but also the cost of writing the coefficients and other information into the code stream, and combines the two to judge whether the compression performance after filtering has improved, so as to decide whether to transfer the coefficients and perform the decoder-side operation.
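The rate-distortion trade-off above can be sketched with the usual Lagrangian cost J = D + λ·R; the function and parameter names below are illustrative assumptions, not taken from the G-PCC reference software:

```python
def filtering_decision(dist_before, dist_after, rate_coeffs, lmbda):
    """Decide per channel whether to signal filtering, using J = D + lambda * R.

    rate_coeffs is the extra rate (in bits) needed to write the filter
    coefficients and identification information into the attribute stream.
    """
    cost_before = dist_before                       # no filtering, no extra bits
    cost_after = dist_after + lmbda * rate_coeffs   # pay for the coefficients
    return cost_after < cost_before                 # True -> filter and transmit

# filtering lowers distortion enough to justify the coefficient bits
print(filtering_decision(1000.0, 700.0, 200.0, 0.5))   # True
# distortion gain too small for the extra rate
print(filtering_decision(1000.0, 950.0, 200.0, 0.5))   # False
```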
  • the selection of the size of the filter order K is improved.
  • the embodiment of the present application can filter the reconstructed point cloud obtained by the G-PCC geometry-lossy, attribute-lossy coding method, and proposes a one-to-one correspondence method between the points of the geometrically distorted reconstructed point cloud and the points of the original point cloud, which further broadens the scope of application.
  • FIG. 18 shows a schematic structural diagram of an encoder 180 provided by the embodiment of the present application.
  • the encoder 180 may include: a first determining unit 1801, a first filtering unit 1802, and an encoding unit 1803; wherein,
  • the first determination unit 1801 is configured to determine the initial point cloud and the reconstructed point cloud corresponding to the initial point cloud; and determine the filter coefficient according to the initial point cloud and the reconstructed point cloud;
  • the first filtering unit 1802 is configured to filter the K target points corresponding to the first point in the reconstructed point cloud by using the filter coefficient, and determine the filtered point cloud corresponding to the reconstructed point cloud; wherein the K target points include the first point and (K-1) neighboring points adjacent to the first point, K is an integer greater than 1, and the first point represents any point in the reconstructed point cloud;
  • the first determining unit 1801 is further configured to determine filtering identification information according to the reconstructed point cloud and the filtered point cloud; wherein the filtering identification information is used to determine whether to perform filtering processing on the reconstructed point cloud;
  • the encoding unit 1803 is configured to encode the filter identification information and filter coefficients if the filter identification information indicates that the reconstructed point cloud is to be filtered, and write the obtained encoded bits into the code stream.
  • the first determining unit 1801 is further configured to obtain a first reconstructed point cloud if the initial point cloud is encoded and reconstructed using the first type of encoding method, and to use the first reconstructed point cloud as the reconstructed point cloud; wherein the first type of encoding method is used to indicate geometry-lossless, attribute-lossy encoding of the initial point cloud.
  • the first determining unit 1801 is further configured to obtain a second reconstructed point cloud if the initial point cloud is encoded and reconstructed using the second type of encoding method; and perform geometric restoration on the second reconstructed point cloud processing to obtain the restored and reconstructed point cloud, and use the restored and reconstructed point cloud as the reconstructed point cloud; wherein, the second type of encoding method is used to indicate that the initial point cloud is encoded with lossy geometry and lossy attributes.
  • the first determining unit 1801 is further configured to perform geometric compensation processing on the second reconstructed point cloud to obtain an intermediate reconstructed point cloud, and perform scaling processing on the intermediate reconstructed point cloud to obtain the restored reconstructed point cloud; wherein the restored reconstructed point cloud has the same size and the same geometric positions as the initial point cloud.
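A minimal sketch of the scaling step that returns the geometry-lossy reconstruction to the size of the initial cloud; `scale` and `offset` stand in for whatever parameters the codec actually carries, and are assumptions here:

```python
def restore_geometry(points, scale, offset=(0.0, 0.0, 0.0)):
    """Undo the geometric scaling applied during lossy-geometry coding so the
    restored cloud matches the size and position of the initial point cloud.
    This is a sketch, not the TMC13 implementation.
    """
    restored = []
    for x, y, z in points:
        restored.append((x / scale + offset[0],
                         y / scale + offset[1],
                         z / scale + offset[2]))
    return restored

# points quantised with scale 0.5 are expanded back to the original extent
print(restore_geometry([(1.0, 2.0, 3.0)], 0.5))   # [(2.0, 4.0, 6.0)]
```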
  • the encoder 180 may further include a search unit 1804 configured to, in the case of the first type of encoding method, determine the corresponding points of the points in the reconstructed point cloud in the initial point cloud according to a first preset search method, and establish the correspondence between the points in the reconstructed point cloud and the points in the initial point cloud.
  • the search unit 1804 is further configured to, in the case of the second type of encoding method, determine the corresponding points of the points in the reconstructed point cloud in the initial point cloud according to a second preset search method; construct a matching point cloud according to the determined corresponding points; use the matching point cloud as the initial point cloud; and establish the correspondence between the points in the reconstructed point cloud and the points in the initial point cloud.
  • the first preset search method is a K-nearest neighbor search method for searching a first constant number of points;
  • the second preset search method is a K-nearest neighbor search method for searching a second constant number of points.
  • the search unit 1804 is specifically configured to search for a second constant number of points in the initial point cloud using the second preset search method, based on the current point in the reconstructed point cloud; calculate the distance values between the current point and the second constant number of points, and select the minimum distance value from these distance values; if the number of minimum distance values is 1, determine the point corresponding to the minimum distance value as the corresponding point of the current point; if there are multiple minimum distance values, determine the corresponding point of the current point according to the multiple points corresponding to the minimum distance value.
  • the first constant value is equal to 1 and the second constant value is equal to 5.
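The matching step above can be sketched as follows. The text only says that tied candidates are combined; averaging their attributes is our assumption for the tie-break, and all names are illustrative:

```python
def find_corresponding_attr(query, candidates):
    """Match a reconstructed point to the initial cloud: among a small set of
    nearest candidates (the text uses a constant of 5), keep the one at minimum
    distance; when several candidates tie, average their attributes (assumed
    tie-break rule).

    query      : (x, y, z) of the reconstructed point
    candidates : list of ((x, y, z), attribute) from the initial point cloud
    """
    def dist2(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q))

    dists = [dist2(query, pos) for pos, _ in candidates]
    dmin = min(dists)
    tied = [attr for (pos, attr), d in zip(candidates, dists) if d == dmin]
    return sum(tied) / len(tied)

# unique nearest candidate
print(find_corresponding_attr((0, 0, 0), [((1, 0, 0), 10.0), ((2, 0, 0), 50.0)]))  # 10.0
# two candidates at equal distance: attributes are averaged
print(find_corresponding_attr((0, 0, 0), [((1, 0, 0), 10.0), ((0, 1, 0), 30.0)]))  # 20.0
```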
  • the first determining unit 1801 is further configured to determine K target points corresponding to the first point in the reconstructed point cloud;
  • the first filtering unit 1802 is specifically configured to filter the K target points corresponding to the first point in the reconstructed point cloud by using the filter coefficient, and determine the filter value of the attribute information of the first point in the reconstructed point cloud; and, after the filter value of the attribute information of at least one point in the reconstructed point cloud is determined, determine the filtered point cloud according to the filter value of the attribute information of the at least one point.
  • the first determining unit 1801 is further configured to search for a preset number of candidate points in the reconstructed point cloud using the K-nearest-neighbor search method, based on the first point in the reconstructed point cloud; calculate the distance values between the first point and the preset number of candidate points, and select (K-1) distance values from the obtained distance values, the (K-1) distance values all being less than the remaining distance values; determine (K-1) neighboring points according to the candidate points corresponding to the (K-1) distance values; and determine the first point and the (K-1) neighboring points as the K target points corresponding to the first point.
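A brute-force sketch of selecting the K target points: the current point itself plus its (K-1) nearest neighbours. A real implementation would use a KNN search structure over a candidate set; this version, with names of our choosing, just sorts squared distances:

```python
def k_target_points(points, i, k):
    """Return the indices of the K target points for point i:
    the point itself first, then its (K-1) nearest neighbours.
    """
    def dist2(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q))

    others = [(dist2(points[i], points[j]), j)
              for j in range(len(points)) if j != i]
    others.sort()
    # the current point first, then the (K-1) closest neighbours
    return [i] + [j for _, j in others[:k - 1]]

pts = [(0, 0, 0), (1, 0, 0), (5, 0, 0), (0.5, 0, 0)]
print(k_target_points(pts, 0, 3))   # [0, 3, 1]
```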
  • the first determining unit 1801 is further configured to determine a first attribute parameter according to the original values of the attribute information of the points in the initial point cloud; determine a second attribute parameter according to the reconstructed values of the attribute information of the K target points corresponding to the points in the reconstructed point cloud; and determine the filter coefficient based on the first attribute parameter and the second attribute parameter.
  • the first determining unit 1801 is further configured to determine a cross-correlation parameter according to the first attribute parameter and the second attribute parameter; determine an autocorrelation parameter according to the second attribute parameter; and calculate the filter coefficient according to the cross-correlation parameter and the autocorrelation parameter.
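The coefficient derivation above is the classical Wiener solution of the normal equations R·h = p, where R is the autocorrelation of the reconstructed target vectors and p their cross-correlation with the original values. A self-contained sketch (plain Gaussian elimination, names ours):

```python
def wiener_coefficients(recon_vectors, originals):
    """Least-squares (Wiener) filter coefficients from training pairs.

    recon_vectors : each row holds the K reconstructed attribute values of a
                    point's target set
    originals     : the matching original attribute values
    Solves R h = p (R: autocorrelation matrix, p: cross-correlation vector).
    """
    k = len(recon_vectors[0])
    n = len(recon_vectors)
    # autocorrelation R and cross-correlation p accumulated over all points
    R = [[sum(v[a] * v[b] for v in recon_vectors) / n for b in range(k)]
         for a in range(k)]
    p = [sum(v[a] * o for v, o in zip(recon_vectors, originals)) / n
         for a in range(k)]
    # Gaussian elimination with partial pivoting on the augmented matrix [R | p]
    M = [row[:] + [p[i]] for i, row in enumerate(R)]
    for col in range(k):
        piv = max(range(col, k), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, k):
            f = M[r][col] / M[col][col]
            for c in range(col, k + 1):
                M[r][c] -= f * M[col][c]
    h = [0.0] * k
    for r in range(k - 1, -1, -1):
        h[r] = (M[r][k] - sum(M[r][c] * h[c] for c in range(r + 1, k))) / M[r][r]
    return h

# originals were built as 0.5*v0 + 0.5*v1, so the solver should recover [0.5, 0.5]
h = wiener_coefficients([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [2.0, 1.0]],
                        [0.5, 0.5, 1.0, 1.5])
print([round(c, 6) for c in h])   # [0.5, 0.5]
```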
  • the attribute information includes a color component;
  • the color component includes at least one of the following: a first color component, a second color component, and a third color component; wherein, if the color component conforms to the RGB color space, the first color component, the second color component and the third color component are determined in order as the R component, the G component and the B component; if the color component conforms to the YUV color space, the first color component, the second color component and the third color component are determined in order as the Y component, the U component and the V component.
  • the encoder 180 may further include a calculation unit 1805 configured to determine the first cost value of the component to be processed of the attribute information of the reconstructed point cloud, and determine the second cost value of the component to be processed of the attribute information of the filtered point cloud; and
  • the first determining unit 1801 is further configured to determine the filter identification information of the component to be processed according to the first cost value and the second cost value, and obtain the filter identification information according to the filter identification information of the component to be processed.
  • the first determining unit 1801 is further configured to determine that the value of the filter identification information of the component to be processed is the first value if the second cost value is less than the first cost value, and determine that the value of the filter identification information of the component to be processed is the second value if the second cost value is greater than the first cost value.
  • the calculation unit 1805 is specifically configured to calculate the cost value of the component to be processed of the attribute information of the reconstructed point cloud using the rate-distortion cost method, and use the obtained first rate-distortion value as the first cost value; and calculate the cost value of the component to be processed of the attribute information of the filtered point cloud using the rate-distortion cost method, and use the obtained second rate-distortion value as the second cost value.
  • the calculation unit 1805 is further configured to calculate the cost value of the component to be processed of the attribute information of the reconstructed point cloud using the rate-distortion cost method to obtain the first rate-distortion value, and perform a performance value calculation on the component to be processed of the attribute information of the reconstructed point cloud to obtain a first performance value;
  • the first determining unit 1801 is further configured to determine the first cost value according to the first rate-distortion value and the first performance value.
  • the calculation unit 1805 is further configured to calculate the cost value of the component to be processed of the attribute information of the filtered point cloud using the rate-distortion cost method to obtain the second rate-distortion value, and perform a performance value calculation on the component to be processed of the attribute information of the filtered point cloud to obtain a second performance value;
  • the first determining unit 1801 is further configured to determine the second cost value according to the second rate-distortion value and the second performance value.
  • the first determining unit 1801 is further configured to determine that the value of the filter identification information of the component to be processed is the first value if the second performance value is greater than the first performance value and the second rate-distortion value is less than the first rate-distortion value; and determine that the value of the filter identification information of the component to be processed is the second value if the second performance value is less than the first performance value.
  • the first determining unit 1801 is further configured to determine that the value of the filter identification information of the component to be processed is the first value if the second performance value is greater than the first performance value and the second rate-distortion value is less than the first rate-distortion value; and determine that the value of the filter identification information of the component to be processed is the second value if the second rate-distortion value is greater than the first rate-distortion value.
  • the first determining unit 1801 is further configured to determine that the filter identification information of the component to be processed indicates filtering of the component to be processed of the attribute information of the reconstructed point cloud if the value of the filter identification information of the component to be processed is the first value; and determine that the filter identification information of the component to be processed indicates no filtering of the component to be processed of the attribute information of the reconstructed point cloud if the value is the second value.
  • the first determining unit 1801 is further configured to, when the component to be processed is a color component, determine the filter identification information of the first color component, the filter identification information of the second color component, and the filter identification information of the third color component; and obtain the filter identification information according to the filter identification information of the first color component, the filter identification information of the second color component and the filter identification information of the third color component.
  • the first determining unit 1801 is further configured to determine that the filter identification information indicates filtering of the reconstructed point cloud if at least one of the filter identification information of the first color component, the filter identification information of the second color component and the filter identification information of the third color component is the first value; and determine that the filter identification information indicates no filtering of the reconstructed point cloud if the filter identification information of the first color component, the filter identification information of the second color component and the filter identification information of the third color component are all the second value.
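The flag-combination rule above reduces to a logical OR over the three per-component flags; a minimal sketch, with names of our choosing:

```python
def combine_filter_flags(flag_y, flag_u, flag_v):
    """Overall filtering flag for the reconstructed cloud: filtering is signalled
    if ANY colour-component flag takes the first value (1 = filter); only when
    all three take the second value (0) is the cloud left unfiltered.
    """
    return 1 if (flag_y or flag_u or flag_v) else 0

print(combine_filter_flags(1, 0, 0))  # 1: at least one component is filtered
print(combine_filter_flags(0, 0, 0))  # 0: no component is filtered
```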
  • the encoding unit 1803 is further configured to encode only the filter identification information if the filter identification information indicates that the reconstructed point cloud is not to be filtered, and write the obtained encoded bits into the code stream.
  • the first determining unit 1801 is further configured to determine the value of K according to the number of points in the reconstructed point cloud; and/or determine the value of K according to the quantization parameter of the reconstructed point cloud; and/or determine the value of K according to the neighborhood difference values of the points in the reconstructed point cloud; wherein the neighborhood difference value is calculated based on the component differences between the attribute information of a point and the attribute information of at least one neighboring point.
  • the first determining unit 1801 is further configured to select a filter of the first category to filter the reconstructed point cloud if the neighborhood difference values of the points in the reconstructed point cloud satisfy a first preset interval; and select a filter of the second category to filter the reconstructed point cloud if the neighborhood difference values of the points in the reconstructed point cloud satisfy a second preset interval; wherein the first preset interval and the second preset interval are different.
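The selection criteria above (point count, quantization parameter, neighborhood difference) can be sketched as a small decision function. The thresholds and return values below are purely illustrative assumptions; the patent does not specify them:

```python
def choose_filter(num_points, qp, neighborhood_diff):
    """Pick the filter order K and filter class from cloud statistics
    (illustrative thresholds only).
    """
    # larger / denser clouds can afford a longer filter
    k = 16 if num_points > 500_000 else 8
    # coarser quantisation -> stronger filtering support
    if qp > 40:
        k += 4
    # the neighbourhood difference interval selects the filter class
    filter_class = 1 if neighborhood_diff < 32.0 else 2
    return k, filter_class

print(choose_filter(1_000_000, 46, 10.0))   # (20, 1)
print(choose_filter(100_000, 30, 64.0))     # (8, 2)
```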
  • the first determining unit 1801 is further configured to determine the predicted values of the attribute information of the points in the initial point cloud, and determine the residual values of the attribute information of the points in the initial point cloud according to the original values and the predicted values;
  • the encoding unit 1803 is further configured to encode the residual value of the attribute information of the points in the initial point cloud, and write the obtained encoded bits into the code stream.
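The residual path described here, and its decoder-side counterpart, form a simple round trip: the encoder writes original minus prediction, the decoder adds the decoded residual back to the same prediction. A minimal sketch (names ours, entropy coding omitted):

```python
def encode_residuals(originals, predictions):
    """Encoder side: residual = original attribute - predicted attribute."""
    return [o - p for o, p in zip(originals, predictions)]

def reconstruct(predictions, residuals):
    """Decoder side: reconstructed attribute = prediction + decoded residual."""
    return [p + r for p, r in zip(predictions, residuals)]

orig = [100, 105, 98]
pred = [101, 104, 100]            # predictions available at both ends
res = encode_residuals(orig, pred)
print(res)                        # [-1, 1, -2]
print(reconstruct(pred, res))     # [100, 105, 98]
```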
  • a "unit" may be a part of a circuit, a part of a processor, a part of a program or software, etc., of course it may also be a module, or it may be non-modular.
  • each component in this embodiment may be integrated into one processing unit, each unit may exist separately physically, or two or more units may be integrated into one unit.
  • the above-mentioned integrated units can be implemented in the form of hardware or in the form of software function modules.
  • the integrated unit is implemented in the form of a software function module and is not sold or used as an independent product, it can be stored in a computer-readable storage medium.
  • the technical solution of this embodiment, in essence, or the part contributing to the prior art, or all or part of the technical solution, can be embodied in the form of a software product; the computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) or a processor to execute all or part of the steps of the method described in this embodiment.
  • the aforementioned storage medium includes various media that can store program codes, such as a USB flash drive, a removable hard disk, a read-only memory (Read Only Memory, ROM), a random access memory (Random Access Memory, RAM), a magnetic disk or an optical disk.
  • the embodiment of the present application provides a computer storage medium, which is applied to the encoder 180, and the computer storage medium stores a computer program, and when the computer program is executed by the first processor, it implements any one of the preceding embodiments. Methods.
  • the encoder 180 may include: a first communication interface 1901 , a first memory 1902 and a first processor 1903 ; each component is coupled together through a first bus system 1904 .
  • the first bus system 1904 is used to realize connection and communication between these components.
  • the first bus system 1904 also includes a power bus, a control bus and a status signal bus.
  • the various buses are labeled as the first bus system 1904 in FIG. 19; wherein,
  • the first communication interface 1901 is used for receiving and sending signals during the process of sending and receiving information with other external network elements;
  • the first memory 1902 is used to store computer programs that can run on the first processor 1903;
  • the first processor 1903 is configured to, when running the computer program, execute: determining the initial point cloud and the reconstructed point cloud corresponding to the initial point cloud; determining the filter coefficient according to the initial point cloud and the reconstructed point cloud; filtering the K target points corresponding to the first point in the reconstructed point cloud by using the filter coefficient, and determining the filtered point cloud corresponding to the reconstructed point cloud, wherein the K target points include the first point and (K-1) neighboring points adjacent to the first point, K is an integer greater than 1, and the first point represents any point in the reconstructed point cloud; determining the filter identification information according to the reconstructed point cloud and the filtered point cloud; and, if the filter identification information indicates that the reconstructed point cloud is to be filtered, encoding the filter identification information and the filter coefficient, and writing the obtained encoded bits into the code stream.
  • the first memory 1902 in this embodiment of the present application may be a volatile memory or a nonvolatile memory, or may include both volatile and nonvolatile memories.
  • the non-volatile memory can be a read-only memory (Read-Only Memory, ROM), a programmable read-only memory (Programmable ROM, PROM), an erasable programmable read-only memory (Erasable PROM, EPROM), an electrically erasable programmable read-only memory (Electrically EPROM, EEPROM) or a flash memory.
  • the volatile memory can be Random Access Memory (RAM), which acts as external cache memory.
  • by way of example and not limitation, many forms of RAM are available, such as: Static Random Access Memory (Static RAM, SRAM), Dynamic Random Access Memory (Dynamic RAM, DRAM), Synchronous Dynamic Random Access Memory (Synchronous DRAM, SDRAM), Double Data Rate Synchronous Dynamic Random Access Memory (Double Data Rate SDRAM, DDR SDRAM), Enhanced Synchronous Dynamic Random Access Memory (Enhanced SDRAM, ESDRAM), Synchlink Dynamic Random Access Memory (Synchlink DRAM, SLDRAM) and Direct Rambus Random Access Memory (Direct Rambus RAM, DRRAM).
  • the first memory 1902 of the systems and methods described herein is intended to include, but is not limited to, these and any other suitable types of memory.
  • the first processor 1903 may be an integrated circuit chip with signal processing capability. In the implementation process, each step of the above method may be implemented by an integrated logic circuit of hardware in the first processor 1903 or an instruction in the form of software.
  • the above-mentioned first processor 1903 may be a general-purpose processor, a digital signal processor (Digital Signal Processor, DSP), an application specific integrated circuit (Application Specific Integrated Circuit, ASIC), a ready-made programmable gate array (Field Programmable Gate Array, FPGA) Or other programmable logic devices, discrete gate or transistor logic devices, discrete hardware components.
  • Various methods, steps, and logic block diagrams disclosed in the embodiments of the present application may be implemented or executed.
  • a general-purpose processor may be a microprocessor, or the processor may be any conventional processor, or the like.
  • the steps of the method disclosed in connection with the embodiments of the present application may be directly implemented by a hardware decoding processor, or implemented by a combination of hardware and software modules in the decoding processor.
  • the software module can be located in a mature storage medium in the field such as random access memory, flash memory, read-only memory, programmable read-only memory or electrically erasable programmable memory, register.
  • the storage medium is located in the first storage 1902, and the first processor 1903 reads the information in the first storage 1902, and completes the steps of the above method in combination with its hardware.
  • the embodiments described in this application may be implemented by hardware, software, firmware, middleware, microcode or a combination thereof.
  • the processing unit can be implemented in one or more application specific integrated circuits (Application Specific Integrated Circuits, ASIC), digital signal processor (Digital Signal Processing, DSP), digital signal processing device (DSP Device, DSPD), programmable Logic device (Programmable Logic Device, PLD), Field-Programmable Gate Array (Field-Programmable Gate Array, FPGA), general-purpose processor, controller, microcontroller, microprocessor, other devices used to perform the functions described in this application electronic unit or its combination.
  • the techniques described herein can be implemented through modules (eg, procedures, functions, and so on) that perform the functions described herein.
  • Software codes can be stored in memory and executed by a processor. Memory can be implemented within the processor or external to the processor.
  • the first processor 1903 is further configured to execute the method described in any one of the foregoing embodiments when running the computer program.
  • This embodiment provides an encoder, which may include a first determining unit, a first filtering unit, and an encoding unit.
  • the encoding end uses the initial point cloud and the reconstructed point cloud to calculate the filter coefficient for filtering processing and, after determining to perform filtering processing on the reconstructed point cloud, passes the corresponding filter coefficient to the decoder; correspondingly, the decoding end can directly decode the filter coefficient and then use it to filter the reconstructed point cloud, thereby optimizing the reconstructed point cloud and improving the quality of the point cloud.
  • when the encoder uses the neighboring points for filtering processing, the current point itself is taken into account at the same time, so that the filtered value also depends on the attribute value of the current point itself; when determining whether to filter the reconstructed point cloud, not only the PSNR performance index but also the rate-distortion cost is considered in the trade-off; in addition, a method for determining the correspondence between the reconstructed point cloud and the initial point cloud in the geometry-lossy, attribute-lossy case is proposed. These improvements broaden the scope of application, improve the quality of the point cloud, save bit rate, and improve encoding and decoding efficiency.
  • FIG. 20 shows a schematic structural diagram of a decoder 200 provided by an embodiment of the present application.
  • the decoder 200 may include: a decoding unit 2001 and a second filtering unit 2002; wherein,
  • the decoding unit 2001 is configured to decode the code stream and determine the filter identification information, wherein the filter identification information is used to determine whether to filter the reconstructed point cloud corresponding to the initial point cloud; and, if the filter identification information indicates that the reconstructed point cloud is to be filtered, decode the code stream and determine the filter coefficient;
  • the second filtering unit 2002 is configured to filter the K target points corresponding to the first point in the reconstructed point cloud by using the filter coefficient, and determine the filtered point cloud corresponding to the reconstructed point cloud; wherein the K target points include the first point and (K-1) neighboring points adjacent to the first point, K is an integer greater than 1, and the first point represents any point in the reconstructed point cloud.
  • the decoder 200 may further include a second determining unit 2003 configured to determine K target points corresponding to the first point in the reconstructed point cloud;
  • the second filtering unit 2002 is specifically configured to filter the K target points corresponding to the first point in the reconstructed point cloud by using the filter coefficient, and determine the filter value of the attribute information of the first point in the reconstructed point cloud; and, after the filter value of the attribute information of at least one point in the reconstructed point cloud is determined, determine the filtered point cloud according to the filter value of the attribute information of the at least one point.
  • the second determining unit 2003 is specifically configured to search for a preset number of candidate points in the reconstructed point cloud using the K-nearest-neighbor search method, based on the first point in the reconstructed point cloud; calculate the distance values between the first point and the preset number of candidate points, and select (K-1) distance values from the obtained distance values, the (K-1) distance values all being less than the remaining distance values; determine (K-1) neighboring points according to the candidate points corresponding to the (K-1) distance values; and determine the first point and the (K-1) neighboring points as the K target points corresponding to the first point.
  • the decoding unit 2001 is further configured to decode the code stream and determine the residual value of the attribute information of the points in the initial point cloud;
  • the second determining unit 2003 is further configured to, after the predicted values of the attribute information of the points in the initial point cloud are determined, determine the reconstructed values of the attribute information of the points in the initial point cloud according to the predicted values and the residual values of the attribute information of the points in the initial point cloud; and construct the reconstructed point cloud based on the reconstructed values of the attribute information of the points in the initial point cloud.
  • the attribute information includes a color component;
  • the color component includes at least one of the following: a first color component, a second color component, and a third color component; wherein, if the color component conforms to the RGB color space, the first color component, the second color component and the third color component are determined in order as the R component, the G component and the B component; if the color component conforms to the YUV color space, the first color component, the second color component and the third color component are determined in order as the Y component, the U component and the V component.
  • the decoding unit 2001 is further configured to decode the code stream and determine the filter identification information of the component to be processed; wherein the filter identification information of the component to be processed is used to indicate whether to perform filtering on the component to be processed of the attribute information of the reconstructed point cloud.
  • the second determination unit 2003 is further configured to determine that the component to be processed of the attribute information of the reconstructed point cloud is to be filtered if the value of the filter identification information of the component to be processed is the first value; and to determine that the component to be processed of the attribute information of the reconstructed point cloud is not to be filtered if the value of the filter identification information of the component to be processed is the second value;
  • the second filtering unit 2002 is further configured to decode the code stream and determine the filter coefficient corresponding to the component to be processed if the value of the filter identification information of the component to be processed is the first value.
  • the decoding unit 2001 is further configured to decode the code stream, and determine the filter identification information of the first color component, the filter identification information of the second color component, and the filter identification information of the third color component; wherein, the first color The filter identification information of the component is used to indicate whether to filter the first color component of the attribute information of the reconstructed point cloud, and the filter identification information of the second color component is used to indicate whether to filter the second color component of the attribute information of the reconstructed point cloud.
  • the filtering identification information of the third color component is used to indicate whether to perform filtering processing on the third color component of the attribute information of the reconstructed point cloud.
  • the second determination unit 2003 is further configured to determine to perform filtering processing on the first color component of the attribute information of the reconstructed point cloud if the value of the filter identification information of the first color component is the first value; If the value of the filter identification information of the first color component is the second value, then it is determined not to filter the first color component of the attribute information of the reconstructed point cloud;
  • the second filtering unit 2002 is further configured to decode the code stream and determine the filter coefficient corresponding to the first color component if the value of the filter identification information of the first color component is the first value.
  • the second determining unit 2003 is further configured to determine to perform filtering processing on the second color component of the attribute information of the reconstructed point cloud if the value of the filtering identification information of the second color component is the first value; If the value of the filtering identification information of the second color component is the second value, it is determined not to filter the second color component of the attribute information of the reconstructed point cloud;
  • the second filtering unit 2002 is further configured to decode the code stream and determine the filter coefficient corresponding to the second color component if the value of the filter identification information of the second color component is the first value.
  • the second determining unit 2003 is further configured to determine to perform filtering processing on the third color component of the attribute information of the reconstructed point cloud if the value of the filtering identification information of the third color component is the first value; If the value of the filtering identification information of the third color component is the second value, it is determined not to filter the third color component of the attribute information of the reconstructed point cloud;
  • the second filtering unit 2002 is further configured to decode the code stream and determine the filter coefficient corresponding to the third color component if the value of the filter identification information of the third color component is the first value.
  • the second determining unit 2003 is further configured to determine that the filter identification information indicates filtering of the reconstructed point cloud if at least one of the filter identification information of the first color component, the filter identification information of the second color component, and the filter identification information of the third color component takes the first value; and to determine that the filter identification information indicates no filtering of the reconstructed point cloud if the filter identification information of the first color component, the filter identification information of the second color component, and the filter identification information of the third color component all take the second value.
  • the second filtering unit 2002 is further configured to not execute the step of decoding the code stream and determining the filter coefficient if the filter identification information indicates that no filtering process is performed on the reconstructed point cloud.
  • the second filtering unit 2002 is further configured to perform color space conversion on the filtered point cloud if the color components of the points in the filtered point cloud do not conform to the RGB color space, so that the color components of the points in the filtered point cloud conform to the RGB color space.
  • a "unit" may be a part of a circuit, a part of a processor, a part of a program or software, etc.; of course, it may also be a module, or it may be non-modular.
  • each component in this embodiment may be integrated into one processing unit, each unit may exist separately physically, or two or more units may be integrated into one unit.
  • the above-mentioned integrated units can be implemented in the form of hardware or in the form of software function modules.
  • if the integrated units are implemented in the form of software function modules and are not sold or used as independent products, they can be stored in a computer-readable storage medium.
  • this embodiment provides a computer storage medium, which is applied to the decoder 200, and the computer storage medium stores a computer program, and when the computer program is executed by the second processor, any one of the preceding embodiments is implemented. the method described.
  • the decoder 200 may include: a second communication interface 2101 , a second memory 2102 and a second processor 2103 ; all components are coupled together through a second bus system 2104 .
  • the second bus system 2104 is used to realize connection and communication between these components.
  • the second bus system 2104 also includes a power bus, a control bus and a status signal bus.
  • for clarity of illustration, the various buses are labeled as the second bus system 2104 in FIG. 21; wherein,
  • the second communication interface 2101 is used for receiving and sending signals during the process of sending and receiving information with other external network elements;
  • the second memory 2102 is used to store computer programs that can run on the second processor 2103;
  • the second processor 2103 is configured to, when running the computer program, execute:
  • decoding the code stream to determine the filter coefficients;
  • using the filter coefficients to filter the K target points corresponding to the first point in the reconstructed point cloud, and determining the filtered point cloud corresponding to the reconstructed point cloud; wherein the K target points include the first point and (K-1) neighbor points adjacent to the first point, K is an integer greater than 1, and the first point represents any point in the reconstructed point cloud.
  • the second processor 2103 is further configured to execute the method described in any one of the foregoing embodiments when running the computer program.
  • the hardware function of the second memory 2102 is similar to that of the first memory 1902, and the hardware function of the second processor 2103 is similar to that of the first processor 1903; details will not be described here.
  • This embodiment provides a decoder, which may include a decoding unit and a second filtering unit.
  • since the encoder takes the current point itself into account when using the neighboring points for filtering, the filtered value also depends on the attribute value of the current point itself; and when determining whether to filter the reconstructed point cloud, not only the PSNR performance index but also the rate-distortion cost is considered for the trade-off; in addition, the determination of the correspondence between the reconstructed point cloud and the initial point cloud in the case of geometric loss and attribute loss is proposed, which not only broadens the scope of application and improves the quality of the point cloud, but also saves bit rate and improves encoding and decoding efficiency.
  • FIG. 22 shows a schematic diagram of the composition and structure of a codec system provided by the embodiment of the present application.
  • the codec system 220 may include an encoder 2201 and a decoder 2202 .
  • the encoder 2201 may be the encoder described in any one of the foregoing embodiments
  • the decoder 2202 may be the decoder described in any one of the foregoing embodiments.
  • the encoding end uses the initial point cloud and the reconstructed point cloud to calculate the filter coefficients for filtering processing, and only after it is determined to perform filtering processing on the reconstructed point cloud, the corresponding The filter coefficients are passed to the decoder; correspondingly, the decoder can directly decode to obtain the filter coefficients, and then use the filter coefficients to filter the reconstructed point cloud, thereby realizing the optimization of the reconstructed point cloud and improving the quality of the point cloud.
  • the encoder determines the initial point cloud and the reconstructed point cloud corresponding to the initial point cloud; determines the filter coefficients according to the initial point cloud and the reconstructed point cloud; uses the filter coefficients to filter the K target points corresponding to the first point in the reconstructed point cloud to determine the filtered point cloud corresponding to the reconstructed point cloud, wherein the K target points include the first point and (K-1) neighbor points adjacent to the first point, K is an integer greater than 1, and the first point represents any point in the reconstructed point cloud; determines the filter identification information according to the reconstructed point cloud and the filtered point cloud, wherein the filter identification information is used to determine whether to filter the reconstructed point cloud; and, if the filter identification information indicates filtering of the reconstructed point cloud, encodes the filter identification information and the filter coefficients, and writes the resulting encoded bits into the code stream.
  • the code stream is decoded to determine the filter identification information; wherein, the filter identification information is used to determine whether to filter the reconstructed point cloud corresponding to the initial point cloud; if the filter identification information indicates that the reconstructed point cloud is filtered, then Decode the code stream to determine the filter coefficient; use the filter coefficient to filter the K target points corresponding to the first point in the reconstructed point cloud, and determine the filtered point cloud corresponding to the reconstructed point cloud; wherein, the K target points include the first point And (K-1) neighboring points adjacent to the first point, K is an integer greater than 1, and the first point represents any point in the reconstructed point cloud.
  • the encoding end uses the initial point cloud and the reconstructed point cloud to calculate the filter coefficients for filtering processing, and after determining to perform filtering processing on the reconstructed point cloud, the corresponding filter coefficients are passed to the decoder; correspondingly, the decoding end It can be directly decoded to obtain the filter coefficient, and then use the filter coefficient to filter the reconstructed point cloud, thereby realizing the optimization of the reconstructed point cloud and improving the quality of the point cloud.


Abstract

Embodiments of the present application disclose an encoding and decoding method, a bitstream, an encoder, a decoder, and a storage medium. The method includes: decoding the bitstream to determine filter identification information, where the filter identification information is used to determine whether to filter the reconstructed point cloud corresponding to an initial point cloud; if the filter identification information indicates that the reconstructed point cloud is to be filtered, decoding the bitstream to determine filter coefficients; and using the filter coefficients to filter the K target points corresponding to a first point in the reconstructed point cloud to determine the filtered point cloud corresponding to the reconstructed point cloud, where the K target points include the first point and (K-1) neighbor points adjacent to the first point, K is an integer greater than 1, and the first point represents any point in the reconstructed point cloud. In this way, not only can the quality of the point cloud be improved, but the bit rate can also be saved, thereby improving encoding and decoding efficiency.

Description

Encoding and Decoding Method, Bitstream, Encoder, Decoder, and Storage Medium — Technical Field

Embodiments of the present application relate to the technical field of video encoding and decoding, and in particular to an encoding and decoding method, a bitstream, an encoder, a decoder, and a storage medium.

Background

At present, in the Geometry-based Point Cloud Compression (G-PCC) codec framework, the encoding of point cloud attribute information is mainly aimed at the encoding of color information. First, the color information is converted from the RGB color space to the YUV color space. Then, the point cloud is recolored with the reconstructed geometry information so that the unencoded attribute information corresponds to the reconstructed geometry information. In the encoding of color information, three prediction/transform schemes are mainly used: the Predicting Transform, the Lifting Transform, and the Region Adaptive Hierarchical Transform (RAHT), finally generating a binary bitstream.

However, in the related art, the existing G-PCC codec framework only performs a basic reconstruction of the initial point cloud. In the case of lossy attribute encoding, the reconstructed point cloud may differ considerably from the initial point cloud after reconstruction, with relatively severe distortion, which affects the quality of the entire point cloud.
Summary

Embodiments of the present application provide an encoding and decoding method, a bitstream, an encoder, a decoder, and a storage medium, which can not only improve the quality of the point cloud but also save bit rate, thereby improving encoding and decoding efficiency.

The technical solutions of the embodiments of the present application may be implemented as follows:
In a first aspect, an embodiment of the present application provides an encoding method applied to an encoder, the method including:

determining an initial point cloud and a reconstructed point cloud corresponding to the initial point cloud;

determining filter coefficients according to the initial point cloud and the reconstructed point cloud;

using the filter coefficients to filter the K target points corresponding to a first point in the reconstructed point cloud to determine the filtered point cloud corresponding to the reconstructed point cloud, where the K target points include the first point and (K-1) neighbor points adjacent to the first point, K is an integer greater than 1, and the first point represents any point in the reconstructed point cloud;

determining filter identification information according to the reconstructed point cloud and the filtered point cloud, where the filter identification information is used to determine whether to filter the reconstructed point cloud;

if the filter identification information indicates that the reconstructed point cloud is to be filtered, encoding the filter identification information and the filter coefficients, and writing the resulting encoded bits into the bitstream.

In a second aspect, an embodiment of the present application provides a bitstream generated by bit-encoding information to be encoded, where the information to be encoded includes at least one of the following: residual values of the attribute information of the points in the initial point cloud, filter identification information, and filter coefficients.

In a third aspect, an embodiment of the present application provides a decoding method applied to a decoder, the method including:

decoding the bitstream to determine filter identification information, where the filter identification information is used to determine whether to filter the reconstructed point cloud corresponding to the initial point cloud;

if the filter identification information indicates that the reconstructed point cloud is to be filtered, decoding the bitstream to determine the filter coefficients;

using the filter coefficients to filter the K target points corresponding to a first point in the reconstructed point cloud to determine the filtered point cloud corresponding to the reconstructed point cloud, where the K target points include the first point and (K-1) neighbor points adjacent to the first point, K is an integer greater than 1, and the first point represents any point in the reconstructed point cloud.
In a fourth aspect, an embodiment of the present application provides an encoder, which includes a first determination unit, a first filtering unit, and an encoding unit; wherein,

the first determination unit is configured to determine the initial point cloud and the reconstructed point cloud corresponding to the initial point cloud, and to determine the filter coefficients according to the initial point cloud and the reconstructed point cloud;

the first filtering unit is configured to use the filter coefficients to filter the K target points corresponding to a first point in the reconstructed point cloud and determine the filtered point cloud corresponding to the reconstructed point cloud, where the K target points include the first point and (K-1) neighbor points adjacent to the first point, K is an integer greater than 1, and the first point represents any point in the reconstructed point cloud;

the first determination unit is further configured to determine the filter identification information according to the reconstructed point cloud and the filtered point cloud, where the filter identification information is used to determine whether to filter the reconstructed point cloud;

the encoding unit is configured to, if the filter identification information indicates that the reconstructed point cloud is to be filtered, encode the filter identification information and the filter coefficients, and write the resulting encoded bits into the bitstream.

In a fifth aspect, an embodiment of the present application provides an encoder, which includes a first memory and a first processor; wherein,

the first memory is configured to store a computer program capable of running on the first processor;

the first processor is configured to execute the method of the first aspect when running the computer program.

In a sixth aspect, an embodiment of the present application provides a decoder, which includes a decoding unit and a second filtering unit; wherein,

the decoding unit is configured to decode the bitstream and determine the filter identification information, where the filter identification information is used to determine whether to filter the reconstructed point cloud corresponding to the initial point cloud; and, if the filter identification information indicates that the reconstructed point cloud is to be filtered, to decode the bitstream and determine the filter coefficients;

the second filtering unit is configured to use the filter coefficients to filter the K target points corresponding to a first point in the reconstructed point cloud and determine the filtered point cloud corresponding to the reconstructed point cloud, where the K target points include the first point and (K-1) neighbor points adjacent to the first point, K is an integer greater than 1, and the first point represents any point in the reconstructed point cloud.

In a seventh aspect, an embodiment of the present application provides a decoder, which includes a second memory and a second processor; wherein,

the second memory is configured to store a computer program capable of running on the second processor;

the second processor is configured to execute the method of the third aspect when running the computer program.

In an eighth aspect, an embodiment of the present application provides a computer storage medium storing a computer program which, when executed by a first processor, implements the method of the first aspect, or, when executed by a second processor, implements the method of the third aspect.

Embodiments of the present application provide an encoding and decoding method, a bitstream, an encoder, a decoder, and a storage medium. In the encoder: the initial point cloud and the reconstructed point cloud corresponding to the initial point cloud are determined; the filter coefficients are determined according to the initial point cloud and the reconstructed point cloud; the filter coefficients are used to filter the K target points corresponding to a first point in the reconstructed point cloud, determining the filtered point cloud corresponding to the reconstructed point cloud, where the K target points include the first point and (K-1) neighbor points adjacent to the first point, K is an integer greater than 1, and the first point represents any point in the reconstructed point cloud; the filter identification information is determined according to the reconstructed point cloud and the filtered point cloud, where the filter identification information is used to determine whether to filter the reconstructed point cloud; and, if the filter identification information indicates that the reconstructed point cloud is to be filtered, the filter identification information and the filter coefficients are encoded, and the resulting encoded bits are written into the bitstream. In the decoder: the bitstream is decoded to determine the filter identification information, where the filter identification information is used to determine whether to filter the reconstructed point cloud corresponding to the initial point cloud; if the filter identification information indicates that the reconstructed point cloud is to be filtered, the bitstream is decoded to determine the filter coefficients; and the filter coefficients are used to filter the K target points corresponding to a first point in the reconstructed point cloud, determining the filtered point cloud corresponding to the reconstructed point cloud, where the K target points include the first point and (K-1) neighbor points adjacent to the first point, K is an integer greater than 1, and the first point represents any point in the reconstructed point cloud. In this way, the encoder calculates the filter coefficients for the filtering process using the initial point cloud and the reconstructed point cloud, and only after determining that the reconstructed point cloud is to be filtered does it pass the corresponding filter coefficients to the decoder; correspondingly, the decoder can directly decode to obtain the filter coefficients and then use them to filter the reconstructed point cloud, thereby optimizing the reconstructed point cloud and improving the quality of the point cloud. Moreover, when the encoder uses the neighbor points for filtering, the current point itself is also taken into account, so that the filtered value also depends on the attribute value of the current point itself, further improving the quality of the point cloud, saving bit rate, and improving encoding and decoding efficiency.
Brief Description of the Drawings

FIG. 1 is a schematic diagram of the composition framework of a G-PCC encoder;

FIG. 2 is a schematic diagram of the composition framework of a G-PCC decoder;

FIG. 3 is a schematic structural diagram of zero-run-length encoding;

FIG. 4 is a first schematic flowchart of an encoding method provided by an embodiment of the present application;

FIG. 5 is a second schematic flowchart of an encoding method provided by an embodiment of the present application;

FIG. 6 is a schematic diagram of the relationship between the BD-Rate performance index and the value of K for different components, provided by an embodiment of the present application;

FIG. 7 is a schematic diagram of the relationship between encoding/decoding time and the value of K, provided by an embodiment of the present application;

FIG. 8 is a first schematic flowchart of a decoding method provided by an embodiment of the present application;

FIG. 9 is a second schematic flowchart of a decoding method provided by an embodiment of the present application;

FIG. 10 is a schematic diagram of the overall encoding and decoding flow provided by an embodiment of the present application;

FIG. 11 is a schematic flowchart of the encoder-side filtering process provided by an embodiment of the present application;

FIG. 12 is a schematic flowchart of the decoder-side filtering process provided by an embodiment of the present application;

FIG. 13 is a schematic comparison diagram of the test results of the predicting transform under the CY test condition, provided by an embodiment of the present application;

FIG. 14 is a schematic comparison diagram of the test results of the lifting transform under the C1 test condition, provided by an embodiment of the present application;

FIG. 15 is a schematic comparison diagram of the test results of the RAHT transform under the C1 test condition, provided by an embodiment of the present application;

FIG. 16 is a schematic diagram of the test results of the lifting transform under the C2 test condition, provided by an embodiment of the present application;

FIG. 17 is a schematic diagram of the test results of the RAHT transform under the C2 test condition, provided by an embodiment of the present application;

FIG. 18 is a schematic diagram of the composition structure of an encoder provided by an embodiment of the present application;

FIG. 19 is a schematic diagram of the specific hardware structure of an encoder provided by an embodiment of the present application;

FIG. 20 is a schematic diagram of the composition structure of a decoder provided by an embodiment of the present application;

FIG. 21 is a schematic diagram of the specific hardware structure of a decoder provided by an embodiment of the present application;

FIG. 22 is a schematic diagram of the composition structure of a codec system provided by an embodiment of the present application.
Detailed Description

In order to understand the features and technical content of the embodiments of the present application in more detail, the implementation of the embodiments is described below with reference to the accompanying drawings; the drawings are provided for reference and illustration only and are not intended to limit the embodiments of the present application.

Unless otherwise defined, all technical and scientific terms used herein have the same meanings as commonly understood by those skilled in the technical field of the present application. The terms used herein are only for the purpose of describing the embodiments of the present application and are not intended to limit the present application.

In the following description, reference is made to "some embodiments", which describe a subset of all possible embodiments; it can be understood that "some embodiments" may be the same subset or different subsets of all possible embodiments, and may be combined with each other without conflict. It should also be noted that the terms "first/second/third" in the embodiments of the present application are only used to distinguish similar objects and do not represent a specific ordering of the objects; it can be understood that, where permitted, "first/second/third" may be interchanged in a specific order or sequence, so that the embodiments described herein can be implemented in an order other than that illustrated or described herein.

Before further describing the embodiments of the present application in detail, the nouns and terms involved in the embodiments are explained; they are subject to the following interpretations:
Geometry-based Point Cloud Compression (G-PCC or GPCC)

Video-based Point Cloud Compression (V-PCC or VPCC)

Wiener Filter

Octree

Bounding box

K Nearest Neighbor (KNN)

Level of Detail (LOD)

Predicting Transform

Lifting Transform

Region Adaptive Hierarchical Transform (RAHT)

Peak Signal-to-Noise Ratio (PSNR)

Sum of Squared Errors (SSE)

Minimum Mean Squared Error (MMSE)

Rate-Distortion Optimization (RDO)

Luminance component (L or Y)

Chroma blue component (Cb)

Chroma red component (Cr)
A point cloud is a three-dimensional representation of an object's surface; the point cloud (data) of an object's surface can be acquired by collection devices such as photoelectric radar, lidar, laser scanners, and multi-view cameras.

A point cloud refers to a set of massive three-dimensional points; a point in the point cloud may include position information and attribute information of the point. For example, the position information of a point may be three-dimensional coordinate information, also called the geometry information of the point. The attribute information of a point may include color information and/or reflectance, etc. For example, the color information may be information in any color space, such as RGB information, where R denotes red (Red, R), G denotes green (Green, G), and B denotes blue (Blue, B); or it may be luminance-chrominance (YCbCr, YUV) information, where Y denotes luma, Cb (U) denotes blue chroma, and Cr (V) denotes red chroma.

For a point cloud obtained according to the principle of laser measurement, the points may include the three-dimensional coordinate information of the points and the laser reflectance of the points. For a point cloud obtained according to the principle of photogrammetry, the points may include the three-dimensional coordinate information and the color information of the points. For a point cloud obtained by combining the principles of laser measurement and photogrammetry, the points may include the three-dimensional coordinate information, the laser reflectance, and the color information of the points.
Point clouds may be divided according to the acquisition approach into:

the first type, static point clouds: the object is stationary and the device acquiring the point cloud is also stationary;

the second type, dynamic point clouds: the object is moving but the device acquiring the point cloud is stationary;

the third type, dynamically acquired point clouds: the device acquiring the point cloud is moving.

For example, by purpose, point clouds fall into two broad categories:

Category one: machine-perception point clouds, which can be used in scenarios such as autonomous navigation systems, real-time inspection systems, geographic information systems, visual sorting robots, and rescue and disaster-relief robots;

Category two: human-eye-perception point clouds, which can be used in point cloud application scenarios such as digital cultural heritage, free-viewpoint broadcasting, three-dimensional immersive communication, and three-dimensional immersive interaction.

Since a point cloud is a set of massive points, storing the point cloud not only consumes a great deal of memory but is also unfavorable for transmission, and there is no bandwidth large enough to support transmitting the point cloud directly over the network without compression; therefore, the point cloud needs to be compressed.

To date, the point cloud coding frameworks that can compress point clouds are the G-PCC codec framework or the V-PCC codec framework provided by the Moving Picture Experts Group (MPEG), or the AVS-PCC codec framework provided by the Audio Video coding Standard (AVS). The G-PCC codec framework can be used to compress the first type of static point clouds and the third type of dynamically acquired point clouds, while the V-PCC codec framework can be used to compress the second type of dynamic point clouds. The embodiments of the present application are mainly described with respect to the G-PCC codec framework.
It can be understood that, in the point cloud G-PCC codec framework, the point cloud of the input three-dimensional image model is partitioned into slices, and each slice is then encoded independently.

FIG. 1 is a schematic diagram of the composition framework of a G-PCC encoder, which is applied to a point cloud encoder. In this G-PCC encoding framework, the point cloud data to be encoded is first partitioned into multiple slices. Within each slice, the geometry information of the point cloud and the attribute information corresponding to each point are encoded separately. In the geometry encoding process, coordinate transformation is performed on the geometry information so that the entire point cloud is contained within a bounding box, followed by quantization; this quantization step mainly plays a scaling role, and because quantization rounds values, the geometry information of some points becomes identical, so whether to remove duplicate points is then decided based on parameters. The processes of quantization and duplicate-point removal are also known as the voxelization process. Next, octree partitioning is performed on the bounding box. In the octree-based geometry encoding flow, the bounding box is divided into eight equal sub-cubes, and the non-empty sub-cubes (those containing points of the point cloud) continue to be divided into eight, until the leaf nodes obtained by partitioning are 1×1×1 unit cubes, at which point partitioning stops; the points in the leaf nodes are arithmetically encoded to generate a binary geometry bitstream, i.e., the geometry bitstream. In the triangle soup (trisoup)-based geometry encoding process, octree partitioning is likewise performed first, but unlike octree-based geometry encoding, trisoup does not need to partition the point cloud level by level down to 1×1×1 unit cubes; instead, partitioning stops when the side length of the sub-blocks (blocks) is W. Based on the surface formed by the distribution of the points in each block, up to twelve intersection points (vertices) between that surface and the twelve edges of the block are obtained; the vertices are arithmetically encoded (surface fitting based on the intersections), generating a binary geometry bitstream, i.e., the geometry bitstream. The vertices are also used in the geometry reconstruction process, and the reconstructed geometry information is used when encoding the attributes of the point cloud.

In the attribute encoding process, after geometry encoding is completed and the geometry information is reconstructed, color conversion is performed to convert the color information (i.e., attribute information) from the RGB color space to the YUV color space. Then the point cloud is recolored with the reconstructed geometry information so that the unencoded attribute information corresponds to the reconstructed geometry information. Attribute encoding is mainly performed on color information; in the color encoding process there are two main transform methods: one is the distance-based lifting transform that relies on LOD partitioning, and the other is the direct RAHT transform. Both convert the color information from the spatial domain to the frequency domain, obtaining high-frequency and low-frequency coefficients through the transform, and finally the coefficients are quantized (i.e., quantized coefficients). Finally, after the geometry encoding data processed by octree partitioning and surface fitting and the attribute encoding data processed into quantized coefficients are slice-synthesized, the vertex coordinates of each block are encoded in turn (i.e., arithmetic encoding), generating a binary attribute bitstream, i.e., the attribute bitstream.

FIG. 2 is a schematic diagram of the composition framework of a G-PCC decoder, which is applied to a point cloud decoder. In this G-PCC decoding framework, for the acquired binary bitstream, the geometry bitstream and the attribute bitstream are first decoded independently. When decoding the geometry bitstream, the geometry information of the point cloud is obtained through arithmetic decoding, octree synthesis, surface fitting, geometry reconstruction, and inverse coordinate transformation; when decoding the attribute bitstream, the attribute information of the point cloud is obtained through arithmetic decoding, inverse quantization, the LOD-based inverse lifting transform or the RAHT-based inverse transform, and inverse color conversion. The three-dimensional image model of the point cloud data to be encoded is restored based on the geometry information and the attribute information.
In the G-PCC encoder shown in FIG. 1 above, LOD partitioning is mainly used for the two attribute transform modes of the point cloud: the Predicting Transform and the Lifting Transform.

It can also be understood that LOD partitioning takes place after the geometry reconstruction of the point cloud, at which time the geometry coordinate information of the point cloud is directly available. The point cloud is divided into multiple LODs according to the Euclidean distances between the points; the colors of the points in the LODs are decoded in turn, the number of zeros in the zero-run-length coding (denoted zero_cnt) is calculated, and the residuals are then decoded according to the value of zero_cnt.

Specifically, the decoding operation follows the zero-run-length encoding method: first, the size of the first zero_cnt in the bitstream is parsed; if it is greater than 0, there are zero_cnt consecutive residuals equal to 0; if zero_cnt equals 0, the attribute residual of this point is non-zero, and the corresponding residual value is decoded; the decoded residual value is then inverse-quantized and added to the color prediction value of the current point to obtain the reconstruction value of that point, and this operation continues until all the points of the point cloud have been decoded. Illustratively, FIG. 3 is a schematic structural diagram of zero-run-length encoding. As shown in FIG. 3, if the residual values are 73, 50, 32, and 15, then zero_cnt equals 0; if a residual value is 0 and there is only one such value, then zero_cnt equals 1; if there are N consecutive residual values of 0, then zero_cnt equals N.

That is, the color reconstruction value of the current point (denoted reconstructedColor) is calculated from the color prediction value under the current prediction mode (denoted predictedColor) and the inverse-quantized color residual value under the current prediction mode (denoted residual), i.e., reconstructedColor = predictedColor + residual.
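The zero-run decoding and reconstruction rule above (reconstructedColor = predictedColor + residual) can be sketched as follows. This is a minimal illustration, not the G-PCC entropy decoder: `zero_run_decode` and `reconstruct_color` are hypothetical names, the symbol stream is assumed to be already entropy-decoded, and inverse quantization of the residual is omitted.

```python
def zero_run_decode(symbols, num_points):
    """Expand a zero-run-coded symbol stream into per-point residuals.

    The stream alternates zero_cnt values and non-zero residuals:
    zero_cnt == 0 means the next symbol is a non-zero residual;
    zero_cnt == N > 0 means the next N residuals are all zero.
    """
    residuals = []
    i = 0
    while len(residuals) < num_points:
        zero_cnt = symbols[i]
        i += 1
        if zero_cnt == 0:
            residuals.append(symbols[i])  # non-zero residual follows
            i += 1
        else:
            residuals.extend([0] * zero_cnt)  # run of zero residuals
    return residuals


def reconstruct_color(predicted, residual):
    # reconstructedColor = predictedColor + residual (dequantization omitted)
    return predicted + residual
```

For example, the residual sequence [73, 0, 0, 50] would be carried by the symbol stream [0, 73, 2, 0, 50].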
Further, the current point will serve as the nearest neighbor of points in subsequent LODs, and the color reconstruction value of the current point is used for attribute prediction of the subsequent points.

However, the existing G-PCC codec framework only performs a basic reconstruction of the point cloud sequence; after reconstruction, no further processing is carried out to improve the quality of the color attributes of the reconstructed point cloud. This may cause the reconstructed point cloud to differ considerably from the original point cloud, with relatively severe distortion, thereby affecting the quality of the entire point cloud.
On this basis, embodiments of the present application propose an encoding and decoding method that can affect the arithmetic-encoding and subsequent parts of the G-PCC encoding framework, and can also affect the parts after attribute reconstruction in the G-PCC decoding framework.

That is, an embodiment of the present application proposes an encoding method that can be applied to the arithmetic-encoding and subsequent parts shown in FIG. 1. Correspondingly, an embodiment of the present application also proposes a decoding method that can be applied to the parts after attribute reconstruction shown in FIG. 2.

In this way, the encoding end uses the initial point cloud and the reconstructed point cloud to calculate the filter coefficients for the filtering process, and only after determining that the reconstructed point cloud is to be filtered does it pass the corresponding filter coefficients to the decoder; correspondingly, the decoding end can directly decode to obtain the filter coefficients and then use them to filter the reconstructed point cloud, thereby optimizing the reconstructed point cloud and improving the quality of the point cloud. In addition, when the encoding end uses the neighbor points for filtering, the current point itself is also taken into account, so that the filtered value also depends on the attribute value of the current point itself; moreover, when determining whether to filter the reconstructed point cloud, not only the PSNR performance index but also the rate-distortion cost is considered for the trade-off; furthermore, the determination of the correspondence between the reconstructed point cloud and the initial point cloud under geometry-lossy and attribute-lossy conditions is also proposed, which not only broadens the scope of application and improves the quality of the point cloud, but also saves bit rate and improves encoding and decoding efficiency.

The embodiments of the present application are described clearly and completely below with reference to the accompanying drawings.
In an embodiment of the present application, refer to FIG. 4, which shows a first schematic flowchart of an encoding method provided by an embodiment of the present application. As shown in FIG. 4, the method may include:

S401: determine an initial point cloud and a reconstructed point cloud corresponding to the initial point cloud.

It should be noted that the encoding method described in the embodiments of the present application specifically refers to a point cloud encoding method, which can be applied to a point cloud encoder (in the embodiments of the present application, simply referred to as an "encoder").

It should also be noted that, in the embodiments of the present application, the initial point cloud and the reconstructed point cloud corresponding to it may first be determined, and the initial point cloud and the reconstructed point cloud are then used to calculate the filter coefficients. In addition, for a point in the initial point cloud, when that point is being encoded it can serve as the point to be encoded in the initial point cloud, and multiple already-encoded points exist around it.

Further, in the embodiments of the present application, a point in the initial point cloud corresponds to one piece of geometry information and one piece of attribute information, where the geometry information represents the spatial position of the point and the attribute information represents the attribute value of the point (e.g., a color component value).

Here, the attribute information may include color components, specifically color information in any color space. Illustratively, the attribute information may be color information in the RGB space, color information in the YUV space, or color information in the YCbCr space, etc., which is not specifically limited in the embodiments of the present application.

Further, in the embodiments of the present application, the color components may include a first color component, a second color component, and a third color component. Thus, if the color components conform to the RGB color space, the first, second, and third color components may be determined in order as the R, G, and B components; if the color components conform to the YUV color space, they may be determined in order as the Y, U, and V components; if the color components conform to the YCbCr color space, they may be determined in order as the Y, Cb, and Cr components.

It can be understood that, in the embodiments of the present application, for a point in the initial point cloud, the attribute information of that point may be a color component, or may be reflectance or another attribute, which is likewise not specifically limited in the embodiments of the present application.

Further, in the embodiments of the present application, for a point in the initial point cloud, the predicted value and residual value of the point's attribute information may first be determined, and then the reconstruction value of the point's attribute information is further calculated from the predicted value and residual value so as to construct the reconstructed point cloud. Specifically, for a point in the initial point cloud, when determining the predicted value of the point's attribute information, the geometry and attribute information of several target neighbor points of that point may be used, combined with the point's own geometry information, to predict the point's attribute information, thereby obtaining the corresponding predicted value and subsequently the corresponding reconstruction value. In this way, after the reconstruction value of a point's attribute information is determined, that point can serve as the nearest neighbor of points in subsequent LODs, and its reconstruction value is used to continue attribute prediction for subsequent points; the reconstructed point cloud can thus be constructed.

That is, in the embodiments of the present application, the initial point cloud can be obtained directly through the point cloud reading function of the codec program, while the reconstructed point cloud is obtained after attribute encoding, attribute reconstruction, and geometry compensation. In addition, the reconstructed point cloud of the embodiments may be the reconstructed point cloud output after decoding, or may serve as a reference for decoding subsequent point clouds; furthermore, the reconstructed point cloud here may be inside the prediction loop, i.e., used as an in-loop filter and serving as a reference for decoding subsequent point clouds, or outside the prediction loop, i.e., used as a post filter and not serving as a reference for decoding subsequent point clouds; the embodiments of the present application do not specifically limit this.
It can also be understood that the embodiments of the present application are implemented under lossy attribute encoding, and two cases can further be distinguished: geometry-lossless with attribute-lossy encoding, and geometry-lossy with attribute-lossy encoding. Therefore, in one possible implementation, for S401, determining the initial point cloud and the reconstructed point cloud corresponding to it may include: if the initial point cloud is encoded and reconstructed using the first type of encoding method, obtaining a first reconstructed point cloud and using the first reconstructed point cloud as the reconstructed point cloud.

In another possible implementation, for S401, determining the initial point cloud and the reconstructed point cloud corresponding to the initial point cloud may include:

if the initial point cloud is encoded and reconstructed using the second type of encoding method, obtaining a second reconstructed point cloud; performing geometry recovery on the second reconstructed point cloud to obtain a recovered reconstructed point cloud, and using the recovered reconstructed point cloud as the reconstructed point cloud.

It should be noted that, in the embodiments of the present application, the first type of encoding method is used to indicate geometry-lossless and attribute-lossy encoding of the initial point cloud, and the second type of encoding method is used to indicate geometry-lossy and attribute-lossy encoding of the initial point cloud.

It should also be noted that, in the geometry-lossless case, the number and coordinates of the points in the resulting reconstructed point cloud do not change; in the geometry-lossy case, depending on the configured bit rate, the number and coordinates of the points and even the bounding box of the entire reconstructed point cloud may change considerably, at which time geometry recovery is required.

Further, in some embodiments, performing geometry recovery on the second reconstructed point cloud to obtain the recovered reconstructed point cloud may include:

performing geometry compensation on the second reconstructed point cloud to obtain an intermediate reconstructed point cloud;

performing scaling on the intermediate reconstructed point cloud to obtain the recovered reconstructed point cloud, where the recovered reconstructed point cloud has the same size and the same geometric positions as the initial point cloud.

That is, in the embodiments of the present application, the geometry-coordinate preprocessing of the reconstructed point cloud is performed first; after geometry compensation, using the scale in the configuration parameters, the geometry coordinates of each point are divided by that scale in accordance with the reconstruction process. Through this processing, the reconstructed point cloud can be restored to a size comparable to the bounding box of the initial point cloud, with the same geometric positions (i.e., the offset of the geometry coordinates is eliminated).

It should also be noted that, before the initial point cloud and the reconstructed point cloud are input to the filter, the correspondence between the points of the reconstructed point cloud and the points of the initial point cloud must also be ensured. Therefore, in one possible implementation, the method may further include: in the case of the first type of encoding method, determining the corresponding points of the reconstructed point cloud in the initial point cloud according to a first preset search method, and establishing the correspondence between the points of the reconstructed point cloud and the points of the initial point cloud.

In another possible implementation, the method may further include: in the case of the second type of encoding method, determining the corresponding points of the reconstructed point cloud in the initial point cloud according to a second preset search method, constructing a matched point cloud from the determined corresponding points, using the matched point cloud as the initial point cloud, and establishing the correspondence between the points of the reconstructed point cloud and the points of the initial point cloud.
In a specific embodiment, determining the corresponding points of the reconstructed point cloud in the initial point cloud according to the second preset search method may include:

based on the current point in the reconstructed point cloud, searching the initial point cloud for a second constant number of points using the second preset search method;

calculating the distance values between the current point and the second constant number of points respectively, and selecting the minimum distance value among them;

if there is one minimum distance value, determining the point corresponding to the minimum distance value as the corresponding point of the current point;

if there are multiple minimum distance values, determining the corresponding point of the current point from the multiple points corresponding to the minimum distance values.

It should be noted that, in the embodiments of the present application, determining the corresponding point of the current point from the multiple points corresponding to the minimum distance value may include: randomly selecting one of those points as the corresponding point of the current point; or fusing the multiple points corresponding to the minimum distance value and determining the fused point as the corresponding point of the current point; this is not specifically limited here.

It should also be noted that, in the embodiments of the present application, the first preset search method is a K-nearest-neighbor search for a first constant number of points, and the second preset search method is a K-nearest-neighbor search for a second constant number of points. In a specific embodiment, the first constant value equals 1 and the second constant value equals 5; i.e., the first preset search method is a KNN search with k = 1 and the second preset search method is a KNN search with k = 5, though this is not specifically limited here either.

That is, for the first type of encoding method, i.e., geometry-lossless and attribute-lossy encoding, the number and coordinates of the points in the resulting reconstructed point cloud do not change, so matching with the corresponding points of the initial point cloud is relatively easy: a KNN search with k = 1 suffices to obtain the point-to-point correspondence. In the geometry-lossy and attribute-lossy encoding method, depending on the configured bit rate, the number and coordinates of the points and even the bounding box of the whole point cloud change considerably. To find the point-to-point correspondence more accurately in this case, the geometry-coordinate preprocessing of the reconstructed point cloud is first performed; after geometry-coordinate compensation, using the scale in the configuration parameters, the geometry coordinates of each point are divided by that scale in accordance with the reconstruction process; through this processing, the reconstructed point cloud can be restored to a size comparable to the original point cloud and the coordinate offset eliminated, so that the geometric positions coincide. Then, for each point of the reconstructed point cloud, a KNN search with k = 5 is performed in the original point cloud, the distances between these 5 points and the current point are calculated in turn, and the nearest neighbor is saved; if there are multiple nearest neighbors at the same minimum distance, one of them may be selected at random, or they may be fused into one point, i.e., the mean of their attribute values is taken as the attribute value of the current point's nearest matching point. In this way, a point cloud with the same number of points as the reconstructed point cloud and a one-to-one correspondence between points can be obtained, which is input to the filter together with the reconstructed point cloud as the true initial point cloud.
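The correspondence construction described above can be sketched as a brute-force nearest-neighbor match. This is a hedged illustration under simplifying assumptions: `match_points` is a hypothetical helper, the search is brute force rather than the KD-tree a real implementation would use, the `scale` division stands in for the geometry-recovery step, and tied nearest neighbors are fused by averaging their attributes (one of the two options mentioned in the text).

```python
import numpy as np

def match_points(recon_xyz, orig_xyz, orig_attr, k=5, scale=1.0):
    """For each reconstructed point, find its counterpart in the original
    cloud: take the k nearest original points, keep the closest, and if
    several are tied at the minimum distance, average their attributes."""
    recon_xyz = np.asarray(recon_xyz, dtype=float) / scale  # undo coding scale
    orig_xyz = np.asarray(orig_xyz, dtype=float)
    orig_attr = np.asarray(orig_attr, dtype=float)
    matched = np.empty((len(recon_xyz), orig_attr.shape[1]))
    for i, p in enumerate(recon_xyz):
        d = np.linalg.norm(orig_xyz - p, axis=1)   # distances to all originals
        nearest = np.argsort(d)[:k]                # k candidates (k=5 in the text)
        ties = nearest[np.isclose(d[nearest], d[nearest[0]])]
        matched[i] = orig_attr[ties].mean(axis=0)  # fuse tied nearest neighbors
    return matched
```

The returned array has one attribute row per reconstructed point, i.e., the one-to-one "true initial point cloud" fed to the filter.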
S402: determine the filter coefficients according to the initial point cloud and the reconstructed point cloud.

It should be noted that, in the embodiments of the present application, the points of the initial point cloud correspond to the points of the reconstructed point cloud. Before input to the filter for coefficient calculation, the K target points corresponding to the points of the reconstructed point cloud first need to be determined. Here, taking the current point as an example, the K target points corresponding to the current point in the reconstructed point cloud may include the current point and (K-1) neighbor points adjacent to the current point, where K is an integer greater than 1.

In some embodiments, determining the filter coefficients according to the initial point cloud and the reconstructed point cloud may include: determining the filter coefficients according to the points of the initial point cloud and the K target points corresponding to the points of the reconstructed point cloud.

In a specific embodiment, determining the K target points corresponding to a point of the reconstructed point cloud may include:

based on the first point in the reconstructed point cloud, searching the reconstructed point cloud for a preset number of candidate points using K-nearest-neighbor search;

calculating the distance values between the first point and the preset number of candidate points respectively, and selecting (K-1) distance values from the resulting preset number of distance values such that the (K-1) distance values are all smaller than the remaining distance values;

determining (K-1) neighbor points from the candidate points corresponding to the (K-1) distance values, and determining the first point and the (K-1) neighbor points as the K target points corresponding to the first point.

It should be noted that the first point may be any point in the reconstructed point cloud. Here, taking the first point as an example, a preset number of candidate points may be searched in the reconstructed point cloud using K-nearest-neighbor search, the distance values between the first point and these candidate points calculated, and the (K-1) neighbor points closest to the first point selected from them; the first point itself and the (K-1) neighbor points are determined as the final K target points. That is, the K target points consist of the first point itself plus the (K-1) neighbor points geometrically closest to the first point, which together form the K target points corresponding to the first point in the reconstructed point cloud.
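As a minimal sketch of the K-target-point selection above (assuming a brute-force distance computation; `k_target_points` and the candidate-pool size are illustrative names and defaults, not from the source):

```python
import numpy as np

def k_target_points(recon_xyz, i, K, n_candidates=64):
    """Return indices of the K target points for point i: the point itself
    plus its (K-1) geometrically nearest neighbors, taken from a pool of
    n_candidates candidate points (the 'preset number' in the text)."""
    pts = np.asarray(recon_xyz, dtype=float)
    d = np.linalg.norm(pts - pts[i], axis=1)
    d[i] = np.inf                          # the point itself is added back explicitly
    pool = np.argsort(d)[:n_candidates]    # preset number of candidate points
    neighbors = pool[:K - 1]               # the (K-1) smallest distances
    return [i] + neighbors.tolist()
```

A KD-tree would replace the brute-force distance computation for large clouds; the returned index list orders the point itself first, matching the "first point plus (K-1) neighbors" convention.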
It should also be noted that the filter here may be an adaptive filter, e.g., a neural-network-based filter, a Wiener filter, etc., which is not specifically limited here.

In the embodiments of the present application, taking the Wiener filter as an example, the role of the Wiener filter is mainly to calculate the filter coefficients and, at the same time, to judge whether the quality of the point cloud is improved after Wiener filtering. That is, the filter coefficients in the embodiments of the present application may be the coefficients of the Wiener filtering process, i.e., the coefficients output by the Wiener filter.

Here, the Wiener filter is a linear filter whose optimality criterion is the minimization of the mean squared error. Under certain constraints, the square of the difference between its output and a given function (usually called the desired output) is minimized, which through mathematical operations can ultimately be reduced to solving a Toeplitz equation. The Wiener filter is also known as the least-squares filter.

Wiener filtering is a method of filtering a signal mixed with noise by using the correlation characteristics and spectral characteristics of stationary random processes, and is currently one of the basic filtering methods. The specific algorithm of Wiener filtering is as follows:
For a sequence of (noisy) input signals, the output when the filter length or order is M is as follows,

y(n) = Σ_{m=0}^{M-1} h(m)·x(n−m)      (1)

where M is the length or order of the filter, y(n) is the output signal, and x(n) is the (noisy) input signal sequence.

Converting equation (1) into matrix form gives,

y(n) = H(m)×X(n)      (2)

Given the desired signal d(n), the error between the known signal and the desired signal, denoted e(n), can be calculated as follows,

e(n) = d(n)−y(n) = d(n)−H(m)×X(n), m = 0, 1, ... M      (3)

The Wiener filter takes the minimum mean squared error as its objective function, so the objective function is expressed as follows,

Min E(e(n)^2) = E[(d(n)−H(m)×X(n))^2]      (4)

When the filter coefficients are optimal, the derivative of the objective function with respect to the coefficients should be 0, i.e.:

∂E[e(n)^2]/∂H(m) = 0      (5)

that is:

2E[(d(n)−H(m)×X(n))]×X(n) = 0      (6)

E[d(n)X(n)]−H(m)E[X(n)X(n)] = 0      (7)

which can further be expressed as:

Rxd−H×Rxx = 0      (8)

where Rxd and Rxx are, respectively, the cross-correlation matrix between the input signal and the desired signal and the autocorrelation matrix of the input signal. Solving for the optimum according to the Wiener-Hopf equation, the filter coefficients H can be obtained:

H = Rxx^(−1)×Rxd      (9)
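Equation (9) can be illustrated with a small least-squares sketch. This is a generic Wiener solve, not the application's implementation; `wiener_coefficients` is a hypothetical name, and `lstsq` is used instead of an explicit matrix inverse for numerical robustness.

```python
import numpy as np

def wiener_coefficients(X, d):
    """Solve H = Rxx^{-1} Rxd from equation (9).
    X: (N, M) matrix whose rows are noisy input windows;
    d: (N,) desired signal samples."""
    Rxx = X.T @ X            # autocorrelation matrix of the noisy input
    Rxd = X.T @ d            # cross-correlation with the desired signal
    H, *_ = np.linalg.lstsq(Rxx, Rxd, rcond=None)
    return H
```

If d was in fact generated by an M-tap filter applied to X, the solve recovers those taps exactly (up to floating-point error).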
Further, when performing Wiener filtering, a noisy signal and a desired signal are required; in the embodiments of the present application, for the point cloud codec framework, the two correspond respectively to the reconstructed point cloud and the initial point cloud, so the input of the Wiener filter can be determined. Thus, for S402, determining the filter coefficients according to the initial point cloud and the reconstructed point cloud may include: inputting the initial point cloud and the reconstructed point cloud into the Wiener filter for calculation, and outputting the filter coefficients.

In the embodiments of the present application, inputting the initial point cloud and the reconstructed point cloud into the Wiener filter for calculation and outputting the filter coefficients may include:

determining a first attribute parameter according to the original values of the attribute information of the points of the initial point cloud;

determining a second attribute parameter according to the reconstruction values of the attribute information of the K target points corresponding to the points of the reconstructed point cloud;

determining the filter coefficients based on the first attribute parameter and the second attribute parameter.

In a specific embodiment, determining the filter coefficients based on the first attribute parameter and the second attribute parameter may include:

determining a cross-correlation parameter according to the first attribute parameter and the second attribute parameter;

determining an autocorrelation parameter according to the second attribute parameter;

performing coefficient calculation according to the cross-correlation parameter and the autocorrelation parameter to obtain the filter coefficients.

It should be noted that, for the initial point cloud and the reconstructed point cloud, taking the color components of the attribute information as an example, if the color components conform to the YUV space, then when determining the filter coefficients according to the initial and reconstructed point clouds, the first and second attribute parameters of each color component (e.g., the Y, U, and V components) may first be determined based on the original values and reconstruction values of that component, after which the filter coefficients used for the filtering process can be further determined from the first and second attribute parameters.

Specifically, the first attribute parameter is determined from the original values of the attribute information of at least one point of the initial point cloud; the second attribute parameter is determined from the reconstruction values of the attribute information of the K target points corresponding to at least one point of the reconstructed point cloud, where the K target points include the current point and (K-1) neighbor points adjacent to the current point.

It should also be noted that determining the filter coefficients with the Wiener filter also involves the order of the Wiener filter. In the embodiments of the present application, the order of the Wiener filter may be set equal to M, where the values of M and K may or may not be the same; this is not specifically limited here.

It should also be noted that, for the Wiener filter, the filter type may be used to indicate the filter order, and/or the filter shape, and/or the filter dimensionality; the filter shape includes diamond, rectangle, etc., and the filter dimensionality includes one-dimensional, two-dimensional, or higher.

Thus, in the embodiments of the present application, different filter types may correspond to Wiener filters of different orders, e.g., Wiener filters of order 12, 32, or 128; different types may also correspond to filters of different dimensionalities, e.g., one-dimensional or two-dimensional filters, which is not specifically limited here. That is, if a 16th-order filter needs to be determined, 16 points may be used to determine a 16th-order asymmetric filter, or an 8th-order one-dimensional symmetric filter, or other configurations (e.g., more specialized two- or three-dimensional filters); the filter is not specifically limited here.

Simply put, in the embodiments of the present application, taking the color components as an example, in the process of determining the filter coefficients according to the initial and reconstructed point clouds, the first attribute parameter of a color component may first be determined from the original values of that color component of the points of the initial point cloud; meanwhile, the second attribute parameter may be determined from the reconstruction values of that color component of the K target points corresponding to the points of the reconstructed point cloud; finally, the filter-coefficient vector corresponding to the color component can be determined based on the first and second attribute parameters. In this way, the first and second attribute parameters are calculated for one color component, and its filter-coefficient vector is determined from them. After traversing all the color components (e.g., the Y, U, and V components), the filter-coefficient vector of each color component is obtained, and the filter coefficients can be determined from the filter-coefficient vectors of all the color components.
Specifically, when determining the filter coefficients according to the initial and reconstructed point clouds, the cross-correlation parameter of a color component may first be determined from the first and second attribute parameters corresponding to that component; meanwhile, the autocorrelation parameter of the component may be determined from the second attribute parameter; the filter-coefficient vector of the component can then be determined from the cross-correlation and autocorrelation parameters; finally, all the color components are traversed, and the filter coefficients are determined from the filter-coefficient vectors of all the color components.

Illustratively, in the embodiments of the present application, taking the YUV-space color components of a point cloud sequence as an example, suppose the filter order is K, i.e., the K target points corresponding to each point of the reconstructed point cloud are used to calculate the optimal coefficients, i.e., the filter coefficients. Here the K target points may include the point itself and the (K-1) neighbor points adjacent to it.

Suppose the point cloud sequence is n. Let the vector S(n) denote the original values of all the points of the initial point cloud under a given color component (e.g., the Y component), i.e., S(n) is the first attribute parameter formed by the original Y-component values of all the points of the initial point cloud; let the matrix P(n,k) denote the reconstruction values of the K target points corresponding to all the points of the reconstructed point cloud under the same color component (e.g., the Y component), i.e., P(n,k) is the second attribute parameter formed by the reconstructed Y-component values of the K target points corresponding to all the points of the reconstructed point cloud.

Specifically, the Wiener filtering algorithm may be executed as follows:

the cross-correlation parameter B(k) is calculated from the first attribute parameter S(n) and the second attribute parameter P(n,k), as follows,

B(k) = P(n,k)^T×S(n)      (10)

the autocorrelation parameter A(k,k) is calculated from the second attribute parameter P(n,k), as follows,

A(k,k) = P(n,k)^T×P(n,k)      (11)

according to the Wiener-Hopf equation, the optimal coefficients H(k) under the Y component, i.e., the filter-coefficient vector H(k) of the Kth-order Wiener filter under the Y component, are as follows,

H(k) = A(k,k)^(−1)×B(k)      (12)

Then, the U and V components may be traversed in the same way, finally determining the filter-coefficient vector under the U component and the filter-coefficient vector under the V component, whereupon the filter coefficients can be determined from the filter-coefficient vectors under all the color components.
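Equations (10)-(13) for one color component can be sketched as follows. This is a hedged illustration: `wiener_filter_component` is a hypothetical name, and the neighbor indices are assumed to be precomputed K target points, with each row listing the point itself followed by its (K-1) neighbors.

```python
import numpy as np

def wiener_filter_component(recon_attr, neighbor_idx, orig_attr):
    """Per-component Wiener filtering.
    recon_attr: (N,) reconstructed values of one color component;
    neighbor_idx: (N, K) indices of each point's K target points;
    orig_attr: (N,) original values of the same component.
    Returns the K coefficients H and the filtered values R = P @ H."""
    P = np.asarray(recon_attr, dtype=float)[np.asarray(neighbor_idx)]  # P(n, k)
    S = np.asarray(orig_attr, dtype=float)                             # S(n)
    B = P.T @ S                                # eq. (10): cross-correlation
    A = P.T @ P                                # eq. (11): autocorrelation
    H, *_ = np.linalg.lstsq(A, B, rcond=None)  # eq. (12): coefficient vector
    return H, P @ H                            # eq. (13): filtered values
```

Running the same routine once per Y, U, and V component reproduces the traversal described in the text.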
Further, the process of determining the filter coefficients here is based on the YUV color space; if the initial point cloud or the reconstructed point cloud does not conform to the YUV color space (e.g., it is in the RGB color space), color space conversion is also required to make it conform to the YUV color space. Therefore, in some embodiments, the method may further include: if the color components of the points of the initial point cloud conform to the RGB color space, performing color space conversion on the initial point cloud so that the color components of its points conform to the YUV color space; if the color components of the points of the reconstructed point cloud conform to the RGB color space, performing color space conversion on the reconstructed point cloud so that the color components of its points conform to the YUV color space.

That is, when determining the filter coefficients using the initial and reconstructed point clouds, the first and second attribute parameters corresponding to each color component may first be determined based on the color components of the points of the two clouds, after which the filter-coefficient vector corresponding to each color component can be determined, and finally the filter coefficients can be obtained from the filter-coefficient vectors of all the color components.
S403:利用滤波系数对重建点云中第一点对应的K个目标点进行滤波处理,确定重建点云对应的滤波后点云。
需要说明的是,第一点表示重建点云中的任意点。另外,K个目标点可以包括第一点以及与第一点相邻的(K-1)个近邻点,K为大于1的整数。其中,这里的(K-1)个近邻点具体是指与第一点几何距离最接近的(K-1)个近邻点。
在本申请实施例中,编码器在根据初始点云和重建点云,确定滤波系数之后,还可以进一步利用滤波系数确定重建点云对应的滤波后点云。具体地,在一些实施例中,首先需要确定重建点云中第一点对应的K个目标点,这里的第一点表示重建点云中的任意点。然后,利用滤波系数对重建点云中第一点对应的K个目标点进行滤波处理,确定重建点云对应的滤波后点云,可以包括:
利用滤波系数对重建点云中第一点对应的K个目标点进行滤波处理,确定重建点云中第一点的属性信息的滤波值;
在确定出重建点云中至少一个点的属性信息的滤波值之后,根据至少一个点的属性信息的滤波值确定滤波后点云。
在一种具体的实施例中,确定重建点云中第一点对应的K个目标点,可以包括:
基于重建点云中的第一点,利用K近邻搜索方式在重建点云中搜索预设值个候选点;
分别计算第一点与预设值个候选点之间的距离值,从所得到的预设值个距离值中选取(K-1)个距离值,且(K-1)个距离值均小于预设值个距离值中剩余的距离值;
根据(K-1)个距离值对应的候选点确定(K-1)个近邻点,将第一点和(K-1)个近邻点确定为第一点对应的K个目标点。
需要说明的是,以第一点为例,可以利用K近邻搜索方式在重建点云中搜索预设值个候选点,计算第一点与这些候选点之间的距离值,然后从这些候选点中选取与第一点距离最近的(K-1)个近邻点;也就是说,除了第一点自身之外,另外还包括与第一点几何距离最接近的(K-1)个近邻点,总共组成了重建点云中第一点对应的K个目标点。
还需要说明的是,仍以属性信息中的颜色分量为例,在利用滤波系数对重建点云中第一点对应的K个目标点进行滤波处理时,可以包括:利用颜色分量对应的滤波系数向量对重建点云中第一点对应的K个目标点进行滤波处理,得到重建点云中第一点的颜色分量的滤波值;根据重建点云中点的颜色分量的滤波值,得到滤波后点云。
具体而言,可以根据颜色分量对应的滤波系数向量和第二属性参数,确定重建点云中每一个点的颜色分量的滤波值;然后基于重建点云中每一个点的颜色分量的滤波值,得到滤波后点云。
可以理解的是,在本申请实施例中,在使用维纳滤波器对重建点云进行滤波处理时,需要含噪信号与期望信号,在点云编解码框架中,可以将重建点云作为含噪信号,将初始点云作为期望信号;因此,可以将初始点云和重建点云同时输入至维纳滤波器中,即维纳滤波器的输入为初始点云和重建点云,维纳滤波器的输出为滤波系数;在获得滤波系数之后,还可以基于该滤波系数完成对重建点云的滤波处理,获得对应的滤波后点云。
也就是说,在本申请实施例中,滤波系数是基于原始点云和重建点云获得的;因此,将滤波系数作用于重建点云上,可以最大限度恢复出原始点云。换句话说,在根据滤波系数对重建点云进行滤波时,对于其中一个颜色分量,可以基于该颜色分量对应的滤波系数向量,结合该颜色分量下的第二属性参数,确定出该颜色分量对应的滤波值。
示例性地,在本申请实施例中,将Y分量下的滤波系数向量H(k)作用在重建点云也就是说第二属性参数P(n,k)上,可以获得Y分量下的滤波值R(n),如下所示,
R(n)=P(n,k)×H(k)         (13)
接着,可以按照上述方法遍历U分量和V分量,最终确定出U分量下的滤波值和V分量下的滤波值,进而便可以利用全部颜色分量下的滤波值来确定出重建点云对应的滤波后点云。
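式(13)的滤波过程可以用如下Python代码示意。其中P的每一行为一个点对应的K个目标点的重建值,H为该颜色分量下的滤波系数向量;这只是一个参考性草图:

```python
def apply_filter(P, H):
    """式(13):R(n) = P(n,k) × H(k)。
    每个点的滤波值 = 其K个目标点重建值与K个滤波系数的内积。"""
    return [sum(p * h for p, h in zip(row, H)) for row in P]
```

对Y、U、V分量分别调用一次,即可得到滤波后点云各颜色分量的滤波值。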
S404:根据重建点云和滤波后点云,确定滤波标识信息;其中,滤波标识信息用于确定是否对重建点云进行滤波处理。
在本申请实施例中,编码器在利用滤波系数获得重建点云对应的滤波后点云之后,可以进一步根据重建点云和滤波后点云确定初始点云对应的滤波标识信息。
需要说明的是,在本申请实施例中,滤波标识信息可以用于确定是否对重建点云进行滤波处理;进一步地,滤波标识信息还可以用于确定对重建点云中的哪一个或者哪几个颜色分量进行滤波处理。
还需要说明的是,在本申请实施例中,颜色分量可以包括下述至少之一:第一颜色分量、第二颜色分量和第三颜色分量;其中,属性信息的待处理分量可以为第一颜色分量、第二颜色分量和第三颜色分量中的任意一项。参见图5,其示出了本申请实施例提供的一种编码方法的流程示意图二。如图5所示,对于滤波标识信息的确定,该方法可以包括:
S501:确定重建点云的属性信息的待处理分量的第一代价值,以及确定滤波后点云的属性信息的待处理分量的第二代价值。
可以理解地,在一种可能的实现方式中,所述确定重建点云的属性信息的待处理分量的第一代价值,可以包括:利用率失真代价方式对重建点云的属性信息的待处理分量进行代价值计算,将所得到的第一率失真值作为所述第一代价值;
所述确定滤波后点云的属性信息的待处理分量的第二代价值,可以包括:利用率失真代价方式对滤波后点云的属性信息的待处理分量进行代价值计算,将所得到的第二率失真值作为所述第二代价值。
在该实现方式下,可以利用率失真代价方式来确定滤波标识信息。首先,利用率失真代价方式分别确定重建点云的属性信息的待处理分量对应的第一代价值和滤波后点云的属性信息的待处理分量对应的第二代价值;然后根据第一代价值和第二代价值的比较结果确定出待处理分量的滤波标识信息。其中,这里的代价值可以是用于失真度量的失真值,也可以是率失真代价结果等等,本申请实施例不作具体限定。
需要说明的是,为了更准确衡量滤波前后性能提升与否,本申请实施例同时对滤波后点云与重建点云进行率失真权衡,这里可以利用率失真代价方式来计算综合质量提升与码流增加后的率失真值。其中,第一率失真值和第二率失真值可以分别表征相同的颜色分量下重建点云和滤波后点云各自的率失真代价结果,用于表示滤波前后点云的压缩效率。对于第一率失真值和第二率失真值的计算,具体计算公式如下所示,
J=D+λ×R_i      (14)
在这里,J为率失真值,D是初始点云与重建点云或滤波后点云的SSE,即对应点误差的平方和;λ是与量化参数QP有关的量,本申请实施例可以选取
Figure PCTCN2021143955-appb-000003
R_i为该颜色分量的码流大小,用比特(bit)表示,本申请实施例可以近似选取每一个颜色分量的码流大小R_i=R_all/3+R_coef;其中,R_all为整个点云属性码流大小,R_coef为每一个颜色分量系数占用的码流大小(对于滤波后点云,可以选取R_coef=K×sizeof(int),重建点云不考虑此项)。
这样,在获得重建点云和滤波后点云之后,可以根据上述方法计算出重建点云的待处理分量的第一率失真值以及滤波后点云的待处理分量的第二率失真值,将其作为第一代价值和第二代价值,然后根据两者的比较结果确定出待处理分量的滤波标识信息。
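式(14)的率失真代价计算以及滤波前后代价的比较可以用如下Python代码示意。λ与R_i的具体取值方式如上文所述,此处作为入参传入;函数名为示例性假设:

```python
def rd_cost(sse, lam, bits):
    """式(14):J = D + λ × R_i,其中D为对应点误差的平方和(SSE)。"""
    return sse + lam * bits

def filtering_helps(sse_rec, bits_rec, sse_flt, bits_flt, lam):
    """第二代价值(滤波后)小于第一代价值(重建)时,判定滤波有增益。
    对滤波后点云,bits_flt中应计入滤波系数占用的码流R_coef。"""
    return rd_cost(sse_flt, lam, bits_flt) < rd_cost(sse_rec, lam, bits_rec)
```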
还可以理解地,在另一种可能的实现方式中,所述确定重建点云的属性信息的待处理分量的第一代价值,可以包括:利用率失真代价方式对重建点云的属性信息的待处理分量进行代价值计算,得到第一率失真值;利用预设性能衡量指标对重建点云的属性信息的待处理分量进行性能值计算,得到第一性能值;根据第一率失真值和第一性能值,确定第一代价值。
所述确定滤波后点云的属性信息的待处理分量的第二代价值,可以包括:利用率失真代价方式对滤波后点云的属性信息的待处理分量进行代价值计算,得到第二率失真值;利用预设性能衡量指标对滤波后点云的属性信息的待处理分量进行性能值计算,得到第二性能值;根据第二率失真值和第二性能值, 确定第二代价值。
在该实现方式下,不仅可以考虑利用率失真代价方式确定的率失真值,还可以考虑利用预设性能衡量指标确定的性能值(例如PSNR值)。在这里,第一性能值和第二性能值可以分别表征相同的颜色分量下重建点云和滤波后点云各自的编解码性能。示例性地,在本申请实施例中,第一性能值可以为重建点云中点的颜色分量的PSNR值,第二性能值可以为滤波后点云中点的颜色分量的PSNR值。
需要说明的是,考虑到G-PCC编解码框架最终目的是进行点云压缩,压缩率越高代表整体性能越好。本申请实施例不仅可以考虑PSNR值决定是否在解码端进行滤波,同时还可以利用率失真代价方式进行滤波前后的率失真权衡。也就是说,在获得重建点云和滤波后点云之后,可以根据上述方法计算出重建点云的待处理分量的第一性能值和第一率失真值,以得到第一代价值;以及计算出滤波后点云的待处理分量的第二性能值和第二率失真值,以得到第二代价值,然后根据第一代价值和第二代价值的比较结果确定出待处理分量的滤波标识信息。
这样,不仅考虑了属性值质量的提升,同时计算了将滤波系数等信息写入码流所需的代价,综合二者表现判断滤波后压缩性能是否有了提升,从而决定编码端是否进行滤波系数的传递。
S502:根据第一代价值和第二代价值,确定待处理分量的滤波标识信息。
在一种可能的实现方式中,所述根据第一代价值和第二代价值,确定待处理分量的滤波标识信息,可以包括:
若第二代价值小于第一代价值,则确定待处理分量的滤波标识信息的取值为第一值;
若第二代价值大于第一代价值,则确定待处理分量的滤波标识信息的取值为第二值。
需要说明的是,对于第二代价值等于第一代价值的情况,这时候可以确定待处理分量的滤波标识信息的取值为第一值;或者,也可以确定待处理分量的滤波标识信息的取值为第二值。
还需要说明的是,如果待处理分量的滤波标识信息的取值为第一值,那么可以确定待处理分量的滤波标识信息指示对重建点云的属性信息的待处理分量进行滤波处理;或者,如果待处理分量的滤波标识信息的取值为第二值,那么可以确定待处理分量的滤波标识信息指示不对重建点云的属性信息的待处理分量进行滤波处理。
在另一种可能的实现方式中,所述根据第一代价值和第二代价值,确定待处理分量的滤波标识信息,可以包括:
若第二性能值大于第一性能值且第二率失真值小于第一率失真值,则确定待处理分量的滤波标识信息的取值为第一值;
若第二性能值小于第一性能值,则确定待处理分量的滤波标识信息的取值为第二值。
在又一种可能的实现方式中,所述根据第一代价值和第二代价值,确定待处理分量的滤波标识信息,可以包括:
若第二性能值大于第一性能值且第二率失真值小于第一率失真值,则确定待处理分量的滤波标识信息的取值为第一值;
若第二率失真值大于第一率失真值,则确定待处理分量的滤波标识信息的取值为第二值。
需要说明的是,对于第二性能值等于第一性能值或者第二率失真值等于第一率失真值的情况,此时可以确定待处理分量的滤波标识信息的取值为第一值;或者,也可以确定待处理分量的滤波标识信息的取值为第二值。
还需要说明的是,如果待处理分量的滤波标识信息的取值为第一值,那么可以确定待处理分量的滤波标识信息指示对重建点云的属性信息的待处理分量进行滤波处理;或者,如果待处理分量的滤波标识信息的取值为第二值,那么可以确定待处理分量的滤波标识信息指示不对重建点云的属性信息的待处理分量进行滤波处理。
还可以理解地,当待处理分量为颜色分量时,根据第一性能值、第一率失真值、第二性能值和第二率失真值,可以确定出滤波后点云相比于重建点云,这些颜色分量中某个或者某几个分量的性能值是否增加同时率失真代价是否有所下降。
在一种可能的实施方式中,当待处理分量为第一颜色分量时,对于S502来说,所述根据第一代价值和第二代价值,确定待处理分量的滤波标识信息,可以包括:
若第二性能值大于第一性能值且第二率失真值小于第一率失真值,确定第一颜色分量的滤波标识信息的取值为第一值;
若第二性能值小于第一性能值,或者第二率失真值大于第一率失真值,确定第一颜色分量的滤波标识信息的取值为第二值。
在另一种可能的实施方式中,当待处理分量为第二颜色分量时,对于S502来说,所述根据第一代价值和第二代价值,确定待处理分量的滤波标识信息,可以包括:
若第二性能值大于第一性能值且第二率失真值小于第一率失真值,确定第二颜色分量的滤波标识信息的取值为第一值;
若第二性能值小于第一性能值,或者第二率失真值大于第一率失真值,确定第二颜色分量的滤波标识信息的取值为第二值。
在又一种可能的实施方式中,当待处理分量为第三颜色分量时,对于S502来说,所述根据第一代价值和第二代价值,确定待处理分量的滤波标识信息,可以包括:
若第二性能值大于第一性能值且第二率失真值小于第一率失真值,确定第三颜色分量的滤波标识信息的取值为第一值;
若第二性能值小于第一性能值,或者第二率失真值大于第一率失真值,确定第三颜色分量的滤波标识信息的取值为第二值。
具体来说,在本申请实施例中,以Y分量的PSNR和率失真代价结果(Cost)为例,在根据第一性能值、第一率失真值、第二性能值和第二率失真值,确定Y分量的滤波标识信息时,如果重建点云对应的PSNR1大于滤波后点云对应的PSNR2、或者重建点云对应的Cost1小于滤波后点云对应的Cost2,即滤波后Y分量的PSNR值下降、或者滤波后Y分量的率失真代价增加,那么都可以认为滤波效果不佳,此时将设置Y分量的滤波标识信息指示不对Y分量进行滤波处理;相应地,如果重建点云对应的PSNR1小于滤波后点云对应的PSNR2且重建点云对应的Cost1大于滤波后点云对应的Cost2,即滤波后Y分量的PSNR得到提高而且率失真代价得到减小,那么可以认为滤波效果较好,此时将设置Y分量的滤波标识信息指示对Y分量进行滤波处理。
这样,本申请的实施例可以按照上述方法遍历U分量和V分量,最终确定出U分量的滤波标识信息和V分量的滤波标识信息,以便利用全部颜色分量的滤波标识信息来确定出最终的滤波标识信息。
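上述对单个颜色分量的判定逻辑(滤波后PSNR提升且率失真代价下降时取第一值,否则取第二值;这里以1/0分别表示第一值/第二值)可以用如下Python代码示意:

```python
def component_flag(psnr_rec, cost_rec, psnr_flt, cost_flt):
    """单个颜色分量的滤波标识:仅当滤波后PSNR增大且率失真代价减小时为1,
    其余情况(PSNR下降、或代价增加)均为0。"""
    return 1 if (psnr_flt > psnr_rec and cost_flt < cost_rec) else 0
```

对Y、U、V各调用一次,即可得到1×3的滤波标识数组[y u v]。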
S503:根据待处理分量的滤波标识信息,得到滤波标识信息。
需要说明的是,在本申请实施例中,所述利用待处理分量的滤波标识信息确定滤波标识信息,可以包括:在待处理分量为颜色分量的情况下,确定第一颜色分量的滤波标识信息、第二颜色分量的滤波标识信息和第三颜色分量的滤波标识信息;根据第一颜色分量的滤波标识信息、第二颜色分量的滤波标识信息和第三颜色分量的滤波标识信息,得到滤波标识信息。
也就是说,滤波标识信息可以为数组形式,具体为1×3的数组,其是由第一颜色分量的滤波标识信息、第二颜色分量的滤波标识信息和第三颜色分量的滤波标识信息组成的。例如,滤波标识信息可以用[y u v]表示,其中,y表示第一颜色分量的滤波标识信息,u表示第二颜色分量的滤波标识信息,v表示第三颜色分量的滤波标识信息。
由此可见,由于滤波标识信息包括有每一个颜色分量的滤波标识信息,因此,滤波标识信息不仅可以用于确定是否对重建点云进行滤波处理,还可以用于确定出具体对哪一个颜色分量进行滤波处理。
进一步地,在一些实施例中,该方法还可以包括:
若第一颜色分量的滤波标识信息的取值为第一值,则确定对重建点云的第一颜色分量进行滤波处理;若第一颜色分量的滤波标识信息的取值为第二值,则确定不对重建点云的第一颜色分量进行滤波处理;或者,
若第二颜色分量的滤波标识信息的取值为第一值,则确定对重建点云的第二颜色分量进行滤波处理;若第二颜色分量的滤波标识信息的取值为第二值,则确定不对重建点云的第二颜色分量进行滤波处理;或者,
若第三颜色分量的滤波标识信息的取值为第一值,则确定对重建点云的第三颜色分量进行滤波处理;若第三颜色分量的滤波标识信息的取值为第二值,则确定不对重建点云的第三颜色分量进行滤波处理。
也就是说,在本申请实施例中,如果某颜色分量对应的滤波标识信息为第一值,那么可以指示对该颜色分量进行滤波处理;如果某颜色分量对应的滤波标识信息为第二值,那么可以指示不对该颜色分量进行滤波处理。
进一步地,在一些实施例中,该方法还可以包括:
若第一颜色分量的滤波标识信息、第二颜色分量的滤波标识信息和第三颜色分量的滤波标识信息中至少一个为第一值,则确定滤波标识信息指示对重建点云进行滤波处理;
若第一颜色分量的滤波标识信息、第二颜色分量的滤波标识信息和第三颜色分量的滤波标识信息中全部为第二值,则确定滤波标识信息指示不对重建点云进行滤波处理。
也就是说,在本申请实施例中,如果这些颜色分量对应的滤波标识信息均为第二值,那么可以指示对这些颜色分量均不进行滤波处理,即可确定滤波标识信息指示不对重建点云进行滤波处理;相应地,如果这些颜色分量的滤波标识信息中至少一个为第一值,那么可以指示对至少一个颜色分量进行滤波处理,即可确定滤波标识信息指示对重建点云进行滤波处理。
另外,在本申请实施例中,第一值和第二值不同,而且第一值和第二值可以是参数形式,也可以是数字形式。具体地,这些颜色分量对应的标识信息可以是写入在概述(profile)中的参数,也可以是一个标志(flag)的取值,这里对此不作具体限定。
示例性地,对于第一值和第二值而言,第一值可以设置为1,第二值可以设置为0;或者,第一值还可以设置为true,第二值还可以设置为false;但是这里并不作具体限定。
S405:若滤波标识信息指示对重建点云进行滤波处理,则对滤波标识信息和滤波系数进行编码,将所得到的编码比特写入码流。
需要说明的是,在本申请实施例中,编码器在根据重建点云和滤波后点云确定出滤波标识信息之后,如果滤波标识信息指示对重建点云进行滤波处理,那么可以将滤波标识信息写入码流,同时还可以选择性地将滤波系数写入码流。
示例性地,以滤波标识信息可以用[1 0 1]为例,此时由于第一颜色分量的滤波标识信息的取值为1,即需要对重建点云的第一颜色分量进行滤波处理;第二颜色分量的滤波标识信息的取值为0,即不需要对重建点云的第二颜色分量进行滤波处理;第三颜色分量的滤波标识信息的取值为1,即需要对重建点云的第三颜色分量进行滤波处理;这时候仅需要将第一颜色分量对应的滤波系数和第三颜色分量对应的滤波系数写入码流,而无需将第二颜色分量对应的滤波系数写入码流。
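S405中按滤波标识信息选择性写入滤波系数的判断逻辑可以用如下Python代码示意。这里用字典代替真实的码流写入,仅表达"哪些信息需要编码"的判断,并非实际的熵编码过程;函数名为示例性假设:

```python
def filter_payload(flags, coeffs):
    """flags为1×3滤波标识数组(如[1, 0, 1]);coeffs[i]为第i个分量的系数向量。
    全部标识为第二值(0)时仅编码标识信息;否则只写入标识为1的分量的滤波系数。"""
    if not any(flags):
        return {"flags": flags}
    return {"flags": flags,
            "coeffs": {i: coeffs[i] for i, f in enumerate(flags) if f == 1}}
```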
还需要说明的是,在本申请实施例中,针对第二类编码方式,即几何有损且属性有损编码方式下,如果几何失真过大,这时候可以直接确定不对重建点云进行滤波处理。因此,在一些实施例中,该方法还可以包括:若重建点云的量化参数大于预设门限值,则确定滤波标识信息指示不对重建点云进行滤波处理。
也就是说,在几何有损的情况下,当几何失真较大(即量化参数大于预设门限值)时,这时候重建点云的点数比较少,如果写入差不多大小的滤波系数,将会导致率失真代价过大;因此,这种情况下将不再进行维纳滤波处理。
进一步地,在一些实施例中,当滤波标识信息指示不对重建点云进行滤波处理时,该方法还可以包括:仅对滤波标识信息进行编码,将所得到的编码比特写入码流。
需要说明的是,在本申请实施例中,编码器在利用滤波系数获得重建点云对应的滤波后点云之后,可以进一步根据重建点云和滤波后点云确定滤波标识信息。如果滤波标识信息指示对所述重建点云进行滤波处理,那么可以将滤波标识信息和滤波系数写入码流;如果滤波标识信息指示不对所述重建点云进行滤波处理,那么这时候不需要将滤波系数写入码流,仅需将滤波标识信息写入码流即可,以便后续通过码流传输至解码器中。
还需要说明的是,在本申请实施例中,该方法还可以包括:确定初始点云中点的属性信息的预测值;根据初始点云中点的属性信息的原始值和预测值,确定初始点云中点的属性信息的残差值;对初始点云中点的属性信息的残差值进行编码,并将所得到的编码比特写入码流。也就是说,编码器在确定出属性信息的残差值之后,还需要将属性信息的残差值写入码流,以便后续通过码流传输至解码器中。
进一步地,在本申请实施例中,对于维纳滤波器的阶数,这里假定阶数与K的取值相同,此时可以设置K等于16或其他数值。具体而言,基于loot_vox12_1200.ply序列,在CTC_C1(几何无损、属性有损)条件、r1-r6码率下,对利用Lifting Transform变换方式的滤波前点云和滤波后点云的BD-Rate增益以及运行时间进行了测试,其测试结果如图6和图7所示。在图6中,(a)示出了Y分量对应的BD-Rate性能指标随K值大小的变化情况,(b)示出了U分量对应的BD-Rate性能指标随K值大小的变化情况,(c)示出了V分量对应的BD-Rate性能指标随K值大小的变化情况;由此可见BD-Rate这一性能指标在K值过大时(如大于20左右)变化缓慢。而图7示出了所有码率的编解码总时间随K值大小的变化情况,由此可见编解码时间几乎与K值呈线性关系。根据图6和图7可以得到,如果一直增大K值对于性能提升并没有好处;同时,过大的K值在处理较大点云时,容易出现内存溢出的情况,使得普适性降低。因此,在本申请实施例中,可以设置K值为16,既能够保证滤波效果,又可以在一定程度上减少时间复杂度。
除此之外,在本申请实施例中,对于K值的确定,该方法还可以包括:
根据重建点云的点数,确定K的取值;和/或,
根据重建点云的量化参数,确定K的取值;和/或,
根据重建点云中点的邻域差异值,确定K的取值。
在本申请实施例中,邻域差异值可以是基于点的待滤波颜色分量与至少一个近邻点的待滤波颜色分量之间的分量差值计算得到的。
进一步地,所述根据重建点云中点的邻域差异值,确定K的取值,还可以包括:若重建点云中点的邻域差异值满足第一预设区间,则选择第一类别滤波器对所述重建点云进行滤波处理;若重建点云中点的邻域差异值满足第二预设区间,则选择第二类别滤波器对所述重建点云进行滤波处理;其中,第一类别滤波器的阶数与第二类别滤波器的阶数不同。
需要说明的是,一种可能的实施方式为:针对不同大小的点云滤波时选择不同的K,具体可以根据点的数目设定阈值进行判断,从而实现K的自适应选择;当点云中的点数较小时,可以适当增大K,着重提升质量;当点云中的点数较大时,可以适当减小K,这时候首先保证内存足够,而且运行速度不会太慢。
还需要说明的是,另一种可能的实施方式为:维纳滤波时考虑邻域差异值,由于点云轮廓或边界处的属性值变化较大,单个滤波器可能难以覆盖全点云的滤波,导致效果仍有提升的空间,尤其对于稀疏点云序列。因此,可以根据邻域差异值进行分类,不同类别应用不同的滤波器,从而传递多组滤波系数,可以实现自适应选择滤波器进行滤波以达到更好的效果。
还需要说明的是,又一种可能的实施方式为:在几何有损编码的情况下,当几何失真较大(即量化参数较大)时,不再进行维纳滤波;或者,根据量化参数的大小,对K进行自适应调整,以减少码流的开销。甚至还可以对需要传递的滤波系数或滤波标识信息进行熵编码,从而消除信息冗余,进一步提升编码效率。
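上述几种K值自适应选择的思路可以用如下Python代码示意。注意其中的阈值(点数门限、QP门限)与函数名均为说明用的假设值,并非本申请限定的取值;点数少时增大K以着重提升质量,点数多时减小K以保证内存与速度,几何失真过大(量化参数超过门限)时直接跳过滤波(K=0):

```python
def choose_k(num_points, qp, default_k=16,
             small_cloud=100_000, large_cloud=5_000_000, qp_limit=40):
    """自适应选择滤波阶数K的示意(阈值为示例性假设)。"""
    if qp > qp_limit:
        return 0                 # 几何失真过大:不再进行维纳滤波
    if num_points < small_cloud:
        return default_k * 2     # 小点云:适当增大K,着重提升质量
    if num_points > large_cloud:
        return default_k // 2    # 大点云:适当减小K,保证内存与运行速度
    return default_k
```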
简言之,本申请实施例提出了一种优化的对编解码颜色重建值的YUV分量做维纳滤波处理以更好地增强点云质量的技术,具体可以通过步骤S401~S405所提出的编码方法,在对点云序列的颜色重建值的颜色分量(如Y分量、U分量、V分量)进行滤波时,不仅考虑了几何无损、属性有损情况,还考虑了几何有损、属性有损情况下的点云维纳滤波后处理方法,同时对于质量提升的判断、邻点选择、K的大小进行了优化,从而可以实现有选择性地对解码端输出的重建点云进行质量增强操作,码流比特数大小增加较少,总体压缩性能得以提升。
需要说明的是,本申请实施例提出的编码方法,适用于任意的点云序列,尤其是对于点的分布较密集的序列以及中等码率序列,具有较为突出的优化效果。另外,本申请实施例提出的编码方法,可对全部三种属性变换(Predicting transform、Lifting transform、RAHT)编码方式进行操作,具有普适性。而对于极少数滤波效果不好的序列,该方法不会影响解码端重建过程,除编码端运行时间稍有增加外,滤波标识信息对于属性码流大小的影响可忽略不计。
需要说明的是,在本申请实施例提出的维纳滤波器可以在预测环路内,即作为inloop filter使用,可用作解码后续点云的参考;也可以在预测环路外,即作为post filter使用,不用作解码后续点云的参考。对此这里不进行具体限定。
需要说明的是,在本申请实施例中,如果本申请实施例提出的维纳滤波器为环路内滤波器,那么在确定进行滤波处理之后,需要将指示进行滤波处理的参数信息,如滤波标识信息写入码流,同时也需要将滤波系数写入码流;相应地,在确定不进行滤波处理之后,可以选择不将指示进行滤波处理的参数信息,如滤波标识信息写入码流,同时也不将滤波系数写入码流。反之,如果本申请实施例提出的维纳滤波器为后处理滤波器,一种情况下,该滤波器对应的滤波系数是位于一个单独的辅助信息数据单元(例如,补充增强信息SEI),那么可以选择不将指示进行滤波处理的参数信息,如滤波标识信息写入码流,同时也不将滤波系数写入码流,相应地,解码器在未获得该补充增强信息的情况下,即不对重建点云进行滤波处理。另一种情况下,该滤波器对应的滤波系数与其他信息在一个辅助信息数据单元中,此时在确定进行滤波处理之后,需要将指示进行滤波处理的参数信息,如滤波标识信息写入码流,同时也需要将滤波系数写入码流;相应地,在确定不进行滤波处理之后,可以选择不将指示进行滤波处理的参数信息,如滤波标识信息写入码流,同时也不将滤波系数写入码流。
进一步地,对于本申请实施例提出的编码方法,可以选择对重建点云中的一个或多个部分进行滤波处理,即滤波标识信息的控制范围可以是整个重建点云,也可以是重建点云中的某个部分,这里并不作具体限定。
本实施例提供了一种编码方法,应用于编码器。通过确定初始点云以及初始点云对应的重建点云;根据初始点云和重建点云,确定滤波系数;利用滤波系数对重建点云中第一点对应的K个目标点进行滤波处理,确定重建点云对应的滤波后点云;其中,K个目标点包括第一点以及与第一点相邻的(K-1)个近邻点,K为大于1的整数,第一点表示重建点云中的任意点;根据重建点云和滤波后点云,确定滤波标识信息;其中,滤波标识信息用于确定是否对重建点云进行滤波处理;若滤波标识信息指示对重建点云进行滤波处理,则对滤波标识信息和滤波系数进行编码,将所得到的编码比特写入码流。这样,编码端在使用近邻点进行滤波处理时,同时将当前点自身考虑在内,使得滤波后的值也取决于当前点自身的属性值;而且在确定是否对重建点云进行滤波处理时,不仅考虑了PSNR性能指标,而且还考虑了率失真代价值进行权衡;此外,这里还提出了几何有损且属性有损情况下重建点云与初始点云之间对应关系的确定,不仅提升了适用范围,还实现了对重建点云的优化,进一步提升了点云的质量,而且还能够节省码率,提高了编解码效率。
在本申请的另一实施例中,本申请实施例还提供了一种码流,该码流是根据待编码信息进行比特编码生成的;其中,待编码信息包括下述至少之一:初始点云中点的属性信息的残差值、滤波标识信息和滤波系数。
这样,当初始点云中点的属性信息的残差值、滤波标识信息和滤波系数由编码器传递到解码器之后,解码器通过解码获得初始点云中点的属性信息的残差值,进而构建出重建点云;而且解码器通过解码获得滤波标识信息,还可以确定是否需要对重建点云进行滤波处理;然后在需要对重建点云进行滤波处理时,通过解码直接获得滤波系数;根据滤波系数对重建点云进行滤波处理,从而实现了对重建点云的优化,能够提升点云的质量。
在本申请的又一实施例中,参见图8,其示出了本申请实施例提供的一种解码方法的流程示意图一。如图8所示,该方法可以包括:
S801:解码码流,确定滤波标识信息;其中,滤波标识信息用于确定是否对初始点云对应的重建点云进行滤波处理。
需要说明的是,本申请实施例所述的解码方法具体是指点云解码方法,可以应用于点云解码器(本申请实施例中,可简称为“解码器”)。
还需要说明的是,在本申请实施例中,解码器可以先解码码流,从而确定滤波标识信息。其中,滤波标识信息可以用于确定是否对初始点云对应的重建点云进行滤波处理。具体地,滤波标识信息还可以用于确定对重建点云中的哪一个或者哪几个颜色分量进行滤波处理。
还需要说明的是,在本申请实施例中,对于初始点云中的一个点,在对该一个点进行解码时,其可以作为初始点云中的待解码点,而该一个点的周围有多个已解码点。
进一步地,在本申请实施例中,对于初始点云中的一个点,其对应一个几何信息和一个属性信息;其中,几何信息表征该点的空间位置,属性信息表征该点的属性值(如颜色分量值)。
在这里,属性信息可以包括颜色分量,具体为任意颜色空间的颜色信息。示例性地,属性信息可以为RGB空间的颜色信息,也可以为YUV空间的颜色信息,还可以为YCbCr空间的颜色信息等等,本申请实施例不作具体限定。
进一步地,在本申请实施例中,颜色分量可以包括第一颜色分量、第二颜色分量和第三颜色分量。这样,如果颜色分量符合RGB颜色空间,那么可以确定第一颜色分量、第二颜色分量和第三颜色分量依次为:R分量、G分量、B分量;如果颜色分量符合YUV颜色空间,那么可以确定第一颜色分量、第二颜色分量和第三颜色分量依次为:Y分量、U分量、V分量;如果颜色分量符合YCbCr颜色空间,那么可以确定第一颜色分量、第二颜色分量和第三颜色分量依次为:Y分量、Cb分量、Cr分量。
可以理解的是,在本申请实施例中,对于初始点云中的一个点,该点的属性信息可以为颜色分量,也可以是反射率或者其它属性,本申请实施例也不作具体限定。
需要说明的是,在本申请实施例中,解码器可以通过解码码流,确定出初始点云中点的属性信息的残差值,以便构建重建点云。因此,在一些实施例中,该方法还可以包括:
解码码流,确定初始点云中点的属性信息的残差值;
在确定初始点云中点的属性信息的预测值后,根据初始点云中点的属性信息的预测值和残差值,确定初始点云中点的属性信息的重建值;
基于初始点云中点的属性信息的重建值,构建重建点云。
也就是说,对于初始点云中的一个点,可以先确定出该点的属性信息的预测值和残差值,然后再利用预测值和残差值进一步计算获得该点的属性信息的重建值,以便构建出重建点云。具体来讲,对于初始点云中的一个点,在确定该点的属性信息的预测值时,可以利用该点的多个目标邻居点的几何信息和属性信息,结合该点的几何信息对该点的属性信息进行预测,从而获得对应的预测值,进而能够确定出对应的重建值。这样,对于初始点云中的一个点,在确定出该点的属性信息的重建值之后,该点可以作为后续LOD中点的最近邻居,以利用该点的属性信息的重建值继续对后续的点进行属性预测,如此即可构建出重建点云。
还需要说明的是,在本申请实施例中,初始点云可以通过编解码程序点云读取函数直接得到,重建点云则是在经过属性编码、属性重建和几何补偿后获得的。另外,本申请实施例的重建点云可以是解码后输出的重建点云,也可以是用作解码后续点云参考;此外,这里的重建点云不仅可以在预测环路内,即作为inloop filter使用,可用作解码后续点云的参考;也可以在预测环路外,即作为post filter使用,不用作解码后续点云的参考;本申请实施例对此不作具体限定。
另外,在本申请实施例中,滤波标识信息可以包括属性信息的待处理分量的滤波标识信息。具体地,在一些实施例中,对于S801来说,所述解码码流,确定滤波标识信息,可以包括:解码码流,确定待处理分量的滤波标识信息;其中,待处理分量的滤波标识信息用于指示是否对重建点云的属性信息的待处理分量进行滤波处理。
进一步地,对于解码码流,确定滤波标识信息来说,在一些实施例中,具体可以包括:若所述待处理分量的滤波标识信息的取值为第一值,则确定对所述重建点云的属性信息的待处理分量进行滤波处理;若所述待处理分量的滤波标识信息的取值为第二值,则确定不对所述重建点云的属性信息的待处理分量进行滤波处理。
还需要说明的是,在待处理分量为颜色分量的情况下,每一个颜色分量对应的滤波标识信息可以用于指示是否对该颜色分量进行滤波处理。因此,在一些实施例中,所述解码码流,确定待处理分量的滤波标识信息,可以包括:解码码流,确定第一颜色分量的滤波标识信息、第二颜色分量的滤波标识信息和第三颜色分量的滤波标识信息;
其中,第一颜色分量的滤波标识信息用于指示是否对重建点云的属性信息的第一颜色分量进行滤波处理,第二颜色分量的滤波标识信息用于指示是否对重建点云的属性信息的第二颜色分量进行滤波处理,第三颜色分量的滤波标识信息用于指示是否对重建点云的属性信息的第三颜色分量进行滤波处理。
也就是说,滤波标识信息可以为数组形式,具体为1×3的数组,其是由第一颜色分量的滤波标识信息、第二颜色分量的滤波标识信息和第三颜色分量的滤波标识信息组成的。例如,滤波标识信息可以用[y u v]表示,其中,y表示第一颜色分量的滤波标识信息,u表示第二颜色分量的滤波标识信息,v表示第三颜色分量的滤波标识信息。
由此可见,由于滤波标识信息包括有每一个颜色分量对应的滤波标识信息,因此,滤波标识信息不仅可以用于确定是否对重建点云进行滤波处理,还可以用于确定出具体对哪一个或者哪几个颜色分量进行滤波处理。
进一步地,对于解码码流,确定滤波标识信息来说,在一些实施例中,具体可以包括:
若第一颜色分量的滤波标识信息的取值为第一值,则确定对重建点云的属性信息的第一颜色分量进行滤波处理;若第一颜色分量的滤波标识信息的取值为第二值,则确定不对重建点云的属性信息的第一颜色分量进行滤波处理;或者,
若第二颜色分量的滤波标识信息的取值为第一值,则确定对重建点云的属性信息的第二颜色分量进行滤波处理;若第二颜色分量的滤波标识信息的取值为第二值,则确定不对重建点云的属性信息的第二颜色分量进行滤波处理;或者,
若第三颜色分量的滤波标识信息的取值为第一值,则确定对重建点云的属性信息的第三颜色分量进行滤波处理;若第三颜色分量的滤波标识信息的取值为第二值,则确定不对重建点云的属性信息的第三颜色分量进行滤波处理。
也就是说,在本申请实施例中,如果某颜色分量对应的滤波标识信息为第一值,那么可以指示对该颜色分量进行滤波处理;如果某颜色分量对应的滤波标识信息为第二值,那么可以指示不对该颜色分量进行滤波处理。
进一步地,在一些实施例中,该方法还可以包括:
若第一颜色分量的滤波标识信息、第二颜色分量的滤波标识信息和第三颜色分量的滤波标识信息中至少一个为第一值,则确定滤波标识信息指示对重建点云进行滤波处理;
若第一颜色分量的滤波标识信息、第二颜色分量的滤波标识信息和第三颜色分量的滤波标识信息中全部为第二值,则确定滤波标识信息指示不对重建点云进行滤波处理。
也就是说,在本申请实施例中,如果这些颜色分量对应的滤波标识信息均为第二值,那么可以指示对这些颜色分量均不进行滤波处理,即可确定滤波标识信息指示不对重建点云进行滤波处理;相应地,如果这些颜色分量的滤波标识信息中至少一个为第一值,那么可以指示对至少一个颜色分量进行滤波处理,即可确定滤波标识信息指示对重建点云进行滤波处理。
另外,在本申请实施例中,第一值和第二值不同,而且第一值和第二值可以是参数形式,也可以是数字形式。具体地,这些颜色分量对应的标识信息可以是写入在概述(profile)中的参数,也可以是一个标志(flag)的取值,这里对此不作具体限定。
示例性地,对于第一值和第二值而言,第一值可以设置为1,第二值可以设置为0;或者,第一值还可以设置为true,第二值还可以设置为false;但是这里并不作具体限定。
S802:若滤波标识信息指示对重建点云进行滤波处理,则解码码流,确定滤波系数。
需要说明的是,在本申请实施例中,解码器在解码码流,确定滤波标识信息之后,如果滤波标识信息指示对重建点云进行滤波处理,那么可以进一步确定用于进行滤波处理的滤波系数。
还需要说明的是,滤波器可以是自适应滤波器,例如可以是基于神经网络的滤波器、维纳滤波器等等,这里不作具体限定。以维纳滤波器为例,本申请实施例所述的滤波系数可以用于维纳滤波,即滤波系数为维纳滤波处理的系数。
在这里,维纳滤波器(Wiener filter)是一种以最小化均方误差为最优准则的线性滤波器。在一定的约束条件下,其输出与一给定函数(通常称为期望输出)的差的平方达到最小,通过数学运算最终可变为一个托普利兹(Toeplitz)方程的求解问题。维纳滤波器又被称为最小二乘滤波器或最小平方滤波器。
可以理解的是,在本申请实施例中,滤波系数可以是指待处理分量对应的滤波系数。相应地,在一些实施例中,所述若滤波标识信息指示对重建点云进行滤波处理,则解码码流,确定滤波系数,可以包括:若待处理分量的滤波标识信息的取值为第一值,则解码码流,确定待处理分量对应的滤波系数。
需要说明的是,在待处理分量为颜色分量的情况下,这里的颜色分量包括第一颜色分量、第二颜色分量和第三颜色分量;那么相应地,滤波系数可以是由第一颜色分量对应的滤波系数向量、第二颜色分量对应的滤波系数向量和第三颜色分量对应的滤波系数向量组成的。具体地,在一些实施例中,所述若所述滤波标识信息指示对所述重建点云进行滤波处理,则解码所述码流,确定滤波系数,可以包括:
若所述第一颜色分量的滤波标识信息的取值为第一值,则解码所述码流,确定所述第一颜色分量对应的滤波系数;或者,
若所述第二颜色分量的滤波标识信息的取值为第一值,则解码所述码流,确定所述第二颜色分量对应的滤波系数;或者,
若所述第三颜色分量的滤波标识信息的取值为第一值,则解码所述码流,确定所述第三颜色分量对应的滤波系数。
进一步地,在一些实施例中,该方法还可以包括:若滤波标识信息指示不对重建点云进行滤波处理,则不执行解码码流,确定滤波系数的步骤。
也就是说,在本申请实施例中,在滤波标识信息指示对重建点云进行滤波处理时,解码器可以解码码流,直接获得滤波系数。但是解码器在解码码流,确定滤波标识信息之后,如果滤波标识信息指示不对重建点云的某一颜色分量进行滤波处理,那么解码器便不需要再解码获取该颜色分量对应的滤波系数向量。
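解码端先解码1×3判定数组、再按需解码各分量滤波系数的流程可以用如下Python代码示意。其中read_flag与read_coeffs代表从码流中解析相应语法元素的函数,是示例性抽象,并非实际的语法元素定义:

```python
def decode_filter_info(read_flag, read_coeffs):
    """先读出三个颜色分量的滤波标识;仅对标识为第一值(1)的分量
    继续解码其滤波系数向量,标识为第二值(0)的分量跳过不读。"""
    flags = [read_flag() for _ in range(3)]
    coeffs = {i: read_coeffs() for i, f in enumerate(flags) if f == 1}
    return flags, coeffs
```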
S803:利用滤波系数对重建点云中第一点对应的K个目标点进行滤波处理,确定重建点云对应的滤波后点云。
需要说明的是,K个目标点包括第一点以及与第一点相邻的(K-1)个近邻点,K为大于1的整数,第一点表示重建点云中的任意点。其中,这里的(K-1)个近邻点具体是指与第一点几何距离最接近的(K-1)个近邻点。
还需要说明的是,在本申请实施例中,如果滤波标识信息指示对重建点云进行滤波处理,那么在确定初始点云对应的滤波系数之后,可以进一步利用滤波系数对重建点云进行滤波处理,从而可以获得重建点云对应的滤波后点云。
还需要说明的是,在本申请实施例中,属性信息以颜色分量为例,可以对重建点云的各个颜色分量单独滤波,例如,对于YUV空间的颜色信息,可以分别确定Y分量对应的滤波系数,U分量对应的滤波系数,V分量对应的滤波系数。这些滤波系数共同组成了重建点云的滤波系数。另外,由于滤波系数可以是由K阶滤波器确定的;这样,解码器在进行滤波时,可以是针对重建点云中的每个点进行KNN搜索以确定出该点对应的K个目标点,然后利用该点的滤波系数进行滤波即可。
在一些实施例中,参见图9,其示出了本申请实施例提供的一种解码方法的流程示意图二。如图9所示,对于S803来说,该步骤可以包括:
S901:利用滤波系数对重建点云中第一点对应的K个目标点进行滤波处理,确定重建点云中第一点的属性信息的滤波值。
S902:在确定出重建点云中至少一个点的属性信息的滤波值之后,根据至少一个点的属性信息的滤波值确定滤波后点云。
需要说明的是,在本申请实施例中,解码器在确定滤波系数之后,还可以进一步利用滤波系数确定重建点云对应的滤波后点云。具体地,在一些实施例中,首先需要确定重建点云中第一点对应的K个目标点,这里的第一点表示重建点云中的任意点。然后,再利用滤波系数对重建点云中第一点对应的K个目标点进行滤波处理,确定重建点云对应的滤波后点云。
在一种具体的实施例中,确定重建点云中第一点对应的K个目标点,可以包括:
基于重建点云中的第一点,利用K近邻搜索方式在重建点云中搜索预设值个候选点;
分别计算第一点与预设值个候选点之间的距离值,从所得到的预设值个距离值中选取(K-1)个距离值,且(K-1)个距离值均小于预设值个距离值中剩余的距离值;
根据(K-1)个距离值对应的候选点确定(K-1)个近邻点,将第一点和(K-1)个近邻点确定为第一点对应的K个目标点。
需要说明的是,以第一点为例,可以利用K近邻搜索方式在重建点云中搜索预设值个候选点,计算第一点与这些候选点之间的距离值,然后从这些候选点中选取与第一点距离最近的(K-1)个近邻点;也就是说,除了第一点自身之外,另外还包括与第一点几何距离最接近的(K-1)个近邻点,总共组成了重建点云中第一点对应的K个目标点。
还需要说明的是,仍以属性信息中的颜色分量为例,在利用滤波系数对重建点云中第一点对应的K个目标点进行滤波处理时,可以包括:利用颜色分量对应的滤波系数向量对重建点云中第一点对应的K个目标点进行滤波处理,得到重建点云中第一点的颜色分量的滤波值;根据重建点云中点的颜色分量的滤波值,得到滤波后点云。具体而言,可以根据颜色分量对应的滤波系数向量和第二属性参数,确定重建点云中每一个点的颜色分量的滤波值;然后基于重建点云中每一个点的颜色分量的滤波值,得到滤波后点云。
简单来说,在本申请实施例中,在利用滤波系数对重建点云进行滤波处理时,可以先根据维纳滤波器的阶数和重建点云中点对应的K个目标点的属性信息的重建值,确定属性信息对应的第二属性参数;然后可以根据滤波系数和第二属性参数,确定属性信息的滤波值;最终便可以基于属性信息的滤波值,得到滤波后点云。
需要说明的是,在本申请实施例中,如果属性信息为YUV空间的颜色分量,那么可以先基于颜色分量(如Y分量,U分量,V分量)的重建值,结合维纳滤波器的阶数,确定出第二属性参数(用P(n,k)表示),即第二属性参数表示重建点云中点对应的K个目标点的颜色分量的重建值。
还需要说明的是,对于维纳滤波器而言,滤波器类型可以用于指示滤波器阶数,和/或滤波器形状,和/或滤波器维数。其中,滤波器形状包括菱形、矩形等,滤波器维数包括一维、二维乃至更多维等。
也就是说,在本申请实施例中,不同的滤波器类型可以对应不同阶数的维纳滤波器,例如,阶数的取值可以为12、32、128的维纳滤波器;不同的类型还可以对应不同维数的滤波器,例如,一维滤波器、二维滤波器等,这里并不作具体限定。也就是说,如果需要确定一个16阶的滤波器,那么可以用16个点来确定16阶非对称滤波器、或者8阶一维对称滤波器、或者其他数量(例如更加特殊的二维、三维滤波器等)的滤波器等等,这里并不对滤波器作具体限定。
需要说明的是,在本申请实施例中,对于某一个颜色分量进行滤波值计算,可以利用该颜色分量的第二属性参数以及该颜色分量对应的滤波系数向量,确定出该颜色分量的滤波值。在遍历完全部颜色分量,获得每一个颜色分量的滤波值之后,便可得到滤波后点云。
还可以理解的是,在本申请实施例中,在使用维纳滤波器对重建点云进行滤波处理时,可以将重建点云和滤波系数同时输入至维纳滤波器中,即维纳滤波器的输入为滤波系数和重建点云,最终便可以基于该滤波系数完成对重建点云的滤波处理,获得对应的滤波后点云。
也就是说,在本申请实施例中,滤波系数是基于原始点云和重建点云获得的,因此,将滤波系数作用于重建点云上,可以最大限度恢复原始点云。示例性地,在本申请实施例中,以点云序列中的YUV空间的颜色分量为例,假设滤波器的阶数为K,点云序列为n,使用矩阵P(n,k)表示重建点云中所有点对应的K个目标点在相同颜色分量(如Y分量)下的重建值,即P(n,k)为由重建点云中所有点对应的K个目标点的Y分量的重建值所构成的第二属性参数。
这样,基于上述式(13),将Y分量下的滤波系数向量H(k)作用在重建点云也就是说第二属性参数P(n,k)上,可以获得Y分量下的属性信息的滤波值R(n)。
接着,可以按照上述方法遍历U分量和V分量,最终确定出U分量下的滤波值和V分量下的滤波值,进而便可以利用全部颜色分量下的滤波值来确定出重建点云对应的滤波后点云。
除此之外,在本申请实施例中,解码器在确定重建点云对应的滤波后点云之后,可以使用滤波后点云覆盖重建点云。
需要说明的是,在本申请实施例中,相较于重建点云,在经过滤波处理后获得的滤波后点云的质量有较明显的增强;因此,在获得滤波后点云之后,可以使用滤波后点云覆盖原有的重建点云,从而实现整个编解码及质量增强操作。
还需要说明的是,由于点云通常是采用RGB颜色空间表示的,而且YUV分量难以用现有应用进行点云可视化;因此,在确定重建点云对应的滤波后点云之后,该方法还可以包括:若滤波后点云中点的颜色分量不符合RGB颜色空间(例如,YUV颜色空间、YCbCr颜色空间等),则对滤波后点云进行颜色空间转换,使得滤波后点云中点的颜色分量符合RGB颜色空间。
这样,当滤波后点云中点的颜色分量符合YUV颜色空间时,首先需要将滤波后点云中点的颜色分量由符合YUV颜色空间转换为符合RGB颜色空间,然后再利用滤波后点云更新原有的重建点云。
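滤波后点云由YUV转回RGB的逐点变换可以用如下Python代码示意。此处采用常见的全范围BT.601式(JPEG式)变换系数,仅作说明;具体实现所采用的颜色空间变换矩阵以编解码器的实际约定为准,并非本申请限定:

```python
def yuv_to_rgb(y, u, v):
    """逐点将YUV(0~255,全范围)转换为RGB并裁剪到[0, 255](系数为示例)。"""
    r = y + 1.402 * (v - 128)
    g = y - 0.344136 * (u - 128) - 0.714136 * (v - 128)
    b = y + 1.772 * (u - 128)
    clip = lambda x: max(0, min(255, round(x)))
    return clip(r), clip(g), clip(b)
```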
可以理解的是,在本申请实施例中,如果滤波标识信息指示不对重建点云进行滤波处理,那么解码器可以跳过滤波处理流程,按照原程序得到的重建点云进行应用,即不再对重建点云进行更新处理。
进一步地,在本申请实施例中,对于维纳滤波器的阶数(即K的取值)的确定,本申请实施例可以设置阶数的取值等于16,从而既能够保证滤波效果,又可以在一定程度上减少时间复杂度。
另外,如果编码器端维纳滤波时考虑邻域差异值,由于点云轮廓或边界处的属性值变化较大,单个滤波器可能难以覆盖全点云的滤波,导致效果仍有提升的空间,尤其对于稀疏点云序列;因此,可以根据邻域差异值进行分类,不同类别应用不同的滤波器,比如设置多个阶数不同的维纳滤波器,这时候编码器向解码器传递多组滤波系数;相应地,在解码器端将会解码获得多组滤波系数以对重建点云进行滤波处理,以便实现自适应选择滤波器进行滤波以达到更好的效果。
简言之,本申请实施例提出了一种优化的对编解码颜色重建值的YUV分量做维纳滤波处理以更好地增强点云质量的技术,具体可以通过步骤S801~S803所提出的解码方法,在对点云序列的颜色重建值的颜色分量(如Y分量、U分量、V分量)进行滤波时,不仅考虑了几何无损、属性有损情况,还考虑了几何有损、属性有损情况下的点云维纳滤波后处理方法,同时对于质量提升的判断、邻点选择、K的大小进行了优化,从而可以有选择性地对解码端输出的重建点云进行质量增强操作,码流比特数大小增加较少,总体压缩性能得以提升。
需要说明的是,本申请实施例提出的解码方法,适用于任意的点云序列,尤其是对于点的分布较密集的序列以及中等码率序列,具有较为突出的优化效果。另外,本申请实施例提出的解码方法,可对全部三种属性变换(Predicting transform、Lifting transform、RAHT)解码方式进行操作,具有普适性。而对于极少数滤波效果不好的序列,该方法不会影响解码端重建过程,除编码端运行时间稍有增加外,滤波标识信息对于属性码流大小的影响可忽略不计。
需要说明的是,在本申请实施例提出的维纳滤波器可以在预测环路内,即作为inloop filter使用,可用作解码后续点云的参考;也可以在预测环路外,即作为post filter使用,不用作解码后续点云的参考。对此本申请不进行具体限定。
需要说明的是,在本申请实施例中,如果本申请实施例提出的维纳滤波器为环路内滤波器,那么在确定进行滤波处理之后,需要将指示进行滤波处理的参数信息,如滤波标识信息写入码流,同时也需要将滤波系数写入码流;相应地,在确定不进行滤波处理之后,可以选择不将指示进行滤波处理的参数信息,如滤波标识信息写入码流,同时也不将滤波系数写入码流。反之,如果本申请实施例提出的维纳滤波器为后处理滤波器,一种情况下,该滤波器对应的滤波系数是位于一个单独的辅助信息数据单元(例如,补充增强信息SEI),那么可以选择不将指示进行滤波处理的参数信息,如滤波标识信息写入码流,同时也不将滤波系数写入码流,相应地,解码器在未获得该补充增强信息的情况下,即不对重建点云进行滤波处理。另一种情况下,该滤波器对应的滤波系数与其他信息在一个辅助信息数据单元中,此时在确定进行滤波处理之后,需要将指示进行滤波处理的参数信息,如滤波标识信息写入码流,同时也需要将滤波系数写入码流;相应地,在确定不进行滤波处理之后,可以选择不将指示进行滤波处理的参数信息,如滤波标识信息写入码流,同时也不将滤波系数写入码流。
进一步地,对于本申请实施例提出的解码方法,可以选择对重建点云中的一个或多个部分进行滤波处理,即滤波标识信息的控制范围可以是整个重建点云,也可以是重建点云中的某个部分,这里并不作具体限定。
本实施例提供了一种解码方法,应用于解码器。通过解码码流,确定滤波标识信息;其中,滤波标识信息用于确定是否对初始点云对应的重建点云进行滤波处理;若滤波标识信息指示对重建点云进行滤波处理,则解码码流,确定滤波系数;利用滤波系数对重建点云中第一点对应的K个目标点进行滤波处理,确定重建点云对应的滤波后点云;其中,K个目标点包括第一点以及与第一点相邻的(K-1)个近邻点,K为大于1的整数,第一点表示重建点云中的任意点。这样,解码端可以直接解码获得滤波系数,然后使用滤波系数对重建点云进行滤波处理,从而实现了对重建点云的优化,能够提升点云的质量。另外,由于编码端在使用近邻点进行滤波处理时,同时将当前点自身考虑在内,使得滤波后的值也取决于当前点自身的属性值;而且在确定是否对重建点云进行滤波处理时,不仅考虑了PSNR性能指标,而且还考虑了率失真代价值进行权衡;此外,编码端还提出了几何有损且属性有损情况下重建点云与初始点云之间对应关系的确定,从而使得解码端不仅提升了适用范围,提升了点云的质量,而且还能够节省码率,提高了编解码效率。
可以理解地,虽然相关技术也存在一种基于维纳滤波的质量增强技术,但是这种技术方案仍然存在以下问题:一方面,由于在邻域搜索过程中,并未将当前点自身考虑在内,这表明该点在最小均方误差准则下的最优值与自身当前值无关,只由近邻点决定,这显然存在着不合理性,也使得经过该技术方案处理后的点云质量仍有很大提升空间;另外,判断质量提升的方式单一,只凭借PSNR这一性能指标来决定是否有性能提升较为片面,由于将判定数组和最优系数写入码流需要增加总码流大小,因此压缩比率、综合性能提升与否需要结合质量提升量与码流大小进行考虑;又一方面,选取滤波器的阶数较大(K=32),导致KNN搜索时间较长,同时PSNR增加量不一定与K的大小成正比例,这将使得整个算法运算复杂度比较高,令G-PCC本身处理速度较快的特点难以表现;再一方面,该技术方案针对的只能是几何无损、属性有损的编码方式,对于几何有损、属性有损的编码方式并未提供一种有效的处理方式。这使得该技术方案的普适性较差,严重限制了维纳滤波后处理的应用范围。
基于上述实施例提供的编解码方法,在本申请的再一实施例中,这里提出了一种优化的对编解码的颜色重建值的YUV分量做维纳滤波处理以更好地增强点云质量的技术,其主要提出了对于几何有损、属性有损编码情况的点云维纳滤波后处理方法,同时还对于质量提升的判断、邻点选择、K的大小进行了优化。
以使用维纳滤波器进行滤波处理为例,图10为本申请实施例提供的一种编解码的总体流程示意图。如图10所示,在编码端,初始点云在经过编码及重建之后,可以得到重建点云;然后初始点云和重建点云可以作为维纳滤波器的输入,而维纳滤波器的输出即为滤波系数;这些滤波系数会根据判断效果优劣,确定是否需要对重建点云进行滤波处理;只有需要对重建点云进行滤波时才会将对应的滤波系数写入码流;另外,初始点云在经过编码之后的属性残差值也会写入码流。这样在解码端,解码器通过解码码流,可以得到属性残差值,进而经过解码及重建处理,可以得到重建点云;另外,如果码流中存在有滤波系数,还需要解码获得对应的滤波系数,然后将滤波系数和重建点云输入到维纳滤波器,最后输出滤波后点云(即输出点云),而且具有质量增强效果。
在一种具体的实施例中,图11为本申请实施例提供的一种编码端滤波处理的流程示意图。如图11所示,初始点云和重建点云作为维纳滤波器的输入,初始点云中的点和重建点云中的点分别进行位置信息的对齐,并进行属性信息的转换(如将属性信息中的颜色分量从RGB空间转换为YUV空间),可以利用初始点云中的点的颜色分量和重建点云中的点的颜色分量进行滤波系数的计算。其中,可以基于重建点云中的点的颜色分量进行自相关参数的计算,同时基于初始点云中的点的颜色分量和重建点云中的点的颜色分量进行互相关参数的计算,最终可以利用自相关参数和互相关参数确定出最优滤波器系数,即对应的滤波系数。
具体地,在编码端,具体操作如下:
首先,需要获得维纳滤波器的输入:初始点云与重建点云。其中,初始点云可以通过编解码程序的点云读取函数直接得到,重建点云则是在经过属性编码、属性重建和几何补偿后获得的。在G-PCC中获得重建点云之后,即可进入维纳滤波环节。
需要说明的是,在几何无损、属性有损的编码方式中,重建点云点的数目和坐标并没有改变,因此与初始点云中对应点的匹配较为轻松,只需利用k=1的KNN搜索即可得到点的对应关系。而在几何有损、属性有损的编码方式中,所得到的重建点云根据设置的码率不同,点的数目和坐标以及整个点云的bounding box都有较大的改变,因此这里提供了一种更准确地找出点与点的对应关系的方法。该方法中,首先进行重建点云的几何坐标预处理,经过几何补偿后,利用配置参数中的缩放尺度(scale),依照重建过程对每个点的几何坐标除以该缩放尺度,通过此处理可以将重建点云恢复到与原始点云相当的大小且消除了坐标的偏移。之后对于重建点云中的每个点在初始点云中进行k=5的KNN搜索,并依次计算这5个点与当前点的距离,将距离最近的近邻点保存,如果有多个距离相同的最近邻点,那么可以将这些距离当前点相同的近邻点融合成一个点,即对这些点属性值取均值作为当前点的最近邻匹配点的属性真实值。通过此方法可以得到一个与重建点云点数相同且每个点一一对应的点云,将其作为真正的初始点云输入至维纳滤波器。
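上述几何有损情况下重建点与初始点一一匹配的过程(坐标按缩放尺度还原、k=5的KNN搜索、距离相同的多个最近邻按属性均值融合)可以用如下Python代码示意。这只是按上述文字给出的参考草图,函数名为示例性假设:

```python
def build_matched_original(recon_pts, orig_pts, orig_attrs, scale, k=5):
    """为重建点云中的每个点在初始点云中找到匹配的属性真实值,
    返回与重建点云点数相同、逐点一一对应的属性列表。"""
    matched = []
    for p in recon_pts:
        q = tuple(c / scale for c in p)          # 按缩放尺度还原几何坐标
        d2 = lambda a: sum((x - y) ** 2 for x, y in zip(a, q))
        # 取k个最近候选点,保留其中距离最小的;为示意直接全排序
        cand = sorted(range(len(orig_pts)), key=lambda i: d2(orig_pts[i]))[:k]
        dmin = d2(orig_pts[cand[0]])
        ties = [i for i in cand if d2(orig_pts[i]) == dmin]  # 距离相同的最近邻
        # 多个等距最近邻时,属性取均值作为融合后的真实值
        matched.append(sum(orig_attrs[i] for i in ties) / len(ties))
    return matched
```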
其次,编码端维纳滤波器的作用主要是计算滤波系数,同时判断维纳滤波之后点云质量是否提升。在输入初始点云与重建点云之后,应用维纳滤波算法原理可计算出该维纳滤波器的最优系数(即滤波系数)。由于滤波器的阶数(即滤波时选择目标点的个数为K)以及需要滤波的属性信息的个数等会影响到滤波后点云的质量,经过多次尝试之后,兼顾性能与效率,选择K=16,注意,K个目标点包含该点自身和与该点邻近的(K-1)个近邻点。同时编码端会对YUV三个分量都进行滤波(以获得在这三个分量上的滤波性能)。在该条件下,获得滤波系数后,可以得到滤波后点云(此时不需覆盖重建点云)。
进一步地,计算滤波后点云以及重建点云在YUV每个分量各自的PSNR值。同时,为了更准确衡量滤波前后性能提升与否,该方案同时对滤波后点云与重建点云进行率失真权衡,利用率失真代价函数来计算综合质量提升与码流增加后的代价值,以表示滤波前后点云的压缩效率。其中,代价值(用J表示)的具体计算如上述的式(14)所示。
这样,在计算出重建点云和滤波后点云各自的每一个颜色分量对应的PSNR和J之后,若某个或某几个分量PSNR增加的同时J相对于重建点云有所降低,如仅V分量的PSNR值增加且J减小,那么在解码端只会针对V分量进行滤波,此时首先将大小为1×3的判定数组(即前述实施例的滤波标识信息,用于判定是否需要、哪些分量需要在解码端进行后处理即滤波操作,如[0,1,1]说明对U分量和V分量进行滤波)写入码流,并紧接着将滤波系数写入码流中。若没有一个分量PSNR增加或者J全部增大,则说明滤波后性能反而有所下降,此时不需将判定数组和滤波系数写入码流,以免降低编解码效率,同时在解码端不应进行维纳滤波操作。
进一步地,本申请实施例为了确定最佳的K的取值,对于测试序列在CTC_C1(几何无损、属性有损)条件、r1-r6码率下利用Lifting Transform变换方式的滤波前后BD-Rate增益以及运行时间进行了测试,具体结果如图6和图7所示。由此可见,无限增大K对于性能提升并没有好处,同时过大的K在处理较大点云时,容易出现内存溢出的情况,使得普适性降低。因此,这里设置K为16,保证滤波效果的同时一定程度上减少时间复杂度。
在另一种具体的实施例中,图12为本申请实施例提供的一种解码端滤波处理的流程示意图。如图12所示,解码器在进行维纳滤波处理时,可以将重建点云与滤波系数作为解码端的维纳滤波器的输入,在完成对重建点云中的点的属性信息的转换(如将属性信息中的颜色分量从RGB空间转换为YUV空间)之后,可以使用滤波系数进行滤波处理,通过计算可得到滤波后的且质量增强的滤波后点云,并在完成滤波后点云的点的属性信息的转换(从YUV空间转换为RGB空间)之后,输出滤波后点云。
具体地,在解码端,具体操作如下:
首先,在G-PCC解码出属性残差值之后可再紧接着进行解码,首先解码出一个1×3的判定数组,若经过判断,确定出需要对某(几)个分量进行滤波,则说明编码端有滤波系数的传递,并且维纳滤波会使重建点云质量增加;否则将不会再继续解码,同时跳过维纳滤波部分,按照原程序进行点云重建,以得到重建点云。
其次,在解码出滤波系数后,可将滤波系数传递至点云重建完成后,此时可直接将重建点云与滤波系数输入至解码端的维纳滤波器,通过计算可得到滤波后的且质量增强的滤波后点云,并将此点云覆盖重建点云,完成整个编解码及质量增强操作。
进一步地,在本申请实施例中,图13为本申请实施例提供的一种相关技术与本技术方案在CY测试条件下预测变换的测试结果示意图,图14为本申请实施例提供的一种相关技术与本技术方案在C1测试条件下提升变换的测试结果示意图,图15为本申请实施例提供的一种相关技术与本技术方案在C1测试条件下RAHT变换的测试结果示意图,图16为本申请实施例提供的一种在C2测试条件下提升变换的测试结果示意图,图17为本申请实施例提供的一种在C2测试条件下RAHT变换的测试结果示意图。如图13~图17所示,本申请实施例提出的编解码方法在G-PCC参考软件TMC13 V12.0上实现后,在CTC CY、C1和C2测试条件下对MPEG要求的部分测试序列进行了测试(cat1-A&cat1-B),同时属性无损编码方式中最终测试结果也与相关技术进行了对比。
其中,CY条件为几何无损、属性有损编码方式(lossless geometry,lossy attribute);C1条件为几何无损、属性几乎无损编码方式(lossless geometry,near-lossless attribute);C2条件为几何有损、属性有损编码方式(lossy geometry,lossy attribute)。上述测试结果中,End-to-End BD-AttrRate表示端到端属性值针对属性码流的BD-Rate。BD-Rate反映的是两种情况下(有无滤波)PSNR曲线的差异,BD-Rate减少时,表示在PSNR相等的情况下,码率减少,性能提高;反之性能下降。即BD-Rate下降越多则压缩效果越好。Cat1-A average与Cat1-B average分别代表两数据集各自点云序列测试效果的平均值。最后Overall average为所有序列测试效果的平均值。
由此可见,本申请实施例提出的编解码方法,主要是一种优化的G-PCC点云解码端属性维纳滤波质量增强技术,可以实现有选择性地对解码端输出的重建点云进行质量增强操作;尤其是对于点的分布较密集的序列以及中等码率序列,有较为突出的优化效果。而且码流比特数没有明显的增加,压缩性能得以提升;同时该技术可对全部三种属性变换(Predicting transform、Lifting transform、RAHT)编码方式进行操作,具有普适性。而对于极少数滤波效果不好的序列,该技术不会影响解码端重建过程,除编码端运行时间稍有增加外,滤波标识信息对于属性码流大小的影响可忽略不计。
与相关技术相比,一方面,利用近邻点进行滤波时,将当前点自身加入计算,即该点滤波后的值也取决于自身的属性值。经过测试,发现最优系数中自身对应的系数较大,也就是说该点属性值在滤波过程中的权重较大;因此,将当前点的计算考虑在内是很有必要的;从测试结果来看,最后的BD-Rate有了相当大的提升,也证明了该操作是十分正确的。另一方面,在滤波系数等信息写入码流前,优化了滤波后整体性能提升与否的判定方法。考虑到G-PCC最终目的是进行点云压缩,压缩率越高代表整体性能越好。相对于相关技术仅仅依据PSNR值决定是否在解码端进行滤波,本申请实施例提出利用代价函数计算每一通道(Y,U,V)滤波前后的代价进行率失真权衡;本方法不仅考虑了属性值质量的提升,同时计算了将系数等信息写入码流所需的代价,综合二者表现判断滤波后压缩性能是否有了提升,从而决定是否进行系数传递及解码端操作。又一方面,对滤波器阶数K的大小进行了改进,为了在保证滤波效果的同时减少运行时间、增加普适性,进行了系统的测试,本申请实施例选择K=16进行滤波;再一方面,本申请实施例可以针对GPCC几何有损、属性有损编码方式得到的重建点云进行滤波,提出了几何失真的重建点云与原始点云的点一一对应的方法,适用范围进一步提升。
在本申请的再一实施例中,基于前述实施例相同的发明构思,参见图18,其示出了本申请实施例提供的一种编码器180的组成结构示意图。如图18所示,该编码器180可以包括:第一确定单元1801、第一滤波单元1802和编码单元1803;其中,
第一确定单元1801,配置为确定初始点云以及初始点云对应的重建点云;以及根据初始点云和重建点云,确定滤波系数;
第一滤波单元1802,配置为利用滤波系数对重建点云中第一点对应的K个目标点进行滤波处理,确定重建点云对应的滤波后点云;其中,K个目标点包括第一点以及与第一点相邻的(K-1)个近邻点,K为大于1的整数,第一点表示重建点云中的任意点;
第一确定单元1801,还配置为根据重建点云和滤波后点云,确定滤波标识信息;其中,滤波标识信息用于确定是否对重建点云进行滤波处理;
编码单元1803,配置为若滤波标识信息指示对重建点云进行滤波处理,则对滤波标识信息和滤波系数进行编码,将所得到的编码比特写入码流。
在一些实施例中,第一确定单元1801,还配置为若利用第一类编码方式对初始点云进行编码及重建处理,则得到第一重建点云,将第一重建点云作为重建点云;其中,第一类编码方式用于指示对初始点云进行几何无损且属性有损编码。
在一些实施例中,第一确定单元1801,还配置为若利用第二类编码方式对初始点云进行编码及重建处理,则得到第二重建点云;以及对第二重建点云进行几何恢复处理,得到恢复重建点云,将恢复重建点云作为重建点云;其中,第二类编码方式用于指示对初始点云进行几何有损且属性有损编码。
在一些实施例中,第一确定单元1801,还配置为对第二重建点云进行几何补偿处理,得到中间重建点云;以及对中间重建点云进行尺度缩放处理,得到恢复重建点云;其中,恢复重建点云与初始点云的大小相同且几何位置相同。
在一些实施例中,参见图18,编码器180还可以包括搜索单元1804,配置为在第一类编码方式的情况下,根据第一预设搜索方式确定重建点云中的点在初始点云中的对应点,建立重建点云中的点与初始点云中的点之间的对应关系。
在一些实施例中,搜索单元1804,还配置为在第二类编码方式的情况下,根据第二预设搜索方式确定重建点云中的点在初始点云中的对应点;以及根据确定的对应点构建匹配点云,将匹配点云作为初始点云,建立重建点云中的点与初始点云中的点之间的对应关系。
在一些实施例中,第一预设搜索方式为搜索第一常数值个点的K近邻搜索方式;第二预设搜索方式为搜索第二常数值个点的K近邻搜索方式。
在一些实施例中,搜索单元1804,具体配置为基于重建点云中的当前点,利用第二预设搜索方式在初始点云中搜索到第二常数值个点;以及分别计算当前点与第二常数值个点之间的距离值,从距离值中选取最小距离值;以及若最小距离值的个数为1个,则将最小距离值对应的点确定为当前点的对应点;若最小距离值的个数为多个,则根据最小距离值对应的多个点,确定当前点的对应点。
在一些实施例中,第一常数值等于1,第二常数值等于5。
在一些实施例中,第一确定单元1801,还配置为确定重建点云中第一点对应的K个目标点;
相应地,第一滤波单元1802,具体配置为利用滤波系数对重建点云中第一点对应的K个目标点进行滤波处理,确定重建点云中第一点的属性信息的滤波值;以及在确定出重建点云中至少一个点的属性信息的滤波值之后,根据至少一个点的属性信息的滤波值确定滤波后点云。
在一些实施例中,第一确定单元1801,还配置为基于重建点云中的第一点,利用K近邻搜索方式在重建点云中搜索预设值个候选点;以及分别计算第一点与预设值个候选点之间的距离值,从所得到的预设值个距离值中选取(K-1)个距离值,且(K-1)个距离值均小于预设值个距离值中剩余的距离值;以及根据(K-1)个距离值对应的候选点确定(K-1)个近邻点,将第一点和(K-1)个近邻点确定为第一点对应的K个目标点。
在一些实施例中,第一确定单元1801,还配置为根据初始点云中点的属性信息的原始值,确定第一属性参数;以及根据重建点云中点对应的K个目标点的属性信息的重建值,确定第二属性参数;以及基于第一属性参数和第二属性参数,确定滤波系数。
在一些实施例中,第一确定单元1801,还配置为根据第一属性参数和第二属性参数,确定互相关参数;以及根据第二属性参数确定自相关参数;以及根据互相关参数和自相关参数进行系数计算,得到滤波系数。
在一些实施例中,属性信息包括颜色分量,且颜色分量包括下述至少之一:第一颜色分量、第二颜色分量和第三颜色分量;其中,若颜色分量符合RGB颜色空间,则确定第一颜色分量、第二颜色分量和第三颜色分量依次为:R分量、G分量、B分量;若颜色分量符合YUV颜色空间,则确定第一颜色分量、第二颜色分量和第三颜色分量依次为:Y分量、U分量、V分量。
在一些实施例中,参见图18,编码器180还可以包括计算单元1805,配置为确定重建点云的属性信息的待处理分量的第一代价值,以及确定滤波后点云的属性信息的待处理分量的第二代价值;以及
第一确定单元1801,还配置为根据第一代价值和第二代价值,确定待处理分量的滤波标识信息;以及根据待处理分量的滤波标识信息,得到滤波标识信息。
在一些实施例中,第一确定单元1801,还配置为若第二代价值小于第一代价值,则确定待处理分量的滤波标识信息的取值为第一值;若第二代价值大于第一代价值,则确定待处理分量的滤波标识信息的取值为第二值。

在一些实施例中,计算单元1805,具体配置为利用率失真代价方式对重建点云的属性信息的待处理分量进行代价值计算,将所得到的第一率失真值作为第一代价值;以及利用率失真代价方式对滤波后点云的属性信息的待处理分量进行代价值计算,将所得到的第二率失真值作为第二代价值。
在一些实施例中,计算单元1805,还配置为利用率失真代价方式对重建点云的属性信息的待处理分量进行代价值计算,得到第一率失真值;以及利用预设性能衡量指标对重建点云的属性信息的待处理分量进行性能值计算,得到第一性能值;
第一确定单元1801,还配置为根据第一率失真值和第一性能值,确定第一代价值。
在一些实施例中,计算单元1805,还配置为利用率失真代价方式对滤波后点云的属性信息的待处理分量进行代价值计算,得到第二率失真值;以及利用预设性能衡量指标对滤波后点云的属性信息的待处理分量进行性能值计算,得到第二性能值;
第一确定单元1801,还配置为根据第二率失真值和第二性能值,确定第二代价值。
在一些实施例中,第一确定单元1801,还配置为若第二性能值大于第一性能值且第二率失真值小于第一率失真值,则确定待处理分量的滤波标识信息的取值为第一值;若第二性能值小于第一性能值,则确定待处理分量的滤波标识信息的取值为第二值。
在一些实施例中,第一确定单元1801,还配置为若第二性能值大于第一性能值且第二率失真值小于第一率失真值,则确定待处理分量的滤波标识信息的取值为第一值;若第二率失真值大于第一率失真值,则确定待处理分量的滤波标识信息的取值为第二值。
在一些实施例中,第一确定单元1801,还配置为若待处理分量的滤波标识信息的取值为第一值,则确定待处理分量的滤波标识信息指示对重建点云的属性信息的待处理分量进行滤波处理;若待处理分量的滤波标识信息的取值为第二值,则确定待处理分量的滤波标识信息指示不对重建点云的属性信息的待处理分量进行滤波处理。
在一些实施例中,第一确定单元1801,还配置为在待处理分量为颜色分量的情况下,确定第一颜色分量的滤波标识信息、第二颜色分量的滤波标识信息和第三颜色分量的滤波标识信息;以及根据第一颜色分量的滤波标识信息、第二颜色分量的滤波标识信息和第三颜色分量的滤波标识信息,得到滤波标识信息。
在一些实施例中,第一确定单元1801,还配置为若第一颜色分量的滤波标识信息、第二颜色分量的滤波标识信息和第三颜色分量的滤波标识信息中至少一个为第一值,则确定滤波标识信息指示对重建点云进行滤波处理;若第一颜色分量的滤波标识信息、第二颜色分量的滤波标识信息和第三颜色分量的滤波标识信息中全部为第二值,则确定滤波标识信息指示不对重建点云进行滤波处理。
在一些实施例中,编码单元1803,还配置为若滤波标识信息指示不对重建点云进行滤波处理,则仅对滤波标识信息进行编码,并将所得到的编码比特写入码流。
在一些实施例中,第一确定单元1801,还配置为根据重建点云的点数,确定K的取值;和/或,根据重建点云的量化参数,确定K的取值;和/或,根据重建点云中点的邻域差异值,确定K的取值;其中,邻域差异值是基于点的属性信息与至少一个近邻点的属性信息之间的分量差值计算得到的。
在一些实施例中,第一确定单元1801,还配置为若重建点云中点的邻域差异值满足第一预设区间,则选择第一类别滤波器对重建点云进行滤波处理;若重建点云中点的邻域差异值满足第二预设区间,则选择第二类别滤波器对重建点云进行滤波处理;其中,第一类别滤波器的阶数与第二类别滤波器的阶数不同。
在一些实施例中,第一确定单元1801,还配置为确定初始点云中点的属性信息的预测值;以及根据初始点云中点的属性信息的原始值和预测值,确定初始点云中点的属性信息的残差值;
编码单元1803,还配置为对初始点云中点的属性信息的残差值进行编码,并将所得到的编码比特写入码流。
可以理解地,在本申请实施例中,“单元”可以是部分电路、部分处理器、部分程序或软件等等,当然也可以是模块,还可以是非模块化的。而且在本实施例中的各组成部分可以集成在一个处理单元中,也可以是各个单元单独物理存在,也可以两个或两个以上单元集成在一个单元中。上述集成的单元既可以采用硬件的形式实现,也可以采用软件功能模块的形式实现。
所述集成的单元如果以软件功能模块的形式实现并非作为独立的产品进行销售或使用时,可以存储在一个计算机可读取存储介质中,基于这样的理解,本实施例的技术方案本质上或者说对现有技术做出贡献的部分或者该技术方案的全部或部分可以以软件产品的形式体现出来,该计算机软件产品存储在一个存储介质中,包括若干指令用以使得一台计算机设备(可以是个人计算机,服务器,或者网络设备等)或processor(处理器)执行本实施例所述方法的全部或部分步骤。而前述的存储介质包括:U盘、移动硬盘、只读存储器(Read Only Memory,ROM)、随机存取存储器(Random Access Memory,RAM)、磁碟或者光盘等各种可以存储程序代码的介质。
因此,本申请实施例提供了一种计算机存储介质,应用于编码器180,该计算机存储介质存储有计算机程序,所述计算机程序被第一处理器执行时实现前述实施例中任一项所述的方法。
基于上述编码器180的组成以及计算机存储介质,参见图19,其示出了本申请实施例提供的编码器180的具体硬件结构示意图。如图19所示,编码器180可以包括:第一通信接口1901、第一存储器1902和第一处理器1903;各个组件通过第一总线系统1904耦合在一起。可理解,第一总线系统1904用于实现这些组件之间的连接通信。第一总线系统1904除包括数据总线之外,还包括电源总线、控制总线和状态信号总线。但是为了清楚说明起见,在图19中将各种总线都标为第一总线系统1904。其中,
第一通信接口1901,用于在与其他外部网元之间进行收发信息过程中,信号的接收和发送;
第一存储器1902,用于存储能够在第一处理器1903上运行的计算机程序;
第一处理器1903,用于在运行所述计算机程序时,执行:
确定初始点云以及初始点云对应的重建点云;
根据初始点云和重建点云,确定滤波系数;
利用滤波系数对重建点云中第一点对应的K个目标点进行滤波处理,确定重建点云对应的滤波后点云;其中,K个目标点包括第一点以及与第一点相邻的(K-1)个近邻点,K为大于1的整数,第一点表示重建点云中的任意点;
根据重建点云和滤波后点云,确定滤波标识信息;其中,滤波标识信息用于确定是否对重建点云进行滤波处理;
若滤波标识信息指示对重建点云进行滤波处理,则对滤波标识信息和滤波系数进行编码,将所得到的编码比特写入码流。
可以理解,本申请实施例中的第一存储器1902可以是易失性存储器或非易失性存储器,或可包括易失性和非易失性存储器两者。其中,非易失性存储器可以是只读存储器(Read-Only Memory,ROM)、可编程只读存储器(Programmable ROM,PROM)、可擦除可编程只读存储器(Erasable PROM,EPROM)、电可擦除可编程只读存储器(Electrically EPROM,EEPROM)或闪存。易失性存储器可以是随机存取存储器(Random Access Memory,RAM),其用作外部高速缓存。通过示例性但不是限制性说明,许多形式的RAM可用,例如静态随机存取存储器(Static RAM,SRAM)、动态随机存取存储器(Dynamic RAM,DRAM)、同步动态随机存取存储器(Synchronous DRAM,SDRAM)、双倍数据速率同步动态随机存取存储器(Double Data Rate SDRAM,DDRSDRAM)、增强型同步动态随机存取存储器(Enhanced SDRAM,ESDRAM)、同步连接动态随机存取存储器(Synchlink DRAM,SLDRAM)和直接内存总线随机存取存储器(Direct Rambus RAM,DRRAM)。本申请描述的系统和方法的第一存储器1902旨在包括但不限于这些和任意其它适合类型的存储器。
而第一处理器1903可能是一种集成电路芯片,具有信号的处理能力。在实现过程中,上述方法的各步骤可以通过第一处理器1903中的硬件的集成逻辑电路或者软件形式的指令完成。上述的第一处理器1903可以是通用处理器、数字信号处理器(Digital Signal Processor,DSP)、专用集成电路(Application Specific Integrated Circuit,ASIC)、现成可编程门阵列(Field Programmable Gate Array,FPGA)或者其他可编程逻辑器件、分立门或者晶体管逻辑器件、分立硬件组件。可以实现或者执行本申请实施例中的公开的各方法、步骤及逻辑框图。通用处理器可以是微处理器或者该处理器也可以是任何常规的处理器 等。结合本申请实施例所公开的方法的步骤可以直接体现为硬件译码处理器执行完成,或者用译码处理器中的硬件及软件模块组合执行完成。软件模块可以位于随机存储器,闪存、只读存储器,可编程只读存储器或者电可擦写可编程存储器、寄存器等本领域成熟的存储介质中。该存储介质位于第一存储器1902,第一处理器1903读取第一存储器1902中的信息,结合其硬件完成上述方法的步骤。
可以理解的是,本申请描述的这些实施例可以用硬件、软件、固件、中间件、微码或其组合来实现。对于硬件实现,处理单元可以实现在一个或多个专用集成电路(Application Specific Integrated Circuits,ASIC)、数字信号处理器(Digital Signal Processing,DSP)、数字信号处理设备(DSP Device,DSPD)、可编程逻辑设备(Programmable Logic Device,PLD)、现场可编程门阵列(Field-Programmable Gate Array,FPGA)、通用处理器、控制器、微控制器、微处理器、用于执行本申请所述功能的其它电子单元或其组合中。对于软件实现,可通过执行本申请所述功能的模块(例如过程、函数等)来实现本申请所述的技术。软件代码可存储在存储器中并通过处理器执行。存储器可以在处理器中或在处理器外部实现。
可选地,作为另一个实施例,第一处理器1903还配置为在运行所述计算机程序时,执行前述实施例中任一项所述的方法。
本实施例提供了一种编码器,该编码器可以包括第一确定单元、第一滤波单元和编码单元。这样,编码端利用初始点云和重建点云计算用于进行滤波处理的滤波系数,并在确定对重建点云进行滤波处理之后,才将对应的滤波系数传递至解码器;相应地,解码端可以直接解码获得滤波系数,然后使用滤波系数对重建点云进行滤波处理,从而实现了对重建点云的优化,能够提升点云的质量。另外,编码端在使用近邻点进行滤波处理时,同时将当前点自身考虑在内,使得滤波后的值也取决于当前点自身的属性值;而且在确定是否对重建点云进行滤波处理时,不仅考虑了PSNR性能指标,还考虑了率失真代价值进行权衡;此外还提出了几何有损且属性有损情况下重建点云与初始点云之间对应关系的确定,从而不仅提升了适用范围,提升了点云的质量,而且还能够节省码率,提高了编解码效率。
基于前述实施例相同的发明构思,参见图20,其示出了本申请实施例提供的一种解码器200的组成结构示意图。如图20所示,该解码器200可以包括:解码单元2001和第二滤波单元2002;其中,
解码单元2001,配置为解码码流,确定滤波标识信息;其中,滤波标识信息用于确定是否对初始点云对应的重建点云进行滤波处理;以及若滤波标识信息指示对重建点云进行滤波处理,则解码码流,确定滤波系数;
第二滤波单元2002,配置为利用滤波系数对重建点云中第一点对应的K个目标点进行滤波处理,确定重建点云对应的滤波后点云;其中,K个目标点包括第一点以及与第一点相邻的(K-1)个近邻点,K为大于1的整数,第一点表示重建点云中的任意点。
在一些实施例中,参见图20,解码器200还可以包括第二确定单元2003,配置为确定重建点云中第一点对应的K个目标点;
相应地,第二滤波单元2002,具体配置为利用滤波系数对重建点云中第一点对应的K个目标点进行滤波处理,确定重建点云中第一点的属性信息的滤波值;以及在确定出重建点云中至少一个点的属性信息的滤波值之后,根据至少一个点的属性信息的滤波值确定滤波后点云。
在一些实施例中,第二确定单元2003,具体配置为基于重建点云中的第一点,利用K近邻搜索方式在重建点云中搜索预设值个候选点;以及分别计算第一点与预设值个候选点之间的距离值,从所得到的预设值个距离值中选取(K-1)个距离值,且(K-1)个距离值均小于预设值个距离值中剩余的距离值;以及根据(K-1)个距离值对应的候选点确定(K-1)个近邻点,将第一点和(K-1)个近邻点确定为第一点对应的K个目标点。
In some embodiments, the decoding unit 2001 is further configured to decode the bitstream to determine residual values of attribute information of points in the initial point cloud;
the second determining unit 2003 is further configured to: after predicted values of the attribute information of the points in the initial point cloud are determined, determine reconstructed values of the attribute information of the points in the initial point cloud according to the predicted values and the residual values; and construct the reconstructed point cloud based on the reconstructed values of the attribute information of the points in the initial point cloud.
In some embodiments, the attribute information includes color components, and the color components include at least one of: a first color component, a second color component, and a third color component; wherein, if the color components conform to the RGB color space, the first color component, the second color component, and the third color component are determined to be, in order, the R component, the G component, and the B component; if the color components conform to the YUV color space, the first color component, the second color component, and the third color component are determined to be, in order, the Y component, the U component, and the V component.
In some embodiments, the decoding unit 2001 is further configured to decode the bitstream to determine filter identification information of a component to be processed; wherein the filter identification information of the component to be processed is used to indicate whether to filter the component to be processed of the attribute information of the reconstructed point cloud.
In some embodiments, the second determining unit 2003 is further configured to: if the value of the filter identification information of the component to be processed is a first value, determine that the component to be processed of the attribute information of the reconstructed point cloud is to be filtered; and if the value of the filter identification information of the component to be processed is a second value, determine that the component to be processed of the attribute information of the reconstructed point cloud is not to be filtered;
correspondingly, the second filtering unit 2002 is further configured to: if the value of the filter identification information of the component to be processed is the first value, decode the bitstream to determine the filter coefficients corresponding to the component to be processed.
In some embodiments, the decoding unit 2001 is further configured to decode the bitstream to determine filter identification information of the first color component, filter identification information of the second color component, and filter identification information of the third color component; wherein the filter identification information of the first color component is used to indicate whether to filter the first color component of the attribute information of the reconstructed point cloud, the filter identification information of the second color component is used to indicate whether to filter the second color component of the attribute information of the reconstructed point cloud, and the filter identification information of the third color component is used to indicate whether to filter the third color component of the attribute information of the reconstructed point cloud.
In some embodiments, the second determining unit 2003 is further configured to: if the value of the filter identification information of the first color component is the first value, determine that the first color component of the attribute information of the reconstructed point cloud is to be filtered; and if the value of the filter identification information of the first color component is the second value, determine that the first color component of the attribute information of the reconstructed point cloud is not to be filtered;
correspondingly, the second filtering unit 2002 is further configured to: if the value of the filter identification information of the first color component is the first value, decode the bitstream to determine the filter coefficients corresponding to the first color component.
In some embodiments, the second determining unit 2003 is further configured to: if the value of the filter identification information of the second color component is the first value, determine that the second color component of the attribute information of the reconstructed point cloud is to be filtered; and if the value of the filter identification information of the second color component is the second value, determine that the second color component of the attribute information of the reconstructed point cloud is not to be filtered;
correspondingly, the second filtering unit 2002 is further configured to: if the value of the filter identification information of the second color component is the first value, decode the bitstream to determine the filter coefficients corresponding to the second color component.
In some embodiments, the second determining unit 2003 is further configured to: if the value of the filter identification information of the third color component is the first value, determine that the third color component of the attribute information of the reconstructed point cloud is to be filtered; and if the value of the filter identification information of the third color component is the second value, determine that the third color component of the attribute information of the reconstructed point cloud is not to be filtered;
correspondingly, the second filtering unit 2002 is further configured to: if the value of the filter identification information of the third color component is the first value, decode the bitstream to determine the filter coefficients corresponding to the third color component.
In some embodiments, the second determining unit 2003 is further configured to: if at least one of the filter identification information of the first color component, the filter identification information of the second color component, and the filter identification information of the third color component is the first value, determine that the filter identification information indicates that the reconstructed point cloud is to be filtered; and if all of the filter identification information of the first color component, the filter identification information of the second color component, and the filter identification information of the third color component are the second value, determine that the filter identification information indicates that the reconstructed point cloud is not to be filtered.
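Purely as an illustrative sketch (the text does not fix the numeric encodings; using 1 for the "first value" and 0 for the "second value" is an assumption made here), the aggregation of the three per-component flags into the overall filter identification can be expressed as:

```python
FIRST_VALUE, SECOND_VALUE = 1, 0  # assumed encodings of the two flag values

def cloud_filter_flag(flag_c1, flag_c2, flag_c3):
    """Overall filter identification: the reconstructed point cloud is
    filtered if at least one color-component flag carries the first
    value, and skipped only when all three carry the second value."""
    if FIRST_VALUE in (flag_c1, flag_c2, flag_c3):
        return FIRST_VALUE
    return SECOND_VALUE

print(cloud_filter_flag(1, 0, 0))  # -> 1 (filter: one component enabled)
print(cloud_filter_flag(0, 0, 0))  # -> 0 (skip: all components disabled)
```

This also explains why the decoder can skip coefficient decoding entirely when all three component flags are the second value.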
In some embodiments, the second filtering unit 2002 is further configured to: if the filter identification information indicates that the reconstructed point cloud is not to be filtered, skip the step of decoding the bitstream to determine the filter coefficients.
In some embodiments, the second filtering unit 2002 is further configured to: if the color components of the points in the filtered point cloud do not conform to the RGB color space, perform color space conversion on the filtered point cloud so that the color components of the points in the filtered point cloud conform to the RGB color space.
It can be understood that, in this embodiment, a "unit" may be part of a circuit, part of a processor, part of a program or software, and so on; it may of course also be a module, or it may be non-modular. Moreover, the components in this embodiment may be integrated into one processing unit, or each unit may exist physically on its own, or two or more units may be integrated into one unit. The above integrated unit may be implemented in the form of hardware or in the form of a software functional module.
If the integrated unit is implemented in the form of a software functional module and is not sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on this understanding, this embodiment provides a computer storage medium applied to the decoder 200, the computer storage medium storing a computer program which, when executed by a second processor, implements the method described in any one of the foregoing embodiments.
Based on the above composition of the decoder 200 and the computer storage medium, refer to FIG. 21, which shows a schematic diagram of a specific hardware structure of the decoder 200 provided by an embodiment of the present application. As shown in FIG. 21, the decoder 200 may include: a second communication interface 2101, a second memory 2102, and a second processor 2103, the components being coupled together through a second bus system 2104. It can be understood that the second bus system 2104 is used to implement connection and communication between these components. In addition to a data bus, the second bus system 2104 includes a power bus, a control bus, and a status signal bus. For clarity of description, however, the various buses are all labeled as the second bus system 2104 in FIG. 21. Wherein,
the second communication interface 2101 is used for receiving and sending signals in the process of transmitting and receiving information with other external network elements;
the second memory 2102 is used to store a computer program capable of running on the second processor 2103;
the second processor 2103 is used to execute, when running the computer program:
decoding a bitstream to determine filter identification information; wherein the filter identification information is used to determine whether to filter a reconstructed point cloud corresponding to an initial point cloud;
if the filter identification information indicates that the reconstructed point cloud is to be filtered, decoding the bitstream to determine filter coefficients;
filtering, using the filter coefficients, K target points corresponding to a first point in the reconstructed point cloud to determine a filtered point cloud corresponding to the reconstructed point cloud; wherein the K target points include the first point and (K-1) nearest neighbor points adjacent to the first point, K is an integer greater than 1, and the first point represents any point in the reconstructed point cloud.
Optionally, as another embodiment, the second processor 2103 is further configured to execute, when running the computer program, the method described in any one of the foregoing embodiments.
It can be understood that the hardware functions of the second memory 2102 are similar to those of the first memory 1902, and the hardware functions of the second processor 2103 are similar to those of the first processor 1903; details are not repeated here.
This embodiment provides a decoder, which may include a decoding unit and a second filtering unit. In this way, since the decoding side can directly decode to obtain the filter coefficients and then use them to filter the reconstructed point cloud, the reconstructed point cloud is optimized and its quality is improved. In addition, when filtering with nearest neighbor points, the encoding side also takes the current point itself into account, so that the filtered value also depends on the attribute value of the current point itself; moreover, when determining whether to filter the reconstructed point cloud, not only the PSNR performance metric but also the rate-distortion cost is weighed; furthermore, the determination of the correspondence between the reconstructed point cloud and the initial point cloud in the geometry-lossy and attribute-lossy case is also proposed. This not only broadens the applicability and improves point cloud quality, but also saves bit rate and improves encoding and decoding efficiency.
In yet another embodiment of the present application, refer to FIG. 22, which shows a schematic diagram of the composition structure of a codec system provided by an embodiment of the present application. As shown in FIG. 22, the codec system 220 may include an encoder 2201 and a decoder 2202, wherein the encoder 2201 may be the encoder described in any one of the foregoing embodiments, and the decoder 2202 may be the decoder described in any one of the foregoing embodiments.
In the embodiments of the present application, in the codec system 220, the encoding side uses the initial point cloud and the reconstructed point cloud to calculate the filter coefficients used for filtering, and transmits the corresponding filter coefficients to the decoder only after determining that the reconstructed point cloud is to be filtered; correspondingly, the decoding side can directly decode to obtain the filter coefficients and then use them to filter the reconstructed point cloud, thereby optimizing the reconstructed point cloud and improving its quality. In addition, when filtering with nearest neighbor points, the encoding side also takes the current point itself into account, so that the filtered value also depends on the attribute value of the current point itself; moreover, when determining whether to filter the reconstructed point cloud, not only the PSNR performance metric but also the rate-distortion cost is weighed; furthermore, the determination of the correspondence between the reconstructed point cloud and the initial point cloud in the geometry-lossy and attribute-lossy case is also proposed. This not only broadens the applicability and improves point cloud quality, but also saves bit rate and improves encoding and decoding efficiency.
It should be noted that, in the present application, the terms "comprise", "include", or any other variants thereof are intended to cover non-exclusive inclusion, so that a process, method, article, or apparatus including a series of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, method, article, or apparatus. Without further limitation, an element defined by the phrase "including a ..." does not exclude the existence of other identical elements in the process, method, article, or apparatus that includes that element.
The above serial numbers of the embodiments of the present application are for description only and do not represent the superiority or inferiority of the embodiments.
The methods disclosed in the several method embodiments provided in the present application may be combined arbitrarily without conflict to obtain new method embodiments.
The features disclosed in the several product embodiments provided in the present application may be combined arbitrarily without conflict to obtain new product embodiments.
The features disclosed in the several method or device embodiments provided in the present application may be combined arbitrarily without conflict to obtain new method or device embodiments.
The above is only the specific implementation of the present application, but the protection scope of the present application is not limited thereto. Any person skilled in the art could readily conceive of changes or substitutions within the technical scope disclosed in the present application, and these should all be covered within the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.
Industrial Applicability
In the embodiments of the present application, in the encoder: an initial point cloud and a reconstructed point cloud corresponding to the initial point cloud are determined; filter coefficients are determined according to the initial point cloud and the reconstructed point cloud; K target points corresponding to a first point in the reconstructed point cloud are filtered using the filter coefficients to determine a filtered point cloud corresponding to the reconstructed point cloud, wherein the K target points include the first point and (K-1) nearest neighbor points adjacent to the first point, K is an integer greater than 1, and the first point represents any point in the reconstructed point cloud; filter identification information is determined according to the reconstructed point cloud and the filtered point cloud, wherein the filter identification information is used to determine whether to filter the reconstructed point cloud; and, if the filter identification information indicates that the reconstructed point cloud is to be filtered, the filter identification information and the filter coefficients are encoded and the obtained encoded bits are written into a bitstream. In the decoder: the bitstream is decoded to determine the filter identification information, wherein the filter identification information is used to determine whether to filter the reconstructed point cloud corresponding to the initial point cloud; if the filter identification information indicates that the reconstructed point cloud is to be filtered, the bitstream is decoded to determine the filter coefficients; and the K target points corresponding to the first point in the reconstructed point cloud are filtered using the filter coefficients to determine the filtered point cloud corresponding to the reconstructed point cloud, wherein the K target points include the first point and (K-1) nearest neighbor points adjacent to the first point, K is an integer greater than 1, and the first point represents any point in the reconstructed point cloud. In this way, the encoding side uses the initial point cloud and the reconstructed point cloud to calculate the filter coefficients used for filtering, and transmits the corresponding filter coefficients to the decoder only after determining that the reconstructed point cloud is to be filtered; correspondingly, the decoding side can directly decode to obtain the filter coefficients and then use them to filter the reconstructed point cloud, thereby optimizing the reconstructed point cloud and improving its quality. In addition, when filtering with nearest neighbor points, the encoding side also takes the current point itself into account, so that the filtered value also depends on the attribute value of the current point itself; moreover, when determining whether to filter the reconstructed point cloud, not only the PSNR performance metric but also the rate-distortion cost is weighed; furthermore, the determination of the correspondence between the reconstructed point cloud and the initial point cloud in the geometry-lossy and attribute-lossy case is also proposed. This not only broadens the applicability and improves point cloud quality, but also saves bit rate and improves encoding and decoding efficiency.
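The encoder-side coefficient derivation summarized above amounts to a Wiener-filter style least-squares fit: an auto-correlation matrix built from the reconstructed target-point attributes is solved against a cross-correlation vector with the original attributes. The following sketch illustrates that idea under these assumptions (plain Python with Gaussian elimination instead of a linear-algebra library; function and variable names are invented for illustration):

```python
def wiener_coeffs(recon_targets, originals):
    """Least-squares filter coefficients c minimizing
    sum_n (originals[n] - c . recon_targets[n])^2.

    recon_targets -- per point, the K reconstructed attribute values of
                     its target points (the point itself + neighbors)
    originals     -- per point, the original attribute value
    """
    k = len(recon_targets[0])
    # Auto-correlation matrix A = R^T R and cross-correlation vector b = R^T o
    A = [[sum(r[i] * r[j] for r in recon_targets) for j in range(k)]
         for i in range(k)]
    b = [sum(r[i] * o for r, o in zip(recon_targets, originals))
         for i in range(k)]
    # Solve A c = b by Gaussian elimination with partial pivoting
    for col in range(k):
        piv = max(range(col, k), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for row in range(col + 1, k):
            f = A[row][col] / A[col][col]
            for j in range(col, k):
                A[row][j] -= f * A[col][j]
            b[row] -= f * b[col]
    c = [0.0] * k
    for row in range(k - 1, -1, -1):
        s = sum(A[row][j] * c[j] for j in range(row + 1, k))
        c[row] = (b[row] - s) / A[row][row]
    return c

# A consistent toy system: each original is the mean of its two targets,
# so the fitted coefficients come out approximately [0.5, 0.5]
c = wiener_coeffs([[2, 4], [1, 3], [5, 1]], [3, 2, 3])
```

Only the resulting K coefficients (per filtered component) are written to the bitstream, so the decoder can apply the filter without access to the original point cloud.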

Claims (47)

  1. An encoding method, applied to an encoder, the method comprising:
    determining an initial point cloud and a reconstructed point cloud corresponding to the initial point cloud;
    determining filter coefficients according to the initial point cloud and the reconstructed point cloud;
    filtering, using the filter coefficients, K target points corresponding to a first point in the reconstructed point cloud to determine a filtered point cloud corresponding to the reconstructed point cloud; wherein the K target points comprise the first point and (K-1) nearest neighbor points adjacent to the first point, K is an integer greater than 1, and the first point represents any point in the reconstructed point cloud;
    determining filter identification information according to the reconstructed point cloud and the filtered point cloud; wherein the filter identification information is used to determine whether to filter the reconstructed point cloud;
    if the filter identification information indicates that the reconstructed point cloud is to be filtered, encoding the filter identification information and the filter coefficients, and writing the obtained encoded bits into a bitstream.
  2. The method according to claim 1, wherein the determining the initial point cloud and the reconstructed point cloud corresponding to the initial point cloud comprises:
    if the initial point cloud is encoded and reconstructed using a first-type encoding mode, obtaining a first reconstructed point cloud, and using the first reconstructed point cloud as the reconstructed point cloud;
    wherein the first-type encoding mode is used to indicate geometry-lossless and attribute-lossy encoding of the initial point cloud.
  3. The method according to claim 1, wherein the determining the initial point cloud and the reconstructed point cloud corresponding to the initial point cloud comprises:
    if the initial point cloud is encoded and reconstructed using a second-type encoding mode, obtaining a second reconstructed point cloud;
    performing geometry recovery on the second reconstructed point cloud to obtain a recovered reconstructed point cloud, and using the recovered reconstructed point cloud as the reconstructed point cloud;
    wherein the second-type encoding mode is used to indicate geometry-lossy and attribute-lossy encoding of the initial point cloud.
  4. The method according to claim 3, wherein the performing geometry recovery on the second reconstructed point cloud to obtain the recovered reconstructed point cloud comprises:
    performing geometry compensation on the second reconstructed point cloud to obtain an intermediate reconstructed point cloud;
    performing scaling on the intermediate reconstructed point cloud to obtain the recovered reconstructed point cloud; wherein the recovered reconstructed point cloud is of the same size and the same geometric positions as the initial point cloud.
  5. The method according to claim 2, wherein the method further comprises:
    in the case of the first-type encoding mode, determining, according to a first preset search mode, corresponding points in the initial point cloud of the points in the reconstructed point cloud, and establishing a correspondence between the points in the reconstructed point cloud and the points in the initial point cloud.
  6. The method according to claim 3, wherein the method further comprises:
    in the case of the second-type encoding mode, determining, according to a second preset search mode, corresponding points in the initial point cloud of the points in the reconstructed point cloud;
    constructing a matched point cloud from the determined corresponding points, using the matched point cloud as the initial point cloud, and establishing a correspondence between the points in the reconstructed point cloud and the points in the initial point cloud.
  7. The method according to claim 5 or 6, wherein,
    the first preset search mode is a K-nearest-neighbor search that searches for a first constant number of points;
    the second preset search mode is a K-nearest-neighbor search that searches for a second constant number of points.
  8. The method according to claim 6, wherein the determining, according to the second preset search mode, the corresponding points in the initial point cloud of the points in the reconstructed point cloud comprises:
    based on a current point in the reconstructed point cloud, searching the initial point cloud for the second constant number of points using the second preset search mode;
    calculating distance values between the current point and the second constant number of points respectively, and selecting the minimum distance value from the distance values;
    if there is one minimum distance value, determining the point corresponding to the minimum distance value as the corresponding point of the current point;
    if there are multiple minimum distance values, determining the corresponding point of the current point according to the multiple points corresponding to the minimum distance values.
  9. The method according to claim 7, wherein the first constant value is equal to 1, and the second constant value is equal to 5.
  10. The method according to claim 1, wherein the method further comprises:
    determining the K target points corresponding to the first point in the reconstructed point cloud;
    correspondingly, the filtering, using the filter coefficients, the K target points corresponding to the first point in the reconstructed point cloud to determine the filtered point cloud corresponding to the reconstructed point cloud comprises:
    filtering, using the filter coefficients, the K target points corresponding to the first point in the reconstructed point cloud to determine a filtered value of attribute information of the first point in the reconstructed point cloud;
    after filtered values of attribute information of at least one point in the reconstructed point cloud are determined, determining the filtered point cloud according to the filtered values of the attribute information of the at least one point.
  11. The method according to claim 10, wherein the determining the K target points corresponding to the first point in the reconstructed point cloud comprises:
    based on the first point in the reconstructed point cloud, searching the reconstructed point cloud for a preset number of candidate points using a K-nearest-neighbor search;
    calculating distance values between the first point and the preset number of candidate points respectively, and selecting (K-1) distance values from the obtained preset number of distance values, the (K-1) distance values all being smaller than the remaining ones of the preset number of distance values;
    determining (K-1) nearest neighbor points from the candidate points corresponding to the (K-1) distance values, and determining the first point and the (K-1) nearest neighbor points as the K target points corresponding to the first point.
  12. The method according to claim 1, wherein the determining the filter coefficients according to the initial point cloud and the reconstructed point cloud comprises:
    determining a first attribute parameter according to original values of attribute information of points in the initial point cloud;
    determining a second attribute parameter according to reconstructed values of attribute information of the K target points corresponding to the points in the reconstructed point cloud;
    determining the filter coefficients based on the first attribute parameter and the second attribute parameter.
  13. The method according to claim 12, wherein the determining the filter coefficients based on the first attribute parameter and the second attribute parameter comprises:
    determining a cross-correlation parameter according to the first attribute parameter and the second attribute parameter;
    determining an auto-correlation parameter according to the second attribute parameter;
    performing coefficient calculation according to the cross-correlation parameter and the auto-correlation parameter to obtain the filter coefficients.
  14. The method according to any one of claims 1 to 13, wherein the attribute information comprises color components, and the color components comprise at least one of: a first color component, a second color component, and a third color component; wherein,
    if the color components conform to the RGB color space, the first color component, the second color component, and the third color component are determined to be, in order, the R component, the G component, and the B component;
    if the color components conform to the YUV color space, the first color component, the second color component, and the third color component are determined to be, in order, the Y component, the U component, and the V component.
  15. The method according to claim 1, wherein the determining the filter identification information according to the reconstructed point cloud and the filtered point cloud comprises:
    determining a first cost value of a component to be processed of the attribute information of the reconstructed point cloud, and determining a second cost value of the component to be processed of the attribute information of the filtered point cloud;
    determining filter identification information of the component to be processed according to the first cost value and the second cost value;
    obtaining the filter identification information according to the filter identification information of the component to be processed.
  16. The method according to claim 15, wherein the determining the filter identification information of the component to be processed according to the first cost value and the second cost value comprises:
    if the second cost value is smaller than the first cost value, determining the value of the filter identification information of the component to be processed as a first value;
    if the second cost value is greater than the first cost value, determining the value of the filter identification information of the component to be processed as a second value.
  17. The method according to claim 15, wherein the determining the first cost value of the component to be processed of the attribute information of the reconstructed point cloud comprises:
    performing cost calculation on the component to be processed of the attribute information of the reconstructed point cloud using a rate-distortion cost method, and using the obtained first rate-distortion value as the first cost value;
    correspondingly, the determining the second cost value of the component to be processed of the attribute information of the filtered point cloud comprises:
    performing cost calculation on the component to be processed of the attribute information of the filtered point cloud using the rate-distortion cost method, and using the obtained second rate-distortion value as the second cost value.
  18. The method according to claim 15, wherein the determining the first cost value of the component to be processed of the attribute information of the reconstructed point cloud comprises:
    performing cost calculation on the component to be processed of the attribute information of the reconstructed point cloud using a rate-distortion cost method to obtain a first rate-distortion value;
    performing performance calculation on the component to be processed of the attribute information of the reconstructed point cloud using a preset performance metric to obtain a first performance value;
    determining the first cost value according to the first rate-distortion value and the first performance value;
    correspondingly, the determining the second cost value of the component to be processed of the attribute information of the filtered point cloud comprises:
    performing cost calculation on the component to be processed of the attribute information of the filtered point cloud using the rate-distortion cost method to obtain a second rate-distortion value;
    performing performance calculation on the component to be processed of the attribute information of the filtered point cloud using the preset performance metric to obtain a second performance value;
    determining the second cost value according to the second rate-distortion value and the second performance value.
  19. The method according to claim 18, wherein the determining the filter identification information of the component to be processed according to the first cost value and the second cost value comprises:
    if the second performance value is greater than the first performance value and the second rate-distortion value is smaller than the first rate-distortion value, determining the value of the filter identification information of the component to be processed as a first value;
    if the second performance value is smaller than the first performance value, determining the value of the filter identification information of the component to be processed as a second value.
  20. The method according to claim 18, wherein the determining the filter identification information of the component to be processed according to the first cost value and the second cost value comprises:
    if the second performance value is greater than the first performance value and the second rate-distortion value is smaller than the first rate-distortion value, determining the value of the filter identification information of the component to be processed as a first value;
    if the second rate-distortion value is greater than the first rate-distortion value, determining the value of the filter identification information of the component to be processed as a second value.
  21. The method according to claim 16, 19, or 20, wherein the method further comprises:
    if the value of the filter identification information of the component to be processed is the first value, determining that the filter identification information of the component to be processed indicates that the component to be processed of the attribute information of the reconstructed point cloud is to be filtered;
    if the value of the filter identification information of the component to be processed is the second value, determining that the filter identification information of the component to be processed indicates that the component to be processed of the attribute information of the reconstructed point cloud is not to be filtered.
  22. The method according to claim 15, wherein the obtaining the filter identification information according to the filter identification information of the component to be processed comprises:
    in the case where the component to be processed is a color component, determining filter identification information of a first color component, filter identification information of a second color component, and filter identification information of a third color component;
    obtaining the filter identification information according to the filter identification information of the first color component, the filter identification information of the second color component, and the filter identification information of the third color component.
  23. The method according to claim 22, wherein the method further comprises:
    if at least one of the filter identification information of the first color component, the filter identification information of the second color component, and the filter identification information of the third color component is a first value, determining that the filter identification information indicates that the reconstructed point cloud is to be filtered;
    if all of the filter identification information of the first color component, the filter identification information of the second color component, and the filter identification information of the third color component are a second value, determining that the filter identification information indicates that the reconstructed point cloud is not to be filtered.
  24. The method according to claim 23, wherein the method further comprises:
    if the filter identification information indicates that the reconstructed point cloud is not to be filtered, encoding only the filter identification information, and writing the obtained encoded bits into the bitstream.
  25. The method according to claim 1, wherein the method further comprises:
    determining the value of K according to the number of points of the reconstructed point cloud; and/or,
    determining the value of K according to a quantization parameter of the reconstructed point cloud; and/or,
    determining the value of K according to neighborhood difference values of points in the reconstructed point cloud; wherein a neighborhood difference value is calculated based on component differences between the attribute information of a point and the attribute information of at least one of its nearest neighbor points.
  26. The method according to claim 25, wherein the determining the value of K according to the neighborhood difference values of the points in the reconstructed point cloud further comprises:
    if the neighborhood difference value of a point in the reconstructed point cloud falls within a first preset interval, selecting a first-category filter to filter the reconstructed point cloud;
    if the neighborhood difference value of a point in the reconstructed point cloud falls within a second preset interval, selecting a second-category filter to filter the reconstructed point cloud;
    wherein the order of the first-category filter is different from the order of the second-category filter.
  27. The method according to claim 1, wherein the method further comprises:
    determining predicted values of attribute information of points in the initial point cloud;
    determining residual values of the attribute information of the points in the initial point cloud according to original values of the attribute information of the points in the initial point cloud and the predicted values;
    encoding the residual values of the attribute information of the points in the initial point cloud, and writing the obtained encoded bits into the bitstream.
  28. A bitstream, generated by bit-encoding information to be encoded; wherein the information to be encoded comprises at least one of: residual values of attribute information of points in an initial point cloud, filter identification information, and filter coefficients.
  29. A decoding method, applied to a decoder, the method comprising:
    decoding a bitstream to determine filter identification information; wherein the filter identification information is used to determine whether to filter a reconstructed point cloud corresponding to an initial point cloud;
    if the filter identification information indicates that the reconstructed point cloud is to be filtered, decoding the bitstream to determine filter coefficients;
    filtering, using the filter coefficients, K target points corresponding to a first point in the reconstructed point cloud to determine a filtered point cloud corresponding to the reconstructed point cloud; wherein the K target points comprise the first point and (K-1) nearest neighbor points adjacent to the first point, K is an integer greater than 1, and the first point represents any point in the reconstructed point cloud.
  30. The method according to claim 29, wherein the method further comprises:
    determining the K target points corresponding to the first point in the reconstructed point cloud;
    correspondingly, the filtering, using the filter coefficients, the K target points corresponding to the first point in the reconstructed point cloud to determine the filtered point cloud corresponding to the reconstructed point cloud comprises:
    filtering, using the filter coefficients, the K target points corresponding to the first point in the reconstructed point cloud to determine a filtered value of attribute information of the first point in the reconstructed point cloud;
    after filtered values of attribute information of at least one point in the reconstructed point cloud are determined, determining the filtered point cloud according to the filtered values of the attribute information of the at least one point.
  31. The method according to claim 30, wherein the determining the K target points corresponding to the first point in the reconstructed point cloud comprises:
    based on the first point in the reconstructed point cloud, searching the reconstructed point cloud for a preset number of candidate points using a K-nearest-neighbor search;
    calculating distance values between the first point and the preset number of candidate points respectively, and selecting (K-1) distance values from the obtained preset number of distance values, the (K-1) distance values all being smaller than the remaining ones of the preset number of distance values;
    determining (K-1) nearest neighbor points from the candidate points corresponding to the (K-1) distance values, and determining the first point and the (K-1) nearest neighbor points as the K target points corresponding to the first point.
  32. The method according to claim 29, wherein the method further comprises:
    decoding the bitstream to determine residual values of attribute information of points in the initial point cloud;
    after predicted values of the attribute information of the points in the initial point cloud are determined, determining reconstructed values of the attribute information of the points in the initial point cloud according to the predicted values and the residual values of the attribute information of the points in the initial point cloud;
    constructing the reconstructed point cloud based on the reconstructed values of the attribute information of the points in the initial point cloud.
  33. The method according to claim 29, wherein the attribute information comprises color components, and the color components comprise at least one of: a first color component, a second color component, and a third color component; wherein,
    if the color components conform to the RGB color space, the first color component, the second color component, and the third color component are determined to be, in order, the R component, the G component, and the B component;
    if the color components conform to the YUV color space, the first color component, the second color component, and the third color component are determined to be, in order, the Y component, the U component, and the V component.
  34. The method according to claim 29, wherein the decoding the bitstream to determine the filter identification information comprises:
    decoding the bitstream to determine filter identification information of a component to be processed;
    wherein the filter identification information of the component to be processed is used to indicate whether to filter the component to be processed of the attribute information of the reconstructed point cloud.
  35. The method according to claim 34, wherein the decoding the bitstream to determine the filter identification information of the component to be processed comprises:
    if the value of the filter identification information of the component to be processed is a first value, determining that the component to be processed of the attribute information of the reconstructed point cloud is to be filtered;
    if the value of the filter identification information of the component to be processed is a second value, determining that the component to be processed of the attribute information of the reconstructed point cloud is not to be filtered;
    correspondingly, the decoding the bitstream to determine the filter coefficients if the filter identification information indicates that the reconstructed point cloud is to be filtered comprises:
    if the value of the filter identification information of the component to be processed is the first value, decoding the bitstream to determine the filter coefficients corresponding to the component to be processed.
  36. The method according to claim 34, wherein, in the case where the component to be processed is a color component, the decoding the bitstream to determine the filter identification information of the component to be processed comprises:
    decoding the bitstream to determine filter identification information of a first color component, filter identification information of a second color component, and filter identification information of a third color component;
    wherein the filter identification information of the first color component is used to indicate whether to filter the first color component of the attribute information of the reconstructed point cloud, the filter identification information of the second color component is used to indicate whether to filter the second color component of the attribute information of the reconstructed point cloud, and the filter identification information of the third color component is used to indicate whether to filter the third color component of the attribute information of the reconstructed point cloud.
  37. The method according to claim 36, wherein the decoding the bitstream to determine the filter identification information of the first color component comprises:
    if the value of the filter identification information of the first color component is a first value, determining that the first color component of the attribute information of the reconstructed point cloud is to be filtered;
    if the value of the filter identification information of the first color component is a second value, determining that the first color component of the attribute information of the reconstructed point cloud is not to be filtered;
    correspondingly, the decoding the bitstream to determine the filter coefficients if the filter identification information indicates that the reconstructed point cloud is to be filtered comprises:
    if the value of the filter identification information of the first color component is the first value, decoding the bitstream to determine the filter coefficients corresponding to the first color component.
  38. The method according to claim 36, wherein the decoding the bitstream to determine the filter identification information of the second color component comprises:
    if the value of the filter identification information of the second color component is a first value, determining that the second color component of the attribute information of the reconstructed point cloud is to be filtered;
    if the value of the filter identification information of the second color component is a second value, determining that the second color component of the attribute information of the reconstructed point cloud is not to be filtered;
    correspondingly, the decoding the bitstream to determine the filter coefficients if the filter identification information indicates that the reconstructed point cloud is to be filtered comprises:
    if the value of the filter identification information of the second color component is the first value, decoding the bitstream to determine the filter coefficients corresponding to the second color component.
  39. The method according to claim 36, wherein the decoding the bitstream to determine the filter identification information of the third color component comprises:
    if the value of the filter identification information of the third color component is a first value, determining that the third color component of the attribute information of the reconstructed point cloud is to be filtered;
    if the value of the filter identification information of the third color component is a second value, determining that the third color component of the attribute information of the reconstructed point cloud is not to be filtered;
    correspondingly, the decoding the bitstream to determine the filter coefficients if the filter identification information indicates that the reconstructed point cloud is to be filtered comprises:
    if the value of the filter identification information of the third color component is the first value, decoding the bitstream to determine the filter coefficients corresponding to the third color component.
  40. The method according to claim 36, wherein the method further comprises:
    if at least one of the filter identification information of the first color component, the filter identification information of the second color component, and the filter identification information of the third color component is a first value, determining that the filter identification information indicates that the reconstructed point cloud is to be filtered;
    if all of the filter identification information of the first color component, the filter identification information of the second color component, and the filter identification information of the third color component are a second value, determining that the filter identification information indicates that the reconstructed point cloud is not to be filtered.
  41. The method according to claim 40, wherein the method further comprises:
    if the filter identification information indicates that the reconstructed point cloud is not to be filtered, skipping the step of decoding the bitstream to determine the filter coefficients.
  42. The method according to any one of claims 29 to 41, wherein, after the filtered point cloud is determined, the method further comprises:
    if the color components of the points in the filtered point cloud do not conform to the RGB color space, performing color space conversion on the filtered point cloud so that the color components of the points in the filtered point cloud conform to the RGB color space.
  43. An encoder, comprising a first determining unit, a first filtering unit, and an encoding unit; wherein,
    the first determining unit is configured to determine an initial point cloud and a reconstructed point cloud corresponding to the initial point cloud, and determine filter coefficients according to the initial point cloud and the reconstructed point cloud;
    the first filtering unit is configured to filter, using the filter coefficients, K target points corresponding to a first point in the reconstructed point cloud to determine a filtered point cloud corresponding to the reconstructed point cloud; wherein the K target points comprise the first point and (K-1) nearest neighbor points adjacent to the first point, K is an integer greater than 1, and the first point represents any point in the reconstructed point cloud;
    the first determining unit is further configured to determine filter identification information according to the reconstructed point cloud and the filtered point cloud; wherein the filter identification information is used to determine whether to filter the reconstructed point cloud;
    the encoding unit is configured to, if the filter identification information indicates that the reconstructed point cloud is to be filtered, encode the filter identification information and the filter coefficients, and write the obtained encoded bits into a bitstream.
  44. An encoder, comprising a first memory and a first processor; wherein,
    the first memory is used to store a computer program capable of running on the first processor;
    the first processor is used to execute, when running the computer program, the method according to any one of claims 1 to 27.
  45. A decoder, comprising a decoding unit and a second filtering unit; wherein,
    the decoding unit is configured to decode a bitstream to determine filter identification information, wherein the filter identification information is used to determine whether to filter a reconstructed point cloud corresponding to an initial point cloud; and, if the filter identification information indicates that the reconstructed point cloud is to be filtered, decode the bitstream to determine filter coefficients;
    the second filtering unit is configured to filter, using the filter coefficients, K target points corresponding to a first point in the reconstructed point cloud to determine a filtered point cloud corresponding to the reconstructed point cloud; wherein the K target points comprise the first point and (K-1) nearest neighbor points adjacent to the first point, K is an integer greater than 1, and the first point represents any point in the reconstructed point cloud.
  46. A decoder, comprising a second memory and a second processor; wherein,
    the second memory is used to store a computer program capable of running on the second processor;
    the second processor is used to execute, when running the computer program, the method according to any one of claims 29 to 42.
  47. A computer storage medium, wherein the computer storage medium stores a computer program which, when executed by a first processor, implements the method according to any one of claims 1 to 27, or, when executed by a second processor, implements the method according to any one of claims 29 to 42.
PCT/CN2021/143955 2021-12-31 2021-12-31 Encoding and decoding method, bitstream, encoder, decoder, and storage medium WO2023123471A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/CN2021/143955 WO2023123471A1 (zh) 2021-12-31 2021-12-31 Encoding and decoding method, bitstream, encoder, decoder, and storage medium


Publications (1)

Publication Number Publication Date
WO2023123471A1 true WO2023123471A1 (zh) 2023-07-06

Family

ID=86997185

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/143955 WO2023123471A1 (zh) 2021-12-31 2021-12-31 Encoding and decoding method, bitstream, encoder, decoder, and storage medium

Country Status (1)

Country Link
WO (1) WO2023123471A1 (zh)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111242997A * 2020-01-13 2020-06-05 Peking University Shenzhen Graduate School Filter-based point cloud attribute prediction method and device
CN111435551A * 2019-01-15 2020-07-21 Huawei Technologies Co., Ltd. Point cloud filtering method, apparatus, and storage medium
WO2020189876A1 * 2019-03-15 2020-09-24 LG Electronics Inc. Point cloud data transmission device and method, and point cloud data reception device and method
CN112385217A * 2018-07-11 2021-02-19 Sony Corporation Image processing device and image processing method
US20210256735A1 (en) * 2018-07-02 2021-08-19 Apple Inc. Point Cloud Compression with Adaptive Filtering



Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21969816

Country of ref document: EP

Kind code of ref document: A1