WO2022247704A1 - Method and apparatus for predictive encoding and decoding of point cloud depth information - Google Patents

Method and apparatus for predictive encoding and decoding of point cloud depth information Download PDF

Info

Publication number
WO2022247704A1
WO2022247704A1 PCT/CN2022/093614 CN2022093614W WO2022247704A1 WO 2022247704 A1 WO2022247704 A1 WO 2022247704A1 CN 2022093614 W CN2022093614 W CN 2022093614W WO 2022247704 A1 WO2022247704 A1 WO 2022247704A1
Authority
WO
WIPO (PCT)
Prior art keywords
depth information
list
point cloud
candidate list
prediction
Prior art date
Application number
PCT/CN2022/093614
Other languages
English (en)
French (fr)
Inventor
张伟
杨付正
张可
宋佳润
Original Assignee
荣耀终端有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 荣耀终端有限公司 filed Critical 荣耀终端有限公司
Priority to EP22810431.1A priority Critical patent/EP4236311A1/en
Priority to US18/039,633 priority patent/US20240007669A1/en
Publication of WO2022247704A1 publication Critical patent/WO2022247704A1/zh

Links

Images

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/50Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/597Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding specially adapted for multi-view video sequence encoding
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/10Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N13/106Processing image signals
    • H04N13/128Adjusting depth or disparity
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T9/00Image coding
    • G06T9/001Model-based coding, e.g. wire frame
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T9/00Image coding
    • G06T9/004Predictors, e.g. intraframe, interframe coding
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/10Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N13/106Processing image signals
    • H04N13/161Encoding, multiplexing or demultiplexing different image signal components
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/102Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/103Selection of coding mode or of prediction mode
    • H04N19/105Selection of the reference unit for prediction within a chosen coding or prediction mode, e.g. adaptive choice of position and number of pixels used for prediction
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/50Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N2013/0074Stereoscopic image analysis
    • H04N2013/0081Depth or disparity estimation from stereoscopic image signals

Definitions

  • the invention belongs to the technical field of point cloud encoding and decoding, and in particular relates to a predictive encoding and decoding method and device for point cloud depth information.
  • 3D point cloud data has been widely used in fields such as virtual reality, augmented reality, autonomous driving, and environment modeling.
  • large-scale point clouds usually have a large amount of data, which is not conducive to the transmission and storage of point cloud data. Therefore, it is necessary to efficiently encode and decode large-scale point clouds.
  • the cylindrical coordinate information (including depth information, azimuth angle information, etc.) of the point cloud is usually used to assist the encoding of the Cartesian coordinate information.
  • therefore, the prior art needs to perform predictive coding on each cylindrical coordinate component of the point cloud.
  • for the predictive encoding of point cloud depth information, in the point cloud encoding and decoding method based on two-dimensional regularized plane projection provided by prior art 1, first, it can be chosen whether to fill the empty pixels of the depth information map obtained by the two-dimensional plane projection with depth information; secondly, the depth information of the current pixel is predicted based on the occupancy information and the depth information of encoded pixels; then, it can be chosen whether to quantize the prediction residual of the depth information; finally, the prediction residual of the depth information is encoded to generate a compressed bitstream of the depth information map.
  • in the point cloud encoding and decoding method based on the prediction tree provided by prior art 2, for each node in the prediction tree, the best prediction mode needs to be selected from a fixed prediction mode list to obtain the predicted value and the prediction residual of the depth information of the current node; finally, the selected prediction mode and the prediction residual of the depth information are encoded so that the depth information of the point cloud can be reconstructed in the same way at the decoding end.
  • in the point cloud encoding and decoding method based on the single-chain structure provided by prior art 3, when predictively encoding the depth information of the point cloud, an adaptively updated prediction mode list is first established and updated according to certain rules: each time, an existing value in the prediction list is removed according to the predicted value corresponding to the prediction mode selected for the previous encoded point of the current point to be encoded, and the depth information of that previous encoded point is inserted at the first position of the prediction mode list. The best prediction mode is then selected at the encoding end to obtain the prediction residual of the depth information of the current point, and finally the selected prediction mode and the prediction residual of the depth information are encoded so that the depth information can be predicted and reconstructed in the same way at the decoding end.
  • the above methods only use the depth information of adjacent points for prediction, without considering the discontinuity of the actual scene, resulting in relatively large prediction residuals of the depth information, low prediction accuracy, and many outliers and jump values, which affects the coding efficiency.
  • the present invention provides a method and device for predictive encoding and decoding of point cloud depth information.
  • the technical problem to be solved in the present invention is realized through the following technical solutions:
  • a predictive coding method for point cloud depth information comprising:
  • Predictive encoding is performed on the depth information of the point cloud according to the adaptive prediction list to obtain code stream information.
  • the depth information adaptive prediction list of point clouds is established, including:
  • a certain value is selected from the candidate list to fill the prediction list to obtain the adaptive prediction list.
  • the establishment of several depth information candidate lists of the current point to be encoded includes:
  • the depth information value of the encoded point collected by the same Laser as the current point to be encoded is stored in the first candidate list
  • the second candidate list stores the depth information value of the coded point collected by the Laser different from the current point to be coded
  • Depth information values of historical encoded points are stored in the third candidate list
  • Prior depth information values are stored in the fourth candidate list.
  • establishing the first candidate list includes:
  • establishing a second candidate list includes:
  • if the current second candidate list is not full and the points collected by the first q Lasers before the current Laser have been encoded, the depth information of encoded points within a certain range directly above the current point to be encoded is inserted into the second candidate list.
  • establishing a third candidate list includes:
  • if the current third candidate list is not full, the depth information value of the currently encoded point is inserted at the end of the third candidate list; otherwise, a value in the third candidate list is selectively removed according to a preset threshold, and the depth information value of the currently encoded point is inserted at the end of the list.
  • a certain value is selected from the candidate list to fill the prediction list to obtain the adaptive prediction list, including:
  • a certain number of values are sequentially selected from the several candidate lists in a certain order and filled into the prediction list, including:
  • the values in the fourth candidate list are sequentially inserted into the remaining positions of the prediction list.
  • Another embodiment of the present invention provides a device for predictive encoding of point cloud depth information, including:
  • the data acquisition module is used to acquire the original point cloud data
  • the first calculation module is used to establish an adaptive prediction list of point cloud depth information
  • the predictive encoding module is configured to perform predictive encoding on the depth information of the point cloud according to the adaptive prediction list to obtain code stream information.
  • Another embodiment of the present invention also provides a predictive decoding method for point cloud depth information, including:
  • the depth information of the point cloud is reconstructed according to the adaptive prediction list, the prediction mode and the prediction residual obtained through decoding, and the reconstructed point cloud depth information is obtained.
  • Another embodiment of the present invention also provides a predictive decoding device for point cloud depth information, including:
  • Decoding module used to obtain code stream information and decode it
  • the second calculation module is used to establish an adaptive prediction list of point cloud depth information
  • the reconstruction module is used to reconstruct the depth information of the point cloud according to the adaptive prediction list, the prediction mode and the prediction residual obtained by decoding, and obtain the depth information of the reconstructed point cloud.
  • the present invention solves the problem of discontinuity of point cloud depth information caused by the discontinuity of the actual scene by establishing an adaptively updated depth information prediction list and selecting the best prediction value from the list to predict the depth information of the point cloud. Therefore, the prediction residual error of depth information and the occurrence frequency of outliers and jump values are significantly reduced, and the prediction accuracy and coding efficiency are improved.
  • Fig. 1 is a schematic flow chart of a predictive encoding method for point cloud depth information provided by an embodiment of the present invention
  • FIG. 2 is a schematic diagram of an update process of the first candidate list List0 provided by an embodiment of the present invention
  • FIG. 3 is a schematic diagram of an update process of a second candidate list List1 provided by an embodiment of the present invention.
  • FIG. 4 is a schematic diagram of an update process of a third candidate list List2 provided by an embodiment of the present invention.
  • Fig. 5 is a schematic structural diagram of a predictive encoding device for point cloud depth information provided by an embodiment of the present invention.
  • Fig. 6 is a schematic flow chart of a predictive decoding method for point cloud depth information provided by an embodiment of the present invention.
  • Fig. 7 is a schematic structural diagram of an apparatus for predictive decoding of point cloud depth information provided by an embodiment of the present invention.
  • FIG. 1 is a schematic flowchart of a predictive encoding method for point cloud depth information provided by an embodiment of the present invention, specifically including:
  • Step 1 Get the original point cloud data.
  • the original point cloud data usually consists of a set of three-dimensional space points, and each space point records its own geometric position information, as well as additional attribute information such as color, reflectivity, and normal.
  • the geometric position information of the point cloud is generally expressed based on the Cartesian coordinate system, that is, expressed by the x, y, and z coordinates of the point.
  • the original point cloud data can be obtained by laser radar scanning.
  • the laser radar is composed of multiple Lasers (laser scanners) arranged along both sides of a central axis; each Laser has a fixed pitch angle and can be regarded as a relatively independent acquisition system.
  • raw point cloud data are also available through public datasets provided by various platforms.
  • the acquired geometric position information of the original point cloud data is expressed based on a Cartesian coordinate system. It should be noted that the representation method of the geometric position information of the original point cloud data is not limited to Cartesian coordinates.
  • Step 2 Establish a point cloud depth information adaptive prediction list, including:
  • the depth information of the point cloud is mainly obtained by the following calculation formula: r = √(x² + y²)
  • r represents the depth information of each point in the original point cloud data
  • x and y are the Cartesian coordinate components of each point respectively.
  • the depth information of adjacent points in the scene collected by lidar usually has similarity, but due to the discontinuity of the scene, the depth information of non-adjacent points collected by lidar may also have a certain similarity; therefore, it is necessary to save the depth information of the encoded points preceding the point to be encoded into a series of candidate lists according to certain rules, and then select appropriate values from the candidate lists according to certain rules to predict the depth information of the current point to be encoded.
  • step 2 includes:
  • the depth information candidate list may include at least one of the first candidate list List0, the second candidate list List1, the third candidate list List2, or the fourth candidate list List3; wherein,
  • the depth information value of the historical coded point is stored in the third candidate list
  • the prior depth information value is stored in the fourth candidate list.
  • the size of the first candidate list List0 may be initialized to 4.
  • the first candidate list List0 is updated according to the depth information value of the encoded point collected by the same Laser.
  • the specific update process is: judge whether the current first candidate list List0 is full; if not, insert the depth information value of the currently encoded point at the end of List0; otherwise, update the first candidate list according to the first-in-first-out principle, that is, remove the first value in List0 from the list and insert the depth information of the currently encoded point at the end of List0.
  • taking a first candidate list List0 of size 4 as an example, whenever the depth information of a point has been encoded, the size of List0 needs to be checked: when the list size is less than 4, i.e., the list is not full, the depth information value of the currently encoded point is inserted directly at the end of the list; when the list size is equal to 4, i.e., the list is full, the first value in the list is removed from the list and the depth information value of the currently encoded point is inserted at the end of the list.
  • Figure 2 is a schematic diagram of the update process of the first candidate list List0 provided by the embodiment of the present invention; the symbols in the figure denote, respectively, the depth information value of the currently encoded point to be inserted, the depth information values of encoded points collected by the same Laser as the current point, and the depth information values already present in List0.
  • the size of the second candidate list List1 may be initialized to 2.
  • the second candidate list List1 is updated according to the depth information values of encoded points collected by different Lasers.
  • the specific update process is: if the current second candidate list List1 is not filled, and the points collected by the first q Lasers of the current Laser have been encoded, the depth information of the encoded points within a certain range directly above the current point to be encoded is Insert into the second candidate list List1.
  • Figure 3 is a schematic diagram of the update process of the second candidate list List1 provided by the embodiment of the present invention; in Figure 3, the marked symbols denote the depth information value of the current point to be encoded and the depth information value of the currently encoded point to be inserted, respectively.
  • the size of the third candidate list List2 may be initialized to 5.
  • the third candidate list List2 is updated according to the depth information value of the historical coded point.
  • the specific update process is: judge whether the current third candidate list List2 is full; if not, insert the depth information value of the currently encoded point at the end of the third candidate list List2; otherwise, selectively remove a value from the third candidate list according to a preset threshold and insert the depth information value of the currently encoded point at the end of the List2 list.
  • a redundancy check is first performed: when the ratio of the depth information value p0 of the currently encoded point to be inserted to a certain depth information value hi in the list (the larger of p0 and hi divided by the smaller) is less than th0, the first hi that satisfies this condition is removed from the list, and p0 is inserted at the end of the list.
  • Figure 4 is a schematic diagram of the update process of the third candidate list List2 provided by the embodiment of the present invention; the symbols in the figure denote, respectively, the depth information value of the current point to be encoded, the depth information values of encoded points collected by the same Laser as the current point, the depth information values already present in List2, the depth information value of the currently encoded point to be inserted, and the depth information value to be removed from the list.
  • the life cycle of List2 is the number of points scanned by a Laser, that is, after the depth information of all scanned points of a Laser is encoded, List2 is cleared to encode the depth information of the point scanned by the next Laser.
  • the size of the fourth candidate list List3 may be initialized to 4.
  • the depth information is divided into a number of intervals equal to the size of the fourth candidate list List3;
  • the depth information is generally in the range [0, 2^32 - 1], that is, the depth information is represented with at most 32 bits, and currently most depth information is coded using significant-bit-count coding. Therefore, before performing predictive coding on the depth information, the depth information may be divided by bit count into a number of intervals equal to the size of the fourth candidate list List3. For example, the depth information can be divided by bit count into 4 intervals: [0, 2^8 - 1], [2^8, 2^16 - 1], [2^16, 2^24 - 1], [2^24, 2^32 - 1].
  • if the depth information is divided by bit count into the following four intervals: [0, 2^8 - 1], [2^8, 2^16 - 1], [2^16, 2^24 - 1], [2^24, 2^32 - 1], a value can be selected in each of these four intervals, for example 128, 32640, 65536, and 16777216 respectively, to fill the fourth candidate list List3.
  • the size of the prediction list may be initialized to 4. It should be noted that, for all points to be encoded, the size of the prediction list is uniform.
  • four candidate lists are established, and corresponding values may be selected in sequence from the first candidate list to the fourth candidate list to fill the prediction list. details as follows:
  • the values in the fourth candidate list are sequentially inserted into the remaining positions of the prediction list.
  • the first depth information value in List0 can be inserted into the prediction list
  • n can be fixed or can change adaptively with the size of the prediction list; taking n equal to 1 as an example, the specific method is as follows:
  • k can be fixed or can change adaptively with the size of the prediction list; taking k equal to 1 as an example, the specific method is as follows:
  • the values in the fourth candidate list are sequentially inserted into the remaining positions of the prediction list.
  • in some cases, the depth information of points scanned by different Lasers will be closer to the depth information of the current point than that of adjacent points scanned by the same Laser. Therefore, the order of the values in the prediction list can be appropriately adjusted according to certain rules to adapt to changes in the scene.
  • the sequence adjustment may be performed according to the difference of the depth information values in the prediction list.
  • the difference between the depth information value in the second candidate list List1 that has been filled into the prediction list and the one that has not been filled into the prediction list can be calculated; when this difference is greater than a certain threshold, the filled-in depth information value is swapped with the value at the front of the prediction list.
  • the first threshold T0, the second threshold T1, and the third threshold T2 are not fixed, and all of them can be updated along with the encoding process.
  • the above-mentioned threshold may be updated according to the depth information of the previous coded point and the predicted value of the selected depth information, for example, the threshold may be updated as the ratio of the depth information of the previous coded point and its predicted value.
  • Step 3 Predictively encode the depth information of the point cloud according to the adaptive prediction list to obtain the code stream information.
  • the existing rate-distortion optimization technology can be used to select the prediction mode with the least cost in the prediction list to predict the depth information of the current point to be encoded, and obtain The prediction residual of the depth information, and then use the existing entropy coding technology to encode the selected prediction mode and the prediction residual of the depth information to obtain the code stream information.
  • the decoding end it only needs to establish the adaptive prediction list of the point cloud synchronously, and analyze the prediction mode in the code stream and the prediction residual of the depth information to reconstruct the depth information of the point cloud.
  • This embodiment solves the problem of discontinuous point cloud depth information caused by the discontinuity of the actual scene by establishing adaptively updated depth information candidate lists, constructing a prediction list from the candidate lists, and selecting the best predicted value to predict the depth information of the point cloud, which significantly reduces the prediction residuals of the depth information and the occurrence frequency of outliers and jump values, and improves the prediction accuracy and coding efficiency.
  • Figure 5 is a schematic structural diagram of a predictive encoding device for point cloud depth information provided by an embodiment of the present invention, which includes:
  • Data obtaining module 11 is used for obtaining original point cloud data
  • the first calculation module 12 is used to establish an adaptive prediction list of point cloud depth information
  • the predictive encoding module 13 is configured to perform predictive encoding on the depth information of the point cloud according to the adaptive prediction list to obtain code stream information.
  • the device provided in this embodiment can implement the encoding method provided in Embodiment 1 above, and the detailed process will not be repeated here.
  • FIG. 6 is a schematic flowchart of a predictive decoding method for point cloud depth information provided by an embodiment of the present invention, specifically including:
  • Step 1 Obtain code stream information and decode it.
  • the decoded data includes the prediction mode and prediction residual of the point cloud depth information.
  • Step 2 Establish an adaptive prediction list of point cloud depth information.
  • the establishment of the adaptive prediction list of the point cloud depth information can refer to the method at the encoding end in the first embodiment above, and will not be described in detail here.
  • Step 3 Reconstruct the depth information of the point cloud according to the adaptive prediction list and the prediction mode and prediction residual obtained by decoding, and obtain the reconstructed point cloud depth information.
  • the obtained predicted value is added to the predicted residual obtained by decoding to obtain the depth information of the reconstructed point cloud.
  • Figure 7 is a schematic structural diagram of a predictive decoding device for point cloud depth information provided by an embodiment of the present invention, which includes:
  • Decoding module 21 used to obtain code stream information and decode it
  • the second calculation module 22 is used to establish an adaptive prediction list of point cloud depth information
  • the reconstruction module 23 is used to reconstruct the depth information of the point cloud according to the adaptive prediction list, the prediction mode and the prediction residual obtained through decoding, and obtain the depth information of the reconstructed point cloud.
  • the device provided in this embodiment can implement the decoding method provided in Embodiment 3 above, and the detailed process will not be repeated here.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

The present invention discloses a method and apparatus for predictive encoding and decoding of point cloud depth information. The encoding method includes: acquiring original point cloud data; establishing an adaptive prediction list for the point cloud depth information; and performing predictive encoding on the depth information of the point cloud according to the adaptive prediction list to obtain code stream information. By establishing an adaptively updated depth information prediction list and selecting the best prediction mode from the list to predict the depth information of the point cloud, the present invention solves the problem of discontinuous point cloud depth information caused by discontinuities in the actual scene, thereby significantly reducing the prediction residuals of the depth information and the occurrence frequency of outliers and jump values, and improving prediction accuracy and coding efficiency.

Description

Method and Apparatus for Predictive Encoding and Decoding of Point Cloud Depth Information
This application claims priority to Chinese patent application No. 202110580227.8, entitled "Method and Apparatus for Predictive Encoding and Decoding of Point Cloud Depth Information", filed with the Chinese Patent Office on May 26, 2021, the entire contents of which are incorporated herein by reference.
TECHNICAL FIELD
The present invention belongs to the technical field of point cloud encoding and decoding, and in particular relates to a method and apparatus for predictive encoding and decoding of point cloud depth information.
BACKGROUND
With the improvement of hardware processing capability and the rapid development of computer vision, 3D point cloud data has been widely used in fields such as virtual reality, augmented reality, autonomous driving, and environment modeling. However, large-scale point clouds usually contain a large amount of data, which is very unfavorable for the transmission and storage of point cloud data; large-scale point clouds therefore need to be encoded and decoded efficiently.
In existing point cloud encoding and decoding technologies, the cylindrical coordinate information of the point cloud (including depth information, azimuth angle information, etc.) is usually used to assist the encoding of the Cartesian coordinate information, so the prior art needs to perform predictive encoding on each cylindrical coordinate component of the point cloud. Specifically, for the predictive encoding of point cloud depth information, in the point cloud encoding and decoding method based on two-dimensional regularized plane projection provided by prior art 1, first, it can be chosen whether to fill empty pixels of the depth information map obtained by the two-dimensional plane projection with depth information; second, the depth information of the current pixel is predicted based on occupancy information and the depth information of encoded pixels; then, it can be chosen whether to quantize the prediction residual of the depth information; finally, the prediction residual of the depth information is encoded to generate a compressed bitstream of the depth information map. In the prediction-tree-based point cloud encoding and decoding method provided by prior art 2, for each node in the prediction tree, the best prediction mode needs to be selected from a fixed prediction mode list to obtain the predicted value and prediction residual of the depth information of the current node; finally, the selected prediction mode and the prediction residual of the depth information are encoded, so that the depth information of the point cloud can be reconstructed in the same way at the decoder. In the single-chain-structure-based point cloud encoding and decoding method provided by prior art 3, when predictively encoding the depth information of the point cloud, an adaptively updated prediction mode list is first established and updated according to certain rules; specifically, each time an existing value in the prediction list is removed according to the predicted value corresponding to the prediction mode selected for the previous encoded point of the current point to be encoded, and the depth information of that previous encoded point is inserted at the first position of the prediction mode list. The best prediction mode is then selected at the encoder to obtain the prediction residual of the depth information of the current point; finally, the selected prediction mode and the prediction residual of the depth information are encoded, so that the depth information can be predicted and reconstructed in the same way at the decoder.
However, since the depth information of a point cloud is closely related to the scene, and the above methods only use the depth information of neighboring points for prediction without considering the discontinuity of the actual scene, the resulting prediction residuals of the depth information are relatively large, the prediction accuracy is low, and there are many outliers and jump values, which affects the coding efficiency.
SUMMARY
In order to solve the above problems in the prior art, the present invention provides a method and apparatus for predictive encoding and decoding of point cloud depth information. The technical problem to be solved by the present invention is achieved through the following technical solutions:
A predictive encoding method for point cloud depth information, including:
acquiring original point cloud data;
establishing an adaptive prediction list for depth information of the point cloud;
performing predictive encoding on the depth information of the point cloud according to the adaptive prediction list to obtain code stream information.
In an embodiment of the present invention, establishing an adaptive prediction list for depth information of the point cloud includes:
establishing several depth information candidate lists for the current point to be encoded;
selecting certain values from the candidate lists to fill a prediction list to obtain the adaptive prediction list.
In an embodiment of the present invention, establishing several depth information candidate lists for the current point to be encoded includes:
establishing at least one of a first candidate list, a second candidate list, a third candidate list, or a fourth candidate list; wherein
the first candidate list stores depth information values of encoded points collected by the same Laser as the current point to be encoded;
the second candidate list stores depth information values of encoded points collected by Lasers different from that of the current point to be encoded;
the third candidate list stores depth information values of historically encoded points;
the fourth candidate list stores prior depth information values.
In an embodiment of the present invention, establishing the first candidate list includes:
initializing the size of the first candidate list;
if the current first candidate list is not full, inserting the depth information value of the currently encoded point at the end of the first candidate list; otherwise, updating the first candidate list according to the first-in-first-out principle.
In an embodiment of the present invention, establishing the second candidate list includes:
initializing the size of the second candidate list;
if the current second candidate list is not full and the points collected by the first q Lasers before the current Laser have been encoded, inserting the depth information of encoded points within a certain range directly above the current point to be encoded into the second candidate list.
In an embodiment of the present invention, establishing the third candidate list includes:
initializing the size of the third candidate list;
if the current third candidate list is not full, inserting the depth information value of the currently encoded point at the end of the third candidate list; otherwise, selectively removing a value from the third candidate list according to a preset threshold and inserting the depth information value of the currently encoded point at the end of the list.
In an embodiment of the present invention, selecting certain values from the candidate lists to fill the prediction list to obtain the adaptive prediction list includes:
initializing the size of the prediction list;
selecting a certain number of values from the several candidate lists in a certain order and filling them into the prediction list;
adjusting the order of the values in the prediction list to obtain the adaptive prediction list.
In an embodiment of the present invention, selecting a certain number of values from the several candidate lists in a certain order and filling them into the prediction list includes:
inserting one depth information value from the first candidate list into the prediction list, and selectively inserting other depth information values from the first candidate list into the prediction list according to a first threshold;
when it is determined that the current prediction list is not full, selectively inserting depth information values from the second candidate list into the prediction list according to a second threshold;
when it is determined that the current prediction list is not full, selectively inserting depth information values from the third candidate list into the prediction list according to a third threshold;
when it is determined that the current prediction list is not full, inserting the values from the fourth candidate list in order into the remaining positions of the prediction list.
Another embodiment of the present invention provides a predictive encoding apparatus for point cloud depth information, including:
a data acquisition module, configured to acquire original point cloud data;
a first calculation module, configured to establish an adaptive prediction list for point cloud depth information;
a predictive encoding module, configured to perform predictive encoding on the depth information of the point cloud according to the adaptive prediction list to obtain code stream information.
Another embodiment of the present invention further provides a predictive decoding method for point cloud depth information, including:
obtaining code stream information and decoding it;
establishing an adaptive prediction list for point cloud depth information;
reconstructing the depth information of the point cloud according to the adaptive prediction list and the prediction mode and prediction residual obtained by decoding, to obtain reconstructed point cloud depth information.
Another embodiment of the present invention further provides a predictive decoding apparatus for point cloud depth information, including:
a decoding module, configured to obtain code stream information and decode it;
a second calculation module, configured to establish an adaptive prediction list for point cloud depth information;
a reconstruction module, configured to reconstruct the depth information of the point cloud according to the adaptive prediction list and the prediction mode and prediction residual obtained by decoding, to obtain reconstructed point cloud depth information.
Beneficial effects of the present invention:
By establishing an adaptively updated depth information prediction list and selecting the best predicted value from the list to predict the depth information of the point cloud, the present invention solves the problem of discontinuous point cloud depth information caused by discontinuities in the actual scene, thereby significantly reducing the prediction residuals of the depth information and the occurrence frequency of outliers and jump values, and improving prediction accuracy and coding efficiency.
The present invention will be further described in detail below with reference to the accompanying drawings and embodiments.
BRIEF DESCRIPTION OF THE DRAWINGS
Fig. 1 is a schematic flowchart of a predictive encoding method for point cloud depth information provided by an embodiment of the present invention;
Fig. 2 is a schematic diagram of the update process of the first candidate list List0 provided by an embodiment of the present invention;
Fig. 3 is a schematic diagram of the update process of the second candidate list List1 provided by an embodiment of the present invention;
Fig. 4 is a schematic diagram of the update process of the third candidate list List2 provided by an embodiment of the present invention;
Fig. 5 is a schematic structural diagram of a predictive encoding apparatus for point cloud depth information provided by an embodiment of the present invention;
Fig. 6 is a schematic flowchart of a predictive decoding method for point cloud depth information provided by an embodiment of the present invention;
Fig. 7 is a schematic structural diagram of a predictive decoding apparatus for point cloud depth information provided by an embodiment of the present invention.
DETAILED DESCRIPTION
The present invention is described in further detail below with reference to specific embodiments, but the embodiments of the present invention are not limited thereto.
Embodiment 1
Referring to Fig. 1, a schematic flowchart of a predictive encoding method for point cloud depth information provided by an embodiment of the present invention, the method specifically includes:
Step 1: Acquire original point cloud data.
Specifically, the original point cloud data usually consists of a set of points in three-dimensional space, and each point records its own geometric position information as well as additional attribute information such as color, reflectance, and normals. The geometric position information of the point cloud is generally represented in the Cartesian coordinate system, i.e., by the x, y, z coordinates of each point. The original point cloud data can be acquired by LiDAR scanning; a LiDAR consists of multiple Lasers (laser scanners) arranged along both sides of a central axis, each Laser having a fixed pitch angle and being regarded as a relatively independent acquisition system. In addition, the original point cloud data can also be obtained from public datasets provided by various platforms.
In this embodiment, it is assumed that the geometric position information of the acquired original point cloud data is represented in the Cartesian coordinate system. It should be noted that the representation of the geometric position information of the original point cloud data is not limited to Cartesian coordinates.
Step 2: Establish an adaptive prediction list for the depth information of the point cloud, which specifically includes:
In this embodiment, the depth information of the point cloud is mainly obtained by the following formula:
r = √(x² + y²)
where r represents the depth information of each point in the original point cloud data, and x and y are the Cartesian coordinate components of each point.
In addition, it should be noted that the depth information of the point cloud can also be calculated by the formula
r = √(x² + y² + z²)
or in other ways, where (x, y, z) are the Cartesian coordinates of a point.
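As a purely illustrative, non-limiting sketch of the formulas above, the depth r of every point can be computed directly from its Cartesian components; the use of NumPy and the function name compute_depth are assumptions introduced only for this example and are not part of the claimed method.

```python
import numpy as np

def compute_depth(points_xyz: np.ndarray, use_z: bool = False) -> np.ndarray:
    """Compute the depth (radial distance) r of each point.

    points_xyz: (N, 3) array of Cartesian coordinates (x, y, z).
    use_z=False computes r = sqrt(x^2 + y^2); use_z=True computes
    r = sqrt(x^2 + y^2 + z^2), matching the two formulas above.
    """
    x, y, z = points_xyz[:, 0], points_xyz[:, 1], points_xyz[:, 2]
    if use_z:
        return np.sqrt(x * x + y * y + z * z)
    return np.sqrt(x * x + y * y)

# Example usage with three points.
pts = np.array([[3.0, 4.0, 1.0], [6.0, 8.0, 2.0], [0.0, 5.0, 5.0]])
print(compute_depth(pts))              # [ 5. 10.  5.]
print(compute_depth(pts, use_z=True))  # depths including the z component
```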
From the acquisition principle of LiDAR, the depth information of neighboring points in a scene collected by the LiDAR usually has similarity; however, due to scene discontinuities, the depth information of non-neighboring points collected by the LiDAR may also have a certain similarity. Therefore, the depth information of the points encoded before the current point to be encoded needs to be stored into a series of candidate lists according to certain rules, so that appropriate values can then be selected from the candidate lists according to certain rules to predict the depth information of the current point to be encoded.
Specifically, step 2 includes:
21) Establish several depth information candidate lists for the current point to be encoded.
In this embodiment, the depth information candidate lists may include at least one of a first candidate list List0, a second candidate list List1, a third candidate list List2, or a fourth candidate list List3; wherein
the first candidate list stores depth information values of encoded points collected by the same Laser as the current point to be encoded;
the second candidate list stores depth information values of encoded points collected by Lasers different from that of the current point to be encoded;
the third candidate list stores depth information values of historically encoded points;
the fourth candidate list stores prior depth information values.
In addition, other candidate lists may be designed according to actual needs.
The establishment of each of the above four candidate lists is described in detail below.
a) Establishing the first candidate list List0
First, initialize the size of the first candidate list List0;
Specifically, in this embodiment, the size of the first candidate list List0 may be initialized to 4.
Then, the first candidate list List0 is updated according to the depth information values of encoded points collected by the same Laser. The specific update process is: determine whether the current first candidate list List0 is full; if not, insert the depth information value of the currently encoded point at the end of List0; otherwise, update the first candidate list according to the first-in-first-out principle, that is, remove the first value in List0 from the list and insert the depth information of the currently encoded point at the end of List0.
Specifically, taking a first candidate list List0 of size 4 as an example, each time the depth information of a point has been encoded, the size of List0 needs to be checked: when the list size is less than 4, i.e., the list is not full, the depth information value of the currently encoded point is inserted directly at the end of the list; when the list size is equal to 4, i.e., the list is full, the first value in the list is removed from the list and the depth information value of the currently encoded point is inserted at the end of the list. This is illustrated in Fig. 2, a schematic diagram of the update process of the first candidate list List0 provided by this embodiment of the present invention; the symbols in Fig. 2 denote, respectively, the depth information value of the current point to be encoded, the depth information value of the currently encoded point to be inserted (◎), the depth information values of encoded points collected by the same Laser as the current point, the depth information values already present in List0 (○), and the depth information value to be removed from the list.
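A minimal sketch of the first-in-first-out update of List0 described above is given below; the class and method names are assumptions introduced only for illustration, and a plain Python deque is used as a stand-in for the candidate list.

```python
from collections import deque

class List0:
    """First candidate list: depths of encoded points from the same Laser,
    updated first-in-first-out once the list is full."""

    def __init__(self, size: int = 4):
        # A deque with maxlen drops the oldest entry automatically when full,
        # which matches the FIFO rule described for List0.
        self.values = deque(maxlen=size)

    def update(self, encoded_depth: float) -> None:
        self.values.append(encoded_depth)

lst0 = List0(size=4)
for r in [100, 105, 103, 110, 250]:    # depths of successively encoded points
    lst0.update(r)
print(list(lst0.values))               # [105, 103, 110, 250]; 100 was evicted
```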
b) Establishing the second candidate list List1
First, initialize the size of the second candidate list;
Specifically, in this embodiment, the size of the second candidate list List1 may be initialized to 2.
Then, the second candidate list List1 is updated according to the depth information values of encoded points collected by different Lasers. The specific update process is: if the current second candidate list List1 is not full and the points collected by the first q Lasers before the current Laser have been encoded, the depth information of encoded points within a certain range directly above the current point to be encoded is inserted into the second candidate list List1.
Specifically, taking a second candidate list List1 of size 2 as an example, each time the depth information of a point has been encoded, it is first determined whether the points of the first q Lasers before the Laser of the current point to be encoded have been encoded; here q may be a fixed value, or a value that changes adaptively with the index of the current Laser, and q equal to 1 is taken as an example. Next, the depth information of the point directly above the current point to be encoded is inserted into List1; if that point does not exist, its depth information value is filled in, the filled value being the depth information value of a point within a certain range to the upper left or upper right of the current point to be encoded, or a linear combination of such values. Finally, the depth information value of the encoded point in the Laser above the current point, at a distance D from the point directly above it, is selected and inserted into List1, where D may be positive or negative, and may be fixed or change adaptively with the position of the current point to be encoded; a positive D indicates the upper right of the current point, and a negative D indicates its upper left. This is illustrated in Fig. 3, a schematic diagram of the update process of the second candidate list List1 provided by this embodiment of the present invention; the symbols in Fig. 3 denote the depth information value of the current point to be encoded and the depth information value of the currently encoded point to be inserted (◎), respectively.
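The List1 update can be sketched as follows, assuming purely for illustration that the encoded depths are indexed by (laser_index, column_index) in a dictionary, that q = 1 and D = 1 as in the example above, and that the fallback fill simply averages the available neighbours in a small window; the function name, the dictionary layout and the exact neighbourhood handling are assumptions, not the normative definition given in this embodiment.

```python
def update_list1(encoded, laser, col, q=1, d_offset=1, window=2, size=2):
    """Build the second candidate list for the point at (laser, col).

    encoded: dict mapping (laser_index, column_index) -> encoded depth value.
    Returns a list of at most `size` candidate depths taken from the Laser
    above the current point.
    """
    list1 = []
    prev_laser = laser - q
    if prev_laser < 0:
        return list1  # the previous Laser has not been encoded yet

    # 1) depth of the point directly above the current point
    above = encoded.get((prev_laser, col))
    if above is None:
        # fallback: fill from nearby encoded points to the upper left/right
        neigh = [encoded[(prev_laser, c)]
                 for c in range(col - window, col + window + 1)
                 if (prev_laser, c) in encoded]
        above = sum(neigh) / len(neigh) if neigh else None
    if above is not None and len(list1) < size:
        list1.append(above)

    # 2) depth of the encoded point at signed offset D from the point above
    shifted = encoded.get((prev_laser, col + d_offset))
    if shifted is not None and len(list1) < size:
        list1.append(shifted)
    return list1

encoded = {(0, 9): 98.0, (0, 10): 100.0, (0, 11): 101.5}
print(update_list1(encoded, laser=1, col=10))   # [100.0, 101.5]
```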
c) Establishing the third candidate list List2
First, initialize the size of the third candidate list List2;
Specifically, in this embodiment, the size of the third candidate list List2 may be initialized to 5.
Then, the third candidate list List2 is updated according to the depth information values of historically encoded points. The specific update process is: determine whether the current third candidate list List2 is full; if not, insert the depth information value of the currently encoded point at the end of List2; otherwise, selectively remove a value from the third candidate list according to a preset threshold and insert the depth information value of the currently encoded point at the end of List2.
Specifically, taking a third candidate list List2 of size 5 as an example, each time the depth information of a point has been encoded, the size of List2 needs to be checked: when the list size is less than 5, i.e., the list is not full, the depth information value of the currently encoded point is inserted directly at the end of List2; when the list size is equal to 5, i.e., the list is full, a preset threshold th0 is used to decide whether the depth information value of the currently encoded point should be inserted into List2. Specifically, a redundancy check is performed first: when the ratio of the depth information value p0 of the currently encoded point to be inserted to some depth information value hi in the list (the larger of p0 and hi divided by the smaller) is less than th0, the first hi satisfying this condition is removed from the list and p0 is inserted at the end of the list. This is illustrated in Fig. 4, a schematic diagram of the update process of the third candidate list List2 provided by this embodiment of the present invention; the symbols in Fig. 4 denote, respectively, the depth information value of the current point to be encoded, the depth information values of encoded points collected by the same Laser as the current point, the depth information values already present in List2 (○), the depth information value of the currently encoded point to be inserted (◎), and the depth information value to be removed from the list.
In addition, it should be noted that the life cycle of List2 is the number of points scanned by one Laser; that is, after the depth information of all points scanned by one Laser has been encoded, List2 is cleared in order to encode the depth information of the points scanned by the next Laser.
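Taking List2 of size 5 and an illustrative threshold th0 of 1.05 (a value assumed here purely for the example), the history-list update with the redundancy check can be sketched as follows; the function name and the positive-depth assumption are illustrative only.

```python
def update_list2(list2, p0, size=5, th0=1.05):
    """Insert the depth p0 of the just-encoded point into the history list List2.

    If the list is full, a redundancy check is run: the first stored value h_i
    whose ratio to p0 (larger over smaller) is below th0 is removed, and p0 is
    appended at the end. If no stored value is redundant, p0 is not inserted.
    Depths are assumed to be strictly positive.
    """
    if len(list2) < size:
        list2.append(p0)
        return list2
    for i, h in enumerate(list2):
        ratio = max(p0, h) / min(p0, h)
        if ratio < th0:
            list2.pop(i)       # remove the first redundant value
            list2.append(p0)   # and append the new depth at the end
            break
    return list2

list2 = [50.0, 120.0, 300.0, 55.0, 500.0]
update_list2(list2, p0=52.0)    # 50.0 is within 5% of 52.0, so it is replaced
print(list2)                    # [120.0, 300.0, 55.0, 500.0, 52.0]
```

As described above, such a list would also be cleared once all points scanned by one Laser have been encoded.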
d) Establishing the fourth candidate list List3
First, initialize the size of the fourth candidate list List3;
Specifically, in this embodiment, the size of the fourth candidate list List3 may be initialized to 4.
Next, the depth information range is divided into a number of intervals equal to the size of the fourth candidate list List3;
Specifically, considering that the depth information generally lies in the range [0, 2^32 - 1], i.e., the depth information is represented with at most 32 bits, and that most current depth information coding uses significant-bit-count coding, the depth information range may be divided by bit count, before predictive encoding, into a number of intervals equal to the size of the fourth candidate list List3. For example, the depth information may be divided by bit count into 4 intervals: [0, 2^8 - 1], [2^8, 2^16 - 1], [2^16, 2^24 - 1], [2^24, 2^32 - 1].
Finally, one value is selected from each interval to fill the fourth candidate list List3.
Specifically, if the depth information is divided by bit count into the following 4 intervals: [0, 2^8 - 1], [2^8, 2^16 - 1], [2^16, 2^24 - 1], [2^24, 2^32 - 1], then one value may be selected from each of these 4 intervals, for example 128, 32640, 65536, and 16777216 respectively, to fill the fourth candidate list List3.
At this point, the establishment of the required depth information candidate lists is complete.
22) Select certain values from the candidate lists to fill the prediction list to obtain the adaptive prediction list.
First, initialize the size of the prediction list;
Specifically, in this embodiment, the size of the prediction list may be initialized to 4. It should be noted that the size of the prediction list is the same for all points to be encoded.
Then, a certain number of values are selected from the several candidate lists in a certain order and filled into the prediction list.
For example, this embodiment establishes four candidate lists, and the corresponding values may be selected in order from the first candidate list to the fourth candidate list to fill the prediction list, as follows:
one depth information value (preferably the first) from the first candidate list is inserted into the prediction list, and other depth information values from the first candidate list are selectively inserted into the prediction list according to a first threshold;
when it is determined that the current prediction list is not full, depth information values from the second candidate list are selectively inserted into the prediction list according to a second threshold;
when it is determined that the current prediction list is not full, depth information values from the third candidate list are selectively inserted into the prediction list according to a third threshold;
when it is determined that the current prediction list is not full, the values from the fourth candidate list are inserted in order into the remaining positions of the prediction list.
The selection process for each candidate list is described in detail below.
For the first candidate list List0:
Assume that at most m values from the first candidate list List0 are selected to fill the prediction list, where m is not fixed; taking m equal to 3 as an example, the specific method is:
a. The first depth information value in List0 may be inserted into the prediction list;
b. Determine whether the ratio of each other depth information value in List0 to the depth information values already inserted into the prediction list is greater than the first threshold T0; if so, the corresponding depth information value in List0 is inserted into the prediction list, otherwise it is not inserted.
c. If the number m0 of List0 values inserted into the prediction list is less than m, the m - m0 depth information values in List0 that differ most from the depth information already inserted into the prediction list are inserted into the prediction list.
For the second candidate list List1:
When it is determined that the current prediction list is not full, depth information values from the second candidate list are selectively inserted into the prediction list according to the second threshold;
Assume that at most n values from the second candidate list List1 are selected to fill the prediction list, where n may be fixed or may change adaptively with the size of the prediction list; taking n equal to 1 as an example, the specific method is:
determine whether the ratio of a depth information value in List1 to the depth information values in the current prediction list is greater than the second threshold T1; if so, that depth information value in List1 is inserted into the prediction list, otherwise it is not inserted.
For the third candidate list List2:
When it is determined that the current prediction list is not full, depth information values from the third candidate list are selectively inserted into the prediction list according to the third threshold;
Assume that at most k values from the third candidate list List2 are selected to fill the prediction list, where k may be fixed or may change adaptively with the size of the prediction list; taking k equal to 1 as an example, the specific method is:
determine whether the ratio of a depth information value in List2 to the depth information values in the current prediction list is greater than the third threshold T2; if so, that depth information value in List2 is inserted into the prediction list, otherwise it is not inserted.
For the fourth candidate list List3:
When it is determined that the current prediction list is not full, the values from the fourth candidate list are inserted in order into the remaining positions of the prediction list.
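Putting the four selection steps together, a simplified sketch of filling a prediction list of size 4 from List0 to List3 with the thresholds T0, T1 and T2 might look as follows (m = 3 and n = k = 1 as in the examples above); the ratio test against every value already in the list, the threshold values and all function names are assumptions introduced only for this illustration, and step c of the List0 selection is omitted for brevity.

```python
def ratio(a, b):
    return max(a, b) / min(a, b)

def build_prediction_list(list0, list1, list2, list3,
                          t0=1.1, t1=1.1, t2=1.1, pred_size=4, m=3):
    pred = []

    # From List0: always take the first value, then further values (up to m in
    # total) whose ratio to everything already in the prediction list exceeds T0.
    if list0:
        pred.append(list0[0])
        for v in list0[1:]:
            if len(pred) >= min(m, pred_size):
                break
            if all(ratio(v, p) > t0 for p in pred):
                pred.append(v)

    # From List1 (at most one value), gated by T1.
    for v in list1:
        if len(pred) >= pred_size:
            break
        if all(ratio(v, p) > t1 for p in pred):
            pred.append(v)
            break

    # From List2 (at most one value), gated by T2.
    for v in list2:
        if len(pred) >= pred_size:
            break
        if all(ratio(v, p) > t2 for p in pred):
            pred.append(v)
            break

    # Pad the remaining positions with the prior values from List3.
    for v in list3:
        if len(pred) >= pred_size:
            break
        pred.append(v)
    return pred

pred = build_prediction_list([100, 101, 240], [500], [52, 900],
                             [128, 32640, 65536, 16777216])
print(pred)   # [100, 240, 500, 52]
```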
Finally, the order of the values in the prediction list is adjusted to obtain the adaptive prediction list.
Considering the scanning characteristics of LiDAR, for example when the scanned object is a person or a tree, the depth information of points scanned by a different Laser may in some cases be closer to that of the current point than the depth information of adjacent points scanned by the same Laser. Therefore, the order of the values in the prediction list may be appropriately adjusted according to certain rules to adapt to changes in the scene.
In this embodiment, the order may be adjusted according to the difference between depth information values in the prediction list.
For example, the difference between the depth information value of the second candidate list List1 that has been filled into the prediction list and the one that has not may be computed; when this difference is greater than a certain threshold, the filled-in depth information value is swapped with the value at the front of the prediction list.
It should be noted that, in this embodiment, the first threshold T0, the second threshold T1, and the third threshold T2 are not fixed and may all be updated as encoding proceeds.
Specifically, the above thresholds may be updated according to the depth information of the previous encoded point and the predicted value of the depth information selected for it; for example, a threshold may be updated to the ratio of the depth information of the previous encoded point to its predicted value.
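One possible way to express the reordering rule and the threshold update just described is sketched below; the swap condition, the example threshold value and the function names are assumptions for illustration only, not the only realisation of this embodiment.

```python
def reorder_prediction_list(pred, list1_inserted, list1_left_out, swap_thresh=200.0):
    """Swap the List1 value that was inserted into the prediction list to the
    front when it differs strongly from the List1 value that was left out."""
    if list1_inserted in pred and list1_left_out is not None:
        if abs(list1_inserted - list1_left_out) > swap_thresh:
            i = pred.index(list1_inserted)
            pred[0], pred[i] = pred[i], pred[0]
    return pred

def updated_threshold(prev_depth, prev_pred):
    """Update a selection threshold as the ratio of the previous encoded
    point's depth to its selected predicted value (larger over smaller)."""
    return max(prev_depth, prev_pred) / min(prev_depth, prev_pred)

pred = [100, 240, 500, 52]
print(reorder_prediction_list(pred, list1_inserted=500, list1_left_out=980.0))
# -> [500, 240, 100, 52]: the List1 value is moved to the front
print(updated_threshold(prev_depth=102.0, prev_pred=100.0))   # 1.02
```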
Step 3: Perform predictive encoding on the depth information of the point cloud according to the adaptive prediction list to obtain code stream information.
Specifically, after the corresponding prediction list has been established for each point to be encoded, existing rate-distortion optimization techniques can be used to select the prediction mode with the smallest cost from the prediction list to predict the depth information of the current point to be encoded and obtain the prediction residual of the depth information; the selected prediction mode and the prediction residual of the depth information are then encoded with existing entropy coding techniques to obtain the code stream information. At the decoder, it is only necessary to build the adaptive prediction list of the point cloud synchronously and to parse the prediction mode and the prediction residual of the depth information from the code stream in order to reconstruct the depth information of the point cloud.
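For illustration only, the mode selection at the encoder can be approximated by picking the prediction-list entry that minimises the magnitude of the residual (a stand-in for a full rate-distortion cost) and signalling its index together with the residual; the entropy coder is omitted and the function name is an assumption.

```python
def choose_mode_and_residual(pred_list, depth):
    """Return (mode, residual): the index of the predictor with the smallest
    absolute residual and the residual itself. A real encoder would evaluate a
    rate-distortion cost and entropy-code both the mode and the residual."""
    mode = min(range(len(pred_list)), key=lambda i: abs(depth - pred_list[i]))
    return mode, depth - pred_list[mode]

pred_list = [500, 240, 100, 52]
mode, residual = choose_mode_and_residual(pred_list, depth=104)
print(mode, residual)   # 2 4 -> predictor 100 is selected, residual is +4
```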
By establishing adaptively updated depth information candidate lists, constructing a prediction list from the candidate lists, and selecting the best predicted value to predict the depth information of the point cloud, this embodiment solves the problem of discontinuous point cloud depth information caused by discontinuities in the actual scene, thereby significantly reducing the prediction residuals of the depth information and the occurrence frequency of outliers and jump values, and improving prediction accuracy and coding efficiency.
Embodiment 2
On the basis of Embodiment 1 above, this embodiment provides a predictive encoding apparatus for point cloud depth information. Referring to Fig. 5, a schematic structural diagram of a predictive encoding apparatus for point cloud depth information provided by an embodiment of the present invention, the apparatus includes:
a data acquisition module 11, configured to acquire original point cloud data;
a first calculation module 12, configured to establish an adaptive prediction list for point cloud depth information;
a predictive encoding module 13, configured to perform predictive encoding on the depth information of the point cloud according to the adaptive prediction list to obtain code stream information.
The apparatus provided in this embodiment can implement the encoding method provided in Embodiment 1 above, and the detailed process is not repeated here.
Embodiment 3
This embodiment provides a predictive decoding method for point cloud depth information. Referring to Fig. 6, a schematic flowchart of a predictive decoding method for point cloud depth information provided by an embodiment of the present invention, the method specifically includes:
Step 1: Obtain code stream information and decode it.
Specifically, the decoded data includes the prediction mode and the prediction residual of the point cloud depth information.
Step 2: Establish an adaptive prediction list for point cloud depth information.
In this embodiment, the establishment of the adaptive prediction list for point cloud depth information may refer to the encoder-side method of Embodiment 1 above and is not described in detail here.
Step 3: Reconstruct the depth information of the point cloud according to the adaptive prediction list and the prediction mode and prediction residual obtained by decoding, to obtain the reconstructed point cloud depth information.
First, according to the prediction mode obtained by decoding, the corresponding value is selected from the adaptive prediction list as the predicted value of the point cloud depth information;
then, the obtained predicted value is added to the prediction residual obtained by decoding to obtain the reconstructed point cloud depth information.
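At the decoder, the reconstruction of one point then reduces, for illustration, to indexing the synchronously built prediction list with the parsed prediction mode and adding the parsed residual; the function name is an assumption.

```python
def reconstruct_depth(pred_list, mode, residual):
    """Reconstruct the depth of one point from the decoded prediction mode
    (an index into the prediction list) and the decoded residual."""
    return pred_list[mode] + residual

# The prediction list is built at the decoder exactly as at the encoder.
pred_list = [500, 240, 100, 52]
print(reconstruct_depth(pred_list, mode=2, residual=4))   # 104
```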
Embodiment 4
On the basis of Embodiment 3 above, this embodiment provides a predictive decoding apparatus for point cloud depth information. Referring to Fig. 7, a schematic structural diagram of a predictive decoding apparatus for point cloud depth information provided by an embodiment of the present invention, the apparatus includes:
a decoding module 21, configured to obtain code stream information and decode it;
a second calculation module 22, configured to establish an adaptive prediction list for point cloud depth information;
a reconstruction module 23, configured to reconstruct the depth information of the point cloud according to the adaptive prediction list and the prediction mode and prediction residual obtained by decoding, to obtain the reconstructed point cloud depth information.
The apparatus provided in this embodiment can implement the decoding method provided in Embodiment 3 above, and the detailed process is not repeated here.
The above is a further detailed description of the present invention in combination with specific preferred embodiments, and the specific implementation of the present invention shall not be considered to be limited to these descriptions. For those of ordinary skill in the art to which the present invention belongs, several simple deductions or substitutions may be made without departing from the concept of the present invention, all of which shall be deemed to fall within the protection scope of the present invention.

Claims (11)

  1. A predictive encoding method for point cloud depth information, characterized by comprising:
    acquiring original point cloud data;
    establishing an adaptive prediction list for point cloud depth information;
    performing predictive encoding on depth information of the point cloud according to the adaptive prediction list to obtain code stream information.
  2. The predictive encoding method for point cloud depth information according to claim 1, characterized in that establishing an adaptive prediction list for point cloud depth information comprises:
    establishing several depth information candidate lists for a current point to be encoded;
    selecting certain values from the candidate lists to fill a prediction list to obtain the adaptive prediction list.
  3. The predictive encoding method for point cloud depth information according to claim 2, characterized in that establishing several depth information candidate lists for the current point to be encoded comprises:
    establishing at least one of a first candidate list, a second candidate list, a third candidate list, or a fourth candidate list; wherein
    the first candidate list stores depth information values of encoded points collected by the same Laser as the current point to be encoded;
    the second candidate list stores depth information values of encoded points collected by Lasers different from that of the current point to be encoded;
    the third candidate list stores depth information values of historically encoded points;
    the fourth candidate list stores prior depth information values.
  4. The predictive encoding method for point cloud depth information according to claim 3, characterized in that establishing the first candidate list comprises:
    initializing a size of the first candidate list;
    if the current first candidate list is not full, inserting a depth information value of a currently encoded point at the end of the first candidate list; otherwise, updating the first candidate list according to a first-in-first-out principle.
  5. The predictive encoding method for point cloud depth information according to claim 3, characterized in that establishing the second candidate list comprises:
    initializing a size of the second candidate list;
    if the current second candidate list is not full and points collected by the first q Lasers before the current Laser have been encoded, inserting depth information of encoded points within a certain range directly above the current point to be encoded into the second candidate list.
  6. The predictive encoding method for point cloud depth information according to claim 3, characterized in that establishing the third candidate list comprises:
    initializing a size of the third candidate list;
    if the current third candidate list is not full, inserting the depth information value of the currently encoded point at the end of the third candidate list; otherwise, selectively removing a value from the third candidate list according to a preset threshold and inserting the depth information value of the currently encoded point at the end of the list.
  7. The predictive encoding method for point cloud depth information according to claim 3, characterized in that selecting certain values from the candidate lists to fill the prediction list to obtain the adaptive prediction list comprises:
    initializing a size of the prediction list;
    selecting a certain number of values from the several candidate lists in a certain order and filling them into the prediction list;
    adjusting an order of the values in the prediction list to obtain the adaptive prediction list.
  8. The predictive encoding method for point cloud depth information according to claim 7, characterized in that selecting a certain number of values from the several candidate lists in a certain order and filling them into the prediction list comprises:
    inserting one depth information value from the first candidate list into the prediction list, and selectively inserting other depth information values from the first candidate list into the prediction list according to a first threshold;
    when it is determined that the current prediction list is not full, selectively inserting depth information values from the second candidate list into the prediction list according to a second threshold;
    when it is determined that the current prediction list is not full, selectively inserting depth information values from the third candidate list into the prediction list according to a third threshold;
    when it is determined that the current prediction list is not full, inserting values from the fourth candidate list in order into remaining positions of the prediction list.
  9. A predictive encoding apparatus for point cloud depth information, characterized by comprising:
    a data acquisition module (11), configured to acquire original point cloud data;
    a first calculation module (12), configured to establish an adaptive prediction list for point cloud depth information;
    a predictive encoding module (13), configured to perform predictive encoding on depth information of the point cloud according to the adaptive prediction list to obtain code stream information.
  10. A predictive decoding method for point cloud depth information, characterized by comprising:
    obtaining code stream information and decoding it;
    establishing an adaptive prediction list for point cloud depth information;
    reconstructing depth information of the point cloud according to the adaptive prediction list and a prediction mode and a prediction residual obtained by decoding, to obtain reconstructed point cloud depth information.
  11. A predictive decoding apparatus for point cloud depth information, characterized by comprising:
    a decoding module (21), configured to obtain code stream information and decode it;
    a second calculation module (22), configured to establish an adaptive prediction list for point cloud depth information;
    a reconstruction module (23), configured to reconstruct depth information of the point cloud according to the adaptive prediction list and a prediction mode and a prediction residual obtained by decoding, to obtain reconstructed point cloud depth information.
PCT/CN2022/093614 2021-05-26 2022-05-18 Method and apparatus for predictive encoding and decoding of point cloud depth information WO2022247704A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
EP22810431.1A EP4236311A1 (en) 2021-05-26 2022-05-18 Predictive coding/decoding method and device for point cloud depth information
US18/039,633 US20240007669A1 (en) 2021-05-26 2022-05-18 Method and Apparatus for Predictively Coding and Decoding Depth Information of Point Cloud

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110580227.8 2021-05-26
CN202110580227.8A CN115412713A (zh) Method and apparatus for predictive encoding and decoding of point cloud depth information

Publications (1)

Publication Number Publication Date
WO2022247704A1 true WO2022247704A1 (zh) 2022-12-01

Family

ID=84156615

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/093614 WO2022247704A1 (zh) 2021-05-26 2022-05-18 一种点云深度信息的预测编解码方法及装置

Country Status (4)

Country Link
US (1) US20240007669A1 (zh)
EP (1) EP4236311A1 (zh)
CN (1) CN115412713A (zh)
WO (1) WO2022247704A1 (zh)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180053324A1 (en) * 2016-08-19 2018-02-22 Mitsubishi Electric Research Laboratories, Inc. Method for Predictive Coding of Point Cloud Geometries
CN110996098A (zh) * 2018-10-02 2020-04-10 腾讯美国有限责任公司 处理点云数据的方法和装置
TW202037169A (zh) * 2019-03-15 2020-10-01 聯發科技股份有限公司 基於視訊的點雲壓縮的區塊分段的方法及裝置
CN112218079A (zh) * 2020-08-24 2021-01-12 北京大学深圳研究生院 一种基于空间顺序的点云分层方法、点云预测方法及设备
WO2021084292A1 (en) * 2019-10-31 2021-05-06 Blackberry Limited Method and system for azimuthal angular prior and tree representation for cloud compression
US20220108484A1 (en) * 2020-10-07 2022-04-07 Qualcomm Incorporated Predictive geometry coding in g-pcc

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112673398A (zh) * 2018-07-11 2021-04-16 交互数字Vc控股公司 一种用于编码/解码点云的几何结构的方法和设备
CN109657559B (zh) * 2018-11-23 2023-02-07 盎锐(上海)信息科技有限公司 点云深度感知编码引擎装置
WO2020175708A1 (ja) * 2019-02-28 2020-09-03 パナソニック インテレクチュアル プロパティ コーポレーション オブ アメリカ 三次元データ符号化方法、三次元データ復号方法、三次元データ符号化装置、及び三次元データ復号装置
KR20230152815A (ko) * 2019-03-21 2023-11-03 엘지전자 주식회사 포인트 클라우드 데이터 부호화 장치, 포인트 클라우드 데이터 부호화 방법, 포인트 클라우드 데이터 복호화 장치 및 포인트 클라우드 데이터 복호화 방법
US11581022B2 (en) * 2019-05-29 2023-02-14 Nokia Technologies Oy Method and apparatus for storage and signaling of compressed point clouds
WO2020246689A1 (ko) * 2019-06-05 2020-12-10 엘지전자 주식회사 포인트 클라우드 데이터 전송 장치, 포인트 클라우드 데이터 전송 방법, 포인트 클라우드 데이터 수신 장치 및 포인트 클라우드 데이터 수신 방법
CN112565734B (zh) * 2020-12-03 2022-04-19 西安电子科技大学 基于混合编码的点云属性编解码方法及装置

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180053324A1 (en) * 2016-08-19 2018-02-22 Mitsubishi Electric Research Laboratories, Inc. Method for Predictive Coding of Point Cloud Geometries
CN110996098A (zh) * 2018-10-02 2020-04-10 腾讯美国有限责任公司 处理点云数据的方法和装置
TW202037169A (zh) * 2019-03-15 2020-10-01 聯發科技股份有限公司 基於視訊的點雲壓縮的區塊分段的方法及裝置
WO2021084292A1 (en) * 2019-10-31 2021-05-06 Blackberry Limited Method and system for azimuthal angular prior and tree representation for cloud compression
CN112218079A (zh) * 2020-08-24 2021-01-12 北京大学深圳研究生院 一种基于空间顺序的点云分层方法、点云预测方法及设备
US20220108484A1 (en) * 2020-10-07 2022-04-07 Qualcomm Incorporated Predictive geometry coding in g-pcc

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
AHN JAE-KYUN; LEE KYU-YUL; SIM JAE-YOUNG; KIM CHANG-SU: "Large-Scale 3D Point Cloud Compression Using Adaptive Radial Distance Prediction in Hybrid Coordinate Domains", IEEE JOURNAL OF SELECTED TOPICS IN SIGNAL PROCESSING, IEEE, US, vol. 9, no. 3, 1 April 2015 (2015-04-01), US , pages 422 - 434, XP011575879, ISSN: 1932-4553, DOI: 10.1109/JSTSP.2014.2370752 *

Also Published As

Publication number Publication date
CN115412713A (zh) 2022-11-29
EP4236311A1 (en) 2023-08-30
US20240007669A1 (en) 2024-01-04

Similar Documents

Publication Publication Date Title
US11252441B2 (en) Hierarchical point cloud compression
US11409998B2 (en) Trimming search space for nearest neighbor determinations in point cloud compression
CN111095929B (zh) 一种压缩针对点云的属性信息的系统、方法及计算机可读介质
US11276203B2 (en) Point cloud compression using fixed-point numbers
US11727603B2 (en) Adaptive distance based point cloud compression
US11508095B2 (en) Hierarchical point cloud compression with smoothing
US10897269B2 (en) Hierarchical point cloud compression
US11356116B2 (en) Methods and devices for on-the-fly coder mapping updates in point cloud coding
CN114598892B (zh) 点云数据编码方法、解码方法、装置、设备及存储介质
US20230048381A1 (en) Context determination for planar mode in octree-based point cloud coding
US20220376702A1 (en) Methods and devices for tree switching in point cloud compression
WO2022247704A1 (zh) 一种点云深度信息的预测编解码方法及装置
US20240013444A1 (en) Point cloud encoding/decoding method and apparatus based on two-dimensional regularized plane projection
WO2022247705A1 (zh) 一种点云属性信息的预测编解码方法及装置
WO2024031586A1 (en) Method for encoding and decoding a 3d point cloud, encoder, decoder
WO2024065406A1 (zh) 编解码方法、码流、编码器、解码器以及存储介质
EP4258213A1 (en) Methods and apparatus for entropy coding a presence flag for a point cloud and data stream including the presence flag
CN117581549A (zh) 帧内预测、编解码方法及装置、编解码器、设备、介质
CN117896536A (zh) 点云解码、编码方法、介质、电子设备及产品
CN116941242A (zh) 帧内预测方法及装置、编解码器、设备、存储介质
CN117615136A (zh) 点云解码方法、点云编码方法、解码器、电子设备以及介质
CN112017292A (zh) 网格译码方法和装置

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22810431

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2022810431

Country of ref document: EP

Effective date: 20230526

NENP Non-entry into the national phase

Ref country code: DE