WO2024065271A1 - Point cloud encoding/decoding method and apparatus, device, and storage medium


Info

Publication number
WO2024065271A1
Authority
WO
WIPO (PCT)
Application number
PCT/CN2022/122116
Other languages
English (en)
Chinese (zh)
Inventor
孙泽星
Original Assignee
Oppo广东移动通信有限公司
Application filed by Oppo广东移动通信有限公司
Priority application: PCT/CN2022/122116
Publication of WO2024065271A1

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/134: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
    • H04N19/136: Incoming video signal characteristics or properties

Definitions

  • the present application relates to the field of point cloud technology, and in particular to a point cloud encoding and decoding method, device, equipment and storage medium.
  • an acquisition device scans the surface of an object to form point cloud data, which may include hundreds of thousands of points or more.
  • the point cloud data is transmitted between the point cloud encoding device and the point cloud decoding device in the form of point cloud media files.
  • the point cloud encoding device needs to compress the point cloud data before transmission.
  • Point cloud compression is also called point cloud encoding.
  • different coding models are used to map the points in the point cloud to nodes and to encode those nodes, some of which contain duplicate points.
  • current encoding and decoding methods, such as the low-latency, low-complexity coding model (L3C2), need to encode and decode the duplicate-point information of every node, which reduces the encoding and decoding efficiency of the point cloud.
  • the embodiments of the present application provide a point cloud encoding and decoding method, apparatus, device and storage medium to reduce the complexity of encoding and decoding, save encoding and decoding time, and thereby improve the encoding and decoding efficiency of the point cloud.
  • an embodiment of the present application provides a point cloud decoding method, comprising:
  • the current node is decoded according to the number of the decoded repeated points and the total number of repeated points in the point cloud.
  • the present application provides a point cloud encoding method, comprising:
  • the current node is encoded according to the number of the encoded repeated points and the total number of repeated points in the point cloud.
  • the present application provides a point cloud decoding device for executing the method in the first aspect or its respective implementations.
  • the device includes a functional unit for executing the method in the first aspect or its respective implementations.
  • the present application provides a point cloud encoding device for executing the method in the second aspect or its respective implementations.
  • the device includes a functional unit for executing the method in the second aspect or its respective implementations.
  • a point cloud decoder comprising a processor and a memory.
  • the memory is used to store a computer program, and the processor is used to call and run the computer program stored in the memory to execute the method in the first aspect or its respective implementations.
  • a point cloud encoder comprising a processor and a memory.
  • the memory is used to store a computer program, and the processor is used to call and run the computer program stored in the memory to execute the method in the second aspect or its respective implementations.
  • a point cloud encoding and decoding system comprising a point cloud encoder and a point cloud decoder.
  • the point cloud decoder is used to execute the method in the first aspect or its respective implementations
  • the point cloud encoder is used to execute the method in the second aspect or its respective implementations.
  • a chip for implementing the method in any one of the first to second aspects or their respective implementations.
  • the chip includes: a processor for calling and running a computer program from a memory, so that a device equipped with the chip executes the method in any one of the first to second aspects or their respective implementations.
  • a computer-readable storage medium for storing a computer program, wherein the computer program enables a computer to execute the method of any one of the first to second aspects or any of their implementations.
  • a computer program product comprising computer program instructions, which enable a computer to execute the method in any one of the first to second aspects or their respective implementations.
  • a computer program which, when executed on a computer, enables the computer to execute the method in any one of the first to second aspects or in each of their implementations.
  • a code stream is provided, which is generated based on the method of the second aspect.
  • the code stream includes at least one of the first parameter and the second parameter.
  • the total number of repeated points included in the point cloud is first determined by the total number of points included in the point cloud and the total number of L3C2 nodes of the point cloud.
  • the number of repeated points that have been encoded or decoded is recorded in real time and compared with the total number of repeated points in the point cloud, in order to determine whether the repeated-point information of the current node needs to be encoded or decoded.
  • when the number of encoded/decoded repeated points equals the total number of repeated points in the point cloud, all repeated points have been processed and the remaining nodes contain no repeated points. There is then no need to encode or decode the repeated-point information of subsequent nodes, which reduces the encoding and decoding complexity of the point cloud, saves encoding and decoding time, and thus improves the encoding and decoding efficiency.
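The duplicate-point accounting described above can be sketched as follows. This is a minimal illustration, not the L3C2 bitstream syntax; the names (`total_repeated_points`, `num_repeats`, etc.) are assumptions for the sketch.

```python
def total_repeated_points(total_points: int, total_nodes: int) -> int:
    """Each node corresponds to one 'first' point; any extra points are repeats."""
    return total_points - total_nodes

def should_code_repeat_info(coded_repeats: int, total_repeats: int) -> bool:
    """Repeat-point info only needs coding while some repeats remain uncoded."""
    return coded_repeats < total_repeats

def decode_nodes(nodes, total_points):
    """Walk the nodes; stop parsing repeat info once all repeats are accounted for."""
    total_repeats = total_repeated_points(total_points, len(nodes))
    coded_repeats = 0
    out = []
    for node in nodes:
        repeats = 0
        if should_code_repeat_info(coded_repeats, total_repeats):
            repeats = node.get("num_repeats", 0)  # would be read from the bitstream
            coded_repeats += repeats
        out.append((node["id"], repeats))
    return out
```

With 3 nodes and 5 total points there are 2 repeats; once the first node accounts for both, the remaining nodes skip repeat-point parsing entirely.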
  • FIG1A is a schematic diagram of a point cloud
  • Figure 1B is a partial enlarged view of the point cloud
  • FIG2 is a schematic diagram of six viewing angles of a point cloud image
  • FIG3 is a schematic block diagram of a point cloud encoding and decoding system according to an embodiment of the present application.
  • FIG4A is a schematic block diagram of a point cloud encoder provided in an embodiment of the present application.
  • FIG4B is a schematic block diagram of a point cloud decoder provided in an embodiment of the present application.
  • FIG5A to FIG5C are schematic diagrams of geometric information encoding based on triangular facets
  • FIG6 is a schematic diagram of a decoding framework of L3C2
  • FIG7A is a schematic diagram of a single-chain structure
  • FIG7B is a schematic diagram of a single chain after the single chain structure shown in FIG7A is regularized
  • FIG8 is a schematic diagram of the scanning principle of a laser scanner
  • FIG9 is a schematic diagram of a prediction structure
  • FIG10 is a schematic diagram of a determination principle of a quantization factor
  • FIG11 is a schematic diagram of another principle for determining a quantization factor
  • FIG12 is a schematic diagram of a decoding framework of L3C2
  • FIG13 is a schematic diagram of a point cloud decoding method flow chart provided in an embodiment of the present application.
  • FIG14 is a schematic diagram of a point cloud encoding method flow chart provided by an embodiment of the present application.
  • FIG15 is a schematic block diagram of a point cloud decoding device provided in an embodiment of the present application.
  • FIG16 is a schematic block diagram of a point cloud encoding device provided in an embodiment of the present application.
  • FIG17 is a schematic block diagram of an electronic device provided in an embodiment of the present application.
  • Figure 18 is a schematic block diagram of the point cloud encoding and decoding system provided in an embodiment of the present application.
  • the present application can be applied to the field of point cloud technology, for example, to the field of point cloud compression technology.
  • Point cloud refers to a set of irregularly distributed discrete points in space that express the spatial structure and surface properties of a three-dimensional object or three-dimensional scene.
  • Figure 1A is a schematic diagram of a three-dimensional point cloud image
  • Figure 1B is a partial enlarged view of Figure 1A. It can be seen from Figures 1A and 1B that the point cloud surface is composed of densely distributed points.
  • A two-dimensional image has information at every pixel and a regular distribution, so its position information does not need to be recorded. In contrast, the points of a point cloud are distributed randomly and irregularly in three-dimensional space, so the position of each point must be recorded to fully express the point cloud. As with two-dimensional images, each position acquires corresponding attribute information during acquisition.
  • Point cloud data is a specific record form of point cloud.
  • Points in the point cloud may include the location information of the point and the attribute information of the point.
  • the location information of the point may be the three-dimensional coordinate information of the point.
  • the location information of the point may also be called the geometric information of the point.
  • the attribute information of the point may include color information, reflectance information, normal vector information, etc.
  • Color information reflects the color of an object, and reflectance information reflects the surface material of an object.
  • the color information may be information in any color space.
  • the color information may be (RGB).
  • the color information may be brightness and chromaticity information (YCbCr, YUV).
  • Y represents brightness (Luma)
  • Cb (U) represents blue color difference
  • Cr (V) represents red color difference
  • U and V represent chromaticity (Chroma) for describing color difference information.
  • the points in the point cloud may include the three-dimensional coordinate information of the point and the laser reflection intensity (reflectance) of the point.
  • the points in the point cloud may include the three-dimensional coordinate information of the point and the color information of the point.
  • a point cloud is obtained by combining the principles of laser measurement and photogrammetry.
  • the points in the point cloud may include the three-dimensional coordinate information of the point, the laser reflection intensity (reflectance) of the point, and the color information of the point.
  • FIG2 shows six viewing angles of a point cloud image.
  • Table 1 shows the point cloud data storage format composed of a file header information part and a data part:
  • the header information includes the data format, data representation type, the total number of point cloud points, and the content represented by the point cloud.
  • the point cloud in this example is in the ".ply" format, represented by ASCII code, with a total number of 207242 points, and each point has three-dimensional position information XYZ and three-dimensional color information RGB.
  • Point clouds can flexibly and conveniently express the spatial structure and surface properties of three-dimensional objects or scenes. Point clouds are obtained by directly sampling real objects, so they can provide a strong sense of reality while ensuring accuracy. Therefore, they are widely used, including virtual reality games, computer-aided design, geographic information systems, automatic navigation systems, digital cultural heritage, free viewpoint broadcasting, three-dimensional immersive remote presentation, and three-dimensional reconstruction of biological tissues and organs.
  • Point cloud data can be obtained by at least one of the following ways: (1) computer equipment generation. Computer equipment can generate point cloud data based on virtual three-dimensional objects and virtual three-dimensional scenes. (2) 3D (3-Dimension) laser scanning acquisition. 3D laser scanning can be used to obtain point cloud data of static real-world three-dimensional objects or three-dimensional scenes, and millions of point cloud data can be obtained per second; (3) 3D photogrammetry acquisition. The visual scene of the real world is collected by 3D photography equipment (i.e., a group of cameras or camera equipment with multiple lenses and sensors) to obtain point cloud data of the visual scene of the real world. 3D photography can be used to obtain point cloud data of dynamic real-world three-dimensional objects or three-dimensional scenes. (4) Point cloud data of biological tissues and organs can be obtained by medical equipment. In the medical field, point cloud data of biological tissues and organs can be obtained by medical equipment such as magnetic resonance imaging (MRI), computed tomography (CT), and electromagnetic positioning information.
  • Point clouds can be divided into dense point clouds and sparse point clouds according to the way they are acquired.
  • Point clouds are divided into the following types according to the time series of the data:
  • The first type, static point cloud: the object is stationary, and the device that acquires the point cloud is also stationary;
  • The second type, dynamic point cloud: the object is moving, but the device that acquires the point cloud is stationary;
  • The third type, dynamically acquired point cloud: the device that acquires the point cloud is moving.
  • Point clouds can be divided into two categories according to their uses:
  • Category 1, machine-perception point cloud: can be used in autonomous navigation systems, real-time inspection systems, geographic information systems, visual sorting robots, emergency rescue robots, etc.
  • Category 2, human-eye-perception point cloud: can be used in point cloud application scenarios such as digital cultural heritage, free viewpoint broadcasting, 3D immersive communication, and 3D immersive interaction.
  • the above point cloud acquisition technology reduces the cost and time of point cloud data acquisition and improves the accuracy of data.
  • the change in the point cloud data acquisition method makes it possible to acquire a large amount of point cloud data.
  • the processing of massive 3D point cloud data encounters bottlenecks of storage space and transmission bandwidth.
  • For example, consider a point cloud video with a frame rate of 30 fps (frames per second), where each point cloud frame contains 700,000 points and each point has coordinate information xyz (float) and color information RGB (uchar); its 10 s data volume is about 700,000 × 15 B × 30 frames × 10 s ≈ 3.15 GB.
  • By comparison, a 1280×720 two-dimensional video with a YUV sampling format of 4:2:0 and a frame rate of 24 fps has a 10 s data volume of approximately 1280 × 720 × 12 bit × 24 frames × 10 s ≈ 0.33 GB.
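The data volumes above follow from simple arithmetic, assuming 4-byte floats for xyz and 1-byte uchar values for RGB:

```python
# Point cloud: 700,000 points/frame, xyz as float (4 B each), RGB as uchar (1 B each)
points_per_frame = 700_000
bytes_per_point = 3 * 4 + 3 * 1                               # 15 bytes per point
pc_bytes_10s = points_per_frame * bytes_per_point * 30 * 10   # 30 fps for 10 s
print(pc_bytes_10s / 1e9)                                     # ≈ 3.15 GB

# 2D video: 1280x720, YUV 4:2:0 (12 bits per pixel), 24 fps for 10 s
video_bits_10s = 1280 * 720 * 12 * 24 * 10
print(video_bits_10s / 8 / 1e9)                               # ≈ 0.33 GB
```

The roughly tenfold gap between 3.15 GB and 0.33 GB is what makes point cloud compression a key issue.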
  • point cloud compression has become a key issue in promoting the development of the point cloud industry.
  • FIG3 is a schematic block diagram of a point cloud encoding and decoding system involved in an embodiment of the present application. It should be noted that FIG3 is only an example, and the point cloud encoding and decoding system of the embodiment of the present application includes but is not limited to that shown in FIG3.
  • the point cloud encoding and decoding system 100 includes an encoding device 110 and a decoding device 120.
  • the encoding device is used to encode (which can be understood as compression) the point cloud data to generate a code stream, and transmit the code stream to the decoding device.
  • the decoding device decodes the code stream generated by the encoding device to obtain decoded point cloud data.
  • the encoding device 110 of the embodiment of the present application can be understood as a device with a point cloud encoding function
  • the decoding device 120 can be understood as a device with a point cloud decoding function. That is, the encoding device 110 and the decoding device 120 of the embodiments of the present application cover a wide range of devices, such as smartphones, desktop computers, mobile computing devices, notebook (e.g., laptop) computers, tablet computers, set-top boxes, televisions, cameras, display devices, digital media players, point cloud game consoles, vehicle-mounted computers, etc.
  • the encoding device 110 may transmit the encoded point cloud data (such as a code stream) to the decoding device 120 via the channel 130.
  • the channel 130 may include one or more media and/or devices capable of transmitting the encoded point cloud data from the encoding device 110 to the decoding device 120.
  • the channel 130 includes one or more communication media that enable the encoding device 110 to transmit the encoded point cloud data directly to the decoding device 120 in real time.
  • the encoding device 110 can modulate the encoded point cloud data according to the communication standard and transmit the modulated point cloud data to the decoding device 120.
  • the communication medium includes a wireless communication medium, such as a radio frequency spectrum, and optionally, the communication medium may also include a wired communication medium, such as one or more physical transmission lines.
  • the channel 130 includes a storage medium, which can store the point cloud data encoded by the encoding device 110.
  • the storage medium includes a variety of locally accessible data storage media, such as optical disks, DVDs, flash memories, etc.
  • the decoding device 120 can obtain the encoded point cloud data from the storage medium.
  • the channel 130 may include a storage server that can store the point cloud data encoded by the encoding device 110.
  • the decoding device 120 can download the stored encoded point cloud data from the storage server.
  • the storage server can store the encoded point cloud data and transmit the encoded point cloud data to the decoding device 120, such as a web server (e.g., for a website), a file transfer protocol (FTP) server, etc.
  • the encoding device 110 includes a point cloud encoder 112 and an output interface 113.
  • the output interface 113 may include a modulator/demodulator (modem) and/or a transmitter.
  • the encoding device 110 may further include a point cloud source 111 in addition to the point cloud encoder 112 and the output interface 113.
  • the point cloud source 111 may include at least one of a point cloud acquisition device (e.g., a scanner), a point cloud archive, a point cloud input interface, and a computer graphics system, wherein the point cloud input interface is used to receive point cloud data from a point cloud content provider, and the computer graphics system is used to generate point cloud data.
  • the point cloud encoder 112 encodes the point cloud data from the point cloud source 111 to generate a code stream.
  • the point cloud encoder 112 transmits the encoded point cloud data directly to the decoding device 120 via the output interface 113.
  • the encoded point cloud data can also be stored in a storage medium or a storage server for subsequent reading by the decoding device 120.
  • the decoding device 120 includes an input interface 121 and a point cloud decoder 122 .
  • the decoding device 120 may further include a display device 123 in addition to the input interface 121 and the point cloud decoder 122 .
  • the input interface 121 includes a receiver and/or a modem.
  • the input interface 121 can receive the encoded point cloud data through the channel 130 .
  • the point cloud decoder 122 is used to decode the encoded point cloud data to obtain decoded point cloud data, and transmit the decoded point cloud data to the display device 123.
  • the decoded point cloud data is displayed on the display device 123.
  • the display device 123 may be integrated with the decoding device 120 or may be external to the decoding device 120.
  • the display device 123 may include a variety of display devices, such as a liquid crystal display (LCD), a plasma display, an organic light emitting diode (OLED) display, or other types of display devices.
  • Figure 3 is only an example, and the technical solution of the embodiment of the present application is not limited to Figure 3.
  • the technology of the present application can also be applied to unilateral point cloud encoding or unilateral point cloud decoding.
  • the current point cloud encoder can adopt two point cloud compression coding technology routes proposed by the International Standards Organization Moving Picture Experts Group (MPEG), namely Video-based Point Cloud Compression (VPCC) and Geometry-based Point Cloud Compression (GPCC).
  • VPCC projects the three-dimensional point cloud into two dimensions and uses the existing two-dimensional coding tools to encode the projected two-dimensional image.
  • GPCC uses a hierarchical structure to divide the point cloud into multiple units step by step, and encodes the entire point cloud by encoding the division process.
  • the following uses the GPCC encoding and decoding framework as an example to explain the point cloud encoder and point cloud decoder applicable to the embodiments of the present application.
  • FIG. 4A is a schematic block diagram of a point cloud encoder provided in an embodiment of the present application.
  • the points in the point cloud can include the location information of the points and the attribute information of the points. Therefore, the encoding of the points in the point cloud mainly includes location encoding and attribute encoding.
  • the location information of the points in the point cloud is also called geometric information, and the corresponding location encoding of the points in the point cloud can also be called geometric encoding.
  • the geometric information of the point cloud and the corresponding attribute information are encoded separately.
  • the current geometric coding and decoding of G-PCC can be divided into octree-based geometric coding and decoding and prediction tree-based geometric coding and decoding.
  • the process of position coding includes: preprocessing the points in the point cloud, such as coordinate transformation, quantization, and removal of duplicate points; then, geometric coding the preprocessed point cloud, such as constructing an octree, or constructing a prediction tree, and geometric coding based on the constructed octree or prediction tree to form a geometric code stream.
  • the position information of each point in the point cloud data is reconstructed to obtain the reconstructed value of the position information of each point.
  • the attribute encoding process includes: given the reconstruction information of the input point cloud position information and the original value of the attribute information, selecting one of the three prediction modes for point cloud prediction, quantizing the predicted result, and performing arithmetic coding to form an attribute code stream.
  • position encoding can be achieved by the following units:
  • a coordinate transform (Transform coordinates) unit 201, a voxelization (Voxelize) unit 202, an octree partition (Analyze octree) unit 203, a geometry reconstruction (Reconstruct geometry) unit 204, a first arithmetic encoding (Arithmetic encode) unit 205, a surface fitting (Analyze surface approximation) unit 206, and a prediction tree construction unit 207.
  • the coordinate conversion unit 201 can be used to convert the world coordinates of the points in the point cloud into relative coordinates. For example, the minimum value along each of the x, y, and z axes is subtracted from the corresponding coordinate of each point, which is equivalent to a DC-removal operation, thereby converting the coordinates of the points in the point cloud from world coordinates to relative coordinates.
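The per-axis minimum subtraction can be sketched as follows; this is a minimal illustration, not the codec's actual API:

```python
def to_relative_coords(points):
    """Convert world coordinates to relative coordinates by subtracting the
    per-axis minimum (a DC-removal operation).

    `points` is a list of (x, y, z) tuples.
    """
    min_x = min(p[0] for p in points)
    min_y = min(p[1] for p in points)
    min_z = min(p[2] for p in points)
    return [(x - min_x, y - min_y, z - min_z) for (x, y, z) in points]
```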
  • the voxel unit 202, also called the quantize-and-remove-points unit, can reduce the number of distinct coordinates by quantization. After quantization, originally different points may be assigned the same coordinates, and such duplicate points can be deleted by a deduplication operation; for example, multiple points with the same quantized position but different attribute information can be merged into one point through attribute transformation.
  • the voxel unit 202 is an optional unit module.
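The quantize-and-deduplicate step can be sketched as below. Averaging is one possible attribute transform for merged points, chosen here for illustration; the step size and data layout are assumptions, not the codec's syntax:

```python
def quantize_and_dedup(points_with_attrs, step):
    """Quantize coordinates by `step`, then merge points that land on the same
    voxel; merged attributes are averaged.

    `points_with_attrs`: list of ((x, y, z), attr) pairs with a scalar attr.
    Returns {quantized_position: merged_attribute}.
    """
    voxels = {}
    for (x, y, z), attr in points_with_attrs:
        key = (round(x / step), round(y / step), round(z / step))
        voxels.setdefault(key, []).append(attr)
    # one output point per voxel: quantized position plus averaged attribute
    return {k: sum(v) / len(v) for k, v in voxels.items()}
```

Two points that quantize to the same voxel come out as a single point, which is why this unit is also described as removing duplicate points.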
  • the octree division unit 203 may use an octree encoding method to encode the position information of the quantized points.
  • the point cloud is partitioned in the form of an octree so that point positions correspond one-to-one to octree positions; each octree position occupied by a point is counted and its flag is recorded as 1, and geometric encoding is then performed.
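The occupancy flags of one octree node can be illustrated as an 8-bit mask over its child octants. This is a sketch only; actual codecs fix a particular bit ordering and coding context, which are not specified here:

```python
def occupancy_byte(points, origin, half):
    """Return an 8-bit occupancy mask for the 8 children of a cube node.

    `origin` is the node's minimum corner and `half` its half side length; a
    point lying in the upper half of an axis sets that axis's bit of the
    child octant index (x -> bit 2, y -> bit 1, z -> bit 0).
    """
    mask = 0
    for x, y, z in points:
        octant = ((x >= origin[0] + half) << 2) | \
                 ((y >= origin[1] + half) << 1) | \
                 ((z >= origin[2] + half))
        mask |= 1 << octant  # flag = 1 for an occupied child
    return mask
```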
  • in the process of geometric information encoding based on triangle soup (trisoup), the point cloud is likewise partitioned into an octree by the octree division unit 203.
  • however, trisoup does not need to divide the point cloud step by step into unit cubes with a side length of 1×1×1; instead, the division stops when the block (sub-block) has a side length of W.
  • based on the surface formed by the distribution of the point cloud in each block, at most twelve vertices (intersections) between that surface and the twelve edges of the block are obtained; the intersections are surface-fitted by the surface fitting unit 206, and the fitted intersections are geometrically encoded.
  • the prediction tree construction unit 207 can use the prediction tree encoding method to encode the position information of the quantized points.
  • the point cloud is divided in the form of a prediction tree, so that the position of the point can correspond to the position of the node in the prediction tree one by one.
  • the geometric position information of the node is predicted by selecting different prediction modes to obtain the prediction residual, and the geometric prediction residual is quantized using the quantization parameter.
  • the prediction residual of the prediction tree node position information, the prediction tree structure and the quantization parameter are encoded to generate a binary code stream.
  • the geometric reconstruction unit 204 can perform position reconstruction based on the position information output by the octree division unit 203 or the intersection points fitted by the surface fitting unit 206 to obtain the reconstructed value of the position information of each point in the point cloud data.
  • the position reconstruction can be performed based on the position information output by the prediction tree construction unit 207 to obtain the reconstructed value of the position information of each point in the point cloud data.
  • the arithmetic coding unit 205 can use entropy coding to perform arithmetic coding on the position information output by the octree analysis unit 203 or the intersection points fitted by the surface fitting unit 206, or the geometric prediction residual values output by the prediction tree construction unit 207 to generate a geometric code stream; the geometric code stream can also be called a geometry bitstream.
  • Attribute encoding can be achieved through the following units:
  • a color conversion (Transform colors) unit 210, a recoloring (Transfer attributes) unit 211, a Region Adaptive Hierarchical Transform (RAHT) unit 212, an LOD generation (Generate LOD) unit 213, a lifting transform unit 214, a coefficient quantization (Quantize coefficients) unit 215, and an arithmetic coding unit 216.
  • point cloud encoder 200 may include more, fewer or different functional components than those shown in FIG. 4A .
  • the color conversion unit 210 may be used to convert the RGB color space of the points in the point cloud into a YCbCr format or other formats.
  • the recoloring unit 211 recolors the color information using the reconstructed geometric information so that the uncoded attribute information corresponds to the reconstructed geometric information.
  • any transformation unit can be selected to transform the points in the point cloud.
  • the transformation unit may include: RAHT transformation 212 and lifting (lifting transform) unit 214. Among them, the lifting transformation depends on generating a level of detail (LOD).
  • any of the RAHT transformation and the lifting transformation can be understood as being used to predict the attribute information of a point in a point cloud to obtain a predicted value of the attribute information of the point, and then obtain a residual value of the attribute information of the point based on the predicted value of the attribute information of the point.
  • the residual value of the attribute information of the point can be the original value of the attribute information of the point minus the predicted value of the attribute information of the point.
  • the process of generating LOD by the LOD generating unit includes: obtaining the Euclidean distance between points according to the position information of the points in the point cloud; and dividing the points into different detail expression layers according to the Euclidean distance.
  • the Euclidean distances can be sorted and the Euclidean distances in different ranges can be divided into different detail expression layers. For example, a point can be randomly selected as the first detail expression layer. Then the Euclidean distances between the remaining points and the point are calculated, and the points whose Euclidean distances meet the first threshold requirement are classified as the second detail expression layer.
  • the centroid of the points in the second detail expression layer is obtained, and the Euclidean distances between the points other than the first and second detail expression layers and the centroid are calculated, and the points whose Euclidean distances meet the second threshold are classified as the third detail expression layer.
  • all points are classified into the detail expression layer.
• By adjusting the threshold of the Euclidean distance, the number of points in each LOD layer can be increased. It should be understood that the LOD division can also be performed in other ways, and the present application does not limit this.
  • the point cloud may be directly divided into one or more detail expression layers, or the point cloud may be first divided into a plurality of point cloud slices, and then each point cloud slice may be divided into one or more LOD layers.
  • the point cloud can be divided into multiple point cloud blocks, and the number of points in each point cloud block can be between 550,000 and 1.1 million.
  • Each point cloud block can be regarded as a separate point cloud.
  • Each point cloud block can be divided into multiple detail expression layers, and each detail expression layer includes multiple points.
  • the detail expression layer can be divided according to the Euclidean distance between points.
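The layer-by-layer division described above can be sketched as follows. This is a simplified illustration, not the exact G-PCC procedure: the choice of the first point, the per-layer thresholds, and the centroid-based reference are assumptions for the example.

```python
import math

def euclidean(p, q):
    """Euclidean distance between two 3D points."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def centroid(points):
    """Centroid of a non-empty list of 3D points."""
    n = len(points)
    return tuple(sum(c) / n for c in zip(*points))

def divide_into_lods(points, thresholds):
    """Greedy sketch of detail-expression-layer division by Euclidean distance.

    The first point forms the first layer; each subsequent layer collects the
    remaining points within the next threshold of the current reference, and
    the reference is then moved to that layer's centroid.
    """
    remaining = list(points)
    layers = [[remaining.pop(0)]]      # first detail expression layer: one point
    ref = layers[0][0]                 # reference for distance comparison
    for th in thresholds:
        layer = [p for p in remaining if euclidean(p, ref) <= th]
        remaining = [p for p in remaining if euclidean(p, ref) > th]
        if not layer:
            break
        layers.append(layer)
        ref = centroid(layer)          # next layer is measured from this centroid
    if remaining:                      # leftover points form the last layer
        layers.append(remaining)
    return layers
```

The thresholds control how many points fall into each layer, matching the remark above that adjusting the Euclidean-distance threshold changes the LOD layer sizes.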
  • the quantization unit 215 may be used to quantize the residual value of the attribute information of the point. For example, if the quantization unit 215 is connected to the RAHT transformation unit 212, the quantization unit 215 may be used to quantize the residual value of the attribute information of the point output by the RAHT transformation unit 212.
  • the arithmetic coding unit 216 may use zero run length coding to perform entropy coding on the residual value of the attribute information of the point to obtain an attribute code stream.
  • the attribute code stream may be bit stream information.
• FIG. 4B is a schematic block diagram of a point cloud decoder provided in an embodiment of the present application.
  • the decoder 300 can obtain the point cloud code stream from the encoding device, and obtain the position information and attribute information of the points in the point cloud by parsing the code.
  • the decoding of the point cloud includes position decoding and attribute decoding.
• the process of position decoding includes: performing arithmetic decoding on the geometric code stream; synthesizing the octree and then merging to reconstruct the position information of the points, so as to obtain the reconstructed information of the point position information; and performing coordinate transformation on the reconstructed information of the point position information to obtain the point position information.
  • the point position information can also be called the geometric information of the point.
  • the attribute decoding process includes: obtaining the residual value of the attribute information of the point in the point cloud by parsing the attribute code stream; obtaining the residual value of the attribute information of the point after dequantization by dequantizing the residual value of the attribute information of the point; based on the reconstruction information of the point position information obtained in the position decoding process, selecting one of the following RAHT inverse transform and lifting inverse transform to predict the point cloud to obtain the predicted value, and adding the predicted value to the residual value to obtain the reconstructed value of the attribute information of the point; performing color space inverse conversion on the reconstructed value of the attribute information of the point to obtain the decoded point cloud.
  • position decoding can be achieved by the following units:
• arithmetic decoding unit 301, octree synthesis (synthesize octree) unit 302, surface fitting (synthesize surface approximation) unit 303, geometry reconstruction (reconstruct geometry) unit 304, inverse coordinate transform (inverse transform coordinates) unit 305 and prediction tree reconstruction unit 306.
• Attribute decoding can be achieved through the following units:
  • each unit in the decoder 300 can refer to the functions of the corresponding units in the encoder 200.
  • the point cloud decoder 300 may include more, fewer or different functional components than those in FIG. 5 .
• the decoder 300 can divide the point cloud into multiple LODs according to the Euclidean distance between points in the point cloud; then, the attribute information of the points in the LODs is decoded in sequence; for example, the number of zeros (zero_cnt) in the zero-run-length coding technology is parsed so that the residual can be decoded based on zero_cnt; then, the decoder 300 can perform inverse quantization on the decoded residual value, and add the inverse-quantized residual value to the predicted value of the current point to obtain the reconstructed value of the point, until all points in the point cloud are decoded.
  • the current point will be used as the nearest point of the subsequent LOD point, and the reconstruction value of the current point will be used to predict the attribute information of the subsequent point.
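The zero-run decoding step mentioned above can be sketched as follows. The symbol layout (a zero count followed by one non-zero residual whenever the run ends before the point list does) is an assumption for illustration, not the exact G-PCC bitstream syntax.

```python
def zero_run_decode(symbols, num_points):
    """Sketch of zero-run-length decoding of attribute residuals.

    `symbols` alternates a zero count (zero_cnt) with, when more residuals
    remain to be produced, one non-zero residual value ending the run.
    """
    residuals = []
    it = iter(symbols)
    while len(residuals) < num_points:
        zero_cnt = next(it)                 # number of consecutive zero residuals
        residuals.extend([0] * zero_cnt)
        if len(residuals) < num_points:
            residuals.append(next(it))      # the non-zero residual ending the run
    return residuals
```

Each decoded residual would then be inverse-quantized and added to the prediction of its point, as described above.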
  • the following introduces octree-based geometric coding and prediction tree-based geometric coding.
  • Octree-based geometric encoding includes: first, coordinate transformation of geometric information so that all point clouds are contained in a bounding box. Then quantization is performed. This step of quantization mainly plays a role of scaling. Due to quantization rounding, the geometric information of some points is the same. Whether to remove duplicate points is determined based on parameters. The process of quantization and removal of duplicate points is also called voxelization. Next, the bounding box is continuously divided into trees (octree/quadtree/binary tree) in the order of breadth-first traversal, and the placeholder code of each node is encoded. In an implicit geometric division method, the bounding box of the point cloud is first calculated.
• In the process of binary tree/quadtree/octree partitioning, two parameters are introduced: K and M.
  • K indicates the maximum number of binary tree/quadtree partitioning before octree partitioning;
• the parameter M is used to indicate that the minimum block side length corresponding to binary tree/quadtree partitioning is 2^M.
  • the current node has only one occupied child node, and the parent node of the current node's parent node has only two occupied child nodes, that is, the current node has at most one neighbor node.
  • the parent node of the current node has only one child node, the current node.
  • the six neighbor nodes that share a face with the current node are also empty nodes.
  • the current node does not have the DCM coding qualification, it will be divided into octrees. If it has the DCM coding qualification, the number of points contained in the node will be further determined. When the number of points is less than the threshold 2, the node will be DCM-encoded, otherwise the octree division will continue.
  • the DCM coding mode is applied, the geometric coordinates X, Y, and Z components of the points contained in the current node will be directly encoded independently.
• dx, dy, and dz bits are required to encode the x, y, and z components of the geometric coordinates of the points in the node, respectively, and this bit information is directly encoded into the bitstream.
  • G-PCC currently introduces a plane coding mode. In the process of geometric division, it will determine whether the child nodes of the current node are in the same plane. If the child nodes of the current node meet the conditions of the same plane, the child nodes of the current node will be represented by the plane.
  • the decoder obtains the placeholder code of each node by continuously parsing in the order of breadth-first traversal, and divides the nodes in sequence until a 1x1x1 unit cube is obtained. The number of points contained in each leaf node is parsed, and finally the geometric reconstructed point cloud information is restored.
• In the geometric information coding framework based on trisoup (triangle soup, triangle patch set), geometric division must also be performed first. However, different from the geometric information coding based on binary tree/quadtree/octree, this method does not need to divide the point cloud step by step into unit cubes with a side length of 1, but stops dividing when the side length of the block (sub-block) is W.
• Based on the surface formed by the distribution of the point cloud in each block, at most twelve vertices (intersection points) generated by the surface and the twelve edges of the block are obtained. The vertex coordinates of each block are encoded in turn to generate a binary code stream.
  • the vertex coordinates are first decoded to complete the reconstruction of the triangle facets at the decoding end.
  • the process is shown in Figures 5A to 5C.
  • the triangle facet set formed by these three vertices in a certain order is called triangle soup, or trisoup, as shown in Figure 5B.
  • sampling is performed on the triangle facet set, and the obtained sampling points are used as the reconstructed point cloud in the block, as shown in Figure 5C.
  • the geometric coding based on the prediction tree includes: first, sorting the input point cloud.
  • the currently used sorting methods include unordered, Morton order, azimuth order and radial distance order.
  • the prediction tree structure is established by using two different methods, including: KD-Tree (high-latency slow mode) and using the laser radar calibration information to divide each point into different Lasers, and establish a prediction structure according to different Lasers (low-latency fast mode).
• Each node in the prediction tree is traversed; the geometric position information of the node is predicted by selecting different prediction modes to obtain the prediction residual, and the geometric prediction residual is quantized using the quantization parameter.
  • the prediction residual of the prediction tree node position information, the prediction tree structure and the quantization parameters are encoded to generate a binary code stream.
• Based on the geometric decoding of the prediction tree, the decoding end reconstructs the prediction tree structure by continuously parsing the bit stream, then obtains the geometric position prediction residual information and quantization parameters of each prediction node through parsing, and dequantizes the prediction residual to recover the reconstructed geometric position information of each node, finally completing the geometric reconstruction at the decoding end.
  • the geometric information is reconstructed.
  • attribute encoding is mainly performed on color information.
  • the color information is converted from the RGB color space to the YUV color space.
  • the point cloud is recolored using the reconstructed geometric information so that the unencoded attribute information corresponds to the reconstructed geometric information.
• RAHT: Region Adaptive Hierarchical Transform.
  • Morton codes can be used to search for nearest neighbors.
  • the Morton code corresponding to each point in the point cloud can be obtained from the geometric coordinates of the point.
• the specific method for calculating the Morton code is described as follows. Each component of the three-dimensional coordinate is represented by a d-bit binary number, so the three components can be expressed as formula (1):
• x = Σ_{l=1..d} 2^(d-l) · x_l,  y = Σ_{l=1..d} 2^(d-l) · y_l,  z = Σ_{l=1..d} 2^(d-l) · z_l  (1)
• where x_l, y_l, z_l ∈ {0, 1} are the binary values of x, y, and z from the highest bit (l = 1) to the lowest bit (l = d).
• the Morton code M is obtained by arranging the bits of x, y, and z in sequence, starting from the highest bit down to the lowest bit; the calculation formula of M is shown in the following formula (2):
• M = Σ_{l=1..d} [ 2^(3(d-l)+2) · x_l + 2^(3(d-l)+1) · y_l + 2^(3(d-l)) · z_l ]  (2)
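The bit interleaving described above can be sketched directly. The bit order within each position (x, then y, then z, from the most significant bit downward) follows the description; d is the component bit width.

```python
def morton_code(x, y, z, d):
    """Interleave the d-bit components x, y, z into a 3d-bit Morton code.

    Bits are taken from the highest bit to the lowest; within each bit
    position the order x, y, z is used.
    """
    m = 0
    for l in range(d - 1, -1, -1):      # l = d-1 (MSB) .. 0 (LSB)
        m = (m << 3) \
            | (((x >> l) & 1) << 2) \
            | (((y >> l) & 1) << 1) \
            | ((z >> l) & 1)
    return m
```

For example, with d = 2 the point (3, 3, 3) has all six bits set, giving the maximum 6-bit code 63.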
• Condition 1: the geometric position is limitedly lossy and the attributes are lossy;
• Condition 3: the geometric position is lossless and the attributes are limitedly lossy;
• Condition 4: the geometric position and the attributes are lossless.
• GPCC's general test sequences include four categories: Cat1A, Cat1B, Cat3-fused, and Cat3-frame. Among them, the Cat3-frame point clouds only contain reflectance attribute information, the Cat1A and Cat1B point clouds only contain color attribute information, and the Cat3-fused point clouds contain both color and reflectance attribute information.
• GPCC has two technical routes in total, differentiated by the algorithms used for geometric compression.
  • the bounding box is divided into sub-cubes in sequence, and the non-empty sub-cubes (containing points in the point cloud) are continued to be divided until the leaf node obtained by division is a 1X1X1 unit cube.
  • the decoding end obtains the placeholder code of each node by continuously parsing in the order of breadth-first traversal, and continuously divides the nodes in turn until a 1x1x1 unit cube is obtained.
• For geometric lossless decoding, it is also necessary to parse the number of points contained in each leaf node and finally restore the geometrically reconstructed point cloud information.
  • the prediction tree structure is established by using two different methods, including: KD-Tree (high-latency slow mode) and using the laser radar calibration information to divide each point into different Lasers and establish a prediction structure according to different Lasers (low-latency fast mode).
  • each node in the prediction tree is traversed, and the geometric position information of the node is predicted by selecting different prediction modes to obtain the prediction residual, and the geometric prediction residual is quantized using the quantization parameter.
  • the prediction residual of the prediction tree node position information, the prediction tree structure, and the quantization parameters are encoded to generate a binary code stream.
  • the decoding end reconstructs the prediction tree structure by continuously parsing the bit stream, and then obtains the geometric position prediction residual information and quantization parameters of each prediction node through parsing, and dequantizes the prediction residual to restore the reconstructed geometric position information of each node, and finally completes the geometric reconstruction at the decoding end.
• Input for constructing a single chain structure: the voxelized point cloud and the prior information of the rotating lidar.
  • the output of constructing a single chain structure includes: the geometric prediction value and prediction residual of the current point, the prediction mode adopted by the current point, the quantization parameter of the current point, the number of repeated points, and the number of skipped points corresponding to each mode.
• As shown in Figure 6, the construction of L3C2 is specifically divided into: reordering, coordinate transformation, establishing a single chain structure, selecting a prediction mode, generating a prediction value, encoding the number of repeated points, quantizing the prediction residual, inverse coordinate transformation, and encoding the coordinate transformation residual.
  • the voxelized point cloud is reordered to construct a more efficient single chain structure.
  • the default sorting method is to sort according to the scanning order of the lidar.
• the Cartesian coordinates (x, y, z) of each point are converted into polar coordinates (r, φ, tanθ), and the points are sorted in turn according to the elevation tangent value tanθ, the azimuth angle φ, and the radius r.
• the point cloud is traversed and the points are converted from Cartesian coordinates (x, y, z) to cylindrical coordinates (r, φ, i) according to the following formula (4) and stored.
• i is the LaserID corresponding to the point (a typical laser radar system may have 16, 32 or 64 Laser scanners, and the prior information of each Laser is different, that is, the elevation angle θ and the height zLaser in the vertical direction are different).
  • i is determined by looping the prior information of different Lasers. In each loop, the z component of the point is calculated using the r, prior information and the above conversion formula of the point, and the deviation between the converted z component and the original z component of the point is calculated, and then the point with the smallest deviation is selected from different LaserIDs as the i of the point. This process processes the non-uniform distribution of the point cloud in the vertical direction in space and makes it regular.
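The conversion and the LaserID selection loop described above can be sketched as follows. The per-Laser prior model used here (z ≈ r · tanθ(i) − zLaser(i)) and the `lasers` table of (tanθ, zLaser) pairs are illustrative assumptions standing in for the dropped formula (4).

```python
import math

def to_cylindrical(x, y, z, lasers):
    """Convert (x, y, z) to cylindrical coordinates (r, phi, i).

    `lasers` is a list of (tan_theta, z_laser) priors indexed by LaserID.
    The LaserID i is chosen as the one whose prior best reproduces the
    measured z component, as described in the text.
    """
    r = math.sqrt(x * x + y * y)
    phi = math.atan2(y, x)
    best_i, best_err = 0, float("inf")
    for i, (tan_theta, z_laser) in enumerate(lasers):
        z_pred = r * tan_theta - z_laser   # assumed prior model per Laser
        err = abs(z_pred - z)
        if err < best_err:
            best_i, best_err = i, err
    return r, phi, best_i
```

Selecting the minimum-deviation LaserID is what regularizes the non-uniform vertical distribution of the point cloud.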
• FIG. 7A is a single chain structure obtained by sorting the points in the point cloud based on the cylindrical coordinates (r, φ, i).
• the rotation interval Δφ(i) of each Laser can be obtained by using the acquisition parameters of the laser scanner; by comparing the azimuth angles of the single-chain structure shown in FIG. 7A against Δφ(i), the regularized single-chain structure shown in FIG. 7B is obtained.
• specifically, the single-chain structure shown in FIG. 7A can be regularized by the following formula (5): φ_reg = round(φ / Δφ(i)).
• where round() is the rounding function.
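A minimal sketch of this regularization, assuming the rotation interval is derived from the number of samples per revolution (an illustrative assumption):

```python
import math

def delta_phi(points_per_turn):
    """Rotation interval of a Laser, assuming points_per_turn samples per revolution."""
    return 2 * math.pi / points_per_turn

def regularize_phi(phi, dphi):
    """Quantize the azimuth to the Laser's rotation interval (sketch of formula (5))."""
    return round(phi / dphi)
```

Note that Python's built-in `round()` uses ties-to-even; the exact rounding convention of the codec is not specified in the text above.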
  • the entire encoding process is performed using the regularized single-chain structure as an example.
• where N_laser represents the number of Lasers, and i represents the LaserID corresponding to the current point.
  • each point is predicted and encoded according to the structure shown in FIG9.
  • the best prediction mode predMode is selected by using the rate-distortion optimization criterion at the encoding end, and the prediction mode needs to be encoded.
  • the update criteria are as follows:
  • Criterion 1 When the prediction residual is greater than a certain threshold (Th), the current point is updated to the prediction list and the last prediction value is deleted.
  • Criterion 2 When the prediction residual is less than or equal to a certain threshold (Th), the selected prediction value is deleted and the current point is updated to the prediction list.
• assume the prediction value of the current point has been obtained; based on the predicted value and the original value of the current point, the prediction residual value of the current point is determined as the difference between the original value and the predicted value.
• the determination principle of the quantization factors corresponding to x and y is shown in FIG. 10, and the determination principle of the quantization factors corresponding to r and φ is shown in FIG. 11.
• the quantization step sizes corresponding to r and φ are calculated according to the following formula (7) and formula (8), respectively, and the corresponding quantization factors α_r and α_φ are obtained.
• the prediction residual value of the current point is then quantized using these quantization factors.
• i is the LaserID corresponding to the point, and the prior information of each Laser is different, that is, the elevation angle θ and the height zLaser in the vertical direction are different. Therefore, the elevation angle corresponding to the i-th Laser is θ(i), and the height in the vertical direction is zLaser(i).
• Input for reconstructing the single-chain structure: the decoded data (including the prediction mode adopted by the current point, the prediction residual of the current point, the quantization parameter of the current point, the number of repeated points, and the order of each point), and the prior information of the rotating lidar.
• Output of reconstructing the single-chain structure: the reconstructed voxelized point cloud.
• the reconstruction of the single chain structure is divided into the following steps: generating prediction values, reconstructing the single chain structure, decoding the number of repeated points, performing inverse coordinate transformation, and reconstructing the geometric point cloud.
  • the following are introduced respectively:
  • the cylindrical coordinates of the current point are predicted and the corresponding prediction values are generated.
  • the specific process is as follows:
  • predictive coding is performed on each point according to the structure shown in FIG9 .
  • the prediction mode predMode of the current point is obtained by parsing the bitstream at the decoding end, and then the cylindrical coordinate prediction value of the current point is obtained in the prediction list using the prediction mode predMode.
• the predicted value of the Cartesian coordinates is obtained from the reconstructed value of the cylindrical coordinates of the point.
• the predicted value of the cylindrical coordinates of the current point can be obtained through the previous step. Next, according to the following formula (10), the reconstructed cylindrical coordinates of the current point are calculated from the cylindrical coordinate residual obtained by decoding and the predicted cylindrical coordinates of the current point.
  • the position of the current point in the L3C2 structure can be determined, and the cylindrical coordinates (r, ⁇ , i) can be reconstructed to reconstruct the L3C2 structure.
• the reconstructed cylindrical coordinates (r, φ, i) of the current point, that is, the reconstructed cylindrical coordinates mentioned above, are converted to Cartesian coordinates according to the following formula (11); the result is the predicted Cartesian coordinates of the current point.
• i is the LaserID corresponding to the point, and the prior information of each Laser is different, that is, the elevation angle θ and the height zLaser in the vertical direction are different. Therefore, the elevation angle corresponding to the i-th Laser is θ(i), and the height in the vertical direction is zLaser(i).
• the following formula (12) uses the decoded Cartesian coordinate residual (r_x, r_y, r_z) and the predicted Cartesian coordinates of the current point to compute the reconstructed Cartesian coordinates (x, y, z) of the current point.
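The decoder-side reconstruction chain (formulas (10) to (12)) can be sketched as follows. The Cartesian conversion model (x = r·cos φ, y = r·sin φ, z = r·tanθ(i) − zLaser(i)) is an illustrative assumption standing in for the dropped formula (11), mirroring the prior model assumed for the forward conversion.

```python
import math

def reconstruct_point(res_cyl, pred_cyl, res_xyz, lasers):
    """Sketch of decoder-side point reconstruction.

    Step 1 (formula (10)): reconstructed cylindrical = predicted + residual.
    Step 2 (formula (11)): convert to Cartesian -> predicted Cartesian coords.
    Step 3 (formula (12)): reconstructed Cartesian = predicted + decoded residual.
    `lasers` is a list of (tan_theta, z_laser) priors indexed by LaserID.
    """
    r = pred_cyl[0] + res_cyl[0]
    phi = pred_cyl[1] + res_cyl[1]
    i = pred_cyl[2] + res_cyl[2]
    tan_theta, z_laser = lasers[i]
    x_pred = r * math.cos(phi)
    y_pred = r * math.sin(phi)
    z_pred = r * tan_theta - z_laser
    return (x_pred + res_xyz[0], y_pred + res_xyz[1], z_pred + res_xyz[2])
```

The residual additions follow the convention stated earlier that a residual is the original value minus the predicted value.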
  • the above introduces the encoding and decoding method based on L3C2. It can be seen from the above that in the encoding and decoding based on L3C2, for each node in L3C2, the repeated point information of the node needs to be encoded and decoded, but not every node has repeated points. Therefore, encoding and decoding repeated point information for each node increases the complexity of encoding and decoding, wastes a lot of encoding and decoding time, and makes the encoding and decoding efficiency low.
  • the total number of repeated points included in the point cloud is first determined by the total number of points included in the point cloud and the total number of L3C2 nodes of the point cloud. In this way, during encoding and decoding, the number of repeated points that have been encoded and decoded is recorded in real time, and the number of repeated points that have been encoded and decoded is compared with the total number of repeated points included in the point cloud to determine whether to encode and decode the repeated point information of the node when encoding and decoding the current node.
  • the number of repeated points that have been encoded and decoded is equal to the total number of repeated points in the point cloud, indicating that the repeated points in the point cloud have been encoded and decoded, and the remaining nodes do not include repeated points, and thus there is no need to encode and decode the repeated point information of subsequent nodes, thereby reducing the encoding and decoding complexity of the point cloud, saving encoding and decoding time, and thus improving the encoding and decoding efficiency.
  • the point cloud decoding method provided in the embodiment of the present application is introduced.
  • Fig. 13 is a schematic diagram of a point cloud decoding method according to an embodiment of the present application.
  • the point cloud decoding method according to the embodiment of the present application can be implemented by the point cloud decoding device shown in Fig. 3 or Fig. 5 above.
  • the point cloud decoding method of the embodiment of the present application includes:
  • the point cloud includes geometric information and attribute information
  • the decoding of the point cloud includes geometric decoding and attribute decoding.
  • the embodiment of the present application relates to geometric decoding of point clouds.
  • the geometric information of the point cloud is also referred to as the position information of the point cloud. Therefore, the geometric decoding of the point cloud is also referred to as the position decoding of the point cloud.
  • the encoder constructs the L3C2 structure of the point cloud based on the geometric information of the point cloud.
  • the L3C2 structure is a chain structure composed of at least one single chain structure, and each single chain structure includes at least one node.
  • a node includes at least one point in the point cloud, that is, in the L3C2 encoding, the points in the point cloud are divided into nodes in L3C2.
  • these points with the same coordinates are divided into the same node of L3C2, so that the node includes repeated points.
  • L3C2 of a point cloud when constructing L3C2 of a point cloud, it is necessary to perform coordinate conversion on the points in the point cloud. For example, when converting the coordinates of the points in the point cloud in the second coordinate system to the coordinates in the first coordinate system, the coordinates of the points with different coordinates in the second coordinate system may become the same when converted to the first coordinate system. In this way, when constructing L3C2 based on the coordinates of the points in the first coordinate system, the points with the same coordinates in the first coordinate system will be divided into one node, so that the node includes duplicate points.
  • nodes in L3C2 of the point cloud include duplicate points.
  • the encoder needs to encode the repeated point information of each node.
  • the decoder decodes the repeated point information of each node. This will increase the complexity of encoding and decoding, waste encoding and decoding time, and reduce encoding and decoding efficiency.
  • the total number of duplicate points included in the point cloud is first determined, and during the decoding process, the number of decoded duplicate points is counted, and then before decoding each point, it is first determined whether the number of currently decoded duplicate points is equal to the total number of duplicate points included in the point cloud.
  • the number of currently decoded duplicate points is equal to the total number of duplicate points included in the point cloud, it means that the duplicate points in the point cloud have been decoded, and the remaining nodes to be decoded in L3C2 do not include duplicate nodes. In this way, when these nodes are subsequently decoded, the duplicate point information of these nodes will no longer be decoded, thereby reducing the decoding complexity of the point cloud, saving decoding time, and improving decoding efficiency.
  • the embodiment of the present application does not limit the specific method for the decoding end to obtain the total number of L3C2 nodes of the point cloud and the total number of points of the point cloud.
  • the encoder writes the total number of nodes of L3C2 of the point cloud and the total number of points of the point cloud into the geometry stream. In this way, the decoder obtains the total number of nodes of L3C2 of the point cloud and the total number of points of the point cloud by decoding the geometry stream.
  • the encoding end writes the number of single chains included in the point cloud and the number of nodes included in each single chain into the geometric code stream.
  • the decoding end decodes the geometric code stream of the point cloud to obtain the number of single chains headsCount included in the point cloud and the number of nodes included in each single chain, and then obtains the total number of nodes of L3C2 based on the number of single chains and the number of nodes included in each single chain. For example, the sum of the number of nodes included in each single chain is determined as the total number of nodes of L3C2.
  • the nodes on each single chain included in the point cloud can be added together to obtain the total number of nodes nodeCount of L3C2 through the following instructions:
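A minimal sketch of this summation, assuming the per-chain node counts (one entry per single chain, headsCount entries in total) have already been parsed from the geometric code stream:

```python
def total_node_count(nodes_per_chain):
    """Sum the node counts of all single chains to obtain nodeCount.

    `nodes_per_chain` stands in for the per-chain counts parsed from the
    geometry code stream; its layout is assumed for illustration.
    """
    node_count = 0
    for n in nodes_per_chain:
        node_count += n
    return node_count
```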
  • the encoding end writes the total number of points of the point cloud into the geometric data unit of the point cloud. In this way, the decoding end obtains the total number of points of the point cloud by decoding the geometric data unit of the point cloud.
  • the encoder when encoding the total number of points in the point cloud, the encoder subtracts one from the total number of points in the point cloud to obtain a first value, and writes the first value into the geometry data unit, that is, the first value is represented as slice_num_points_minus1.
  • the decoder when decoding, the decoder decodes the geometry data unit to obtain the first value, and then adds 1 to the first value to obtain the total number of points in the point cloud.
• the syntax of the geometry data unit footer (Geometry data unit footer syntax) is shown in Table 2:
• slice_num_points_minus1 plus 1 specifies the total number of points in the point cloud. Bitstream conformance requires that slice_num_points_minus1 plus 1 be equal to the number of decodable points. Decoders should not rely on bitstream conformance alone to prevent buffer overflows in implementations.
  • the decoding end obtains the first value slice_num_points_minus1 by decoding the geometric data unit shown in Table 2, and adds 1 to the first value to obtain the total number of points in the point cloud.
  • the decoding end obtains the total number of nodes of L3C2 of the point cloud and the total number of points of the point cloud, and then executes the following step S102.
• S102: Determine the total number of repeated points in the point cloud according to the total number of points in the point cloud and the total number of nodes in L3C2.
• after the decoding end determines the total number of L3C2 nodes of the point cloud and the total number of points in the point cloud, it determines the total number of repeated points in the point cloud based on these two quantities, and then uses the total number of repeated points in the point cloud as supervision for subsequent decoding.
  • the embodiment of the present application does not limit the specific method in which the decoding end determines the total number of repeated points in the point cloud based on the total number of nodes of L3C2 of the point cloud and the total number of points in the point cloud.
  • some point information may be lost or damaged during data transmission or encoding.
  • the decoding end may perform shallow decoding on the geometry stream to determine the number of nodes included in the geometry stream, and then determine the total number of repeated points in the point cloud based on the number of nodes included in the geometry stream, the total number of points in the point cloud determined in the above steps, and the total number of nodes in L3C2.
  • the difference between the total number of points in the point cloud and the total number of nodes in L3C2 is directly determined as the total number of repeated points in the point cloud.
  • the decoding end determines the total number of repeated points in the point cloud by the following formula (13):
  • dupSumNum = (slice_num_points_minus1 + 1) − nodeCount (13)
  • dupSumNum is the total number of duplicate points in the point cloud
  • slice_num_points_minus1+1 is the total number of points in the point cloud
  • nodeCount is the total number of nodes in L3C2.
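  • The computation in formula (13) can be sketched as follows; this is a minimal illustration, and the function and variable names are hypothetical:

```python
def total_repeated_points(slice_num_points_minus1: int, node_count: int) -> int:
    """Formula (13): dupSumNum = (slice_num_points_minus1 + 1) - nodeCount.

    slice_num_points_minus1 + 1 is the total number of points in the point
    cloud, and node_count is the total number of nodes in L3C2; every point
    beyond one per node must be a repeated point.
    """
    return (slice_num_points_minus1 + 1) - node_count

# e.g. a slice with 100 points spread over 95 L3C2 nodes has 5 repeated points
dup_sum_num = total_repeated_points(99, 95)
```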
  • After the decoding end determines the total number of repeated points in the point cloud, it starts decoding each node in L3C2, as shown in S103 below.
  • the decoding end decodes each node in L3C2 in the same manner.
  • the current node in L3C2 is taken as an example for illustration.
  • dupCount represents the number of duplicate points currently decoded, and dupCount is initialized to 0.
  • dupCount is 0, which is less than the total number of duplicate points dupSumNum in the point cloud.
  • the duplicate point information of the first node includes the number of duplicate points included in the first node. Assuming that the number of duplicate points included in the first node is a, dupCount is updated to a.
  • decode the second node in L3C2. Before decoding the second node, first determine whether dupCount (now a) is equal to the total number of duplicate points dupSumNum in the point cloud. If not, continue to decode the duplicate point information of the second node, update the current dupCount with the number of duplicate points included in the second node, and so on.
  • When the decoding end decodes the current node in L3C2, it first determines the number of duplicate points decoded before decoding the current node, that is, the number of decoded duplicate points dupCount, and then determines, based on dupCount, whether to decode the duplicate point information of the current node.
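  • The supervision described above can be sketched as a decode loop; here `read_dup_count` is a hypothetical stand-in for parsing a node's duplicate-point information from the geometry bitstream, and all names are illustrative:

```python
def decode_nodes(nodes, dup_sum_num, read_dup_count):
    """Sketch of S103/S104: track the number of decoded duplicate points
    (dupCount) and skip duplicate-point information once all duplicates
    in the point cloud have been decoded."""
    dup_count = 0  # number of duplicate points decoded so far
    dups_per_node = []
    for node in nodes:
        if dup_count == dup_sum_num:
            # all duplicates decoded: remaining nodes hold a single point,
            # so their duplicate-point information is not parsed at all
            dups_per_node.append(0)
        else:
            n = read_dup_count(node)
            dup_count += n
            dups_per_node.append(n)
    return dups_per_node

# toy bitstream: nodes 0 and 2 each carry one duplicate, dupSumNum = 2
counts = {0: 1, 1: 0, 2: 1, 3: 7}  # node 3's entry would never be read
result = decode_nodes([0, 1, 2, 3], 2, lambda n: counts[n])
```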
  • the decoding end records the number of decoded duplicate points, for example by accumulating dupPointNum into dupCount, where dupPointNum represents the number of duplicate points included in the node.
  • S104 Decode the current node according to the number of decoded duplicate points and the total number of duplicate points in the point cloud.
  • the decoding end compares the number of decoded duplicate points with the total number of duplicate points in the point cloud to decode the current node.
  • the implementation process of the above S104 includes the following situations:
  • the current node includes only one point, which is recorded as the first point. Then, the geometric reconstruction value of the first point is determined.
  • The above describes Case 1: if the number of decoded duplicate points is equal to the total number of duplicate points in the point cloud, the decoding end skips decoding the duplicate point information of the current node and determines the geometric reconstruction value of the first point included in the current node.
  • Case 2 If the duplicate point information includes the number of duplicate points included in the current node, the number of decoded duplicate points is less than the total number of duplicate points in the point cloud, and the current node is the last node in L3C2, the above S104 includes S104-C1 and S104-C2:
  • S104-C1 Determine the difference between the total number of repeated points in the point cloud and the number of decoded repeated points as the number of repeated points included in the current node.
  • When the decoding end determines that the current node is the last node of L3C2, it determines that the current node must include duplicate points, and the number of duplicate points included in the current node is the difference between the total number of duplicate points in the point cloud and the number of decoded duplicate points.
  • the decoding end also needs to determine the geometric reconstruction value of the first point included in the current node, wherein the process of determining the geometric reconstruction value of the first point can be specifically referred to the description of the following steps 11 to 13, which will not be repeated here.
  • Case 3 If the number of decoded duplicate points is less than the total number of duplicate points in the point cloud, and the current node is not the last node in L3C2, the above S104 includes S104-B1 and S104-B2:
  • When the encoder determines that the number of repeated points of the encoded points corresponding to the current node is less than the total number of repeated points of the point cloud, and the current node is not the last node in L3C2, it determines that the current node may have repeated points, and writes the repeated point information of the current node into the geometric code stream.
  • When the decoder determines that the number of repeated points of the decoded points corresponding to the current node is less than the total number of repeated points of the point cloud, and the current node is not the last node in L3C2, the decoder decodes the geometric code stream to obtain the repeated point information of the current node, and based on the repeated point information, determines the geometric reconstruction values of the N repeated points included in the current node.
  • the embodiment of the present application does not limit the specific content included in the repeated point information.
  • the repeated point information includes the number of repeated points included in the current node.
  • the repeated point information includes the number of repeated points included in the current node, and at least one first flag, where the first flag is used to indicate whether the coordinates of the current point in the current node are the same as those of the previous point in the second coordinate system.
  • the current node includes three points, which are recorded as point 1, point 2 and point 3 respectively.
  • Point 2 and point 3 each correspond to a first flag, wherein the first flag of point 2 is used to indicate whether the coordinates of point 2 and point 1 in the second coordinate system are the same, and the first flag of point 3 is used to indicate whether the coordinates of point 3 and point 2 in the second coordinate system are the same.
  • After the decoding end determines the repeated point information of the current node, it executes step S104-B2 to determine the geometric reconstruction value of each of the N repeated points included in the current node based on the repeated point information.
  • the geometric reconstruction values of all the points included in the current node are determined with reference to the above method of determining the geometric reconstruction value of the first point.
  • the above S104-B2 includes the following steps:
  • When the encoder encodes the current node, the current node includes N+1 points, of which N points are repeated with the first point, and thus it is determined that the current node includes N repeated points.
  • the encoder first encodes the first point in the current node, for example, the coordinate residual value of the first point in the first coordinate system, the coordinate residual value of the first point in the second coordinate system, and the prediction mode corresponding to the first point and the order o(P) in the single chain are encoded.
  • the repeated point information corresponding to the current node is determined, and the repeated point information is encoded into the bitstream.
  • the decoding end first determines the geometric reconstruction value of the first point in the current node.
  • the process of the decoding end determining the geometric reconstruction value of the first point in the current node can refer to the detailed description of the following steps 11 to 13, which will not be repeated here.
  • the geometric reconstruction value of each of the N repeated points is determined based on the repeated point information of the current node and the geometric reconstruction value of the first point.
  • the repeated point information is parsed, and the first flag corresponding to each repeated point is obtained from it. If the first flag of each repeated point indicates that the coordinate value of that repeated point in the second coordinate system is the same as that of the previous point, and therefore also the same as that of the first point (for example, the value of the first flag of each repeated point is 1), the decoding end can directly determine the geometric reconstruction value of the first point as the geometric reconstruction value of all N repeated points.
  • the decoding end decodes each of the N repeated points one by one.
  • the above S104-B22 includes the following steps:
  • S104-B22-1 For the i-th repeated point among the N repeated points, parse the repeated point information to obtain a first flag corresponding to the i-th repeated point, where the first flag is used to indicate whether the coordinates of the i-th repeated point and the i-1-th repeated point in the second coordinate system are the same, where i is a positive integer less than or equal to N, and if i is 1, the i-1-th repeated point is the first point;
  • the method for determining each of the N repeated points is the same.
  • the i-th repeated point among the N repeated points is used as an example for explanation.
  • the first flag corresponding to the i-th repeated point is obtained from the repeated point information of the current node.
  • the first flag indicates whether the i-th repeated point and the previous point, i.e., the i-1-th repeated point, have the same coordinates in the second coordinate system.
  • the decoding end determines the geometric reconstruction value of the ith repeated point based on the first flag corresponding to the ith repeated point.
  • the geometric reconstruction value of the i-1-th repeated point is determined as the geometric reconstruction value of the i-th repeated point.
  • the decoding end needs to re-determine the geometric reconstruction value of the i-th repeated point.
  • the decoding end decodes the geometric code stream to obtain the coordinate residual value of the i-th repeated point in the second coordinate system, the coordinate residual value of the i-th repeated point in the first coordinate system, and the prediction mode corresponding to the i-th repeated point. It determines the coordinate prediction value of the i-th repeated point in the first coordinate system based on the prediction mode, and determines the coordinate reconstruction value of the i-th repeated point in the first coordinate system based on that coordinate prediction value and the coordinate residual value in the first coordinate system. It then performs coordinate conversion on the coordinate reconstruction value in the first coordinate system, adds the result to the coordinate residual value of the i-th repeated point in the second coordinate system to obtain the coordinate reconstruction value of the i-th repeated point in the second coordinate system, and determines that coordinate reconstruction value as the geometric reconstruction value of the i-th repeated point.
  • the decoding end can determine the geometric reconstruction value of each of the N repeated points.
  • the number of decoded repeated points is also updated based on the number of repeated points N included in the current node, for example, the sum of the number of repeated points included in the current node and the number of decoded repeated points is determined as the new number of decoded repeated points. Based on the new number of decoded repeated points, the next node is decoded.
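  • The per-point reconstruction of steps S104-B22-1 onward can be sketched as follows; `full_decode` is a hypothetical stand-in for the residual/prediction-mode decoding path used when a first flag indicates differing coordinates, and the coordinate tuples are illustrative:

```python
def reconstruct_repeated_points(first_point, flags, full_decode):
    """Sketch of S104-B22: for the i-th repeated point, a first flag tells
    whether its coordinates in the second coordinate system equal those of
    the (i-1)-th point (the first point when i == 1).  If so, the previous
    reconstruction is copied; otherwise the point is fully decoded."""
    recon = []
    prev = first_point
    for i, same_as_prev in enumerate(flags, start=1):
        point = prev if same_as_prev else full_decode(i)
        recon.append(point)
        prev = point
    return recon

first = (10, 20, 30)
# three repeated points: the first two share coordinates with their predecessor
points = reconstruct_repeated_points(first, [True, True, False],
                                     lambda i: (10, 20, 31))
```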
  • The following describes the process of determining the geometric reconstruction value of the single point included in the current node (i.e., the first point) in case 1, the geometric reconstruction value of the first point when the current node includes multiple points in cases 2 and 3, and the geometric reconstruction value of the i-th repeated point among the N repeated points included in the current node in case 3. That is, the target point in the following steps 11 to 13 can be understood as the first point in case 1, the first point in cases 2 and 3, or the i-th repeated point in case 3.
  • Step 11 determine the predicted coordinate value of the target point in the first coordinate system.
  • the coordinate prediction value of the target point in the first coordinate system is determined based on the coordinate reconstruction values of already decoded points in the first coordinate system. For example, the arithmetic mean or weighted mean of the coordinate reconstruction values of one or more decoded points before the target point in the first coordinate system is determined as the coordinate prediction value of the target point in the first coordinate system.
  • step 11 includes the following steps: step 11-1 and step 11-2:
  • Step 11-1 decoding the geometric code stream to obtain the prediction mode corresponding to the target point
  • Step 11-2 Based on the prediction mode, determine the coordinate prediction value of the target point in the first coordinate system.
  • When encoding, the encoder encodes the prediction mode corresponding to the target point into the geometric code stream, so that the decoder can decode the geometric code stream to obtain the prediction mode corresponding to the target point, and then determine the coordinate prediction value of the target point in the first coordinate system based on the prediction mode.
  • the embodiment of the present application does not limit the specific type of prediction mode corresponding to the target point.
  • the laser radar for scanning the point cloud includes N lasers, each laser corresponds to a prediction list, and the corresponding L3C2 corresponds to N prediction lists.
  • the coordinates of the target point in the first coordinate system are (r, φ, i), where i represents the laser index corresponding to the target point, so that the prediction list corresponding to laser i can be determined as the prediction list corresponding to the target point.
  • the target point is predicted using each of the M prediction values, and the cost corresponding to each of the M prediction values is determined; the index, in the prediction list, of the prediction value with the lowest cost is then determined as the prediction mode corresponding to the target point, and encoded.
  • the prediction mode corresponding to the target point is the index of the prediction value with the lowest cost in the prediction list corresponding to the target point.
  • the encoding end determines the order of the target point in the L3C2 single chain based on the i of the target point, that is, o(P), and encodes o(P) and the prediction mode corresponding to the target point into the bitstream.
  • the decoding end decodes the bitstream to obtain o(P) and the prediction mode corresponding to the target point, then determines the i component of the target point based on o(P), denoted i_rec, and determines the prediction list corresponding to the target point based on i_rec. Then, in the prediction list, the prediction value whose index is the prediction mode corresponding to the target point is determined as the coordinate prediction value of the target point in the first coordinate system.
  • the predicted value includes r_pred and φ_pred.
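  • The lookup in steps 11-1 and 11-2 can be sketched as follows; the per-laser prediction lists and their contents are hypothetical placeholders:

```python
def predict_from_list(prediction_lists, laser_index, pred_mode):
    """Sketch of steps 11-1/11-2: each of the N lasers keeps its own
    prediction list; the decoded prediction mode is an index into the list
    of laser i_rec, and the indexed entry (r_pred, phi_pred) becomes the
    coordinate prediction of the target point in the first coordinate
    system."""
    return prediction_lists[laser_index][pred_mode]

# two lasers, each with a tiny (hypothetical) prediction list of (r, phi)
lists = [
    [(0.0, 0.0), (5.0, 0.1)],   # laser 0
    [(0.0, 0.0), (7.5, 0.2)],   # laser 1
]
r_pred, phi_pred = predict_from_list(lists, 1, 1)
```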
  • the embodiment of the present application does not limit the specific type of the first coordinate system.
  • the first coordinate system is a cylindrical coordinate system.
  • step 12 is executed.
  • Step 12 Determine the coordinate reconstruction value of the target point in the first coordinate system based on the coordinate prediction value of the target point in the first coordinate system.
  • the reconstructed coordinate value of the target point in the first coordinate system is determined based on the predicted coordinate value of the target point in the first coordinate system.
  • the predicted coordinate value of the target point in the first coordinate system is determined as the reconstructed coordinate value of the target point in the first coordinate system.
  • the above step 12 includes the following steps 12-1 to 12-3:
  • Step 12-1 decoding the geometric code stream to obtain the coordinate residual value of the target point after quantization in the first coordinate system
  • Step 12-2 dequantize the quantized coordinate residual value to obtain the coordinate residual value of the target point in the first coordinate system
  • Step 12-3 Based on the coordinate prediction value and the coordinate residual value of the target point in the first coordinate system, obtain the coordinate reconstruction value of the target point in the first coordinate system.
  • When encoding the target point, the encoder determines the coordinate residual value of the target point in the first coordinate system based on the coordinate prediction value of the target point in the first coordinate system; for example, the difference between the coordinate value of the target point in the first coordinate system and the coordinate prediction value is determined as the coordinate residual value of the target point in the first coordinate system. Then, the encoder quantizes the coordinate residual value of the target point in the first coordinate system and encodes it into the bitstream.
  • the decoding end decodes the geometric code stream to obtain the quantized coordinate residual value of the target point in the first coordinate system, and dequantizes the quantized coordinate residual value to obtain the coordinate residual value of the target point in the first coordinate system.
  • the decoding end before dequantizing the coordinate residual value of the target point after quantization in the first coordinate system, the decoding end first determines a quantization factor, and dequantizes the coordinate residual value of the target point after quantization in the first coordinate system based on the determined quantization factor.
  • Suppose the geometric code stream is decoded and the quantized coordinate residual values of the target point in the first coordinate system are Q(r_res) and Q(φ_res). Based on the above steps, the coordinate prediction values of the target point in the first coordinate system are r_prd and φ_prd.
  • the decoding end determines the quantization factor Δr corresponding to Q(r_res) by the following formula (14):
  • where ΔQ is determined based on a preset quantization parameter.
  • the quantized coordinate residual value Q(r_res) of the target point in the first coordinate system is dequantized to obtain the coordinate residual value r_res of the target point in the first coordinate system.
  • the decoding end obtains a coordinate reconstruction value r_rec of the target point in the first coordinate system based on the coordinate prediction value r_prd and the coordinate residual value r_res of the target point in the first coordinate system.
  • the decoding end determines the sum of the coordinate prediction value r_prd and the coordinate residual value r_res of the target point in the first coordinate system as the coordinate reconstruction value r_rec of the target point in the first coordinate system.
  • After the decoding end determines the coordinate reconstruction value r_rec of the target point in the first coordinate system according to the above method, it determines, based on r_rec, the quantization factor Δφ corresponding to Q(φ_res).
  • For example, the decoding end determines the quantization factor Δφ corresponding to Q(φ_res) by the following formula (15):
  • After the decoding end determines the quantization factor Δφ corresponding to Q(φ_res), it dequantizes Q(φ_res) based on Δφ to obtain the coordinate residual value φ_res of the target point in the first coordinate system.
  • Based on the coordinate prediction value φ_prd and the coordinate residual value φ_res of the target point in the first coordinate system, the decoding end obtains the coordinate reconstruction value φ_rec of the target point in the first coordinate system.
  • For example, the decoding end determines the sum of the coordinate prediction value φ_prd and the coordinate residual value φ_res of the target point in the first coordinate system as the coordinate reconstruction value φ_rec.
  • In this way, the coordinate reconstruction value (r_rec, φ_rec, i_rec) of the target point in the first coordinate system can be determined.
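  • Step 12 can be sketched as below. A plain uniform scalar dequantizer stands in for formulas (14) and (15), whose exact forms (Δr derived from the quantization parameter, Δφ additionally depending on r_rec) are not reproduced here; all numeric values are illustrative:

```python
def reconstruct_first_coords(r_prd, phi_prd, q_r_res, q_phi_res,
                             delta_r, delta_phi):
    """Sketch of step 12: dequantize the residuals with their quantization
    factors, then add them to the predictions to obtain the reconstruction
    (r_rec, phi_rec) in the first coordinate system."""
    r_res = q_r_res * delta_r        # inverse uniform scalar quantization
    r_rec = r_prd + r_res
    phi_res = q_phi_res * delta_phi  # delta_phi would be derived from r_rec
    phi_rec = phi_prd + phi_res
    return r_rec, phi_rec

r_rec, phi_rec = reconstruct_first_coords(100.0, 0.5, 4, 2, 0.25, 0.01)
```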
  • Step 13 Determine the coordinate reconstruction value of the target point in the second coordinate system based on the coordinate reconstruction value of the target point in the first coordinate system.
  • the embodiment of the present application does not limit the specific types of the first coordinate system and the second coordinate system.
  • the embodiment of the present application does not limit the specific method of determining the coordinate reconstruction value of the target point in the second coordinate system based on the coordinate reconstruction value of the target point in the first coordinate system.
  • Mode 1 The decoding end converts the coordinate reconstruction value of the target point in the first coordinate system into the coordinate reconstruction value of the target point in the second coordinate system based on the conversion relationship between the first coordinate system and the second coordinate system.
  • the decoding end determines the coordinate reconstruction value of the target point in the second coordinate system through the following steps 13-1 to 13-3:
  • Step 13-1 performing coordinate transformation on the coordinate reconstruction value of the target point in the first coordinate system to obtain the coordinate prediction value of the target point in the second coordinate system.
  • the coordinate reconstruction value of the target point in the first coordinate system is transformed to obtain the coordinate prediction value of the target point in the second coordinate system.
  • the coordinate prediction value of the target point in the second coordinate system is determined based on the following formula (16):
  • where θ is the elevation angle of the laser corresponding to i_rec,
  • and zLaser is the height of the laser corresponding to i_rec in the vertical direction.
  • Step 13-2 Decode the geometric code stream to obtain the coordinate residual value of the target point in the second coordinate system.
  • When encoding the target point, the encoding end determines the coordinate reconstruction value of the target point in the first coordinate system based on the coordinate prediction value of the target point in the first coordinate system, and transforms the coordinate reconstruction value in the first coordinate system to obtain the coordinate prediction value of the target point in the second coordinate system. Then, the encoding end determines the coordinate residual value of the target point in the second coordinate system based on the coordinate prediction value and the coordinate value of the target point in the second coordinate system, and writes the coordinate residual value of the target point in the second coordinate system into the geometric code stream.
  • the decoding end decodes the geometric code stream to obtain the coordinate residual value of the target point in the second coordinate system.
  • the decoder parses the quantized coordinate residual value of the target point in the second coordinate system from the geometric code stream, and then dequantizes it to obtain the coordinate residual value of the target point in the second coordinate system.
  • Step 13-3 Based on the predicted coordinate value and the residual coordinate value of the target point in the second coordinate system, obtain the reconstructed coordinate value of the target point in the second coordinate system.
  • the sum of the predicted coordinate value and the residual coordinate value of the target point in the second coordinate system is determined as the reconstructed coordinate value of the target point in the second coordinate system.
  • For example, the coordinate reconstruction value (x, y, z) of the target point in the second coordinate system is determined as (x, y, z) = (x_pred + r_x, y_pred + r_y, z_pred + r_z),
  • where (x_pred, y_pred, z_pred) is the coordinate prediction value and (r_x, r_y, r_z) is the coordinate residual value of the target point in the second coordinate system.
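  • Step 13 can be sketched as below. The cylindrical-to-Cartesian conversion used here (x = r·cos(φ), y = r·sin(φ), z = r·tanθ + zLaser) is an assumed form of formula (16), using the elevation tangent and vertical height of the laser i_rec; all numeric values are illustrative:

```python
import math

def reconstruct_second_coords(r_rec, phi_rec, tan_theta, z_laser, res_xyz):
    """Sketch of step 13: convert the reconstruction in the first
    (cylindrical) coordinate system into a prediction in the second
    (Cartesian) system, then add the decoded Cartesian residual."""
    x_pred = r_rec * math.cos(phi_rec)
    y_pred = r_rec * math.sin(phi_rec)
    z_pred = r_rec * tan_theta + z_laser  # assumed form of formula (16)
    rx, ry, rz = res_xyz
    return (x_pred + rx, y_pred + ry, z_pred + rz)

xyz = reconstruct_second_coords(10.0, 0.0, 0.5, 1.0, (0.5, -0.5, 0.0))
```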
  • the above embodiment is described by taking the geometric decoding process of the current node in L3C2 as an example.
  • the decoding process of other nodes in L3C2 can refer to the decoding process of the current node, and then the reconstructed point cloud geometric information can be obtained.
  • the point cloud decoding method decodes the geometric code stream of the point cloud to obtain the total number of L3C2 nodes of the point cloud and the total number of points of the point cloud. Based on the total number of points of the point cloud and the total number of L3C2 nodes of the point cloud, the total number of repeated points included in the point cloud is determined. In this way, during decoding, the number of decoded repeated points is recorded in real time, and the number of decoded repeated points is compared with the total number of repeated points included in the point cloud to determine whether to decode the repeated point information of the node when decoding the current node.
  • the number of decoded repeated points is equal to the total number of repeated points in the point cloud, indicating that the repeated points in the point cloud have been decoded, and the remaining nodes do not include repeated points, so that there is no need to decode the repeated point information of subsequent nodes, thereby reducing the decoding complexity of the point cloud, saving decoding time, and improving decoding efficiency.
  • the above takes the decoding end as an example to introduce in detail the point cloud decoding method provided in the embodiment of the present application.
  • the following takes the encoding end as an example to introduce the point cloud encoding method provided in the embodiment of the present application.
  • Fig. 14 is a schematic diagram of a point cloud coding method according to an embodiment of the present application.
  • the point cloud coding method according to the embodiment of the present application can be implemented by the point cloud coding device shown in Fig. 3 or Fig. 4 above.
  • the point cloud encoding method of the embodiment of the present application includes:
  • the encoding end reorders the points in the point cloud based on the coordinate information of the points in the point cloud, constructs a single chain structure based on the coordinate information of the reordered points, and then obtains the L3C2 structure of the point cloud.
  • the embodiment of the present application does not limit the method of reordering the point cloud.
  • the voxelized point cloud is reordered to construct a more efficient single chain structure, and the default sorting method is to sort according to the scanning order of the lidar.
  • the coordinate value of each point in the point cloud in the second coordinate system is converted into a coordinate value in the third coordinate system; for example, the third coordinate system is a polar coordinate system, and the corresponding coordinate value is (r, φ, tanθ).
  • the points are sorted according to the elevation tangent value tanθ, the azimuth angle φ, and the radius r in the polar coordinate system.
  • the third coordinate system includes a cylindrical coordinate system, a polar coordinate system, and may also include other coordinate systems.
  • coordinate transformation is performed on the sorted point cloud. Specifically, the point cloud is traversed according to the sorted result, and the coordinate values of the points in the point cloud in the second coordinate system are transformed into coordinate values in the first coordinate system and stored.
  • the first coordinate system is a cylindrical coordinate system and the second coordinate system is a Cartesian coordinate system
  • the coordinate value (x, y, z) of the point in the point cloud in the Cartesian coordinate system is converted to the coordinate value (r, φ, i) in the cylindrical coordinate system.
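  • The encoder-side conversion can be sketched as follows. The r and φ components follow the usual cylindrical conversion; the laser index i is chosen here as the laser whose elevation best explains z, which is a simplified, hypothetical assignment rule:

```python
import math

def cartesian_to_cylindrical(x, y, z, laser_tans, laser_heights):
    """Sketch: (x, y, z) -> (r, phi, i).  laser_tans holds tan(theta) per
    laser and laser_heights the vertical offset zLaser per laser; the index
    of the laser whose predicted z is closest is picked as i."""
    r = math.hypot(x, y)
    phi = math.atan2(y, x)
    # pick the laser whose predicted z (r * tan(theta) + zLaser) is closest
    i = min(range(len(laser_tans)),
            key=lambda k: abs(r * laser_tans[k] + laser_heights[k] - z))
    return r, phi, i

r, phi, i = cartesian_to_cylindrical(3.0, 4.0, 2.6, [0.0, 0.5], [0.0, 0.1])
```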
  • a single chain structure is constructed based on the coordinate values of the points in the point cloud in the first coordinate system.
  • the first coordinate system is a cylindrical coordinate system
  • the single chain structure constructed by the point cloud is shown in FIG. 7A .
  • Assuming that the rotation interval of each Laser is Δφ, the single-chain structure shown in FIG. 7A is regularized in the vertical direction by using Δφ to obtain the single-chain structure shown in FIG. 7B.
  • the above formula (5) may be used for regularization.
  • encoding is performed based on the regularized single-chain structure.
  • the encoding end constructs the L3C2 structure of the point cloud based on the geometric information of the point cloud.
  • the L3C2 structure is a chain structure, which is composed of at least one single chain structure, and each single chain structure includes at least one node.
  • a node includes at least one point in the point cloud, that is, in the L3C2 encoding, the points in the point cloud are divided into nodes in L3C2.
  • these points with the same coordinates are divided into the same node of L3C2, so that the node includes repeated points.
  • When constructing L3C2 of a point cloud, it is necessary to perform coordinate conversion on the points in the point cloud.
  • the coordinates of the points with different coordinates in the second coordinate system may become the same when converted to the first coordinate system.
  • the points with the same coordinates in the first coordinate system will be divided into one node, so that the node includes duplicate points.
  • the encoding end needs to encode the repeated point information of each node, which will increase the complexity of encoding, waste encoding time and reduce encoding efficiency.
  • the total number of duplicate points included in the point cloud is first determined, and during the encoding process, the number of encoded duplicate points is counted, and then before encoding each point, it is first determined whether the number of currently encoded duplicate points is equal to the total number of duplicate points included in the point cloud.
  • the number of currently encoded duplicate points is equal to the total number of duplicate points included in the point cloud, it means that the duplicate points in the point cloud have been encoded, and the remaining nodes to be encoded in L3C2 do not include duplicate nodes. In this way, when these nodes are subsequently encoded, the duplicate point information of these nodes will no longer be encoded, thereby reducing the encoding complexity of the point cloud, saving encoding time, and improving encoding efficiency.
  • the embodiment of the present application does not limit the specific method for the encoder to obtain the total number of L3C2 nodes of the point cloud and the total number of points of the point cloud.
  • When constructing L3C2, the encoder counts the number of nodes included in L3C2.
  • the encoding end determines the number of single chains included in L3C2 and the number of nodes included in each single chain, and obtains the total number of nodes of L3C2 based on the number of single chains and the number of nodes included in each single chain. For example, the sum of the number of nodes included in each single chain is determined as the total number of nodes of L3C2.
  • the numbers of nodes on the single chains included in the point cloud can be added together to obtain the total number of nodes nodeCount of L3C2.
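The original instructions for this summation are not reproduced in this excerpt; a minimal illustrative sketch (function and variable names are assumptions, not from the source) could look like:

```python
def total_l3c2_nodes(nodes_per_chain):
    """Sum the node counts of all single chains to obtain nodeCount."""
    # nodes_per_chain: number of nodes on each single chain of L3C2
    return sum(nodes_per_chain)
```

For example, three single chains with 3, 4 and 5 nodes give nodeCount = 12.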
  • the method at the encoding end further includes: writing the number of single chains included in L3C2 and the number of nodes included in each single chain into the geometric code stream of the point cloud.
  • the point cloud file includes the total number of points of the point cloud, so that the encoding end obtains the total number of points of the point cloud by receiving the point cloud file.
  • the encoder writes the total number of points in the point cloud into the geometry code stream.
  • the encoding end writes the total number of points of the point cloud into the geometric data unit.
  • the encoder subtracts 1 from the total number of points in the point cloud to obtain a first value, and writes the first value into the geometric data unit. For example, as shown in Table 2 above.
  • the encoder obtains the total number of nodes of L3C2 of the point cloud and the total number of points of the point cloud, and then executes the following step S203.
  • S203 Determine the total number of repeated points in the point cloud according to the total number of points in the point cloud and the total number of nodes in L3C2.
  • after the encoder determines the total number of L3C2 nodes and the total number of points of the point cloud, it determines the total number of repeated points in the point cloud based on these two totals, and then uses the total number of repeated points in the point cloud as supervision for subsequent encoding.
  • the embodiment of the present application does not limit the specific method in which the encoder determines the total number of repeated points in the point cloud based on the total number of nodes of L3C2 of the point cloud and the total number of points in the point cloud.
  • the difference between the total number of points in the point cloud and the total number of nodes in L3C2 is directly determined as the total number of repeated points in the point cloud.
  • the encoder determines the total number of repeated points in the point cloud through the above formula (13).
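Since every L3C2 node contributes exactly one non-repeated point, the computation behind formula (13) reduces to a subtraction; a hedged sketch (names are illustrative):

```python
def total_repeated_points(point_count, node_count):
    """dupSumNum: points beyond one per L3C2 node are duplicates."""
    return point_count - node_count
```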
  • after the encoder determines the total number of repeated points in the point cloud, it starts to encode each point in L3C2, as shown in S204 below.
  • the encoding end encodes each node in L3C2 in the same way.
  • the current node in L3C2 is taken as an example for illustration.
  • dupCount is used to represent the number of duplicate points currently encoded, and dupCount is initialized to 0.
  • when the first node in L3C2 is encoded, dupCount is 0, which is less than the total number of duplicate points dupSumNum in the point cloud.
  • the duplicate point information of the first node needs to be encoded.
  • the duplicate point information of the first node includes the number of duplicate points included in the first node. Assuming that the number of duplicate points included in the first node is a, dupCount is updated to a. Next, encode the second node in L3C2.
  • before encoding the second node, it is determined whether dupCount, now equal to a, is equal to the total number of duplicate points dupSumNum in the point cloud. If not, the duplicate point information of the second node is also encoded, the current dupCount is updated with the number of duplicate points included in the second node, and so on.
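The supervision logic walked through above can be sketched as follows; the node representation and function names are assumptions for illustration, not the codec's actual interfaces:

```python
def encode_l3c2_dup_info(dup_per_node, dup_sum_num):
    """For each node, decide whether its duplicate point info is coded.

    dup_per_node: number of duplicate points in each L3C2 node, in coding order.
    dup_sum_num:  total number of duplicate points in the point cloud.
    Returns the per-node decisions and the final dupCount.
    """
    dup_count = 0                      # duplicates coded so far
    coded = []
    for n_dup in dup_per_node:
        if dup_count < dup_sum_num:    # duplicates may remain: code the info
            coded.append(True)
            dup_count += n_dup         # update dupCount with this node's count
        else:                          # all duplicates coded: skip the info
            coded.append(False)
    return coded, dup_count
```

With dup_per_node = [2, 1, 0, 0] and dupSumNum = 3, the duplicate point info of the first two nodes is coded, after which dupCount reaches 3 and the remaining nodes skip it.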
  • when encoding the current node in L3C2, the encoding end first determines the number of repeated points that have been encoded before the current node, that is, the number of encoded repeated points dupCount, and then determines whether to encode the repeated point information of the current node based on dupCount.
  • the encoder records the number of encoded duplicate points by accumulating, for each encoded node, dupPointNum, where dupPointNum represents the number of duplicate points included in the node.
  • the encoder compares the number of encoded duplicate points with the total number of duplicate points in the point cloud to encode the current node.
  • the implementation process of the above S205 includes the following situations:
  • S205-A Determine the coordinate residual value of the first point included in the current node, and write the coordinate residual value of the first point into the geometric code stream of the point cloud.
  • the current node includes only one point, which is recorded as the first point. Then, the coordinate residual value of the first point is determined, and the coordinate residual value of the first point is written into the geometric code stream of the point cloud.
  • the embodiment of the present application does not limit the specific content included in the repeated point information.
  • the encoding end when encoding the current node, determines the number of encoded duplicate points corresponding to the current node. If the number of encoded duplicate points corresponding to the current node is less than the total number of duplicate points in the point cloud, it means that the current node may include duplicate points. At this time, in order to accurately encode, it is necessary to determine the duplicate point information of the current node and encode the duplicate point information of the current node.
  • the repeated point information includes the number of repeated points included in the current node. Thus, by determining the number of repeated points included in the current node, the repeated point information of the current node can be determined.
  • the repeated point information includes the number of repeated points included in the current node and at least one first flag, where the first flag is used to indicate whether the coordinates of the current point in the current node are the same as those of the previous point in the second coordinate system.
  • the current node includes three points, which are recorded as point 1, point 2 and point 3 respectively.
  • Point 2 and point 3 each correspond to a first flag, wherein the first flag of point 2 is used to indicate whether the coordinates of point 2 and point 1 in the second coordinate system are the same, and the first flag of point 3 is used to indicate whether the coordinates of point 3 and point 2 in the second coordinate system are the same.
  • the first flag is used to indicate whether the coordinates of the current point in the current node are the same as the coordinates of the previous point in the second coordinate system; then the above S205-B1 includes the following S205-B11 and S205-B12:
  • S205-B13 Determine the repeated point information of the current node based on the number N of repeated points included in the current node and the first flags respectively corresponding to the N repeated points.
  • the points that are repeated with the first point are determined. For example, if the current node includes 3 points, it is determined that the current node includes 2 repeated points.
  • first marks corresponding to the N repeated points are determined.
  • the coordinate value of the repeated point in the second coordinate system is compared with the coordinate value of the first point in the second coordinate system to determine the first mark corresponding to the repeated point. For example, if the coordinate value of the repeated point in the second coordinate system is the same as the coordinate value of the first point in the second coordinate system, the value of the first mark corresponding to the repeated point is set to the first value; if the coordinate value of the repeated point in the second coordinate system is different from the coordinate value of the first point in the second coordinate system, the value of the first mark corresponding to the repeated point is set to the second value.
  • the embodiment of the present application does not limit the specific values of the first numerical value and the second numerical value.
  • the first value is 1.
  • the second value is 0.
  • the first mark corresponding to the i-th repeated point is determined, where i is a positive integer greater than 0 and less than or equal to N. If i is 1, the i-1-th repeated point is the first point.
  • the value of the first mark is set to a first numerical value, and the first numerical value indicates that the coordinate value of the i-th repeated point in the second coordinate system is the same as the coordinate value of the i-1-th repeated point in the second coordinate system.
  • the value of the first mark is set to a second numerical value, where the second numerical value indicates that the coordinate value of the i-th repeated point in the second coordinate system is different from the coordinate value of the i-1-th repeated point in the second coordinate system.
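Per the convention above (first value 1 for identical coordinates, second value 0 for different coordinates), the flag derivation can be sketched as follows (names are illustrative):

```python
FIRST_VALUE, SECOND_VALUE = 1, 0

def first_flag(prev_coord, cur_coord):
    """Flag whether the i-th repeated point has the same coordinates as the
    (i-1)-th point in the second coordinate system."""
    return FIRST_VALUE if cur_coord == prev_coord else SECOND_VALUE
```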
  • the embodiment of the present application further includes: determining the coordinate residual value of the i-th repeated point, and writing the coordinate residual value of the i-th repeated point into the geometric code stream.
  • the determination of the coordinate residual value of the i-th repeated point can refer to the description of the following steps 21 to 25, which will not be repeated here.
  • the number N of repeated points included in the current node and the first mark corresponding to each repeated point are determined as the repeated point information of the current node.
  • the encoding end uses the number N of repeated points included in the current node to update the number of encoded repeated points, for example, the sum of the number of repeated points included in the current node and the number of encoded repeated points is determined as the new number of encoded repeated points.
  • step S205-B2 is executed.
  • the coordinate prediction value of the first point in the current node is determined, and the coordinate residual value of the first point is determined based on the coordinate prediction value and the coordinate value.
  • S205-C3 write the coordinate residual value of the first point and the duplicate point information of the current node into the geometry code stream.
  • when the encoder determines that the current node is the last node of L3C2, it is determined that the current node must include duplicate points, and the number of duplicate points included in the current node is the difference between the total number of duplicate points in the point cloud and the number of encoded duplicate points.
  • in this way, the decoder can determine the difference between the total number of duplicate points in the point cloud and the number of decoded duplicate points as the number of duplicate points included in the current node, and the encoder thus skips encoding the duplicate point information of the current node, thereby reducing encoding complexity, saving encoding time, and improving encoding efficiency.
  • the encoding end also needs to determine the coordinate residual value of the first point included in the current node, and write the coordinate residual value of the first point into the geometric code stream.
  • the process of determining the coordinate residual value of the first point can be specifically referred to the description of steps 21 to 25 below, which will not be repeated here.
  • the following describes the process of determining the coordinate residual value of the single point included in the current node, i.e., the first point, in the above-mentioned case 1; the process of determining, in case 2, the coordinate residual value of the first point when the current node includes multiple points, as well as the coordinate residual value of the i-th repeated point among the N repeated points included in the current node; and the process of determining the coordinate residual value of the first point of the current node in case 3. That is, the target point in the following steps 21 to 25 can be understood as the first point in case 1, and can also be understood as the first point or the i-th repeated point in case 2.
  • Step 21 Determine the predicted coordinate value of the target point in the first coordinate system.
  • the predicted coordinate value of the target point in the first coordinate system is determined. For example, the arithmetic mean or weighted mean of the coordinate values of one or more encoded points before the target point in the first coordinate system is determined as the predicted coordinate value of the target point in the first coordinate system.
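The arithmetic or weighted mean prediction described above might be sketched as follows (a simplification for illustration; the codec's actual predictor lists are richer):

```python
def predict_coordinate(prev_coords, weights=None):
    """Arithmetic (or weighted) mean of previously encoded coordinates."""
    if weights is None:
        weights = [1.0] * len(prev_coords)   # plain arithmetic mean
    total_w = sum(weights)
    dims = len(prev_coords[0])
    return tuple(
        sum(w * c[d] for w, c in zip(weights, prev_coords)) / total_w
        for d in range(dims)
    )
```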
  • step 21 includes the following steps 21-1 and 21-2:
  • Step 21-1 determining the prediction mode corresponding to the target point
  • Step 21-2 Based on the prediction model, determine the coordinate prediction value of the target point in the first coordinate system.
  • the prediction mode corresponding to the target point is a default mode.
  • the laser radar for scanning the point cloud includes N lasers, each laser corresponds to a prediction list, and the corresponding L3C2 corresponds to N prediction lists.
  • the coordinates of the target point in the first coordinate system are (r, φ, i), where i represents the laser index corresponding to the target point.
  • the prediction list corresponding to i of the target point can be determined as the prediction list corresponding to the target point.
  • the prediction list corresponding to the target point includes M prediction values, the target point is predicted using these M prediction values, and the cost corresponding to each of the M prediction values is determined.
  • the index of the prediction value with the lowest cost in the prediction list is determined as the prediction mode corresponding to the target point.
  • the prediction mode corresponding to the target point is the index of the prediction value with the lowest cost in the prediction list corresponding to the target point.
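Selecting the prediction mode as the index of the lowest-cost predictor can be sketched as follows (the cost function here is an assumption; the codec defines its own):

```python
def select_prediction_mode(pred_list, target, cost):
    """Return the index (prediction mode) of the lowest-cost predictor."""
    costs = [cost(p, target) for p in pred_list]
    return costs.index(min(costs))
```

For example, with predictors [10, 3, 7], target 4 and an absolute-difference cost, mode 1 is selected.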
  • the coordinate prediction value of the target point in the first coordinate system is determined.
  • the prediction value corresponding to the index of the prediction mode in the prediction list corresponding to the target point i is determined as the coordinate prediction value of the target point in the first coordinate system.
  • the prediction value includes r_pred and φ_pred.
  • the encoder writes the prediction mode corresponding to the target point into the geometry bitstream.
  • the encoder determines the predicted coordinate value of the target point in the first coordinate system based on the above steps, it executes the following step 22.
  • Step 22 Determine a residual value of the target point in the first coordinate system based on the predicted coordinate value of the target point in the first coordinate system and the coordinate value of the target point in the first coordinate system.
  • the difference between the coordinate value of the target point in the first coordinate system and the predicted coordinate value is determined as the residual value of the target point in the first coordinate system.
  • the encoding end quantizes the residual value of the target point in the first coordinate system to obtain the quantized residual value of the target point in the first coordinate system.
  • the quantization factor when the residual value of the target point in the first coordinate system is quantized, the quantization factor needs to be determined first.
  • the quantization factors Δr and Δφ are determined based on the above formulas (7) and (8). Then, the residual value r_res of the target point in the first coordinate system is quantized using Δr, and the residual value φ_res of the target point in the first coordinate system is quantized using Δφ.
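Uniform scalar quantization with a per-component step (such as Δr) might look like the following sketch; the actual quantization factors come from the patent's formulas (7) and (8), which are not reproduced in this excerpt:

```python
def quantize(residual, step):
    """Quantize a residual with quantization step `step`."""
    return round(residual / step)

def dequantize(quantized, step):
    """Inverse quantization of a quantized residual."""
    return quantized * step
```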
  • the coordinate residual value of the target point includes the coordinate residual value of the target point in the first coordinate system and the coordinate residual value of the target point in the second coordinate system. Based on the above method, the coordinate residual value of the target point in the first coordinate system is determined. Then, based on steps 23 to 25, the coordinate residual value of the target point in the second coordinate system is determined.
  • Step 23 Based on the predicted coordinate value of the target point in the first coordinate system, obtain the reconstructed coordinate value of the target point in the first coordinate system.
  • the predicted coordinate value of the target point in the first coordinate system is used as the reconstructed coordinate value of the target point in the first coordinate system.
  • the sum of the coordinate prediction value and the residual value of the target point in the first coordinate system is determined as the coordinate reconstruction value of the target point in the first coordinate system.
  • if the residual value of the target point in the first coordinate system has been quantized, the encoder performs inverse quantization on the quantized coordinate residual value of the target point in the first coordinate system to obtain the coordinate residual value of the target point in the first coordinate system, and obtains the coordinate reconstruction value of the target point in the first coordinate system based on the coordinate prediction value and the coordinate residual value. For example, the sum of the coordinate prediction value and the coordinate residual value of the target point in the first coordinate system is determined as the coordinate reconstruction value of the target point in the first coordinate system.
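The reconstruction path just described (inverse-quantize the residual, then add it to the prediction) can be sketched as:

```python
def reconstruct_coordinate(coord_pred, quantized_res, step):
    """Coordinate reconstruction in the first coordinate system."""
    residual = quantized_res * step   # inverse quantization
    return coord_pred + residual
```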
  • Step 24 performing coordinate transformation on the coordinate reconstruction value of the target point in the first coordinate system to obtain the coordinate reconstruction value of the target point in the second coordinate system.
  • different first coordinate systems correspond to different conversion relationships with the second coordinate system.
  • the coordinate reconstruction value of the target point in the first coordinate system is transformed with reference to the above formula (9) to obtain the coordinate reconstruction value of the target point in the second coordinate system.
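Assuming the first coordinate system is cylindrical (r, φ, plus an elevation or laser component) and the second is Cartesian (formula (9) itself is not reproduced in this excerpt), the conversion might look like:

```python
import math

def cylindrical_to_cartesian(r, phi, z):
    """Illustrative coordinate conversion; the codec's formula (9) may differ."""
    return (r * math.cos(phi), r * math.sin(phi), z)
```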
  • Step 25 Determine the coordinate residual value of the target point in the second coordinate system based on the coordinate reconstruction value and the coordinate value of the target point in the second coordinate system.
  • the difference between the coordinate value and the coordinate reconstruction value of the target point in the second coordinate system is determined as the coordinate residual value of the target point in the second coordinate system.
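The second-system residual (original coordinate minus converted reconstruction) can be sketched per component:

```python
def second_system_residual(coord, coord_rec):
    """Residual of the target point in the second coordinate system."""
    return tuple(c - cr for c, cr in zip(coord, coord_rec))
```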
  • the encoder writes the coordinate residual value of the target point in the first coordinate system and the coordinate residual value of the target point in the second coordinate system into the geometric code stream.
  • the encoder quantizes at least one of a coordinate residual value of the target point in the first coordinate system and a coordinate residual value of the target point in the second coordinate system, and then writes the quantized value into a geometric code stream.
  • the encoding end further determines the order of the target point in the single-link structure, that is, o(P), based on the above formula (6), and then encodes o(P).
  • the above embodiment is described by taking the geometric encoding process of the current node in L3C2 as an example.
  • the encoding process of other nodes in L3C2 can refer to the encoding process of the current node, and then the reconstructed point cloud geometric information can be obtained.
  • the point cloud encoding method determines the L3C2 structure of the point cloud, determines the total number of nodes of L3C2 of the point cloud, and the total number of points of the point cloud, and determines the total number of repeated points included in the point cloud based on the total number of points of the point cloud and the total number of nodes of L3C2 of the point cloud. In this way, during encoding, the number of encoded repeated points is recorded in real time, and the number of encoded repeated points is compared with the total number of repeated points included in the point cloud to determine whether to encode the repeated point information of the node when encoding the current node.
  • the number of encoded repeated points is equal to the total number of repeated points in the point cloud, indicating that the repeated points in the point cloud have been encoded, and the remaining nodes do not include repeated points, and thus there is no need to encode the repeated point information of subsequent nodes, thereby reducing the encoding complexity of the point cloud, saving encoding time, and improving encoding efficiency.
  • FIGS. 13 to 14 are merely examples of the present application and should not be construed as limitations to the present application.
  • the size of the sequence number of each process does not mean the order of execution, and the execution order of each process should be determined by its function and internal logic, and should not constitute any limitation on the implementation process of the embodiments of the present application.
  • the term "and/or” is merely a description of the association relationship of associated objects, indicating that three relationships may exist. Specifically, A and/or B can represent: A exists alone, A and B exist at the same time, and B exists alone.
  • the character "/" in this article generally indicates that the objects associated before and after are in an "or" relationship.
  • FIG. 15 is a schematic block diagram of a point cloud decoding device provided in an embodiment of the present application.
  • the point cloud decoding device 10 may include:
  • the first decoding unit 11 is used to decode the geometric code stream of the point cloud to obtain the total number of nodes of the low-delay and low-complexity coding model L3C2 of the point cloud and the total number of points of the point cloud;
  • a point number determination unit 12 configured to determine the total number of repeated points of the point cloud according to the total number of points of the point cloud and the total number of nodes of the L3C2;
  • a repeated point determination unit 13 used to determine the number of decoded repeated points when decoding the current node in the L3C2;
  • the second decoding unit 14 is used to decode the current node according to the number of the decoded repeated points and the total number of repeated points in the point cloud.
  • the point number determination unit 12 is specifically configured to determine the difference between the total number of points of the point cloud and the total number of nodes of the L3C2 as the total number of repeated points of the point cloud.
  • the second decoding unit 14 is specifically used to skip decoding the duplicate point information of the current node if the number of decoded duplicate points is equal to the total number of duplicate points in the point cloud, and determine the geometric reconstruction value of the first point included in the current node.
  • the second decoding unit 14 is specifically used to skip decoding the duplicate point information of the current node if the number of decoded duplicate points is less than the total number of duplicate points in the point cloud and the current node is the last node in the L3C2; and determine the difference between the total number of duplicate points in the point cloud and the number of decoded duplicate points as the number of duplicate points included in the current node.
  • the second decoding unit 14 is specifically used to decode the duplicate point information of the current node if the number of decoded duplicate points is less than the total number of duplicate points in the point cloud and the current node is not the last node in the L3C2; based on the duplicate point information, determine the geometric reconstruction values of the N duplicate points included in the current node, where N is an integer.
  • the second decoding unit 14 is specifically used to determine the geometric reconstruction value of the first point in the current node; based on the repeated point information and the geometric reconstruction value of the first point, determine the geometric reconstruction values of the N repeated points.
  • the second decoding unit 14 is specifically used to parse the repeated point information for the i-th repeated point among the N repeated points, and obtain the first flag corresponding to the i-th repeated point, where i is a positive integer less than or equal to N, and if i is 1, the i-1th repeated point is the first point; based on the first flag corresponding to the i-th repeated point, determine the geometric reconstruction value of the i-th repeated point.
  • the second decoding unit 14 is specifically used to determine the geometric reconstruction value of the i-1-th repeated point as the geometric reconstruction value of the i-th repeated point if the first flag indicates that the coordinates of the i-th repeated point and the i-1-th repeated point in the second coordinate system are the same.
  • the second decoding unit 14 is specifically used to determine the geometric reconstruction value of the i-th repeated point if the first flag indicates that the coordinates of the i-th repeated point and the i-1-th repeated point in the second coordinate system are different.
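The decoder-side handling of the first flags, as described for the units above, might be sketched as follows (the decode callback is an assumption standing in for parsing the point's geometry from the bitstream):

```python
def decode_repeated_points(first_rec, flags, decode_new_point):
    """Reconstruct N repeated points from the first point's reconstruction
    and the first flag of each repeated point (1 = same as previous point)."""
    recs, prev = [], first_rec
    for flag in flags:
        cur = prev if flag == 1 else decode_new_point()
        recs.append(cur)
        prev = cur
    return recs
```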
  • the second decoding unit 14 is specifically configured to determine the sum of the number of repeated points included in the current node and the number of decoded repeated points as the new number of decoded repeated points.
  • the second decoding unit 14 is specifically used to determine a coordinate prediction value of a target point in a first coordinate system, where the target point is the first point in the current node, or the first point included in the current node, or the i-th repeated point among N repeated points included in the current node; based on the coordinate prediction value of the target point in the first coordinate system, determine a coordinate reconstruction value of the target point in the first coordinate system; based on the coordinate reconstruction value of the target point in the first coordinate system, determine a coordinate reconstruction value of the target point in a second coordinate system; and determine the coordinate reconstruction value of the target point in the second coordinate system as the geometric reconstruction value of the target point.
  • the second decoding unit 14 is specifically used to decode the geometric code stream to obtain a prediction mode corresponding to the target point; based on the prediction mode, determine a coordinate prediction value of the target point in the first coordinate system.
  • the second decoding unit 14 is specifically used to decode the geometric code stream to obtain the quantized coordinate residual value of the target point in the first coordinate system; dequantize the quantized coordinate residual value to obtain the coordinate residual value of the target point in the first coordinate system; based on the coordinate prediction value and the coordinate residual value of the target point in the first coordinate system, obtain the coordinate reconstruction value of the target point in the first coordinate system.
  • the second decoding unit 14 is specifically configured to determine the sum of the coordinate prediction value and the coordinate residual value of the target point in the first coordinate system as the coordinate reconstruction value of the target point in the first coordinate system.
  • the second decoding unit 14 is specifically used to perform coordinate conversion on the coordinate reconstruction value of the target point in the first coordinate system to obtain a coordinate prediction value of the target point in the second coordinate system; decode the geometric code stream to obtain a coordinate residual value of the target point in the second coordinate system; and obtain a coordinate reconstruction value of the target point in the second coordinate system based on the coordinate prediction value and the coordinate residual value of the target point in the second coordinate system.
  • the second decoding unit 14 is specifically configured to determine the sum of the coordinate prediction value and the coordinate residual value of the target point in the second coordinate system as the coordinate reconstruction value of the target point in the second coordinate system.
  • the first decoding unit 11 is specifically used to decode the geometric code stream of the point cloud to obtain the number of single chains included in the L3C2 and the number of nodes included in each single chain; based on the number of single chains and the number of nodes included in each single chain, the total number of nodes of the L3C2 is obtained.
  • the first decoding unit 11 is specifically configured to decode the geometric data unit of the point cloud to obtain the total number of points of the point cloud.
  • the first decoding unit 11 is specifically used to decode the geometric data unit to obtain a first value, where the first value is the number of points in the point cloud minus one; and add one to the first value to obtain the total number of points in the point cloud.
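The minus-one signalling of the point count decodes back trivially:

```python
def read_point_count(first_value):
    """The bitstream carries (total points - 1); restore the total."""
    return first_value + 1
```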
  • the device embodiment and the method embodiment may correspond to each other, and similar descriptions may refer to the method embodiment. To avoid repetition, no further description is given here.
  • the point cloud decoding device 10 shown in FIG. 15 may correspond to the corresponding subject in the point cloud decoding method of the embodiment of the present application, and the aforementioned and other operations and/or functions of each unit in the point cloud decoding device 10 are respectively for implementing the corresponding processes in the point cloud decoding method, and for the sake of brevity, no further description is given here.
  • FIG16 is a schematic block diagram of a point cloud encoding device provided in an embodiment of the present application.
  • the point cloud encoding device 20 includes:
  • a structure determination unit 21 used to determine a low-delay and low-complexity coding model L3C2 structure of the point cloud;
  • a point number determination unit 22 used to determine the total number of nodes of L3C2 of the point cloud and the total number of points of the point cloud;
  • a repeated point determination unit 23 used to determine the total number of repeated points of the point cloud according to the total number of points of the point cloud and the total number of nodes of the L3C2;
  • a calculation unit 24 configured to determine the number of encoded repeated points when encoding the current node in the L3C2;
  • the encoding unit 25 is used to encode the current node according to the number of the encoded repeated points and the total number of repeated points in the point cloud.
  • the point number determination unit 22 is specifically configured to determine the difference between the total number of points of the point cloud and the total number of nodes of the L3C2 as the total number of repeated points of the point cloud.
  • the encoding unit 25 is specifically configured to: if the number of encoded repeated points is equal to the total number of repeated points of the point cloud, skip determining and encoding the repeated point information of the current node; determine the coordinate residual value of the first point included in the current node; and write the coordinate residual value of the first point into the geometric code stream of the point cloud.
  • the encoding unit 25 is specifically configured to: if the number of encoded repeated points is less than the total number of repeated points of the point cloud and the current node is the last node in the L3C2, skip encoding the number of repeated points included in the current node; determine the coordinate residual value of the first point in the current node; and write the coordinate residual value of the first point and the repeated point information of the current node into the geometric code stream.
  • the encoding unit 25 is specifically configured to: if the number of encoded repeated points is less than the total number of repeated points of the point cloud and the current node is not the last node in the L3C2, determine the repeated point information of the current node; determine the coordinate residual value of the first point in the current node; and write the coordinate residual value of the first point and the repeated point information of the current node into the geometric code stream.
  • the encoding unit 25 is specifically used to determine the number of N repeated points included in the current node; based on the coordinate value of the first point included in the current node in the second coordinate system, and the coordinate values of the N repeated points in the second coordinate system, determine the first flags corresponding to the N repeated points respectively; based on the number of N repeated points included in the current node and the first flags corresponding to the N repeated points respectively, determine the repeated point information of the current node.
  • the encoding unit 25 is specifically configured to: for the i-th repeated point among the N repeated points, determine the first mark corresponding to the i-th repeated point based on the coordinate value of the (i-1)-th repeated point in the second coordinate system and the coordinate value of the i-th repeated point in the second coordinate system, where i is an integer satisfying 1 ≤ i ≤ N, and when i is 1, the (i-1)-th repeated point is the first point.
  • the encoding unit 25 is specifically used to set the value of the first mark to a first numerical value if the coordinate value of the i-th repeated point in the second coordinate system is the same as the coordinate value of the i-1-th repeated point in the second coordinate system, and the first numerical value indicates that the coordinate value of the i-th repeated point in the second coordinate system is the same as the coordinate value of the i-1-th repeated point in the second coordinate system.
  • the encoding unit 25 is specifically used to set the value of the first mark to a second numerical value if the coordinate value of the i-th repeated point in the second coordinate system is different from the coordinate value of the i-1-th repeated point in the second coordinate system, and the second numerical value indicates that the coordinate value of the i-th repeated point in the second coordinate system is different from the coordinate value of the i-1-th repeated point in the second coordinate system.
  • the encoding unit 25 is also used to determine the coordinate residual value of the i-th repeated point if the coordinate value of the i-th repeated point in the second coordinate system is different from the coordinate value of the i-1-th repeated point in the second coordinate system; and write the coordinate residual value of the i-th repeated point into the geometric code stream.
  • the encoding unit 25 is specifically configured to determine the sum of the number of repeated points included in the current node and the number of encoded repeated points as the number of new encoded repeated points.
  • the encoding unit 25 is specifically used to determine a coordinate prediction value of a target point in a first coordinate system, where the target point is the first point in the current node, or the first point included in the current node, or the i-th repeated point among N repeated points included in the current node; based on the coordinate prediction value of the target point in the first coordinate system and the coordinate value of the target point in the first coordinate system, determine the residual value of the target point in the first coordinate system; based on the coordinate prediction value of the target point in the first coordinate system, obtain the coordinate reconstruction value of the target point in the first coordinate system; perform coordinate transformation on the coordinate reconstruction value of the target point in the first coordinate system to obtain the coordinate reconstruction value of the target point in the second coordinate system; based on the coordinate reconstruction value and coordinate value of the target point in the second coordinate system, determine the coordinate residual value of the target point in the second coordinate system.
  • the encoding unit 25 is specifically configured to quantize at least one of a coordinate residual value of the target point in the first coordinate system and a coordinate residual value of the target point in the second coordinate system, and then write the quantized value into the geometric code stream.
  • the encoding unit 25 is specifically configured to determine a prediction mode corresponding to the target point; and based on the prediction mode, determine a coordinate prediction value of the target point in the first coordinate system.
  • the encoding unit 25 is specifically configured to write the prediction mode corresponding to the target point into the geometric code stream.
  • the encoding unit 25 is specifically used to dequantize the quantized coordinate residual value of the target point in the first coordinate system to obtain the coordinate residual value of the target point in the first coordinate system; based on the coordinate prediction value and the coordinate residual value of the target point in the first coordinate system, obtain the coordinate reconstruction value of the target point in the first coordinate system.
  • the encoding unit 25 is specifically configured to determine the sum of the coordinate prediction value and the coordinate residual value of the target point in the first coordinate system as the coordinate reconstruction value of the target point in the first coordinate system.
  • the point number determination unit 22 is specifically configured to determine the number of single chains included in the L3C2 and the number of nodes included in each single chain, and to obtain the total number of nodes of the L3C2 based on the number of single chains and the number of nodes included in each single chain.
  • the encoding unit 25 is further used to write the number of single chains included in the L3C2 and the number of nodes included in each single chain into the geometric code stream of the point cloud.
  • the encoding unit 25 is further configured to write the total number of points of the point cloud into the geometry data unit.
  • the encoding unit 25 is further used to subtract 1 from the total number of points in the point cloud to obtain a first value, and to write the first value into the geometry data unit.
  • the device embodiment corresponds to the method embodiment, and similar descriptions can be found in the method embodiment; to avoid repetition, they are not repeated here.
  • the point cloud coding device 20 shown in Figure 16 may correspond to the corresponding subject in the point cloud coding method of the embodiment of the present application, and the aforementioned and other operations and/or functions of the units in the point cloud coding device 20 respectively implement the corresponding processes in the point cloud coding method; for brevity, they are not repeated here.
  • the functional units may be implemented in hardware, by instructions in software, or by a combination of hardware and software units.
  • the steps of the method embodiments of the present application may be completed by integrated logic circuits of hardware in the processor and/or by instructions in software form; the steps of the methods disclosed in the embodiments of the present application may be directly performed by a hardware decoding processor, or performed by a combination of the hardware and software units in the decoding processor.
  • the software unit may be located in a storage medium mature in the art, such as a random access memory, a flash memory, a read-only memory, a programmable read-only memory, an electrically erasable programmable memory, or a register.
  • the storage medium is located in the memory; the processor reads the information in the memory and completes the steps of the above method embodiments in combination with its hardware.
  • FIG. 17 is a schematic block diagram of an electronic device provided in an embodiment of the present application.
  • the electronic device 30 may be a point cloud decoding device or a point cloud encoding device as described in an embodiment of the present application, and the electronic device 30 may include:
  • the memory 33 and the processor 32, where the memory 33 is used to store the computer program 34 and to transfer its program code to the processor 32.
  • the processor 32 can call and run the computer program 34 from the memory 33 to implement the method in the embodiment of the present application.
  • the processor 32 may be configured to execute the steps in the method 200 according to the instructions in the computer program 34 .
  • the processor 32 may include, but is not limited to: a digital signal processor (DSP), an application-specific integrated circuit (ASIC), or a field-programmable gate array (FPGA).
  • the memory 33 includes, but is not limited to, volatile and/or non-volatile memory.
  • the non-volatile memory can be read-only memory (ROM), programmable read-only memory (PROM), erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), or flash memory.
  • the volatile memory can be random access memory (RAM), which is used as an external cache. By way of example and not limitation, many forms of RAM are available, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced synchronous DRAM (ESDRAM), synchronous link DRAM (SLDRAM), and direct Rambus RAM (DR RAM).
  • the computer program 34 may be divided into one or more units, which are stored in the memory 33 and executed by the processor 32 to complete the method provided by the present application.
  • the one or more units may be a series of computer program instruction segments capable of completing specific functions, and the instruction segments are used to describe the execution process of the computer program 34 in the electronic device 30.
  • the electronic device 30 may further include:
  • the transceiver 33 may be connected to the processor 32 or the memory 33 .
  • the processor 32 may control the transceiver 33 to communicate with other devices, specifically, to send information or data to other devices, or to receive information or data sent by other devices.
  • the transceiver 33 may include a transmitter and a receiver.
  • the transceiver 33 may further include an antenna, and the number of antennas may be one or more.
  • the bus system includes not only a data bus but also a power bus, a control bus, and a status signal bus.
  • Figure 18 is a schematic block diagram of the point cloud encoding and decoding system provided in an embodiment of the present application.
  • the point cloud encoding and decoding system 40 may include: a point cloud encoder 41 and a point cloud decoder 42, wherein the point cloud encoder 41 is used to execute the point cloud encoding method involved in the embodiment of the present application, and the point cloud decoder 42 is used to execute the point cloud decoding method involved in the embodiment of the present application.
  • the present application also provides a code stream, which is generated according to the above encoding method.
  • the present application also provides a computer storage medium on which a computer program is stored, and when the computer program is executed by a computer, the computer can perform the method of the above method embodiment.
  • an embodiment of the present application also provides a computer program product containing instructions; when the instructions are executed by a computer, the computer performs the method of the above method embodiments.
  • the computer program product includes one or more computer instructions.
  • the computer can be a general-purpose computer, a special-purpose computer, a computer network, or other programmable devices.
  • the computer instructions can be stored in a computer-readable storage medium, or transmitted from one computer-readable storage medium to another computer-readable storage medium.
  • the computer instructions can be transmitted from one website, computer, server, or data center to another website, computer, server, or data center in a wired (e.g., coaxial cable, optical fiber, or digital subscriber line (DSL)) or wireless (e.g., infrared, radio, or microwave) manner.
  • the computer-readable storage medium can be any available medium that a computer can access or a data storage device such as a server or data center that includes one or more available media integrated.
  • the available medium can be a magnetic medium (e.g., a floppy disk, a hard disk, or a magnetic tape), an optical medium (e.g., a digital video disc (DVD)), or a semiconductor medium (e.g., a solid state drive (SSD)), etc.
  • the disclosed systems, devices, and methods may be implemented in other ways.
  • the device embodiments described above are merely illustrative.
  • the division of units is merely a division by logical function.
  • furthermore, the mutual couplings, direct couplings, or communication connections shown or discussed may be indirect couplings or communication connections between devices or units through some interfaces, and may be electrical, mechanical, or in other forms.
  • each functional unit in each embodiment of the present application may be integrated into a processing unit, or each unit may exist physically separately, or two or more units may be integrated into one unit.
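The first-mark derivation described above for the encoding unit 25 (compare each of a node's N repeated points with its predecessor in the second coordinate system, the predecessor of the first repeated point being the node's first point) can be sketched as follows. This is a minimal illustration, not the codec itself: the use of 1 and 0 for the first and second numerical values, and of tuples for coordinates, are assumptions.

```python
def repeated_point_flags(first_point, repeated_points):
    """Compute the first mark for each of a node's repeated points.

    A mark of 1 (the assumed 'first numerical value') means the point's
    coordinate in the second coordinate system equals its predecessor's;
    0 (the 'second numerical value') means it differs, in which case the
    point's own coordinate residual must also be coded.
    """
    flags = []
    needs_residual = []          # points whose residual goes into the stream
    prev = first_point           # predecessor of repeated point 1 is the first point
    for point in repeated_points:
        same = point == prev
        flags.append(1 if same else 0)
        if not same:
            needs_residual.append(point)
        prev = point             # the i-th point becomes the (i+1)-th's predecessor
    return flags, needs_residual
```

Together with N (the number of repeated points), the flag list constitutes the node's repeated point information.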
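The two-coordinate-system residual computation described for the target point can likewise be sketched. Prediction-mode selection, quantization, and the actual first-to-second coordinate conversion are outside this illustration; the `transform` callable is a stand-in for the codec's conversion, and quantization/dequantization of the first residual is omitted, so reconstruction here reduces to prediction plus residual, as in the lossless case.

```python
def encode_target_point(pred1, val1, val2, transform):
    """Derive the target point's residuals in both coordinate systems.

    pred1:     coordinate prediction in the first coordinate system
    val1/val2: true coordinates in the first/second coordinate system
    transform: first-to-second coordinate conversion (stand-in)
    """
    # Residual in the first coordinate system: value minus prediction.
    res1 = [v - p for v, p in zip(val1, pred1)]
    # Reconstruction in the first system: prediction plus residual.
    rec1 = [p + r for p, r in zip(pred1, res1)]
    # Transform the reconstruction into the second coordinate system.
    rec2 = transform(rec1)
    # Residual in the second system: value minus transformed reconstruction.
    res2 = [v - r for v, r in zip(val2, rec2)]
    return res1, res2
```

With an unquantized residual, the first-system reconstruction equals the true value, so the second residual measures only the transform mismatch.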


Abstract

The present application provides a point cloud encoding/decoding method and apparatus, a device, and a storage medium. The point cloud encoding/decoding method comprises the following steps: during L3C2-based point cloud encoding/decoding, first determining the total number of repeated points in a point cloud according to the total number of points in the point cloud and the total number of nodes of the point cloud's L3C2; and, during the encoding/decoding, keeping a running count of the repeated points already encoded/decoded and comparing it with the total number of repeated points in the point cloud, so as to decide, when encoding/decoding a node, whether the repeated-point information of the current node needs to be encoded/decoded. For example, if, when the current node is encoded/decoded, the number of already encoded/decoded repeated points is determined to equal the total number of repeated points in the point cloud, this indicates that the encoding/decoding of repeated points in the point cloud is finished and that none of the remaining nodes contains repeated points; the repeated-point information of subsequent nodes therefore need not be encoded/decoded, which reduces the encoding/decoding complexity of the point cloud, saves encoding/decoding time, and improves encoding/decoding efficiency.
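The control flow summarized in the abstract can be sketched as follows; the node dictionary layout and the returned action labels are illustrative assumptions, not part of the codec:

```python
def total_repeated_points(total_points, total_nodes):
    # Total repeated points = points in the cloud minus distinct L3C2 nodes.
    return total_points - total_nodes

def encode_nodes(nodes, total_points):
    """Decide, per node, which repeated-point information must be coded."""
    total_dup = total_repeated_points(total_points, len(nodes))
    encoded_dup = 0                     # running count of coded repeated points
    actions = []
    for idx, node in enumerate(nodes):
        if encoded_dup == total_dup:
            # All repeated points already coded: skip repeated-point info.
            actions.append("residual only")
        elif idx == len(nodes) - 1:
            # Last node: its repeated-point count is implied, not coded.
            actions.append("residual + flags")
        else:
            actions.append("residual + count + flags")
        encoded_dup += len(node["repeated"])
    return actions
```

Once the running count reaches the total, every remaining node is coded without any repeated-point information, which is the early termination the method exploits.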
PCT/CN2022/122116 2022-09-28 2022-09-28 Point cloud encoding/decoding method and apparatus, and device and storage medium WO2024065271A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/CN2022/122116 WO2024065271A1 (fr) 2022-09-28 2022-09-28 Point cloud encoding/decoding method and apparatus, and device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2022/122116 WO2024065271A1 (fr) 2022-09-28 2022-09-28 Point cloud encoding/decoding method and apparatus, and device and storage medium

Publications (1)

Publication Number Publication Date
WO2024065271A1 true WO2024065271A1 (fr) 2024-04-04

Family

ID=90475255

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/122116 WO2024065271A1 (fr) 2022-09-28 2022-09-28 Point cloud encoding/decoding method and apparatus, and device and storage medium

Country Status (1)

Country Link
WO (1) WO2024065271A1 (fr)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112565794A (zh) * 2020-12-03 2021-03-26 Xidian University Point cloud isolated point encoding and decoding method and apparatus
CN112997498A (zh) * 2018-11-13 2021-06-18 Panasonic Intellectual Property Corporation of America Three-dimensional data encoding method, three-dimensional data decoding method, three-dimensional data encoding device, and three-dimensional data decoding device
WO2021202002A1 (fr) * 2020-03-30 2021-10-07 Tencent America LLC Methods of coding duplicate and isolated points for point cloud coding
CN114402624A (zh) * 2019-08-02 2022-04-26 LG Electronics Inc. Point cloud data processing device and method
CN114598883A (zh) * 2020-12-07 2022-06-07 Tencent Technology (Shenzhen) Co., Ltd. Point cloud attribute prediction method, encoder, decoder, and storage medium


Similar Documents

Publication Publication Date Title
US11803989B2 (en) Quantization for geometry-based point cloud compression
US11601488B2 (en) Device and method for transmitting point cloud data, device and method for processing point cloud data
US11910017B2 (en) Method for predicting point cloud attribute, encoder, decoder, and storage medium
US20230377208A1 (en) Geometry coordinate scaling for ai-based dynamic point cloud coding
WO2024065271A1 Point cloud encoding/decoding method and apparatus, and device and storage medium
WO2024065270A1 Point cloud encoding method and apparatus, point cloud decoding method and apparatus, devices, and storage medium
WO2024065269A1 Point cloud encoding and decoding method and apparatus, device, and storage medium
WO2024065272A1 Point cloud encoding method and apparatus, point cloud decoding method and apparatus, device, and storage medium
WO2024145934A1 Point cloud encoding/decoding method and apparatus, device, and storage medium
WO2024145933A1 Point cloud encoding method and apparatus, point cloud decoding method and apparatus, devices, and storage medium
WO2024145912A1 Point cloud encoding method and apparatus, point cloud decoding method and apparatus, device, and storage medium
WO2024145913A1 Point cloud encoding and decoding method and apparatus, device, and storage medium
WO2024145935A1 Point cloud encoding method and apparatus, point cloud decoding method and apparatus, device, and storage medium
WO2024145911A1 Point cloud encoding/decoding method and apparatus, device, and storage medium
WO2024065406A1 Encoding and decoding methods, bitstream, encoder, decoder, and storage medium
US20240015325A1 (en) Point cloud coding and decoding methods, coder, decoder and storage medium
WO2022257150A1 Point cloud encoding and decoding methods and apparatus, point cloud codec, and storage medium
WO2024065408A1 Encoding method, decoding method, code stream, encoder, decoder, and storage medium
WO2022257145A1 Point cloud attribute prediction method and apparatus, and codec
WO2023024842A1 Point cloud encoding/decoding method, apparatus and device, and storage medium
US20240087174A1 (en) Coding and decoding point cloud attribute information
TWI806481B Method and apparatus for selecting neighbor points in a point cloud, encoding device, decoding device, and computer device
CN115474041B Point cloud attribute prediction method and apparatus, and related device
CN117354496A Point cloud encoding/decoding method, apparatus, device, and storage medium
CN116866615A Point cloud encoding method and device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22959918

Country of ref document: EP

Kind code of ref document: A1