WO2024011381A1 - Point cloud encoding and decoding method, device, equipment and storage medium - Google Patents

Point cloud encoding and decoding method, device, equipment and storage medium

Info

Publication number
WO2024011381A1
Authority
WO
WIPO (PCT)
Prior art keywords
point cloud
current
motion vector
information
parameter
Prior art date
Application number
PCT/CN2022/105000
Other languages
English (en)
French (fr)
Inventor
徐异凌
侯礼志
高粼遥
魏红莲
Original Assignee
上海交通大学
Oppo广东移动通信有限公司
Priority date
Filing date
Publication date
Application filed by 上海交通大学, Oppo广东移动通信有限公司
Priority to PCT/CN2022/105000
Publication of WO2024011381A1

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/134Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
    • H04N19/157Assigned coding mode, i.e. the coding mode being predefined or preselected to be further used for selection of another element or parameter
    • H04N19/159Prediction type, e.g. intra-frame, inter-frame or bidirectional frame prediction

Definitions

  • the present application relates to the field of point cloud technology, and in particular to a point cloud encoding and decoding method, device, equipment and storage medium.
  • point cloud data typically includes hundreds of thousands of points or more.
  • point cloud data is transmitted between the point cloud encoding device and the point cloud decoding device in the form of point cloud media files.
  • point cloud encoding equipment needs to compress the point cloud data before transmitting it.
  • Embodiments of the present application provide a point cloud encoding and decoding method, device, equipment and storage medium to reduce encoding and decoding processing time and improve encoding and decoding efficiency.
  • embodiments of the present application provide a point cloud decoding method, including:
  • the current decoding unit is decoded according to at least one of classification information and motion vector information of the current decoding unit.
  • this application provides a point cloud coding method, including:
  • the first parameter is used to indicate a calculation period of classification information
  • the second parameter is used to indicate a calculation period of motion vector information
  • the current coding unit is encoded according to at least one of classification information and motion vector information of the current coding unit.
  • this application provides a point cloud decoding device for performing the method in the above first aspect or its respective implementations.
  • the device includes a functional unit for performing the method in the above-mentioned first aspect or its respective implementations.
  • the present application provides a point cloud encoding device for executing the method in the above second aspect or its respective implementations.
  • the device includes a functional unit for performing the method in the above-mentioned second aspect or its respective implementations.
  • the fifth aspect provides a point cloud decoder, including a processor and a memory.
  • the memory is used to store a computer program
  • the processor is used to call and run the computer program stored in the memory to execute the method in the above first aspect or its respective implementations.
  • a sixth aspect provides a point cloud encoder, including a processor and a memory.
  • the memory is used to store a computer program
  • the processor is used to call and run the computer program stored in the memory to execute the method in the above second aspect or its respective implementations.
  • a point cloud encoding and decoding system including a point cloud encoder and a point cloud decoder.
  • the point cloud decoder is used to perform the method in the above-mentioned first aspect or its various implementations, and the point cloud encoder is used to perform the method in the above-mentioned second aspect or its various implementations.
  • An eighth aspect provides a chip for implementing any one of the above-mentioned first to second aspects or the method in each implementation manner thereof.
  • the chip includes: a processor, configured to call and run a computer program from a memory, so that the device in which the chip is installed executes the method in any one of the above-mentioned first to second aspects or the implementations thereof.
  • a ninth aspect provides a computer-readable storage medium for storing a computer program that causes a computer to execute any one of the above-mentioned first to second aspects or the method in each implementation thereof.
  • a computer program product including computer program instructions, which enable a computer to execute any one of the above-mentioned first to second aspects or the methods in each implementation thereof.
  • An eleventh aspect provides a computer program that, when run on a computer, causes the computer to execute any one of the above-mentioned first to second aspects or the method in each implementation thereof.
  • a twelfth aspect provides a code stream.
  • the code stream is generated based on the method of the second aspect.
  • the code stream includes at least one of a first parameter and a second parameter.
  • At least one of the classification information and the motion vector information of the current decoding unit is determined by decoding the point cloud code stream, wherein the classification information is determined based on the first parameter, and the motion vector information is determined based on the second parameter.
  • the first parameter is used to indicate the calculation period of classification information
  • the second parameter is used to indicate the calculation period of motion vector information.
  • the current decoding unit is decoded according to at least one of the classification information and the motion vector information of the current decoding unit. That is, in this embodiment of the present application, the classification information and the motion vector information are calculated periodically. Compared with calculating the classification information and the motion vector information once for each decoding unit, this greatly reduces the number of such calculations, reducing the encoding and decoding processing time and improving encoding and decoding efficiency.
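  • For illustration only, the following is a minimal decoder-side sketch of this periodic-calculation idea, assuming one point cloud frame per decoding unit; the helpers classify and estimate_motion and the list-based frames are placeholders invented for this example, not part of the application.

```python
def classify(frame):
    # Placeholder for the road / non-road classification of one frame.
    return {"road": [], "non_road": list(frame)}

def estimate_motion(frame, reference):
    # Placeholder for global motion estimation between two adjacent frames.
    return (0.0, 0.0, 0.0)

def decode_sequence(frames, classification_period=2, motion_period=2):
    cached_classification, cached_motion = None, None
    reference = None
    for index, frame in enumerate(frames):
        # Recompute only at the start of each calculation cycle; the other
        # units in the cycle reuse the cached result, saving computation.
        if index % classification_period == 0:
            cached_classification = classify(frame)
        if reference is not None and index % motion_period == 0:
            cached_motion = estimate_motion(frame, reference)
        # Inter-frame prediction of `frame` would use cached_classification
        # and cached_motion here.
        reference = frame
    return cached_classification, cached_motion

frames = [[(float(i), 0.0, 0.0)] for i in range(5)]
print(decode_sequence(frames))
```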
  • Figure 1 is a schematic block diagram of a point cloud encoding and decoding system involved in an embodiment of the present application
  • Figure 2 is a schematic block diagram of a point cloud encoder provided by an embodiment of the present application.
  • Figure 3 is a schematic block diagram of a point cloud decoder provided by an embodiment of the present application.
  • Figure 4 is a schematic flow chart of a point cloud decoding method provided by an embodiment of the present application.
  • Figure 5 is a point cloud histogram related to the embodiment of the present application.
  • Figure 6 is a schematic flow chart of a point cloud encoding method provided by an embodiment of the present application.
  • Figure 7 is a schematic block diagram of a point cloud decoding device provided by an embodiment of the present application.
  • Figure 8 is a schematic block diagram of a point cloud encoding device provided by an embodiment of the present application.
  • Figure 9 is a schematic block diagram of an electronic device provided by an embodiment of the present application.
  • Figure 10 is a schematic block diagram of a point cloud encoding and decoding system provided by an embodiment of the present application.
  • This application can be applied to the field of point cloud technology, for example, to the field of point cloud compression technology.
  • Point Cloud refers to a set of discrete points randomly distributed in space that express the spatial structure and surface properties of a three-dimensional object or scene.
  • Point Cloud Data is a specific record form of point cloud.
  • Points in point cloud can include point location information and point attribute information.
  • the position information of the point may be the three-dimensional coordinate information of the point.
  • the position information of a point can also be called the geometric information of the point.
  • point attribute information may include color information, reflectance information, normal vector information, and so on.
  • the color information may be information in any color space.
  • the color information may be RGB information.
  • the color information may also be luminance and chrominance (YCbCr, YUV) information.
  • Y represents luminance (Luma).
  • Cb (U) represents the blue color difference.
  • Cr (V) represents the red color difference.
  • U and V represent chrominance (Chroma), which describes the color difference information.
  • the points in the point cloud may include the three-dimensional coordinate information of the point and the laser reflection intensity (reflectance) of the point.
  • the points in the point cloud may include the three-dimensional coordinate information of the point and the color information of the point.
  • a point cloud is obtained by combining the principles of laser measurement and photogrammetry.
  • the points in the point cloud may include the three-dimensional coordinate information of the point, the laser reflection intensity (reflectance) of the point, and the color information of the point.
  • the ways to obtain point cloud data may include but are not limited to at least one of the following: (1) Generated by computer equipment.
  • the computer device can generate point cloud data based on virtual three-dimensional objects and virtual three-dimensional scenes.
  • Point cloud data of static real-world three-dimensional objects or three-dimensional scenes can be obtained through 3D laser scanning, with millions of points acquired per second;
  • Real-world visual scenes are collected through 3D photography equipment (i.e., a set of cameras or camera equipment with multiple lenses and sensors) to obtain point cloud data of real-world visual scenes.
  • Dynamic real-world three-dimensional objects can be obtained through 3D photography.
  • Point clouds can be divided into dense point clouds and sparse point clouds according to the way they are obtained.
  • According to the time series type of the data, point clouds are divided into:
  • the first type, static point cloud: the object is stationary, and the device that acquires the point cloud is also stationary;
  • the second type, dynamic point cloud: the object is moving, but the device that acquires the point cloud is stationary;
  • the third type, dynamically acquired point cloud: the device that acquires the point cloud is in motion.
  • Point clouds are divided into two categories according to their uses:
  • Category 1: machine perception point cloud, which can be used in scenarios such as autonomous navigation systems, real-time inspection systems, geographic information systems, visual sorting robots, and rescue and disaster relief robots;
  • Category 2: human eye perception point cloud, which can be used in point cloud application scenarios such as digital cultural heritage, free-viewpoint broadcasting, three-dimensional immersive communication, and three-dimensional immersive interaction.
  • Point clouds are widely used in virtual reality, immersive telepresence, 3D printing and other fields.
  • 3D point clouds often have a huge number of points that are distributed in a disordered way in space, and each point often carries rich attribute information, resulting in a huge amount of data, which brings great challenges to the storage and transmission of point clouds. Therefore, point cloud compression coding technology is one of the key technologies for point cloud processing and applications.
  • Figure 1 is a schematic block diagram of a point cloud encoding and decoding system related to an embodiment of the present application. It should be noted that Figure 1 is only an example, and the point cloud encoding and decoding system in the embodiment of the present application includes but is not limited to what is shown in Figure 1 .
  • the point cloud encoding and decoding system 100 includes an encoding device 110 and a decoding device 120 .
  • the encoding device is used to encode the point cloud data (which can be understood as compression) to generate a code stream, and transmit the code stream to the decoding device.
  • the decoding device decodes the code stream generated by the encoding device to obtain decoded point cloud data.
  • the encoding device 110 in the embodiment of the present application can be understood as a device with a point cloud encoding function
  • the decoding device 120 can be understood as a device with a point cloud decoding function. That is, the encoding device 110 and the decoding device 120 in the embodiment of the present application cover a wide range of devices, including, for example, smartphones, desktop computers, mobile computing devices, notebook (e.g., laptop) computers, tablet computers, set-top boxes, televisions, cameras, display devices, digital media players, point cloud gaming consoles, vehicle-mounted computers, and so on.
  • the encoding device 110 may transmit the encoded point cloud data (such as a code stream) to the decoding device 120 via the channel 130 .
  • Channel 130 may include one or more media and/or devices capable of transmitting encoded point cloud data from encoding device 110 to decoding device 120 .
  • channel 130 includes one or more communication media that enables encoding device 110 to transmit encoded point cloud data directly to decoding device 120 in real time.
  • the encoding device 110 may modulate the encoded point cloud data according to the communication standard and transmit the modulated point cloud data to the decoding device 120 .
  • the communication media includes wireless communication media, such as radio frequency spectrum.
  • the communication media may also include wired communication media, such as one or more physical transmission lines.
  • channel 130 includes a storage medium that can store point cloud data encoded by encoding device 110 .
  • Storage media include a variety of local access data storage media, such as optical disks, DVDs, flash memories, etc.
  • the decoding device 120 may obtain the encoded point cloud data from the storage medium.
  • channel 130 may include a storage server that may store point cloud data encoded by encoding device 110 .
  • the decoding device 120 may download the stored encoded point cloud data from the storage server.
  • The storage server can store the encoded point cloud data and transmit the encoded point cloud data to the decoding device 120; the storage server may be, for example, a web server (e.g., for a website), a file transfer protocol (FTP) server, etc.
  • the encoding device 110 includes a point cloud encoder 112 and an output interface 113 .
  • the output interface 113 may include a modulator/demodulator (modem) and/or a transmitter.
  • the encoding device 110 may also include a point cloud source 111 in addition to the point cloud encoder 112 and the output interface 113.
  • the point cloud source 111 may include at least one of a point cloud acquisition device (e.g., a scanner), a point cloud archive, a point cloud input interface, and a computer graphics system, where the point cloud input interface is used to receive point cloud data from a point cloud content provider and the computer graphics system is used to generate point cloud data.
  • the point cloud encoder 112 encodes the point cloud data from the point cloud source 111 to generate a code stream.
  • the point cloud encoder 112 directly transmits the encoded point cloud data to the decoding device 120 via the output interface 113 .
  • the encoded point cloud data can also be stored in a storage medium or storage server for subsequent reading by the decoding device 120 .
  • decoding device 120 includes an input interface 121 and a point cloud decoder 122.
  • the decoding device 120 may also include a display device 123.
  • the input interface 121 includes a receiver and/or a modem. Input interface 121 may receive encoded point cloud data via channel 130 .
  • the point cloud decoder 122 is used to decode the encoded point cloud data to obtain decoded point cloud data, and transmit the decoded point cloud data to the display device 123 .
  • the display device 123 displays the decoded point cloud data.
  • Display device 123 may be integrated with decoding device 120 or external to decoding device 120 .
  • Display device 123 may include a variety of display devices, such as a liquid crystal display (LCD), a plasma display, an organic light emitting diode (OLED) display, or other types of display devices.
  • Figure 1 is only an example, and the technical solution of the embodiment of the present application is not limited to Figure 1.
  • the technology of the present application can also be applied to unilateral point cloud encoding or unilateral point cloud decoding.
  • the current point cloud encoder may adopt one of the two point cloud compression coding technology routes proposed by the Moving Picture Experts Group (MPEG) of the International Organization for Standardization, namely video-based point cloud compression (VPCC) and geometry-based point cloud compression (GPCC).
  • VPCC projects the three-dimensional point cloud into two dimensions and uses existing two-dimensional coding tools to encode the projected two-dimensional image.
  • GPCC uses a hierarchical structure to divide the point cloud into multiple units step by step, and encodes the entire point cloud by recording the division process.
  • Figure 2 is a schematic block diagram of a point cloud encoder provided by an embodiment of the present application.
  • points in a point cloud can include point position information and point attribute information. Therefore, the coding of points in a point cloud mainly includes position coding and attribute coding.
  • the position information of the points in the point cloud is also called geometric information, and the corresponding position encoding of the points in the point cloud can also be called geometric encoding.
  • the geometric information and corresponding attribute information of the point cloud are encoded separately.
  • the process of position encoding includes: first establishing a minimum cube that surrounds all points of the point cloud, called the minimum bounding box; performing octree division on the minimum bounding box, that is, dividing the bounding box into eight equal sub-cubes, and continuing to divide the non-empty sub-cubes (those containing points of the point cloud) into eight equal parts until the leaf nodes obtained by the division are 1×1×1 unit cubes, at which point the division stops. During this process, an 8-bit binary number is used to encode the occupancy of the 8 sub-cubes generated by each division, generating a binary geometry bit stream, that is, a geometry code stream.
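  • For illustration, the sketch below shows how one octree subdivision step can be summarized by an 8-bit occupancy code as described above; the child-index bit layout (4·bx + 2·by + bz) is an assumption made for this example and is not necessarily the ordering used by the codec.

```python
def occupancy_byte(points, origin, size):
    """points: (x, y, z) tuples lying inside a cube at `origin` with edge `size`."""
    half = size // 2
    code = 0
    for x, y, z in points:
        bx = int(x - origin[0] >= half)   # which half along x
        by = int(y - origin[1] >= half)   # which half along y
        bz = int(z - origin[2] >= half)   # which half along z
        code |= 1 << (4 * bx + 2 * by + bz)   # mark the occupied child cube
    return code

# Two points falling into two different children of an 8x8x8 cube:
print(bin(occupancy_byte([(1, 1, 1), (6, 6, 6)], (0, 0, 0), 8)))  # 0b10000001
```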
  • the points in the point cloud are first preprocessed, for example through coordinate transformation, quantization and removal of duplicate points; then the preprocessed point cloud is geometrically encoded, for example by constructing an octree and performing geometric encoding based on the constructed octree, to form a geometry code stream.
  • the position information of each point in the point cloud data is reconstructed to obtain the reconstructed value of the position information of each point.
  • the attribute encoding process includes: given the reconstructed information of the position information of the input point cloud and the original values of the attribute information, selecting one of the three prediction modes to perform point cloud prediction, quantizing the prediction results, and performing arithmetic coding to form an attribute code stream.
  • position encoding can be implemented through the following units:
  • Coordinate transformation (Transform coordinates) unit 201, voxelization (Voxelize) unit 202, octree division (Analyze octree) unit 203, geometric reconstruction (Reconstruct geometry) unit 204, first arithmetic encoding (Arithmetic encode) unit 205 and surface fitting (Analyze surface approximation) unit 206.
  • the coordinate conversion unit 201 may be used to convert the world coordinates of points in the point cloud into relative coordinates. For example, the minimum values of the x, y and z coordinate axes are subtracted from the geometric coordinates of the points, which is equivalent to a DC removal operation, so as to convert the coordinates of the points in the point cloud from world coordinates to relative coordinates.
  • the voxelization (Voxelize) unit 202 is also called the quantize and remove points unit; it can reduce the number of coordinates through quantization. After quantization, originally different points may be assigned the same coordinates; on this basis, a deduplication operation removes the duplicate points. For example, multiple points with the same quantized position but different attribute information can be merged into one point through attribute transformation.
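  • A minimal sketch of this preprocessing, assuming a scalar quantization step and simple attribute averaging for merged points (both assumptions made only for this illustration):

```python
def voxelize(points, attributes, step=2.0):
    """Shift to relative coordinates, quantize, and merge duplicate positions."""
    mins = [min(p[i] for p in points) for i in range(3)]      # DC removal
    merged = {}
    for (x, y, z), attr in zip(points, attributes):
        q = (round((x - mins[0]) / step),
             round((y - mins[1]) / step),
             round((z - mins[2]) / step))
        merged.setdefault(q, []).append(attr)
    # Duplicate positions are kept once; their attributes are averaged.
    return {pos: sum(vals) / len(vals) for pos, vals in merged.items()}

print(voxelize([(10.0, 4.0, 4.0), (10.6, 4.2, 4.0), (14.0, 4.0, 4.0)],
               [100, 120, 80]))
# {(0, 0, 0): 110.0, (2, 0, 0): 80.0}  -- the first two points were merged
```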
  • the voxel unit 202 is an optional unit module.
  • the octree dividing unit 203 may use an octree encoding method to encode the position information of the quantized points.
  • the point cloud is divided in the form of an octree, so that the positions of the points can correspond to the positions in the octree. The positions in the octree that contain points are counted and their flags are recorded as 1, so as to perform geometric encoding.
  • In the process of encoding geometric information based on triangle soup (trisoup), the point cloud is also divided into an octree through the octree division unit 203, but unlike the octree-based process, the trisoup approach does not need to divide the point cloud step by step into unit cubes with a side length of 1×1×1; instead, the division stops when the side length of the blocks (sub-blocks) reaches W. Based on the surface formed by the distribution of the point cloud in each block, at most twelve vertices (intersection points) generated by the surface and the twelve edges of the block are obtained.
  • the intersection points are surface fitted through the surface fitting unit 206, and the fitted intersection points are geometrically encoded.
  • the geometric reconstruction unit 204 may perform position reconstruction based on the position information output by the octree dividing unit 203 or the intersection points fitted by the surface fitting unit 206 to obtain a reconstructed value of the position information of each point in the point cloud data.
  • the arithmetic encoding unit 205 may perform arithmetic encoding, using entropy coding, on the position information output by the octree division unit 203 or on the intersection points fitted by the surface fitting unit 206; that is, the position information output by the octree division unit 203 is encoded using an arithmetic encoding method to generate a geometry code stream. The geometry code stream may also be called a geometry bitstream.
  • Attribute encoding is implemented via the following units:
  • Color conversion (Transform colors) unit 210, recoloring (Transfer attributes) unit 211, Region Adaptive Hierarchical Transform (RAHT) unit 212, LOD generation (Generate LOD) unit 213, lifting transform (Lifting transform) unit 214, quantization unit 215 and arithmetic coding unit 216.
  • point cloud encoder 200 may include more, less, or different functional components than in FIG. 2 .
  • the color conversion unit 210 may be used to convert the RGB color space of points in the point cloud into YCbCr format or other formats.
  • the recoloring unit 211 uses the reconstructed geometric information to recolor the color information so that the uncoded attribute information corresponds to the reconstructed geometric information.
  • any transformation unit can be selected to transform the points in the point cloud.
  • the transformation unit may include: a RAHT transform unit 212 and a lifting transform unit 214. Among them, the lifting transform relies on generating a level of detail (LOD).
  • Both the RAHT transform and the lifting transform can be understood as being used to predict the attribute information of points in the point cloud to obtain the predicted value of the attribute information of a point, and then obtain the residual value of the attribute information of the point based on the predicted value. For example, the residual value of the attribute information of a point may be the original value of the attribute information of the point minus the predicted value of the attribute information of the point.
  • the process of generating LOD by generating LOD units includes: obtaining the Euclidean distance between points according to the position information of the points in the point cloud; dividing the points into different detail expression layers according to the Euclidean distance .
  • different ranges of Euclidean distances can be divided into different detail expression levels. For example, you can randomly select a point as the first detail expression layer. Then calculate the Euclidean distance between the remaining points and this point, and classify the points whose Euclidean distance meets the first threshold requirement into the second detail expression layer.
  • the point cloud can be directly divided into one or more detail expression layers, or the point cloud can be divided into multiple point cloud slices first, and then each point cloud slice can be divided into one or more LOD layers.
  • the point cloud can be divided into multiple point cloud slices, and the number of points in each point cloud slice can be between 550,000 and 1.1 million.
  • Each point cloud slice can be viewed as a separate point cloud.
  • Each point cloud slice can be divided into multiple detail expression layers, and each detail expression layer includes multiple points.
  • the detailed expression layer can be divided according to the Euclidean distance between points.
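  • The following is an illustrative sketch of dividing points into detail expression layers by Euclidean distance, following the description above; the single anchor point and the distance thresholds are assumptions made for this example rather than the normative LOD construction.

```python
import math

def split_into_lods(points, thresholds):
    anchor = points[0]                       # "randomly" pick the first point
    layers = [[anchor]] + [[] for _ in thresholds] + [[]]
    for p in points[1:]:
        d = math.dist(p, anchor)             # Euclidean distance to the anchor
        for i, t in enumerate(thresholds):
            if d <= t:
                layers[i + 1].append(p)
                break
        else:
            layers[-1].append(p)             # farther than every threshold
    return layers

pts = [(0, 0, 0), (1, 0, 0), (3, 0, 0), (10, 0, 0)]
print(split_into_lods(pts, thresholds=[2.0, 5.0]))
# [[(0, 0, 0)], [(1, 0, 0)], [(3, 0, 0)], [(10, 0, 0)]]
```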
  • the quantization unit 215 may be used to quantize the residual value of the attribute information of the point. For example, if the quantization unit 215 and the RAHT transform unit 212 are connected, the quantization unit 215 may be used to quantize the residual value of the attribute information of the points output by the RAHT transform unit 212.
  • the arithmetic coding unit 216 may use zero run length coding to perform entropy coding on the residual value of the attribute information of the point to obtain the attribute code stream.
  • the attribute code stream may be bit stream information.
  • Figure 3 is a schematic block diagram of a point cloud decoder provided by an embodiment of the present application.
  • the decoder 300 can obtain the point cloud code stream from the encoding device, and obtain the position information and attribute information of the points in the point cloud by parsing the code stream.
  • Point cloud decoding includes position decoding and attribute decoding.
  • the process of position decoding includes: performing arithmetic decoding on the geometry code stream; constructing the octree and then merging, and reconstructing the position information of the points to obtain reconstructed information of the position information of the points; and performing coordinate transformation on the reconstructed information of the position information of the points to obtain the position information of the points.
  • the position information of a point can also be called the geometric information of the point.
  • the attribute decoding process includes: parsing the attribute code stream to obtain the residual values of the attribute information of the points in the point cloud; performing inverse quantization on the residual values of the attribute information of the points to obtain the inverse-quantized residual values; based on the reconstructed information of the position information of the points obtained during position decoding, selecting one of the RAHT inverse transform and the lifting inverse transform to perform point cloud prediction and obtain the predicted values; adding the predicted values to the residual values to obtain the reconstructed values of the attribute information of the points; and performing inverse color space conversion on the reconstructed values of the attribute information of the points to obtain the decoded point cloud.
  • position decoding can be achieved through the following units:
  • Arithmetic decoding unit 301, octree synthesis (Synthesize octree) unit 302, surface fitting (Synthesize surface approximation) unit 303, geometric reconstruction (Reconstruct geometry) unit 304 and inverse coordinate transformation (Inverse transform coordinates) unit 305.
  • Attribute decoding is implemented via the following units: an LOD generation (Generate LOD) unit, an inverse lifting transform (Inverse lifting) unit, and an inverse color conversion (Inverse transform colors) unit.
  • decompression is the reverse process of compression.
  • the functions of each unit in the decoder 300 can be referred to the functions of the corresponding units in the encoder 200 .
  • point cloud decoder 300 may include more, fewer, or different functional components than in FIG. 3 .
  • the decoder 300 can divide the point cloud into multiple LODs according to the Euclidean distances between points in the point cloud, and then decode the attribute information of the points in each LOD in sequence; for example, it calculates the number of zeros (zero_cnt) used in the zero run-length coding technique and decodes the residual values based on zero_cnt. The decoding framework 300 can then perform inverse quantization on the decoded residual values, and obtain the reconstructed value of each point by adding the inverse-quantized residual value to the predicted value of the current point, until all points in the point cloud are decoded. The current point will serve as a nearest neighbor of subsequent points in the LOD, and its reconstructed value will be used to predict the attribute information of the subsequent points.
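  • A simplified sketch of the attribute decoding loop described above, assuming a flat list of (zero_cnt, value) runs and a scalar quantization step; both are simplifications made for illustration.

```python
def expand_zero_runs(runs):
    """runs: list of (zero_cnt, value) pairs; returns per-point residuals."""
    residuals = []
    for zero_cnt, value in runs:
        residuals.extend([0] * zero_cnt)   # zero_cnt points with zero residual
        residuals.append(value)
    return residuals

def reconstruct_attributes(runs, predictions, qstep=2):
    residuals = expand_zero_runs(runs)
    # Inverse quantization, then add the predicted value of each point.
    return [pred + r * qstep for pred, r in zip(predictions, residuals)]

print(reconstruct_attributes([(2, 3), (0, -1)], predictions=[10, 10, 10, 10]))
# [10, 10, 16, 8]
```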
  • The above is the basic process of the point cloud codec based on the GPCC codec framework. With the development of technology, some modules or steps of this framework or process may be optimized. This application is applicable to the basic process of the point cloud codec based on the GPCC codec framework, but is not limited to this framework and process.
  • Inter-frame prediction can be introduced to improve point cloud coding efficiency.
  • Inter-frame prediction mainly includes steps such as motion estimation and motion compensation.
  • In the motion estimation step, the spatial motion offset vector of two adjacent frames is calculated and written into the code stream.
  • In the motion compensation step, the calculated motion vector is further used to calculate the spatial offset of the point cloud, and the offset point cloud frame is used as a reference to further improve the coding efficiency of the current frame.
  • Since a radar point cloud spans a large space and the motion vectors of its different parts differ greatly, in some embodiments the radar point cloud is divided into a road part and a non-road part, and only the non-road part is used to estimate the global motion vector.
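  • The sketch below illustrates one possible way of separating a lidar frame into road and non-road points with a height histogram; the histogram-peak heuristic and the numeric thresholds are assumptions made for illustration and are not the classification rule defined by this application.

```python
from collections import Counter

def split_road_points(points, bin_size=0.2, margin=0.3):
    # Road points dominate the lowest height band, so take the most populated
    # z-bin as the road level and keep points close to it as "road".
    bins = Counter(round(z / bin_size) for _, _, z in points)
    road_z = bins.most_common(1)[0][0] * bin_size
    road = [p for p in points if abs(p[2] - road_z) <= margin]
    non_road = [p for p in points if abs(p[2] - road_z) > margin]
    return road, non_road

pts = [(0, 0, 0.0), (1, 2, 0.1), (3, 1, 0.05), (2, 2, 2.5)]  # last point: a sign
road, objects = split_road_points(pts)
print(len(road), len(objects))  # only the non-road part feeds global motion estimation
```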
  • Based on the similarity of the content of consecutive point cloud frames, the embodiments of the present application do not calculate the classification information and the motion vector information once for each coding unit, but instead calculate them once every several coding units, thereby reducing the number of calculations of the classification information and the motion vector information, reducing the encoding and decoding processing time, and improving encoding and decoding efficiency.
  • FIG 4 is a schematic flowchart of a point cloud decoding method provided by an embodiment of the present application.
  • the point cloud decoding method in the embodiment of the present application can be completed by the point cloud decoding device shown in Figure 1 or Figure 3 above.
  • the point cloud decoding method in this embodiment of the present application includes:
  • S101 Decode the point cloud code stream and determine at least one of the classification information and motion vector information of the current decoding unit.
  • the classification information is determined based on the first parameter, and the motion vector information is determined based on the second parameter.
  • the first parameter is used to indicate the calculation period of the classification information
  • the second parameter is used to indicate the calculation period of the motion vector information.
  • inter-frame prediction mainly includes steps such as motion estimation and motion compensation.
  • In the motion estimation step, the spatial motion offset vector of two adjacent frames is calculated and written into the code stream.
  • In the motion compensation step, the calculated motion vector is further used to calculate the spatial offset of the point cloud, and the offset point cloud frame is used as a reference to further improve the coding efficiency of the current frame.
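  • A minimal sketch of the motion compensation step: the decoded global motion vector is applied to the reference frame, and the shifted frame then serves as the prediction reference for the current frame. A purely translational global motion is assumed here for simplicity; a full global motion model may also include rotation.

```python
def apply_global_motion(reference_points, motion_vector):
    dx, dy, dz = motion_vector
    # Shift every reference point by the global motion vector.
    return [(x + dx, y + dy, z + dz) for x, y, z in reference_points]

reference = [(0.0, 0.0, 0.0), (1.0, 2.0, 0.0)]
compensated = apply_global_motion(reference, (0.5, 0.0, 0.0))
print(compensated)  # [(0.5, 0.0, 0.0), (1.5, 2.0, 0.0)]
```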
  • the embodiments of the present application do not limit the specific content of the motion vector information of the current decoding unit, and may be motion information involved in steps such as motion estimation and motion compensation.
  • the motion vector information may be the spatial motion offset vectors of two adjacent frames in motion estimation, that is, the motion vector.
  • the motion vector information may also be the motion estimation (ME) between two adjacent frames used in motion compensation.
  • the motions of different objects may be different.
  • Take, for example, point cloud data captured by a lidar sensor mounted on a moving vehicle. In this point cloud data, roads and objects usually have different motions. Since the distance between the road and the radar sensor is relatively constant and the road changes only slightly from one vehicle position to the next, the points representing the road move very little relative to the radar sensor position. In contrast, objects such as buildings, road signs, vegetation, or other vehicles have larger motions. Since road points and object points have different motions, dividing the point cloud data into roads and objects improves the accuracy of global motion estimation and compensation, thereby improving compression efficiency.
  • For point cloud data that uses inter-frame prediction, in order to improve the accuracy of inter-frame prediction and improve compression efficiency, it is necessary to classify the point cloud in a decoding unit; for example, the point cloud in the decoding unit is divided into a road point cloud and a non-road point cloud.
  • the classification of the point cloud in the decoding unit is indicated by classification information, where the classification information can be understood as the information required to divide the point cloud into several categories.
  • the classification information of the current decoding unit can be understood as the classification information of the point cloud in the current decoding unit, that is, the information required to divide the point cloud in the current decoding unit into several categories.
  • the point cloud data can be divided into at least one decoding unit.
  • the decoding process of each decoding unit is independent, and the decoding process of each decoding unit is basically the same.
  • the embodiment of the present application takes the decoding unit currently being decoded, that is, the current decoding unit, as an example for description.
  • the embodiment of the present application does not limit the specific size of the current decoding unit, which can be determined according to actual needs.
  • the current decoding unit is the current point cloud frame, that is, a point cloud frame can be decoded as a decoding unit.
  • the current decoding unit is a partial area of the current point cloud frame.
  • the current point cloud frame is divided into multiple areas, and one area is used as a decoding unit for separate decoding.
  • the embodiments of this application do not limit the specific method of dividing the current point cloud frame into multiple areas.
  • the current point cloud frame is divided into multiple point cloud slices.
  • the sizes of the multiple point cloud slices may or may not be exactly the same, and each point cloud slice is used as a decoding unit for separate decoding.
  • the current point cloud frame is divided into multiple point cloud blocks.
  • the sizes of the multiple point cloud blocks may be the same or not exactly the same.
  • Each point cloud block is used as a decoding unit for separate decoding.
  • The encoding end can calculate the classification information periodically based on the calculation period of the classification information indicated by the first parameter, and/or calculate the motion vector information periodically based on the calculation period of the motion vector information indicated by the second parameter, thereby reducing the number of calculations of the classification information and/or the motion vector information and improving encoding and decoding efficiency.
  • The calculation period of the above classification information can be understood as calculating the classification information once every one or more decoding units, or once every one or more point cloud frames.
  • The calculation period of the above motion vector information can be understood as calculating the motion vector information once every one or more decoding units, or once every one or more point cloud frames.
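  • As a small illustration, the following shows how a decoding unit index can be mapped onto its calculation cycle when the period is expressed in decoding units; the 0-based cycle numbering is an assumption made for the example.

```python
def calculation_cycle(unit_index: int, period: int) -> int:
    """0-based index of the calculation cycle that contains this decoding unit."""
    return unit_index // period

# With a period of 5, units 0-4 fall in cycle 0 and units 5-9 in cycle 1, so
# the 6th unit shares the classification information computed for unit 5.
print([calculation_cycle(i, 5) for i in range(12)])
# [0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 2, 2]
```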
  • the specific implementation methods in which the decoding end decodes the point cloud code stream and determines at least one of the classification information and motion vector information of the current decoding unit in S101 include but are not limited to the following:
  • Method 1 The decoding end decodes at least one of the classification information and motion vector information of the current decoding unit from the point cloud code stream.
  • the encoding end can determine the classification information of each decoding unit and/or determine the motion vector information of each decoding unit based on the first parameter and/or the second parameter.
  • the encoding end writes at least one of the classification information and motion vector information of each decoding unit into the point cloud code stream. In this way, the decoding end can obtain the classification information of each decoding unit and/or the motion vector information of each decoding unit by directly decoding the code stream.
  • In this case, the encoding end can skip writing the first parameter and/or the second parameter into the point cloud code stream; that is, the encoding end does not write the first parameter and/or the second parameter into the point cloud code stream, but directly writes the classification information of each decoding unit and/or the motion vector information of each decoding unit into the point cloud code stream.
  • In this way, the decoder can use existing decoding methods to directly decode the classification information of each decoding unit and/or the motion vector information of each decoding unit from the code stream, thereby improving coding efficiency without increasing decoding complexity.
  • the decoding end may also determine at least one of the classification information and the motion vector information of the current decoding unit according to the following method 2.
  • Method 2 The decoder determines at least one of classification information and motion vector information through the following steps S101-A and S101-B:
  • S101-A Decode at least one of the first parameter and the second parameter from the point cloud code stream;
  • S101-B Determine the classification information of the current decoding unit according to the first parameter, and/or determine the motion vector information of the current decoding unit according to the second parameter.
  • the above-mentioned first parameter and second parameter can be used independently.
  • the encoding end writes the first parameter into the point cloud code stream, but does not write the second parameter into the point cloud code stream.
  • the decoding end can determine the classification information of the current decoding unit according to the first parameter, and obtain the motion vector information of the current decoding unit by decoding the point cloud code stream.
  • In some embodiments, the encoding end writes the second parameter into the point cloud code stream but does not write the first parameter into the point cloud code stream. In this way, the decoding end can determine the motion vector information of the current decoding unit based on the second parameter, and obtain the classification information of the current decoding unit by decoding the point cloud code stream.
  • In some embodiments, the encoding end writes both the first parameter and the second parameter into the point cloud code stream, so that the decoding end can determine the classification information of the current decoding unit according to the first parameter, and determine the motion vector information of the current decoding unit according to the second parameter.
  • When the encoding end writes the first parameter into the point cloud code stream, it skips writing the classification information of each decoding unit into the point cloud code stream; the decoding end then determines the classification information of the decoding unit based on the first parameter, instead of obtaining the classification information of the point cloud in each decoding unit by decoding it one by one.
  • Similarly, when the encoding end writes the second parameter into the point cloud code stream, it skips writing the motion vector information of each decoding unit into the point cloud code stream; the decoding end then determines the motion vector information of the decoding unit based on the second parameter, instead of obtaining the motion vector information of each decoding unit by decoding it one by one.
  • That is, the encoding end writes the first parameter and/or the second parameter into the code stream and skips writing the classification information and/or the motion vector information of each decoding unit into the point cloud code stream, which reduces the decoding processing time and reduces the code stream burden of encoding the classification information and/or the motion vector information of each decoding unit.
  • At least one of the above-mentioned first parameter and second parameter can be stored in the form of an unsigned integer, denoted as u(v), indicating that v bits are used to describe a parameter.
  • At least one of the above-mentioned first parameter and second parameter can also be stored in the form of unsigned exponential Golomb coding, denoted as ue(v), which means that the value of the parameter is first converted by exponential Golomb coding into a v-bit binary (0/1) sequence and then written into the code stream.
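  • For reference, the following is a sketch of the standard unsigned exponential Golomb construction ue(v); returning the code as a string of '0'/'1' characters is merely an implementation convenience for the example.

```python
def ue_encode(value: int) -> str:
    code = bin(value + 1)[2:]              # binary representation of value + 1
    return "0" * (len(code) - 1) + code    # leading zeros act as a length prefix

def ue_decode(bits: str) -> int:
    zeros = len(bits) - len(bits.lstrip("0"))
    return int(bits[zeros:2 * zeros + 1], 2) - 1

for v in (0, 1, 4, 7):
    print(v, ue_encode(v), ue_decode(ue_encode(v)))
# 0 -> "1", 1 -> "010", 4 -> "00101", 7 -> "0001000"
```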
  • the encoding end writes at least one of the first parameter and the second parameter into the sequence header parameter set.
  • Correspondingly, the decoding end obtains at least one of the first parameter and the second parameter by decoding the sequence header parameter set.
  • the first parameter is used to indicate that classification information is calculated once at intervals of multiple point cloud frames; and/or the second parameter is used to indicate that motion vector information is calculated once at intervals of multiple point cloud frames.
  • the first parameter and the second parameter are stored in the sequence header parameter set, as shown in Table 1:
  • classification_period represents the first parameter
  • motion_period represents the second parameter
  • classification_info represents classification information
  • motion_info represents motion vector information
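  • An illustrative sketch of parsing the two period parameters from a sequence header; the fixed u(8) field widths and the field order are assumptions made for the example, not the normative syntax table.

```python
class BitReader:
    def __init__(self, data: bytes):
        self.bits = "".join(f"{b:08b}" for b in data)
        self.pos = 0

    def u(self, n: int) -> int:
        """Read an n-bit unsigned integer, u(n)."""
        value = int(self.bits[self.pos:self.pos + n], 2)
        self.pos += n
        return value

def parse_sequence_header(data: bytes):
    reader = BitReader(data)
    return {
        "classification_period": reader.u(8),   # first parameter
        "motion_period": reader.u(8),           # second parameter
    }

print(parse_sequence_header(bytes([5, 10])))
# {'classification_period': 5, 'motion_period': 10}
```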
  • the encoding end writes at least one of the first parameter and the second parameter into the point cloud header information.
  • Correspondingly, the decoding end obtains at least one of the first parameter and the second parameter by decoding the point cloud header information.
  • the first parameter is used to indicate that for the i-th point cloud piece in the point cloud frame, the classification information of the i-th point cloud piece is calculated once at intervals of multiple point cloud frames, and i is a positive integer; and /Or, the second parameter is used to indicate that for the i-th point cloud piece in the point cloud frame, the motion vector information of the i-th point cloud piece is calculated once at intervals of multiple point cloud frames.
  • the first parameter and the second parameter are stored in the point cloud header information, as shown in Table 2:
  • classification_frame_period represents the first parameter
  • motion_frame_period represents the second parameter
  • classification_info represents classification information
  • motion_info represents motion vector information
  • the first parameter is used to indicate that, within one point cloud frame, the classification information is calculated once every multiple point cloud slices; and/or the second parameter is used to indicate that, within one point cloud frame, the motion vector information is calculated once every multiple point cloud slices.
  • the first parameter and the second parameter are stored in the point cloud header information, as shown in Table 3:
  • classification_slice_period represents the first parameter
  • motion_slice_period represents the second parameter
  • classification_info represents classification information
  • motion_info represents motion vector information
  • In some embodiments, before decoding the first parameter and the second parameter, the decoder first needs to decode the point cloud code stream to obtain a first flag inter_prediction_flag.
  • The first flag inter_prediction_flag is used to indicate whether to perform inter-frame prediction decoding; if the first flag inter_prediction_flag indicates that inter-frame prediction coding is performed, the point cloud code stream is decoded to obtain at least one of the first parameter and the second parameter.
  • the decoding end determines the classification information of the current decoding unit based on the first parameter.
  • the specific implementation methods include but are not limited to the following:
  • Method 1: the above S101-B includes the following steps S101-B-11 and S101-B-12:
  • S101-B-11: Determine, according to the first parameter, the classification information calculation cycle corresponding to the current decoding unit;
  • S101-B-12: Determine the classification information of the current decoding unit according to the classification information calculation cycle.
  • the classification information calculation cycles corresponding to different decoding units in the point cloud sequence may be the same or different, and the embodiment of the present application does not limit this.
  • In some embodiments, a single first parameter can be written into the code stream to indicate the calculation cycle of the classification information for every decoding unit in the point cloud sequence; for example, the first parameter indicates that the classification information is calculated once every K decoding units.
  • In other embodiments, multiple first parameters can be written into the code stream to indicate the calculation cycles of the classification information of the decoding units in the point cloud sequence. For example, three first parameters are written into the code stream, where the first indicates that the classification information is calculated once every K1 decoding units, the second indicates that it is calculated once every K2 decoding units, and the third indicates that it is calculated once every K3 decoding units.
  • the calculation period of the classification information corresponding to the current decoding unit can be determined according to the first parameter decoded from the code stream.
  • For example, assume the current decoding unit is the current point cloud frame, the first parameter indicates that the classification information is recalculated after every 4 point cloud frames, and the current decoding unit is the 6th point cloud frame in the decoding order. Then the classification information is calculated once for the 0th point cloud frame in the decoding order, once for the 5th point cloud frame, and once for the 10th point cloud frame. The 0th to 4th point cloud frames can be understood as the first calculation cycle of the classification information, and the 5th to 9th point cloud frames as the second calculation cycle. Since the current decoding unit falls within the second calculation cycle, the second calculation cycle is determined as the classification information calculation cycle corresponding to the current decoding unit.
  • After determining the classification information calculation cycle corresponding to the current decoding unit according to the above steps, the decoding end determines the classification information of the current decoding unit according to the classification information calculation cycle.
  • the embodiments of this application do not limit the specific way in which the decoding end determines the classification information of the current decoding unit based on the classification information calculation period corresponding to the current decoding unit.
  • the encoding and decoding ends agree that the classification information of the decoding unit within the classification information calculation cycle is a default value of 1, and the decoding end determines the default value of 1 as the classification information of the current decoding unit.
  • both encoding and decoding parties agree to use a preset calculation method to calculate the classification information of the decoding unit within the classification information calculation cycle. For example, if the current decoding unit is an area of the current point cloud frame, the classification information of the current decoding unit can be determined based on the classification information of the decoded point clouds around the current decoding unit in the current point cloud frame.
  • the encoding end writes the classification information of the first decoding unit within a classification information calculation cycle into the code stream, but the classification information of other decoding units within the classification information calculation cycle is not written into the code stream. In this way, the decoding end can determine the classification information of the current decoding unit according to the position of the current decoding unit in the classification information calculation cycle corresponding to the current decoding unit.
  • Example 1 if the current decoding unit is the first decoding unit within the classification information calculation cycle, decode the point cloud code stream to obtain the classification information of the current point decoding unit.
  • Example 2 If the current decoding unit is not the first decoding unit within the classification information calculation cycle, the classification information of the current decoding unit is determined based on the decoded information or the default value.
  • the encoding end writes the first parameter and the classification information of the first decoding unit in each classification information calculation cycle into the point cloud code stream, but does not write the classification information of other decoding units in the classification information calculation cycle. Information is written into the point cloud code stream. In this way, after determining the classification information calculation cycle corresponding to the current decoding unit, the decoding end can determine the classification information of the current decoding unit based on whether the current decoding unit is the first decoding unit within the classification information calculation cycle.
  • For example, the classification information calculation cycle corresponding to the current decoding unit is the 5th point cloud frame to the 9th point cloud frame. If the current decoding unit is the 5th point cloud frame in the decoding order, the decoding end directly decodes the classification information of the current decoding unit from the code stream. If the current decoding unit is not the 5th point cloud frame, for example the 6th point cloud frame, the decoding end determines the default value as the classification information of the current decoding unit, or determines the classification information of the current decoding unit based on the decoded information.
  • the implementation of this application does not limit the specific implementation method of determining the classification information of the current decoding unit based on the decoded information in the above Example 2.
  • In some embodiments, the classification information of the current decoding unit is determined based on the classification information of the first decoding unit within the classification information calculation cycle corresponding to the current decoding unit. For example, the classification information of the first decoding unit within the classification information calculation cycle is determined as the classification information of the current decoding unit, or the classification information of the first decoding unit within the classification information calculation cycle is processed to obtain the classification information of the current decoding unit.
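  • A decoder-side sketch of the behaviour described in the examples above, where only the first decoding unit of each calculation cycle carries classification information in the code stream and later units reuse it or fall back to a default; the list-based signalled values and the numeric default are assumptions made for illustration.

```python
def classification_for_unit(unit_index, period, signalled, default=0):
    """signalled: one decoded classification value per calculation cycle, in order."""
    cycle = unit_index // period
    if cycle < len(signalled):
        # Either this is the first unit of the cycle (its value was decoded
        # from the code stream) or a later unit reusing that decoded value.
        return signalled[cycle]
    return default   # nothing decoded for this cycle: fall back to the default

signalled = [1, 2, 3]            # classification info for cycles 0, 1 and 2
print([classification_for_unit(i, 5, signalled) for i in range(12)])
# units 0-4 -> 1, units 5-9 -> 2, units 10-11 -> 3
```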
  • the classification information of the current decoding unit is determined according to the following step 11:
  • Step 11 Determine the classification information of the current decoding unit based on the classification information of M decoding units.
  • the M decoding units are M decoded decoding units located before the current decoding unit in the decoding order, and M is a positive integer.
  • the embodiment of the present application does not limit the specific selection method of the above M decoding units.
  • the above-mentioned M decoding units are sequentially adjacent in the decoding order without any gaps in between.
  • the above-mentioned M decoding units may be any M decoding units located before the current decoding unit in the decoding order, that is, these M decoding units may be adjacent or not completely adjacent. There are no restrictions on this.
  • In this way, M decoded decoding units located before the current decoding unit in the decoding order are obtained, and the classification information of the current decoding unit is determined according to the classification information of these M decoding units.
  • the implementation method of determining the classification information of the current decoding unit based on the classification information of the M decoding units in step 11 includes at least the following examples:
  • the classification information of a decoding unit located before the current decoding unit in the decoding order is determined as the classification information of the current decoding unit. For example, if the current decoding unit is the sixth point cloud frame in the decoding sequence, then the classification information of the fifth point cloud frame in the decoding sequence is determined as the classification information of the current decoding unit.
  • the average value of the classification information of M decoding units is determined as the classification information of the current decoding unit.
  • the weighted average of the classification information of M decoding units is determined as the classification information of the current decoding unit.
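• As a non-normative illustration of the options in step 11, the following sketch derives the classification information of the current decoding unit from the classification information of M previously decoded units; it assumes that the classification information takes the form of the two height thresholds described later, and the function name and weighting are illustrative choices rather than part of any standard.

```python
from typing import List, Optional, Tuple

# Classification information is assumed here to be the pair
# (first height threshold Top_thr, second height threshold Bottom_thr).
ClassificationInfo = Tuple[float, float]

def derive_classification_info(previous: List[ClassificationInfo],
                               weights: Optional[List[float]] = None) -> ClassificationInfo:
    """Derive the classification information of the current decoding unit from
    the classification information of M decoded units (decoding order, most
    recent last): reuse the most recent value, take the plain average, or take
    a weighted average."""
    if weights is None:
        top = sum(t for t, _ in previous) / len(previous)
        bottom = sum(b for _, b in previous) / len(previous)
        return top, bottom
    wsum = sum(weights)
    top = sum(w * t for w, (t, _) in zip(weights, previous)) / wsum
    bottom = sum(w * b for w, (_, b) in zip(weights, previous)) / wsum
    return top, bottom

# Usage: reuse the previous unit, average M units, or weight recent units more.
history = [(1.2, -2.0), (1.3, -2.1), (1.4, -2.2)]
print(derive_classification_info(history[-1:]))              # previous unit only
print(derive_classification_info(history))                   # average of M units
print(derive_classification_info(history, [1.0, 2.0, 3.0]))  # weighted average
```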
  • the decoding end can also determine the classification information of the current decoding unit according to the following method 2.
  • Example 1 if the current decoding unit is the NK-th decoding unit in the decoding sequence, decode the point cloud code stream to obtain the classification information of the current decoding unit.
  • K and N are both positive integers.
  • Example 2 if the current decoding unit is not the NKth decoding unit in the decoding order, the classification information of the current decoding unit is determined based on the decoded information or the default value.
  • the classification information calculation period corresponding to each decoding unit in the point cloud series is the same, for example, the classification information is calculated once every K decoding units.
• the encoding end writes the classification information of the decoding unit numbered 0 in the decoding sequence and of the decoding units numbered an integer multiple of K (i.e., the NK-th decoding units) into the code stream, but does not write the classification information of the decoding units whose numbers are not an integer multiple of K (that is, not the NK-th decoding units) into the code stream, thereby reducing the code stream burden.
  • the decoding end when the decoding end decodes the current decoding unit, it determines whether the current decoding unit is the NK-th decoding unit in the decoding sequence, that is, it determines whether the serial number of the current decoding unit in the decoding sequence is an integer multiple of K. If the decoding end determines that the current decoding unit is the NK-th decoding unit in the decoding order, the classification information of the current decoding unit is decoded from the code stream. If the current decoding unit is not the NKth decoding unit in the decoding order, the default value is determined as the classification information of the current decoding unit, or the classification information of the current decoding unit is determined based on the decoded information.
  • the decoding end decodes the classification information from the code stream once every K decoding units, which can reduce the number of decoding times at the decoding end.
• For example, the point cloud sequence includes 1000 point cloud frames. Assume that one point cloud frame is used as a decoding unit, so that the number of decoding times on the decoding end is 1000/K instead of 1000, which greatly reduces the number of decoding times, reduces the decoding burden on the decoding end, and improves decoding efficiency.
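• A minimal sketch of the decision logic of method 2 is given below; the function and the stand-in bitstream parser are hypothetical and only illustrate that classification information is parsed for the NK-th decoding units and reused or defaulted elsewhere.

```python
from typing import Callable, List, Optional

def classification_info_for_unit(unit_index: int,
                                 K: int,
                                 decoded_history: List[float],
                                 parse_from_bitstream: Callable[[], float],
                                 default_value: Optional[float] = None) -> float:
    """Method 2 at the decoding end: the classification information is carried
    in the code stream only for the NK-th decoding units (indices 0, K, 2K, ...).
    For the remaining units it is either an agreed default value or derived from
    already decoded information, so only about 1/K of the units require parsing."""
    if unit_index % K == 0:
        info = parse_from_bitstream()      # decode from the point cloud code stream
    elif default_value is not None:
        info = default_value               # agreed default value
    else:
        info = decoded_history[-1]         # derive from decoded information
    decoded_history.append(info)
    return info

# Usage with a stand-in parser that would normally read the code stream.
history: List[float] = []
for i in range(10):
    classification_info_for_unit(i, K=5, decoded_history=history,
                                 parse_from_bitstream=lambda: 1.5)
print(history)   # parsed at indices 0 and 5, reused elsewhere
```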
  • the specific process of determining the classification information of the current decoding unit based on the decoded information can refer to the descriptions in steps 11 and 12 above. This will not be described again.
  • the classification information of the current decoding unit can be determined.
  • classification information can be understood as the information required to divide the point cloud into different categories.
  • the embodiments of this application do not limit the specific expression form of the classification information.
  • the classification information includes at least one of a first height threshold and a second height threshold, and the first height threshold and the second height threshold are used for classification of the point cloud in the current decoding unit.
  • At least one of the first height threshold and the second height threshold is a preset value.
  • At least one of the first height threshold and the second height threshold is a statistical value.
  • a histogram is used to count the height values of the point cloud midpoints.
  • the horizontal axis of the histogram is the height value of the point cloud midpoint, and the vertical axis of the histogram is the number of points at that height value.
  • Figure 5 takes the radar point cloud as an example for statistics.
  • the height of the radar is the height zero point, so the height values of most points are negative.
• the threshold that lies a times the standard deviation (for example, 1.5 times) above the center is recorded as the first height threshold Top_thr.
• the threshold that lies b times the standard deviation (for example, 1.5 times) below the center is recorded as the second height threshold Bottom_thr.
• the first height threshold and the second height threshold divide the point cloud into different categories.
• the points whose height value is between the first height threshold and the second height threshold are recorded as the first type of point cloud, and the points whose height value is greater than the first height threshold or less than the second height threshold are recorded as the second type of point cloud.
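• The following sketch illustrates one possible way to obtain the two height thresholds and the resulting two point cloud types from the height statistics described above; it assumes the "center" is the mean height (the peak of the height histogram could be used instead) and uses a = b = 1.5 as in the example.

```python
import numpy as np

def height_thresholds(heights: np.ndarray, a: float = 1.5, b: float = 1.5):
    """Illustrative computation of the two height thresholds: Top_thr lies
    a standard deviations above the center, Bottom_thr lies b standard
    deviations below it."""
    center = float(np.mean(heights))
    std = float(np.std(heights))
    top_thr = center + a * std       # first height threshold Top_thr
    bottom_thr = center - b * std    # second height threshold Bottom_thr
    return top_thr, bottom_thr

def classify_points(heights: np.ndarray, top_thr: float, bottom_thr: float):
    """Points between the two thresholds form the first type of point cloud;
    points above Top_thr or below Bottom_thr form the second type."""
    first_type = (heights <= top_thr) & (heights >= bottom_thr)
    return first_type, ~first_type

# Radar-style example: the sensor height is the zero level, so most heights
# are negative.
heights = np.random.default_rng(0).normal(loc=-1.6, scale=0.4, size=10000)
top, bottom = height_thresholds(heights)
first, second = classify_points(heights, top, bottom)
print(top, bottom, first.sum(), second.sum())
```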
  • the first parameter classification_period may include at least one of the first sub-parameter top_threshold_period and the second sub-parameter bottom_threshold_period;
  • the first sub-parameter top_threshold_period is used to indicate the calculation period of the first height threshold
  • the second sub-parameter bottom_threshold_period is used to indicate the calculation period of the second height threshold.
• the first sub-parameter and the second sub-parameter can be assigned values independently.
• the calculation period of the first height threshold and the calculation period of the second height threshold may be the same or different, and this embodiment of the present application does not limit this.
  • the specific implementation methods by which the decoder determines the motion vector information of the current decoding unit according to the second parameter include but are not limited to the following:
  • Method 1 the above S101-B includes the following steps of S101-B-21 and S101-B-22:
  • the motion vector information calculation cycles corresponding to different decoding units in the point cloud sequence may be the same or different, and the embodiment of the present application does not limit this.
• a second parameter can be written in the code stream, and the second parameter can be used to indicate the calculation period of the motion vector information of each decoding unit in the point cloud sequence. For example, the second parameter indicates that motion vector information is calculated once every R decoding units.
• multiple second parameters can be written in the code stream, and the multiple second parameters can be used to indicate the calculation cycles of the motion vector information corresponding to the decoding units in the point cloud sequence. For example, three second parameters are written in the code stream, where the first second parameter indicates that motion vector information is calculated once every R1 decoding units, the second second parameter indicates that motion vector information is calculated once every R2 decoding units, and the third second parameter indicates that motion vector information is calculated once every R3 decoding units.
  • the calculation period of motion vector information corresponding to the current decoding unit can be determined according to the second parameter decoded from the code stream.
  • the current decoding unit is the current point cloud frame
  • the second parameter indicates that motion vector information is calculated every 4 point cloud frames.
• Assume that the current decoding unit is the 6th point cloud frame in the decoding sequence, and that in the decoding sequence the motion vector information is calculated once at the 0th point cloud frame, once at the 5th point cloud frame, and once at the 10th point cloud frame. The 0th point cloud frame to the 4th point cloud frame can be understood as the first calculation cycle of motion vector information.
  • the 5th point cloud frame to the 9th point cloud frame can be understood as the second calculation cycle of motion vector information, and the current decoding unit is in the second calculation cycle.
  • the second calculation period is determined as the motion vector information calculation period corresponding to the current decoding unit.
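• The cycle lookup described in this walk-through can be sketched as follows; the helper is illustrative and assumes the calculation cycle length (5 frames in the example above) is known to the decoder.

```python
def calculation_cycle(unit_index: int, cycle_length: int):
    """Return the index of the calculation cycle that a decoding unit falls in,
    the first unit of that cycle, and whether the current unit is that first
    unit. With cycle_length = 5 the cycles are frames 0-4, 5-9, 10-14, ...,
    matching the walk-through above (calculation at frames 0, 5, 10)."""
    cycle_index = unit_index // cycle_length
    first_unit = cycle_index * cycle_length
    return cycle_index, first_unit, unit_index == first_unit

# The 6th point cloud frame falls in the second cycle (frames 5-9) and is not
# the first unit of that cycle, so its motion vector information is not parsed.
print(calculation_cycle(6, 5))   # (1, 5, False)
```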
• After determining the motion vector information calculation period corresponding to the current decoding unit according to the above steps, the decoding end determines the motion vector information of the current decoding unit according to the motion vector information calculation period.
  • the embodiments of this application do not limit the specific manner in which the decoder determines the motion vector information of the current decoding unit based on the motion vector information calculation cycle corresponding to the current decoding unit.
  • the encoding and decoding ends agree that the motion vector information of the decoding unit within the motion vector information calculation period is a default value of 1, and the decoding end determines the default value of 1 as the motion vector information of the current decoding unit.
  • both encoding and decoding parties agree to use a preset calculation method to calculate the motion vector information of the decoding unit within the motion vector information calculation period. For example, if the current decoding unit is an area of the current point cloud frame, the motion vector information of the current decoding unit can be determined based on the motion vector information of the decoded area around the current decoding unit in the current point cloud frame.
• the encoding end writes the motion vector information of the first decoding unit within a motion vector information calculation cycle into the code stream, but does not write the motion vector information of other decoding units within the motion vector information calculation cycle into the code stream.
  • the decoding end can determine the motion vector information of the current decoding unit according to the position of the current decoding unit in the motion vector information calculation cycle corresponding to the current decoding unit.
• Example 1 if the current decoding unit is the first decoding unit within the motion vector information calculation cycle, decode the point cloud code stream to obtain the motion vector information of the current decoding unit.
  • Example 2 If the current decoding unit is not the first decoding unit within the motion vector information calculation cycle, the motion vector information of the current decoding unit is determined based on the decoded information or the default value.
• the encoding end writes the second parameter and the motion vector information of the first decoding unit in each motion vector information calculation cycle into the point cloud code stream, without writing the motion vector information of the other decoding units in the calculation cycle into the point cloud code stream.
  • the decoder can determine the motion vector information of the current decoding unit based on whether the current decoding unit is the first decoding unit within the motion vector information calculation cycle.
• For example, the motion vector information calculation period corresponding to the current decoding unit is the 5th point cloud frame to the 9th point cloud frame. If the current decoding unit is the 5th point cloud frame in the decoding sequence, the decoding end decodes the motion vector information of the current decoding unit directly from the code stream. If the current decoding unit is not the 5th point cloud frame, for example, the 6th point cloud frame, the decoding end determines the default value as the motion vector information of the current decoding unit, or determines the motion vector information of the current decoding unit based on the decoded information.
  • the implementation of this application does not limit the specific implementation manner of determining the motion vector information of the current decoding unit based on the decoded information in the above Example 2.
• the motion vector information of the current decoding unit is determined based on the motion vector information of the first decoding unit within the motion vector information calculation period corresponding to the current decoding unit. For example, the motion vector information of the first decoding unit within the motion vector information calculation period is determined as the motion vector information of the current decoding unit, or the motion vector information of the first decoding unit within the motion vector information calculation period is processed to obtain the motion vector information of the current decoding unit.
  • the motion vector information of the current decoding unit is determined according to the following step 21:
  • Step 21 Determine the motion vector information of the current decoding unit based on the motion vector information of S decoding units.
• the S decoding units are S decoded decoding units located before the current decoding unit in the decoding order, and S is a positive integer.
  • the embodiment of the present application does not limit the specific selection method of the above S decoding units.
  • the above-mentioned S decoding units are adjacent in sequence in the decoding order, with no interval in between.
  • the above-mentioned S decoding units may be any S decoding units located before the current decoding unit in the decoding order, that is, these S decoding units may be adjacent or not completely adjacent. There are no restrictions on this.
• the decoding end obtains, from the decoded information, the S decoding units located before the current decoding unit in the decoding order, and determines the motion vector information of the current decoding unit according to the motion vector information of these S decoding units.
  • the implementation method of determining the motion vector information of the current decoding unit based on the motion vector information of the S decoding units in step 21 at least includes the following examples:
  • the motion vector information of a decoding unit located before the current decoding unit in the decoding order is determined as the motion vector information of the current decoding unit.
• For example, if the current decoding unit is the 6th point cloud frame in the decoding order, the motion vector information of the 5th point cloud frame in the decoding order is determined as the motion vector information of the current decoding unit.
  • the average value of the motion vector information of S decoding units is determined as the motion vector information of the current decoding unit.
  • the weighted average of the motion vector information of S decoding units is determined as the motion vector information of the current decoding unit.
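• As an illustration of step 21, the sketch below derives the motion vector information of the current decoding unit from S previously decoded units; it assumes the motion vector information is represented by the offset vector only, since combining rotation matrices would require rotation interpolation and is outside this sketch.

```python
import numpy as np

def derive_motion_vector(previous_vectors, weights=None):
    """Derive the motion vector information of the current decoding unit from
    the offset vectors of S decoded units (decoding order, most recent last):
    reuse the most recent vector, take the plain average, or take a weighted
    average."""
    vectors = np.asarray(previous_vectors, dtype=float)
    if weights is None:
        return vectors.mean(axis=0)
    return np.average(vectors, axis=0, weights=weights)

history = [np.array([0.9, 0.0, 0.1]),
           np.array([1.0, 0.0, 0.1]),
           np.array([1.1, 0.1, 0.0])]
print(derive_motion_vector(history[-1:]))              # reuse previous unit
print(derive_motion_vector(history))                   # average of S units
print(derive_motion_vector(history, [1.0, 2.0, 3.0]))  # weighted average
```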
  • the decoder can also determine the motion vector information of the current decoding unit according to the following method 2.
  • Example 1 if the current decoding unit is the NR-th decoding unit in the decoding order, decode the point cloud code stream to obtain the motion vector information of the current decoding unit, R and N are both positive integers.
  • Example 2 if the current decoding unit is not the NR-th decoding unit in the decoding order, determine the motion vector information of the current decoding unit based on the decoded information or the default value.
  • the motion vector information calculation period corresponding to each decoding unit in the point cloud series is the same.
  • the motion vector information is calculated once every R decoding units.
• the encoding end writes the motion vector information of the decoding unit numbered 0 in the decoding sequence and of the decoding units numbered an integer multiple of R (i.e., the NR-th decoding units) into the code stream, without writing the motion vector information of the decoding units whose numbers are not an integer multiple of R (that is, not the NR-th decoding units) into the code stream, thereby reducing the code stream burden.
  • the decoding end when the decoding end decodes the current decoding unit, it determines whether the current decoding unit is the NR-th decoding unit in the decoding sequence, that is, it determines whether the serial number of the current decoding unit in the decoding sequence is an integer multiple of R. If the decoding end determines that the current decoding unit is the NR-th decoding unit in the decoding order, the motion vector information of the current decoding unit is decoded from the code stream. If the current decoding unit is not the NR-th decoding unit in the decoding order, the default value is determined as the motion vector information of the current decoding unit, or the motion vector information of the current decoding unit is determined based on the decoded information.
  • the decoding end decodes the motion vector information from the code stream once every R decoding units, which can reduce the number of decodings on the decoding end.
• For example, the point cloud sequence includes 1000 point cloud frames. Assume that one point cloud frame is used as a decoding unit. In this way, the number of decoding times on the decoding end is 1000/R instead of 1000, which greatly reduces the number of decoding times, reduces the decoding burden on the decoding end, and improves decoding efficiency.
• the specific process of determining the motion vector information of the current decoding unit based on the decoded information can refer to the descriptions of steps 21 and 22 above, and will not be repeated here.
  • the decoding end can also determine the motion vector information of the current decoding unit according to the following method 3.
  • Method 3 The decoding end determines the motion vector information based on the degree of change of the classification information of the decoding unit. That is, the decoder determines the motion vector information of the current decoding unit according to the following steps 1 and 2:
  • Step 1 Determine the degree of change of the classification information based on the first parameter
  • Step 2 Determine the motion vector information of the current decoding unit according to the degree of change.
  • the motion vector information of different decoding units can be determined according to the degree of change of the classification information of different decoding units.
  • the embodiment of the present application does not limit the specific implementation method of determining the degree of change of the classification information of the point cloud according to the first parameter in step 1 above.
  • the classification information of multiple decoding units is determined according to the first parameter, and the degree of change of the classification information is determined based on the classification information of the multiple decoding units. For example, when the classification information of the multiple decoding units differs greatly, it means that the degree of change of the classification information is large. If the difference of the classification information of the multiple decoding units is small, it means that the degree of change of the classification information is small.
  • the degree of change of the classification information is determined according to the classification information of the current decoding unit and the classification information of the reference decoding unit of the current decoding unit.
  • the classification information of the current decoding unit is determined according to the first parameter.
• the degree of change between the classification information of the current decoding unit and the classification information of the reference decoding unit of the current decoding unit is determined. For example, the absolute value of the difference between the classification information of the current decoding unit and the classification information of the reference decoding unit of the current decoding unit is determined as the degree of change of the classification information.
  • the motion vector information of the current decoding unit is determined based on the degree of change of the classification information.
• For example, if the degree of change is small, the default value or the motion vector information of the previous decoding unit of the current decoding unit in the decoding order is determined as the motion vector information of the current decoding unit.
• If the degree of change is large, the point cloud code stream is decoded to obtain the motion vector information of the current decoding unit.
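• A minimal sketch of method 3 is shown below; the change threshold and the stand-in parser are assumptions used only to illustrate that a small change in the classification information leads to reusing previous or default motion vector information, while a large change leads to decoding it from the code stream.

```python
from typing import Callable, Tuple

def motion_vector_by_change(current_class: Tuple[float, float],
                            reference_class: Tuple[float, float],
                            previous_mv,
                            parse_mv: Callable[[], object],
                            change_threshold: float = 0.1):
    """Measure the degree of change of the classification information as the
    absolute difference between the current unit's and the reference unit's
    height thresholds, then either reuse the previous motion vector
    information (small change) or parse fresh information (large change)."""
    change = (abs(current_class[0] - reference_class[0]) +
              abs(current_class[1] - reference_class[1]))
    if change < change_threshold:
        return previous_mv          # reuse / default motion vector information
    return parse_mv()               # decode from the point cloud code stream

print(motion_vector_by_change((1.5, -2.0), (1.52, -2.01),
                              previous_mv=[0.0, 0.0, 0.0],
                              parse_mv=lambda: [1.0, 0.0, 0.1]))
```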
  • the motion vector information of the current decoding unit can be determined.
  • the motion vector information can be understood as the motion information required by the decoder for inter-frame prediction.
  • the embodiments of this application do not limit the specific expression form of the motion vector information.
  • the motion vector information includes at least one of a rotation matrix and an offset vector.
  • the rotation matrix describes the three-dimensional rotation of the decoding unit and the reference decoding unit
  • the offset vector describes the offset in three directions of the coordinate origins of the decoding unit and the reference decoding unit.
• If the offset vector is a zero vector, this means that there is no offset between the coordinate origins of the current decoding unit and the reference decoding unit.
• In this case, the motion vector between the two decoding units is recorded as a zero motion vector.
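• The sketch below shows how motion vector information consisting of a rotation matrix and an offset vector could be applied to the points of a reference decoding unit; a zero motion vector (identity rotation and zero offset) leaves the reference points unchanged. This is an illustrative formulation, not a normative one.

```python
import numpy as np

def apply_motion(reference_points: np.ndarray,
                 rotation: np.ndarray,
                 offset: np.ndarray) -> np.ndarray:
    """Apply motion vector information (3x3 rotation matrix R and 3-element
    offset vector t) to an N x 3 array of reference points: p' = R @ p + t."""
    return reference_points @ rotation.T + offset

points = np.array([[1.0, 2.0, -1.5],
                   [0.5, 0.0, -1.6]])
identity = np.eye(3)
zero_offset = np.zeros(3)
print(apply_motion(points, identity, zero_offset))   # unchanged: zero motion
```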
  • the second parameter motion_period includes at least one of the third sub-parameter rotation_matrix_period and the fourth sub-parameter translation_vector_period.
  • the third sub-parameter rotation_matrix_period is used to indicate the calculation period of the rotation matrix
  • the fourth sub-parameter translation_vector_period is used to indicate the calculation period of the offset vector.
  • the above third sub-parameter and fourth sub-parameter can be assigned values independently.
  • calculation period of the rotation matrix and the calculation period of the offset vector may be the same or different, and the embodiment of the present application does not limit this.
• the encoding end writes at least one of the classification information and motion vector information of the current decoding unit into the point cloud code stream.
  • the decoder can decode the point cloud code stream and directly obtain at least one of the classification information and motion vector information of the current decoding unit.
• After determining at least one of the classification information and motion vector information of the current decoding unit according to the above steps, the decoding end performs the following step S102.
  • S102 Decode the current decoding unit according to at least one of the classification information and the motion vector information of the current decoding unit.
• the category of the point cloud in the current decoding unit is determined based on the classification information of the current decoding unit, and different motion vector information is used to perform inter-frame prediction on point clouds of different categories. For example, taking the point cloud data scanned by vehicle-mounted radar as an example, the point cloud can be divided into road points and object points, and the motion vector information of road points and object points is different.
  • the embodiment of the present application does not limit the specific process of decoding the current decoding unit according to at least one of the classification information and the motion vector information of the current decoding unit in the above S102.
• the point cloud in the current decoding unit can be divided into multiple categories, and different motion vector information is assigned to each category, where the motion vector information assigned to different categories can be preset values corresponding to different categories, or values calculated based on the categories. This embodiment of the present application does not limit this.
• the decoder can determine the classification information of the current decoding unit on its own, for example, the classification information of the current decoding unit is determined based on the decoded information of decoded units surrounding the current decoding unit. Then, according to the classification information, the point cloud in the current decoding unit is divided into multiple categories, and according to the motion vector information of the current decoding unit, the motion vector information of each category of point cloud in the current decoding unit is determined.
• For example, the above-determined motion vector information can be determined as the motion vector information of the first type point cloud, and the motion vector information of the second type point cloud can be determined as a preset value.
  • the above S102 includes the following steps:
• the decoding end uses the classification information of the current decoding unit to divide the point cloud in the current decoding unit into P-type point clouds, determines the motion vector information corresponding to the P-type point cloud based on the motion vector information of the current decoding unit, and then decodes the current decoding unit according to the motion vector information corresponding to the P-type point cloud. That is to say, in this embodiment of the present application, different types of point clouds in the current decoding unit are decoded using different motion vector information, thereby improving the accuracy of decoding.
• the embodiment of the present application does not limit the specific implementation of the above-mentioned S102-A of classifying the point cloud in the current decoding unit into P-type point clouds according to the classification information of the current decoding unit.
  • the classification information of the current decoding unit can be a category identifier.
  • each point in the current decoding unit includes a category identifier.
• the point cloud in the current decoding unit can be divided into P-type point clouds according to the category identifier.
  • the classification information of the current decoding unit includes a first height threshold and a second height threshold, and the first height threshold is greater than the second height threshold.
  • the above S102-A includes the following steps:
  • S102-A Classify the point cloud in the current decoding unit into type P point cloud according to the first height threshold and the second height threshold.
• For example, points in the current decoding unit whose height value is greater than the first height threshold are classified into one type of point cloud, points whose height value is between the first height threshold and the second height threshold are classified into one type of point cloud, and points whose height value is less than the second height threshold are classified into one type of point cloud.
• For another example, points in the current decoding unit whose height value is less than or equal to the first height threshold and greater than or equal to the second height threshold are classified into the first type of point cloud, and points in the current decoding unit whose height value is greater than the first height threshold or less than the second height threshold are classified into the second type of point cloud.
  • the motion vector information corresponding to the P-type point cloud is determined based on the motion vector information of the current decoding unit.
  • the motion vector information of the current decoding unit is determined as the motion vector information of one type of point cloud in the P type point cloud, and the motion vector information of other types of point clouds in the P type point cloud can be preset values.
• For example, if the P-type point cloud includes the above-mentioned first type point cloud and second type point cloud, the motion vector information of the current decoding unit can be determined as the motion vector information of the second type point cloud, and the motion vector information of the first type point cloud can be a preset value, for example, a zero vector.
  • the first type of point cloud can be understood as road point cloud
• the second type of point cloud can be understood as non-road point cloud. Since the road changes little and the non-road point cloud is the focus of research, the motion vector information of the current decoding unit is determined as the motion vector information of the non-road points, and the road points are predicted to be static, that is, zero motion; in other words, the motion vector information of the road points is a zero vector.
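• The per-type assignment described above can be sketched as follows, with the first type (road-like) points predicted as static and the second type (non-road) points using the decoded motion vector information; the dictionary layout is an illustrative assumption.

```python
import numpy as np

def per_class_motion(decoded_rotation: np.ndarray, decoded_offset: np.ndarray):
    """Assign motion vector information to the two point cloud types: the
    first type (road-like points) gets a zero motion vector, while the second
    type (non-road points) gets the motion vector information of the current
    decoding unit."""
    return {
        "first_type":  (np.eye(3), np.zeros(3)),           # zero motion
        "second_type": (decoded_rotation, decoded_offset),  # decoded motion
    }

motion = per_class_motion(np.eye(3), np.array([0.8, 0.0, 0.1]))
print(motion["first_type"][1], motion["second_type"][1])
```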
  • the current decoding unit is decoded according to the motion vector information corresponding to the P-type point cloud.
  • the embodiments of this application do not limit the specific implementation method of decoding the current decoding unit according to the motion vector information corresponding to the P-type point cloud.
  • the decoding end can determine the reference decoding unit of the current decoding unit; perform motion compensation on the reference decoding unit according to the motion vector information of the P-type point cloud to obtain the prediction information of the current decoding unit; based on the prediction information, perform motion compensation on the current decoding unit. At least one of geometric information and attribute information of the decoding unit is decoded.
  • decoding the geometric information of the current decoding unit according to the prediction information is taken as an example.
  • the prediction information can be understood as the prediction unit of the current decoding unit.
  • the space occupancy of the current decoding unit can be predicted based on the space occupancy in the prediction unit, and then the geometry code stream of the current decoding unit can be decoded based on the predicted space occupancy of the current decoding unit to obtain the geometry information of the current decoding unit.
  • decoding the attribute information of the current decoding unit according to the prediction information is used as an example.
  • the prediction information can be understood as the prediction unit of the current decoding unit.
• for a point in the current decoding unit, at least one reference point of the point is obtained in the prediction unit, and based on the attribute information of the at least one reference point, the attribute information of the point is predicted to obtain the attribute prediction value of the point.
  • the attribute code stream is decoded to obtain the attribute residual value of the point, and the attribute reconstruction value of the point is determined based on the attribute prediction value and attribute residual value of the point.
  • the attribute reconstruction value of each point in the current decoding unit can be determined, and then the attribute reconstruction value of the current decoding unit can be obtained.
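• The attribute prediction and reconstruction steps described above can be sketched as follows; the choice of three nearest reference points and inverse-distance weighting are assumptions for illustration and need not match the exact rules of any codec.

```python
import numpy as np

def predict_attribute(point: np.ndarray,
                      prediction_positions: np.ndarray,
                      prediction_attributes: np.ndarray,
                      num_refs: int = 3) -> float:
    """Pick the num_refs nearest points of the prediction unit as reference
    points and combine their attributes with inverse-distance weights to get
    the attribute prediction value of the point."""
    dists = np.linalg.norm(prediction_positions - point, axis=1)
    nearest = np.argsort(dists)[:num_refs]
    weights = 1.0 / (dists[nearest] + 1e-6)
    return float(np.average(prediction_attributes[nearest], weights=weights))

# Reconstruction = attribute prediction value + residual from the attribute stream.
positions = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
attributes = np.array([10.0, 12.0, 14.0])
predicted = predict_attribute(np.array([0.1, 0.1, 0.0]), positions, attributes)
residual = 1.0                       # would be decoded from the attribute stream
print(predicted, predicted + residual)
```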
• the decoding end may also use other methods to decode the current decoding unit based on at least one of the classification information and the motion vector information of the current decoding unit. This embodiment of the present application does not limit this.
  • the decoding end decodes the point cloud code stream and determines at least one of the classification information and the motion vector information of the current decoding unit, wherein the classification information is determined based on the first parameter, and the motion vector information It is determined based on the second parameter, the first parameter is used to indicate the calculation period of classification information, and the second parameter is used to indicate the calculation period of motion vector information.
• the current decoding unit is decoded according to at least one of the classification information and the motion vector information of the current decoding unit. That is, in this embodiment of the present application, classification information and motion vector information are periodically calculated. Compared with calculating classification information and motion vector information once for each decoding unit, this embodiment of the present application greatly reduces the number of calculations of classification information and motion vector information, reducing the encoding and decoding processing time and improving encoding and decoding efficiency.
  • the above takes the decoding end as an example to introduce the point cloud decoding method provided by the embodiment of the present application in detail.
  • the following takes the encoding end as an example to introduce the point cloud encoding method provided by the embodiment of the present application.
  • Figure 6 is a schematic flow chart of a point cloud encoding method provided by an embodiment of the present application.
  • the point cloud encoding method in the embodiment of the present application can be completed by the point cloud encoding device shown in the above-mentioned Figure 1 or Figure 2.
• the point cloud encoding method in this embodiment of the present application includes:
  • the first parameter is used to indicate the calculation period of classification information
  • the second parameter is used to indicate the calculation period of motion vector information
  • inter-frame prediction mainly includes steps such as motion estimation and motion compensation.
• in the motion estimation step, the spatial motion offset vectors of two adjacent frames are calculated and written into the code stream.
• in the motion compensation step, the calculated motion vector is further used to calculate the spatial offset of the point cloud, and the offset point cloud frame is used as a reference to further improve the coding efficiency of the current frame.
  • the embodiments of the present application do not limit the specific content of the motion vector information of the current coding unit, and may be motion information involved in steps such as motion estimation and motion compensation.
  • the motion vector information may be the spatial motion offset vectors of two adjacent frames in motion estimation, that is, the motion vector.
• the motion vector information can also be the motion estimation (Motion Estimation, ME) between two adjacent frames used in motion compensation.
  • the motions of different objects may be different.
• take point cloud data captured by a lidar sensor on a moving vehicle as an example. In this point cloud data, roads and objects usually have different motions. Since the distance between the road and the radar sensor is relatively constant and the road changes slightly from one vehicle location to the next, the points representing the road move very little relative to the radar sensor location. In contrast, objects such as buildings, road signs, vegetation, or other vehicles have larger motions. Since road and object points have different motions, dividing point cloud data into roads and objects will improve the accuracy of global motion estimation and compensation, thereby improving compression efficiency.
• for point cloud data using inter-frame prediction, in order to improve the accuracy of inter-frame prediction and improve compression efficiency, for a coding unit, it is necessary to classify the point cloud in the coding unit, for example, to divide the point cloud in the coding unit into road point cloud and non-road point cloud.
  • the encoding end in order to avoid calculating classification information and motion vector information once for each coding unit, at least one of the first parameter and the second parameter is set.
  • the first parameter is used to indicate the calculation period of classification information
  • the second parameter is used to indicate the calculation period of motion vector information.
• the encoding end can calculate the classification information periodically based on the classification information calculation cycle indicated by the first parameter, and/or calculate the motion vector information periodically based on the motion vector information calculation cycle indicated by the second parameter, thereby reducing the number of calculations of classification information and/or motion vector information and improving coding efficiency.
  • the calculation cycle of the above classification information can be understood as calculating classification information every at least one coding unit, or calculating classification information every at least one point cloud frame.
  • the calculation cycle of the above motion vector information can be understood as calculating motion vector information every at least one coding unit, or calculating motion vector information every at least one point cloud frame.
  • the above first parameter can be represented by classification_period.
  • motion_period can be used as the second parameter above.
  • the above-mentioned first parameter and second parameter can have different representation forms.
• for example, they can be set to classification_period_log2 and motion_period_log2, respectively indicating the base-2 logarithm of the calculation period of the classification information and the base-2 logarithm of the calculation period of the motion vector information. Such variations still fall within the protection scope of this application.
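• A minimal sketch of this log2 representation, assuming the signalled value is the base-2 logarithm of the period (the exact syntax is not specified here):

```python
# The period is recovered from the signalled log2 value as a power of two.
classification_period_log2 = 3
motion_period_log2 = 2
classification_period = 1 << classification_period_log2   # 8 coding units
motion_period = 1 << motion_period_log2                   # 4 coding units
print(classification_period, motion_period)
```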
  • At least one of the first parameter and the second parameter is a preset value.
  • At least one of the first parameter and the second parameter is a parameter input by the user.
  • S202 Determine at least one of classification information and motion vector information of the current coding unit according to at least one of the first parameter and the second parameter.
  • the point cloud data can be divided into at least one encoding unit.
• the encoding process of each encoding unit is independent, and the encoding process of each encoding unit is basically the same.
  • the embodiment of the present application takes the coding unit currently being coded, that is, the current coding unit, as an example for description.
  • the embodiment of the present application does not limit the specific size of the current coding unit, which can be determined according to actual needs.
  • the current coding unit is the current point cloud frame, that is, a point cloud frame can be coded as a coding unit.
  • the current coding unit is a partial area of the current point cloud frame.
  • the current point cloud frame is divided into multiple areas, and one area is used as a coding unit for separate encoding.
  • the embodiments of this application do not limit the specific method of dividing the current point cloud frame into multiple areas.
  • the current point cloud frame is divided into multiple point cloud slices.
  • the sizes of the multiple point cloud slices may or may not be exactly the same, and each point cloud slice is treated as a coding unit and encoded separately.
  • the current point cloud frame is divided into multiple point cloud blocks.
  • the size of the multiple point cloud blocks may be the same or not exactly the same.
• One point cloud block is used as a coding unit and is encoded separately.
  • specific implementation methods for determining at least one of the classification information and motion vector information of the current coding unit based on at least one of the first parameter and the second parameter in S202 include but are not limited to the following:
  • Method 1 The encoding end determines at least one of classification information and motion vector information through the following step S202-A:
  • S202-A Determine the classification information of the current coding unit according to the first parameter, and/or determine the motion vector information of the current coding unit according to the second parameter.
• the encoding end can determine the classification information of the current coding unit according to the first parameter, and obtain the motion vector information of the current coding unit according to existing methods.
  • the encoding end may determine the motion vector information of the current coding unit according to the second parameter, and obtain the classification information of the current coding unit according to existing methods.
  • the encoding end may determine the classification information of the current coding unit according to the first parameter, and determine the motion vector information of the current coding unit according to the second parameter.
  • the encoding end determines the classification information of the coding unit according to the first parameter, instead of determining the classification information of the point cloud in each coding unit one by one. And/or, the encoding end determines the motion vector information of the coding unit according to the second parameter, instead of determining the motion vector information of each coding unit one by one.
  • the coding end determines the classification information of the current coding unit based on the first parameter.
  • the specific implementation methods include but are not limited to the following:
  • Method 1 the above S202-A includes the following steps of S202-A-11 and S202-A-12:
  • the classification information calculation cycles corresponding to different coding units in the point cloud sequence may be the same or different, and the embodiment of the present application does not limit this.
  • the first parameter indicates a calculation cycle of the classification information of each coding unit in the point cloud sequence.
  • the first parameter indicates that classification information is calculated every K coding units.
• if the calculation cycles of the classification information corresponding to different coding units in the point cloud sequence are not exactly the same, multiple first parameters can be used to indicate the calculation cycles of the classification information of the coding units in the point cloud sequence. For example, three first parameters are determined, where the first first parameter indicates that classification information is calculated once every K1 coding units, the second first parameter indicates that classification information is calculated once every K2 coding units, and the third first parameter indicates that classification information is calculated once every K3 coding units.
  • the calculation period of the classification information corresponding to the current coding unit can be determined according to the first parameter.
  • the current coding unit is the current point cloud frame
  • the first parameter indicates that the classification information is calculated once every 4 point cloud frames.
• Assume that the current coding unit is the 6th point cloud frame in the coding sequence, and that in the coding sequence the classification information is calculated once at the 0th point cloud frame, once at the 5th point cloud frame, and once at the 10th point cloud frame.
• the 0th point cloud frame to the 4th point cloud frame can be understood as the first calculation cycle of the classification information, and the 5th point cloud frame to the 9th point cloud frame can be understood as the second calculation cycle of the classification information. The current coding unit is in the second calculation cycle, and the second calculation cycle is therefore determined as the classification information calculation cycle corresponding to the current coding unit.
• After determining the classification information calculation period corresponding to the current coding unit according to the above steps, the encoding end determines the classification information of the current coding unit according to the classification information calculation period.
  • the embodiments of this application do not limit the specific way in which the encoding end determines the classification information of the current coding unit based on the classification information calculation cycle corresponding to the current coding unit.
  • both encoding and decoding ends agree that the classification information of the coding unit within the classification information calculation cycle is a default value of 1, and the encoding end determines the default value of 1 as the classification information of the current coding unit.
  • both coding and decoding parties agree to use a preset calculation method to calculate the classification information of the coding unit within the classification information calculation cycle. For example, if the current coding unit is an area of the current point cloud frame, the classification information of the current coding unit can be determined based on the classification information of the coded point clouds around the current coding unit in the current point cloud frame.
  • the encoding end may determine the classification information of the current coding unit according to the position of the current coding unit in the classification information calculation cycle corresponding to the current coding unit.
  • Example 1 if the current coding unit is the first coding unit within the classification information calculation cycle, identify the category of the point cloud in the current point coding unit to obtain the classification information of the current point coding unit.
  • Example 2 If the current coding unit is not the first coding unit within the classification information calculation cycle, the classification information of the current coding unit is determined based on the coded information or the default value.
  • the encoding end may determine the classification information of the current coding unit based on whether the current coding unit is the first coding unit within the classification information calculation cycle.
• For example, the classification information calculation period corresponding to the current coding unit is the 5th point cloud frame to the 9th point cloud frame. If the current coding unit is the 5th point cloud frame in the coding sequence, the category of the point cloud in the current coding unit is identified to obtain the classification information of the current coding unit. If the current coding unit is not the 5th point cloud frame, for example, the 6th point cloud frame, the coding end determines the default value as the classification information of the current coding unit, or determines the classification information of the current coding unit based on the coded information.
• the embodiments of this application do not limit the specific implementation method of determining the classification information of the current coding unit based on the coded information in the above Example 2.
• the classification information of the current coding unit is determined based on the classification information of the first coding unit within the classification information calculation period corresponding to the current coding unit. For example, the classification information of the first coding unit within the classification information calculation cycle is determined as the classification information of the current coding unit, or the classification information of the first coding unit within the classification information calculation cycle is processed to obtain the classification information of the current coding unit.
  • the classification information of the current coding unit is determined according to the following step 11:
  • Step 11 Determine the classification information of the current coding unit based on the classification information of M coding units.
  • the M coding units are M coded coding units located before the current coding unit in the coding sequence, and M is a positive integer.
  • the embodiment of the present application does not limit the specific selection method of the above M coding units.
  • the above-mentioned M coding units are consecutively adjacent in the coding order, with no interval in between.
  • the above-mentioned M coding units may be any M coding units located before the current coding unit in the coding sequence, that is, these M coding units may be adjacent or not completely adjacent. Embodiments of the present application There are no restrictions on this.
• the encoding end obtains, from the coded information, the M coding units located before the current coding unit in the coding sequence, and determines the classification information of the current coding unit according to the classification information of these M coding units.
  • the implementation method of determining the classification information of the current coding unit based on the classification information of the M coding units in step 12 at least includes the following examples:
  • the classification information of a coding unit located before the current coding unit in the coding sequence is determined as the classification information of the current coding unit. For example, if the current coding unit is the sixth point cloud frame in the coding sequence, then the classification information of the fifth point cloud frame in the coding sequence is determined as the classification information of the current coding unit.
  • the average value of the classification information of M coding units is determined as the classification information of the current coding unit.
  • the weighted average of the classification information of M coding units is determined as the classification information of the current coding unit.
  • the encoding end writes the first parameter into the point cloud code stream, and if the current encoding unit is the first encoding unit within the classification information calculation cycle, the classification of the current encoding unit is Information is written into the point cloud code stream.
  • the first parameter is written into the point cloud code stream, and if the current coding unit is not the first coding unit within the classification information calculation cycle, the current coding unit is skipped. Classification information is written into the point cloud code stream.
• the encoding end can also write the first parameter into the code stream and, at the same time, write the classification information of the first coding unit within the classification information calculation cycle into the code stream, while for the coding units located in the middle of the classification information calculation cycle (that is, the coding units that are not the first coding unit in the calculation cycle) the classification information is not written into the code stream, which can reduce the burden on the code stream.
  • the encoding end writes the classification information of the current coding unit into the point cloud code stream, and skips writing the first parameter into the point cloud code stream.
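• The encoder-side signalling described above can be sketched as follows; the list standing in for the point cloud code stream and the helper names are illustrative assumptions.

```python
from typing import Callable, List

def encode_classification_info(num_units: int,
                               K: int,
                               compute_info: Callable[[int], float],
                               written: List) -> None:
    """Encoder-side sketch: the first parameter (calculation period K) is
    written once, the classification information is computed and written only
    for the first coding unit of each calculation cycle, and it is skipped for
    the remaining units. 'written' stands in for the point cloud code stream."""
    written.append(("classification_period", K))           # first parameter
    for unit_index in range(num_units):
        if unit_index % K == 0:                             # first unit in cycle
            info = compute_info(unit_index)                 # identify categories
            written.append(("classification_info", unit_index, info))
        # otherwise: nothing is written for this unit

stream: List = []
encode_classification_info(10, K=5, compute_info=lambda i: 1.5, written=stream)
print(stream)   # the period plus classification info for units 0 and 5 only
```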
  • the encoding end can also determine the classification information of the current coding unit according to the following method 2.
• Example 1 if the current coding unit is the NK-th coding unit in the coding sequence, identify the category of the point cloud in the current coding unit to obtain the classification information of the current coding unit.
  • K and N are both positive integers.
  • Example 2 If the current coding unit is not the NKth coding unit in the coding sequence, the classification information of the current coding unit is determined based on the coded information or the default value.
  • the first parameter is written into the point cloud code stream, and if the current coding unit is the NKth coding unit in the coding sequence, the classification information of the current coding unit is written into the code stream. flow.
  • the first parameter is written into the point cloud code stream, and if the current coding unit is not the NKth coding unit in the coding sequence, the classification of the current coding unit is skipped. Information is written into the code stream.
• the coding end writes the classification information of the coding units numbered 0 or an integer multiple of K into the code stream, and for the other coding units does not write the classification information into the code stream.
• For example, a point cloud sequence includes 1000 point cloud frames. Assume that one point cloud frame is used as a coding unit. In this way, the number of coding times on the coding end is 1000/K instead of 1000, which greatly reduces the number of coding times, reduces the coding burden on the coding end, and improves coding efficiency.
  • the classification information of the current coding unit can be written into the point cloud code stream, and writing the first parameter into the point cloud code stream can be skipped.
  • the specific process of determining the classification information of the current coding unit based on the coded information can refer to the descriptions of steps 11 and 12 above. This will not be described again.
  • the classification information of the current coding unit can be determined.
  • classification information can be understood as the information required to divide the point cloud into different categories.
  • the embodiments of this application do not limit the specific expression form of the classification information.
  • the classification information includes at least one of a first height threshold and a second height threshold, and the first height threshold and the second height threshold are used for classification of the point cloud in the current coding unit.
  • At least one of the first height threshold and the second height threshold is a preset value.
  • At least one of the first height threshold and the second height threshold is a statistical value.
  • a histogram is used to count the height values of the point cloud midpoints.
  • the horizontal axis of the histogram is the height value of the point cloud midpoint, and the vertical axis of the histogram is the number of points at that height value.
  • Figure 5 takes the radar point cloud as an example for statistics.
  • the height of the radar is the height zero point, so the height values of most points are negative.
• the threshold that lies a times the standard deviation (for example, 1.5 times) above the center is recorded as the first height threshold Top_thr.
• the threshold that lies b times the standard deviation (for example, 1.5 times) below the center is recorded as the second height threshold Bottom_thr.
• the first height threshold and the second height threshold divide the point cloud into different categories.
• the points whose height value is between the first height threshold and the second height threshold are recorded as the first type of point cloud, and the points whose height value is greater than the first height threshold or less than the second height threshold are recorded as the second type of point cloud.
  • the first parameter classification_period may include at least one of the first sub-parameter top_threshold_period and the second sub-parameter bottom_threshold_period;
  • the first sub-parameter top_threshold_period is used to indicate the calculation period of the first height threshold
  • the second sub-parameter bottom_threshold_period is used to indicate the calculation period of the second height threshold.
• the first sub-parameter and the second sub-parameter can be assigned values independently.
• the calculation period of the first height threshold and the calculation period of the second height threshold may be the same or different, and the embodiments of the present application do not limit this.
  • the coding end determines the motion vector information of the current coding unit according to the second parameter.
  • the specific implementation methods include but are not limited to the following:
• Method 1: the above S202-A includes the following steps S202-A-21 and S202-A-22:
  • the calculation cycles of motion vector information corresponding to different coding units in the point cloud sequence may be the same or different, and the embodiment of the present application does not limit this.
  • the second parameter indicates the calculation period of the motion vector information of each coding unit in the point cloud sequence.
  • the second parameter indicates that motion vector information is calculated every R coding units.
• a plurality of second parameters are used to indicate the calculation cycles of the motion vector information corresponding to the coding units in the point cloud sequence. For example, three second parameters are determined, where the first second parameter indicates that the motion vector information is calculated once every R1 coding units, the second second parameter indicates that the motion vector information is calculated once every R2 coding units, and the third second parameter indicates that the motion vector information is calculated once every R3 coding units.
  • the second parameter indicates the calculation period of the motion vector information
  • the calculation period of the motion vector information corresponding to the current coding unit can be determined according to the second parameter.
  • the current coding unit is the current point cloud frame
• for example, the second parameter indicates that motion vector information is calculated once at intervals of 4 point cloud frames, and the current coding unit is the 6th point cloud frame in the coding order.
• the 0th point cloud frame calculates the motion vector information once, the 5th point cloud frame calculates the motion vector information once, and the 10th point cloud frame calculates the motion vector information once. The 0th point cloud frame to the 4th point cloud frame can be understood as the first calculation cycle of the motion vector information, and the 5th point cloud frame to the 9th point cloud frame can be understood as the second calculation cycle. The current coding unit falls within the second calculation cycle, so the second calculation cycle is determined as the motion vector information calculation cycle corresponding to the current coding unit (see the sketch below).
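• A minimal sketch of locating the calculation cycle for the worked example above; the convention that an interval of 4 yields cycles of 5 coding units (units 0-4, 5-9, ...) mirrors that example and is assumed here, not mandated by the text.

```python
def locate_calc_cycle(unit_index, interval):
    """Locate the motion vector information calculation cycle of a coding unit.

    interval = 4 is read as 'compute at units 0, 5, 10, ...', so a cycle
    spans interval + 1 units (units 0-4, 5-9, ...), matching the worked
    example above; this mapping is an assumption for illustration.
    Returns (cycle_index, is_first_in_cycle).
    """
    cycle_length = interval + 1
    return unit_index // cycle_length, unit_index % cycle_length == 0

# the 6th point cloud frame with interval 4 lies in the second cycle
# and is not the first unit of that cycle
print(locate_calc_cycle(6, 4))  # -> (1, False)
```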
• after determining the motion vector information calculation period corresponding to the current coding unit according to the above steps, the encoding end determines the motion vector information of the current coding unit according to the motion vector information calculation period.
  • the embodiments of this application do not limit the specific manner in which the encoding end determines the motion vector information of the current coding unit based on the calculation cycle of the motion vector information corresponding to the current coding unit.
  • the encoding and decoding ends agree that the motion vector information of the coding unit within the motion vector information calculation period is a default value of 1, and the encoding end determines the default value of 1 as the motion vector information of the current coding unit.
  • both encoding and decoding parties agree to use a preset calculation method to calculate the motion vector information of the coding unit within the motion vector information calculation period. For example, if the current coding unit is an area of the current point cloud frame, the motion vector information of the current coding unit can be determined based on the motion vector information of the coded area around the current coding unit in the current point cloud frame.
  • the coding end may determine the motion vector information of the current coding unit according to the position of the current coding unit in the motion vector information calculation cycle corresponding to the current coding unit.
  • Example 1 If the current coding unit is the first coding unit within the motion vector information calculation cycle, the motion vector information of the current coding unit is determined based on the reference coding unit of the current coding unit.
  • Example 2 If the current coding unit is not the first coding unit within the motion vector information calculation cycle, the motion vector information of the current coding unit is determined based on the coded information or the default value.
• that is, the encoding end can determine the motion vector information of the current coding unit according to whether the current coding unit is the first coding unit within the motion vector information calculation period.
• for example, the motion vector information calculation period corresponding to the current coding unit is the 5th point cloud frame to the 9th point cloud frame. If the current coding unit is the 5th point cloud frame in the coding order, the coding end determines the motion vector information of the current coding unit directly according to the reference coding unit of the current coding unit. If the current coding unit is not the 5th point cloud frame, for example, it is the 6th point cloud frame, the coding end determines the default value as the motion vector information of the current coding unit, or determines the motion vector information of the current coding unit based on the coded information.
• the embodiments of this application do not limit the specific implementation of determining the motion vector information of the current coding unit based on the coded information in Example 2 above.
• the motion vector information of the current coding unit is determined based on the motion vector information of the first coding unit within the motion vector information calculation period corresponding to the current coding unit. For example, the motion vector information of the first coding unit within the calculation period is determined as the motion vector information of the current coding unit, or that motion vector information is processed to obtain the motion vector information of the current coding unit.
  • the motion vector information of the current coding unit is determined according to the following step 21:
  • Step 21 Determine the motion vector information of the current coding unit based on the motion vector information of S coding units.
  • the S coding units are S coded coding units located before the current coding unit in the coding order, and S is a positive integer.
  • the embodiment of the present application does not limit the specific selection method of the above S coding units.
  • the above-mentioned S coding units are consecutively adjacent in the coding order, with no interval in between.
• the above-mentioned S coding units may be any S coding units located before the current coding unit in the coding order, that is, these S coding units may be adjacent or not completely adjacent; the embodiments of the present application do not restrict this.
• the encoding end obtains, from the coded information, the S coding units located before the current coding unit in the coding order, and determines the motion vector information of the current coding unit according to the motion vector information of these S coding units.
• the implementation of determining the motion vector information of the current coding unit based on the motion vector information of S coding units in step 21 at least includes the following examples:
  • the motion vector information of a coding unit located before the current coding unit in the coding order is determined as the motion vector information of the current coding unit.
  • the motion vector information of the fifth point cloud frame in the coding sequence is determined as the motion vector information of the current coding unit.
  • the average value of the motion vector information of S coding units is determined as the motion vector information of the current coding unit.
  • the weighted average of the motion vector information of S coding units is determined as the motion vector information of the current coding unit.
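• The examples above (single previous unit, average, weighted average) might be sketched as follows; representing each unit's motion vector information as a (rotation matrix, offset vector) pair, averaging only the offset vectors and reusing the most recent rotation matrix are illustrative assumptions.

```python
import numpy as np

def derive_mv_from_previous(prev_mvs, weights=None):
    """Derive the current unit's motion vector information from the S
    previously coded units (covers the single-unit, average and weighted
    average examples above).

    Each entry of prev_mvs is assumed to be a (rotation_matrix, offset_vector)
    pair. Only the offset vectors are averaged; the most recent rotation
    matrix is reused, since a naive element-wise average of rotation matrices
    is generally not itself a rotation matrix. Both choices are illustrative
    assumptions, not normative behaviour.
    """
    rotations = [np.asarray(r, dtype=float) for r, _ in prev_mvs]
    offsets = np.stack([np.asarray(t, dtype=float) for _, t in prev_mvs])
    if weights is None:
        offset = offsets.mean(axis=0)                           # plain average
    else:
        w = np.asarray(weights, dtype=float)
        offset = (offsets * w[:, None]).sum(axis=0) / w.sum()   # weighted average
    return rotations[-1], offset

# S = 1 degenerates to copying the previous unit's motion vector information
prev = [(np.eye(3), np.array([0.2, 0.0, -0.1]))]
print(derive_mv_from_previous(prev))
```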
  • the second parameter is written into the point cloud code stream, and if the current coding unit is the first coding unit within the motion vector information calculation cycle, the motion vector of the current coding unit is Information is written into the point cloud code stream.
• the second parameter is written into the point cloud code stream, and if the current coding unit is not the first coding unit within the motion vector information calculation period, writing the motion vector information of the current coding unit into the point cloud code stream is skipped.
  • the encoding end can also write the second parameter into the code stream, and at the same time, write the motion vector information of the first coding unit within the motion vector information calculation period into the code stream.
• the motion vector information of coding units in the middle of a motion vector information calculation cycle (that is, coding units that are not the first coding unit of the calculation cycle) is not written into the code stream, which can reduce the burden on the code stream.
  • the motion vector information of the current coding unit is written into the point cloud code stream, and writing the second parameter into the point cloud code stream is skipped.
  • the encoding end can also determine the motion vector information of the current coding unit according to the following method 2.
  • Example 1 if the current coding unit is the NR-th coding unit in the coding sequence, determine the motion vector information of the current coding unit based on the reference coding unit of the current coding unit, and R and N are both positive integers.
  • Example 2 if the current coding unit is not the NR-th coding unit in the coding order, determine the motion vector information of the current coding unit based on the coded information or the default value.
• the calculation period of the motion vector information corresponding to each coding unit in the point cloud sequence is the same.
  • the motion vector information is calculated once every R coding units.
• when the encoding end encodes the current coding unit, it determines whether the current coding unit is the NR-th coding unit in the coding order, that is, whether the serial number of the current coding unit in the coding order is an integer multiple of R. If the coding end determines that the current coding unit is the NR-th coding unit in the coding order, the motion vector information of the current coding unit is determined based on the reference coding unit of the current coding unit. If the current coding unit is not the NR-th coding unit in the coding order, the default value is determined as the motion vector information of the current coding unit, or the motion vector information of the current coding unit is determined based on the coded information.
• the second parameter is written into the point cloud code stream, and if the current coding unit is the NR-th coding unit in the coding order, the motion vector information of the current coding unit is written into the code stream.
• the second parameter is written into the point cloud code stream, and if the current coding unit is not the NR-th coding unit in the coding order, writing the motion vector information of the current coding unit into the code stream is skipped.
• a point cloud sequence includes 1000 point cloud frames, and one point cloud frame is used as a coding unit. In this way, the coding end calculates the motion vector information 1000/R times instead of 1000 times, which greatly reduces the number of calculations, reduces the coding burden on the coding end, and improves coding efficiency.
  • the motion vector information of the current coding unit is written into the point cloud code stream, and writing the second parameter into the point cloud code stream is skipped.
  • the specific process of determining the motion vector information of the current coding unit based on the coded information may refer to the descriptions of steps 21 and 22 above. I won’t go into details here.
  • the encoding end can also determine the motion vector information of the current coding unit according to the following method 3.
  • Method 3 The encoding end determines the motion vector information based on the degree of change of the classification information of the encoding unit. That is, the encoding end determines the motion vector information of the current coding unit according to the following steps 1 and 2:
  • Step 1 Determine the degree of change of the classification information based on the first parameter
  • Step 2 Determine the motion vector information of the current coding unit according to the degree of change.
• if the classification information of different coding units does not change significantly, the motion vector information of these coding units may not change significantly either. On the contrary, if the classification information of different coding units changes greatly, their motion vector information may also change greatly. Therefore, the motion vector information of the current coding unit can be determined according to the degree of change of the classification information of different coding units.
  • the embodiment of the present application does not limit the specific implementation method of determining the degree of change of the classification information of the point cloud according to the first parameter in step 1 above.
  • the classification information of multiple coding units is determined according to the first parameter, and the degree of change of the classification information is determined based on the classification information of the multiple coding units. For example, when the classification information of the multiple coding units differs greatly, it means that the degree of change of the classification information is large. If the difference of the classification information of the multiple coding units is small, it means that the degree of change of the classification information is small.
  • the degree of change of the classification information is determined based on the classification information of the current coding unit and the classification information of the reference coding unit of the current coding unit.
  • the classification information of the current coding unit is determined according to the first parameter.
• the degree of change between the classification information of the current coding unit and the classification information of the reference coding unit of the current coding unit is then determined; for example, the absolute value of the difference between the classification information of the current coding unit and the classification information of the reference coding unit of the current coding unit is determined as the degree of change of the classification information.
  • the motion vector information of the current coding unit is determined based on the degree of change of the classification information.
• for example, if the degree of change is less than or equal to a first preset value, the default value or the motion vector information of the previous coding unit of the current coding unit in the coding order is determined as the motion vector information of the current coding unit.
• if the degree of change is greater than the first preset value, the motion vector information of the current coding unit is determined according to the reference coding unit of the current coding unit.
  • the motion vector information of the current coding unit can be determined.
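• A hedged sketch of Method 3 follows: the choice of the largest absolute threshold difference as the degree of change, the value of the first preset value, and the helper estimate_mv_fn are all hypothetical.

```python
def mv_from_classification_change(curr_thresholds, ref_thresholds,
                                  prev_mv, estimate_mv_fn, first_preset=0.1):
    """Sketch of Method 3: pick the motion vector information of the current
    coding unit from how much its classification information changed.

    curr_thresholds / ref_thresholds are the (Top_thr, Bottom_thr) pairs of
    the current and reference coding units. Taking the largest absolute
    threshold difference as the degree of change, the value of first_preset,
    and the helper estimate_mv_fn are all hypothetical choices.
    """
    change = max(abs(c - r) for c, r in zip(curr_thresholds, ref_thresholds))
    if change <= first_preset:
        # small change: reuse the previous unit's motion vector information
        # (or a default value)
        return prev_mv
    # large change: estimate motion against the reference coding unit
    return estimate_mv_fn()
```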
  • motion vector information can be understood as motion information required by the encoding end for inter-frame prediction.
  • the embodiments of this application do not limit the specific expression form of the motion vector information.
  • the motion vector information includes at least one of a rotation matrix and an offset vector.
• the rotation matrix describes the three-dimensional rotation between the coding unit and the reference coding unit
• the offset vector describes the offset, in three directions, between the coordinate origins of the coding unit and the reference coding unit.
  • the second parameter motion_period includes at least one of the third sub-parameter rotation_matrix_period and the fourth sub-parameter translation_vector_period.
  • the third sub-parameter rotation_matrix_period is used to indicate the calculation period of the rotation matrix
  • the fourth sub-parameter translation_vector_period is used to indicate the calculation period of the offset vector.
  • the above third sub-parameter and fourth sub-parameter can be assigned values independently.
  • calculation period of the rotation matrix and the calculation period of the offset vector may be the same or different, and the embodiment of the present application does not limit this.
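• As an illustration of how the third and fourth sub-parameters can be assigned independently, the following sketch checks, per coding unit, whether the rotation matrix and the offset vector are each due for recomputation; the modulo convention is an assumption, not part of the described scheme.

```python
def needs_update(unit_index, period):
    """True when the quantity governed by 'period' is recomputed at this
    coding unit (computed at unit 0 and then once every 'period' units);
    the modulo convention is an assumption for illustration."""
    return unit_index % period == 0

# rotation_matrix_period and translation_vector_period may differ, so the
# rotation matrix and the offset vector can be refreshed on different units
rotation_matrix_period, translation_vector_period = 2, 5
for idx in range(8):
    print(idx, needs_update(idx, rotation_matrix_period),
          needs_update(idx, translation_vector_period))
```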
• the coding end writes at least one of the classification information and the motion vector information of the current coding unit into the point cloud code stream.
• after determining at least one of the classification information and the motion vector information of the current coding unit according to the above steps, the encoding end performs the following step S203.
• the category of the point cloud in the current coding unit is determined based on the classification information of the current coding unit, and different motion vector information is used to perform inter-frame prediction on point clouds of different categories. For example, taking the point cloud data scanned by vehicle-mounted radar as an example, the point cloud can be divided into road points and object points, and the motion vector information of road points and object points is different.
  • the embodiment of the present application does not limit the specific process of encoding the current coding unit according to at least one of the classification information and the motion vector information of the current coding unit in the above S203.
• the point cloud in the current coding unit can be divided into multiple categories according to the classification information. Different motion vector information is assigned to each category, where the motion vector information assigned to different categories can be preset values corresponding to the different categories, or values calculated based on the categories; this embodiment of the present application does not limit this.
• the coding end can determine the classification information of the current coding unit on its own, for example, determine the classification information of the current coding unit based on the coding information of coded units surrounding the current coding unit. Then, according to the classification information, the point cloud in the current coding unit is divided into multiple categories, and according to the motion vector information of the current coding unit, the motion vector information of each category of point cloud in the current coding unit is determined.
• for example, the motion vector information determined above can be determined as the motion vector information of the first type point cloud, and the motion vector information of the second type point cloud can be determined separately, for example, as a preset value.
  • the above S203 includes the following steps:
• the encoding end uses the classification information of the current coding unit to divide the point cloud in the current coding unit into P-type point clouds, determines the motion vector information corresponding to the P-type point clouds based on the motion vector information of the current coding unit, and then encodes the current coding unit according to the motion vector information corresponding to the P-type point clouds. That is, in this embodiment of the present application, different types of point clouds in the current encoding unit are encoded using different motion vector information, thereby improving the accuracy of encoding.
• the embodiments of this application do not limit the specific manner, in the above-mentioned S203-A, of classifying the point cloud in the current coding unit into P-type point clouds according to the classification information of the current coding unit.
  • the classification information of the current coding unit may be a category identifier.
  • each point in the current coding unit includes a category identifier.
• the point cloud in the current coding unit can be divided into P-type point clouds according to the category identifiers.
  • the classification information of the current coding unit includes a first height threshold and a second height threshold, and the first height threshold is greater than the second height threshold.
  • the above S203-A includes the following steps:
• for example, points in the current coding unit whose height value is greater than the first height threshold are divided into one type of point cloud, points whose height value is between the first height threshold and the second height threshold are divided into another type of point cloud, and points whose height value is less than the second height threshold are divided into a further type of point cloud.
  • the point clouds whose height value in the current coding unit is less than or equal to the first height threshold and greater than or equal to the second height threshold are classified into the first type of point cloud; the point clouds whose height value in the current coding unit is greater than the first height Point clouds with a threshold value or less than the second height threshold value are classified into the second type of point cloud.
  • the motion vector information corresponding to the P-type point cloud is determined based on the motion vector information of the current coding unit.
  • the motion vector information of the current coding unit is determined as the motion vector information of one type of point cloud in the P type point cloud, and the motion vector information of other types of point clouds in the P type point cloud can be preset values.
  • the P type point cloud includes the above-mentioned first type point cloud and the second type point cloud
  • the motion vector information of the current coding unit can be determined as the motion vector information of the second type point cloud
• the motion vector information of the first type point cloud can be determined as a preset value, for example, a zero vector.
  • the first type of point cloud can be understood as road point cloud
• the second type of point cloud can be understood as non-road point cloud. Since the road changes little and the non-road point cloud is the focus of attention, the motion vector information of the current coding unit is determined as the motion vector information of the non-road points, and the road points are predicted to be static, that is, zero motion, so the motion vector information of the road points is a zero vector (see the sketch below).
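• A minimal sketch of this per-class assignment, assuming two classes with class 0 as the road-like (first type) point cloud; the identity rotation and zero offset for the static class follow the zero-motion assumption above.

```python
import numpy as np

def assign_class_motion(unit_mv, num_classes=2):
    """Assign motion vector information to each class of point cloud in the
    current coding unit.

    Class 0 is taken to be the first-type (road-like) point cloud and is
    predicted as static: identity rotation and zero offset. The remaining
    classes receive the motion vector information determined for the current
    coding unit. The class indexing and the two-class default are
    illustrative assumptions.
    """
    rotation, offset = unit_mv
    class_mv = {0: (np.eye(3), np.zeros(3))}   # first type: zero motion
    for c in range(1, num_classes):
        class_mv[c] = (rotation, offset)       # other types: unit's motion
    return class_mv
```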
  • the current coding unit is coded according to the motion vector information corresponding to the P-type point cloud.
  • the embodiments of this application do not limit the specific implementation method of encoding the current coding unit according to the motion vector information corresponding to the P-type point cloud.
• the encoding end can determine the reference coding unit of the current coding unit, perform motion compensation on the reference coding unit according to the motion vector information of the P-type point clouds to obtain the prediction information of the current coding unit, and, based on the prediction information, encode at least one of the geometric information and attribute information of the current coding unit, as sketched below.
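• A sketch of the motion-compensation step under the assumption that the motion model is p' = R·p + t applied per class; the array layout and function name are illustrative, and the subsequent occupancy or attribute prediction is codec-specific and omitted.

```python
import numpy as np

def motion_compensate(ref_points, ref_labels, class_mv):
    """Warp the reference coding unit towards the current coding unit to
    obtain its prediction (motion-compensation sketch).

    ref_points is an (N, 3) array of reference geometry, ref_labels holds the
    class index of each reference point, and class_mv maps a class to its
    (rotation_matrix, offset_vector). The assumed motion model is
    p' = R @ p + t applied per class; how the warped points are then used
    for occupancy or attribute prediction is codec-specific and omitted.
    """
    pts = np.asarray(ref_points, dtype=float)
    labels = np.asarray(ref_labels)
    predicted = pts.copy()   # classes missing from class_mv stay unwarped
    for cls, (rotation, offset) in class_mv.items():
        mask = labels == cls
        predicted[mask] = pts[mask] @ np.asarray(rotation, dtype=float).T + offset
    return predicted
```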
  • the geometric information of the current coding unit is encoded according to the prediction information.
  • the prediction information can be understood as the prediction unit of the current coding unit.
  • the space occupancy of the current coding unit can be predicted based on the space occupancy in the prediction unit, and then the geometric information of the current coding unit can be encoded based on the predicted space occupancy of the current coding unit to obtain the geometry code stream of the current coding unit.
  • the attribute information of the current coding unit is encoded according to the prediction information.
  • the prediction information can be understood as the prediction unit of the current coding unit.
  • the attribute information of the point is predicted to obtain the attribute prediction value of the point.
  • the attribute residual value of the point is determined, and then the attribute residual value of the point is encoded to form an attribute code stream.
• the encoding end may also use other methods to encode the current coding unit based on at least one of the classification information and the motion vector information of the current coding unit, which is not limited in this embodiment of the present application.
  • the encoding end can write at least one of the first parameter and the second parameter into the code stream.
  • At least one of the above-mentioned first parameter and second parameter can be stored in the form of an unsigned integer, recorded as u(v), indicating that v bits are used to describe a parameter.
• At least one of the above-mentioned first parameter and second parameter can also be stored in the form of unsigned exponential Golomb coding, recorded as ue(v), which means that the value of the parameter is first exponential-Golomb coded into a v-bit sequence of 0s and 1s and then written into the code stream, as sketched below.
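• The ue(v) form corresponds to standard order-0 unsigned exponential-Golomb coding; the sketch below shows the bit string produced for hypothetical parameter values (the example values 4 and 5 are not from the text).

```python
def ue_v(value):
    """Order-0 unsigned exponential-Golomb code of a non-negative integer,
    returned as a '0'/'1' string (the v-bit sequence mentioned above).
    A fixed-length u(v) field would instead be format(value, f'0{v}b')
    for a chosen width v."""
    assert value >= 0
    code = bin(value + 1)[2:]            # binary of (value + 1)
    return '0' * (len(code) - 1) + code  # leading zeros, then that binary

# hypothetical parameter values, e.g. classification_period = 4, motion_period = 5
print(ue_v(4), ue_v(5))  # -> 00101 00110
```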
  • the encoding end writes at least one of the first parameter and the second parameter into the sequence header parameter set.
  • the first parameter is used to indicate that classification information is calculated once at intervals of multiple point cloud frames; and/or the second parameter is used to indicate that motion vector information is calculated once at intervals of multiple point cloud frames.
  • the encoding end writes at least one of the first parameter and the second parameter into the point cloud header information.
  • the first parameter is used to indicate that for the i-th point cloud piece in the point cloud frame, the classification information of the i-th point cloud piece is calculated once at intervals of multiple point cloud frames, and i is a positive integer; and /Or, the second parameter is used to indicate that for the i-th point cloud piece in the point cloud frame, the motion vector information of the i-th point cloud piece is calculated once at intervals of multiple point cloud frames.
  • the first parameter and the second parameter are stored in the point cloud header information, as shown in Table 2.
• the first parameter is used to indicate that the classification information is calculated once at intervals of multiple point cloud slices within one point cloud frame; and/or the second parameter is used to indicate that the motion vector information is calculated once at intervals of multiple point cloud slices within one point cloud frame.
  • the first parameter and the second parameter are stored in the point cloud header information, as shown in Table 3.
  • the encoding end before determining the first parameter and the second parameter, the encoding end first needs to determine the first flag inter_prediction_flag.
• the first flag inter_prediction_flag is used to indicate whether to perform inter-frame prediction encoding; if the first flag inter_prediction_flag indicates to perform inter-frame prediction encoding, at least one of a first parameter and a second parameter is determined.
• in summary, the encoding end determines at least one of a first parameter and a second parameter, where the first parameter is used to indicate the calculation cycle of classification information and the second parameter is used to indicate the calculation cycle of motion vector information; determines at least one of the classification information and the motion vector information of the current coding unit according to at least one of the first parameter and the second parameter; and encodes the current coding unit according to at least one of the classification information and the motion vector information of the current coding unit.
• in this embodiment of the present application, by calculating the classification information and the motion vector information periodically rather than once for every coding unit, the number of calculations of the classification information and the motion vector information is greatly reduced, which reduces encoding processing time and improves encoding efficiency.
• the size of the sequence numbers of the above-mentioned processes does not imply their order of execution; the execution order of each process should be determined by its functions and internal logic, and does not constitute any limitation on the implementation of the embodiments of this application.
  • the term "and/or" is only an association relationship describing associated objects, indicating that three relationships can exist. Specifically, A and/or B can represent three situations: A exists alone, A and B exist simultaneously, and B exists alone.
  • the character "/" in this article generally indicates that the related objects are an "or" relationship.
  • Figure 7 is a schematic block diagram of a point cloud decoding device provided by an embodiment of the present application.
  • the point cloud decoding device 10 may include:
  • Determining unit 11 used to decode the point cloud code stream and determine at least one of classification information and motion vector information of the current decoding unit.
• the classification information is determined based on a first parameter, the motion vector information is determined based on a second parameter, the first parameter is used to indicate the calculation period of classification information, and the second parameter is used to indicate the calculation period of motion vector information;
  • the decoding unit 12 is configured to decode the current decoding unit according to at least one of classification information and motion vector information of the current decoding unit.
  • the determining unit 11 is specifically configured to decode at least one of the classification information and the motion vector information of the current decoding unit from the point cloud code stream.
  • the determining unit 11 is specifically configured to decode at least one of the first parameter and the second parameter from the point cloud code stream; determine the Classification information of the current decoding unit, and/or determining motion vector information of the current decoding unit according to the second parameter.
  • the determining unit 11 is specifically configured to determine the classification information calculation period corresponding to the current decoding unit according to the first parameter; and determine the classification information of the current decoding unit according to the classification information calculation period. .
• the determining unit 11 is specifically configured to decode the point cloud code stream to obtain the classification information of the current decoding unit if the current decoding unit is the first decoding unit within the classification information calculation cycle.
  • the determining unit 11 is specifically configured to determine the current decoding unit according to decoded information or a default value if the current decoding unit is not the first decoding unit within the classification information calculation cycle. classification information.
• the first parameter indicates that classification information is calculated once every K decoding units, and the determining unit 11 is specifically configured to decode the point cloud code stream to obtain the classification information of the current decoding unit if the current decoding unit is the NKth decoding unit in the decoding order, where K and N are both positive integers.
  • the determination unit 11 is also configured to determine the classification information of the current decoding unit based on decoded information or a default value if the current decoding unit is not the NKth decoding unit in the decoding sequence.
• the determining unit 11 is specifically configured to determine the classification information of the current decoding unit according to the classification information of M decoding units, where the M decoding units are M decoded decoding units located before the current decoding unit in the decoding order, and M is a positive integer.
  • the determining unit 11 is specifically configured to determine, if M is equal to 1, the classification information of a decoding unit located before the current decoding unit in the decoding order as the current decoding unit. classification information.
  • the determination unit 11 is specifically configured to perform preset processing on the classification information of M decoding units if M is greater than 1, and determine the processing result as the classification information of the current decoding unit.
  • the determining unit 11 is specifically configured to determine the average value of the classification information of the M decoding units as the classification information of the current decoding unit.
  • the classification information includes at least one of a first height threshold and a second height threshold
  • the first parameter includes at least one of a first sub-parameter and a second sub-parameter
  • the first sub-parameter is used to indicate the calculation period of the first height threshold
  • the second sub-parameter is used to indicate the calculation period of the second height threshold
• the first height threshold and the second height threshold are used to classify the point cloud in the current decoding unit.
  • the determining unit 11 is specifically configured to determine the motion vector information calculation period corresponding to the current decoding unit according to the second parameter; determine the motion vector information calculation period corresponding to the current decoding unit according to the motion vector information calculation period. Motion vector information.
• the determining unit 11 is specifically configured to decode the point cloud code stream to obtain the motion vector information of the current decoding unit if the current decoding unit is the first decoding unit within the motion vector information calculation cycle.
  • the determining unit 11 is specifically configured to determine the current decoding unit according to decoded information or a default value if the current decoding unit is not the first decoding unit within the motion vector information calculation period. The motion vector information of the unit.
• the second parameter indicates that motion vector information is calculated once every R decoding units, and the determining unit 11 is specifically configured to decode the point cloud code stream to obtain the motion vector information of the current decoding unit if the current decoding unit is the NR-th decoding unit in the decoding order, where R and N are both positive integers.
  • the determining unit 11 is also configured to determine the motion vector information of the current decoding unit based on decoded information or a default value if the current decoding unit is not the NR-th decoding unit in the decoding order. .
• the determining unit 11 is specifically configured to determine the motion vector information of the current decoding unit according to the motion vector information of S decoding units, where the S decoding units are S decoded decoding units located before the current decoding unit in the decoding order, and S is a positive integer.
  • the determining unit 11 is specifically configured to determine, if the S is equal to 1, the motion vector information of a decoding unit located before the current decoding unit in the decoding order as the current decoding unit. The motion vector information of the unit.
  • the determination unit 11 is specifically configured to perform preset processing on the motion vector information of S decoding units if the S is greater than 1, and determine the processing result as the motion vector information of the current decoding unit. .
  • the determining unit 11 is specifically configured to determine the average value of the motion vector information of the S decoding units as the motion vector information of the current decoding unit.
  • the determining unit 11 is further configured to determine the degree of change of the classification information according to the first parameter; and determine the motion vector information of the current decoding unit according to the degree of change.
  • the determining unit 11 is specifically configured to determine the classification information of the current decoding unit according to the first parameter; determine the classification information of the current decoding unit and the reference decoding unit of the current decoding unit. The degree of change between the classification information.
• the determining unit 11 is specifically configured to, if the degree of change is less than or equal to a first preset value, determine a default value or the motion vector information of the previous decoding unit of the current decoding unit in the decoding order as the motion vector information of the current decoding unit.
  • the determining unit 11 is specifically configured to decode the point cloud code stream to obtain the motion vector information of the current decoding unit if the degree of change is greater than a first preset value.
  • the motion vector information includes at least one of a rotation matrix and an offset vector
  • the second parameter includes at least one of a third sub-parameter and a fourth sub-parameter
  • the third sub-parameter is used to indicate the calculation period of the rotation matrix
  • the fourth sub-parameter is used to indicate the calculation period of the offset vector.
• the determining unit 11 is also configured to decode the point cloud code stream to obtain at least one of the classification information and the motion vector information of the current decoding unit if the current decoding unit is the first decoding unit in the decoding order.
  • the decoding unit 12 is specifically configured to divide the point cloud in the current decoding unit into type P point clouds according to the classification information of the current decoding unit, where P is a positive integer greater than 1; According to the motion vector information of the current decoding unit, the motion vector information corresponding to the P-type point cloud is determined; according to the motion vector information corresponding to the P-type point cloud, the current decoding unit is decoded.
  • the classification information includes a first height threshold and a second height threshold, and the first height threshold is greater than the second height threshold.
• the decoding unit 12 is specifically configured to classify the point cloud in the current decoding unit into P-type point clouds according to the first height threshold and the second height threshold.
  • the P-type point cloud includes a first-type point cloud and a second-type point cloud
• the decoding unit 12 is specifically configured to classify points in the current decoding unit whose height value is less than or equal to the first height threshold and greater than or equal to the second height threshold into the first type of point cloud, and to classify points in the current decoding unit whose height value is greater than the first height threshold or less than the second height threshold into the second type of point cloud.
  • the decoding unit 12 is specifically used to determine the reference decoding unit of the current decoding unit; perform motion compensation on the reference decoding unit according to the motion vector information of the P-type point cloud to obtain the current Prediction information of the decoding unit; decoding at least one of the geometric information and attribute information of the current decoding unit according to the prediction information.
  • the current decoding unit is the current point cloud frame, or a spatial region of the current point cloud frame.
• the determining unit 11 is specifically configured to decode the sequence header parameter set to obtain at least one of the first parameter and the second parameter.
  • the first parameter is used to indicate that the classification information is calculated once at intervals of multiple point cloud frames; and/or,
  • the second parameter is used to indicate that motion vector information is calculated once at intervals of multiple point cloud frames.
  • the current decoding unit is the current point cloud slice
• the determining unit 11 is specifically configured to decode the point cloud slice header information to obtain at least one of the first parameter and the second parameter.
  • the first parameter is used to indicate that for the i-th point cloud patch in the point cloud frame, the classification information of the i-th point cloud patch is calculated once at intervals of multiple point cloud frames, and the i is a positive integer; and/or,
  • the second parameter is used to indicate that for the i-th point cloud piece in the point cloud frame, the motion vector information of the i-th point cloud piece is calculated once at intervals of multiple point cloud frames.
• the first parameter is used to indicate that the classification information is calculated once at intervals of multiple point cloud slices within a point cloud frame; and/or,
• the second parameter is used to indicate that the motion vector information is calculated once at intervals of multiple point cloud slices within a point cloud frame.
• the determining unit 11 is specifically configured to decode the point cloud code stream to obtain a first identifier, where the first identifier is used to indicate whether to perform inter-frame prediction decoding;
• if the first identifier indicates to perform inter-frame prediction decoding, the point cloud code stream is decoded to obtain at least one of the first parameter and the second parameter.
  • the device embodiments and the method embodiments may correspond to each other, and similar descriptions may refer to the method embodiments. To avoid repetition, they will not be repeated here.
• the point cloud decoding device 10 shown in FIG. 7 may correspond to the corresponding subject that performs the point cloud decoding method of the embodiment of the present application, and the aforementioned and other operations and/or functions of each unit in the point cloud decoding device 10 are respectively used to implement the corresponding processes in the point cloud decoding method; for the sake of brevity, they will not be described again here.
  • Figure 8 is a schematic block diagram of a point cloud encoding device provided by an embodiment of the present application.
  • the point cloud encoding device 20 includes:
  • the first determination unit 21 is configured to determine at least one of a first parameter and a second parameter, the first parameter is used to indicate the calculation period of classification information, and the second parameter is used to indicate the calculation period of motion vector information;
  • the second determination unit 22 is configured to determine at least one of the classification information and the motion vector information of the current coding unit according to at least one of the first parameter and the second parameter;
  • the encoding unit 23 is configured to encode the current encoding unit according to at least one of the classification information and the motion vector information of the current encoding unit.
  • the second determination unit 22 is specifically configured to determine the classification information of the current coding unit according to the first parameter, and/or determine the motion of the current coding unit according to the second parameter. vector information.
  • the second determination unit 22 is specifically configured to determine the classification information calculation period corresponding to the current coding unit according to the first parameter; and determine the classification information calculation period of the current coding unit according to the classification information calculation period. Classified information.
• the second determining unit 22 is specifically configured to identify the category of the point cloud in the current coding unit if the current coding unit is the first coding unit within the classification information calculation cycle, to obtain the classification information of the current coding unit.
  • the second determining unit 22 is specifically configured to determine the current coding unit based on coded information or a default value if the current coding unit is not the first coding unit within the classification information calculation cycle. Classification information of coding units.
  • the encoding unit 23 is also used to write the first parameter into the point cloud code stream, and if the current encoding unit is the first encoding unit within the classification information calculation cycle, then Write the classification information of the current coding unit into the point cloud code stream.
  • the encoding unit 23 is also used to write the first parameter into the point cloud code stream, and if the current encoding unit is not the first encoding unit within the classification information calculation cycle, Then it is skipped to write the classification information of the current coding unit into the point cloud code stream.
  • the first parameter indicates that classification information is calculated every K coding units.
• the second determining unit 22 is specifically configured to, if the current coding unit is the NKth coding unit in the coding order, identify the category of the point cloud in the current coding unit to obtain the classification information of the current coding unit, where K and N are both positive integers.
  • the second determination unit 22 is also configured to determine the classification of the current coding unit according to the coded information or a default value if the current coding unit is not the NKth coding unit in the coding sequence. information.
  • the coding unit 23 is also used to write the first parameter into the point cloud code stream, and if the current coding unit is the NKth coding unit in the coding sequence, write the current coding unit into the point cloud code stream. Classification information of coding units is written into the code stream.
  • the encoding unit 23 is also used to write the first parameter into the point cloud code stream, and if the current encoding unit is not the NKth encoding unit in the encoding sequence, skip the The classification information of the current coding unit is written into the code stream.
  • the encoding unit 23 is also configured to write the classification information of the current encoding unit into the point cloud code stream, and skip writing the first parameter into the point cloud code stream.
  • the second determining unit 22 is specifically configured to determine the classification information of the current coding unit according to the classification information of M coding units.
• the M coding units are M coded coding units located before the current coding unit in the coding order, and M is a positive integer.
  • the second determining unit 22 is specifically configured to determine, if M is equal to 1, the classification information of a coding unit located before the current coding unit in the coding sequence as the current coding unit. Classification information of coding units.
  • the second determination unit 22 is specifically configured to perform preset processing on the classification information of M coding units if M is greater than 1, and determine the processing result as the classification information of the current coding unit. .
  • the second determining unit 22 is specifically configured to determine the average value of the classification information of the M coding units as the classification information of the current coding unit.
  • the classification information includes at least one of a first height threshold and a second height threshold
  • the first parameter includes at least one of a first sub-parameter and a second sub-parameter
  • the first sub-parameter is used to indicate the calculation period of the first height threshold
  • the second sub-parameter is used to indicate the calculation period of the second height threshold
• the first height threshold and the second height threshold are used to classify the point cloud in the current coding unit.
  • the second determination unit 22 is specifically configured to determine the motion vector information calculation period corresponding to the current coding unit according to the second parameter; determine the current coding period according to the motion vector information calculation period. The motion vector information of the unit.
• the second determining unit 22 is specifically configured to determine the motion vector information of the current coding unit according to the reference coding unit of the current coding unit if the current coding unit is the first coding unit within the motion vector information calculation cycle.
  • the second determining unit 22 is specifically configured to, if the current coding unit is not the first coding unit within the motion vector information calculation cycle, determine the coding unit based on the coded information or a default value. Motion vector information of the current coding unit.
  • the encoding unit 23 is also used to write the second parameter into the point cloud code stream, and if the current encoding unit is the first encoding unit within the motion vector information calculation cycle, Then the motion vector information of the current coding unit is written into the point cloud code stream.
  • the encoding unit 23 is also used to write the second parameter into the point cloud code stream, and if the current encoding unit is not the first encoding unit within the motion vector information calculation cycle, , then skip writing the motion vector information of the current coding unit into the point cloud code stream.
• the second parameter indicates that motion vector information is calculated once every R coding units.
• the second determination unit 22 is specifically configured to determine the motion vector information of the current coding unit according to the reference coding unit of the current coding unit if the current coding unit is the NR-th coding unit in the coding order, where R and N are both positive integers.
  • the second determination unit 22 is also configured to determine the motion of the current coding unit based on coded information or a default value if the current coding unit is not the NR-th coding unit in the coding sequence. vector information.
• the coding unit 23 is also used to write the second parameter into the point cloud code stream, and if the current coding unit is the NR-th coding unit in the coding order, write the motion vector information of the current coding unit into the code stream.
• the encoding unit 23 is also used to write the second parameter into the point cloud code stream, and if the current coding unit is not the NR-th coding unit in the coding order, skip writing the motion vector information of the current coding unit into the code stream.
• the encoding unit 23 is also configured to write the motion vector information of the current coding unit into the point cloud code stream, and skip writing the second parameter into the point cloud code stream.
  • the second determining unit 22 is specifically configured to determine the motion vector information of the current coding unit based on the motion vector information of S coding units, which are located in the current coding sequence in the coding sequence. S coded coding units before the coding unit, where S is a positive integer.
  • the second determining unit 22 is specifically configured to determine, if S is equal to 1, the motion vector information of a coding unit before the current coding unit in the coding sequence as the Motion vector information of the current coding unit.
  • the second determination unit 22 is specifically configured to perform preset processing on the motion vector information of S coding units if the S is greater than 1, and determine the processing result as the motion of the current coding unit. vector information.
  • the second determination unit 22 is specifically configured to determine the average value of the motion vector information of the S coding units as the motion vector information of the current coding unit.
  • the second determination unit 22 is further configured to determine the degree of change of the classification information of different coding units according to the first parameter; and to determine the motion vector information of the current coding unit according to the degree of change.
  • the second determining unit 22 is specifically configured to determine the classification information of the current coding unit according to the first parameter, determine the classification information of the current coding unit, and determine the reference of the current coding unit. The degree of variation between categorical information of coding units.
• the second determining unit 22 is specifically configured to, if the degree of change is less than or equal to the first preset value, determine the default value or the motion vector information of the previous coding unit of the current coding unit in the coding order as the motion vector information of the current coding unit.
  • the second determination unit 22 is specifically configured to determine the motion vector information of the current coding unit according to the reference coding unit of the current coding unit if the degree of change is greater than the first preset value.
  • the encoding unit 23 is also configured to write the first parameter into the point cloud code stream, and skip writing the second parameter into the point cloud code stream.
  • the motion vector information includes at least one of a rotation matrix and an offset vector
  • the second parameter includes at least one of a third sub-parameter and a fourth sub-parameter
  • the third sub-parameter is used to indicate the calculation period of the rotation matrix
  • the fourth sub-parameter is used to indicate the calculation period of the offset vector.
  • the encoding unit 23 is specifically configured to classify the point cloud in the current encoding unit into P type point clouds according to the classification information of the current encoding unit, where P is a positive integer greater than 1; According to the motion vector information of the current encoding unit, the motion vector information corresponding to the P-type point cloud is determined; according to the motion vector information corresponding to the P-type point cloud, the current encoding unit is encoded.
  • the classification information includes a first height threshold and a second height threshold, and the first height threshold is greater than the second height threshold.
  • the encoding unit 23 is specifically configured to classify the point cloud in the current coding unit into P types of point clouds according to the first height threshold and the second height threshold.
  • the P-type point cloud includes a first-type point cloud and a second-type point cloud.
  • the encoding unit 23 is specifically configured to classify point clouds in the current coding unit whose height values are less than or equal to the first height threshold and greater than or equal to the second height threshold into the first type of point cloud, and to classify point clouds in the current coding unit whose height values are greater than the first height threshold or less than the second height threshold into the second type of point cloud.
  • the encoding unit 23 is specifically configured to perform motion compensation on the reference coding unit of the current coding unit according to the motion vector information of the P types of point clouds to obtain the prediction information of the current coding unit, and to encode at least one of the geometric information and the attribute information of the current coding unit according to the prediction information.
  • the current coding unit is the current point cloud frame, or a spatial region of the current point cloud frame.
  • the encoding unit 23 is also configured to write at least one of the first parameter and the second parameter into the sequence header parameter set.
  • the first parameter is used to indicate that the classification information is calculated once at intervals of multiple point cloud frames; and/or,
  • the second parameter is used to indicate that motion vector information is calculated once at intervals of multiple point cloud frames.
  • the current encoding unit is the current point cloud slice
  • the encoding unit 23 is also configured to write at least one of the first parameter and the second parameter into the point cloud slice header information.
  • the first parameter is used to indicate that for the i-th point cloud patch in the point cloud frame, the classification information of the i-th point cloud patch is calculated once at intervals of multiple point cloud frames, and the i is a positive integer; and/or,
  • the second parameter is used to indicate that for the i-th point cloud piece in the point cloud frame, the motion vector information of the i-th point cloud piece is calculated once at intervals of multiple point cloud frames.
  • the first parameter is used to indicate that the classification information is calculated once every multiple point cloud slices within a point cloud frame; and/or,
  • the second parameter is used to indicate that the motion vector information is calculated once every multiple point cloud slices within a point cloud frame.
  • the first determining unit 21 is also configured to determine a first flag, where the first flag is used to indicate whether to perform inter-frame prediction encoding; if the first flag indicates that inter-frame prediction encoding is to be performed, at least one of the first parameter and the second parameter is determined.
  • the device embodiments and the method embodiments may correspond to each other, and similar descriptions may refer to the method embodiments. To avoid repetition, they will not be repeated here.
  • the point cloud encoding device 20 shown in FIG. 8 may correspond to the corresponding subject performing the point cloud encoding method of the embodiments of the present application, and the aforementioned and other operations and/or functions of each unit in the point cloud encoding device 20 are respectively intended to implement the corresponding processes in the point cloud encoding method; for the sake of brevity, they will not be described again here.
  • the software unit may be located in a mature storage medium in this field such as random access memory, flash memory, read-only memory, programmable read-only memory, electrically erasable programmable memory, register, etc.
  • the storage medium is located in the memory, and the processor reads the information in the memory and completes the steps in the above method embodiment in combination with its hardware.
  • Figure 9 is a schematic block diagram of an electronic device provided by an embodiment of the present application.
  • the electronic device 30 may be a point cloud decoding device or a point cloud encoding device described in the embodiment of the present application.
  • the electronic device 30 may include:
  • a memory 33 and a processor 32, where the memory 33 is used to store a computer program 34 and to transmit the program code 34 to the processor 32.
  • the processor 32 can call and run the computer program 34 from the memory 33 to implement the method in the embodiment of the present application.
  • the processor 32 may be configured to perform the steps in the above method 200 according to instructions in the computer program 34 .
  • the processor 32 may include but is not limited to:
  • Digital Signal Processor (DSP)
  • Application Specific Integrated Circuit (ASIC)
  • Field Programmable Gate Array (FPGA)
  • the memory 33 includes but is not limited to:
  • Non-volatile memory may be read-only memory (Read-Only Memory, ROM), programmable read-only memory (Programmable ROM, PROM), erasable programmable read-only memory (Erasable PROM, EPROM), electrically erasable programmable read-only memory (Electrically EPROM, EEPROM) or flash memory. Volatile memory may be Random Access Memory (RAM), which is used as an external cache.
  • Static Random Access Memory (SRAM)
  • Dynamic Random Access Memory (DRAM)
  • Synchronous Dynamic Random Access Memory (SDRAM)
  • Double Data Rate Synchronous Dynamic Random Access Memory (DDR SDRAM)
  • Enhanced Synchronous Dynamic Random Access Memory (ESDRAM)
  • Synchlink Dynamic Random Access Memory (SLDRAM)
  • Direct Rambus Random Access Memory (DR RAM)
  • the computer program 34 can be divided into one or more units, and the one or more units are stored in the memory 33 and executed by the processor 32 to complete the tasks provided by this application.
  • the one or more units may be a series of computer program instruction segments capable of completing specific functions. The instruction segments are used to describe the execution process of the computer program 34 in the electronic device 30 .
  • the electronic device 30 may also include:
  • a transceiver 33, where the transceiver 33 can be connected to the processor 32 or the memory 33.
  • the processor 32 can control the transceiver 33 to communicate with other devices. Specifically, it can send information or data to other devices, or receive information or data sent by other devices.
  • Transceiver 33 may include a transmitter and a receiver.
  • the transceiver 33 may further include an antenna, and the number of antennas may be one or more.
  • a bus system, where in addition to a data bus, the bus system also includes a power bus, a control bus and a status signal bus.
  • Figure 10 is a schematic block diagram of a point cloud encoding and decoding system provided by an embodiment of the present application.
  • the point cloud encoding and decoding system 40 may include: a point cloud encoder 41 and a point cloud decoder 42, where the point cloud encoder 41 is used to perform the point cloud encoding method involved in the embodiment of the present application.
  • the point cloud decoder 42 is used to perform the point cloud decoding method involved in the embodiments of the present application.
  • This application also provides a code stream, which is generated according to the above encoding method.
  • This application also provides a computer storage medium on which a computer program is stored.
  • When the computer program is executed by a computer, the computer can perform the method of the above method embodiments.
  • embodiments of the present application also provide a computer program product containing instructions, which when executed by a computer causes the computer to perform the method of the above method embodiments.
  • the computer program product includes one or more computer instructions.
  • the computer may be a general purpose computer, a special purpose computer, a computer network, or other programmable device.
  • the computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another computer-readable storage medium; for example, the computer instructions may be transmitted from one website, computer, server or data center to another website, computer, server or data center via wired means (such as coaxial cable, optical fiber or digital subscriber line (DSL)) or wireless means (such as infrared, radio or microwave).
  • the computer-readable storage medium can be any available medium that can be accessed by a computer or a data storage device such as a server or data center integrated with one or more available media.
  • the available media may be magnetic media (such as floppy disks, hard disks or tapes), optical media (such as digital video discs (DVD)), or semiconductor media (such as solid state disks (SSD)), etc.
  • the disclosed systems, devices and methods can be implemented in other ways.
  • the device embodiments described above are only illustrative.
  • the division of the units is only a logical function division. In actual implementation, there may be other division methods.
  • multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented.
  • the coupling or direct coupling or communication connection between each other shown or discussed may be through some interfaces, and the indirect coupling or communication connection of the devices or units may be in electrical, mechanical or other forms.
  • a unit described as a separate component may or may not be physically separate.
  • a component shown as a unit may or may not be a physical unit, that is, it may be located in one place, or it may be distributed to multiple network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of this embodiment.
  • each functional unit in various embodiments of the present application can be integrated into a processing unit, or each unit can exist physically alone, or two or more units can be integrated into one unit.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

本申请提供一种点云解码、编码方法与装置,该方法包括:解码点云码流,确定当前解码单元的分类信息和运动信息中的至少一个,其中,分类信息是基于第一参数确定的,运动向量信息是基于第二参数确定的,第一参数用于指示分类信息的计算周期,第二参数用于指示运动向量信息的计算周期;根据当前解码单元的分类信息和运动信息中的至少一个,对当前解码单元进行解码。即本申请周期性的计算分类信息和运动向量信息,相比于针对每一个解码单元计算一次分类信息和运动向量信息,大大降低了分类信息和运动向量信息的计算次数,降低了编解码的处理时间,提升了编解码效率。

Description

点云编解码方法、装置、设备及存储介质 技术领域
本申请涉及点云技术领域,尤其涉及一种点云编解码方法、装置、设备及存储介质。
背景技术
通过采集设备对物体表面进行采集,形成点云数据,点云数据包括几十万甚至更多的点。在视频制作过程中,将点云数据以点云媒体文件的形式在点云编码设备和点云解码设备之间传输。但是,如此庞大的点给传输带来了挑战,因此,点云编码设备需要对点云数据进行压缩后传输。
目前在使用帧间预测的点云编解码中,针对每一个编码单元,需要计算一次分类信息和运动向量信息。这将增大编解码的处理时间,降低编解码效率。
发明内容
本申请实施例提供了一种点云编解码方法、装置、设备及存储介质,以降低编解码处理时间,提升编解码效率。
第一方面,本申请实施例提供一种点云解码方法,包括:
解码点云码流,确定当前解码单元的分类信息和运动向量信息中的至少一个,所述分类信息是基于第一参数确定的,所述运动向量信息是基于第二参数确定的,所述第一参数用于指示分类信息的计算周期,所述第二参数用于指示运动向量信息的计算周期;
根据所述当前解码单元的分类信息和运动向量信息中的至少一个,对所述当前解码单元进行解码。
第二方面,本申请提供了一种点云编码方法,包括:
确定第一参数和第二参数中的至少一个,所述第一参数用于指示分类信息的计算周期,所述第二参数用于指示运动向量信息的计算周期;
根据所述第一参数和所述第二参数中的至少一个,确定当前编码单元的分类信息和运动向量信息中的至少一个;
根据所述当前编码单元的分类信息和运动向量信息中的至少一个,对所述当前编码单元进行编码。
第三方面,本申请提供了一种点云解码装置,用于执行上述第一方面或其各实现方式中的方法。具体地,该装置包括用于执行上述第一方面或其各实现方式中的方法的功能单元。
第四方面,本申请提供了一种点云编码装置,用于执行上述第二方面或其各实现方式中的方法。具体地,该装置包括用于执行上述第二方面或其各实现方式中的方法的功能单元。
第五方面,提供了一种点云解码器,包括处理器和存储器。该存储器用于存储计算机程序,该处理器用于调用并运行该存储器中存储的计算机程序,以执行上述第一方面或其各实现方式中的方法。
第六方面,提供了一种点云编码器,包括处理器和存储器。该存储器用于存储计算机程序,该处理器用于调用并运行该存储器中存储的计算机程序,以执行上述第二方面或其各实现方式中的方法。
第七方面,提供了一种点云编解码系统,包括点云编码器和点云解码器。点云解码器用于执行上述第一方面或其各实现方式中的方法,点云编码器用于执行上述第二方面或其各实现方式中的方法。
第八方面,提供了一种芯片,用于实现上述第一方面至第二方面中的任一方面或其各实现方式中的方法。具体地,该芯片包括:处理器,用于从存储器中调用并运行计算机程序,使得安装有该芯片的设备执行如上述第一方面至第二方面中的任一方面或其各实现方式中的方法。
第九方面,提供了一种计算机可读存储介质,用于存储计算机程序,该计算机程序使得计算机执行上述第一方面至第二方面中的任一方面或其各实现方式中的方法。
第十方面,提供了一种计算机程序产品,包括计算机程序指令,该计算机程序指令使得计算机执行上述第一方面至第二方面中的任一方面或其各实现方式中的方法。
第十一方面,提供了一种计算机程序,当其在计算机上运行时,使得计算机执行上述第一方面至第二方面中的任一方面或其各实现方式中的方法。
第十二方面,提供了一种码流,码流是基于上述第二方面的方法生成的,可选的,上述码流包括第一参数和第二参数中的至少一个。
基于以上技术方案,通过解码点云码流,确定当前解码单元的分类信息和运动向量信息中的至少一个,其中,分类信息是基于第一参数确定的,运动向量信息是基于第二参数确定的,第一参数用于指示分类信息的计算周期,第二参数用于指示运动向量信息的计算周期。接着,根据当前解码单元的分类信息和运动向量信息中的至少一个,对当前解码单元进行解码。即本申请实施例,周期性的计算分类信息和运动向量信息,相比于针对每一个解码单元计算一次分类信息和运动向量信息,本申请实施例大大降低了分类信息和运动向量信息的计算次数,降低了编解码的处理时间,提升了编解码效率。
附图说明
图1为本申请实施例涉及的一种点云编解码系统的示意性框图;
图2是本申请实施例提供的点云编码器的示意性框图;
图3是本申请实施例提供的点云解码器的示意性框图;
图4为本申请一实施例提供的点云解码方法流程示意图;
图5为本申请实施例涉及的点云直方图;
图6为本申请一实施例提供的点云编码方法流程示意图;
图7是本申请实施例提供的点云解码装置的示意性框图;
图8是本申请实施例提供的点云编码装置的示意性框图;
图9是本申请实施例提供的电子设备的示意性框图;
图10是本申请实施例提供的点云编解码系统的示意性框图。
具体实施方式
本申请可应用于点云上采样技术领域,例如可以应用于点云压缩技术领域。
为了便于理解本申请的实施例,首先对本申请实施例涉及到的相关概念进行如下简单介绍:
点云(Point Cloud)是指空间中一组无规则分布的、表达三维物体或三维场景的空间结构及表面属性的离散点集。
点云数据(Point Cloud Data)是点云的具体记录形式,点云中的点可以包括点的位置信息和点的属性信息。例如,点的位置信息可以是点的三维坐标信息。点的位置信息也可称为点的几何信息。例如,点的属性信息可包括颜色信息、反射率信息、法向量信息等等。例如,所述颜色信息可以是任意一种色彩空间上的信息。例如,所述颜色信息可以是(RGB)。再如,所述颜色信息可以是于亮度色度(YcbCr,YUV)信息。例如,Y表示明亮度(Luma),Cb(U)表示蓝色色差,Cr(V)表示红色,U和V表示为色度(Chroma)用于描述色差信息。例如,根据激光测量原理得到的点云,所述点云中的点可以包括点的三维坐标信息和点的激光反射强度(reflectance)。再如,根据摄影测量原理得到的点云,所述点云中的点可以可包括点的三维坐标信息和点的颜色信息。再如,结合激光测量和摄影测量原理得到点云,所述点云中的点可以可包括点的三维坐标信息、点的激光反射强度(reflectance)和点的颜色信息。
点云数据的获取途径可以包括但不限于以下至少一种:(1)计算机设备生成。计算机设备可以根据虚拟三维物体及虚拟三维场景的生成点云数据。(2)3D(3-Dimension,三维)激光扫描获取。通过3D激光扫描可以获取静态现实世界三维物体或三维场景的点云数据,每秒可以获取百万级点云数据;(3)3D摄影测量获取。通过3D摄影设备(即一组摄像机或具有多个镜头和传感器的摄像机设备)对现实世界的视觉场景进行采集以获取现实世界的视觉场景的点云数据,通过3D摄影可以获得动态现实世界三维物体或三维场景的点云数据。(4)通过医学设备获取生物组织器官的点云数据。在医学领域可以通过磁共振成像(Magnetic Resonance Imaging,MRI)、电子计算机断层扫描(Computed Tomography,CT)、电磁定位信息等医学设备获取生物组织器官的点云数据。
点云可以按获取的途径分为:密集型点云和稀疏性点云。
点云按照数据的时序类型划分为:
第一类静态点云:即物体是静止的,获取点云的设备也是静止的;
第二类动态点云:物体是运动的,但获取点云的设备是静止的;
第三类动态获取点云:获取点云的设备是运动的。
按点云的用途分为两大类:
类别一:机器感知点云,其可以用于自主导航系统、实时巡检系统、地理信息系统、视觉分拣机器人、抢险救灾机器人等场景;
类别二:人眼感知点云,其可以用于数字文化遗产、自由视点广播、三维沉浸通信、三维沉浸交互等点云应用场景。
随着三维重建和三维成像技术的发展,点云被广泛应用于虚拟现实、沉浸式远程呈现、三维打印等领域。但由于三维点云往往具有庞大数量的点,且点的分布在空间中具有无序性;同时,每个点又往往具有丰富的属性信息,导致一个点云具有庞大的数据量,给点云的存储和传输都带来了巨大的挑战。因此,点云压缩编码技术是点云处理和应用的关键技术之一。
下面对点云编解码的相关知识进行介绍。
图1为本申请实施例涉及的一种点云编解码系统的示意性框图。需要说明的是,图1只是一种示例,本申请实施例的点云编解码系统包括但不限于图1所示。如图1所示,该点云编解码系统100包含编码设备110和解码设备120。其中编码设备用于对点云数据进行编码(可以理解成压缩)产生码流,并将码流传输给解码设备。解码设备对编码设备编码产生的码流进行解码,得到解码后的点云数据。
本申请实施例的编码设备110可以理解为具有点云编码功能的设备,解码设备120可以理解为具有点云解码功能的设备,即本申请实施例对编码设备110和解码设备120包括更广泛的装置,例如包含智能手机、台式计算机、移动计算装置、笔记本(例如,膝上型)计算机、平板计算机、机顶盒、电视、相机、显示装置、数字媒体播放器、点云游戏控制台、车载计算机等。
在一些实施例中,编码设备110可以经由信道130将编码后的点云数据(如码流)传输给解码设备120。信道130可以包括能够将编码后的点云数据从编码设备110传输到解码设备120的一个或多个媒体和/或装置。
在一个实例中,信道130包括使编码设备110能够实时地将编码后的点云数据直接发射到解码设备120的一个或多个通信媒体。在此实例中,编码设备110可根据通信标准来调制编码后的点云数据,且将调制后的点云数据发射到解码设备120。其中通信媒体包含无线通信媒体,例如射频频谱,可选的,通信媒体还可以包含有线通信媒体,例如一根或多根物理传输线。
在另一实例中,信道130包括存储介质,该存储介质可以存储编码设备110编码后的点云数据。存储介质包含多种本地存取式数据存储介质,例如光盘、DVD、快闪存储器等。在该实例中,解码设备120可从该存储介质中获取编码后的点云数据。
在另一实例中,信道130可包含存储服务器,该存储服务器可以存储编码设备110编码后的点云数据。在此实例中,解码设备120可以从该存储服务器中下载存储的编码后的点云数据。可选的,该存储服务器可以存储编码后的点云数据且可以将该编码后的点云数据发射到解码设备120,例如web服务器(例如,用于网站)、文件传送协议(FTP)服务器等。
一些实施例中,编码设备110包含点云编码器112及输出接口113。其中,输出接口113可以包含调制器/解调器(调制解调器)和/或发射器。
在一些实施例中,编码设备110除了包括点云编码器112和输入接口113外,还可以包括点云源111。
点云源111可包含点云采集装置(例如,扫描仪)、点云存档、点云输入接口、计算机图形系统中的至少一个,其中,点云输入接口用于从点云内容提供者处接收点云数据,计算机图形系统用于产生点云数据。
点云编码器112对来自点云源111的点云数据进行编码,产生码流。点云编码器112经由输出接口113将编码后的点云数据直接传输到解码设备120。编码后的点云数据还可存储于存储介质或存储服务器上,以供解码设备120后续读取。
在一些实施例中,解码设备120包含输入接口121和点云解码器122。
在一些实施例中,解码设备120除包括输入接口121和点云解码器122外,还可以包括显示装置123。
其中,输入接口121包含接收器及/或调制解调器。输入接口121可通过信道130接收编码后的点云数据。
点云解码器122用于对编码后的点云数据进行解码,得到解码后的点云数据,并将解码后的点云数据传输至显示装置123。
显示装置123显示解码后的点云数据。显示装置123可与解码设备120整合或在解码设备120外部。显示装置123可包括多种显示装置,例如液晶显示器(LCD)、等离子体显示器、有机发光二极管(OLED)显示器或其它类型的显示装置。
此外,图1仅为实例,本申请实施例的技术方案不限于图1,例如本申请的技术还可以应用于单侧的点云编码或单侧的点云解码。
目前的点云编码器可以采用国际标准组织运动图像专家组(Moving Picture Experts Group,MPEG)提出了两种点云压缩编码技术路线,分别是基于投影的点云压缩(Video-based Point Cloud Compression,VPCC)和基于几何的点云压缩(Geometry-based Point Cloud Compression,GPCC)。VPCC通过将三维点云投影到二维,利用现有的二维编码工具对投影后的二维图像进行编码,GPCC利用层级化的结构将点云逐级划分为多个单元,通过编码记录划分过程编码整个点云。
下面以GPCC编解码框架为例,对本申请实施例可适用的点云编码器和点云解码器进行说明。
图2是本申请实施例提供的点云编码器的示意性框图。
由上述可知点云中的点可以包括点的位置信息和点的属性信息,因此,点云中的点的编码主要包括位置编码和属性编码。在一些示例中点云中点的位置信息又称为几何信息,对应的点云中点的位置编码也可以称为几何编码。
在GPCC编码框架中,点云的几何信息和对应的属性信息是分开编码的。
位置编码的过程包括:首先建立包围点云所有点的最小正方体,该正方体称为最小包围盒。对最小包围盒进行八叉树划分,即将包围盒八等分为8个子立方体,对非空的(包含点云中的点)的子立方体继续进行八等分,直到划分得到的叶子结点为1×1×1的单位立方体时停止划分,在此过程中用8位二进制数编码每次划分产生的8个子立方体的占用情况,生成二进制的几何比特流,即几何码流。具体是,对点云中的点进行预处理,例如坐标变换、量化和移除重复点等;接着,对预处理后的点云进行几何编码,例如构建八叉树,基于构建的八叉树进行几何编码形成几何码流。同时,基于构建的八叉树输出的位置信息,对点云数据中各点的位置信息进行重建,得到各点的位置信息的重建值。
属性编码过程包括:通过给定输入点云的位置信息的重建信息和属性信息的原始值,选择三种预测模式的一种进行点云预测,对预测后的结果进行量化,并进行算术编码形成属性码流。
如图2所示,位置编码可通过以下单元实现:
坐标转换(Tanmsform coordinates)单元201、体素(Voxelize)单元202、八叉树划分(Analyze octree)单元203、几何重建(Reconstruct geometry)单元204、第一算术编码(Arithmetic enconde)单元205以及表面拟合单元(Analyze surface approximation)206。
坐标转换单元201可用于将点云中点的世界坐标变换为相对坐标。例如,点的几何坐标分别减去xyz坐标轴的最小值,相当于去直流操作,以实现将点云中的点的坐标从世界坐标转换为相对坐标。
体素(Voxelize)单元202也称为量化和移除重复点(Quantize and remove points)单元,可通过量化减少坐标的数目;量化后原先不同的点可能被赋予相同的坐标,基于此,可通过去重操作将重复的点删除;例如,具有相同量化位置和不同属性信息的多个云可通过属性转换合并到一个云中。在本申请的一些实施例中,体素单元202为可选的单元模块。
八叉树划分单元203可利用八叉树(octree)编码方式,编码量化的点的位置信息。例如,将点云按照八叉树的形式进行划分,由此,点的位置可以和八叉树的位置一一对应,通过统计八叉树中有点的位置,并将其标识(flag)记为1,以进行几何编码。
在一些实施例中,在基于三角面片集(trianglesoup,trisoup)的几何信息编码过程中,同样也要通过八叉树划分单元203对点云进行八叉树划分,但区别于基于八叉树的几何信息编码,该trisoup不需要将点云逐级划分到边长为1x1x1的单位立方体,而是划分到block(子块)边长为W时停止划分,基于每个block中点云的分布所形成的表面,得到该表面与block的十二条边所产生的至多十二个vertex(交点),通过表面拟合单元206对交点进行表面拟合,对拟合后的交 点进行几何编码。
几何重建单元204可以基于八叉树划分单元203输出的位置信息或表面拟合单元206拟合后的交点进行位置重建,得到点云数据中各点的位置信息的重建值。
算术编码单元205可以采用熵编码方式对八叉树分析单元203输出的位置信息或对表面拟合单元206拟合后的交点进行算术编码,例如将八叉树分析单元203输出的位置信息利用算术编码方式生成几何码流;几何码流也可称为几何比特流(geometry bitstream)。
属性编码可通过以下单元实现:
颜色转换(Transform colors)单元210、重着色(Transfer attributes)单元211、区域自适应分层变换(Region Adaptive Hierarchical Transform,RAHT)单元212、生成LOD(Generate LOD)单元213以及提升(lifting transform)单元214、量化系数(Quantize coefficients)单元215以及算术编码单元216。
需要说明的是,点云编码器200可包含比图2更多、更少或不同的功能组件。
颜色转换单元210可用于将点云中点的RGB色彩空间变换为YCbCr格式或其他格式。
重着色单元211利用重建的几何信息,对颜色信息进行重新着色,使得未编码的属性信息与重建的几何信息对应起来。
经过重着色单元211转换得到点的属性信息的原始值后,可选择任一种变换单元,对点云中的点进行变换。变换单元可包括:RAHT变换212和提升(lifting transform)单元214。其中,提升变化依赖生成细节层(level of detail,LOD)。
RAHT变换和提升变换中的任一项可以理解为用于对点云中点的属性信息进行预测,以得到点的属性信息的预测值,进而基于点的属性信息的预测值得到点的属性信息的残差值。例如,点的属性信息的残差值可以是点的属性信息的原始值减去点的属性信息的预测值。
在本申请的一实施例中,生成LOD单元生成LOD的过程包括:根据点云中点的位置信息,获取点与点之间的欧式距离;根据欧式距离,将点分为不同的细节表达层。在一个实施例中,可以将欧式距离进行排序后,将不同范围的欧式距离划分为不同的细节表达层。例如,可以随机挑选一个点,作为第一细节表达层。然后计算剩余点与该点的欧式距离,并将欧式距离符合第一阈值要求的点,归为第二细节表达层。获取第二细节表达层中点的质心,计算除第一、第二细节表达层以外的点与该质心的欧式距离,并将欧式距离符合第二阈值的点,归为第三细节表达层。以此类推,将所有的点都归到细节表达层中。通过调整欧式距离的阈值,可以使得每层LOD层的点的数量是递增的。应理解,LOD划分的方式还可以采用其它方式,本申请对此不进行限制。
需要说明的是,可以直接将点云划分为一个或多个细节表达层,也可以先将点云划分为多个点云切块(slice),再将每一个点云切块划分为一个或多个LOD层。
例如,可将点云划分为多个点云切块,每个点云切块的点的个数可以在55万-110万之间。每个点云切块可看成单独的点云。每个点云切块又可以划分为多个细节表达层,每个细节表达层包括多个点。在一个实施例中,可根据点与点之间的欧式距离,进行细节表达层的划分。
量化单元215可用于量化点的属性信息的残差值。例如,若量化单元215和RAHT变换单元212相连,则量化单元215可用于量化RAHT变换单元212输出的点的属性信息的残差值。
算术编码单元216可使用零行程编码(Zero run length coding)对点的属性信息的残差值进行熵编码,以得到属性码流。所述属性码流可以是比特流信息。
图3是本申请实施例提供的点云解码器的示意性框图。
如图3所示,解码器300可以从编码设备获取点云码流,通过解析码得到点云中的点的位置信息和属性信息。点云的解码包括位置解码和属性解码。
位置解码的过程包括:对几何码流进行算术解码;构建八叉树后进行合并,对点的位置信息进行重建,以得到点的位置信息的重建信息;对点的位置信息的重建信息进行坐标变换,得到点的位置信息。点的位置信息也可称为点的几何信息。
属性解码过程包括:通过解析属性码流,获取点云中点的属性信息的残差值;通过对点的属性信息的残差值进行反量化,得到反量化后的点的属性信息的残差值;基于位置解码过程中获取的点的位置信息的重建信息,选择如下RAHT逆变换和提升逆变换中的一种进行点云预测,得到预测值,预测值与残差值相加得到点的属性信息的重建值;对点的属性信息的重建值进行颜色空间逆转换,以得到解码点云。
如图3所示,位置解码可通过以下单元实现:
算数解码单元301、八叉树合成(synthesize octree)单元302、表面拟合单元(Synthesize suface approximation)303、几何重建(Reconstruct geometry)单元304以及逆坐标变换(inverse transform coordinates)单元305。
属性编码可通过以下单元实现:
算数解码单元310、反量化(inverse quantize)单元311、RAHT逆变换单元312、生成LOD(Generate LOD)单元313、提升逆变换(Inverse lifting)单元314以及逆颜色转换(inverse trasform colors)单元315。
需要说明的是,解压缩是压缩的逆过程,类似的,解码器300中的各个单元的功能可参见编码器200中相应的单元的功能。另外,点云解码器300可包含比图3更多、更少或不同的功能组件。
例如,解码器300可根据点云中点与点之间的欧式距离将点云划分为多个LOD;然后,依次对LOD中点的属性信息进行解码;例如,计算零行程编码技术中零的数量(zero_cnt),以基于zero_cnt对残差进行解码;接着,解码框架200可基于解码出的残差值进行反量化,并基于反量化后的残差值与当前点的预测值相加得到该点云的重建值,直到解码完所有的点云。当前点将会作为后续LOD中点的最邻近点,并利用当前点的重建值对后续点的属性信息进行 预测。
上述是基于GPCC编解码框架下的点云编解码器的基本流程,随着技术的发展,该框架或流程的一些模块或步骤可能会被优化,本申请适用于该基于GPCC编解码框架下的点云编解码器的基本流程,但不限于该框架及流程。
由于连续采集的点云序列中临近帧具有较高相关性,在一些实施例中,可以引入帧间预测提升点云编码效率。帧间预测主要包括运动估计、运动补偿等步骤,在运动估计步骤中,相邻两帧的空间运动偏移向量被计算得出并写入码流。在运动补偿步骤中,计算得到的运动向量进一步被用于计算点云的空间偏移,并使用偏移后的点云帧作为参考,进一步提升当前帧的编码效率。考虑到雷达点云空间跨度较大,不同部分的运动向量差别较大,在一些实施例中,将雷达点云划分为道路和非道路两部分,只使用非道路部分估计全局的运动向量。
由上述可知,目前在使用帧间预测的点云编解码中,针对每一个编码单元,需要计算一次分类信息和运动向量信息,这将增大编解码的处理时间,降低了点云的编解码效率。
为了解决上述技术问题,本申请实施例基于连续点云帧内容的相似度,不针对每一个编码单元计算一次分类信息和运动向量信息,而是间隔多个编码单元计算一次分类信息和运动向量信息,进而减少了分类信息和运动向量信息的计算次数,降低了编解码的处理时间,提升了编解码效率。
下面结合具体的实施例,对本申请实施例涉及的点云编解码方法进行介绍。
首先,以解码端为例,对本申请实施例提供的点云解码方法进行介绍。
图4为本申请一实施例提供的点云解码方法流程示意图。本申请实施例的点云解码方法可以由上述图1或图3所示的点云解码设备完成。
如图4所示,本申请实施例的点云解码方法包括:
S101、解码点云码流,确定当前解码单元的分类信息和运动向量信息中的至少一个。
其中,分类信息是基于第一参数确定的,运动向量信息是基于第二参数确定的,第一参数用于指示分类信息的计算周期,第二参数用于指示运动向量信息的计算周期。
由上述描述可知,连续采集的点云序列中临近帧具有较高相关性,因此可以引入帧间预测提升点云编解码效率。
其中,帧间预测主要包括运动估计、运动补偿等步骤。在一些实施例中,在运动估计步骤中,相邻两帧的空间运动偏移向量被计算得出并写入码流。在运动补偿步骤中,计算得到的运动向量进一步被用于计算点云的空间偏移,并使用偏移后的点云帧作为参考,进一步提升当前帧的编码效率。
本申请实施例对当前解码单元的运动向量信息的具体内容不做限制,可以为运动估计、运动补偿等步骤涉及的运动信息。
例如,运动向量信息可以为运动估计中,相邻两帧的空间运动偏移向量,即运动矢量。
再例如,运动向量信息还可以为运动补偿中,相邻两帧之间的运动估计ME(Motion Estimation)。
在实际场景中,不同物体的运动可能不同,示例性的,以移动车辆上的激光雷达传感器捕获的点云数据为例,在该点云数据中,道路和物体通常具有不同的运动。由于道路和雷达传感器之间的距离相对恒定,并且道路从一个车辆位置到下一个车辆位置发生微小变化,因此表示道路的点相对于雷达传感器位置的移动很小。相比之下,建筑物、路标、植被或其他车辆等物体具有较大的运动。由于道路和物体点具有不同的运动,因此将点云数据划分为道路和物体,将提高全局运动估计和补偿的准确性,从而提高压缩效率。也就是说,对于使用帧间预测的点云数据,为了提高帧间预测的准确性,提升压缩效率,针对一个解码单元,需要对该解码单元中的点云进行分类,例如将该解码单元中的点云划分为道路点云和非道路点云。
在一些实施例中,通过分类信息来指示解码单元中点云的分类,其中分类信息可以理解为将点云分成几个类别所需的信息。
在本申请实施例中,当前解码单元的分类信息可以理解为当前解码单元中点云的分类信息,即将当前解码单元中的点云分成几个类别所需的信息。
在点云解码过程中,可以将点云数据划分为至少一个解码单元,每一个解码单元的解码过程独立,且每一个解码单元的解码过程基本一致。为了便于描述,本申请实施例以当前正在解码的解码单元,即当前解码单元为例进行说明。
本申请实施例对当前解码单元的具体大小不做限制,可以根据实际需要确定。
在一些实施例中,当前解码单元为当前点云帧,即可以将一个点云帧作为一个解码单元进行解码。
在一些实施例中,当前解码单元为当前点云帧的部分区域,例如将当前点云帧划分为多个区域,将一个区域作为一个解码单元,进行单独解码。
本申请实施例对将当前点云帧划分为多个区域的具体方式不做限制。
在一种示例中,将当前点云帧划分为多个点云片,这多个点云片的大小可以相同,也可以不完全相同,将一个点云片作为一个解码单元,进行单独解码。
在另一种示例中,将当前点云帧划分为多个点云块,这多个点云块的大小可以相同,也可以不完全相同,将一个点云块作为一个解码单元,进行单独解码。
本申请实施例,为了避免对每一个解码单元计算一次分类信息和运动向量信息,则设置第一参数和第二参数中的至少一个。其中,第一参数用于指示分类信息的计算周期,第二参数用于指示运动向量信息的计算周期。这样,编码端可以根据第一参数指示的分类信息计算周期,周期性的计算分类信息,和/或根据第二参数指示的运动向量信息计算周期,周期性的计算运动向量信息,进而减少了分类信息和/或运动向量信息的计算次数,提升了编解码效率。
在一些实施例中,上述分类信息的计算周期可以理解为每隔至少一个解码单元,计算一次分类信息,或每隔至少一个点云帧,计算一次分类信息。
在一些实施例中,上述运动向量信息的计算周期可以理解为每隔至少一个解码单元,计算一次运动向量信息,或 每隔至少一个点云帧,计算一次运动向量信息。
本申请实施例中,上述S101中解码端解码点云码流,确定当前解码单元的分类信息和运动向量信息中的至少一个的具体实现方式包括但不限于如下几种:
方式一,解码端从点云码流中,解码出当前解码单元的分类信息和运动向量信息中的至少一个。
在该方式一中,编码端根据第一参数和/或第二参数,可以确定出每一个解码单元的分类信息,和/或确定出每一个解码单元的运动向量信息。接着,编码端将每一个解码单元的分类信息和运动向量信息中的至少一个,写入点云码流中。这样,解码端可以通过直接解码码流,得到每一个解码单元的分类信息,和/或每一个解码单元的运动向量信息。
在该方式一的一种可能的实现方式中,编码端可以跳过将第一参数和/或第二参数写入点云码流,即编码端未将第一参数和/或第二参数写入点云码流,而是直接将每一个解码单元的分类信息,和/或每一个解码单元的运动向量信息写入点云码流。这样,解码端可以采用已有的解码方法,从码流中直接解码出每一个解码单元的分类信息,和/或每一个解码单元的运动向量信息,进而在提升编码效率的提前下,不增加解码复杂度。
在一些实施例中,解码端还可以根据如下方式二,确定出当前解码单元的分类信息和运动向量信息中的至少一个。
方式二,解码端通过如下步骤S101-A和S101-B,确定出分类信息和运动向量信息中的至少一个:
S101-A、从点云码流中,解码出第一参数和第二参数中的至少一个;
S101-B、根据第一参数,确定当前解码单元的分类信息,和/或根据第二参数,确定当前解码单元的运动向量信息。
需要说明的是,上述第一参数和第二参数可以单独使用,在一种示例中,编码端将第一参数写入点云码流中,但是未将第二参数写入点云码流,这样,解码端可以根据第一参数,确定当前解码单元的分类信息,通过解码点云码流,得到当前解码单元的运动向量信息。在另一种示例中,编码端将第二参数写入点云码流中,但是未将第一参数写入点云码流,这样,解码端可以根据第二参数,确定当前解码单元的运动向量信息,通过解码点云码流,得到当前解码单元的分类信息。在又一种示例中,编码端将第一参数和第二参数均写入点云码流,这样解码端可以根据第一参数确定当前解码单元的分类信息,以及根据第二参数确定当前解码单元的运动向量信息。
在该方式二中,若编码端将第一参数写入点云码流,则跳过将每一个解码单元的分类信息写入点云码流,对应的,解码端根据第一参数,确定解码单元的分类信息,而不是通过逐一解码得到解码单元中的点云的分类信息。和/或,若编码端将第二参数写入点云码流,则跳过将每一个解码单元的分类信息写入点云码流,对应的,解码端根据第二参数,确定解码单元的运动向量信息,而不是通过逐一解码得到解码单元的运动向量信息。由此可知,编码端将第一参数和/或第二参数写入码流,而跳过将每一个解码单元的分类信息和/或运动向量信息写入点云码流,可以降低解码处理时间,且降低了编码各解码单元的分类信息和/或运动向量信息的码流负担。
可选地,上述第一参数和第二参数中的至少一个,可以以无符号整数形式存储,记为u(v),表示使用v位比特位描述一个参数。
可选的,上述第一参数和第二参数中的至少一个,也可以以无符号指数哥伦布编码的形式进行存储,记为ue(v),表示上述参数取值先经过指数哥伦布编码变成v位01比特序列,再写入码流。
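作为对上述 ue(v) 存储形式的补充说明，下面给出无符号指数哥伦布编码的一段 Python 示意实现，仅用于帮助理解“先变成v位01比特序列再写入码流”的含义，具体细节以相应标准文本为准，函数名为说明用的假设命名。

```python
def exp_golomb_encode(value: int) -> str:
    """无符号指数哥伦布编码（示意）：先写 (len-1) 个前导 0，再写 value+1 的二进制。"""
    code = bin(value + 1)[2:]               # value+1 的二进制表示
    return "0" * (len(code) - 1) + code     # 前导 0 的个数等于二进制长度减 1

def exp_golomb_decode(bits: str) -> int:
    """对应的解码过程（示意）：统计前导 0 的个数 n，再读取 n+1 位二进制并减 1。"""
    n = 0
    while bits[n] == "0":
        n += 1
    return int(bits[n:2 * n + 1], 2) - 1

# 例如 exp_golomb_encode(4) 得到 "00101"，exp_golomb_decode("00101") 还原为 4。
```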
在一些实施例中,编码端将第一参数和第二参数中的至少一个写入序列头参数集中,此时,解码端通过解码顺序头参数集,得到第一参数和所述第二参数中的至少一个。
在一种示例中,第一参数用于指示间隔多个点云帧,计算一次分类信息;和/或,第二参数用于指示间隔多个点云帧,计算一次运动向量信息。
示例性的,第一参数和第二参数在序列头参数集中的存储方式,如表1所示:
表1
（表1：序列头参数集中 classification_period、motion_period、classification_info、motion_info 等语法元素的语法表，原文以图像形式给出）
表1中,classification_period表示第一参数,motion_period表示第二参数,classification_info表示分类信息,motion_info表示运动向量信息。
在一些实施例中,编码端将第一参数和第二参数中的至少一个写入点云片头信息中,此时,解码端通过解码点云 片头信息,得到第一参数和第二参数中的至少一个。
在一种示例中,第一参数用于指示对于点云帧中的第i个点云片,间隔多个点云帧,计算一次第i个点云片的分类信息,i为正整数;和/或,第二参数用于指示对于点云帧中的第i个点云片,间隔多个点云帧,计算一次第i个点云片的运动向量信息。
在该示例中,第一参数和第二参数在点云片头信息中的存储方式,如表2所示:
表2
（表2：点云片头信息中 classification_frame_period、motion_frame_period、classification_info、motion_info 等语法元素的语法表，原文以图像形式给出）
表2中,classification_frame_period表示第一参数,motion_frame_period表示第二参数,classification_info表示分类信息,motion_info表示运动向量信息。
在一种示例中,第一参数用于指示一个点云帧中间隔多个点云片,计算一次分类信息;和/或,第二参数用于一个点云帧中间隔多个点云片,计算一次运动向量信息。
在该示例中,第一参数和第二参数在点云片头信息中的存储方式,如表3所示:
表3
（表3：点云片头信息中 classification_slice_period、motion_slice_period、classification_info、motion_info 等语法元素的语法表，原文以图像形式给出）
表3中,classification_slice_period表示第一参数,motion_slice_period表示第二参数,classification_info表示分类信息,motion_info表示运动向量信息。
在一些实施例中,解码端在解码第一参数和第二参数之前,首先需要解码点云码流,得到第一标识inter_prediction_flag,第一标识inter_prediction_flag用于指示是否进行帧间预测解码;若第一标志inter_prediction_flag指示进行帧间预测编码时,解码点云码流,得到第一参数和第二参数中的至少一个。
下面对上述S101-B中,解码端根据第一参数,确定当前解码单元的分类信息的具体过程进行介绍。
上述S101-B中,解码端根据第一参数,确定当前解码单元的分类信息的具体实现方式包括但不限于如下几种:
方式1,上述S101-B包括如下S101-B-11和S101-B-12的步骤:
S101-B-11、根据第一参数,确定当前解码单元对应的分类信息计算周期;
S101-B-12、根据分类信息计算周期,确定当前解码单元的分类信息。
本申请实施例中,点云序列中不同的解码单元对应的分类信息计算周期可以相同,也可以不同,本申请实施例对此不做限制。
在一些实施例中,若点云序列中不同的解码单元对应的分类信息的计算周期相同,则可以在码流中写入一个第一参数,用该第一参数指示该点云序列中各解码单元的分类信息的计算周期。例如第一参数指示每隔K个解码单元,计算一次分类信息。
在一些实施例中,若点云序列中不同的解码单元对应的分类信息的计算周期不完全相同,则可以在码流中写入多个第一参数,用这多个第一参数指示该点云序列中各解码单元的分类信息的计算周期。例如,在码流中写入3个第一参数,其中第一个第一参数指示间隔K1个解码单元计算一次分类信息,第二个第一参数指示间隔K2个解码单元计算一次分类信息,第三个第一参数指示间隔K3个解码单元计算一次分类信息。
由上述可知,无论第一参数以何种形式指示分类信息的计算周期,则针对当前解码单元,可以根据码流解码出的第一参数,确定出当前解码单元对应的分类信息计算周期。例如,当前解码单元为当前点云帧,第一参数指示每间隔4个点云帧,计算一次分类信息,假设当前解码单元为解码顺序中的第6个点云帧,解码顺序中的第0个点云帧计算一次分类信息,第5个点云帧计算一次分类信息,第10个点云帧计算一次分类信息,其中,第0个点云帧到第4个点云帧可以理解为分类信息的第一个计算周期,第5个点云帧到第9个点云帧可以理解为分类信息的第二个计算周期,而该当前解码单元处于第二个计算周期内,进而将第二个计算周期,确定为当前解码单元对应的分类信息计算周期。
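下面用一段仅作示意的 Python 代码概括上述“由第一参数确定当前解码单元所处的分类信息计算周期”的过程，其中函数名与变量名均为说明用的假设命名，并非标准中的语法元素。

```python
def locate_in_period(unit_index: int, period: int):
    """unit_index：当前解码单元在解码顺序中的序号（从 0 开始）；
    period：第一参数指示的计算周期长度。返回所处周期的编号，
    以及当前单元是否为该周期内的第一个解码单元（示意实现）。"""
    period_index = unit_index // period              # 所处的第几个计算周期
    is_first_in_period = (unit_index % period == 0)  # 是否为周期内的第一个单元
    return period_index, is_first_in_period

# 正文示例：周期为 5（第 0、5、10 个点云帧各计算一次分类信息）时，
# 第 6 个点云帧落在第二个计算周期内，且不是该周期内的第一个单元。
print(locate_in_period(6, 5))   # 输出 (1, False)
```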
解码端根据上述步骤,确定出当前解码单元对应的分类信息计算周期后,根据该分类信息计算周期,确定当前解码单元的分类信息。
本申请实施例对解码端根据当前解码单元对应的分类信息计算周期,确定当前解码单元的分类信息的具体方式不做限制。
在一些实施例中,编解码两端约定,该分类信息计算周期内的解码单元的分类信息为默认值1,进而解码端将该默认值1确定为当前解码单元的分类信息。
在一些实施例中,编解码两端约定,采用预设的计算方法,计算出该分类信息计算周期内的解码单元的分类信息。例如,当前解码单元为当前点云帧的一个区域,则可以根据该当前点云帧中,当前解码单元周围已解码的点云的分类信息,确定当前解码单元的分类信息。
在一些实施例中,编码端将一个分类信息计算周期内的第一个解码单元的分类信息写入码流,而该分类信息计算周期内的其他解码单元的分类信息未写入码流。这样,解码端可以根据当前解码单元在当前解码单元对应的分类信息计算周期中的位置,确定当前解码单元的分类信息。
示例1,若当前解码单元为分类信息计算周期内的第一个解码单元,则解码点云码流,得到当前点解码单元的分类信息。
示例2,若当前解码单元为分类信息计算周期内的非第一个解码单元,则根据已解码信息或默认值,确定当前解码单元的分类信息。
在该实施例中,编码端将第一参数,以及每个分类信息计算周期内的第一解码单元的分类信息写入点云码流,而未将分类信息计算周期内的其他解码单元的分类信息写入点云码流。这样,解码端在确定当前解码单元对应的分类信息计算周期后,可以根据当前解码单元是否为该分类信息计算周期内的第一个解码单元,来确定当前解码单元的分类信息。
继续参照上述示例,假设当前解码单元对应的分类信息计算周期为第5个点云帧到第9个点云帧,若当前解码单元为解码顺序中的第5个点云帧,则解码端直接从码流中,解码出当前解码单元的分类信息。若当前解码单元不是第5个点云帧,例如为第6个点云帧,则解码端将默认值确定为当前解码单元的分类信息,或者根据已解码信息,确定当前解码单元的分类信息。
本申请实施对上述示例2中,根据已解码信息,确定当前解码单元的分类信息的具体实现方式不做限制。
在一种可能的实现方式中,根据当前解码单元对应的分类信息计算周期内的第一个解码单元的分类信息,确定当前解码单元的分类信息。例如,将该分类信息计算周期内的第一个解码单元的分类信息,确定为当前解码单元的分类信息,或者,对该分类信息计算周期内的第一个解码单元的分类信息进行处理,得到当前解码单元的分类信息。
在一种可能的实现方式中,根据如下步骤11,确定当前解码单元的分类信息:
步骤11、根据M个解码单元的分类信息,确定当前解码单元的分类信息,M个解码单元为解码顺序中位于当前解码单元之前的M个已解码的解码单元,M为正整数。
本申请实施例对上述M个解码单元的具体选择方式不做限制。
在一些实施例中,上述M个解码单元在解码顺序中依次相邻,中间没有间隔。
在一些实施例中,上述M个解码单元可以是解码顺序中,位于当前解码单元之前的任意M个解码单元,即这M个解码单元可以相邻,也可以不完全相邻,本申请实施例对此不做限制。
由于相邻点云帧之间的内容具体相关性,在该实现方式中,在已解码信息中,获取解码顺序中位于当前解码单元之前的M个解码单元,根据这M个解码单元的分类信息,确定当前解码单元的分类信息。
其中,步骤11中根据M个解码单元的分类信息,确定当前解码单元的分类信息的实现方式至少包括如下几种示例所示:
第一种示例,若M等于1,则将解码顺序中,位于当前解码单元之前的一个解码单元的分类信息,确定为当前解码单元的分类信息。例如,当前解码单元为解码顺序中的第6个点云帧,则将解码顺序中第5个点云帧的分类信息,确定为该当前解码单元的分类信息。
第二种示例中,若M大于1,则对M个解码单元的分类信息进行预设处理,并将处理结果确定为当前解码单元的 分类信息。
例如,将M个解码单元的分类信息的平均值,确定为当前解码单元的分类信息。
再例如,将M个解码单元的分类信息的加权平均值,确定为当前解码单元的分类信息。可选的,这M个解码单元中,在解码顺序中距离当前解码单元越近,则权重越大,距离当前解码单元越远,则权重越小。
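对于上述“根据前 M 个已解码单元的分类信息确定当前解码单元的分类信息”的几种示例，下面给出一段示意性的 Python 代码。其中“距离当前解码单元越近权重越大”采用 1 到 M 的线性权重，仅是一种假设的取法，并非标准规定的唯一方式。

```python
def infer_classification(prev_infos: list, weighted: bool = False) -> float:
    """prev_infos：解码顺序中位于当前解码单元之前的 M 个单元的分类信息
    （例如某个高度阈值），按由远到近排列（示意实现）。"""
    m = len(prev_infos)
    if m == 1:
        # M 等于 1：直接沿用前一个解码单元的分类信息
        return prev_infos[0]
    if not weighted:
        # 取 M 个分类信息的平均值
        return sum(prev_infos) / m
    # 加权平均：越靠近当前解码单元的单元权重越大（此处用线性权重作为假设）
    weights = list(range(1, m + 1))
    return sum(w * v for w, v in zip(weights, prev_infos)) / sum(weights)
```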
解码端除了通过上述方式1,确定出当前解码单元的分类信息外,还可以根据如下方式2,确定出当前解码单元的分类信息。
方式2,若第一参数指示每K个解码单元,计算一次分类信息,则上述S101-B的实现方式至少包括如下两种示例:
示例1,若当前解码单元为解码顺序中的第NK个解码单元,则解码点云码流,得到当前解码单元的分类信息,K、N均为正整数。
示例2,若当前解码单元为解码顺序中的非第NK个解码单元,则根据已解码信息或默认值,确定当前解码单元的分类信息。
在该方式2中,点云系列中每一个解码单元对应的分类信息计算周期相同,例如每K个解码单元计算一次分类信息。这样,编码端将解码顺序中的编号为0,以及编号为K的整数倍的解码单元(即第NK个解码单元)的分类信息,写入码流,而未将编号不为K的整数倍的解码单元(即非第NK个解码单元)的分类信息写入码流,进而降低码流负担。对应的,解码端对当前解码单元进行解码时,判断当前解码单元是否为解码顺序中的第NK个解码单元,即判断当前解码单元在解码顺序中的序号是否为K的整数倍。若解码端确定当前解码单元为解码顺序中的第NK个解码单元,则从码流中,解码出当前解码单元的分类信息。若当前解码单元不是解码顺序中的第NK个解码单元,则将默认值确定为当前解码单元的分类信息,或者根据已解码信息,确定当前解码单元的分类信息。
在该实现方式2中,若第一参数指示每K个解码单元,计算一次分类信息时,则解码端每K个解码单元,从码流中解码一次分类信息,可以降低解码端的解码次数。例如,点云序列包括1000个点云帧,假设将一个点云帧作为一个解码单元,这样解码端的解码次数为1000/K,而不是1000次,大大降低了解码次数,减轻解码端的解码负担,提升解码效率。
在该方式2中,若当前解码单元为解码顺序中的非第NK个解码单元,则根据已解码信息,确定当前解码单元的分类信息的具体过程可以参照上述步骤11和步骤12的描述,在此不再赘述。
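下面用一段示意性的 Python 片段概括方式2中解码端对分类信息的处理流程，其中 decode_fn、infer_fn 为假设的回调，分别代表“从码流中解码分类信息”和“根据已解码信息推导分类信息”。

```python
from typing import Callable, Optional

def get_classification(unit_index: int, K: int,
                       decode_fn: Callable[[], float],
                       infer_fn: Callable[[], float],
                       default: Optional[float] = None) -> float:
    """方式2的示意：每 K 个解码单元从码流中解码一次分类信息，
    其余解码单元沿用默认值或由已解码信息推导。"""
    if unit_index % K == 0:
        # 第 NK 个解码单元：分类信息写在码流中，直接解码
        return decode_fn()
    if default is not None:
        # 非第 NK 个解码单元：使用编解码两端约定的默认值
        return default
    # 或者根据已解码信息（例如前 M 个已解码单元的分类信息）推导
    return infer_fn()

# 用法示例（回调仅为演示）：第 7 个解码单元、周期 K=5，返回 infer_fn 的推导结果。
value = get_classification(7, K=5, decode_fn=lambda: 1.0, infer_fn=lambda: 0.8)
```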
本申请实施例中,根据上述方式,可以确定出当前解码单元的分类信息。
其中,分类信息可以理解为将点云划分为不同类别所需的信息。本申请实施例对分类信息的具体表现形式不做限制。
在一些实施例中,分类信息包括第一高度阈值和第二高度阈值中的至少一个,第一高度阈值和第二高度阈值用于当前解码单元中点云的分类。
可选的,上述第一高度阈值和第二高度阈值中的至少一个为预设值。
可选的,上述第一高度阈值和第二高度阈值中的至少一个为统计值。例如,图5所示,使用直方图对点云中点的高度值进行统计,直方图的横轴是点云中点的高度值,直方图的纵轴为在该高度值下的点数。图5是以雷达点云为例进行统计,以雷达所在高度为高度零点,因此大部分点的高度值为负值。接着,获取直方图的峰值对应的高度值,并计算高度值的标准差,则以峰值对应高度为中心,高于中心a倍(例如1.5倍)标准差的阈值记为第一高度阈值Top_thr,低于中心b倍(例如1.5倍)标准差的阈值记为第二高度阈值Bottom_thr。
上述第一高度阈值和第二高度阈值将点云划分为不同的类别。例如,将点云中高度值位于第一高度阈值和第二高度阈值之间的点云记为第一类点云,将高度值大于第一高度阈值,以及高度值小于第二高度阈值的点云记为第二类点云。
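结合图5所描述的直方图统计方法，下面给出一段示意性的 Python 代码，展示如何由点的高度分布计算第一高度阈值 Top_thr 与第二高度阈值 Bottom_thr，并据此把点云分为两类。其中 a、b 取 1.5 仅沿用正文中的示例取值，直方图分箱数为假设值。

```python
import numpy as np

def compute_height_thresholds(heights: np.ndarray, a: float = 1.5, b: float = 1.5):
    """根据高度直方图的峰值与高度值的标准差计算两个高度阈值（示意实现）。"""
    counts, edges = np.histogram(heights, bins=256)
    peak = 0.5 * (edges[counts.argmax()] + edges[counts.argmax() + 1])  # 峰值对应的高度
    std = heights.std()
    top_thr = peak + a * std        # 第一高度阈值 Top_thr
    bottom_thr = peak - b * std     # 第二高度阈值 Bottom_thr
    return top_thr, bottom_thr

def classify_points(heights: np.ndarray, top_thr: float, bottom_thr: float):
    """高度值位于 [Bottom_thr, Top_thr] 区间内的点归为第一类点云（如道路点），
    其余点归为第二类点云（如非道路点）。"""
    first_class = (heights >= bottom_thr) & (heights <= top_thr)
    return first_class, ~first_class
```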
在一些实施例中,若分类信息包括第一高度阈值和第二高度阈值中的至少一个,对应的,第一参数classification_period可以包括第一子参数top_threshold_period和第二子参数bottom_threshold_period中的至少一个;
其中,第一子参数top_threshold_period用于指示第一高度阈值的计算周期,第二子参数bottom_threshold_period用于指示第二高度阈值的计算周期。
上述第一子参数和第二子参数可以分别独立赋值。
可选的,上述第一高度阈值的计算周期与第二高度阈值的计算周期可以相同,也可以不同,本申请实施例对此不做限制。
上文对S101-B中根据第一参数,确定当前解码单元的分类信息的具体过程进行介绍,下面对S101-B中根据第二参数,确定当前解码单元的运动向量信息的具体实现过程进行介绍。
上述S101-B中,解码端根据第二参数,确定当前解码单元的运动向量信息的具体实现方式包括但不限于如下几种:
方式1,上述S101-B包括如下S101-B-21和S101-B-22的步骤:
S101-B-21、根据第二参数,确定当前解码单元对应的运动向量信息计算周期;
S101-B-22、根据运动向量信息计算周期,确定当前解码单元的运动向量信息。
本申请实施例中,点云序列中不同的解码单元对应的运动向量信息计算周期可以相同,也可以不同,本申请实施例对此不做限制。
在一些实施例中,若点云序列中不同的解码单元对应的运动向量信息的计算周期相同,则可以在码流中写入一个第二参数,用该第二参数指示该点云序列中各解码单元的运动向量信息的计算周期。例如第二参数指示每隔R个解码单元,计算一次运动向量信息。
在一些实施例中,若点云序列中不同的解码单元对应的运动向量信息的计算周期不完全相同,则可以在码流中写入多个第二参数,用这多个第二参数指示该点云序列中各解码单元对应的运动向量信息的计算周期。例如,在码流中写入3个第二参数,其中第一个第二参数指示间隔R1个解码单元计算一次运动向量信息,第二个第二参数指示间隔R2个解码单元计算一次运动向量信息,第三个第二参数指示间隔R3个解码单元计算一次运动向量信息。
由上述可知,无论第二参数以何种形式指示运动向量信息的计算周期,则针对当前解码单元,可以根据码流解码出的第二参数,确定出当前解码单元对应的运动向量信息计算周期。例如,当前解码单元为当前点云帧,第二参数指示每间隔4个点云帧,计算一次运动向量信息,假设当前解码单元为解码顺序中的第6个点云帧,解码顺序中的第0个点云帧计算一次运动向量信息,第5个点云帧计算一次运动向量信息,第10个点云帧计算一次运动向量信息,其中,第0个点云帧到第4个点云帧可以理解为运动向量信息的第一个计算周期,第5个点云帧到第9个点云帧可以理解为运动向量信息的第二个计算周期,而该当前解码单元处于第二个计算周期内,进而将第二个计算周期,确定为当前解码单元对应的运动向量信息计算周期。
解码端根据上述步骤,确定出当前解码单元对应的运动向量信息计算周期后,根据该运动向量信息计算周期,确定当前解码单元的运动向量信息。
本申请实施例对解码端根据当前解码单元对应的运动向量信息计算周期,确定当前解码单元的运动向量信息的具体方式不做限制。
在一些实施例中,编解码两端约定,该运动向量信息计算周期内的解码单元的运动向量信息为默认值1,进而解码端将该默认值1确定为当前解码单元的运动向量信息。
在一些实施例中,编解码两端约定,采用预设的计算方法,计算出该运动向量信息计算周期内的解码单元的运动向量信息。例如,当前解码单元为当前点云帧的一个区域,则可以根据该当前点云帧中,当前解码单元周围已解码区域的运动向量信息,确定当前解码单元的运动向量信息。
在一些实施例中,编码端将一个运动向量信息计算周期内的第一个解码单元的运动向量信息写入码流,而该运动向量信息计算周期内的其他解码单元的运动向量信息未写入码流。这样,解码端可以根据当前解码单元在当前解码单元对应的运动向量信息计算周期中的位置,确定当前解码单元的运动向量信息。
示例1,若当前解码单元为运动向量信息计算周期内的第一个解码单元,则解码点云码流,得到当前点解码单元的运动向量信息。
示例2,若当前解码单元为运动向量信息计算周期内的非第一个解码单元,则根据已解码信息或默认值,确定当前解码单元的运动向量信息。
在该实施例中,编码端将第二参数,以及每个运动向量信息计算周期内的第一解码单元的运动向量信息写入点云码流,而未将运动向量信息计算周期内的其他解码单元的运动向量信息写入点云码流。这样,解码端在确定当前解码单元对应的运动向量信息计算周期后,可以根据当前解码单元是否为该运动向量信息计算周期内的第一个解码单元,来确定当前解码单元的运动向量信息。
继续参照上述示例,假设当前解码单元对应的运动向量信息计算周期为第5个点云帧到第9个点云帧,若当前解码单元为解码顺序中的第5个点云帧,则解码端直接从码流中,解码出当前解码单元的运动向量信息。若当前解码单元不是第5个点云帧,例如为第6个点云帧,则解码端将默认值确定为当前解码单元的运动向量信息,或者根据已解码信息,确定当前解码单元的运动向量信息。
本申请实施对上述示例2中,根据已解码信息,确定当前解码单元的运动向量信息的具体实现方式不做限制。
在一种可能的实现方式中,根据当前解码单元对应的运动向量信息计算周期内的第一个解码单元的运动向量信息,确定当前解码单元的运动向量信息。例如,将该运动向量信息计算周期内的第一个解码单元的运动向量信息,确定为当前解码单元的运动向量信息,或者,对该运动向量信息计算周期内的第一个解码单元的运动向量信息进行处理,得到当前解码单元的运动向量信息。
在一种可能的实现方式中,根据如下步骤21,确定当前解码单元的运动向量信息:
步骤21、根据S个解码单元的运动向量信息,确定当前解码单元的运动向量信息,S个解码单元为解码顺序中位于所述当前解码单元之前的S个已解码的解码单元,S为正整数。
本申请实施例对上述S个解码单元的具体选择方式不做限制。
在一些实施例中,上述S个解码单元在解码顺序中依次相邻,中间没有间隔。
在一些实施例中,上述S个解码单元可以是解码顺序中,位于当前解码单元之前的任意S个解码单元,即这S个解码单元可以相邻,也可以不完全相邻,本申请实施例对此不做限制。
由于相邻点云帧之间的内容具体相关性,在该实现方式中,解码端从已解码信息中,获取解码顺序中位于当前解码单元之前的S个解码单元,根据这S个解码单元的运动向量信息,确定当前解码单元的运动向量信息。
其中,步骤21中根据S个解码单元的运动向量信息,确定当前解码单元的运动向量信息的实现方式至少包括如下几种示例所示:
第一种示例,若S等于1,则将解码顺序中,位于当前解码单元之前的一个解码单元的运动向量信息,确定为当前解码单元的运动向量信息。例如,当前解码单元为解码顺序中的第6个点云帧,则将解码顺序中第5个点云帧的运动向量信息,确定为该当前解码单元的运动向量信息。
第二种示例中,若S大于1,则对S个解码单元的运动向量信息进行预设处理,并将处理结果确定为当前解码单元的运动向量信息。
例如,将S个解码单元的运动向量信息的平均值,确定为当前解码单元的运动向量信息。
再例如,将S个解码单元的运动向量信息的加权平均值,确定为当前解码单元的运动向量信息。可选的,这S个解码单元中,在解码顺序中距离当前解码单元越近,则权重越大,距离当前解码单元越远,则权重越小。
解码端除了通过上述方式1,确定出当前解码单元的运动向量信息外,还可以根据如下方式2,确定出当前解码单元的运动向量信息。
方式2,若第二参数指示每R个解码单元,计算一次运动向量信息,则上述S101-B的实现方式至少包括如下两种示例:
示例1,若当前解码单元为解码顺序中的第NR个解码单元,则解码点云码流,得到当前解码单元的运动向量信息,R、N均为正整数。
示例2,若当前解码单元为解码顺序中的非第NR个解码单元,则根据已解码信息或默认值,确定当前解码单元的运动向量信息。
在该方式2中,点云系列中每一个解码单元对应的运动向量信息计算周期相同,例如每R个解码单元计算一次运动向量信息。这样,编码端将解码顺序中的编号为0,以及编号为R的整数倍的解码单元(即第NR个解码单元)的运动向量信息,写入码流,而未将编号不为R的整数倍的解码单元(即非第NR个解码单元)的运动向量信息写入码流,进而降低码流负担。对应的,解码端对当前解码单元进行解码时,判断当前解码单元是否为解码顺序中的第NR个解码单元,即判断当前解码单元在解码顺序中的序号是否为R的整数倍。若解码端确定当前解码单元为解码顺序中的第NR个解码单元,则从码流中,解码出当前解码单元的运动向量信息。若当前解码单元不是解码顺序中的第NR个解码单元,则将默认值确定为当前解码单元的运动向量信息,或者根据已解码信息,确定当前解码单元的运动向量信息。
在该实现方式2中,若第二参数指示每R个解码单元,计算一次运动向量信息时,则解码端每R个解码单元,从码流中解码一次运动向量信息,可以降低解码端的解码次数。例如,点云序列包括1000个点云帧,假设将一个点云帧作为一个解码单元,这样解码端的解码次数为1000/R,而不是1000次,大大降低了解码次数,减轻解码端的解码负担,提升解码效率。
在该方式2中,若当前解码单元为解码顺序中的非第NR个解码单元,则根据已解码信息,确定当前解码单元的运动向量信息的具体过程可以参照上述步骤21和步骤22的描述,在此不再赘述。
解码端除了根据上述方式1和方式2所示的方法,确定出当前解码单元的运动向量信息外,还可以根据如下方式3,确定出当前解码单元的运动向量信息。
方式3,解码端根据解码单元的分类信息的变化程度,确定运动向量信息。即,解码端根据如下步骤1和步骤2,确定出当前解码单元的运动向量信息:
步骤1、根据第一参数,确定分类信息的变化程度;
步骤2、根据变化程度,确定当前解码单元的运动向量信息。
本申请实施例中,若不同的解码单元的分类信息的变化不大时,则表示不同的解码单元的运动向量信息变化可能也不大。相反的,若不同的解码单元的分类信息的变化较大时,则表示不同的解码单元的运动向量信息变化可能也较大。因此,可以根据不同解码单元的分类信息的变化程度,来确定当前解码单元的运动向量信息。
本申请实施例对上述步骤1中根据第一参数,确定点云的分类信息的变化程度的具体实现方式不做限制。
在一些实施例中,根据第一参数,确定出多个解码单元的分类信息,根据多个解码单元的分类信息,确定分类信息的变化程度。例如,这多个解码单元的分类信息相差较大时,则表示分类信息的变化程度较大,若这多个解码单元的分类信息相差较小时,则表示分类信息的变化程度较小。
在一些实施例中,根据当前解码单元的分类信息,与当前解码单元的参考解码单元的分类信息,确定分类信息的变化程度。
示例性的,根据所述第一参数,确定当前解码单元的分类信息,具体过程可以参照上述实施例的描述,在此不再赘述。接着,确定当前解码单元的分类信息,与当前解码单元的参考解码单元的分类信息之间的变化程度,例如,将当前解码单元的分类信息,与当前解码单元的参考解码单元的分类信息的差值的绝对值,确定为分类信息的变化程度。
根据上述方法,确定出分类信息的变化程度后,根据该分类信息的变化程度,确定当前解码单元的运动向量信息。
例如,若分类信息的变化程度小于或等于第一预设值,则将默认值、或者将解码顺序中当前解码单元的前一解码单元的运动向量信息,确定为当前解码单元的运动向量信息。
再例如,若变化程度大于第一预设值,则解码点云码流,得到当前解码单元的运动向量信息。
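上述方式3的判断逻辑可以用如下示意性的 Python 片段概括：以分类信息差值的绝对值作为变化程度，变化不大时沿用默认值或前一解码单元的运动向量信息，变化较大时再从码流中解码。其中 decode_mv_fn、first_preset 均为说明用的假设命名。

```python
def mv_by_classification_change(cur_cls: float, ref_cls: float,
                                prev_mv, default_mv, decode_mv_fn,
                                first_preset: float):
    """方式3的示意：根据分类信息的变化程度确定当前解码单元的运动向量信息。"""
    change = abs(cur_cls - ref_cls)            # 分类信息的变化程度
    if change <= first_preset:
        # 变化程度小：沿用前一解码单元的运动向量信息（或默认值）
        return prev_mv if prev_mv is not None else default_mv
    # 变化程度大：从码流中解码当前解码单元的运动向量信息
    return decode_mv_fn()
```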
本申请实施例中,根据上述方式,可以确定出当前解码单元的运动向量信息。
其中,运动向量信息可以理解为解码端进行帧间预测所需的运动信息。本申请实施例对运动向量信息的具体表现形式不做限制。
在一些实施例中,运动向量信息包括旋转矩阵和偏移向量中的至少一个。其中,旋转矩阵描述解码单元与参考解码单元的三维旋转,偏移向量描述解码单元与参考解码单元的坐标原点在三个方向上的偏移量。
在一种示例中,当旋转矩阵为
$$\begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}$$
则表示当前解码单元与参考解码单元相比没有发生旋转。
在一种示例中,当偏移向量为
$$\begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix}$$
则表示当前解码单元与参考解码单元的坐标原点没有发生偏移。
若当前解码单元与参考解码单元相比,既没有发生旋转也没有发生偏移,则两个解码单元之间的运动向量记为零运动向量。
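为帮助理解旋转矩阵与偏移向量在运动补偿中的作用，下面给出一段示意性的 Python 代码：按 p' = R·p + t 将参考解码单元中的点变换到当前解码单元的位置。单位旋转矩阵加零偏移向量即对应上文所说的零运动向量，补偿结果与参考点云相同。

```python
import numpy as np

def motion_compensate(ref_points: np.ndarray, R: np.ndarray, t: np.ndarray) -> np.ndarray:
    """ref_points：(N, 3) 参考解码单元中点的几何坐标；
    R：(3, 3) 旋转矩阵；t：(3,) 偏移向量。返回补偿后的预测点云（示意实现）。"""
    return ref_points @ R.T + t

# 零运动向量的情形：单位旋转矩阵 + 零偏移向量
R_identity = np.eye(3)
t_zero = np.zeros(3)
```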
在一些实施例中,若运动向量信息包括旋转矩阵和偏移向量中的至少一个,对应的,第二参数motion_period包括第三子参数rotation_matrix_period和第四子参数translation_vector_period中的至少一个。
其中,第三子参数rotation_matrix_period用于指示旋转矩阵的计算周期,第四子参数translation_vector_period用 于指示偏移向量的计算周期。
上述第三子参数和第四子参数可以分别独立赋值。
可选的,上述旋转矩阵的计算周期与偏移向量的计算周期可以相同,也可以不同,本申请实施例对此不做限制。
在一些实施例中,若当前解码单元为解码顺序中的第一个解码单元,即解码序号为0时,则编码端将当前解码单元的分类信息和运动向量信息中的至少一个,写入点云码流中。这样,解码端可以解码点云码流,直接得到当前解码单元的分类信息和运动向量信息中的至少一个。
本申请实施例中,解码端根据上述步骤,确定出当前解码单元的分类信息和运动向量信息中的至少一个后,执行如下S102的步骤。
S102、根据当前解码单元的分类信息和运动向量信息中的至少一个,对当前解码单元进行解码。
由于不同的物体的运动不同,因此,为了提高解码准确性,则根据当前解码单元的分类信息,确定出当前解码单元中点云的类别,对不同类别的点云采用不同的运动向量信息进行帧间预测。例如,以车载雷达扫描的点云数据为例,点云可以分为道路点和物体点,道路点和物体点的运动向量信息不同,
本申请实施例对上述S102中,根据当前解码单元的分类信息和运动向量信息中的至少一个,对当前解码单元进行解码的具体过程不做限制。
在一些实施例中,若根据上述方法,确定出当前解码单元的分类信息,但是未确定出当前解码单元的运动向量信息,则可以根据分类信息,将当前解码单元中的点云划分为多个类别。针对每个类别赋予不同的运动向量信息,其中为不同类别赋予的运动向量信息可以为不同类别对应的预设值,或者为根据类别计算得到的值,本申请实施例对此不做限制。
在一些实施例中,若根据上述方法,确定出当前解码单元的运动向量信息,而未确定出当前解码单元的分类信息,此时,解码端可以根据自行确定出当前解码单元的分类信息,例如根据当前解码单元周围已解码单元的解码信息,确定当前解码单元的分类信息。接着,根据分类信息,将当前解码单元中的点云划分为多个类别。根据当前解码单元的运动向量信息,确定当前解码单元中每一个类别的点云的运动向量信息。例如,当前解码单元包括第一类点云和第二类点云,则可以将上述确定的运动向量信息,确定为第一类点云的运动向量信息,确定第二类点云的运动向量信息为预设值,例如为零向量。
在一些实施例中,若根据上述步骤,确定出当前解码单元的分类信息,以及当前解码单元的运动向量信息,此时上述S102包括如下步骤:
S102-A、根据当前解码单元的分类信息,将当前解码单元中的点云划分为P类点云,P为大于1的正整数。
在该实施例中,解码端通过当前解码单元的分类信息,将当前解码单元中的点云划分为P类点云,根据当前解码单元的运动向量信息,确定P类点云对应的运动向量信息,进而根据P类点云对应的运动向量信息,对当前解码单元进行解码。即本申请实施例,针对当前解码单元中不同类别的点云,采用不同的运动向量信息进行解码,进而提高了解码的准确性。
本申请实施例对上述S102-A、根据当前解码单元的分类信息,将当前解码单元中的点云划分为P类点云的具体类型不做限制。
在一些实施例中,当前解码单元的分类信息可以为类别标识,例如当前解码单元中每一个点包括一个类别标识,这样可以根据类别标识,将当前解码单元中的点云,划分为P类点云。
在一些实施例中,当前解码单元的分类信息包括第一高度阈值和第二高度阈值,且第一高度阈值大于第二高度阈值,此时,上述S102-A包括如下步骤:
S102-A1、根据第一高度阈值和第二高度阈值,将当前解码单元中点云划分为P类点云。
例如,将当前解码单元中高度值大于第一高度阈值的点云,划分为一类点云,将当前解码单元中高度值位于第一高度阈值和第二高度阈值之间的点云,划分为一类点云,将当前解码单元中高度值小于第二高度阈值之间的点云,划分为一类点云。
再例如,将当前解码单元中高度值,小于或等于第一高度阈值且大于或等于第二高度阈值的点云,划分为第一类点云;将当前解码单元中高度值,大于第一高度阈值或者小于第二高度阈值的点云,划分为第二类点云。
S102-B、根据当前解码单元的运动向量信息,确定P类点云对应的运动向量信息。
根据上述步骤,将当前解码单元中的点云划分为P类点云后,根据当前解码单元的运动向量信息,确定P类点云对应的运动向量信息。例如,将当前解码单元的运动向量信息,确定为P类点云中一类点云的运动向量信息,P类点云中的其他类点云的运动向量信息可以为预设值。
在一种示例中,P类点云包括上述第一类点云和第二类点云,则可以将当前解码单元的运动向量信息,确定为第二类点云的运动向量信息,而第一类点云的运动向量信息为预设值,例如为零向量。
以车载点云为例,第一类点云可以理解为道路点云,第二类点云可以理解为非道路点云,由于道路的变化不大,因此,非道路点云为研究重点,因此,将当前解码单元的运动向量信息,确定为非道路点的运动向量信息,而确定道路点则被预测为静态,即零运动,即道路点的运动向量信息为零向量。
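下面用一段示意性的 Python 片段展示“按类别使用不同运动向量信息”的思路：第一类点云（道路点）按零运动处理，第二类点云（非道路点）使用当前解码单元的运动向量信息做补偿。高度阈值分类方式与前文的示意代码一致，此处写成自包含实现，仅作说明。

```python
import numpy as np

def compensate_by_class(ref_points: np.ndarray, heights: np.ndarray,
                        top_thr: float, bottom_thr: float,
                        R: np.ndarray, t: np.ndarray) -> np.ndarray:
    """对参考解码单元按点的类别分别做运动补偿，得到当前解码单元的预测信息（示意实现）。"""
    pred = ref_points.astype(float)
    road = (heights >= bottom_thr) & (heights <= top_thr)   # 第一类点云：道路点，零运动
    obj = ~road                                             # 第二类点云：非道路点
    pred[obj] = ref_points[obj] @ R.T + t                   # 仅对非道路点应用运动向量信息
    return pred
```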
S102-C、根据P类点云对应的运动向量信息,对当前解码单元进行解码。
根据上述方法,确定出当前解码单元中,P类点云对应的运动向量信息后,根据P类点云对应的运动向量信息,对当前解码单元进行解码。
本申请实施例对根据P类点云对应的运动向量信息,对当前解码单元进行解码的具体实现方式不做限制。
在一些实施例中,解码端可以确定当前解码单元的参考解码单元;根据P类点云的运动向量信息,对参考解码单元进行运动补偿,得到当前解码单元的预测信息;根据预测信息,对当前解码单元的几何信息和属性信息中的至少一 个进行解码。
在一种示例中,以根据预测信息,对当前解码单元的几何信息进行解码为例,该预测信息可以理解为当前解码单元的预测单元。这样可以根据预测单元中空间占用情况,预测当前解码单元的空间占用情况,进而根据预测的当前解码单元的空间占用情况,对当前解码单元的几何码流进行解码,得到当前解码单元的几何信息。
在另一种示例中,以根据预测信息,对当前解码单元的属性信息进行解码为例,该预测信息可以理解为当前解码单元的预测单元。这样针对当前解码单元中的每一个点,在该预测单元中获取该点的至少一个参考点,根据这至少一个参考点的属性信息,预测该点的属性信息,得到该点的属性预测值。接着,解码属性码流,得到该点的属性残差值,根据该点的属性预测值和属性残差值,确定出该点的属性重建值。参照上述方法,可以确定出当前解码单元中的每一个点的属性重建值,进而得到当前解码单元的属性重建值。
需要说明的是,解码端还可以采用的方法,根据当前解码单元的分类信息和运动向量信息中的至少一个,对当前解码单元进行解码,本申请实施例对此不做限制。
本申请实施例提供的点云解码方法,解码端解码点云码流,确定当前解码单元的分类信息和运动向量信息中的至少一个,其中,分类信息是基于第一参数确定的,运动向量信息是基于第二参数确定的,第一参数用于指示分类信息的计算周期,第二参数用于指示运动向量信息的计算周期。接着,根据当前解码单元的分类信息和运动向量信息中的至少一个,对当前解码单元进行解码。即本申请实施例,周期性的计算分类信息和运动向量信息,相比于针对每一个解码单元计算一次分类信息和运动向量信息,本申请实施例大大降低了分类信息和运动向量信息的计算次数,降低了编解码的处理时间,提升了编解码效率。
上文以解码端为例,对本申请实施例提供的点云解码方法进行详细介绍,下面以编码端为例,对本申请实施例提供的点云编码方法进行介绍。
图6为本申请一实施例提供的点云编码方法流程示意图。本申请实施例的点云编码方法可以由上述图1或图2所示的点云编码设备完成。
如图6所示,本申请实施例的点云解码方法包括:
S201、确定第一参数和第二参数中的至少一个。
其中,第一参数用于指示分类信息的计算周期,第二参数用于指示运动向量信息的计算周期。
由上述描述可知,连续采集的点云序列中临近帧具有较高相关性,因此可以引入帧间预测提升点云编码效率。
其中,帧间预测主要包括运动估计、运动补偿等步骤。在一些实施例中,在运动估计步骤中,相邻两帧的空间运动偏移向量被计算得出并写入码流。在运动补偿步骤中,计算得到的运动向量进一步被用于计算点云的空间偏移,并使用偏移后的点云帧作为参考,进一步提升当前帧的编码效率。
本申请实施例对当前编码单元的运动向量信息的具体内容不做限制,可以为运动估计、运动补偿等步骤涉及的运动信息。
例如,运动向量信息可以为运动估计中,相邻两帧的空间运动偏移向量,即运动矢量。
再例如,运动向量信息还可以为运动补偿中,相邻两帧之间的运动估计ME(Motion Estimation)。
在实际场景中,不同物体的运动可能不同,示例性的,以移动车辆上的激光雷达传感器捕获的点云数据为例,在该点云数据中,道路和物体通常具有不同的运动。由于道路和雷达传感器之间的距离相对恒定,并且道路从一个车辆位置到下一个车辆位置发生微小变化,因此表示道路的点相对于雷达传感器位置的移动很小。相比之下,建筑物、路标、植被或其他车辆等物体具有较大的运动。由于道路和物体点具有不同的运动,因此将点云数据划分为道路和物体,将提高全局运动估计和补偿的准确性,从而提高压缩效率。也就是说,对于使用帧间预测的点云数据,为了提高帧间预测的准确性,提升压缩效率,针对一个编码单元,需要对该编码单元中的点云进行分类,例如将该编码单元中的点云划分为道路点云和非道路点云。
本申请实施例,为了避免对每一个编码单元计算一次分类信息和运动向量信息,则设置第一参数和第二参数中的至少一个。其中,第一参数用于指示分类信息的计算周期,第二参数用于指示运动向量信息的计算周期。这样,编码端可以根据第一参数指示的分类信息计算周期,周期性的计算分类信息,和/或根据第二参数指示的运动向量信息计算周期,周期性的计算运动向量信息,进而减少了分类信息和/或运动向量信息的计算次数,提升了编码效率。
在一些实施例中,上述分类信息的计算周期可以理解为每隔至少一个编码单元,计算一次分类信息,或每隔至少一个点云帧,计算一次分类信息。
在一些实施例中,上述运动向量信息的计算周期可以理解为每隔至少一个编码单元,计算一次运动向量信息,或每隔至少一个点云帧,计算一次运动向量信息。
可选的,上述第一参数可以用classification_period表示。
可选的，上述第二参数可以用motion_period表示。
可选地,上述第一参数和第二参数可以有不同的表示形式,例如,可设置为classification_period_log2和motion_period_log2,分别表示分类信息的计算周期取2对数和运动向量的计算周期取2对数,仍属于本申请的保护范围内。
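对于将计算周期以取 2 的对数形式（classification_period_log2、motion_period_log2）表示的变型，可以按如下示意方式在实际周期与语法元素取值之间换算；此时周期被限定为 2 的整数次幂，代码仅作说明。

```python
def period_from_log2(period_log2: int) -> int:
    """由取对数形式的语法元素恢复实际计算周期（示意）：period = 2 ** period_log2。"""
    return 1 << period_log2        # 例如 period_log2 = 3 对应每 8 个编码单元计算一次

def log2_from_period(period: int) -> int:
    """编码端把实际周期转换为取对数形式（示意），要求周期为 2 的整数次幂。"""
    assert period > 0 and period & (period - 1) == 0, "周期需为 2 的整数次幂"
    return period.bit_length() - 1
```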
在一些实施例中,上述第一参数和第二参数中的至少一个为预设值。
在一些实施例中,上述第一参数和第二参数中的至少一个为用户输入的参数。
S202、根据第一参数和第二参数中的至少一个,确定当前编码单元的分类信息和运动向量信息中的至少一个。
在点云编码过程中,可以将点云数据划分为至少一个编码单元,每一个编码单元的编码过程独立,且每一个编码单元的解码过程基本一致。为了便于描述,本申请实施例以当前正在编码的编码单元,即当前编码单元为例进行说明。
本申请实施例对当前编码单元的具体大小不做限制,可以根据实际需要确定。
在一些实施例中,当前编码单元为当前点云帧,即可以将一个点云帧作为一个编码单元进行编码。
在一些实施例中,当前编码单元为当前点云帧的部分区域,例如将当前点云帧划分为多个区域,将一个区域作为一个编码单元,进行单独编码。
本申请实施例对将当前点云帧划分为多个区域的具体方式不做限制。
在一种示例中,将当前点云帧划分为多个点云片,这多个点云片的大小可以相同,也可以不完全相同,将一个点云片作为一个编码单元,进行单独编码。
在另一种示例中,将当前点云帧划分为多个点云块,这多个点云块的大小可以相同,也可以不完全相同,将一个点云块作为一个编码单元,进行单独编码。
本申请实施例中,上述S202中根据第一参数和第二参数中的至少一个,确定当前编码单元的分类信息和运动向量信息中的至少一个的具体实现方式包括但不限于如下几种:
方式一,编码端通过如下步骤S202-A,确定出分类信息和运动向量信息中的至少一个:
S202-A、根据第一参数,确定当前编码单元的分类信息,和/或根据第二参数,确定当前编码单元的运动向量信息。
需要说明的是,上述第一参数和第二参数可以单独使用,在一种示例中,编码端可以根据第一参数,确定当前编码单元的分类信息,根据已有方法得到当前编码单元的运动向量信息。在另一种示例中,编码端可以根据第二参数,确定当前编码单元的运动向量信息,根据已有方法得到当前编码单元的分类信息。在又一种示例中,编码端可以根据第一参数确定当前编码单元的分类信息,以及根据第二参数确定当前编码单元的运动向量信息。
在该方式一中,编码端根据第一参数,确定编码单元的分类信息,而不是逐一确定每个编码单元中的点云的分类信息。和/或,编码端根据第二参数,确定编码单元的运动向量信息,而不是逐一确定每一个编码单元的运动向量信息。
下面对上述S202-A中,编码端根据第一参数,确定当前编码单元的分类信息的具体过程进行介绍。
上述S202-A中,编码端根据第一参数,确定当前编码单元的分类信息的具体实现方式包括但不限于如下几种:
方式1,上述S202-A包括如下S202-A-11和S202-A-12的步骤:
S202-A-11、根据第一参数,确定当前编码单元对应的分类信息计算周期;
S202-A-12、根据分类信息计算周期,确定当前编码单元的分类信息。
本申请实施例中,点云序列中不同的编码单元对应的分类信息计算周期可以相同,也可以不同,本申请实施例对此不做限制。
在一些实施例中,第一参数指示该点云序列中各编码单元的分类信息的计算周期。例如第一参数指示每隔K个编码单元,计算一次分类信息。
在一些实施例中,点云序列中不同的编码单元对应的分类信息的计算周期不完全相同,则可以用多个第一参数指示该点云序列中各编码单元的分类信息的计算周期。例如,确定3个第一参数,其中第一个第一参数指示间隔K1个编码单元计算一次分类信息,第二个第一参数指示间隔K2个编码单元计算一次分类信息,第三个第一参数指示间隔K3个编码单元计算一次分类信息。
由上述可知,无论第一参数以何种形式指示分类信息的计算周期,则针对当前编码单元,可以根据第一参数,确定出当前编码单元对应的分类信息计算周期。例如,当前编码单元为当前点云帧,第一参数指示每间隔4个点云帧,计算一次分类信息,假设当前编码单元为编码顺序中的第6个点云帧,编码顺序中的第0个点云帧计算一次分类信息,第5个点云帧计算一次分类信息,第10个点云帧计算一次分类信息,其中,第0个点云帧到第4个点云帧可以理解为分类信息的第一个计算周期,第5个点云帧到第9个点云帧可以理解为分类信息的第二个计算周期,而该当前编码单元处于第二个计算周期内,进而将第二个计算周期,确定为当前编码单元对应的分类信息计算周期。
编码端根据上述步骤,确定出当前编码单元对应的分类信息计算周期后,根据该分类信息计算周期,确定当前编码单元的分类信息。
本申请实施例对编码端根据当前编码单元对应的分类信息计算周期,确定当前编码单元的分类信息的具体方式不做限制。
在一些实施例中,编解码两端约定,该分类信息计算周期内的编码单元的分类信息为默认值1,进而编码端将该默认值1确定为当前编码单元的分类信息。
在一些实施例中,编解码两端约定,采用预设的计算方法,计算出该分类信息计算周期内的编码单元的分类信息。例如,当前编码单元为当前点云帧的一个区域,则可以根据该当前点云帧中,当前编码单元周围已编码的点云的分类信息,确定当前编码单元的分类信息。
在一些实施例中,编码端可以根据当前编码单元在当前编码单元对应的分类信息计算周期中的位置,确定当前编码单元的分类信息。
示例1,若当前编码单元为分类信息计算周期内的第一个编码单元,则对当前点编码单元中点云的类别进行识别,得到所述当前点编码单元的分类信息。
示例2,若当前编码单元为分类信息计算周期内的非第一个编码单元,则根据已编码信息或默认值,确定所述当前编码单元的分类信息。
在该实施例中,编码端在确定当前编码单元对应的分类信息计算周期后,可以根据当前编码单元是否为该分类信息计算周期内的第一个编码单元,来确定当前编码单元的分类信息。
继续参照上述示例,假设当前编码单元对应的分类信息计算周期为第5个点云帧到第9个点云帧,若当前编码单元为编码顺序中的第5个点云帧,则对当前点编码单元中点云的类别进行识别,得到所述当前点编码单元的分类信息。若当前编码单元不是第5个点云帧,例如为第6个点云帧,则编码端将默认值确定为当前编码单元的分类信息,或者 根据已编码信息,确定当前编码单元的分类信息。
本申请实施对上述示例2中,根据已编码信息,确定当前编码单元的分类信息的具体实现方式不做限制。
在一种可能的实现方式中,根据当前编码单元对应的分类信息计算周期内的第一个编码单元的分类信息,确定当前编码单元的分类信息。例如,将该分类信息计算周期内的第一个编码单元的分类信息,确定为当前编码单元的分类信息,或者,对该分类信息计算周期内的第一个编码单元的分类信息进行处理,得到当前编码单元的分类信息。
在一种可能的实现方式中,根据如下步骤11,确定当前编码单元的分类信息:
步骤11、根据M个编码单元的分类信息,确定当前编码单元的分类信息,M个编码单元为编码顺序中位于当前编码单元之前的M个已编码的编码单元,M为正整数。
本申请实施例对上述M个编码单元的具体选择方式不做限制。
在一些实施例中,上述M个编码单元在编码顺序中依次相邻,中间没有间隔。
在一些实施例中,上述M个编码单元可以是编码顺序中,位于当前编码单元之前的任意M个编码单元,即这M个编码单元可以相邻,也可以不完全相邻,本申请实施例对此不做限制。
由于相邻点云帧之间的内容具体相关性,在该实现方式中,编码端从已编码信息中,获取编码顺序中位于当前编码单元之前的M个编码单元,根据这M个编码单元的分类信息,确定当前编码单元的分类信息。
其中,步骤12中根据M个编码单元的分类信息,确定当前编码单元的分类信息的实现方式至少包括如下几种示例所示:
第一种示例,若M等于1,则将编码顺序中,位于当前编码单元之前的一个编码单元的分类信息,确定为当前编码单元的分类信息。例如,当前编码单元为编码顺序中的第6个点云帧,则将编码顺序中第5个点云帧的分类信息,确定为该当前编码单元的分类信息。
第二种示例中,若M大于1,则对M个编码单元的分类信息进行预设处理,并将处理结果确定为当前编码单元的分类信息。
例如,将M个编码单元的分类信息的平均值,确定为当前编码单元的分类信息。
再例如,将M个编码单元的分类信息的加权平均值,确定为当前编码单元的分类信息。可选的,这M个编码单元中,在编码顺序中距离当前编码单元越近,则权重越大,距离当前编码单元越远,则权重越小。
在上述方式1中的一些实施例中,编码端将第一参数写入点云码流,且若当前编码单元为分类信息计算周期内的第一个编码单元时,则将当前编码单元的分类信息写入点云码流。
在上述方式1中的一些实施例中,将第一参数写入点云码流,且若当前编码单元为分类信息计算周期内的非第一个编码单元时,则跳过将当前编码单元的分类信息写入点云码流。
也就是说,在该方式1中,编码端还可以将第一参数写入码流,同时,将分类信息计算周期内的第一个编码单元的分类信息写入码流,而对于位于分类信息计算周期中间的编码单元(即非计算周期内的第一编码单元)的分类信息,则不写入码流,这样可以减少码流的负担。
在一些实施例中,在该方式1中,编码端将当前编码单元的分类信息写入点云码流,且跳过将第一参数写入点云码流。
编码端除了通过上述方式1,确定出当前编码单元的分类信息外,还可以根据如下方式2,确定出当前编码单元的分类信息。
方式2,若第一参数指示每K个编码单元,计算一次分类信息,则上述S202-A的实现方式至少包括如下两种示例:
示例1,若当前编码单元为编码顺序中的第NK个编码单元,则对当前点编码单元中点云的类别进行识别,得到当前点编码单元的分类信息,K、N均为正整数。
示例2,若当前编码单元为编码顺序中的非第NK个编码单元,则根据已编码信息或默认值,确定当前编码单元的分类信息。
在该方式2的一种实施例中,将第一参数写入点云码流,且若当前编码单元为编码顺序中的第NK个编码单元时,则将当前编码单元的分类信息写入码流。
在该方式2的另一种实施例中,将第一参数写入点云码流,且若当前编码单元为编码顺序中的非第NK个编码单元时,则跳过将当前编码单元的分类信息写入码流。
在该实现方式2中,若第一参数指示每K个编码单元,计算一次分类信息时,则编码端将编号为0或K的整数倍的编码单元的分类信息写入码流,对于其他编码单元,不将分类信息写入码流。例如,点云序列包括1000个点云帧,假设将一个点云帧作为一个编码单元,这样编码端的编码次数为1000/K,而不是1000次,大大降低了编码次数,减轻编码端的编码负担,提升编码效率。
在该方式2的另一种实施例中,可以将当前编码单元的分类信息写入点云码流,且跳过将第一参数写入所述点云码流。
在该方式2中,若当前编码单元为编码顺序中的非第NK个编码单元,则根据已编码信息,确定当前编码单元的分类信息的具体过程可以参照上述步骤11和步骤12的描述,在此不再赘述。
本申请实施例中,根据上述方式,可以确定出当前编码单元的分类信息。
其中,分类信息可以理解为将点云划分为不同类别所需的信息。本申请实施例对分类信息的具体表现形式不做限制。
在一些实施例中,分类信息包括第一高度阈值和第二高度阈值中的至少一个,第一高度阈值和第二高度阈值用于当前编码单元中点云的分类。
可选的,上述第一高度阈值和第二高度阈值中的至少一个为预设值。
可选的,上述第一高度阈值和第二高度阈值中的至少一个为统计值。例如,图5所示,使用直方图对点云中点的高度值进行统计,直方图的横轴是点云中点的高度值,直方图的纵轴为在该高度值下的点数。图5是以雷达点云为例进行统计,以雷达所在高度为高度零点,因此大部分点的高度值为负值。接着,获取直方图的峰值对应的高度值,并计算高度值的标准差,则以峰值对应高度为中心,高于中心a倍(例如1.5倍)标准差的阈值记为第一高度阈值Top_thr,低于中心b倍(例如1.5倍)标准差的阈值记为第二高度阈值Bottom_thr。
上述第一高度阈值和第二高度阈值将点云划分为不同的类别。例如,将点云中高度值位于第一高度阈值和第二高度阈值之间的点云记为第一类点云,将高度值大于第一高度阈值,以及高度值小于第二高度阈值的点云记为第二类点云。
在一些实施例中,若分类信息包括第一高度阈值和第二高度阈值中的至少一个,对应的,第一参数classification_period可以包括第一子参数top_threshold_period和第二子参数bottom_threshold_period中的至少一个;
其中,第一子参数top_threshold_period用于指示第一高度阈值的计算周期,第二子参数bottom_threshold_period用于指示第二高度阈值的计算周期。
上述第一子参数和第二子参数可以分别独立赋值。
可选的,上述第一高度阈值的计算周期与第二高度阈值的计算周期可以相同,也可以不同,本申请实施例对此不做限制。
上文对S202-A中根据第一参数,确定当前编码单元的分类信息的具体过程进行介绍,下面对S202-A中根据第二参数,确定当前编码单元的运动向量信息的具体实现过程进行介绍。
上述S202-A中,编码端根据第二参数,确定当前编码单元的运动向量信息的具体实现方式包括但不限于如下几种:
方式1,上述S202-A包括如下S202-A-21和S202-A-22的步骤:
S202-A-21、根据第二参数,确定当前编码单元对应的运动向量信息计算周期;
S202-A-22、根据运动向量信息计算周期,确定当前编码单元的运动向量信息。
本申请实施例中,点云序列中不同的编码单元对应的运动向量信息计算周期可以相同,也可以不同,本申请实施例对此不做限制。
在一些实施例中,第二参数指示该点云序列中各编码单元的运动向量信息的计算周期。例如第二参数指示每隔R个编码单元,计算一次运动向量信息。
在一些实施例中,通过多个第二参数指示该点云序列中各编码单元对应的运动向量信息的计算周期。例如,确定3个第二参数,其中第一个第二参数指示间隔R1个编码单元计算一次运动向量信息,第二个第二参数指示间隔R2个编码单元计算一次运动向量信息,第三个第二参数指示间隔R3个编码单元计算一次运动向量信息。
由上述可知,无论第二参数以何种形式指示运动向量信息的计算周期,则针对当前编码单元,可以根据第二参数,确定出当前编码单元对应的运动向量信息计算周期。例如,当前编码单元为当前点云帧,第二参数指示每间隔4个点云帧,计算一次运动向量信息,假设当前编码单元为编码顺序中的第6个点云帧,编码顺序中的第0个点云帧计算一次运动向量信息,第5个点云帧计算一次运动向量信息,第10个点云帧计算一次运动向量信息,其中,第0个点云帧到第4个点云帧可以理解为运动向量信息的第一个计算周期,第5个点云帧到第9个点云帧可以理解为运动向量信息的第二个计算周期,而该当前编码单元处于第二个计算周期内,进而将第二个计算周期,确定为当前编码单元对应的运动向量信息计算周期。
编码端根据上述步骤,确定出当前编码单元对应的运动向量信息计算周期后,根据该运动向量信息计算周期,确定当前编码单元的运动向量信息。
本申请实施例对编码端根据当前编码单元对应的运动向量信息计算周期,确定当前编码单元的运动向量信息的具体方式不做限制。
在一些实施例中,编解码两端约定,该运动向量信息计算周期内的编码单元的运动向量信息为默认值1,进而编码端将该默认值1确定为当前编码单元的运动向量信息。
在一些实施例中,编解码两端约定,采用预设的计算方法,计算出该运动向量信息计算周期内的编码单元的运动向量信息。例如,当前编码单元为当前点云帧的一个区域,则可以根据该当前点云帧中,当前编码单元周围已编码区域的运动向量信息,确定当前编码单元的运动向量信息。
在一些实施例中,编码端可以根据当前编码单元在当前编码单元对应的运动向量信息计算周期中的位置,确定当前编码单元的运动向量信息。
示例1,若当前编码单元为运动向量信息计算周期内的第一个编码单元,则根据当前编码单元的参考编码单元,确定当前点编码单元的运动向量信息。
示例2,若当前编码单元为运动向量信息计算周期内的非第一个编码单元,则根据已编码信息或默认值,确定当前编码单元的运动向量信息。
在该实施例中,编码端在确定当前编码单元对应的运动向量信息计算周期后,可以根据当前编码单元是否为该运动向量信息计算周期内的第一个编码单元,来确定当前编码单元的运动向量信息。
继续参照上述示例,假设当前编码单元对应的运动向量信息计算周期为第5个点云帧到第9个点云帧,若当前编码单元为编码顺序中的第5个点云帧,则编码端直接根据当前编码单元的参考编码单元,确定当前点编码单元的运动向量信息。若当前编码单元不是第5个点云帧,例如为第6个点云帧,则编码端将默认值确定为当前编码单元的运动向量信息,或者根据已编码信息,确定当前编码单元的运动向量信息。
本申请实施对上述示例2中,根据已编码信息,确定当前编码单元的运动向量信息的具体实现方式不做限制。
在一种可能的实现方式中,根据当前编码单元对应的运动向量信息计算周期内的第一个编码单元的运动向量信息,确定当前编码单元的运动向量信息。例如,将该运动向量信息计算周期内的第一个编码单元的运动向量信息,确定为当前编码单元的运动向量信息,或者,对该运动向量信息计算周期内的第一个编码单元的运动向量信息进行处理,得到当前编码单元的运动向量信息。
在一种可能的实现方式中,根据如下步骤21,确定当前编码单元的运动向量信息:
步骤21、根据S个编码单元的运动向量信息,确定当前编码单元的运动向量信息,S个编码单元为编码顺序中位于当前编码单元之前的S个已编码的编码单元,S为正整数。
本申请实施例对上述S个编码单元的具体选择方式不做限制。
在一些实施例中,上述S个编码单元在编码顺序中依次相邻,中间没有间隔。
在一些实施例中,上述S个编码单元可以是编码顺序中,位于当前编码单元之前的任意S个编码单元,即这S个编码单元可以相邻,也可以不完全相邻,本申请实施例对此不做限制。
由于相邻点云帧之间的内容具体相关性,在该实现方式中,编码端从已编码信息中,获取编码顺序中位于当前编码单元之前的S个编码单元,根据这S个编码单元的运动向量信息,确定当前编码单元的运动向量信息。
其中,步骤22中根据S个编码单元的运动向量信息,确定当前编码单元的运动向量信息的实现方式至少包括如下几种示例所示:
第一种示例,若S等于1,则将编码顺序中,位于当前编码单元之前的一个编码单元的运动向量信息,确定为当前编码单元的运动向量信息。例如,当前编码单元为编码顺序中的第6个点云帧,则将编码顺序中第5个点云帧的运动向量信息,确定为该当前编码单元的运动向量信息。
第二种示例中,若S大于1,则对S个编码单元的运动向量信息进行预设处理,并将处理结果确定为当前编码单元的运动向量信息。
例如,将S个编码单元的运动向量信息的平均值,确定为当前编码单元的运动向量信息。
再例如,将S个编码单元的运动向量信息的加权平均值,确定为当前编码单元的运动向量信息。可选的,这S个编码单元中,在编码顺序中距离当前编码单元越近,则权重越大,距离当前编码单元越远,则权重越小。
在该方式1的一种实施例中,将第二参数写入点云码流,且若当前编码单元为运动向量信息计算周期内的第一个编码单元时,则将当前编码单元的运动向量信息写入点云码流。
在该方式1的一种实施例中,将第二参数写入点云码流,且若当前编码单元为运动向量信息计算周期内的非第一个编码单元时,则跳过将当前编码单元的运动向量信息写入点云码流。
也就是说,在该方式1中,编码端还可以将第二参数写入码流,同时,将运动向量信息计算周期内的第一个编码单元的运动向量信息写入码流,而对于位于运动向量信息计算周期中间的编码单元(即非计算周期内的第一编码单元)的运动向量信息,则不写入码流,这样可以减少码流的负担。
在该方式1的一种实施例中,将当前编码单元的运动向量信息写入点云码流,且跳过将第二参数写入点云码流。
编码端除了通过上述方式1,确定出当前编码单元的运动向量信息外,还可以根据如下方式2,确定出当前编码单元的运动向量信息。
方式2,若第二参数指示每R个编码单元,计算一次运动向量信息,则上述S202-A的实现方式至少包括如下两种示例:
示例1,若当前编码单元为编码顺序中的第NR个编码单元,则根据当前编码单元的参考编码单元,确定当前点编码单元的运动向量信息,R、N均为正整数。
示例2,若当前编码单元为编码顺序中的非第NR个编码单元,则根据已编码信息或默认值,确定当前编码单元的运动向量信息。
在该方式2中,点云系列中每一个编码单元对应的运动向量信息计算周期相同,例如每R个编码单元计算一次运动向量信息。这样,编码端对当前编码单元进行编码时,判断当前编码单元是否为编码顺序中的第NR个编码单元,即判断当前编码单元在编码顺序中的序号是否为R的整数倍。若编码端确定当前编码单元为编码顺序中的第NR个编码单元,则根据当前编码单元的参考编码单元,确定当前点编码单元的运动向量信息。若当前编码单元不是编码顺序中的第NR个编码单元,则将默认值确定为当前编码单元的运动向量信息,或者根据已编码信息,确定当前编码单元的运动向量信息。
在该方式2的一种实施例中,将第二参数写入点云码流,且若当前编码单元为编码顺序中的第NK个编码单元时,则将当前编码单元的运动向量信息写入码流。
在该方式2的一种实施例中,将第二参数写入点云码流,且若当前编码单元为编码顺序中的非第NK个编码单元时,则跳过将当前编码单元的运动向量信息写入码流。
在该实现方式2中,若第二参数指示每R个编码单元,计算一次运动向量信息时,则编码端每R个编码单元,编码一次运动向量信息,可以降低运动向量信息的编码次数。例如,点云序列包括1000个点云帧,假设将一个点云帧作为一个编码单元,这样编码端的编码次数为1000/R,而不是1000次,大大降低了编码次数,减轻编码端的编码负担,提升编码效率。
在该方式2的一种实施例中,将当前编码单元的运动向量信息写入点云码流,且跳过将第二参数写入点云码流。
在该方式2中,若当前编码单元为编码顺序中的非第NR个编码单元,则根据已编码信息,确定当前编码单元的运动向量信息的具体过程可以参照上述步骤21和步骤22的描述,在此不再赘述。
编码端除了根据上述方式1和方式2所示的方法,确定出当前编码单元的运动向量信息外,还可以根据如下方式3,确定出当前编码单元的运动向量信息。
方式3,编码端根据编码单元的分类信息的变化程度,确定运动向量信息。即,编码端根据如下步骤1和步骤2, 确定出当前编码单元的运动向量信息:
步骤1、根据第一参数,确定分类信息的变化程度;
步骤2、根据变化程度,确定当前编码单元的运动向量信息。
本申请实施例中,若不同的编码单元的分类信息的变化不大时,则表示不同的编码单元的运动向量信息变化可能也不大。相反的,若不同的编码单元的分类信息的变化较大时,则表示不同的编码单元的运动向量信息变化可能也较大。因此,可以根据不同编码单元的分类信息的变化程度,来确定当前编码单元的运动向量信息。
本申请实施例对上述步骤1中根据第一参数,确定点云的分类信息的变化程度的具体实现方式不做限制。
在一些实施例中,根据第一参数,确定出多个编码单元的分类信息,根据多个编码单元的分类信息,确定分类信息的变化程度。例如,这多个编码单元的分类信息相差较大时,则表示分类信息的变化程度较大,若这多个编码单元的分类信息相差较小时,则表示分类信息的变化程度较小。
在一些实施例中,根据当前编码单元的分类信息,与当前编码单元的参考编码单元的分类信息,确定分类信息的变化程度。
示例性的,根据所述第一参数,确定当前编码单元的分类信息,具体过程可以参照上述实施例的描述,在此不再赘述。接着,确定当前编码单元的分类信息,与当前编码单元的参考编码单元的分类信息之间的变化程度,例如,将当前编码单元的分类信息,与当前编码单元的参考编码单元的分类信息的差值的绝对值,确定为分类信息的变化程度。
根据上述方法,确定出分类信息的变化程度后,根据该分类信息的变化程度,确定当前编码单元的运动向量信息。
例如,若分类信息的变化程度小于或等于第一预设值,则将默认值、或者将编码顺序中当前编码单元的前一编码单元的运动向量信息,确定为当前编码单元的运动向量信息。
再例如,若变化程度大于第一预设值,则根据当前编码单元的参考编码单元,确定当前编码单元的运动向量信息。
本申请实施例中,根据上述方式,可以确定出当前编码单元的运动向量信息。
其中,运动向量信息可以理解为编码端进行帧间预测所需的运动信息。本申请实施例对运动向量信息的具体表现形式不做限制。
在一些实施例中,运动向量信息包括旋转矩阵和偏移向量中的至少一个。其中,旋转矩阵描述编码单元与参考编码单元的三维旋转,偏移向量描述编码单元与参考编码单元的坐标原点在三个方向上的偏移量。
在一些实施例中,若运动向量信息包括旋转矩阵和偏移向量中的至少一个,对应的,第二参数motion_period包括第三子参数rotation_matrix_period和第四子参数translation_vector_period中的至少一个。
其中,第三子参数rotation_matrix_period用于指示旋转矩阵的计算周期,第四子参数translation_vector_period用于指示偏移向量的计算周期。
上述第三子参数和第四子参数可以分别独立赋值。
可选的,上述旋转矩阵的计算周期与偏移向量的计算周期可以相同,也可以不同,本申请实施例对此不做限制。
在一些实施例中,若当前编码单元为编码顺序中的第一个编码单元,即编码序号为0时,则编码端将当前编码单元的分类信息和运动向量信息中的至少一个,写入点云码流中。
本申请实施例中,编码端根据上述步骤,确定出当前编码单元的分类信息和运动向量信息中的至少一个后,执行如下S203的步骤。
S203、根据当前编码单元的分类信息和运动向量信息中的至少一个,对当前编码单元进行编码。
由于不同的物体的运动不同,因此,为了提高编码准确性,则根据当前编码单元的分类信息,确定出当前编码单元中点云的类别,对不同类别的点云采用不同的运动向量信息进行帧间预测。例如,以车载雷达扫描的点云数据为例,点云可以分为道路点和物体点,道路点和物体点的运动向量信息不同,
本申请实施例对上述S203中,根据当前编码单元的分类信息和运动向量信息中的至少一个,对当前编码单元进行编码的具体过程不做限制。
在一些实施例中,若根据上述方法,确定出当前编码单元的分类信息,但是未确定出当前编码单元的运动向量信息,则可以根据分类信息,将当前编码单元中的点云划分为多个类别。针对每个类别赋予不同的运动向量信息,其中为不同类别赋予的运动向量信息可以为不同类别对应的预设值,或者为根据类别计算得到的值,本申请实施例对此不做限制。
在一些实施例中,若根据上述方法,确定出当前编码单元的运动向量信息,而未确定出当前编码单元的分类信息,此时,编码端可以根据自行确定出当前编码单元的分类信息,例如根据当前编码单元周围已编码单元的编码信息,确定当前编码单元的分类信息。接着,根据分类信息,将当前编码单元中的点云划分为多个类别。根据当前编码单元的运动向量信息,确定当前编码单元中每一个类别的点云的运动向量信息。例如,当前编码单元包括第一类点云和第二类点云,则可以将上述确定的运动向量信息,确定为第一类点云的运动向量信息,确定第二类点云的运动向量信息为预设值,例如为零向量。
在一些实施例中,若根据上述步骤,确定出当前编码单元的分类信息,以及当前编码单元的运动向量信息,此时上述S203包括如下步骤:
S203-A、根据当前编码单元的分类信息,将当前编码单元中的点云划分为P类点云,P为大于1的正整数。
在该实施例中,编码端通过当前编码单元的分类信息,将当前编码单元中的点云划分为P类点云,根据当前编码单元的运动向量信息,确定P类点云对应的运动向量信息,进而根据P类点云对应的运动向量信息,对当前编码单元进行编码。即本申请实施例,针对当前编码单元中不同类别的点云,采用不同的运动向量信息进行编码,进而提高了编码的准确性。
本申请实施例对上述S203-A、根据当前编码单元的分类信息,将当前编码单元中的点云划分为P类点云的具体类 型不做限制。
在一些实施例中,当前编码单元的分类信息可以为类别标识,例如当前编码单元中每一个点包括一个类别标识,这样可以根据类别标识,将当前编码单元中的点云,划分为P类点云。
在一些实施例中,当前编码单元的分类信息包括第一高度阈值和第二高度阈值,且第一高度阈值大于第二高度阈值,此时,上述S203-A包括如下步骤:
S203-A1、根据第一高度阈值和第二高度阈值,将当前编码单元中点云划分为P类点云。
例如,将当前编码单元中高度值大于第一高度阈值的点云,划分为一类点云,将当前编码单元中高度值位于第一高度阈值和第二高度阈值之间的点云,划分为一类点云,将当前编码单元中高度值小于第二高度阈值之间的点云,划分为一类点云。
再例如,将当前编码单元中高度值,小于或等于第一高度阈值且大于或等于第二高度阈值的点云,划分为第一类点云;将当前编码单元中高度值,大于第一高度阈值或者小于第二高度阈值的点云,划分为第二类点云。
S203-B、根据当前编码单元的运动向量信息,确定P类点云对应的运动向量信息。
根据上述步骤,将当前编码单元中的点云划分为P类点云后,根据当前编码单元的运动向量信息,确定P类点云对应的运动向量信息。例如,将当前编码单元的运动向量信息,确定为P类点云中一类点云的运动向量信息,P类点云中的其他类点云的运动向量信息可以为预设值。
在一种示例中,P类点云包括上述第一类点云和第二类点云,则可以将当前编码单元的运动向量信息,确定为第二类点云的运动向量信息,而第一类点云的运动向量信息为预设值,例如为零向量。
以车载点云为例,第一类点云可以理解为道路点云,第二类点云可以理解为非道路点云,由于道路的变化不大,因此,非道路点云为研究重点,因此,将当前编码单元的运动向量信息,确定为非道路点的运动向量信息,而确定道路点则被预测为静态,即零运动,即道路点的运动向量信息为零向量。
S203-C、根据P类点云对应的运动向量信息,对当前编码单元进行编码。
根据上述方法,确定出当前编码单元中,P类点云对应的运动向量信息后,根据P类点云对应的运动向量信息,对当前编码单元进行编码。
本申请实施例对根据P类点云对应的运动向量信息,对当前编码单元进行编码的具体实现方式不做限制。
在一些实施例中,编码端可以确定当前编码单元的参考编码单元;根据P类点云的运动向量信息,对参考编码单元进行运动补偿,得到当前编码单元的预测信息;根据预测信息,对当前编码单元的几何信息和属性信息中的至少一个进行编码。
在一种示例中,以根据预测信息,对当前编码单元的几何信息进行编码为例,该预测信息可以理解为当前编码单元的预测单元。这样可以根据预测单元中空间占用情况,预测当前编码单元的空间占用情况,进而根据预测的当前编码单元的空间占用情况,对当前编码单元的几何信息进行编码,得到当前编码单元的几何码流。
在另一种示例中,以根据预测信息,对当前编码单元的属性信息进行编码为例,该预测信息可以理解为当前编码单元的预测单元。这样针对当前编码单元中的每一个点,在该预测单元中获取该点的至少一个参考点,根据这至少一个参考点的属性信息,预测该点的属性信息,得到该点的属性预测值。接着,根据该点的属性预测值和属性值,确定该点的属性残差值,进而对该点的属性残差值进行编码,形成属性码流。
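针对上述“基于预测单元对属性信息进行编码”的过程，下面给出一段示意性的 Python 片段：对当前编码单元中的每个点，在预测单元中取若干参考点，用参考点属性的均值作为属性预测值，再与原始属性相减得到属性残差。此处参考点采用 k 近邻选取，k 的取值与选点方式均为假设，并非标准规定。

```python
import numpy as np

def attribute_residuals(cur_points: np.ndarray, cur_attrs: np.ndarray,
                        pred_points: np.ndarray, pred_attrs: np.ndarray,
                        k: int = 3) -> np.ndarray:
    """逐点计算当前编码单元的属性残差值（示意实现）。
    cur_points/cur_attrs：当前编码单元中点的几何坐标与属性原始值；
    pred_points/pred_attrs：运动补偿得到的预测单元中点的几何坐标与属性值。"""
    residuals = np.empty_like(cur_attrs, dtype=float)
    for i, p in enumerate(cur_points):
        d = np.linalg.norm(pred_points - p, axis=1)      # 与预测单元中各点的距离
        nearest = np.argsort(d)[:k]                      # 取 k 个参考点
        prediction = pred_attrs[nearest].mean(axis=0)    # 属性预测值
        residuals[i] = cur_attrs[i] - prediction         # 属性残差值
    return residuals
```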
需要说明的是,编码端还可以采用的方法,根据当前编码单元的分类信息和运动向量信息中的至少一个,对当前编码单元进行编码,本申请实施例对此不做限制。
由上述可知,编码端可以将第一参数和第二参数中的至少一个写入码流。
可选地,上述第一参数和第二参数中的至少一个,可以以无符号整数形式存储,记为u(v),表示使用v位比特位描述一个参数。
可选的,上述第一参数和第二参数中的至少一个,也可以以无符号指数哥伦布编码的形式进行存储,记为ue(v),表示上述参数取值先经过指数哥伦布编码变成v位01比特序列,再写入码流。
在一些实施例中,编码端将第一参数和第二参数中的至少一个写入序列头参数集中。
在一种示例中,第一参数用于指示间隔多个点云帧,计算一次分类信息;和/或,第二参数用于指示间隔多个点云帧,计算一次运动向量信息。
示例性的,第一参数和第二参数在序列头参数集中的存储方式,如表1所示。
在一些实施例中,编码端将第一参数和第二参数中的至少一个写入点云片头信息中。
在一种示例中,第一参数用于指示对于点云帧中的第i个点云片,间隔多个点云帧,计算一次第i个点云片的分类信息,i为正整数;和/或,第二参数用于指示对于点云帧中的第i个点云片,间隔多个点云帧,计算一次第i个点云片的运动向量信息。
在该示例中,第一参数和第二参数在点云片头信息中的存储方式,如表2所示。
在一种示例中,第一参数用于指示一个点云帧中间隔多个点云片,计算一次分类信息;和/或,第二参数用于一个点云帧中间隔多个点云片,计算一次运动向量信息。
在该示例中,第一参数和第二参数在点云片头信息中的存储方式,如表3所示。
在一些实施例中,编码端在确定第一参数和第二参数之前,首先需要确定第一标识inter_prediction_flag,第一标识inter_prediction_flag用于指示是否进行帧间预测编码;若第一标志inter_prediction_flag指示进行帧间预测编码时,确定第一参数和第二参数中的至少一个。
本申请实施例提供的点云编码方法,编码端确定第一参数和第二参数中的至少一个,所述第一参数用于指示分类信息的计算周期,所述第二参数用于指示运动向量信息的计算周期;根据所述第一参数和所述第二参数中的至少一个,确定当前编码单元的分类信息和运动向量信息中的至少一个;根据所述当前编码单元的分类信息和运动向量信息中的 至少一个,对所述当前编码单元进行编码。即本申请实施例,通过周期性的计算分类信息和运动向量信息,相比于针对每一个编码单元计算一次分类信息和运动向量信息,本申请实施例大大降低了分类信息和运动向量信息的计算次数,降低了编码处理时间,提升了编码效率。
应理解,图4至图6仅为本申请的示例,不应理解为对本申请的限制。
以上结合附图详细描述了本申请的优选实施方式,但是,本申请并不限于上述实施方式中的具体细节,在本申请的技术构思范围内,可以对本申请的技术方案进行多种简单变型,这些简单变型均属于本申请的保护范围。例如,在上述具体实施方式中所描述的各个具体技术特征,在不矛盾的情况下,可以通过任何合适的方式进行组合,为了避免不必要的重复,本申请对各种可能的组合方式不再另行说明。又例如,本申请的各种不同的实施方式之间也可以进行任意组合,只要其不违背本申请的思想,其同样应当视为本申请所公开的内容。
还应理解,在本申请的各种方法实施例中,上述各过程的序号的大小并不意味着执行顺序的先后,各过程的执行顺序应以其功能和内在逻辑确定,而不应对本申请实施例的实施过程构成任何限定。另外,本申请实施例中,术语“和/或”,仅仅是一种描述关联对象的关联关系,表示可以存在三种关系。具体地,A和/或B可以表示:单独存在A,同时存在A和B,单独存在B这三种情况。另外,本文中字符“/”,一般表示前后关联对象是一种“或”的关系。
上文结合图4至图6,详细描述了本申请的方法实施例,下文结合图7至图10,详细描述本申请的装置实施例。
图7是本申请实施例提供的点云解码装置的示意性框图。
如图7所示，该点云解码装置10可包括：
确定单元11,用于解码点云码流,确定当前解码单元的分类信息和运动向量信息中的至少一个,所述分类信息是基于第一参数确定的,所述运动向量信息是基于第二参数确定的,所述第一参数用于指示分类信息的计算周期,所述第二参数用于指示运动向量信息的计算周期;
解码单元12,用于根据所述当前解码单元的分类信息和运动向量信息中的至少一个,对所述当前解码单元进行解码。
在一些实施例中,确定单元11,具体用于从所述点云码流中,解码出所述当前解码单元的分类信息和运动向量信息中的至少一个。
在一些实施例中,确定单元11,具体用于从所述点云码流中,解码出所述第一参数和所述第二参数中的至少一个;根据所述第一参数,确定所述当前解码单元的分类信息,和/或根据所述第二参数,确定所述当前解码单元的运动向量信息。
在一些实施例中,确定单元11,具体用于根据所述第一参数,确定所述当前解码单元对应的分类信息计算周期;根据所述分类信息计算周期,确定所述当前解码单元的分类信息。
在一些实施例中,确定单元11,具体用于若所述当前解码单元为所述分类信息计算周期内的第一个解码单元,则解码所述点云码流,得到所述当前点解码单元的分类信息。
在一些实施例中,确定单元11,具体用于若所述当前解码单元为所述分类信息计算周期内的非第一个解码单元,则根据已解码信息或默认值,确定所述当前解码单元的分类信息。
在一些实施例中,所述第一参数指示每K个解码单元,计算一次分类信息,确定单元11,具体用于若所述当前解码单元为解码顺序中的第NK个解码单元,则解码所述点云码流,得到所述当前解码单元的分类信息,所述K、N均为正整数。
在一些实施例中,确定单元11,还用于若所述当前解码单元为解码顺序中的非第NK个解码单元,则根据已解码信息或默认值,确定所述当前解码单元的分类信息。
在一些实施例中,确定单元11,具体用于根据所述M个解码单元的分类信息,确定所述当前解码单元的分类信息,所述M个解码单元为解码顺序中位于所述当前解码单元之前的M个已解码的解码单元,所述M为正整数。
在一些实施例中,确定单元11,具体用于若所述M等于1,则将所述解码顺序中,位于所述当前解码单元之前的一个解码单元的分类信息,确定为所述当前解码单元的分类信息。
在一些实施例中,确定单元11,具体用于若所述M大于1,则对M个解码单元的分类信息进行预设处理,并将处理结果确定为所述当前解码单元的分类信息。
在一些实施例中,确定单元11,具体用于将所述M个解码单元的分类信息的平均值,确定为所述当前解码单元的分类信息。
在一些实施例中,所述分类信息包括第一高度阈值和第二高度阈值中的至少一个,所述第一参数包括第一子参数和第二子参数中的至少一个;
所述第一子参数用于指示所述第一高度阈值的计算周期,所述第二子参数用于指示所述第二高度阈值的计算周期,所述第一高度阈值和所述第二高度阈值用于所述当前解码单元中点云的分类。
在一些实施例中,确定单元11,具体用于根据所述第二参数,确定所述当前解码单元对应的运动向量信息计算周期;根据所述运动向量信息计算周期,确定所述当前解码单元的运动向量信息。
在一些实施例中,确定单元11,具体用于若所述当前解码单元为所述运动向量信息计算周期内的第一个解码单元,则解码所述点云码流,得到所述当前点解码单元的运动向量信息。
在一些实施例中,确定单元11,具体用于若所述当前解码单元为所述运动向量信息计算周期内的非第一个解码单元,则根据已解码信息或默认值,确定所述当前解码单元的运动向量信息。
在一些实施例中,所述第一参数指示每隔R个解码单元,计算一次运动向量信息,确定单元11,具体用于若所述当前解码单元为解码顺序中的第NR个解码单元,则解码所述点云码流,得到所述当前解码单元的运动向量信息,所述R、N均为正整数。
在一些实施例中,确定单元11,还用于若所述当前解码单元为解码顺序中的非第NR个解码单元,则根据已解码信息或默认值,确定所述当前解码单元的运动向量信息。
在一些实施例中,确定单元11,具体用于根据S个解码单元的运动向量信息,确定所述当前解码单元的运动向量信息,所述S个解码单元为解码顺序中位于所述当前解码单元之前的S个已解码的解码单元,所述S为正整数。
在一些实施例中,确定单元11,具体用于若所述S等于1,则将所述解码顺序中,位于所述当前解码单元之前的一个解码单元的运动向量信息,确定为所述当前解码单元的运动向量信息。
在一些实施例中,确定单元11,具体用于若所述S大于1,则对S个解码单元的运动向量信息进行预设处理,并将处理结果确定为所述当前解码单元的运动向量信息。
在一些实施例中,确定单元11,具体用于将所述S个解码单元的运动向量信息的平均值,确定为所述当前解码单元的运动向量信息。
在一些实施例中,确定单元11,还用于根据所述第一参数,确定分类信息的变化程度;根据所述变化程度,确定所述当前解码单元的运动向量信息。
在一些实施例中,确定单元11,具体用于根据所述第一参数,确定所述当前解码单元的分类信息;确定所述当前解码单元的分类信息,与所述当前解码单元的参考解码单元的分类信息之间的变化程度。
在一些实施例中,确定单元11,具体用于若所述变化程度小于或等于第一预设值,则将默认值、或者将解码顺序中所述当前解码单元的前一解码单元的运动向量信息,确定为所述当前解码单元的运动向量信息。
在一些实施例中,确定单元11,具体用于若所述变化程度大于第一预设值,则解码所述点云码流,得到所述当前解码单元的运动向量信息。
在一些实施例中,所述运动向量信息包括旋转矩阵和偏移向量中的至少一个,所述第二参数包括第三子参数和第四子参数中的至少一个;
所述第三子参数用于指示所述旋转矩阵的计算周期,所述第四子参数用于指示所述偏移向量的计算周期。
在一些实施例中,确定单元11,还用于若所述当前解码单元为解码顺序中的第一个解码单元,则解码所述点云码流,得到所述当前解码单元的分类信息和运动向量信息中的至少一个。
在一些实施例中,解码单元12,具体用于根据所述当前解码单元的分类信息,将所述当前解码单元中的点云划分为P类点云,所述P为大于1的正整数;根据所述当前解码单元的运动向量信息,确定所述P类点云对应的运动向量信息;根据所述P类点云对应的运动向量信息,对所述当前解码单元进行解码。
在一些实施例中,所述分类信息包括第一高度阈值和第二高度阈值,且所述第一高度阈值大于所述第二高度阈值,解码单元12,具体用于根据所述第一高度阈值和所述第二高度阈值,将所述当前解码单元中点云划分为P类点云。
在一些实施例中,所述P类点云包括第一类点云和第二类点云,所述解码单元12,具体用于将所述当前解码单元中高度值,小于或等于所述第一高度阈值且大于或等于所述第二高度阈值的点云,划分为所述第一类点云;将所述当前解码单元中高度值,大于所述第一高度阈值或者小于所述第二高度阈值的点云,划分为所述第二类点云。
在一些实施例中,解码单元12,具体用于确定所述当前解码单元的参考解码单元;根据所述P类点云的运动向量信息,对所述参考解码单元进行运动补偿,得到所述当前解码单元的预测信息;根据所述预测信息,对所述当前解码单元的几何信息和属性信息中的至少一个进行解码。
在一些实施例中,所述当前解码单元为当前点云帧,或者为所述当前点云帧的一空间区域。
在一些实施例中,确定单元12,具体用于解码顺序头参数集,得到所述第一参数和所述第二参数中的至少一个。
在一些实施例中,所述第一参数用于指示间隔多个点云帧,计算一次分类信息;和/或,
所述第二参数用于指示间隔多个点云帧,计算一次运动向量信息。
在一些实施例中,所述当前解码单元为当前点云片,确定单元12,具体用于解码点云片头信息,得到所述第一参数和所述第二参数中的至少一个。
在一些实施例中,所述第一参数用于指示对于点云帧中的第i个点云片,间隔多个点云帧,计算一次所述第i个点云片的分类信息,所述i为正整数;和/或,
所述第二参数用于指示对于点云帧中的第i个点云片,间隔多个点云帧,计算一次所述第i个点云片的运动向量信息。
在一些实施例中,所述第一参数用于指示一个点云帧中间隔多个点云片,计算一次分类信息;和/或,
所述第二参数用于指示一个点云帧中间隔多个点云片,计算一次运动向量信息。
在一些实施例中,确定单元11,具体用于解码所述点云码流,得到第一标识,所述第一标识用于指示是否进行帧间预测解码;
若所述第一标识指示进行帧间预测解码,则解码所述点云码流,得到所述第一参数和所述第二参数中的至少一个。
应理解,装置实施例与方法实施例可以相互对应,类似的描述可以参照方法实施例。为避免重复,此处不再赘述。具体地,图7所示的点云解码装置10可以对应于执行本申请实施例的点云解码方法中的相应主体,并且点云解码装置10中的各个单元的前述和其它操作和/或功能分别为了实现点云解码方法中的相应流程,为了简洁,在此不再赘述。
图8是本申请实施例提供的点云编码装置的示意性框图。
如图8所示,点云编码装置20包括:
第一确定单元21,用于确定第一参数和第二参数中的至少一个,所述第一参数用于指示分类信息的计算周期,所述第二参数用于指示运动向量信息的计算周期;
第二确定单元22,用于根据所述第一参数和所述第二参数中的至少一个,确定当前编码单元的分类信息和运动向量信息中的至少一个;
编码单元23,用于根据所述当前编码单元的分类信息和运动向量信息中的至少一个,对所述当前编码单元进行编码。
在一些实施例中,第二确定单元22,具体用于根据所述第一参数,确定所述当前编码单元的分类信息,和/或根据所述第二参数,确定所述当前编码单元的运动向量信息。
在一些实施例中,第二确定单元22,具体用于根据所述第一参数,确定所述当前编码单元对应的分类信息计算周期;根据所述分类信息计算周期,确定所述当前编码单元的分类信息。
在一些实施例中,第二确定单元22,具体用于若所述当前编码单元为所述分类信息计算周期内的第一个编码单元,则对所述当前编码单元中点云的类别进行识别,得到所述当前编码单元的分类信息。
在一些实施例中,第二确定单元22,具体用于若所述当前编码单元为所述分类信息计算周期内的非第一个编码单元,则根据已编码信息或默认值,确定所述当前编码单元的分类信息。
在一些实施例中,编码单元23,还用于将所述第一参数写入点云码流,且若所述当前编码单元为所述分类信息计算周期内的第一个编码单元时,则将所述当前编码单元的分类信息写入所述点云码流。
在一些实施例中,编码单元23,还用于将所述第一参数写入点云码流,且若所述当前编码单元为所述分类信息计算周期内的非第一个编码单元时,则跳过将所述当前编码单元的分类信息写入所述点云码流。
在一些实施例中,所述第一参数指示每隔K个编码单元,计算一次分类信息,所述第二确定单元22,具体用于若所述当前编码单元为编码顺序中的第NK个编码单元,则对所述当前编码单元中点云的类别进行识别,得到所述当前编码单元的分类信息,所述K、N均为正整数。
在一些实施例中,第二确定单元22,还用于若所述当前编码单元为编码顺序中的非第NK个编码单元,则根据已编码信息或默认值,确定所述当前编码单元的分类信息。
在一些实施例中,编码单元23,还用于将所述第一参数写入点云码流,且若所述当前编码单元为编码顺序中的第NK个编码单元时,则将所述当前编码单元的分类信息写入所述码流。
在一些实施例中,编码单元23,还用于将所述第一参数写入点云码流,且若所述当前编码单元为编码顺序中的非第NK个编码单元时,则跳过将所述当前编码单元的分类信息写入所述码流。
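对应上述“将第一参数写入点云码流、仅在编码顺序中的第NK个编码单元写入分类信息、其余编码单元跳过写入”的方式,下面给出一个示意性的编码端片段(write_first_parameter、write_classification 为假设的码流写入接口,并非真实库的API)。

```python
# 示意性草图:编码端按周期 K 决定是否把当前编码单元的分类信息写入点云码流
def encode_classification(unit_index, K, classification, bitstream_writer):
    """unit_index 为编码顺序中的序号(从1开始), classification 为识别得到的分类信息。"""
    if unit_index == 1:
        bitstream_writer.write_first_parameter(K)       # 将第一参数写入点云码流(假设接口)
    if unit_index == 1 or unit_index % K == 0:
        # 编码顺序中的第一个或第 NK 个编码单元:把分类信息写入点云码流
        bitstream_writer.write_classification(classification)
    # 其余编码单元跳过写入分类信息, 解码端按相同规则复用已解码信息或默认值
```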
在一些实施例中,编码单元23,还用于将所述当前编码单元的分类信息写入点云码流,且跳过将所述第一参数写入所述点云码流。
在一些实施例中,第二确定单元22,具体用于根据M个编码单元的分类信息,确定所述当前编码单元的分类信息,所述M个编码单元为编码顺序中位于所述当前编码单元之前的M个已编码的编码单元,所述M为正整数。
在一些实施例中,第二确定单元22,具体用于若所述M等于1,则将所述编码顺序中,位于所述当前编码单元之前的一个编码单元的分类信息,确定为所述当前编码单元的分类信息。
在一些实施例中,第二确定单元22,具体用于若所述M大于1,则对M个编码单元的分类信息进行预设处理,并将处理结果确定为所述当前编码单元的分类信息。
在一些实施例中,第二确定单元22,具体用于将所述M个编码单元的分类信息的平均值,确定为所述当前编码单元的分类信息。
在一些实施例中,所述分类信息包括第一高度阈值和第二高度阈值中的至少一个,所述第一参数包括第一子参数和第二子参数中的至少一个;
所述第一子参数用于指示所述第一高度阈值的计算周期,所述第二子参数用于指示所述第二高度阈值的计算周期,所述第一高度阈值和所述第二高度阈值用于所述当前编码单元中点云的分类。
在一些实施例中,第二确定单元22,具体用于根据所述第二参数,确定所述当前编码单元对应的运动向量信息计算周期;根据所述运动向量信息计算周期,确定所述当前编码单元的运动向量信息。
在一些实施例中,第二确定单元22,具体用于若所述当前编码单元为所述运动向量信息计算周期内的第一个编码单元,则根据所述当前编码单元的参考编码单元,确定所述当前编码单元的运动向量信息。
在一些实施例中,第二确定单元22,具体用于若所述当前编码单元为所述运动向量信息计算周期内的非第一个编码单元,则根据已编码信息或默认值,确定所述当前编码单元的运动向量信息。
在一些实施例中,编码单元23,还用于将所述第二参数写入点云码流,且若所述当前编码单元为所述运动向量信息计算周期内的第一个编码单元时,则将所述当前编码单元的运动向量信息写入所述点云码流。
在一些实施例中,编码单元23,还用于将所述第二参数写入点云码流,且若所述当前编码单元为所述运动向量信息计算周期内的非第一个编码单元时,则跳过将所述当前编码单元的运动向量信息写入所述点云码流。
在一些实施例中,所述第二参数指示每隔R个编码单元,计算一次运动向量信息,第二确定单元22,具体用于若所述当前编码单元为编码顺序中的第NR个编码单元,则根据所述当前编码单元的参考编码单元,确定所述当前编码单元的运动向量信息,所述R、N均为正整数。
在一些实施例中,第二确定单元22,还用于若所述当前编码单元为编码顺序中的非第NR个编码单元,则根据已编码信息或默认值,确定所述当前编码单元的运动向量信息。
在一些实施例中,编码单元23,还用于将所述第二参数写入点云码流,且若所述当前编码单元为编码顺序中的第NR个编码单元时,则将所述当前编码单元的运动向量信息写入所述码流。
在一些实施例中,编码单元23,还用于将所述第二参数写入点云码流,且若所述当前编码单元为编码顺序中的非第NR个编码单元时,则跳过将所述当前编码单元的运动向量信息写入所述码流。
在一些实施例中,编码单元23,还用于将所述当前编码单元的运动向量信息写入点云码流,且跳过将所述第二参数写入所述点云码流。
在一些实施例中,第二确定单元22,具体用于根据S个编码单元的运动向量信息,确定所述当前编码单元的运动向量信息,所述S个编码单元为编码顺序中位于所述当前编码单元之前的S个已编码的编码单元,所述S为正整数。
在一些实施例中,第二确定单元22,具体用于若所述S等于1,则将所述编码顺序中,位于所述当前编码单元之前的一个编码单元的运动向量信息,确定为所述当前编码单元的运动向量信息。
在一些实施例中,第二确定单元22,具体用于若所述S大于1,则对S个编码单元的运动向量信息进行预设处理,并将处理结果确定为所述当前编码单元的运动向量信息。
在一些实施例中,第二确定单元22,具体用于将所述S个编码单元的运动向量信息的平均值,确定为所述当前编码单元的运动向量信息。
在一些实施例中,第二确定单元22,还用于根据所述第一参数,确定不同编码单元的分类信息的变化程度;根据所述变化程度,确定所述当前编码单元的运动向量信息。
在一些实施例中,第二确定单元22,具体用于根据所述第一参数,确定所述当前编码单元的分类信息,确定所述当前编码单元的分类信息,与所述当前编码单元的参考编码单元的分类信息之间的变化程度。
在一些实施例中,第二确定单元22,具体用于若所述变化程度小于或等于第一预设值,则将默认值、或者将编码顺序中所述当前编码单元的前一编码单元的运动向量信息,确定为所述当前编码单元的运动向量信息。
在一些实施例中,第二确定单元22,具体用于若所述变化程度大于第一预设值,则根据所述当前编码单元的参考编码单元,确定所述当前编码单元的运动向量信息。
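与解码端类似,编码端可先比较当前编码单元与参考编码单元的分类信息,再决定是否基于参考编码单元重新进行运动估计。下面是一个示意性片段,estimate_motion 表示某种运动估计过程(具体算法此处不展开),变化程度的度量方式与第一预设值均为假设。

```python
# 示意性草图:编码端根据分类信息的变化程度决定是否重新进行运动估计
def decide_motion_for_encoding(cur_cls, ref_cls, prev_mv, cur_points, ref_points,
                               estimate_motion, threshold=5.0, default_mv=None):
    change = sum(abs(a - b) for a, b in zip(cur_cls, ref_cls))   # 假设的变化程度度量
    if change <= threshold:
        # 变化不大:沿用默认值或编码顺序中前一编码单元的运动向量信息
        return prev_mv if prev_mv is not None else default_mv
    # 变化较大:根据参考编码单元重新估计当前编码单元的运动向量信息
    return estimate_motion(cur_points, ref_points)
```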
在一些实施例中,编码单元23,还用于将所述第一参数写入点云码流,且跳过将所述第二参数写入所述点云码流。
在一些实施例中,所述运动向量信息包括旋转矩阵和偏移向量中的至少一个,所述第二参数包括第三子参数和第四子参数中的至少一个;
所述第三子参数用于指示所述旋转矩阵的计算周期,所述第四子参数用于指示所述偏移向量的计算周期。
在一些实施例中,编码单元23,具体用于根据所述当前编码单元的分类信息,将所述当前编码单元中的点云划分为P类点云,所述P为大于1的正整数;根据所述当前编码单元的运动向量信息,确定所述P类点云对应的运动向量信息;根据所述P类点云对应的运动向量信息,对所述当前编码单元进行编码。
在一些实施例中,所述分类信息包括第一高度阈值和第二高度阈值,且所述第一高度阈值大于所述第二高度阈值,编码单元23,具体用于根据所述第一高度阈值和所述第二高度阈值,将所述当前编码单元中点云划分为P类点云。
在一些实施例中,所述P类点云包括第一类点云和第二类点云,编码单元23,具体用于将所述当前编码单元中高度值,小于或等于所述第一高度阈值且大于或等于所述第二高度阈值的点云,划分为所述第一类点云;将所述当前编码单元中高度值,大于所述第一高度阈值或者小于所述第二高度阈值的点云,划分为所述第二类点云。
在一些实施例中,编码单元23,具体用于根据所述P类点云的运动向量信息,对所述当前编码单元的参考编码单元进行运动补偿,得到所述当前编码单元的预测信息;根据所述预测信息,对所述当前编码单元的几何信息和属性信息中的至少一个进行编码。
在一些实施例中,所述当前编码单元为当前点云帧,或者为所述当前点云帧的一空间区域。
在一些实施例中,编码单元23,还用于将所述第一参数和所述第二参数中的至少一个,写入序列头参数集中。
在一些实施例中,所述第一参数用于指示间隔多个点云帧,计算一次分类信息;和/或,
所述第二参数用于指示间隔多个点云帧,计算一次运动向量信息。
在一些实施例中,所述当前编码单元为当前点云片,编码单元23,还用于将所述第一参数和所述第二参数中的至少一个,写入点云片头信息中。
在一些实施例中,所述第一参数用于指示对于点云帧中的第i个点云片,间隔多个点云帧,计算一次所述第i个点云片的分类信息,所述i为正整数;和/或,
所述第二参数用于指示对于点云帧中的第i个点云片,间隔多个点云帧,计算一次所述第i个点云片的运动向量信息。
在一些实施例中,所述第一参数用于指示一个点云帧中间隔多个点云片,计算一次分类信息;和/或,
所述第二参数用于指示一个点云帧中间隔多个点云片,计算一次运动向量信息。
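为直观说明第一参数、第二参数可能携带的位置(序列头参数集或点云片头信息)及其粒度,下面给出一个示意性的参数结构草图,字段名均为假设,并非标准语法元素。

```python
# 示意性草图:用数据类表示两种可能的参数携带位置(字段名为假设)
from dataclasses import dataclass

@dataclass
class SequenceHeaderParams:          # 写入序列头参数集:以点云帧为粒度
    classification_period: int       # 第一参数:间隔多少个点云帧计算一次分类信息
    motion_vector_period: int        # 第二参数:间隔多少个点云帧计算一次运动向量信息

@dataclass
class SliceHeaderParams:             # 写入点云片头信息:以点云片为粒度
    slice_index: int                 # 第 i 个点云片
    classification_period: int       # 第一参数:该点云片的分类信息计算周期
    motion_vector_period: int        # 第二参数:该点云片的运动向量信息计算周期
```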
在一些实施例中,第一确定单元21,还用于确定第一标识,所述第一标识用于指示是否进行帧间预测编码;若所述第一标识指示进行帧间预测编码,则确定所述第一参数和所述第二参数中的至少一个。
应理解,装置实施例与方法实施例可以相互对应,类似的描述可以参照方法实施例。为避免重复,此处不再赘述。具体地,图8所示的点云编码装置20可以对应于执行本申请实施例的点云编码方法中的相应主体,并且点云编码装置20中的各个单元的前述和其它操作和/或功能分别为了实现点云编码方法中的相应流程,为了简洁,在此不再赘述。
上文中结合附图从功能单元的角度描述了本申请实施例的装置和系统。应理解,该功能单元可以通过硬件形式实现,也可以通过软件形式的指令实现,还可以通过硬件和软件单元组合实现。具体地,本申请实施例中的方法实施例的各步骤可以通过处理器中的硬件的集成逻辑电路和/或软件形式的指令完成,结合本申请实施例公开的方法的步骤可以直接体现为硬件译码处理器执行完成,或者用译码处理器中的硬件及软件单元组合执行完成。可选地,软件单元可以位于随机存储器,闪存、只读存储器、可编程只读存储器、电可擦写可编程存储器、寄存器等本领域的成熟的存储介质中。该存储介质位于存储器,处理器读取存储器中的信息,结合其硬件完成上述方法实施例中的步骤。
图9是本申请实施例提供的电子设备的示意性框图。
如图9所示,该电子设备30可以为本申请实施例所述的点云解码设备,或者点云编码设备,该电子设备30可包括:
存储器33和处理器32,该存储器33用于存储计算机程序34,并将该计算机程序34传输给该处理器32。换言之,该处理器32可以从存储器33中调用并运行计算机程序34,以实现本申请实施例中的方法。
例如,该处理器32可用于根据该计算机程序34中的指令执行上述方法200中的步骤。
在本申请的一些实施例中,该处理器32可以包括但不限于:
通用处理器、数字信号处理器(Digital Signal Processor,DSP)、专用集成电路(Application Specific Integrated Circuit,ASIC)、现场可编程门阵列(Field Programmable Gate Array,FPGA)或者其他可编程逻辑器件、分立门或者晶体管逻辑器件、分立硬件组件等等。
在本申请的一些实施例中,该存储器33包括但不限于:
易失性存储器和/或非易失性存储器。其中,非易失性存储器可以是只读存储器(Read-Only Memory,ROM)、可编程只读存储器(Programmable ROM,PROM)、可擦除可编程只读存储器(Erasable PROM,EPROM)、电可擦除可编程只读存储器(Electrically EPROM,EEPROM)或闪存。易失性存储器可以是随机存取存储器(Random Access Memory,RAM),其用作外部高速缓存。通过示例性但不是限制性说明,许多形式的RAM可用,例如静态随机存取存储器(Static RAM,SRAM)、动态随机存取存储器(Dynamic RAM,DRAM)、同步动态随机存取存储器(Synchronous DRAM,SDRAM)、双倍数据速率同步动态随机存取存储器(Double Data Rate SDRAM,DDR SDRAM)、增强型同步动态随机存取存储器(Enhanced SDRAM,ESDRAM)、同步连接动态随机存取存储器(synch link DRAM,SLDRAM)和直接内存总线随机存取存储器(Direct Rambus RAM,DR RAM)。
在本申请的一些实施例中,该计算机程序34可以被分割成一个或多个单元,该一个或者多个单元被存储在该存储器33中,并由该处理器32执行,以完成本申请提供的方法。该一个或多个单元可以是能够完成特定功能的一系列计算机程序指令段,该指令段用于描述该计算机程序34在该电子设备30中的执行过程。
如图9所示,该电子设备30还可包括:
收发器33,该收发器33可连接至该处理器32或存储器33。
其中,处理器32可以控制该收发器33与其他设备进行通信,具体地,可以向其他设备发送信息或数据,或接收其他设备发送的信息或数据。收发器33可以包括发射机和接收机。收发器33还可以进一步包括天线,天线的数量可以为一个或多个。
应当理解,该电子设备30中的各个组件通过总线系统相连,其中,总线系统除包括数据总线之外,还包括电源总线、控制总线和状态信号总线。
图10是本申请实施例提供的点云编解码系统的示意性框图。
如图10所示,该点云编解码系统40可包括:点云编码器41和点云解码器42,其中点云编码器41用于执行本申请实施例涉及的点云编码方法,点云解码器42用于执行本申请实施例涉及的点云解码方法。
本申请还提供了一种码流,该码流是根据上述编码方法生成的。
本申请还提供了一种计算机存储介质,其上存储有计算机程序,该计算机程序被计算机执行时使得该计算机能够执行上述方法实施例的方法。或者说,本申请实施例还提供一种包含指令的计算机程序产品,该指令被计算机执行时使得计算机执行上述方法实施例的方法。
当使用软件实现时,可以全部或部分地以计算机程序产品的形式实现。该计算机程序产品包括一个或多个计算机指令。在计算机上加载和执行该计算机程序指令时,全部或部分地产生按照本申请实施例所述的流程或功能。该计算机可以是通用计算机、专用计算机、计算机网络、或者其他可编程装置。该计算机指令可以存储在计算机可读存储介质中,或者从一个计算机可读存储介质向另一个计算机可读存储介质传输,例如,该计算机指令可以从一个网站站点、计算机、服务器或数据中心通过有线(例如同轴电缆、光纤、数字用户线(digital subscriber line,DSL))或无线(例如红外、无线、微波等)方式向另一个网站站点、计算机、服务器或数据中心进行传输。该计算机可读存储介质可以是计算机能够存取的任何可用介质或者是包含一个或多个可用介质集成的服务器、数据中心等数据存储设备。该可用介质可以是磁性介质(例如,软盘、硬盘、磁带)、光介质(例如数字视频光盘(digital video disc,DVD))、或者半导体介质(例如固态硬盘(solid state disk,SSD))等。
本领域普通技术人员可以意识到,结合本文中所公开的实施例描述的各示例的单元及算法步骤,能够以电子硬件、或者计算机软件和电子硬件的结合来实现。这些功能究竟以硬件还是软件方式来执行,取决于技术方案的特定应用和设计约束条件。专业技术人员可以对每个特定的应用来使用不同方法来实现所描述的功能,但是这种实现不应认为超出本申请的范围。
在本申请所提供的几个实施例中,应该理解到,所揭露的系统、装置和方法,可以通过其它的方式实现。例如,以上所描述的装置实施例仅仅是示意性的,例如,该单元的划分,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式,例如多个单元或组件可以结合或者可以集成到另一个系统,或一些特征可以忽略,或不执行。另一点,所显示或讨论的相互之间的耦合或直接耦合或通信连接可以是通过一些接口,装置或单元的间接耦合或通信连接,可以是电性,机械或其它的形式。
作为分离部件说明的单元可以是或者也可以不是物理上分开的,作为单元显示的部件可以是或者也可以不是物理单元,即可以位于一个地方,或者也可以分布到多个网络单元上。可以根据实际的需要选择其中的部分或者全部单元来实现本实施例方案的目的。例如,在本申请各个实施例中的各功能单元可以集成在一个处理单元中,也可以是各个单元单独物理存在,也可以两个或两个以上单元集成在一个单元中。
以上内容,仅为本申请的具体实施方式,但本申请的保护范围并不局限于此,任何熟悉本技术领域的技术人员在本申请揭露的技术范围内,可轻易想到变化或替换,都应涵盖在本申请的保护范围之内。因此,本申请的保护范围应以该权利要求的保护范围为准。

Claims (91)

  1. 一种点云解码方法,其特征在于,包括:
    解码点云码流,确定当前解码单元的分类信息和运动向量信息中的至少一个,所述分类信息是基于第一参数确定的,所述运动向量信息是基于第二参数确定的,所述第一参数用于指示分类信息的计算周期,所述第二参数用于指示运动向量信息的计算周期;
    根据所述当前解码单元的分类信息和运动向量信息中的至少一个,对所述当前解码单元进行解码。
  2. 根据权利要求1所述的方法,其特征在于,所述解码点云码流,确定当前解码单元的分类信息和运动向量信息中的至少一个,包括:
    从所述点云码流中,解码出所述当前解码单元的分类信息和运动向量信息中的至少一个。
  3. 根据权利要求1所述的方法,其特征在于,所述解码点云码流,确定当前解码单元的分类信息和运动向量信息中的至少一个,包括:
    从所述点云码流中,解码出所述第一参数和所述第二参数中的至少一个;
    根据所述第一参数,确定所述当前解码单元的分类信息,和/或根据所述第二参数,确定所述当前解码单元的运动向量信息。
  4. 根据权利要求3所述的方法,其特征在于,所述根据所述第一参数,确定所述当前解码单元的分类信息,包括:
    根据所述第一参数,确定所述当前解码单元对应的分类信息计算周期;
    根据所述分类信息计算周期,确定所述当前解码单元的分类信息。
  5. 根据权利要求4所述的方法,其特征在于,所述根据所述分类信息计算周期,确定所述当前解码单元的分类信息,包括:
    若所述当前解码单元为所述分类信息计算周期内的第一个解码单元,则解码所述点云码流,得到所述当前解码单元的分类信息。
  6. 根据权利要求4所述的方法,其特征在于,所述根据所述分类信息计算周期,确定所述当前解码单元的分类信息,包括:
    若所述当前解码单元为所述分类信息计算周期内的非第一个解码单元,则根据已解码信息或默认值,确定所述当前解码单元的分类信息。
  7. 根据权利要求3所述的方法,其特征在于,所述第一参数指示每K个解码单元,计算一次分类信息,所述根据所述第一参数,确定所述当前解码单元的分类信息,包括:
    若所述当前解码单元为解码顺序中的第NK个解码单元,则解码所述点云码流,得到所述当前解码单元的分类信息,所述K、N均为正整数。
  8. 根据权利要求7所述的方法,其特征在于,所述方法还包括:
    若所述当前解码单元为解码顺序中的非第NK个解码单元,则根据已解码信息或默认值,确定所述当前解码单元的分类信息。
  9. 根据权利要求6或8所述的方法,其特征在于,所述根据已解码信息,确定所述当前解码单元的分类信息,包括:
    根据M个解码单元的分类信息,确定所述当前解码单元的分类信息,所述M个解码单元为解码顺序中位于所述当前解码单元之前的M个已解码的解码单元,所述M为正整数。
  10. 根据权利要求9所述的方法,其特征在于,所述根据M个解码单元的分类信息,确定所述当前解码单元的分类信息,包括:
    若所述M等于1,则将所述解码顺序中,位于所述当前解码单元之前的一个解码单元的分类信息,确定为所述当前解码单元的分类信息。
  11. 根据权利要求9所述的方法,其特征在于,所述根据M个解码单元的分类信息,确定所述当前解码单元的分类信息,包括:
    若所述M大于1,则对M个解码单元的分类信息进行预设处理,并将处理结果确定为所述当前解码单元的分类信息。
  12. 根据权利要求11所述的方法,其特征在于,所述对M个解码单元的分类信息进行预设处理,并将处理结果确定为所述当前解码单元的分类信息,包括:
    将所述M个解码单元的分类信息的平均值,确定为所述当前解码单元的分类信息。
  13. 根据权利要求3-8、10-12任一项所述的方法,其特征在于,所述分类信息包括第一高度阈值和第二高度阈值中的至少一个,所述第一参数包括第一子参数和第二子参数中的至少一个;
    所述第一子参数用于指示所述第一高度阈值的计算周期,所述第二子参数用于指示所述第二高度阈值的计算周期,所述第一高度阈值和所述第二高度阈值用于所述当前解码单元中点云的分类。
  14. 根据权利要求3所述的方法,其特征在于,所述根据所述第二参数,确定所述当前解码单元的运动向量信息,包括:
    根据所述第二参数,确定所述当前解码单元对应的运动向量信息计算周期;
    根据所述运动向量信息计算周期,确定所述当前解码单元的运动向量信息。
  15. 根据权利要求14所述的方法,其特征在于,所述根据所述运动向量信息计算周期,确定所述当前解码单元的运动向量信息,包括:
    若所述当前解码单元为所述运动向量信息计算周期内的第一个解码单元,则解码所述点云码流,得到所述当前解码单元的运动向量信息。
  16. 根据权利要求14所述的方法,其特征在于,所述根据所述运动向量信息计算周期,确定所述当前解码单元的运动向量信息,包括:
    若所述当前解码单元为所述运动向量信息计算周期内的非第一个解码单元,则根据已解码信息或默认值,确定所述当前解码单元的运动向量信息。
  17. 根据权利要求3所述的方法,其特征在于,所述第二参数指示每隔R个解码单元,计算一次运动向量信息,所述根据所述第二参数,确定所述当前解码单元的运动向量信息,包括:
    若所述当前解码单元为解码顺序中的第NR个解码单元,则解码所述点云码流,得到所述当前解码单元的运动向量信息,所述R、N均为正整数。
  18. 根据权利要求17所述的方法,其特征在于,所述方法还包括:
    若所述当前解码单元为解码顺序中的非第NR个解码单元,则根据已解码信息或默认值,确定所述当前解码单元的运动向量信息。
  19. 根据权利要求16或18所述的方法,其特征在于,所述根据已解码信息,确定所述当前解码单元的运动向量信息,包括:
    根据S个解码单元的运动向量信息,确定所述当前解码单元的运动向量信息,所述S个解码单元为解码顺序中位于所述当前解码单元之前的S个已解码的解码单元,所述S为正整数。
  20. 根据权利要求19所述的方法,其特征在于,所述根据所述S个解码单元的运动向量信息,确定所述当前解码单元的运动向量信息,包括:
    若所述S等于1,则将所述解码顺序中,位于所述当前解码单元之前的一个解码单元的运动向量信息,确定为所述当前解码单元的运动向量信息。
  21. 根据权利要求19所述的方法,其特征在于,所述根据所述S个解码单元的运动向量信息,确定所述当前解码单元的运动向量信息,包括:
    若所述S大于1,则对S个解码单元的运动向量信息进行预设处理,并将处理结果确定为所述当前解码单元的运动向量信息。
  22. 根据权利要求21所述的方法,其特征在于,所述对S个解码单元的运动向量信息进行预设处理,并将处理结果确定为所述当前解码单元的运动向量信息,包括:
    将所述S个解码单元的运动向量信息的平均值,确定为所述当前解码单元的运动向量信息。
  23. 根据权利要求3所述的方法,其特征在于,所述方法还包括:
    根据所述第一参数,确定分类信息的变化程度;
    根据所述变化程度,确定所述当前解码单元的运动向量信息。
  24. 根据权利要求23所述的方法,其特征在于,所述根据所述第一参数,确定分类信息的变化程度,包括:
    根据所述第一参数,确定所述当前解码单元的分类信息;
    确定所述当前解码单元的分类信息,与所述当前解码单元的参考解码单元的分类信息之间的变化程度。
  25. 根据权利要求23所述的方法,其特征在于,所述根据所述变化程度,确定所述当前解码单元的运动向量信息,包括:
    若所述变化程度小于或等于第一预设值,则将默认值、或者将解码顺序中所述当前解码单元的前一解码单元的运动向量信息,确定为所述当前解码单元的运动向量信息。
  26. 根据权利要求23所述的方法,其特征在于,所述根据所述变化程度,确定所述当前解码单元的运动向量信息,包括:
    若所述变化程度大于第一预设值,则解码所述点云码流,得到所述当前解码单元的运动向量信息。
  27. 根据权利要求14-18、20-26任一项所述的方法,其特征在于,所述运动向量信息包括旋转矩阵和偏移向量中的至少一个,所述第二参数包括第三子参数和第四子参数中的至少一个;
    所述第三子参数用于指示所述旋转矩阵的计算周期,所述第四子参数用于指示所述偏移向量的计算周期。
  28. 根据权利要求1所述的方法,其特征在于,所述方法还包括:
    若所述当前解码单元为解码顺序中的第一个解码单元,则解码所述点云码流,得到所述当前解码单元的分类信息和运动向量信息中的至少一个。
  29. 根据权利要求1-8、10-12、14-18、20-26任一项所述的方法,其特征在于,所述根据所述当前解码单元的分类信息和运动向量信息中的至少一个,对所述当前解码单元进行解码,包括:
    根据所述当前解码单元的分类信息,将所述当前解码单元中的点云划分为P类点云,所述P为大于1的正整数;
    根据所述当前解码单元的运动向量信息,确定所述P类点云对应的运动向量信息;
    根据所述P类点云对应的运动向量信息,对所述当前解码单元进行解码。
  30. 根据权利要求29所述的方法,其特征在于,所述分类信息包括第一高度阈值和第二高度阈值,且所述第一高度阈值大于所述第二高度阈值,所述根据所述当前解码单元的分类信息,将所述当前解码单元中的点云划分为P类点云,包括:
    根据所述第一高度阈值和所述第二高度阈值,将所述当前解码单元中点云划分为P类点云。
  31. 根据权利要求30所述的方法,其特征在于,所述P类点云包括第一类点云和第二类点云,所述根据所述第一高度阈值和所述第二高度阈值,将所述当前解码单元中点云划分为P类点云,包括:
    将所述当前解码单元中高度值,小于或等于所述第一高度阈值且大于或等于所述第二高度阈值的点云,划分为所述第一类点云;
    将所述当前解码单元中高度值,大于所述第一高度阈值或者小于所述第二高度阈值的点云,划分为所述第二类点云。
  32. 根据权利要求29所述的方法,其特征在于,所述根据所述P类点云的运动向量信息,对所述当前解码单元进行解码,包括:
    确定所述当前解码单元的参考解码单元;
    根据所述P类点云的运动向量信息,对所述参考解码单元进行运动补偿,得到所述当前解码单元的预测信息;
    根据所述预测信息,对所述当前解码单元的几何信息和属性信息中的至少一个进行解码。
  33. 根据权利要求1-8、10-12、14-18、20-26任一项所述的方法,其特征在于,所述当前解码单元为当前点云帧,或者为所述当前点云帧的一空间区域。
  34. 根据权利要求3-8、10-12、14-18、20-26任一项所述的方法,其特征在于,所述从所述点云码流中,解码出所述第一参数和所述第二参数中的至少一个,包括:
    解码序列头参数集,得到所述第一参数和所述第二参数中的至少一个。
  35. 根据权利要求34所述的方法,其特征在于,所述第一参数用于指示间隔多个点云帧,计算一次分类信息;和/或,
    所述第二参数用于指示间隔多个点云帧,计算一次运动向量信息。
  36. 根据权利要求3-8、10-12、14-18、20-26任一项所述的方法,其特征在于,所述当前解码单元为当前点云片,所述从所述点云码流中,解码出所述第一参数和所述第二参数中的至少一个,包括:
    解码点云片头信息,得到所述第一参数和所述第二参数中的至少一个。
  37. 根据权利要求36所述的方法,其特征在于,所述第一参数用于指示对于点云帧中的第i个点云片,间隔多个点云帧,计算一次所述第i个点云片的分类信息,所述i为正整数;和/或,
    所述第二参数用于指示对于点云帧中的第i个点云片,间隔多个点云帧,计算一次所述第i个点云片的运动向量信息。
  38. 根据权利要求36所述的方法,其特征在于,所述第一参数用于指示一个点云帧中间隔多个点云片,计算一次分类信息;和/或,
    所述第二参数用于指示一个点云帧中间隔多个点云片,计算一次运动向量信息。
  39. 根据权利要求3-8、10-12、14-18、20-26任一项所述的方法,其特征在于,所述从所述点云码流中,解码出所述第一参数和所述第二参数中的至少一个,包括:
    解码所述点云码流,得到第一标识,所述第一标识用于指示是否进行帧间预测解码;
    若所述第一标识指示进行帧间预测解码,则解码所述点云码流,得到所述第一参数和所述第二参数中的至少一个。
  40. 一种点云编码方法,其特征在于,包括:
    确定第一参数和第二参数中的至少一个,所述第一参数用于指示分类信息的计算周期,所述第二参数用于指示运动向量信息的计算周期;
    根据所述第一参数和所述第二参数中的至少一个,确定当前编码单元的分类信息和运动向量信息中的至少一个;
    根据所述当前编码单元的分类信息和运动向量信息中的至少一个,对所述当前编码单元进行编码。
  41. 根据权利要求40所述的方法,其特征在于,所述根据所述第一参数和所述第二参数中的至少一个,确定当前编码单元的分类信息和运动向量信息中的至少一个,包括:
    根据所述第一参数,确定所述当前编码单元的分类信息,和/或根据所述第二参数,确定所述当前编码单元的运动向量信息。
  42. 根据权利要求41所述的方法,其特征在于,所述根据所述第一参数,确定所述当前编码单元的分类信息,包括:
    根据所述第一参数,确定所述当前编码单元对应的分类信息计算周期;
    根据所述分类信息计算周期,确定所述当前编码单元的分类信息。
  43. 根据权利要求42所述的方法,其特征在于,所述根据所述分类信息计算周期,确定所述当前编码单元的分类信息,包括:
    若所述当前编码单元为所述分类信息计算周期内的第一个编码单元,则对所述当前编码单元中点云的类别进行识别,得到所述当前编码单元的分类信息。
  44. 根据权利要求42所述的方法,其特征在于,所述根据所述分类信息计算周期,确定所述当前编码单元的分类信息,包括:
    若所述当前编码单元为所述分类信息计算周期内的非第一个编码单元,则根据已编码信息或默认值,确定所述当前编码单元的分类信息。
  45. 根据权利要求43所述的方法,其特征在于,所述方法还包括:
    将所述第一参数写入点云码流,且若所述当前编码单元为所述分类信息计算周期内的第一个编码单元时,则将所述当前编码单元的分类信息写入所述点云码流。
  46. 根据权利要求44所述的方法,其特征在于,所述方法还包括:
    将所述第一参数写入点云码流,且若所述当前编码单元为所述分类信息计算周期内的非第一个编码单元时,则跳过将所述当前编码单元的分类信息写入所述点云码流。
  47. 根据权利要求42所述的方法,其特征在于,所述第一参数指示每隔K个编码单元,计算一次分类信息,所述根据所述第一参数,确定所述当前编码单元的分类信息,包括:
    若所述当前编码单元为编码顺序中的第NK个编码单元,则对所述当前编码单元中点云的类别进行识别,得到所述当前编码单元的分类信息,所述K、N均为正整数。
  48. 根据权利要求47所述的方法,其特征在于,所述方法还包括:
    若所述当前编码单元为编码顺序中的非第NK个编码单元,则根据已编码信息或默认值,确定所述当前编码单元的分类信息。
  49. 根据权利要求47所述的方法,其特征在于,所述方法还包括:
    将所述第一参数写入点云码流,且若所述当前编码单元为编码顺序中的第NK个编码单元时,则将所述当前编码单元的分类信息写入所述码流。
  50. 根据权利要求48所述的方法,其特征在于,所述方法还包括:
    将所述第一参数写入点云码流,且若所述当前编码单元为编码顺序中的非第NK个编码单元时,则跳过将所述当前编码单元的分类信息写入所述码流。
  51. 根据权利要求41-50任一项所述的方法,其特征在于,所述方法还包括:
    将所述当前编码单元的分类信息写入点云码流,且跳过将所述第一参数写入所述点云码流。
  52. 根据权利要求44或48所述的方法,其特征在于,所述根据已编码信息,确定所述当前编码单元的分类信息,包括:
    根据M个编码单元的分类信息,确定所述当前编码单元的分类信息,所述M个编码单元为编码顺序中位于所述当前编码单元之前的M个已编码的编码单元,所述M为正整数。
  53. 根据权利要求52所述的方法,其特征在于,所述根据所述M个编码单元的分类信息,确定所述当前编码单元的分类信息,包括:
    若所述M等于1,则将所述编码顺序中,位于所述当前编码单元之前的一个编码单元的分类信息,确定为所述当前编码单元的分类信息。
  54. 根据权利要求52所述的方法,其特征在于,所述根据所述M个编码单元的分类信息,确定所述当前编码单元的分类信息,包括:
    若所述M大于1,则对M个编码单元的分类信息进行预设处理,并将处理结果确定为所述当前编码单元的分类信息。
  55. 根据权利要求54所述的方法,其特征在于,所述对M个编码单元的分类信息进行预设处理,并将处理结果确定为所述当前编码单元的分类信息,包括:
    将所述M个编码单元的分类信息的平均值,确定为所述当前编码单元的分类信息。
  56. 根据权利要求42-50任一项所述的方法,其特征在于,所述分类信息包括第一高度阈值和第二高度阈值中的至少一个,所述第一参数包括第一子参数和第二子参数中的至少一个;
    所述第一子参数用于指示所述第一高度阈值的计算周期,所述第二子参数用于指示所述第二高度阈值的计算周期,所述第一高度阈值和所述第二高度阈值用于所述当前编码单元中点云的分类。
  57. 根据权利要求42所述的方法,其特征在于,所述根据所述第二参数,确定所述当前编码单元的运动向量信息,包括:
    根据所述第二参数,确定所述当前编码单元对应的运动向量信息计算周期;
    根据所述运动向量信息计算周期,确定所述当前编码单元的运动向量信息。
  58. 根据权利要求57所述的方法,其特征在于,所述根据所述运动向量信息计算周期,确定所述当前编码单元的运动向量信息,包括:
    若所述当前编码单元为所述运动向量信息计算周期内的第一个编码单元,则根据所述当前编码单元的参考编码单元,确定所述当前编码单元的运动向量信息。
  59. 根据权利要求57所述的方法,其特征在于,所述根据所述运动向量信息计算周期,确定所述当前编码单元的运动向量信息,包括:
    若所述当前编码单元为所述运动向量信息计算周期内的非第一个编码单元,则根据已编码信息或默认值,确定所述当前编码单元的运动向量信息。
  60. 根据权利要求58所述的方法,其特征在于,所述方法还包括:
    将所述第二参数写入点云码流,且若所述当前编码单元为所述运动向量信息计算周期内的第一个编码单元时,则将所述当前编码单元的运动向量信息写入所述点云码流。
  61. 根据权利要求59所述的方法,其特征在于,所述方法还包括:
    将所述第二参数写入点云码流,且若所述当前编码单元为所述运动向量信息计算周期内的非第一个编码单元时,则跳过将所述当前编码单元的运动向量信息写入所述点云码流。
  62. 根据权利要求42所述的方法,其特征在于,所述第二参数指示每隔R个编码单元,计算一次运动向量信息,所述根据所述第二参数,确定所述当前编码单元的运动向量信息,包括:
    若所述当前编码单元为编码顺序中的第NR个编码单元,则根据所述当前编码单元的参考编码单元,确定所述当前编码单元的运动向量信息,所述R、N均为正整数。
  63. 根据权利要求62所述的方法,其特征在于,所述方法还包括:
    若所述当前编码单元为编码顺序中的非第NR个编码单元,则根据已编码信息或默认值,确定所述当前编码单元的运动向量信息。
  64. 根据权利要求62所述的方法,其特征在于,所述方法还包括:
    将所述第二参数写入点云码流,且若所述当前编码单元为编码顺序中的第NR个编码单元时,则将所述当前编码单元的运动向量信息写入所述码流。
  65. 根据权利要求63所述的方法,其特征在于,所述方法还包括:
    将所述第二参数写入点云码流,且若所述当前编码单元为编码顺序中的非第NR个编码单元时,则跳过将所述当前编码单元的运动向量信息写入所述码流。
  66. 根据权利要求57-65任一项所述的方法,其特征在于,所述方法还包括:
    将所述当前编码单元的运动向量信息写入点云码流,且跳过将所述第二参数写入所述点云码流。
  67. 根据权利要求59或63所述的方法,其特征在于,所述根据已编码信息,确定所述当前编码单元的运动向量信息,包括:
    根据S个编码单元的运动向量信息,确定所述当前编码单元的运动向量信息,所述S个编码单元为编码顺序中位于所述当前编码单元之前的S个已编码的编码单元,所述S为正整数。
  68. 根据权利要求67所述的方法,其特征在于,所述根据所述S个编码单元的运动向量信息,确定所述当前编码单元的运动向量信息,包括:
    若所述S等于1,则将所述编码顺序中,位于所述当前编码单元之前的一个编码单元的运动向量信息,确定为所述当前编码单元的运动向量信息。
  69. 根据权利要求67所述的方法,其特征在于,所述根据所述S个编码单元的运动向量信息,确定所述当前编码单元的运动向量信息,包括:
    若所述S大于1,则对S个编码单元的运动向量信息进行预设处理,并将处理结果确定为所述当前编码单元的运动向量信息。
  70. 根据权利要求69所述的方法,其特征在于,所述对S个编码单元的运动向量信息进行预设处理,并将处理结果确定为所述当前编码单元的运动向量信息,包括:
    将所述S个编码单元的运动向量信息的平均值,确定为所述当前编码单元的运动向量信息。
  71. 根据权利要求42所述的方法,其特征在于,所述方法还包括:
    根据所述第一参数,确定不同编码单元的分类信息的变化程度;
    根据所述变化程度,确定所述当前编码单元的运动向量信息。
  72. 根据权利要求71所述的方法,其特征在于,所述根据所述第一参数,确定不同编码单元的分类信息的变化程度,包括:
    根据所述第一参数,确定所述当前编码单元的分类信息,
    确定所述当前编码单元的分类信息,与所述当前编码单元的参考编码单元的分类信息之间的变化程度。
  73. 根据权利要求72所述的方法,其特征在于,所述根据所述变化程度,确定所述当前编码单元的运动向量信息,包括:
    若所述变化程度小于或等于第一预设值,则将默认值、或者将编码顺序中所述当前编码单元的前一编码单元的运动向量信息,确定为所述当前编码单元的运动向量信息。
  74. 根据权利要求72所述的方法,其特征在于,所述根据所述变化程度,确定所述当前编码单元的运动向量信息,包括:
    若所述变化程度大于第一预设值,则根据所述当前编码单元的参考编码单元,确定所述当前编码单元的运动向量信息。
  75. 根据权利要求71所述的方法,其特征在于,所述方法还包括:
    将所述第一参数写入点云码流,且跳过将所述第二参数写入所述点云码流。
  76. 根据权利要求57-65任一项所述的方法,其特征在于,所述运动向量信息包括旋转矩阵和偏移向量中的至少一个,所述第二参数包括第三子参数和第四子参数中的至少一个;
    所述第三子参数用于指示所述旋转矩阵的计算周期,所述第四子参数用于指示所述偏移向量的计算周期。
  77. 根据权利要求40-50、57-65任一项所述的方法,其特征在于,所述根据所述当前编码单元的分类信息和运动向量信息中的至少一个,对所述当前编码单元进行编码,包括:
    根据所述当前编码单元的分类信息,将所述当前编码单元中的点云划分为P类点云,所述P为大于1的正整数;
    根据所述当前编码单元的运动向量信息,确定所述P类点云对应的运动向量信息;
    根据所述P类点云对应的运动向量信息,对所述当前编码单元进行编码。
  78. 根据权利要求77所述的方法,其特征在于,所述分类信息包括第一高度阈值和第二高度阈值,且所述第一高度阈值大于所述第二高度阈值,所述根据所述当前编码单元的分类信息,将所述当前编码单元中的点云划分为P类点云,包括:
    根据所述第一高度阈值和所述第二高度阈值,将所述当前编码单元中点云划分为P类点云。
  79. 根据权利要求78所述的方法,其特征在于,所述P类点云包括第一类点云和第二类点云,所述根据所述第一高度阈值和所述第二高度阈值,将所述当前编码单元中点云划分为P类点云,包括:
    将所述当前编码单元中高度值,小于或等于所述第一高度阈值且大于或等于所述第二高度阈值的点云,划分为所述第一类点云;
    将所述当前编码单元中高度值,大于所述第一高度阈值或者小于所述第二高度阈值的点云,划分为所述第二类点云。
  80. 根据权利要求77所述的方法,其特征在于,所述根据所述P类点云的运动向量信息,对所述当前编码单元进行编码,包括:
    根据所述P类点云的运动向量信息,对所述当前编码单元的参考编码单元进行运动补偿,得到所述当前编码单元的预测信息;
    根据所述预测信息,对所述当前编码单元的几何信息和属性信息中的至少一个进行编码。
  81. 根据权利要求40-50、57-65任一项所述的方法,其特征在于,所述当前编码单元为当前点云帧,或者为所述当前点云帧的一空间区域。
  82. 根据权利要求40-50、57-65任一项所述的方法,其特征在于,所述方法还包括:
    将所述第一参数和所述第二参数中的至少一个,写入序列头参数集中。
  83. 根据权利要求82所述的方法,其特征在于,所述第一参数用于指示间隔多个点云帧,计算一次分类信息;和/或,
    所述第二参数用于指示间隔多个点云帧,计算一次运动向量信息。
  84. 根据权利要求40-50、57-65任一项所述的方法,其特征在于,所述当前编码单元为当前点云片,所述方法还包括:
    将所述第一参数和所述第二参数中的至少一个,写入点云片头信息中。
  85. 根据权利要求84所述的方法,其特征在于,所述第一参数用于指示对于点云帧中的第i个点云片,间隔多个点云帧,计算一次所述第i个点云片的分类信息,所述i为正整数;和/或,
    所述第二参数用于指示对于点云帧中的第i个点云片,间隔多个点云帧,计算一次所述第i个点云片的运动向量信息。
  86. 根据权利要求84所述的方法,其特征在于,所述第一参数用于指示一个点云帧中间隔多个点云片,计算一次分类信息;和/或,
    所述第二参数用于指示一个点云帧中间隔多个点云片,计算一次运动向量信息。
  87. 根据权利要求40-50、57-65任一项所述的方法,其特征在于,所述确定第一参数和第二参数中的至少一个之前,所述方法还包括:
    确定第一标识,所述第一标识用于指示是否进行帧间预测编码;
    所述确定第一参数和第二参数中的至少一个,包括:
    若所述第一标识指示进行帧间预测编码,则确定所述第一参数和所述第二参数中的至少一个。
  88. 一种点云解码装置,其特征在于,包括:
    确定单元,用于解码点云码流,确定当前解码单元的分类信息和运动向量信息中的至少一个,所述分类信息是基于第一参数确定的,所述运动向量信息是基于第二参数确定的,所述第一参数用于指示分类信息的计算周期,所述第二参数用于指示运动向量信息的计算周期;
    解码单元,用于根据所述当前解码单元的分类信息和运动向量信息中的至少一个,对所述当前解码单元进行解码。
  89. 一种点云编码装置,其特征在于,包括:
    第一确定单元,用于确定第一参数和第二参数中的至少一个,所述第一参数用于指示分类信息的计算周期,所述第二参数用于指示运动向量信息的计算周期;
    第二确定单元,用于根据所述第一参数和所述第二参数中的至少一个,确定当前编码单元的分类信息和运动向量信息中的至少一个;
    编码单元,用于根据所述当前编码单元的分类信息和运动向量信息中的至少一个,对所述当前编码单元进行编码。
  90. 一种电子设备,其特征在于,包括:处理器和存储器;
    所述存储器用于存储计算机程序;
    所述处理器用于调用并运行所述存储器中存储的计算机程序,以执行如权利要求1至39或40至87任一项所述的方法。
  91. 一种计算机可读存储介质,其特征在于,用于存储计算机程序,所述计算机程序使得计算机执行如权利要求1至39或40至87任一项所述的方法。