WO2021115466A1 - Encoding method, decoding method, storage medium and device for point cloud data - Google Patents

Encoding method, decoding method, storage medium and device for point cloud data Download PDF

Info

Publication number
WO2021115466A1
WO2021115466A1 · PCT/CN2020/135982 · CN2020135982W
Authority
WO
WIPO (PCT)
Prior art keywords
point cloud
cloud data
dimensional
sequence
sequence group
Prior art date
Application number
PCT/CN2020/135982
Other languages
English (en)
French (fr)
Inventor
李革
何盈燊
王静
邵薏婷
高文
Original Assignee
鹏城实验室
北京大学深圳研究生院
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 鹏城实验室 (Peng Cheng Laboratory) and 北京大学深圳研究生院 (Peking University Shenzhen Graduate School)
Publication of WO2021115466A1 publication Critical patent/WO2021115466A1/zh

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N21/4402Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving reformatting operations of video signals for household redistribution, storage or real-time display
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T9/00Image coding
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/234Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
    • H04N21/2343Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs

Definitions

  • the present invention relates to the technical field of point cloud processing, in particular to a point cloud data encoding method, decoding method, storage medium and equipment.
  • Three-dimensional point cloud is an important form of digital representation of the real world. With the rapid development of 3D scanning equipment (laser, radar, etc.), the accuracy and resolution of point clouds are getting higher and higher. High-precision point clouds are widely used in the construction of urban digital maps, and play a technical support role in many popular researches such as smart cities, unmanned driving, and cultural relics protection.
  • the point cloud data is obtained by sampling the surface of the object by a three-dimensional scanning device.
  • the number of points in a frame of point cloud data is generally on the order of one million, and each point can contain position information as well as attribute information such as color and reflectance, so the amount of data is very large.
  • the point cloud encoding techniques in common use today generally encode the three-dimensional point cloud data directly, which suffers from the large data volume involved in encoding three-dimensional point cloud data and therefore results in low encoding efficiency.
  • the technical problem to be solved by the present invention is to provide, in view of the shortcomings of the prior art, a point cloud data encoding method, decoding method, storage medium and terminal device.
  • a method for encoding point cloud data, comprising: generating an occupancy map according to the point cloud data to be encoded; generating a one-dimensional sequence group according to the point cloud data to be encoded, the occupancy map and a preset scanning order; and encoding the occupancy map and the one-dimensional sequence group to obtain the code stream corresponding to the point cloud data.
  • the one-dimensional sequence group is generated according to the occupancy map and a preset scanning order, wherein the preset scanning order specifically includes:
  • the preset scanning order is a Morton order; or
  • the preset scanning order is a coordinate-increasing scanning order.
  • the one-dimensional sequence group includes:
  • a one-dimensional depth sequence; and/or
  • a one-dimensional color sequence; and/or
  • a one-dimensional reflectance sequence.
  • the encoding method for point cloud data, wherein the method further includes:
  • dividing the longest side of the point cloud data set with the shortest side as the alignment unit; or
  • for given division numbers n, m and k in the three dimensions, dividing the x, y and z axes of the point cloud into n, m and k parts respectively, producing n*m*k blocks in total; or
  • dividing the point cloud into blocks of a specified cuboid size.
  • when the point cloud data to be encoded is two-dimensional point cloud data, generating an occupancy map according to the point cloud data to be encoded specifically includes:
  • selecting the pixels in the two-dimensional point cloud data that have corresponding data;
  • representing each selected pixel with a first preset number and representing the unselected pixels in the two-dimensional point cloud data with a second preset number, so as to generate the occupancy map corresponding to the two-dimensional image layer.
  • when the point cloud data to be encoded is three-dimensional point cloud data, generating an occupancy map according to the point cloud data to be encoded specifically includes:
  • for each data point in the point cloud data to be encoded, converting the coordinate information of the data point into spherical coordinate information;
  • mapping each piece of converted spherical coordinate information into two-dimensional point cloud data, and generating an occupancy map according to the two-dimensional point cloud data.
  • the generating a placeholder map according to the two-dimensional point cloud data specifically includes:
  • a two-dimensional image layer is generated according to the two-dimensional point cloud data, and a placeholder map is generated according to the two-dimensional image layer.
  • the two-dimensional image layer includes several two-dimensional image layers; each two-dimensional image layer corresponds to a placeholder map.
  • encoding the occupancy map and the one-dimensional sequence group to obtain the code stream corresponding to the point cloud data specifically includes: for each component of each one-dimensional sequence in the one-dimensional sequence group, taking the previous component as the predicted value of the component; calculating the residual of the component according to the predicted value and the component, and replacing the component with the residual to update the one-dimensional sequence group; and encoding the updated one-dimensional sequence group to obtain the code stream corresponding to the point cloud data.
  • a method for decoding point cloud data includes: decoding a code stream to obtain the occupancy map and one-dimensional sequence group corresponding to the code stream; and generating point cloud data according to the occupancy map and the one-dimensional sequence group.
  • the one-dimensional sequence group includes:
  • a one-dimensional depth sequence; and/or
  • a one-dimensional color sequence; and/or
  • a one-dimensional reflectance sequence.
  • the generating point cloud data according to the occupancy map and the one-dimensional sequence group specifically includes: determining the two-dimensional image layer corresponding to the one-dimensional sequence group according to the occupancy map, the preset scanning order and the one-dimensional sequence group; and determining the point cloud data corresponding to the code stream according to the two-dimensional image layer.
  • the preset scanning order includes: a coordinate increment scanning order or a two-dimensional Morton scanning order.
  • the determining the point cloud data corresponding to the code stream according to the two-dimensional image layer specifically includes: mapping the coordinate information of each point in the two-dimensional image layer into spherical coordinate information; and converting the spherical coordinate information into coordinate information of the three-dimensional point cloud data.
  • the decoding according to the code stream to obtain the one-dimensional sequence group corresponding to the code stream specifically includes: obtaining the one-dimensional sequence group directly by decoding the code stream; or obtaining a one-dimensional candidate sequence by decoding the code stream and, starting from the second value of the one-dimensional candidate sequence, taking the reconstructed value of the preceding point as the predicted value of each point and superimposing the value of each point on its corresponding predicted value as the value of the current point, to obtain the one-dimensional sequence group.
  • a computer-readable storage medium stores one or more programs, and the one or more programs can be executed by one or more processors to implement the steps in the point cloud data encoding method described above, or to implement the steps in any of the point cloud data decoding methods described above.
  • a terminal device which includes a processor, a memory, and a communication bus; the memory stores a computer readable program that can be executed by the processor;
  • the communication bus realizes connection and communication between the processor and the memory; when the processor executes the computer-readable program, the steps in the point cloud data encoding method or decoding method described above are implemented.
  • the present invention provides a point cloud data encoding method, decoding method, storage medium and terminal equipment.
  • the encoding method generates an occupancy map based on the point cloud data to be encoded;
  • a one-dimensional sequence group is generated according to the point cloud data to be encoded, the occupancy map and a preset scanning order; the occupancy map and the one-dimensional sequence group are encoded to obtain a code stream corresponding to the point cloud data.
  • the invention reduces the amount of data during coding by converting the three-dimensional point cloud data into a one-dimensional sequence group and encoding the one-dimensional sequence group, thereby improving the coding efficiency of the point cloud data.
  • Fig. 1 is a flowchart of a point cloud data encoding method provided by the present invention.
  • FIG. 2 is a schematic diagram of the process of converting a two-dimensional image layer into a one-dimensional sequence in the point cloud data encoding method provided by the present invention.
  • FIG. 3 is a schematic diagram of the process of converting a one-dimensional sequence into a two-dimensional image layer in the point cloud data encoding method provided by the present invention.
  • FIG. 4 is a schematic diagram of data points in a three-dimensional Cartesian coordinate system in the point cloud data encoding method provided by the present invention.
  • Fig. 5 is a schematic diagram of data points in a spherical coordinate system in the point cloud data coding method provided by the present invention.
  • FIG. 6 is a schematic diagram of a Morton sequence in the point cloud data encoding method provided by the present invention.
  • FIG. 7 is a schematic diagram of another Morton sequence in the point cloud data encoding method provided by the present invention.
  • FIG. 8 is a schematic diagram of the process of mapping two-dimensional point cloud data into several two-dimensional image layers in the method for encoding point cloud data provided by the present invention.
  • Fig. 9 is a flowchart of a point cloud data decoding method provided by the present invention.
  • Fig. 10 is a structural principle diagram of a terminal device provided by the present invention.
  • the present invention provides a point cloud data encoding method, decoding method, storage medium and terminal equipment.
  • the present invention will be further described in detail below with reference to the drawings and embodiments. It should be understood that the specific embodiments described here are only used to explain the present invention, but not used to limit the present invention.
  • This embodiment provides a method for encoding point cloud data.
  • the method can be executed by an encoding device, which can be implemented by software, and applied to smart terminals such as PCs, servers, clouds, tablets, or personal digital assistants.
  • the point cloud data encoding method provided in this embodiment specifically includes:
  • the point cloud data to be encoded may be a frame of point cloud data scanned by a three-dimensional scanning device, or a frame of point cloud data sent by an external device, or a frame of point cloud data obtained through the cloud.
  • the point cloud data can also be two-dimensional point cloud data scanned by a two-dimensional scanning device.
  • the data point may include position information and attribute information.
  • the position information may be expressed as coordinate information of the data point, and the attribute information may include color information and/or Reflectance information, etc., where the attribute information of each data point is bound and stored with the location information of the data point.
  • the point cloud data may be a radar point cloud data set obtained by ordinary radar scanning, or a dense point cloud data set. Therefore, when the point cloud data to be encoded is obtained, its data type can be determined: if the data type is a radar point cloud data set, the occupancy map is generated directly from the point cloud data to be encoded; if the data type is a dense point cloud data set, the point cloud data can be divided into several point cloud data blocks, and the step of generating an occupancy map from the point cloud data to be encoded is performed for each point cloud data block.
  • each point cloud data block can then be treated as a separate piece of point cloud data to be encoded, and is processed in the same way as point cloud data that is encoded directly.
  • the division method may be preset, and when the point cloud data to be encoded is acquired, it can be divided directly according to the preset division method to obtain several point cloud data blocks, each of which is used as a piece of point cloud data to be encoded for the step of generating an occupancy map based on the point cloud data to be encoded.
  • the division method may be to divide along the longest side of the point cloud data to be encoded with the shortest side as the alignment unit, so as to divide the point cloud data to be encoded into several point cloud data blocks; for example, if the long side of the point cloud data is 4096 and the short side is 1024, the point cloud is split evenly along the long side into 4096/1024 = 4 blocks of length 1024, and if the long side is not evenly divisible by the short side, the remainder is added to the last point cloud data block obtained by the division.
  • the division method may also be that, taking a given point cloud data block size (for example, 1000*1000*1000, etc.) as the unit, the point cloud data to be encoded is divided into several point cloud data blocks, where the fixed-size point cloud data blocks can be cuboids or cubes, etc.; for example, if the length, width and height of the point cloud data to be encoded are 2048*2048*2048 and the given block size is 1024*1024*1024, the data is divided into 2*2*2 = 8 cubes, and where the length, width or height is not evenly divisible, the remainder that cannot be divided evenly is added to the corresponding last point cloud data block.
  • when the point cloud data to be encoded is three-dimensional point cloud data, generating an occupancy map according to the point cloud data to be encoded specifically includes:
  • S21. For each data point in the point cloud data to be encoded, convert the coordinate information of the data point into spherical coordinate information.
  • S22. Map each piece of converted spherical coordinate information into two-dimensional point cloud data, and generate an occupancy map according to the two-dimensional point cloud data.
  • the point cloud data to be encoded includes several data points, and each data point can be represented as a three-dimensional coordinate point.
  • for each data point in the point cloud data to be encoded, the coordinate information of the data point is converted into spherical coordinate information; that is, a mapping relationship is established between the three-dimensional Cartesian coordinate system and the spherical coordinate system.
  • through this mapping relationship, a three-dimensional coordinate point in the three-dimensional Cartesian coordinate system is converted into a spherical coordinate point in the spherical coordinate system.
  • before this, the sphere center of the spherical coordinate system needs to be determined; the sphere center can be the coordinate origin of the Cartesian coordinate system, namely the point (0, 0, 0), or the mean of the coordinate information of all data points in the point cloud data to be encoded.
  • when the mean is used as the center of the sphere, the mean is subtracted from the coordinate information of each data point to update the coordinate information of the data point, and the updated coordinate information is converted into spherical coordinate information.
  • the point (0, 0, 0) is taken as the center of the spherical coordinates, as an example, to explain the conversion of the coordinate information of a data point into spherical coordinate information (r, θ, φ), where φ is the azimuth angle computed from the point's x and y coordinates.
  • the adjustment of φ can be: when x is positive and y is positive, φ is unchanged; when x is negative and y is positive, φ = π − φ; when x is negative and y is negative, φ = π + φ; when x is positive and y is negative, φ = 2π − φ.
  • the acquired spherical coordinates are mapped to a two-dimensional image to obtain the two-dimensional point cloud data coordinates corresponding to each spherical coordinate.
  • the two-dimensional point cloud data carries depth information
  • a placeholder map is generated according to the coordinates of the two-dimensional point cloud data.
  • the point cloud data to be encoded carries attribute information
  • the two-dimensional point cloud data carries attribute information, where the attribute information may be color information and/or reflectance information.
  • the process of mapping the converted spherical coordinate information into two-dimensional point cloud data may specifically be: for each spherical coordinate (r, θ, φ), assume the coordinates of the corresponding two-dimensional point cloud data can be expressed as (x1, y1, z1), where x1 and y1 are the abscissa and ordinate of the two-dimensional point cloud data in a two-dimensional Cartesian coordinate system, and z1 is the depth information corresponding to the two-dimensional point cloud data.
  • the generating a placeholder map according to the two-dimensional point cloud data specifically includes:
  • a two-dimensional image layer is generated according to the two-dimensional point cloud data, and a placeholder map is generated according to the two-dimensional image layer.
  • the two-dimensional image layer is a two-dimensional image obtained by mapping two-dimensional point cloud data to a two-dimensional image layer, wherein each two-dimensional point cloud data is mapped to a pixel in the two-dimensional image layer,
  • the abscissa of the two-dimensional point data is the abscissa of the corresponding pixel
  • the ordinate of the two-dimensional point cloud data is the ordinate of the corresponding pixel
  • one pixel can correspond to multiple pieces of two-dimensional point cloud data; that is, there may be several pieces of two-dimensional point cloud data whose abscissas are the same and whose ordinates are the same, while the depth information of each of them is different.
  • therefore, when generating a two-dimensional image layer based on the two-dimensional point cloud data, the two-dimensional point cloud data can be mapped to several two-dimensional image layers.
  • the process of generating a placeholder map according to the two-dimensional image layer may be: for each two-dimensional image layer, generating according to the two-dimensional image layer The placeholder map corresponding to the two-dimensional image layer.
  • the two-dimensional image layer includes several two-dimensional image layers
  • the two-dimensional point cloud data contained in each two-dimensional image layer differ in at least one of the abscissa and the ordinate, and each piece of two-dimensional point cloud data is mapped to exactly one two-dimensional image layer.
  • several two-dimensional image layers can be obtained by mapping according to a preset mapping rule.
  • the mapping rule may be to first map the two-dimensional point cloud data to a two-dimensional image and record the two-dimensional point cloud data corresponding to each pixel in the two-dimensional image, so as to obtain the two-dimensional point cloud data set corresponding to each pixel; one piece of two-dimensional point cloud data is then selected from each set and mapped into the first image layer, another piece is selected from each set and mapped into the second image layer, and so on, until a preset number of image layers is reached or no unselected data remains in any set, with any remaining unselected data discarded.
  • the selection method for selecting two-dimensional point cloud data from a two-dimensional point cloud data set can be preset, for example, selection in ascending order of depth information, selection in descending order of depth information, or random selection, etc.
  • for example, the two-dimensional point cloud data sets include a two-dimensional point cloud data set A and a two-dimensional point cloud data set B, where set A includes two-dimensional point cloud data a(xa, ya, za) and set B includes two-dimensional point cloud data b1(xb, yb, zb1) and b2(xb, yb, zb2).
  • the abscissa and ordinate of b1 and b2 are the same and their depth information is different, with zb2 > zb1; then, if the points are selected in descending order of depth information, a and b2 compose the first image and b1 composes the second image; if the points are selected in ascending order of depth information, a and b1 compose the first image and b2 composes the second image.
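  • A small illustrative sketch (not from the patent; the helper name and its layer-assignment rule are assumptions) of how points sharing a pixel can be spread over successive image layers by depth, matching the a, b1, b2 example above:

```python
from collections import defaultdict

def split_into_layers(points_2d, max_layers=2, descending=True):
    """points_2d: iterable of (x1, y1, z1).  Points that share the same
    (x1, y1) pixel are spread across successive layers, one per layer;
    points beyond max_layers are discarded, as described above."""
    per_pixel = defaultdict(list)
    for x1, y1, z1 in points_2d:
        per_pixel[(x1, y1)].append(z1)

    layers = [dict() for _ in range(max_layers)]
    for pixel, depths in per_pixel.items():
        depths.sort(reverse=descending)           # pick largest depth first
        for layer, z1 in zip(layers, depths):     # extras are dropped
            layer[pixel] = z1
    return layers

# a=(0,0,5); b1=(1,1,2) and b2=(1,1,7) share a pixel, with z_b2 > z_b1.
layers = split_into_layers([(0, 0, 5), (1, 1, 2), (1, 1, 7)])
print(layers[0])  # {(0, 0): 5, (1, 1): 7}  -> a and b2 form the first layer
print(layers[1])  # {(1, 1): 2}             -> b1 forms the second layer
```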
  • the point cloud data to be encoded when the point cloud data to be encoded is two-dimensional point cloud data, the two-dimensional image layer can be determined according to the two-dimensional point cloud data. Therefore, generating a placeholder map based on the point cloud data to be encoded may generate a placeholder map for a two-dimensional image layer converted from the point cloud data to be encoded. Wherein, said generating a placeholder map according to the point cloud data to be encoded specifically includes:
  • selecting the pixels in the two-dimensional point cloud data that have corresponding data; representing each selected pixel with a first preset number and representing the unselected pixels in the two-dimensional point cloud data with a second preset number, so as to generate the occupancy map corresponding to the two-dimensional image layer.
  • the occupancy map is generated according to the correspondence between each pixel in the two-dimensional image layer and the two-dimensional point cloud data, and is used to represent the distribution of the two-dimensional point cloud data corresponding to each two-dimensional image layer; in other words, the occupancy map indicates which pixels in the two-dimensional image layer have corresponding two-dimensional point cloud data and which do not. For each pixel in the two-dimensional image layer, it is determined whether the pixel has corresponding two-dimensional point cloud data: when it does, the value of the pixel is set to the first preset value.
  • when the pixel has no corresponding two-dimensional point cloud data, the value of the pixel is set to the second preset value, so as to obtain the occupancy map; in this way, the pixels in the two-dimensional image that have corresponding two-dimensional point cloud data and those that do not can be determined from the occupancy map.
  • the first preset value and the second preset value are both preset; for example, both are encoded with a 1-bit image, where the first preset value is 1 and the second preset value is 0.
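  • A minimal sketch of building such a 1-bit occupancy map from one image layer (illustrative only; the dict-of-pixels input format and NumPy array layout are assumptions):

```python
import numpy as np

def build_occupancy_map(layer, width, height, occupied=1, empty=0):
    """layer: dict mapping (x1, y1) pixel coordinates to depth values.
    Pixels with a corresponding 2D point get the first preset value (1),
    all other pixels get the second preset value (0)."""
    occ = np.full((height, width), empty, dtype=np.uint8)
    for x1, y1 in layer:
        occ[y1, x1] = occupied
    return occ

print(build_occupancy_map({(0, 0): 5, (1, 1): 7}, width=4, height=4))
```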
  • S20 Generate a one-dimensional sequence group according to the point cloud data to be coded, the occupancy map, and the preset scanning order.
  • the one-dimensional sequence group includes at least a depth sequence; the depth sequence is a one-dimensional depth vector, and the one-dimensional depth sequence includes the position information of each piece of two-dimensional point cloud data and the depth information corresponding to each piece of two-dimensional point cloud data.
  • the two-dimensional coordinate information is determined by the ordering of the one-dimensional depth sequence, and the depth information is determined by the component values of the one-dimensional depth sequence.
  • the one-dimensional sequence group may also include a one-dimensional color sequence and/or a one-dimensional reflectance sequence.
  • in the one-dimensional color sequence and/or one-dimensional reflectance sequence, the position of each component in the sequence determines the position information of the corresponding two-dimensional point cloud data.
  • each component value is the attribute information represented by that sequence, where the attribute information can be color information or reflectance information.
  • the two-dimensional point cloud data may include, in addition to two-dimensional coordinate information and depth information, color information and/or reflectance information; for example, the coordinate information of the two-dimensional point cloud data can be expressed as (x1, y1, z1, c1, f1), where c1 can be color information and f1 can be reflectance information. c1 and f1 may or may not be included, and when they are included, the color information of the corresponding data point in the point cloud data to be encoded is taken directly as the c1 value and the reflectance information of the corresponding data point is taken as the f1 value.
  • when the two-dimensional point cloud data contains color information and/or reflectance information, the depth information, color information, and reflectance information are each individually converted into a one-dimensional sequence.
  • accordingly, the one-dimensional sequence group includes a one-dimensional depth sequence, and a one-dimensional color sequence and/or one-dimensional reflectance sequence, where components at the same position in the one-dimensional depth sequence and in the one-dimensional color and/or reflectance sequences correspond to the same piece of two-dimensional point cloud data; only the component values represent different information of that two-dimensional point cloud data, with the one-dimensional depth sequence representing depth information, the one-dimensional color sequence representing color information, and the one-dimensional reflectance sequence representing reflectance information.
  • the preset order is the coordinate-increasing scan order or a Morton order generated according to the two-dimensional point cloud data corresponding to the two-dimensional image layer.
  • for example, as shown in FIG. 2, the two-dimensional image layer is converted into a one-dimensional sequence according to the Morton order and the occupancy map.
  • the Morton order is a Morton code that encodes the pixel coordinates of the corresponding two-dimensional point cloud data in the two-dimensional image layer, and the pixels corresponding to the two-dimensional image layer are sorted according to the Morton code to obtain the Morton order .
  • the Morton codes may be generated with the X coordinate in the low bits and the Y coordinate in the high bits, and the Morton order obtained by sorting the Morton codes then increases first along the X coordinate and then along the Y coordinate, giving for example the Morton order shown in Figure 7.
  • the Morton codes may also be generated with the Y coordinate in the low bits and the X coordinate in the high bits, in which case the sorted point order increases first along the Y coordinate and then along the X coordinate, giving for example the Morton order shown in Figure 8.
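  • To make the ordering concrete, a short sketch (illustrative, not the patent's implementation) that interleaves bits with the X coordinate in the low positions and the Y coordinate in the high positions, then visits a layer's occupied pixels in that Morton order:

```python
def morton_code(x, y, bits=16):
    """Interleave bits: X bits go to the even (low) positions, Y bits to the odd ones."""
    code = 0
    for i in range(bits):
        code |= ((x >> i) & 1) << (2 * i)        # X bit -> even position
        code |= ((y >> i) & 1) << (2 * i + 1)    # Y bit -> odd position
    return code

def depth_sequence_in_morton_order(layer):
    """Emit the depth values of a layer (dict of (x, y) -> depth) as a
    one-dimensional sequence scanned in Morton order."""
    return [layer[p] for p in sorted(layer, key=lambda p: morton_code(*p))]

# In a 2x2 block this order visits (0,0), (1,0), (0,1), (1,1):
# X increases first, then Y, as described above.
print(sorted([(0, 0), (1, 0), (0, 1), (1, 1)], key=lambda p: morton_code(*p)))
```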
  • encoding the occupancy map and the one-dimensional sequence group to obtain the code stream corresponding to the point cloud data specifically includes: for each component of each one-dimensional sequence in the one-dimensional sequence group, taking the previous component as the predicted value of the component; calculating the residual of the component according to the predicted value and the component, and replacing the component with the residual to update the one-dimensional sequence group; and encoding the updated one-dimensional sequence group to obtain the code stream corresponding to the point cloud data.
  • when the preset order is the Morton order generated according to the two-dimensional point cloud data corresponding to the two-dimensional image layer, points that are adjacent in Morton order are also highly similar in three-dimensional space.
  • therefore, before the one-dimensional sequence group is encoded, the component of the previous point can be subtracted from the component of the current point to obtain the residual between the two components.
  • only the residual part is encoded, which reduces the bit-rate consumption.
  • the information corresponding to each component is determined by the one-dimensional sequence it belongs to.
  • when the one-dimensional sequence is a depth sequence, the component represents depth information; when the one-dimensional sequence is a color sequence, the component represents color information; and when the one-dimensional sequence is a reflectance sequence, the component represents reflectance information.
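  • A short sketch of the residual step described above together with the inverse step used by the decoder (illustrative only; integer-valued sequences assumed):

```python
def to_residuals(seq):
    """Keep the first component; replace every later component by its
    difference from the previous one (predecessor prediction)."""
    return [seq[0]] + [cur - prev for prev, cur in zip(seq, seq[1:])]

def from_residuals(res):
    """Decoder-side inverse: from the second value on, add each residual
    to the reconstructed value of the previous point."""
    out = [res[0]]
    for r in res[1:]:
        out.append(out[-1] + r)
    return out

depth_seq = [120, 121, 121, 119, 125]
res = to_residuals(depth_seq)          # [120, 1, 0, -2, 6]
assert from_residuals(res) == depth_seq
```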
  • encoding the one-dimensional sequence group means encoding each one-dimensional sequence in the group and the occupancy map separately, where each one-dimensional sequence in the group is encoded independently and different encoding methods can be used for different sequences.
  • for example, when a PNG image encoding tool is used, the depth sequence is encoded with 16 bits or 8 bits, the color sequence with 8 bits, and the reflectance sequence with 8 bits; when a JPEG image encoding tool (which supports both lossy and lossless modes) is used, the depth sequence is likewise encoded with 16 bits or 8 bits, the color sequence with 8 bits, and the reflectance sequence with 8 bits.
  • video encoding tools such as HEVC can also be used to encode depth sequences, color sequences, and reflectance sequences.
  • each code stream obtained by the encoding can be bound to obtain a code stream corresponding to the one-dimensional sequence group.
  • when the two-dimensional image layer includes several two-dimensional image layers, the one-dimensional sequence group includes several one-dimensional sequence groups.
  • each one-dimensional sequence group and its corresponding occupancy map may be encoded separately to obtain the code stream corresponding to each one-dimensional sequence group, and the code streams corresponding to the one-dimensional sequence groups are bound together to obtain the code stream corresponding to the point cloud data to be encoded.
  • to further evaluate the point cloud data encoding method provided in this embodiment, the method is compared with the existing platform TMC13v7, with results given in Table 1. It can be concluded from Table 1 that, under the lossless-geometry, lossless-attribute condition, the geometry bit rate of this embodiment is only 69.27% and the overall bit rate only 78.14% (relative to TMC13v7).
  • this embodiment provides a point cloud data encoding method.
  • the encoding method includes generating a placeholder map according to the point cloud data to be encoded; according to the point cloud data to be encoded, the placeholder map, and presets A one-dimensional sequence group is generated in the scanning sequence; the placeholder map and the one-dimensional sequence group are encoded to obtain a code stream corresponding to the point cloud data.
  • the present invention reduces the amount of data during encoding and further improves the coding efficiency of the point cloud data.
  • This embodiment provides a point cloud data decoding method, which is used to decode a code stream obtained by the point cloud data encoding method described in the foregoing embodiment. As shown in FIG. 9, the method includes:
  • M10. Decode the code stream to obtain the occupancy map and the one-dimensional sequence group corresponding to the code stream.
  • M20. Generate point cloud data according to the occupancy map and the one-dimensional sequence group.
  • decoding the code stream refers to using an encoding tool corresponding to the code stream to decode the code stream to obtain a one-dimensional sequence group and a placeholder map.
  • the one-dimensional sequence group includes a depth sequence, and a color sequence and/or a reflectance sequence.
  • the description of the depth sequence, the color sequence and/or the reflectance sequence can refer to the description of the first embodiment, which will not be repeated here.
  • the generating point cloud data according to the occupancy map and the one-dimensional sequence group specifically includes: determining the two-dimensional image layer corresponding to the one-dimensional sequence group according to the occupancy map, the preset scanning order and the one-dimensional sequence group; and determining the point cloud data corresponding to the code stream according to the two-dimensional image layer.
  • the occupancy map is carried by the code stream; it is generated during point cloud data encoding and encoded into the code stream, and is used to represent the correspondence between pixels in the two-dimensional image layer and the two-dimensional point cloud data.
  • the two-dimensional image layer corresponding to the one-dimensional sequence group can be determined according to the one-dimensional sequence group, the placeholder image, and the preset scanning sequence, wherein the two-dimensional image layer is generated according to the one-dimensional sequence group.
  • the determination process is an inverse process of generating a one-dimensional sequence group based on a two-dimensional image layer. For details, refer to the description of generating a one-dimensional sequence group based on a two-dimensional image layer, which will not be repeated here.
  • since several one-dimensional sequence groups may be decoded from the code stream, when the one-dimensional sequence group includes several one-dimensional sequence groups, generating the two-dimensional point cloud data according to the one-dimensional sequence groups and determining the point cloud data corresponding to the code stream according to the two-dimensional point cloud data includes: for each one-dimensional sequence group, determining the two-dimensional image layer corresponding to that group according to the group, the preset order, and the occupancy map corresponding to the group; and determining the two-dimensional point cloud data corresponding to the code stream according to all of the determined two-dimensional image layers.
  • the preset order is the preset order described in the first embodiment, that is, the preset order may be the coordinate-increasing scanning order or a Morton order generated according to the two-dimensional point cloud data corresponding to the two-dimensional image layer.
  • when the preset order is the Morton order, a residual operation was performed on the one-dimensional sequences during encoding, so the one-dimensional sequence group is a residual sequence; therefore, when generating point cloud data according to the one-dimensional sequence group, the preset order and the occupancy map, the residual sequence needs to be converted back into a one-dimensional sequence group, for example through the conversion process shown in Figure 3.
  • obtaining the one-dimensional sequence group corresponding to the code stream by decoding the code stream may mean decoding the code stream directly into the one-dimensional sequence group corresponding to the code stream, or decoding the code stream into a candidate sequence group and then generating the one-dimensional sequence group from the candidate sequence group. Therefore, obtaining the one-dimensional sequence group corresponding to the code stream by decoding the code stream specifically includes:
  • when a one-dimensional sequence group is obtained directly by decoding the code stream, that group is used as the one-dimensional sequence group;
  • when a one-dimensional candidate sequence is obtained by decoding the code stream, starting from the second value of the one-dimensional candidate sequence, the reconstructed value of the point preceding each point of the one-dimensional candidate sequence is used as the predicted value, and the value of each point is superimposed on its corresponding predicted value as the value of the current point, to obtain the one-dimensional sequence group.
  • the determining the point cloud data corresponding to the code stream according to the two-dimensional image layer specifically includes:
  • the spherical coordinate information is converted into coordinate information of the three-dimensional point cloud data.
  • the coordinate information (x1, y1, z1) of the two-dimensional point cloud data is mapped back to spherical coordinates (r, θ, φ): the angle φ of the point's spherical coordinates is calculated from the x1 coordinate of the image, the angle θ from the y1 coordinate of the image, and the three-dimensional radius r3D from the z1 of the image.
  • the meaning and mapping relationship of (x1, y1, z1) and (r, θ, φ) are the same as in the first embodiment and are not repeated here.
  • the spherical coordinates are then converted back to Cartesian coordinates (x, y, z): the radius on the xy plane is r2D = cos θ · r3D, |x| = r2D · |cos φ|, |y| = r2D · |sin φ| and |z| = r3D · |sin θ|, and the signs are restored as follows: when φ is greater than 3π/2, x is positive and y is negative; when φ is greater than π and less than or equal to 3π/2, x is negative and y is negative; when φ is greater than π/2 and less than or equal to π, x is negative and y is positive; and when φ is greater than 0 and less than or equal to π/2, x is positive and y is positive.
  • the sign of z can be determined by comparing θ with a preset angle threshold: when θ is greater than the threshold, z is negative; when θ is less than or equal to the threshold, z is positive.
  • when the sphere center of the spherical coordinates used during encoding is not the Cartesian coordinate origin (for example, when the mean of the data points was used), the sphere center can be added back to the coordinate information obtained for each three-dimensional point to obtain the point cloud data.
  • when the point cloud data was divided into blocks, after each point cloud data block is obtained, the obtained point cloud data blocks are merged according to the division method to obtain the point cloud data.
  • this embodiment provides a computer-readable storage medium; the computer-readable storage medium stores one or more programs, and the one or more programs can be executed by one or more processors to implement the steps in the point cloud data encoding method or decoding method described in the foregoing embodiments.
  • the present invention also provides a terminal device, as shown in FIG. 10, which includes at least one processor (processor) 20; a display screen 21; and a memory (memory) 22, It may also include a communication interface (Communications Interface) 23 and a bus 24.
  • the processor 20, the display screen 21, the memory 22, and the communication interface 23 can communicate with each other through the bus 24.
  • the display screen 21 is set to display a user guide interface preset in the initial setting mode.
  • the communication interface 23 can transmit information.
  • the processor 20 can call the logic instructions in the memory 22 to execute the method in the foregoing embodiment.
  • logic instructions in the memory 22 can be implemented in the form of software functional units and when sold or used as independent products, they can be stored in a computer readable storage medium.
  • the memory 22 can be configured to store software programs and computer-executable programs, such as program instructions or modules corresponding to the methods in the embodiments of the present disclosure.
  • the processor 20 executes functional applications and data processing by running software programs, instructions, or modules stored in the memory 22, that is, implements the method in the first embodiment or the second embodiment.
  • the memory 22 may include a program storage area and a data storage area.
  • the program storage area may store an operating system and an application program required by at least one function; the data storage area may store data created according to the use of the terminal device.
  • the memory 22 may include a high-speed random access memory, and may also include a non-volatile memory.
  • for example, a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk or an optical disk, or other media capable of storing program code; the storage medium may also be a transitory storage medium.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)
  • Compression, Expansion, Code Conversion, And Decoders (AREA)

Abstract

The present invention discloses an encoding method, a decoding method, a storage medium and a terminal device for point cloud data. The encoding method comprises: generating an occupancy map according to point cloud data to be encoded; generating a one-dimensional sequence group according to the point cloud data to be encoded, the occupancy map and a preset scanning order; and encoding the occupancy map and the one-dimensional sequence group to obtain a code stream corresponding to the point cloud data. By converting the point cloud data into a one-dimensional sequence group and encoding the one-dimensional sequence group, the present invention reduces the amount of data during encoding and thus improves the encoding efficiency of point cloud data.

Description

Encoding method, decoding method, storage medium and device for point cloud data
TECHNICAL FIELD
The present invention relates to the technical field of point cloud processing, and in particular to an encoding method, a decoding method, a storage medium and a device for point cloud data.
BACKGROUND
A three-dimensional point cloud is an important form of digital representation of the real world. With the rapid development of three-dimensional scanning devices (laser, radar, etc.), the precision and resolution of point clouds keep increasing. High-precision point clouds are widely used in the construction of urban digital maps and provide technical support for many active research areas such as smart cities, autonomous driving and cultural heritage protection.
Point cloud data is obtained by sampling the surface of an object with a three-dimensional scanning device. A frame of point cloud data generally contains on the order of a million points, each of which may carry position information as well as attribute information such as color and reflectance, so the amount of data is very large. The point cloud encoding techniques in common use today generally encode the three-dimensional point cloud data directly, which suffers from the large data volume involved and therefore results in low encoding efficiency for three-dimensional point cloud data.
SUMMARY OF THE INVENTION
The technical problem to be solved by the present invention is to provide, in view of the shortcomings of the prior art, an encoding method, a decoding method, a storage medium and a terminal device for point cloud data.
To solve the above technical problem, the technical solution adopted by the present invention is as follows:
An encoding method for point cloud data, the method comprising:
generating an occupancy map according to point cloud data to be encoded;
generating a one-dimensional sequence group according to the point cloud data to be encoded, the occupancy map and a preset scanning order;
encoding the occupancy map and the one-dimensional sequence group to obtain a code stream corresponding to the point cloud data.
In the encoding method for point cloud data, the one-dimensional sequence group is generated according to the occupancy map and the preset scanning order, wherein the preset scanning order specifically comprises:
the preset scanning order is a Morton order; or
the preset scanning order is a coordinate-increasing scanning order.
In the encoding method for point cloud data, the one-dimensional sequence group comprises:
a one-dimensional depth sequence; and/or
a one-dimensional color sequence; and/or
a one-dimensional reflectance sequence.
In the encoding method for point cloud data, the method further comprises:
dividing the longest side of the point cloud data set with the shortest side as the alignment unit; or
for given division numbers n, m and k in the three dimensions, dividing the x, y and z axes of the point cloud into n, m and k parts respectively, producing n*m*k blocks in total; or
for a cuboid of a specified size, dividing the point cloud into blocks of the specified size.
In the encoding method for point cloud data, when the point cloud data to be encoded is two-dimensional point cloud data, generating the occupancy map according to the point cloud data to be encoded specifically comprises:
selecting the pixels in the two-dimensional point cloud data that have corresponding data;
representing each selected pixel with a first preset number and representing the unselected pixels in the two-dimensional point cloud data with a second preset number, so as to generate the occupancy map corresponding to the two-dimensional image layer.
In the encoding method for point cloud data, when the point cloud data to be encoded is three-dimensional point cloud data, generating the occupancy map according to the point cloud data to be encoded specifically comprises:
for each data point in the point cloud data to be encoded, converting the coordinate information of the data point into spherical coordinate information;
mapping each piece of converted spherical coordinate information into two-dimensional point cloud data, and generating the occupancy map according to the two-dimensional point cloud data.
In the encoding method for point cloud data, generating the occupancy map according to the two-dimensional point cloud data is specifically:
generating a two-dimensional image layer according to the two-dimensional point cloud data, and generating the occupancy map according to the two-dimensional image layer.
In the encoding method for point cloud data, the two-dimensional image layer comprises several two-dimensional image layers; each two-dimensional image layer corresponds to one occupancy map.
In the encoding method for point cloud data, encoding the occupancy map and the one-dimensional sequence group to obtain the code stream corresponding to the point cloud data specifically comprises:
for each component of each one-dimensional sequence in the one-dimensional sequence group, taking the previous component as the predicted value of the component;
calculating the residual of the component according to the predicted value and the component, and replacing the component with the residual to update the one-dimensional sequence group;
encoding the updated one-dimensional sequence group to obtain the code stream corresponding to the point cloud data.
A decoding method for point cloud data, the decoding method comprising:
decoding a code stream to obtain an occupancy map and a one-dimensional sequence group corresponding to the code stream;
generating point cloud data according to the occupancy map and the one-dimensional sequence group.
In the decoding method for point cloud data, the one-dimensional sequence group comprises:
a one-dimensional depth sequence; and/or
a one-dimensional color sequence; and/or
a one-dimensional reflectance sequence.
In the decoding method for point cloud data, generating the point cloud data according to the occupancy map and the one-dimensional sequence group specifically comprises:
determining the two-dimensional image layer corresponding to the one-dimensional sequence group according to the occupancy map, the preset scanning order and the one-dimensional sequence group;
determining the point cloud data corresponding to the code stream according to the two-dimensional image layer.
In the decoding method for point cloud data, the preset scanning order comprises: a coordinate-increasing scanning order or a two-dimensional Morton scanning order.
In the decoding method for point cloud data, when the point cloud data corresponding to the code stream is three-dimensional point cloud data, determining the point cloud data corresponding to the code stream according to the two-dimensional image layer specifically comprises:
mapping the coordinate information of each point in the two-dimensional image layer into spherical coordinate information;
converting the spherical coordinate information into coordinate information of the three-dimensional point cloud data.
In the decoding method for point cloud data, decoding the code stream to obtain the one-dimensional sequence group corresponding to the code stream specifically comprises:
obtaining the one-dimensional sequence group directly by decoding the code stream; or
obtaining a one-dimensional candidate sequence by decoding the code stream;
starting from the second value of the one-dimensional candidate sequence, taking the reconstructed value of the point preceding each point of the one-dimensional candidate sequence as the predicted value, and superimposing the value of each point on its corresponding predicted value as the value of the current point, so as to obtain the one-dimensional sequence group.
A computer-readable storage medium storing one or more programs, the one or more programs being executable by one or more processors to implement the steps in any of the encoding methods for point cloud data described above, or to implement the steps in any of the decoding methods for point cloud data described above.
A terminal device, comprising: a processor, a memory and a communication bus; the memory stores a computer-readable program executable by the processor;
the communication bus realizes connection and communication between the processor and the memory;
when the processor executes the computer-readable program, the steps in any of the encoding methods for point cloud data described above are implemented, or the steps in any of the decoding methods for point cloud data described above are implemented.
Beneficial effects: compared with the prior art, the present invention provides an encoding method, a decoding method, a storage medium and a terminal device for point cloud data. The encoding method generates an occupancy map according to the point cloud data to be encoded; generates a one-dimensional sequence group according to the point cloud data to be encoded, the occupancy map and a preset scanning order; and encodes the occupancy map and the one-dimensional sequence group to obtain a code stream corresponding to the point cloud data. By converting the three-dimensional point cloud data into a one-dimensional sequence group and encoding the one-dimensional sequence group, the present invention reduces the amount of data during encoding and thus improves the encoding efficiency of point cloud data.
BRIEF DESCRIPTION OF THE DRAWINGS
Fig. 1 is a flowchart of the point cloud data encoding method provided by the present invention.
Fig. 2 is a schematic flowchart of converting a two-dimensional image layer into a one-dimensional sequence in the point cloud data encoding method provided by the present invention.
Fig. 3 is a schematic flowchart of converting a one-dimensional sequence into a two-dimensional image layer in the point cloud data encoding method provided by the present invention.
Fig. 4 is a schematic diagram of a data point in a three-dimensional Cartesian coordinate system in the point cloud data encoding method provided by the present invention.
Fig. 5 is a schematic diagram of a data point in a spherical coordinate system in the point cloud data encoding method provided by the present invention.
Fig. 6 is a schematic diagram of one Morton order in the point cloud data encoding method provided by the present invention.
Fig. 7 is a schematic diagram of another Morton order in the point cloud data encoding method provided by the present invention.
Fig. 8 is a schematic flowchart of mapping two-dimensional point cloud data into several two-dimensional image layers in the point cloud data encoding method provided by the present invention.
Fig. 9 is a flowchart of the point cloud data decoding method provided by the present invention.
Fig. 10 is a structural schematic diagram of the terminal device provided by the present invention.
DETAILED DESCRIPTION
The present invention provides an encoding method, a decoding method, a storage medium and a terminal device for point cloud data. To make the objectives, technical solutions and effects of the present invention clearer and more explicit, the present invention is further described in detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here are only used to explain the present invention and are not intended to limit it.
Those skilled in the art will understand that, unless expressly stated otherwise, the singular forms "a", "an", "the" and "said" used here may also include the plural forms. It should be further understood that the word "comprising" used in the specification of the present invention refers to the presence of the stated features, integers, steps, operations, elements and/or components, but does not exclude the presence or addition of one or more other features, integers, steps, operations, elements, components and/or groups thereof. It should be understood that when an element is said to be "connected" or "coupled" to another element, it may be directly connected or coupled to the other element, or intervening elements may also be present. In addition, "connected" or "coupled" as used here may include wireless connection or wireless coupling. The expression "and/or" used here includes all or any unit of and all combinations of one or more of the associated listed items.
Those skilled in the art will understand that, unless otherwise defined, all terms used here (including technical and scientific terms) have the same meaning as commonly understood by those of ordinary skill in the art to which the present invention belongs. It should also be understood that terms such as those defined in general dictionaries should be understood to have meanings consistent with their meanings in the context of the prior art and, unless specifically defined as here, will not be interpreted in an idealized or overly formal sense.
The content of the invention is further described below through the description of the embodiments with reference to the accompanying drawings.
Embodiment 1
This embodiment provides an encoding method for point cloud data. The method can be executed by an encoding apparatus, which can be implemented by software and applied to a smart terminal such as a PC, a server, a cloud platform, a tablet computer or a personal digital assistant. Referring to Fig. 1, the point cloud data encoding method provided by this embodiment specifically includes:
S10. Generate an occupancy map according to the point cloud data to be encoded.
Specifically, the point cloud data to be encoded may be a frame of point cloud data scanned by a three-dimensional scanning device, a frame of point cloud data sent by an external device, a frame of point cloud data obtained through the cloud, or two-dimensional point cloud data scanned by a two-dimensional scanning device. In addition, each data point in the point cloud data to be encoded may include position information and attribute information; the position information may be expressed as the coordinate information of the data point, and the attribute information may include color information and/or reflectance information, etc., where the attribute information of each data point is stored bound to the position information of that data point.
Further, in one implementation of this embodiment, the point cloud data may be a radar point cloud data set obtained by ordinary radar scanning, or a dense point cloud data set. Thus, when the point cloud data to be encoded is obtained, its data type can be determined: if the data type is a radar point cloud data set, the occupancy map is generated directly from the point cloud data to be encoded; if the data type is a dense point cloud data set, the point cloud data can be divided into several point cloud data blocks, and the step of generating an occupancy map from the point cloud data to be encoded is performed for each point cloud data block. Of course, it is worth noting that when the point cloud data to be encoded is divided into several point cloud data blocks, each point cloud data block can be treated as a piece of point cloud data to be encoded, and is processed in the same way as point cloud data that is encoded directly.
Further, in one implementation of this embodiment, when the point cloud data to be encoded needs to be divided into several point cloud data blocks, the division method can be preset; when the point cloud data to be encoded is obtained, it can be divided directly according to the preset division method to obtain several point cloud data blocks, and each point cloud data block is used as a piece of point cloud data to be encoded for the step of generating an occupancy map from the point cloud data to be encoded. The division method may be to divide along the longest side of the point cloud data to be encoded with the shortest side as the alignment unit, so as to divide the point cloud data to be encoded into several point cloud data blocks. For example, if the long side of the point cloud data to be encoded is 4096 and the short side is 1024, the point cloud is split evenly along the long side into 4096/1024 = 4 parts of length 1024 each; of course, if the long side is not evenly divisible by the short side, the remainder is added to the last point cloud data block obtained from the division. The division method may also be that, for three given numbers n, m and k, the x, y and z axes of the point cloud are divided into n, m and k parts respectively, so that the point cloud data to be encoded is divided into n*m*k point cloud data blocks; for example, for the three given parameters 2, 3 and 2, the x, y and z axes of the point cloud are divided into 2, 3 and 2 parts respectively, producing 2*3*2 = 12 blocks in total. The division method may also be that, taking a given point cloud data block size (for example, 1000*1000*1000, etc.) as the unit, the point cloud data to be encoded is divided into several point cloud data blocks, where the fixed-size point cloud data blocks may be cuboids or cubes, etc. For example, if the length, width and height of the point cloud data to be encoded are 2048*2048*2048 and the given block length, width and height are 1024*1024*1024, the data is divided into 2*2*2 = 8 cubes in total; of course, where the length, width or height is not evenly divisible, the remainder that cannot be divided evenly is added to the corresponding last point cloud data block.
Further, in one implementation of this embodiment, when the point cloud data to be encoded is three-dimensional point cloud data, generating the occupancy map according to the point cloud data to be encoded specifically includes:
S21. For each data point in the point cloud data to be encoded, convert the coordinate information of the data point into spherical coordinate information.
S22. Map each piece of converted spherical coordinate information into two-dimensional point cloud data, and generate an occupancy map according to the two-dimensional point cloud data.
Specifically, the point cloud data to be encoded includes several data points, each of which can be represented as a three-dimensional coordinate point. Therefore, for each data point in the point cloud data to be encoded, the coordinate information of the data point is converted into spherical coordinate information; that is, a mapping relationship is established between the three-dimensional Cartesian coordinate system and the spherical coordinate system, and through this mapping relationship the three-dimensional coordinate point in the three-dimensional Cartesian coordinate system is converted into a spherical coordinate point in the spherical coordinate system. In addition, before establishing the mapping relationship between the three-dimensional Cartesian coordinate system and the spherical coordinate system, the sphere center of the spherical coordinate system needs to be determined first. The sphere center may be the coordinate origin of the Cartesian coordinate system, i.e. the point (0, 0, 0), or it may be the mean of the coordinate information of all data points in the point cloud data to be encoded; when the mean is used as the sphere center, the mean is subtracted from the coordinate information of each data point in the point cloud data to be encoded to update the coordinate information of that data point, and the updated coordinate information is converted into spherical coordinate information.
Further, in this embodiment, the conversion of a data point's coordinate information into spherical coordinate information is explained by taking the point (0, 0, 0) as the sphere center of the spherical coordinates. As shown in Figs. 4 and 5, for each data point (x, y, z), let the corresponding spherical coordinates be (r, θ, φ). The spherical coordinates can be obtained as follows: first, the distance from the projection of the data point onto the xy plane to the center is calculated as r2D = √(x² + y²); the angle φ is then calculated from r2D, where sin φ = abs(y)/r2D, and φ is adjusted according to the signs of the x and y coordinates; next, the distance from the data point to the sphere center is calculated as r3D = √(x² + y² + z²), the angle θ is calculated from the z coordinate and r2D, where sin θ = abs(z)/r3D, and the magnitude of the angle θ is adjusted according to the sign of z; finally, r3D is rounded and taken as the spherical radius r, so that the spherical coordinates corresponding to the data point are obtained. The adjustment of φ can be: when x is positive and y is positive, φ is unchanged; when x is negative and y is positive, φ = π − φ; when x is negative and y is negative, φ = π + φ; when x is positive and y is negative, φ = 2π − φ. The adjustment of θ can be: when z is positive, θ is unchanged; when z is negative, θ = −θ.
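As a rough illustration of the conversion just described (the quadrant-adjustment formulas above are reconstructed, since the published text renders them as images; the rounding of r3D and the sign convention for θ follow the text), a sketch in Python:

```python
import math

def to_spherical(x, y, z):
    """Cartesian -> (r, theta, phi) with the sphere center at (0, 0, 0):
    phi from the xy-projection with quadrant adjustment, theta signed by z,
    r as the rounded 3D radius.  Formula details are reconstructed, not verbatim."""
    r2d = math.sqrt(x * x + y * y)
    r3d = math.sqrt(x * x + y * y + z * z)

    phi = math.asin(abs(y) / r2d) if r2d > 0 else 0.0
    if x >= 0 and y >= 0:
        pass                          # first quadrant: phi unchanged
    elif x < 0 <= y:
        phi = math.pi - phi           # second quadrant
    elif x < 0 and y < 0:
        phi = math.pi + phi           # third quadrant
    else:
        phi = 2 * math.pi - phi       # fourth quadrant

    theta = math.asin(abs(z) / r3d) if r3d > 0 else 0.0
    if z < 0:
        theta = -theta

    return round(r3d), theta, phi

print(to_spherical(1.0, 1.0, 1.0))    # (2, ~0.615, ~pi/4)
```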
Further, in one implementation of this embodiment, after the spherical coordinates corresponding to each data point are obtained, the spherical coordinates are mapped onto a two-dimensional image to obtain the two-dimensional point cloud data coordinates corresponding to each spherical coordinate, where the two-dimensional point cloud data carries depth information, and the occupancy map is then generated according to the two-dimensional point cloud data coordinates. Of course, when the point cloud data to be encoded carries attribute information, the two-dimensional point cloud data also carries attribute information, where the attribute information may be color information and/or reflectance information, etc. In addition, in one possible implementation of this embodiment, the process of mapping each piece of converted spherical coordinate information into two-dimensional point cloud data may specifically be as follows. For each spherical coordinate (r, θ, φ), assume that the coordinates of the corresponding two-dimensional point cloud data can be expressed as (x1, y1, z1), where x1 and y1 are the abscissa and ordinate of the two-dimensional point cloud data in a two-dimensional Cartesian coordinate system and z1 is the depth information corresponding to the two-dimensional point cloud data. Converting (r, θ, φ) into (x1, y1, z1) means establishing a correspondence between (r, θ, φ) and (x1, y1, z1) and converting (r, θ, φ) into (x1, y1, z1) through this correspondence. The correspondence may be: x1 = round(a/(2π) × imag_X), where imag_X is the horizontal resolution of the image; when the scanning mode corresponding to the point cloud data to be encoded is fixed-step scanning, a = φ, and when the scanning mode is variable-step scanning, a = f(φ); y1 = round(b/(2π) × imag_Y), where imag_Y is the vertical resolution of the image; when the scanning mode corresponding to the point cloud data to be encoded is fixed-step scanning, b = θ, and when the scanning mode is variable-step scanning, b = f(θ); and z1 = r.
Further, in one implementation of this embodiment, generating the occupancy map according to the two-dimensional point cloud data is specifically:
generating a two-dimensional image layer according to the two-dimensional point cloud data, and generating the occupancy map according to the two-dimensional image layer.
Specifically, the two-dimensional image layer is a two-dimensional image obtained by mapping the two-dimensional point cloud data onto a two-dimensional image layer, where each piece of two-dimensional point cloud data is mapped to one pixel in the two-dimensional image layer: the abscissa of the two-dimensional point data is the abscissa of its corresponding pixel and the ordinate of the two-dimensional point cloud data is the ordinate of its corresponding pixel. One pixel may correspond to multiple pieces of two-dimensional point cloud data; that is, there may be several pieces of two-dimensional point cloud data whose abscissas are equal and whose ordinates are equal but whose depth information differs. Therefore, when generating two-dimensional image layers from the two-dimensional point cloud data, the two-dimensional point cloud data can be mapped into several two-dimensional image layers. Correspondingly, when the two-dimensional image layer comprises several two-dimensional image layers, the process of generating the occupancy map according to the two-dimensional image layer may be: for each two-dimensional image layer, generating the occupancy map corresponding to that two-dimensional image layer according to that layer.
Further, when the two-dimensional image layer comprises several two-dimensional image layers, the two-dimensional point cloud data contained in each two-dimensional image layer differ in at least one of the abscissa and the ordinate, and each piece of two-dimensional point cloud data is mapped to exactly one two-dimensional image layer. The several two-dimensional image layers can be obtained by mapping according to a preset mapping rule. The mapping rule may be: first map the two-dimensional point cloud data onto one two-dimensional image and record the two-dimensional point cloud data corresponding to each pixel in the two-dimensional image, so as to obtain the two-dimensional point cloud data set corresponding to each pixel; then select one piece of two-dimensional point cloud data from each two-dimensional point cloud data set and map all the selected data into the first image layer; then again select one piece of two-dimensional point cloud data from each set and map all the selected data into the second image layer; and so on, until the preset number of image layers has been reached or no unselected two-dimensional point cloud data remains in any of the sets. For each two-dimensional point cloud data set, the data selected each time are all different from one another; if the mapping finishes because the preset number of image layers has been reached, the two-dimensional point cloud data that remain unselected in the sets are discarded. In addition, the way in which two-dimensional point cloud data is selected from a two-dimensional point cloud data set can be preset, for example, selecting in descending order of depth information, selecting in ascending order of depth information, or selecting randomly, etc.
For example: as shown in Fig. 6, the two-dimensional point cloud data sets include a two-dimensional point cloud data set A and a two-dimensional point cloud data set B, where set A contains the two-dimensional point cloud data a(xa, ya, za), and set B contains the two-dimensional point cloud data b1(xb, yb, zb1) and the two-dimensional point cloud data b2(xb, yb, zb2), where b1 and b2 have the same abscissa and ordinate but different depth information, with zb2 > zb1. Then, if the selection is made in descending order of depth information, a and b2 compose the first image and b1 composes the second image; if the selection is made in ascending order of depth information, a and b1 compose the first image and b2 composes the second image.
Further, in one implementation of this embodiment, when the point cloud data to be encoded is two-dimensional point cloud data, the two-dimensional image layer can be determined according to the two-dimensional point cloud data. Therefore, generating the occupancy map according to the point cloud data to be encoded may be generating the occupancy map for the two-dimensional image layer converted from the point cloud data to be encoded. Generating the occupancy map according to the point cloud data to be encoded specifically includes:
selecting the pixels in the two-dimensional point cloud data that have corresponding data;
representing each selected pixel with a first preset number and representing the unselected pixels in the two-dimensional point cloud data with a second preset number, so as to generate the occupancy map corresponding to the two-dimensional image layer.
Specifically, the occupancy map is generated according to the correspondence between each pixel in the two-dimensional image layer and the two-dimensional point cloud data, and is used to represent the distribution of the two-dimensional point cloud data corresponding to each two-dimensional image layer. It can be understood that the occupancy map is used to indicate which pixels in the two-dimensional image layer have corresponding two-dimensional point cloud data and which pixels do not. For each pixel in the two-dimensional image layer, it is determined whether the pixel has corresponding two-dimensional point cloud data; when the pixel has corresponding two-dimensional point cloud data, the value of the pixel is set to the first preset value, and when it does not, the value of the pixel is set to the second preset value, so as to obtain the occupancy map. In this way, the pixels in the two-dimensional image that have corresponding two-dimensional point cloud data and the pixels that do not can be determined from the occupancy map. The first preset value and the second preset value are both preset; for example, both are encoded with a 1-bit image, where the first preset value is 1 and the second preset value is 0.
S20. Generate a one-dimensional sequence group according to the point cloud data to be encoded, the occupancy map and the preset scanning order.
Specifically, the one-dimensional sequence group includes at least a depth sequence; the depth sequence is a one-dimensional depth vector, and the one-dimensional depth sequence includes the position information of each piece of two-dimensional point cloud data and the depth information corresponding to each piece of two-dimensional point cloud data, where the two-dimensional coordinate information is determined by the ordering of the one-dimensional depth sequence and the depth information is determined by the component values of the one-dimensional depth sequence. In addition, the one-dimensional sequence group may further include a one-dimensional color sequence and/or a one-dimensional reflectance sequence; in the one-dimensional color sequence and/or one-dimensional reflectance sequence, the position of each component in the sequence determines the position information of the corresponding two-dimensional point cloud data, and each component value is the attribute information represented by that sequence, where the attribute information may be color information or reflectance information. It can be understood that, after the two-dimensional point cloud data is obtained, it may include, in addition to the two-dimensional coordinate information and depth information, color information and/or reflectance information; for example, the coordinate information of the two-dimensional point cloud data may be expressed as (x1, y1, z1, c1, f1), where c1 may be color information and f1 may be reflectance information. c1 and f1 may or may not be included; when c1 and f1 are included, the color information of the corresponding data point in the point cloud data to be encoded may be taken directly as the c1 value and the reflectance information of the corresponding data point as the f1 value. Of course, it is worth noting that when the two-dimensional point cloud data contains color information and/or reflectance information, the depth information, color information and reflectance information are each converted into a separate one-dimensional sequence; correspondingly, the one-dimensional sequence group includes a one-dimensional depth sequence, and a one-dimensional color sequence and/or one-dimensional reflectance sequence, where components at the same position in the one-dimensional depth sequence and in the one-dimensional color and/or reflectance sequences correspond to the same piece of two-dimensional point cloud data and differ only in the information represented by the component values: the one-dimensional depth sequence represents depth information, the one-dimensional color sequence represents color information, and the one-dimensional reflectance sequence represents reflectance information.
Further, in one implementation of this embodiment, the preset order is the coordinate-increasing scanning order or a Morton order generated according to the two-dimensional point cloud data corresponding to the two-dimensional image layer; for example, as shown in Fig. 2, the two-dimensional image layer is converted into a one-dimensional sequence according to the Morton order and the occupancy map. The Morton order is obtained by encoding the pixel coordinates of the corresponding two-dimensional point cloud data in the two-dimensional image layer into Morton codes and sorting the pixels corresponding to the two-dimensional image layer according to the Morton codes. The Morton codes may be generated with the X coordinate in the low bits and the Y coordinate in the high bits, in which case the Morton order obtained by sorting the Morton codes increases first along the X coordinate and then along the Y coordinate, giving for example the Morton order shown in Fig. 7; the Morton codes may also be generated with the Y coordinate in the low bits and the X coordinate in the high bits, in which case the sorted point order increases first along the Y coordinate and then along the X coordinate, giving for example the Morton order shown in Fig. 8.
Further, in one implementation of this embodiment, encoding the occupancy map and the one-dimensional sequence group to obtain the code stream corresponding to the point cloud data specifically includes:
for each component of each one-dimensional sequence in the one-dimensional sequence group, taking the previous component as the predicted value of the component;
calculating the residual of the component according to the predicted value and the component, and replacing the component with the residual to update the one-dimensional sequence group;
encoding the updated one-dimensional sequence group to obtain the code stream corresponding to the point cloud data.
Specifically, when the preset order is the Morton order generated according to the two-dimensional point cloud data corresponding to the two-dimensional image layer, points that are adjacent in Morton order are also highly similar in three-dimensional space. Therefore, before the one-dimensional sequence group is encoded, the component of the previous point can be subtracted from the component of the current point to obtain the residual between the two components, and only the residual is encoded, which reduces the bit-rate consumption. The information corresponding to each component is determined by the one-dimensional sequence it belongs to; for example, when the one-dimensional sequence is a depth sequence, the component represents depth information; when the one-dimensional sequence is a color sequence, the component represents color information; and when the one-dimensional sequence is a reflectance sequence, the component represents reflectance information.
S30. Encode the occupancy map and the one-dimensional sequence group to obtain the code stream corresponding to the point cloud data.
Specifically, encoding the one-dimensional sequence group means encoding each one-dimensional sequence in the group and the occupancy map separately; each one-dimensional sequence in the group is encoded independently, and different sequences may use different encoding methods. For example, when a PNG image encoding tool is used, the depth sequence is encoded with 16 bits or 8 bits, the color sequence with 8 bits, and the reflectance sequence with 8 bits; when a JPEG image encoding tool (which supports both lossy and lossless modes) is used, the depth sequence is encoded with 16 bits or 8 bits, the color sequence with 8 bits, and the reflectance sequence with 8 bits. Of course, video encoding tools such as HEVC can also be used to encode the depth sequence, the color sequence and the reflectance sequence. In addition, after each one-dimensional sequence in the one-dimensional sequence group has been encoded independently, the resulting code streams can be bound together to obtain the code stream corresponding to the one-dimensional sequence group.
Further, when the two-dimensional image layer comprises several two-dimensional image layers, the one-dimensional sequence group comprises several one-dimensional sequence groups; each one-dimensional sequence group and its corresponding occupancy map can be encoded separately to obtain the code stream corresponding to each one-dimensional sequence group, and the code streams corresponding to the one-dimensional sequence groups are bound together to obtain the code stream corresponding to the point cloud data to be encoded.
In addition, to further evaluate the point cloud data encoding method provided by this embodiment, the method is compared with the existing platform TMC13v7; the test results are given in Table 1. It can be seen from Table 1 that, under the lossless-geometry, lossless-attribute condition, the geometry bit rate of this embodiment is only 69.27% and the overall bit rate only 78.14% (relative to TMC13v7).
[Table 1]
In summary, this embodiment provides an encoding method for point cloud data. The encoding method includes generating an occupancy map according to the point cloud data to be encoded; generating a one-dimensional sequence group according to the point cloud data to be encoded, the occupancy map and a preset scanning order; and encoding the occupancy map and the one-dimensional sequence group to obtain a code stream corresponding to the point cloud data. By converting the point cloud data into a one-dimensional sequence group and encoding the one-dimensional sequence group, the present invention reduces the amount of data during encoding and thus improves the encoding efficiency of point cloud data.
Embodiment 2
This embodiment provides a decoding method for point cloud data, which is used to decode a code stream obtained by the point cloud data encoding method described in the foregoing embodiment. As shown in Fig. 9, the method includes:
M10. Decode the code stream to obtain the occupancy map and the one-dimensional sequence group corresponding to the code stream.
M20. Generate point cloud data according to the occupancy map and the one-dimensional sequence group.
Specifically, decoding the code stream means decoding the code stream with the encoding tool corresponding to the code stream to obtain the one-dimensional sequence group and the occupancy map; the one-dimensional sequence group includes a depth sequence, and a color sequence and/or a reflectance sequence. For the description of the depth sequence and the color and/or reflectance sequences, reference may be made to Embodiment 1, which is not repeated here.
Further, in one implementation of this embodiment, generating the point cloud data according to the occupancy map and the one-dimensional sequence group specifically includes:
determining the two-dimensional image layer corresponding to the one-dimensional sequence group according to the occupancy map, the preset scanning order and the one-dimensional sequence group;
determining the point cloud data corresponding to the code stream according to the two-dimensional image layer.
Specifically, the occupancy map is carried by the code stream; it is generated during point cloud data encoding and encoded into the code stream, and is used to represent the correspondence between the pixels in the two-dimensional image layer and the two-dimensional point cloud data. After the occupancy map is obtained, the two-dimensional image layer corresponding to the one-dimensional sequence group can be determined according to the one-dimensional sequence group, the occupancy map and the preset scanning order, where the process of determining the two-dimensional image layer from the one-dimensional sequence group is the inverse of generating a one-dimensional sequence group from a two-dimensional image layer; for details, reference may be made to the description of generating a one-dimensional sequence group from a two-dimensional image layer, which is not repeated here.
In addition, since several one-dimensional sequence groups may be decoded from the code stream, when generating two-dimensional image layers from the one-dimensional sequence groups, the two-dimensional image layer corresponding to each one-dimensional sequence group needs to be generated from that group, and the two-dimensional point cloud data is then determined from all the obtained two-dimensional image layers. Therefore, in one implementation of this embodiment, when the one-dimensional sequence group comprises several one-dimensional sequence groups, generating the two-dimensional point cloud data according to the one-dimensional sequence groups and determining the point cloud data corresponding to the code stream according to the two-dimensional point cloud data includes: for each one-dimensional sequence group, determining the two-dimensional image layer corresponding to that one-dimensional sequence group according to the group, the preset order and the occupancy map corresponding to the group; and determining the two-dimensional point cloud data corresponding to the code stream according to all the determined two-dimensional image layers.
Further, the preset order is the preset order described in Embodiment 1, that is, the preset order may be the coordinate-increasing scanning order or a Morton order generated according to the two-dimensional point cloud data corresponding to the two-dimensional image layer. When the preset order is the Morton order, a residual operation was performed on the one-dimensional sequences during encoding, so the one-dimensional sequence group is a residual sequence; therefore, when generating the point cloud data according to the one-dimensional sequence group, the preset order and the occupancy map, the residual sequence needs to be converted back into a one-dimensional sequence group, for example through the conversion process shown in Fig. 3. Correspondingly, obtaining the one-dimensional sequence group corresponding to the code stream by decoding the code stream may mean decoding the code stream directly into the one-dimensional sequence group corresponding to the code stream, or decoding the code stream into a candidate sequence group and then generating the one-dimensional sequence group from the candidate sequence group. Thus, obtaining the one-dimensional sequence group corresponding to the code stream by decoding the code stream specifically includes:
when a one-dimensional sequence group is obtained by decoding the code stream, using that one-dimensional sequence group as the one-dimensional sequence group;
when a one-dimensional candidate sequence is obtained by decoding the code stream, starting from the second value of the one-dimensional candidate sequence, taking the reconstructed value of the point preceding each point of the one-dimensional candidate sequence as the predicted value, and superimposing the value of each point on its corresponding predicted value as the value of the current point, so as to obtain the one-dimensional sequence group.
Further, in one implementation of this embodiment, when the point cloud data corresponding to the code stream is three-dimensional point cloud data, after the two-dimensional point cloud data is obtained, the two-dimensional point cloud data needs to be converted into spherical coordinates, and the spherical coordinates are then converted into three-dimensional point cloud coordinates to obtain the point cloud data. Correspondingly, determining the point cloud data corresponding to the code stream according to the two-dimensional image layer specifically includes:
mapping the coordinate information of each point in the two-dimensional image layer into spherical coordinate information;
converting the spherical coordinate information into coordinate information of the three-dimensional point cloud data.
Specifically, the coordinate information (x1, y1, z1) of the two-dimensional point cloud data is mapped to spherical coordinates (r, θ, φ): the angle φ of the point's spherical coordinates is calculated from the x1 coordinate of the image, the angle θ is calculated from the y1 coordinate of the image, and the three-dimensional radius r3D of the point's spherical coordinates is calculated from the z1 of the image. The meaning of (x1, y1, z1) and (r, θ, φ) and their mapping relationship are the same as in Embodiment 1 and are not repeated here.
Further, the geometric data in spherical coordinates (r, θ, φ) is converted into Cartesian coordinates (x, y, z): the distance from the point to the center is determined from the three-dimensional radius r3D; the sign of z is determined from the magnitude of the angle θ, and the signs of x and y are determined from the magnitude of the angle φ; the radius on the xy plane is calculated as r2D = cos θ · r3D; the absolute value of x is calculated as |x| = r2D · |cos φ| and multiplied by the sign of x to obtain the value of x; from the radius r2D on the xy plane, the absolute value of y is calculated as |y| = r2D · |sin φ| and multiplied by the sign of y to obtain the value of y; and from the three-dimensional radius r3D, the absolute value of z is calculated as |z| = r3D · |sin θ| and multiplied by the sign of z to obtain the value of z. The signs of x and y are determined as follows: when φ is greater than 3π/2, x is positive and y is negative; when φ is greater than π and less than or equal to 3π/2, x is negative and y is negative; when φ is greater than π/2 and less than or equal to π, x is negative and y is positive; and when φ is greater than 0 and less than or equal to π/2, x is positive and y is positive. The sign of z can be determined by comparing θ with a preset angle threshold: when θ is greater than the threshold, z is negative, and when θ is less than or equal to the threshold, z is positive.
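Under the same assumptions as the earlier encoder sketch (the boundary values π/2 and 3π/2 for the signs of x and y are inferred from the quadrant logic, and the sign convention for θ follows the encoder side, since the original threshold is rendered as an image), the inverse conversion at the decoder could be sketched as:

```python
import math

def to_cartesian(r3d, theta, phi):
    """(r3d, theta, phi) -> (x, y, z): recover the absolute values from the
    radii, then restore the signs from the magnitudes of phi and theta.
    A reconstruction for illustration, not the patent's exact procedure."""
    r2d = abs(math.cos(theta)) * r3d
    x = r2d * abs(math.cos(phi))
    y = r2d * abs(math.sin(phi))
    z = r3d * abs(math.sin(theta))

    # Signs of x and y from the quadrant of phi (boundaries pi/2, pi, 3*pi/2).
    if phi <= math.pi / 2:
        sx, sy = 1, 1
    elif phi <= math.pi:
        sx, sy = -1, 1
    elif phi <= 3 * math.pi / 2:
        sx, sy = -1, -1
    else:
        sx, sy = 1, -1

    sz = 1 if theta >= 0 else -1      # assumed: theta carries the sign of z
    return sx * x, sy * y, sz * z

print(to_cartesian(2, 0.6155, math.pi / 4))  # approximately (1.15, 1.15, 1.15)
```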
Further, when the sphere center of the spherical coordinates used during encoding was not the Cartesian coordinate origin (for example, when the mean of the data points was used), the sphere center can be added back to the coordinate information obtained for each three-dimensional point to obtain the point cloud data. Of course, when the point cloud data was divided into blocks, after each point cloud data block is obtained, the obtained point cloud data blocks are merged according to the division method to obtain the point cloud data.
Embodiment 3
Based on the above encoding method and decoding method for point cloud data, this embodiment provides a computer-readable storage medium storing one or more programs, the one or more programs being executable by one or more processors to implement the steps in the encoding method or decoding method for point cloud data described in the foregoing embodiments.
Embodiment 4
Based on the above encoding method and decoding method for point cloud data, the present invention further provides a terminal device which, as shown in Fig. 10, includes at least one processor 20, a display screen 21 and a memory 22, and may further include a communication interface (Communications Interface) 23 and a bus 24. The processor 20, the display screen 21, the memory 22 and the communication interface 23 can communicate with one another through the bus 24. The display screen 21 is configured to display a user guidance interface preset in the initial setting mode. The communication interface 23 can transmit information. The processor 20 can call the logic instructions in the memory 22 to execute the methods in the foregoing embodiments.
In addition, the logic instructions in the memory 22 can be implemented in the form of software functional units and, when sold or used as an independent product, can be stored in a computer-readable storage medium.
As a computer-readable storage medium, the memory 22 can be configured to store software programs and computer-executable programs, such as the program instructions or modules corresponding to the methods in the embodiments of the present disclosure. The processor 20 runs the software programs, instructions or modules stored in the memory 22 to execute functional applications and data processing, that is, to implement the methods in Embodiment 1 or Embodiment 2.
The memory 22 may include a program storage area and a data storage area, where the program storage area may store the operating system and the application programs required by at least one function, and the data storage area may store data created according to the use of the terminal device, etc. In addition, the memory 22 may include a high-speed random access memory and may also include a non-volatile memory, for example a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk or an optical disk, or other media capable of storing program code; it may also be a transitory storage medium.
In addition, the specific processes by which the multiple instructions in the above storage medium and terminal device are loaded and executed by the processor have been described in detail in the above methods and are not repeated here.
Finally, it should be noted that the above embodiments are only used to illustrate the technical solutions of the present invention and not to limit them. Although the present invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that they can still modify the technical solutions described in the foregoing embodiments or make equivalent substitutions for some of the technical features therein, and such modifications or substitutions do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present invention.

Claims (17)

  1. 一种点云数据的编码方法,其特征在于,所述方法包括:
    根据待编码的点云数据生成占位图;
    根据待编码的点云数据、所述占位图、预设扫描顺序生成一维序列组;
    对所述占位图和一维序列组进行编码,以得到所述点云数据对应的码流。
  2. 根据权利要求1所述点云数据的编码方法,其特征在于,所述根据占位图和预设扫描顺序生成一维序列组,其中所述预设扫描顺序具体包括:
    所述预设扫描顺序为莫顿顺序;或
    所述预设扫描顺序为坐标递增扫描顺序。
  3. 根据权利要求1所述点云数据的编码方法,其特征在于,所述一维序列组包括:
    一维深度序列;和/或
    一维颜色序列;和/或
    一维反射率序列。
  4. 根据权力要求1所述的点云编码方法,其特征在于,还包括:
    对点云数据集的最长边,用最短边对齐进行划分;或
    对于给定的三个维度上的划分数值n、m、k,把点云的x、y、z轴分别划分成n、m、k份,总共生成n*m*k块;或
    对于指定大小的长方体把点云划分成所述指定大小的块。
  5. 根据权利要求1所述点云数据的编码方法,其特征在于,当所述待编码的点云数据为二维点云数据时,所述根据待编码的点云数据生成占位图具体包括:
    选取所述二维点云数据中对应有数据的像素点;
    采用第一预设数字表示所述选取到的每个像素点,并采用第二预设数字表示二维点云数据中未被选取的像素点,以生成所述二维图像层对应的占位图。
  6. The encoding method for point cloud data according to claim 1, wherein when the point cloud data to be encoded is three-dimensional point cloud data, the generating of the occupancy map according to the point cloud data to be encoded specifically comprises:
    for each data point in the point cloud data to be encoded, converting the coordinate information of the data point into spherical coordinate information; and
    mapping each piece of converted spherical coordinate information to two-dimensional point cloud data, and generating the occupancy map according to the two-dimensional point cloud data.
  7. The encoding method for point cloud data according to claim 6, wherein the generating of the occupancy map according to the two-dimensional point cloud data specifically comprises:
    generating a two-dimensional image layer according to the two-dimensional point cloud data, and generating the occupancy map according to the two-dimensional image layer.
  8. The encoding method for point cloud data according to claim 7, wherein the two-dimensional image layer comprises several two-dimensional image layers, and each two-dimensional image layer corresponds to one occupancy map.
  9. The encoding method for point cloud data according to claim 1, wherein the encoding of the occupancy map and the one-dimensional sequence group to obtain the bitstream corresponding to the point cloud data specifically comprises:
    for each component in each one-dimensional sequence in the one-dimensional sequence group, taking the previous component as the prediction value of the component;
    calculating the residual of the component according to the prediction value and the component, and replacing the component with the residual to update the one-dimensional sequence group; and
    encoding the updated one-dimensional sequence group to obtain the bitstream corresponding to the point cloud data.
  10. A decoding method for point cloud data, wherein the decoding method comprises:
    decoding a bitstream to obtain an occupancy map and a one-dimensional sequence group corresponding to the bitstream; and
    generating point cloud data according to the occupancy map and the one-dimensional sequence group.
  11. The decoding method for point cloud data according to claim 10, wherein the one-dimensional sequence group comprises:
    a one-dimensional depth sequence; and/or
    a one-dimensional color sequence; and/or
    a one-dimensional reflectance sequence.
  12. The decoding method for point cloud data according to claim 10, wherein the generating of the point cloud data according to the occupancy map and the one-dimensional sequence group specifically comprises:
    determining a two-dimensional image layer corresponding to the one-dimensional sequence group according to the occupancy map, a preset scan order, and the one-dimensional sequence group; and
    determining the point cloud data corresponding to the bitstream according to the two-dimensional image layer.
  13. The decoding method for point cloud data according to claim 12, wherein the preset scan order comprises: a coordinate-increasing scan order or a two-dimensional Morton scan order.
  14. The decoding method for point cloud data according to claim 12, wherein when the point cloud data corresponding to the bitstream is three-dimensional point cloud data, the determining of the point cloud data corresponding to the bitstream according to the two-dimensional image layer specifically comprises:
    mapping the coordinate information of each point in the two-dimensional image layer to spherical coordinate information; and
    converting the spherical coordinate information into coordinate information of the three-dimensional point cloud data.
  15. The decoding method for point cloud data according to claim 10, wherein the decoding of the bitstream to obtain the one-dimensional sequence group corresponding to the bitstream specifically comprises:
    decoding the bitstream to directly obtain the one-dimensional sequence group; or
    decoding the bitstream to obtain a one-dimensional candidate sequence; and
    starting from the second value of the one-dimensional candidate sequence, taking the reconstructed value of the previous point as the prediction value of each point, and adding each point's value to its corresponding prediction value to obtain the value of the current point, thereby obtaining the one-dimensional sequence group.
  16. A computer-readable storage medium, wherein the computer-readable storage medium stores one or more programs, and the one or more programs are executable by one or more processors to implement the steps of the encoding method for point cloud data according to any one of claims 1 to 9, or to implement the steps of the decoding method for point cloud data according to any one of claims 10 to 15.
  17. A terminal device, comprising: a processor, a memory, and a communication bus; wherein a computer-readable program executable by the processor is stored in the memory;
    the communication bus implements connection and communication between the processor and the memory; and
    when executing the computer-readable program, the processor implements the steps of the encoding method for point cloud data according to any one of claims 1 to 9, or implements the steps of the decoding method for point cloud data according to any one of claims 10 to 15.
PCT/CN2020/135982 2019-12-13 2020-12-13 Encoding method, decoding method, storage medium and device for point cloud data WO2021115466A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201911284170.6A CN112995758B (zh) 2019-12-13 2019-12-13 Encoding method, decoding method, storage medium and device for point cloud data
CN201911284170.6 2019-12-13

Publications (1)

Publication Number Publication Date
WO2021115466A1 true WO2021115466A1 (zh) 2021-06-17

Family

ID=76329651

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/135982 WO2021115466A1 (zh) 2019-12-13 2020-12-13 Encoding method, decoding method, storage medium and device for point cloud data

Country Status (2)

Country Link
CN (1) CN112995758B (zh)
WO (1) WO2021115466A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20210201540A1 (en) * 2018-09-19 2021-07-01 Huawei Technologies Co., Ltd. Point cloud encoding method and encoder

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110282581A1 (en) * 2010-05-12 2011-11-17 Gm Global Technology Operations, Inc. Object and vehicle detection and tracking using 3-d laser rangefinder
US20180268570A1 (en) * 2017-03-16 2018-09-20 Samsung Electronics Co., Ltd. Point cloud and mesh compression using image/video codecs
US20190114830A1 (en) * 2017-10-13 2019-04-18 Samsung Electronics Co., Ltd. 6dof media consumption architecture using 2d video decoder
US20190122393A1 (en) * 2017-10-21 2019-04-25 Samsung Electronics Co., Ltd Point cloud compression using hybrid transforms
CN110418135A (zh) * 2019-08-05 2019-11-05 北京大学深圳研究生院 Point cloud intra-frame prediction method and device based on neighbor weight optimization

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2007063612A1 (ja) * 2005-11-30 2007-06-07 Sharp Kabushiki Kaisha Moving picture encoding device and moving picture decoding device
US9390110B2 (en) * 2012-05-02 2016-07-12 Level Set Systems Inc. Method and apparatus for compressing three-dimensional point cloud data
WO2019013430A1 (en) * 2017-07-10 2019-01-17 Samsung Electronics Co., Ltd. COMPRESSION OF MAILLAGES AND POINT CLOUDS USING IMAGE / VIDEO CODECS
US10699444B2 (en) * 2017-11-22 2020-06-30 Apple Inc Point cloud occupancy map compression
US10867414B2 (en) * 2018-04-10 2020-12-15 Apple Inc. Point cloud attribute transfer algorithm
CN110363822A (zh) * 2018-04-11 2019-10-22 上海交通大学 3D point cloud compression method

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110282581A1 (en) * 2010-05-12 2011-11-17 Gm Global Technology Operations, Inc. Object and vehicle detection and tracking using 3-d laser rangefinder
US20180268570A1 (en) * 2017-03-16 2018-09-20 Samsung Electronics Co., Ltd. Point cloud and mesh compression using image/video codecs
US20190114830A1 (en) * 2017-10-13 2019-04-18 Samsung Electronics Co., Ltd. 6dof media consumption architecture using 2d video decoder
US20190122393A1 (en) * 2017-10-21 2019-04-25 Samsung Electronics Co., Ltd Point cloud compression using hybrid transforms
CN110418135A (zh) * 2019-08-05 2019-11-05 北京大学深圳研究生院 Point cloud intra-frame prediction method and device based on neighbor weight optimization

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20210201540A1 (en) * 2018-09-19 2021-07-01 Huawei Technologies Co., Ltd. Point cloud encoding method and encoder
US11875538B2 (en) * 2018-09-19 2024-01-16 Huawei Technologies Co., Ltd. Point cloud encoding method and encoder

Also Published As

Publication number Publication date
CN112995758B (zh) 2024-02-06
CN112995758A (zh) 2021-06-18

Similar Documents

Publication Publication Date Title
CN111145090B (zh) Point cloud attribute encoding method, decoding method, encoding device and decoding device
US11450031B2 (en) Significant coefficient flag encoding for point cloud attribute compression
US11252441B2 (en) Hierarchical point cloud compression
US11454710B2 (en) Point cloud compression using a space filling curve for level of detail generation
US10853973B2 (en) Point cloud compression using fixed-point numbers
US11044478B2 (en) Compression with multi-level encoding
WO2022042539A1 (zh) Point cloud layering method based on spatial order, point cloud prediction method and device
JP7303992B2 (ja) Mesh compression via point cloud representation
US20230046917A1 (en) In-tree geometry quantization of point clouds
US20160127746A1 (en) Limited error raster compression
US20210211703A1 (en) Geometry information signaling for occluded points in an occupancy map video
WO2021023206A1 (zh) Point cloud attribute prediction, encoding and decoding method and device based on neighbor weight optimization
CN113518226A (zh) Improved G-PCC point cloud encoding method based on ground segmentation
WO2021115466A1 (zh) Encoding method, decoding method, storage medium and device for point cloud data
CN115088017A (zh) In-tree geometry quantization of point clouds
CN114915795A (zh) Point cloud encoding and decoding method and device based on two-dimensional regularized planar projection
WO2023278829A1 (en) Attribute coding in geometry point cloud coding
WO2023028177A1 (en) Attribute coding in geometry point cloud coding
AU2012292957A1 (en) A method of processing information that is indicative of a shape
US20230306683A1 (en) Mesh patch sub-division
US11915373B1 (en) Attribute value compression for a three-dimensional mesh using geometry information to guide prediction
WO2024074123A1 (en) Method, apparatus, and medium for point cloud coding
US20230306641A1 (en) Mesh geometry coding
WO2023179710A1 (zh) Encoding method and terminal
CN117751387A (zh) Facet mesh connectivity coding

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20900546

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 21/11/2022)

122 Ep: pct application non-entry in european phase

Ref document number: 20900546

Country of ref document: EP

Kind code of ref document: A1