WO2020147379A1 - Point cloud filtering method and device, and storage medium

Info

Publication number: WO2020147379A1
Application number: PCT/CN2019/115778
Authority: WIPO (PCT)
Prior art keywords: point cloud, point, current, adjacent, points
Other languages: English (en), Chinese (zh)
Inventors: 蔡康颖, 张德军
Original assignee: Huawei Technologies Co., Ltd. (华为技术有限公司)
Application filed by Huawei Technologies Co., Ltd.
Publication of WO2020147379A1

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00: Manipulating 3D models or images for computer graphics
    • G06T19/20: Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts

Definitions

  • This application relates to the field of data processing technology, and in particular to a point cloud filtering method, device and storage medium.
  • The collection of point clouds has become more and more convenient, the quality of the collected point clouds has become higher and higher, and their scale has become larger and larger.
  • The collected point cloud needs to be segmented to obtain multiple point cloud blocks, a point cloud occupancy map is generated from the multiple point cloud blocks, and the point cloud occupancy map is down-sampled. The point cloud geometry is then reconstructed from the down-sampled point cloud occupancy map to obtain the reconstructed point cloud.
  • Each point cloud block has some reconstruction error, which increases the distance between two adjacent point cloud blocks in the reconstructed point cloud and leaves gaps between them. Since defects such as noise points and gaps reduce the quality of the reconstructed point cloud, the reconstructed point cloud needs to be filtered to remove these defects and improve its quality.
  • One point cloud filtering method works as follows (see the sketch below): the boundary points of each point cloud block in the reconstructed point cloud are determined from the point cloud occupancy map; the bounding box of the reconstructed point cloud is divided into multiple three-dimensional grids, and the reconstructed points falling into each grid are determined so that the centroid position of each grid can be computed. For any boundary point of a point cloud block, the grid into which the boundary point falls is determined, along with at least one grid adjacent to it; the centroid position of this at least one grid serves as the centroid of the boundary point's neighborhood of reconstructed points. The distance between the position of the boundary point and this centroid position is then computed, and if the distance is greater than a distance threshold, the position of the boundary point is updated with the centroid position of the at least one grid.
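  • A minimal sketch of this grid-based scheme in Python, assuming axis-aligned cubic grid cells; the function and parameter names, the grid size, and the distance threshold are illustrative, not taken from the patent:

```python
import numpy as np

def grid_centroid_filter(points, boundary_idx, grid_size=8, dist_threshold=2.0):
    """Illustrative sketch of the grid-based filter described above.
    points: (N, 3) float array of reconstructed points;
    boundary_idx: indices of the boundary points to filter."""
    mins = points.min(axis=0)
    # Map every reconstructed point to the 3D grid cell it falls into.
    cells = np.floor((points - mins) / grid_size).astype(int)
    # Accumulate per-cell sums so each cell's centroid can be computed.
    cell_sum, cell_cnt = {}, {}
    for p, c in zip(points, map(tuple, cells)):
        cell_sum[c] = cell_sum.get(c, 0.0) + p
        cell_cnt[c] = cell_cnt.get(c, 0) + 1

    for i in boundary_idx:
        c = tuple(cells[i])
        # Gather the boundary point's own cell plus its 26 neighbouring cells.
        s, n = np.zeros(3), 0
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                for dz in (-1, 0, 1):
                    k = (c[0] + dx, c[1] + dy, c[2] + dz)
                    if k in cell_cnt:
                        s += cell_sum[k]
                        n += cell_cnt[k]
        centroid = s / n
        # Update the boundary point only when it strays from the centroid.
        if np.linalg.norm(points[i] - centroid) > dist_threshold:
            points[i] = centroid
    return points
```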
  • In this method, both the division into three-dimensional grids and the search for adjacent grids are carried out in three-dimensional space, and the centroid position of every grid must be computed even though most of these centroids are never used afterwards. The process is therefore rather complex, and additional memory is needed to store the centroid position of every grid.
  • The present application provides a point cloud filtering method, device, and storage medium, which help improve the efficiency of point cloud filtering.
  • In a first aspect, a point cloud filtering method is provided, which includes: determining the adjacent point cloud block of the current point cloud block from one or more point cloud blocks included in the reconstructed point cloud; determining, through the projection plane corresponding to the adjacent point cloud block, one or more adjacent reconstruction points of the current boundary point in the current point cloud block; and filtering the current point cloud block according to the one or more adjacent reconstruction points of the current boundary point.
  • The execution subject in the first aspect or any possible design of the first aspect may be an encoder or a decoder.
  • The reconstructed point cloud may be a point cloud obtained by reconstructing the geometry of the point cloud of the current frame by the point cloud reconstruction module 113 in the encoder 100 shown in FIG. 2.
  • The reconstructed point cloud may also be a point cloud obtained by reconstructing the geometry of the point cloud of the current frame by the point cloud reconstruction module 206 in the decoder 200 shown in FIG. 6.
  • The one or more point cloud blocks may be all the point cloud blocks in the reconstructed point cloud; of course, they may also be only some of the point cloud blocks in the reconstructed point cloud.
  • The current point cloud block may be any one of the one or more point cloud blocks included in the reconstructed point cloud, or it may be a specified one of those point cloud blocks.
  • The adjacent point cloud block of the current point cloud block is a point cloud block that has an adjacency relationship with the current point cloud block in three-dimensional space.
  • The current boundary point can be any boundary point in the current point cloud block, or it can be a specified boundary point in the current point cloud block.
  • The current projection plane can be determined by the adjacent point cloud block; that is, it is the projection plane corresponding to the adjacent point cloud block.
  • The current point cloud block is filtered according to the one or more adjacent reconstruction points of the current boundary point to obtain a smooth reconstructed point cloud. Because this point cloud filtering method determines the adjacent reconstruction points of the current boundary point in three-dimensional space through a projection plane in two-dimensional space, the process of determining the adjacent reconstruction points is simpler, which reduces the complexity of filtering and improves coding efficiency.
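  • The overall flow can be pictured with the following self-contained toy pipeline. It is only a sketch: all names and thresholds are invented, and for brevity the neighbour search is done with a plain 3D distance test instead of the 2D projection-plane search described above; the plane-based steps are sketched separately below.

```python
import numpy as np

def aabb(points):
    """Axis-aligned bounding box of one patch's reconstructed points."""
    return points.min(axis=0), points.max(axis=0)

def overlap(a, b):
    """Two axis-aligned boxes overlap iff they overlap on every axis."""
    return bool(np.all(a[0] <= b[1]) and np.all(b[0] <= a[1]))

def filter_reconstructed_cloud(patches, boundary, dist_thr=2.0, move_thr=1.0):
    """patches: list of (K, 3) float arrays (the point cloud blocks);
    boundary[i]: row indices of the boundary points of patch i."""
    boxes = [aabb(p) for p in patches]
    for i, patch in enumerate(patches):
        # Adjacent point cloud blocks: those whose bounding box overlaps ours.
        neigh = [j for j in range(len(patches))
                 if j != i and overlap(boxes[i], boxes[j])]
        if not neigh:
            continue
        candidates = np.vstack([patches[j] for j in neigh])
        for b in boundary[i]:
            bp = patch[b]
            # Adjacent reconstruction points of the current boundary point.
            adj = candidates[np.linalg.norm(candidates - bp, axis=1) < dist_thr]
            if len(adj) == 0:
                continue
            centroid = adj.mean(axis=0)
            # Update the boundary point only when it strays from the centroid.
            if np.linalg.norm(bp - centroid) > move_thr:
                patch[b] = centroid
    return patches
```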
  • The current point cloud block may contain adjacent reconstruction points of the current boundary point, and the adjacent point cloud blocks of the current point cloud block may also contain adjacent reconstruction points of the current boundary point.
  • The adjacent reconstruction points of the current boundary point may also be determined only from the adjacent point cloud blocks of the current point cloud block.
  • In a design, determining one or more adjacent reconstruction points of the current boundary point in the current point cloud block through the projection plane corresponding to the adjacent point cloud block includes: determining, in the projection plane corresponding to the adjacent point cloud block, M adjacent pixels of the current pixel, where the current boundary point corresponds to the current pixel in that projection plane and M is a positive integer; and determining, according to the M adjacent pixels of the current pixel, the L adjacent reconstruction points of the current boundary point, where L is a positive integer.
  • The correspondence between the current boundary point and the current pixel is a correspondence under the projection relationship.
  • Saying that the current pixel corresponds to the current boundary point indicates that the current pixel is the pixel corresponding to the current boundary point on the projection plane corresponding to the adjacent point cloud block.
  • The L adjacent reconstruction points of the current boundary point are the one or more adjacent reconstruction points of the current boundary point; that is, L is an integer greater than or equal to 1.
  • Determining the M adjacent pixels of the current pixel in the projection plane corresponding to the adjacent point cloud block includes: when the current boundary point is projected onto the projection plane corresponding to the adjacent point cloud block, obtaining the projected projection plane, where the projected projection plane includes the current pixel corresponding to the current boundary point and the Q pixels corresponding to the P reconstructed points in the adjacent point cloud block, P and Q being positive integers; and determining the M adjacent pixels of the current pixel from the projected projection plane, where the M adjacent pixels are included in the Q pixels corresponding to the P reconstructed points of the adjacent point cloud block.
  • Multiple points in three-dimensional space may correspond to the same point on a two-dimensional plane; that is, multiple reconstructed points in the reconstructed point cloud may correspond to the same pixel on the two-dimensional plane.
  • There are P reconstructed points in the adjacent point cloud block, and these P reconstructed points may correspond to Q pixels on the projection plane corresponding to the adjacent point cloud block, where Q may be equal to or less than P.
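  • As an illustration, the search for the M adjacent pixels can be a small window scan over the occupied pixels of the neighbour patch's projection plane. The sketch below assumes the plane is stored as a 2D boolean occupancy map; names and the window size are illustrative:

```python
def adjacent_pixels(u, v, occupied, window=1):
    """Return the occupied pixels within a (2*window+1)^2 neighbourhood of the
    current pixel (u, v). `occupied` is a 2D boolean map marking the Q pixels
    onto which the patch's P reconstructed points project (several 3D points
    may share one pixel, so Q <= P)."""
    h, w = len(occupied), len(occupied[0])
    out = []
    for du in range(-window, window + 1):
        for dv in range(-window, window + 1):
            if du == 0 and dv == 0:
                continue  # skip the current pixel itself
            uu, vv = u + du, v + dv
            if 0 <= uu < h and 0 <= vv < w and occupied[uu][vv]:
                out.append((uu, vv))
    return out  # the M adjacent pixels of the current pixel
```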
  • In a design, determining the L adjacent reconstruction points of the current boundary point according to the M adjacent pixels of the current pixel includes: determining, from the N first candidate reconstruction points, the first candidate reconstruction points whose corresponding first depth difference is less than a depth threshold as adjacent reconstruction points of the current boundary point. Here, the first depth difference is the difference between the first depth and the depth of each of the N first candidate reconstruction points relative to the projection plane corresponding to the adjacent point cloud block; the first depth is the depth of the current boundary point relative to the projection plane corresponding to the adjacent point cloud block; the N first candidate reconstruction points are the reconstruction points corresponding to the M adjacent pixels in the reconstructed point cloud; and N is a positive integer.
  • The N first candidate reconstruction points are the reconstruction points corresponding to the M adjacent pixels in the reconstructed point cloud, and, as described above, multiple reconstructed points in the reconstructed point cloud may correspond to the same pixel on the two-dimensional plane; the multiple reconstructed points corresponding to one pixel have different depths relative to the projection plane corresponding to the adjacent point cloud block. Therefore, to improve the accuracy of determining the adjacent reconstruction points of the current boundary point, one or more of them can be selected from the N first candidate reconstruction points by the first depth difference.
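  • A minimal sketch of this depth-based selection; the names are illustrative, and `candidates` pairing each first candidate reconstruction point with its depth relative to the projection plane of the adjacent point cloud block is an assumed input format:

```python
def select_by_depth(boundary_depth, candidates, depth_threshold):
    """Keep the candidates whose depth relative to the neighbour patch's
    projection plane is close to the boundary point's depth.
    candidates: list of (point_index, depth) pairs for the N first
    candidate reconstruction points."""
    return [idx for idx, d in candidates
            if abs(boundary_depth - d) < depth_threshold]
```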
  • In another design, determining the L adjacent reconstruction points of the current boundary point according to the M adjacent pixels of the current pixel includes: determining, from the N first candidate reconstruction points, the first candidate reconstruction points whose corresponding first distance is less than a first distance threshold as adjacent reconstruction points of the current boundary point, where the first distance is the distance between the current boundary point and each of the N first candidate reconstruction points, the N first candidate reconstruction points are the reconstruction points corresponding to the M adjacent pixels in the reconstructed point cloud, and N is a positive integer.
  • Here, too, the N first candidate reconstruction points are the reconstruction points corresponding to the M adjacent pixels in the reconstructed point cloud, and, as described above, multiple reconstructed points in the reconstructed point cloud may correspond to the same pixel on the two-dimensional plane; the multiple reconstructed points corresponding to one pixel have different depths relative to the projection plane corresponding to the adjacent point cloud block.
  • The first distance can be computed in two-dimensional space: it is determined from the two-dimensional coordinates of the current pixel, the depth of the current boundary point relative to the projection plane corresponding to the adjacent point cloud block, and the M adjacent pixels; one or more adjacent reconstruction points of the current boundary point are then selected from the N first candidate reconstruction points by the first distance.
  • The first distance can also be computed in three-dimensional space. Specifically, it can be determined from the three-dimensional coordinates of the current boundary point and the three-dimensional coordinates of each of the N first candidate reconstruction points; one or more adjacent reconstruction points of the current boundary point are then selected from the N first candidate reconstruction points by the first distance.
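  • The three-dimensional variant is the simpler of the two to picture; a sketch with illustrative names:

```python
import numpy as np

def select_by_distance(boundary_xyz, candidate_xyz, first_distance_threshold):
    """Keep the first candidate reconstruction points whose 3D distance to the
    current boundary point is below the first distance threshold.
    candidate_xyz: (N, 3) array of the N first candidate reconstruction points."""
    first_distance = np.linalg.norm(candidate_xyz - boundary_xyz, axis=1)
    return candidate_xyz[first_distance < first_distance_threshold]
```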
  • Alternatively, the N first candidate reconstruction points, i.e. the reconstruction points corresponding to the M adjacent pixels in the reconstructed point cloud, can be directly determined as the L adjacent reconstruction points of the current boundary point; in this case, N and L are equal.
  • Both the current point cloud block and the adjacent point cloud block may contain adjacent reconstruction points of the current boundary point. Therefore, in a possible situation, the adjacent reconstruction points of the current boundary point can be determined from both the current point cloud block and the adjacent point cloud block.
  • In another design, determining one or more adjacent reconstruction points of the current boundary point in the current point cloud block through the projection plane corresponding to the adjacent point cloud block includes: determining S adjacent pixels of the current pixel in the projection plane corresponding to the current point cloud block and in the projection plane corresponding to the adjacent point cloud block, where the current boundary point corresponds to the current pixel in the projection plane corresponding to the adjacent point cloud block and S is a positive integer; and determining, according to the S adjacent pixels, U adjacent reconstruction points of the current boundary point, where U is a positive integer.
  • The U adjacent reconstruction points of the current boundary point are the one or more adjacent reconstruction points of the current boundary point; that is, U is an integer greater than or equal to 1.
  • Determining the S adjacent pixels of the current pixel in the projection plane corresponding to the current point cloud block and in the projection plane corresponding to the adjacent point cloud block includes: when the current boundary point is projected onto the projection plane corresponding to the adjacent point cloud block, obtaining the projected projection plane, where the projected projection plane includes the current pixel corresponding to the current boundary point and the Q pixels corresponding to the P reconstructed points in the adjacent point cloud block, P and Q being positive integers; determining, from the projection plane corresponding to the current point cloud block, the T adjacent pixels of the current pixel i obtained by projecting the current boundary point onto that plane; and determining, from the projected projection plane corresponding to the adjacent point cloud block, the M adjacent pixels of the current pixel j obtained by projecting the current boundary point onto that plane. The T adjacent pixels are included in the Y pixels corresponding to the X reconstructed points included in the current point cloud block, and the M adjacent pixels are included in the Q pixels corresponding to the P reconstructed points included in the adjacent point cloud block; the S adjacent pixels consist of the T adjacent pixels and the M adjacent pixels.
  • In this design, the current boundary point is projected both onto the projection plane corresponding to the current point cloud block and onto the projection plane corresponding to the adjacent point cloud block of the current point cloud block. That is, the current boundary point has a current pixel on the projection plane corresponding to the current point cloud block and another current pixel on the projection plane corresponding to the adjacent point cloud block.
  • For ease of description, the pixel obtained by projecting the current boundary point onto the projection plane corresponding to the current point cloud block is called current pixel i, and the pixel obtained by projecting the current boundary point onto the projection plane corresponding to the adjacent point cloud block of the current point cloud block is called current pixel j.
  • In a design, determining the U adjacent reconstruction points of the current boundary point according to the S adjacent pixels includes: determining, from the N first candidate reconstruction points, the first candidate reconstruction points whose corresponding first depth difference is less than the depth threshold as adjacent reconstruction points of the current boundary point; and determining, from the E second candidate reconstruction points, the second candidate reconstruction points whose corresponding second depth difference is less than the depth threshold as adjacent reconstruction points of the current boundary point. Here, the first depth difference is the difference between the first depth and the depth of each of the N first candidate reconstruction points relative to the projection plane corresponding to the adjacent point cloud block, and the second depth difference is the difference between the second depth and the depth of each of the E second candidate reconstruction points relative to the projection plane corresponding to the current point cloud block. The first depth is the depth of the current boundary point relative to the projection plane corresponding to the adjacent point cloud block, and the second depth is the depth of the current boundary point relative to the projection plane corresponding to the current point cloud block. The N first candidate reconstruction points are the reconstruction points corresponding to the M adjacent pixels in the reconstructed point cloud, the E second candidate reconstruction points are the reconstruction points corresponding to the T adjacent pixels in the reconstructed point cloud, and N and T are positive integers.
  • In another design, determining the U adjacent reconstruction points of the current boundary point according to the S adjacent pixels includes: determining, from the N first candidate reconstruction points, the first candidate reconstruction points whose corresponding first distance is less than the first distance threshold as adjacent reconstruction points of the current boundary point; and determining, from the E second candidate reconstruction points, the second candidate reconstruction points whose corresponding second distance is less than the first distance threshold as adjacent reconstruction points of the current boundary point. Here, the first distance is the distance between the current boundary point and each of the N first candidate reconstruction points, and the second distance is the distance between the current boundary point and each of the E second candidate reconstruction points. The N first candidate reconstruction points are the reconstruction points corresponding to the M adjacent pixels in the reconstructed point cloud, the E second candidate reconstruction points are the reconstruction points corresponding to the T adjacent pixels in the reconstructed point cloud, and N and T are positive integers.
  • In another design, the reconstruction points corresponding to the S adjacent pixels in the reconstructed point cloud may be directly determined as the U adjacent reconstruction points of the current boundary point. The S adjacent pixels include the T adjacent pixels of the current pixel i obtained by projecting the current boundary point onto the projection plane corresponding to the current point cloud block and the M adjacent pixels of the current pixel j obtained by projecting the current boundary point onto the projection plane corresponding to the adjacent point cloud block. In this case, the E second candidate reconstruction points corresponding to the T adjacent pixels in the reconstructed point cloud and the N first candidate reconstruction points corresponding to the M adjacent pixels in the reconstructed point cloud together form the U adjacent reconstruction points of the current boundary point; that is, the sum of E and N is U.
  • In a design, determining the adjacent point cloud block of the current point cloud block includes: determining the bounding box of each point cloud block in the one or more point cloud blocks; and determining, from the one or more point cloud blocks, the point cloud blocks whose bounding boxes overlap the bounding box of the current point cloud block as adjacent point cloud blocks of the current point cloud block.
  • A bounding box is a geometrically simple body whose volume is slightly larger than that of the point cloud block it encloses.
  • The bounding box can enclose all the reconstructed points included in the point cloud block.
  • The bounding box may be a geometric body including multiple planes, which is not specifically limited in the embodiments of the present application.
  • The bounding box may be a hexahedron or the like.
  • Each point cloud block is composed of one or more reconstructed points in three-dimensional space, and these reconstructed points are usually discretely distributed. Determining the bounding box of each point cloud block therefore partitions the three-dimensional space by point cloud block, and two point cloud blocks whose bounding boxes overlap can be regarded as adjacent point cloud blocks. Under this condition, the point cloud blocks whose bounding boxes overlap the bounding box of the current point cloud block are determined to be the adjacent point cloud blocks of the current point cloud block, which makes the process of determining the adjacent point cloud blocks more convenient (see the sketch below).
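  • A sketch of this overlap test, assuming axis-aligned bounding boxes and patches stored as (K, 3) coordinate arrays; the names are illustrative:

```python
import numpy as np

def patch_bounding_box(points):
    """Axis-aligned bounding box (AABB) of a patch given as a (K, 3) array."""
    return points.min(axis=0), points.max(axis=0)

def boxes_overlap(box_a, box_b):
    """Two AABBs overlap exactly when their intervals overlap on every axis."""
    (min_a, max_a), (min_b, max_b) = box_a, box_b
    return bool(np.all(min_a <= max_b) and np.all(min_b <= max_a))

def adjacent_patches(current_idx, patches):
    """Indices of the point cloud blocks adjacent to patches[current_idx]."""
    boxes = [patch_bounding_box(p) for p in patches]
    return [j for j in range(len(patches))
            if j != current_idx and boxes_overlap(boxes[current_idx], boxes[j])]
```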
  • In another design, determining the adjacent point cloud block of the current point cloud block includes: determining the expanded bounding box of each point cloud block in the one or more point cloud blocks, where the expanded bounding box is obtained by expanding the bounding box of each point cloud block; and determining, from the one or more point cloud blocks, the point cloud blocks whose expanded bounding boxes overlap the expanded bounding box of the current point cloud block as adjacent point cloud blocks of the current point cloud block.
  • The expanded bounding box is a bounding box obtained by expanding the bounding box of the point cloud block.
  • The expanded bounding box can be obtained by expanding the bounding box outward from its geometric center so that its volume grows by a preset ratio.
  • The preset ratio can be set according to usage requirements, which is not specifically limited in the embodiments of this application.
  • For example, the preset ratio may be 5%; that is, the expanded bounding box is obtained by divergently expanding the volume of the bounding box by 5% with the geometric center of the bounding box as the expansion center.
  • The expansion of the bounding box can also be implemented in other ways (one reading is sketched below).
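  • A sketch of one way to expand an axis-aligned bounding box about its geometric center. The patent states the preset ratio in terms of volume; for simplicity this sketch applies the ratio to the box extents, which is only one plausible reading:

```python
import numpy as np

def expand_box(box, ratio=0.05):
    """Expand an AABB about its geometric center. The 5% ratio here is applied
    to the half-extents rather than the volume; this is an assumption made
    for illustration, not the patent's exact rule."""
    mn, mx = box
    center = (mn + mx) / 2
    half = (mx - mn) / 2 * (1 + ratio)
    return center - half, center + half
```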
  • In another design, determining the adjacent point cloud block of the current point cloud block includes: determining the bounding box of each point cloud block in the one or more point cloud blocks and the three-dimensional space volume corresponding to the current boundary point, where the three-dimensional space volume is the spatial volume in which the adjacent reconstruction points of the current boundary point are located; and selecting, from the one or more point cloud blocks, the point cloud blocks whose bounding boxes overlap both the bounding box of the current point cloud block and the three-dimensional space volume corresponding to the current boundary point as adjacent point cloud blocks of the current point cloud block.
  • When the number of adjacent point cloud blocks of the current point cloud block is large, some of them clearly cannot contain adjacent reconstruction points of the current boundary point. Therefore, to reduce computational complexity, only the point cloud blocks whose bounding boxes overlap both the bounding box of the current point cloud block and the three-dimensional space volume corresponding to the current boundary point are selected, and the selected point cloud blocks are regarded as the adjacent point cloud blocks of the current point cloud block.
  • In another design, determining the adjacent point cloud block of the current point cloud block includes: determining the expanded bounding box of each point cloud block in the one or more point cloud blocks and the three-dimensional space volume corresponding to the current boundary point, where the expanded bounding box is obtained by expanding the bounding box of each point cloud block and the three-dimensional space volume is the spatial volume in which the adjacent reconstruction points of the current boundary point are located; and selecting, from the one or more point cloud blocks, the point cloud blocks whose expanded bounding boxes overlap both the expanded bounding box of the current point cloud block and the three-dimensional space volume corresponding to the current boundary point as adjacent point cloud blocks of the current point cloud block.
  • Here, too, when the number of adjacent point cloud blocks of the current point cloud block is large, some of them clearly cannot contain adjacent reconstruction points of the current boundary point, so selecting only the point cloud blocks whose expanded bounding boxes overlap both the expanded bounding box of the current point cloud block and the three-dimensional space volume corresponding to the current boundary point reduces the computational complexity.
  • In a design, filtering the current point cloud block according to the one or more adjacent reconstruction points of the current boundary point includes: determining the centroid position of the one or more adjacent reconstruction points of the current boundary point; and, if the distance between the centroid position and the position of the current boundary point is greater than a second distance threshold, updating the position of the current boundary point, where the updated position of the current boundary point corresponds to the centroid position (see the sketch below).
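  • A minimal sketch of this update rule, assuming the adjacent reconstruction points are given as an (L, 3) coordinate array; the names are illustrative:

```python
import numpy as np

def smooth_boundary_point(boundary_xyz, adjacent_xyz, second_dist_threshold):
    """Move the boundary point to the centroid of its adjacent reconstruction
    points when it lies too far from that centroid."""
    centroid = adjacent_xyz.mean(axis=0)
    if np.linalg.norm(boundary_xyz - centroid) > second_dist_threshold:
        return centroid       # updated position of the current boundary point
    return boundary_xyz       # otherwise the point is left unchanged
```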
  • The current boundary point is one boundary point of the current point cloud block, and the same filtering applies to every boundary point in the current point cloud block; the details of filtering each boundary point are not repeated here. Once every boundary point has been processed, the filtering of the current point cloud block is complete. The filtering of the current point cloud block likewise applies to the other point cloud blocks in the reconstructed point cloud, and after all the point cloud blocks in the reconstructed point cloud have been filtered, the filtering of the reconstructed point cloud is complete. The present application uses the current boundary point of the current point cloud block as the object for explaining its technical solution; the filtering process for the other boundary points of the current point cloud block is not repeated.
  • This application can reduce the complexity of point cloud filtering and improve coding and decoding efficiency for the entire filtering process of the reconstructed point cloud; that is, it can traverse the entire reconstructed point cloud and filter multiple or all boundary points in multiple or all point cloud blocks. The larger the point cloud data, the greater the complexity reduction achieved by the technical solution provided in this application.
  • In a second aspect, a point cloud coding method is provided, including: determining indication information used to indicate whether to process the reconstructed point cloud of the point cloud to be encoded according to a target filtering method, where the target filtering method includes any point cloud filtering method provided in the first aspect; and encoding the indication information into the code stream.
  • In a third aspect, a point cloud decoding method is provided, including: parsing a code stream to obtain indication information, the indication information being used to indicate whether to process the reconstructed point cloud of the point cloud to be decoded according to a target filtering method, where the target filtering method includes any one of the point cloud filtering methods provided in the first aspect; and, when the indication information indicates that the reconstructed point cloud of the point cloud to be decoded is to be processed according to the target filtering method, filtering the reconstructed point cloud of the point cloud to be decoded according to the target filtering method.
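  • Decoder-side use of the parsed indication information reduces to a conditional; a sketch with illustrative names (the parsing of the flag itself is assumed to have happened upstream):

```python
def apply_indicated_filtering(indication_flag, reconstructed_cloud, target_filter):
    """Apply the target filtering method only when the indication information
    parsed from the code stream says so."""
    if indication_flag:                 # parsed indication information
        return target_filter(reconstructed_cloud)
    return reconstructed_cloud          # left unfiltered otherwise
```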
  • In a fourth aspect, a point cloud filtering device is provided, including: a point set determining unit, configured to determine the adjacent point cloud block of the current point cloud block from one or more point cloud blocks included in the reconstructed point cloud and to determine, through the projection plane corresponding to the adjacent point cloud block, one or more adjacent reconstruction points of the current boundary point in the current point cloud block; and a filtering processing unit, configured to filter the current point cloud block according to the one or more adjacent reconstruction points of the current boundary point.
  • In another aspect, an encoder is provided, including: a point cloud filtering module for filtering the reconstructed point cloud of the point cloud to be encoded according to the target filtering method; and an auxiliary information encoding module for determining indication information and encoding it into the code stream, the indication information being used to indicate whether to process the reconstructed point cloud of the point cloud to be encoded according to the target filtering method.
  • The target filtering method includes the point cloud filtering method provided in the first aspect described above.
  • In another aspect, a decoder is provided, including: an auxiliary information decoding module, used to parse the code stream to obtain indication information, the indication information being used to indicate whether to process the reconstructed point cloud of the point cloud to be decoded according to the target filtering method, where the target filtering method includes any of the point cloud filtering methods provided in the first aspect; and a point cloud filtering module, used to filter the reconstructed point cloud of the point cloud to be decoded according to the target filtering method when the indication information indicates that the reconstructed point cloud is to be processed according to the target filtering method.
  • In another aspect, an encoder is provided, including: a point cloud filtering module, which is the point cloud filtering device provided in the fourth aspect; and a texture map generation module, used to generate a texture map of the point cloud to be encoded according to the filtered reconstructed point cloud. Specifically, the point cloud filtering module is used to determine the adjacent point cloud block of the current point cloud block from one or more point cloud blocks included in the reconstructed point cloud of the point cloud to be encoded, to determine, through the projection plane corresponding to the adjacent point cloud block, one or more adjacent reconstruction points of the current boundary point in the current point cloud block, and to filter the current point cloud block according to the one or more adjacent reconstruction points of the current boundary point; the texture map generation module is used to generate the texture map of the point cloud to be encoded according to the reconstructed point cloud after the filtering process.
  • In another aspect, a decoder is provided, including: a point cloud filtering module, which is the point cloud filtering device provided in the fourth aspect; and a texture information reconstruction module, used to reconstruct the texture information of the reconstructed point cloud after the filtering process. Specifically, the point cloud filtering module is used to determine the adjacent point cloud block of the current point cloud block from one or more point cloud blocks included in the reconstructed point cloud of the point cloud to be decoded, to determine, through the projection plane corresponding to the adjacent point cloud block, one or more adjacent reconstruction points of the current boundary point in the current point cloud block, and to filter the current point cloud block according to the set of adjacent reconstruction points of the current boundary point; the texture information reconstruction module is used to reconstruct the texture information of the filtered reconstructed point cloud.
  • The present application also provides a computer-readable storage medium, including program code which, when run on a computer, causes the computer to execute any point cloud filtering method provided in the first aspect and its possible designs.
  • The present application also provides a computer-readable storage medium, including program code which, when run on a computer, causes the computer to execute the point cloud coding method provided in the second aspect.
  • The present application also provides a computer-readable storage medium, including program code which, when run on a computer, causes the computer to execute the point cloud decoding method provided in the third aspect.
  • FIG. 1 is a schematic block diagram of a point cloud decoding system provided by an embodiment of this application;
  • FIG. 2 is a schematic block diagram of an encoder that can be used in an embodiment of this application;
  • FIG. 3 is a schematic diagram of a point cloud applicable to an embodiment of this application;
  • FIG. 4 is a schematic diagram of a point cloud patch applicable to an embodiment of this application;
  • FIG. 5 is a schematic diagram of an occupancy map of a point cloud applicable to an embodiment of this application;
  • FIG. 6 is a schematic block diagram of a decoder that can be used in an embodiment of this application;
  • FIG. 7 is a schematic flowchart of a point cloud filtering method provided by an embodiment of this application;
  • FIG. 8 is a two-dimensional schematic diagram of an implementation manner for determining adjacent point cloud blocks of a current point cloud block, together with a summary table of corresponding description information, provided by an embodiment of this application;
  • FIG. 9 is a schematic diagram of determining M adjacent pixels of a current pixel provided by an embodiment of this application;
  • FIG. 10 is a first schematic diagram of determining adjacent reconstruction points of a current boundary point according to an embodiment of this application;
  • FIG. 11 is a second schematic diagram of determining adjacent reconstruction points of a current boundary point according to an embodiment of this application;
  • FIG. 12 is a third schematic diagram of determining adjacent reconstruction points of a current boundary point according to an embodiment of this application;
  • FIG. 13 is a fourth schematic diagram of determining adjacent reconstruction points of a current boundary point according to an embodiment of this application;
  • FIG. 14 is a schematic flowchart of a point cloud encoding method provided by an embodiment of this application;
  • FIG. 15 is a schematic flowchart of a point cloud decoding method provided by an embodiment of this application;
  • FIG. 16 is a schematic block diagram of a point cloud filtering device provided by an embodiment of this application;
  • FIG. 17 is a schematic block diagram of a first encoder provided by an embodiment of this application;
  • FIG. 18 is a schematic block diagram of a first decoder provided by an embodiment of this application;
  • FIG. 19 is a schematic block diagram of a second encoder provided by an embodiment of this application;
  • FIG. 20 is a schematic block diagram of a second decoder provided by an embodiment of this application;
  • FIG. 21 is a schematic block diagram of an implementation manner of a decoding device used in an embodiment of this application.
  • FIG. 1 is a schematic block diagram of a point cloud decoding system provided by an embodiment of the application.
  • The term "point cloud decoding" or "decoding" may generally refer to point cloud encoding or point cloud decoding.
  • The point cloud decoding system includes a source device 10, a destination device 20, a link 30, and a storage device 40.
  • The source device 10 can generate encoded point cloud data; therefore, the source device 10 may also be referred to as a point cloud encoding device.
  • The destination device 20 can decode the encoded point cloud data generated by the source device 10; therefore, the destination device 20 may also be referred to as a point cloud decoding device.
  • The link 30 can receive the encoded point cloud data generated by the source device 10 and transmit it to the destination device 20.
  • The storage device 40 can receive the encoded point cloud data generated by the source device 10 and store it, so that the destination device 20 can directly obtain the encoded point cloud data from the storage device 40.
  • The storage device 40 may correspond to a file server or another intermediate storage device that can store the encoded point cloud data generated by the source device 10, so that the destination device 20 may stream or download the encoded point cloud data stored on the storage device 40.
  • Both the source device 10 and the destination device 20 may include one or more processors and a memory coupled to the one or more processors.
  • The memory may include random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory, or any other medium that can be used to store the desired program code in the form of instructions or data structures accessible by a computer.
  • Both the source device 10 and the destination device 20 may comprise desktop computers, mobile computing devices, notebook (e.g., laptop) computers, tablet computers, set-top boxes, telephone handsets such as so-called "smart" phones, televisions, cameras, display devices, digital media players, video game consoles, onboard computers, or the like.
  • The link 30 may include one or more media or devices capable of transmitting the encoded point cloud data from the source device 10 to the destination device 20.
  • The link 30 may include one or more communication media that enable the source device 10 to send the encoded point cloud data directly to the destination device 20 in real time.
  • The source device 10 may modulate the encoded point cloud data according to a communication standard, which may be a wireless communication protocol or the like, and may send the modulated point cloud data to the destination device 20.
  • The one or more communication media may include wireless and/or wired communication media.
  • the one or more communication media may include a radio frequency (RF) spectrum or one or more physical transmission lines.
  • The one or more communication media may form part of a packet-based network, and the packet-based network may be a local area network, a wide area network, or a global network (for example, the Internet).
  • The one or more communication media may include routers, switches, base stations, or other devices that facilitate communication from the source device 10 to the destination device 20, which is not specifically limited in the embodiments of the present application.
  • The storage device 40 may store the received encoded point cloud data sent by the source device 10, and the destination device 20 may directly obtain the encoded point cloud data from the storage device 40.
  • The storage device 40 may include any of a variety of distributed or locally accessed data storage media, for example a hard disk drive, a Blu-ray disc, a digital versatile disc (DVD), a compact disc read-only memory (CD-ROM), flash memory, volatile or non-volatile memory, or any other suitable digital storage medium for storing encoded point cloud data.
  • The storage device 40 may correspond to a file server or another intermediate storage device that can store the encoded point cloud data generated by the source device 10, and the destination device 20 may obtain the stored encoded point cloud data via streaming or downloading.
  • The file server may be any type of server capable of storing the encoded point cloud data and transmitting it to the destination device 20.
  • The file server may include a network server, a file transfer protocol (FTP) server, a network attached storage (NAS) device, or a local disk drive.
  • The destination device 20 can obtain the encoded point cloud data through any standard data connection (including an Internet connection).
  • Any standard data connection can include a wireless channel (e.g., a Wi-Fi connection), a wired connection (e.g., a digital subscriber line (DSL), a cable modem, etc.), or a combination of the two that is suitable for obtaining the encoded point cloud data stored on a file server.
  • The transmission of the encoded point cloud data from the storage device 40 may be a streaming transmission, a download transmission, or a combination of both.
  • The point cloud decoding system shown in FIG. 1 is only one possible implementation. The technology of the present application is applicable not only to the source device 10, which can encode point clouds, and the destination device 20, which can decode encoded point cloud data, but also to other devices that can encode point clouds and decode encoded point cloud data; this is not specifically limited in the embodiments of the present application.
  • The source device 10 includes a data source 120, an encoder 100, and an output interface 140.
  • The output interface 140 may include a modulator/demodulator (modem) and/or a transmitter.
  • The data source 120 may include a point cloud capture device (for example, a camera), a point cloud archive containing previously captured point cloud data, a point cloud feed interface for receiving point cloud data from a point cloud content provider, a computer graphics system used to generate point cloud data, or a combination of these sources of point cloud data.
  • The data source 120 may send a point cloud to the encoder 100, and the encoder 100 may encode the point cloud received from the data source 120 to obtain encoded point cloud data.
  • The encoder can send the encoded point cloud data to the output interface.
  • The source device 10 directly transmits the encoded point cloud data to the destination device 20 via the output interface 140.
  • The encoded point cloud data may also be stored on the storage device 40 for later retrieval by the destination device 20 for decoding and/or playback.
  • The destination device 20 includes an input interface 240, a decoder 200, and a display device 220.
  • The input interface 240 includes a receiver and/or a modem.
  • The input interface 240 can receive the encoded point cloud data via the link 30 and/or from the storage device 40 and then send it to the decoder 200.
  • The decoder 200 can decode the received encoded point cloud data to obtain the decoded point cloud data.
  • The decoder may send the decoded point cloud data to the display device 220.
  • The display device 220 may be integrated with the destination device 20 or may be external to it; generally, the display device 220 displays the decoded point cloud data.
  • The display device 220 may be any one of various types of display devices.
  • The display device 220 may be a liquid crystal display (LCD), a plasma display, an organic light-emitting diode (OLED) display, or another type of display device.
  • The encoder 100 and the decoder 200 may each be integrated with an audio encoder and decoder, and may include an appropriate multiplexer-demultiplexer (MUX-DEMUX) unit or other hardware and software to encode both audio and video in a common data stream or in separate data streams.
  • the MUX-DEMUX unit may conform to the ITU H.223 multiplexer protocol, or other protocols such as the user datagram protocol (UDP).
  • Each of the encoder 100 and the decoder 200 may be implemented as any of the following circuits: one or more microprocessors, digital signal processors (DSPs), application-specific integrated circuits (ASICs), field-programmable gate arrays (FPGAs), discrete logic, hardware, or any combination thereof.
  • If the technology is implemented partially in software, the device may store the instructions for the software in a suitable non-volatile computer-readable storage medium and may use one or more processors to execute the instructions in hardware so as to implement the technology of this application. Any of the foregoing (including hardware, software, a combination of hardware and software, etc.) can be regarded as one or more processors.
  • Each of the encoder 100 and the decoder 200 may be included in one or more encoders or decoders, and either of them may be integrated as part of a combined encoder/decoder (codec) in the corresponding device.
  • This application may generally refer to the encoder 100 as "signaling" or "sending" certain information to another device such as the decoder 200.
  • The term "signaling" or "sending" may generally refer to the transmission of syntax elements and/or other data used to decode the encoded point cloud data. This transfer can occur in real time or almost in real time. Alternatively, this communication may occur after a period of time, for example, when the syntax element is stored in the encoded bitstream on a computer-readable storage medium during encoding; the decoding device may then retrieve the syntax element at any time after the syntax element has been stored on this medium.
  • FIG. 2 is a schematic block diagram of an encoder 100 provided by an embodiment of the application.
  • FIG. 2 illustrates the MPEG (Moving Picture Experts Group) point cloud compression (PCC) coding framework as an example.
  • The encoder 100 may include a point cloud block (patch) information generation module 101, a packing module 102, a depth map generation module 103, a texture map generation module 104, a depth map filling module 105, a texture map filling module 106, an image- or video-based encoding module 107, an occupancy map encoding module 108, an auxiliary information encoding module 109, a multiplexing module 110, a point cloud occupancy map down-sampling module 111, a point cloud occupancy map filling module 112, a point cloud reconstruction module 113, and a point cloud filtering module 114.
  • The patch information generation module 101 may receive one or more frames of point clouds sent by the data source 120.
  • For ease of description, the following description uniformly uses the current frame point cloud.
  • The patch information generation module 101 can determine the three-dimensional coordinates of each point of the current frame point cloud in the three-dimensional coordinate system and the normal direction vector of each point in three-dimensional space, and, according to each point's normal direction vector and the predefined projection planes, divide the current frame point cloud into multiple patches, where each patch includes one or more points of the current frame point cloud and is a connected region.
  • The predefined projection planes may be the planes of the bounding box of the current frame point cloud; that is, the planes included in the bounding box of the current frame point cloud can be used as the projection planes of the multiple patches, with each patch having one corresponding projection plane.
  • The patch information generation module 101 projects each of the multiple patches from three-dimensional space onto its corresponding projection plane, and the projection plane corresponding to each patch corresponds to an index.
  • In this way, the occupancy map of each patch and the depth map of each patch can be obtained.
  • The occupancy map of any patch may be a map composed of the pixels corresponding to the points included in the patch, obtained by projecting the patch onto its corresponding projection plane.
  • The patch information generation module 101 may also determine the depth of each point included in each patch relative to the corresponding projection plane, and the two-dimensional coordinates of each point included in each patch on the two-dimensional projection plane.
  • The pixels in the occupancy map of any patch are determined by converting the three-dimensional coordinates of the points included in the patch into two-dimensional coordinates on the corresponding projection plane (an illustration follows below).
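  • As an illustration, this 3D-to-2D conversion can be pictured as dropping the coordinate along the plane normal (which becomes the depth) and keeping the other two coordinates as the pixel position. The axis mapping below is a hypothetical example, not the patent's actual plane indexing:

```python
# Hypothetical plane indexing: for each projection-plane index, the axes
# playing the roles of (normal, tangent, bitangent). Illustrative only.
PLANE_AXES = {0: (0, 2, 1), 1: (1, 2, 0), 2: (2, 0, 1)}

def project_point(xyz, plane_index):
    """Convert a point's 3D coordinates into its 2D pixel coordinates (u, v)
    on the patch's projection plane plus its depth relative to that plane."""
    n_axis, t_axis, b_axis = PLANE_AXES[plane_index]
    u, v = int(xyz[t_axis]), int(xyz[b_axis])  # pixel position on the plane
    depth = xyz[n_axis]                        # depth relative to the plane
    return (u, v), depth
```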
  • The patch information generation module may or may not store the occupancy map of each patch.
  • The patch information generation module 101 can also send the spatial information of each patch, such as the normal direction vector of each point of the current frame point cloud in three-dimensional space, the coordinates of each patch in the three-dimensional coordinate system, the two-dimensional coordinates of each point of each patch projected onto the corresponding projection plane, the depth of each point of each patch relative to the corresponding projection plane, and the index of the projection plane corresponding to each patch, as auxiliary information to the auxiliary information encoding module 109 for encoding, which may also be referred to as compression encoding.
  • The patch information generation module 101 may also send the occupancy map of each patch and the spatial information of each patch to the packing module 102.
  • The patch information generation module 101 may also send the depth map of each patch to the depth map generation module 103.
  • The packing module 102 may pack the received occupancy maps and spatial information of the patches sent by the patch information generation module 101 to obtain the occupancy map of the current frame point cloud.
  • For example, the packing module 102 may arrange the patches in a specific order, for example in descending (or ascending) order of the width/height of the occupancy map of each patch, and then insert the occupancy map of each patch, in that order, into the available area of the occupancy map of the current frame point cloud, obtaining the occupancy map of the current frame point cloud and the patch packing information of the current frame point cloud.
  • the packing module 102 may send the occupancy map of the point cloud of the current frame to the point cloud occupancy map down-sampling module 111, and the packing module 102 may also send the patch packing information of the point cloud of the current frame to the depth map generating module 103 and the auxiliary information encoding module 109 .
  • FIG. 3 is a schematic diagram of a frame of point cloud, FIG. 4 is a schematic diagram of the patches of that point cloud, and FIG. 5 is a schematic diagram of the occupancy map of that point cloud obtained by the packing module 102 by projecting each patch shown in FIG. 4 onto its corresponding projection plane. The point cloud shown in FIG. 3 may be the current frame point cloud in the embodiments of this application, the patches shown in FIG. 4 may be the patches of the current frame point cloud, and the occupancy map shown in FIG. 5 may be the occupancy map of the current frame point cloud in the embodiments of this application.
  • The depth map generation module 103 may receive the patch packing information of the current frame point cloud sent by the packing module 102 and the depth map of each patch sent by the patch information generation module 101, and then generate the depth map of the current frame point cloud according to the patch packing information and the depth maps of the patches. The generated depth map of the current frame point cloud is then sent to the depth map filling module 105, which fills the blank pixels in it to obtain the filled depth map of the current frame point cloud.
  • The depth map filling module 105 may send the filled depth map of the current frame point cloud to the image- or video-based encoding module 107, which performs image- or video-based encoding on it to obtain the reconstructed depth map of the current frame point cloud and a code stream including the encoded depth map of the current frame point cloud. The reconstructed depth map of the current frame point cloud can be sent to the point cloud reconstruction module 113, and the code stream including the encoded depth map of the current frame point cloud can be sent to the multiplexing module 110.
  • The point cloud occupancy map down-sampling module 111 may perform down-sampling processing on the occupancy map of the current frame point cloud received from the packing module 102 to obtain a low-resolution occupancy map of the current frame point cloud. The down-sampling improves the efficiency of processing the occupancy map of the current frame point cloud by reducing its sampling points; the resolution of the occupancy map obtained after down-sampling is usually lower than the resolution before down-sampling. Afterwards, the point cloud occupancy map down-sampling module 111 may send the low-resolution occupancy map of the current frame point cloud to the occupancy map encoding module 108 and the point cloud occupancy map filling module 112.
  • the occupancy map encoding module 108 can encode the received low-resolution occupancy map of the current frame point cloud to obtain a code stream including the encoded low-resolution occupancy map of the current frame point cloud, and may send this code stream to the multiplexing module 110.
  • the point cloud occupancy map filling module 112 fills an occupancy map of the current frame point cloud at the original resolution according to the received low-resolution occupancy map of the current frame point cloud, so that the filled occupancy map of the current frame point cloud has the original resolution.
  • specifically, each pixel block of the original-resolution occupancy map of the current frame point cloud is filled with the value of the corresponding pixel block in the low-resolution occupancy map of the current frame point cloud, yielding the filled occupancy map of the current frame point cloud.
  • the point cloud occupancy map filling module 112 may also send the filled occupancy map of the point cloud of the current frame to the point cloud reconstruction module 113.
  • the point cloud reconstruction module 113 may reconstruct the geometry of the current frame point cloud, and output the reconstructed point cloud, based on the filled occupancy map of the current frame point cloud sent by the point cloud occupancy map filling module 112, the reconstructed depth map of the current frame point cloud sent by the image or video-based encoding module 107, and the auxiliary information (patch packing information and patch space information).
  • the point cloud reconstruction module 113 can also output the correspondence between the reconstruction points in the reconstructed point cloud and the patches, as well as the packing positions of the reconstruction points in the reconstructed point cloud.
  • the point cloud reconstruction module 113 may send the reconstructed point cloud and the correspondence between the reconstruction points in the reconstructed point cloud and the patches to the point cloud filtering module 114, and may send the packing positions of the reconstruction points in the reconstructed point cloud to the texture map generation module 104.
  • the point cloud filtering module 114 may filter the reconstructed point cloud after receiving, from the point cloud reconstruction module 113, the reconstructed point cloud and the correspondence between its reconstruction points and the patches. Specifically, defects such as obvious noise points and gaps in the reconstructed point cloud can be removed to obtain a filtered reconstructed point cloud, which may also be referred to as a smoothed reconstructed point cloud; in other words, the point cloud filtering module 114 can smooth the reconstructed point cloud. To do so, the point cloud filtering module 114 may determine the adjacent point cloud blocks of the current point cloud block from the one or more point cloud blocks included in the reconstructed point cloud, then determine, through the projection planes corresponding to the adjacent point cloud blocks, one or more adjacent reconstruction points of the current boundary point in the current point cloud block, and finally filter the current point cloud block according to the one or more adjacent reconstruction points of the current boundary point.
  • after the texture map generation module 104 receives the smoothed reconstructed point cloud sent by the point cloud filtering module 114, the packing positions of the reconstruction points sent by the point cloud reconstruction module 113, and the current frame point cloud sent by the data source 120, it can generate the texture map of the current frame point cloud from these inputs, and send the generated texture map of the current frame point cloud to the texture map filling module 106, which fills the blank pixels in the texture map to obtain the filled texture map of the current frame point cloud.
  • the texture map filling module 106 can send the filled texture map of the current frame point cloud to the image or video-based encoding module 107, which performs image or video-based encoding on the filled texture map to obtain a code stream including the encoded texture map of the current frame point cloud.
  • the image or video-based encoding module 107 may also send the obtained code stream including the reconstructed texture map of the current frame point cloud to the multiplexing module 110.
  • the image or video-based encoding module 107, the occupancy map encoding module 108, and the auxiliary information encoding module 109 can send their respective code streams to the multiplexing module 110, and the multiplexing module 110 can combine the received code streams into a combined code stream and send the combined code stream to the output interface 140.
  • the output interface 140 may send the combined code stream to the decoder 200.
  • the encoder 100 shown in FIG. 2 is only an example provided by the present application. In a specific implementation, the encoder 100 may include more or fewer modules than those shown in FIG. 2; this is not specifically limited in the embodiments of the present application.
  • FIG. 6 is a schematic block diagram of a decoder 200 provided by an embodiment of the application.
  • Figure 6 illustrates the MPEG PCC decoding framework as an example.
  • the decoder 200 may include a demultiplexing module 201, an image or video-based decoding module 202, an occupancy map decoding module 203, an auxiliary information decoding module 204, a point cloud occupancy map filling module 205, a point cloud reconstruction module 206, a point cloud filtering module 207, and a point cloud texture information reconstruction module 208.
  • the demultiplexing module 201 may receive, through the input interface, the combined code stream sent by the output interface 140 of the encoder 100, and send its sub-streams to the corresponding decoding modules. Specifically, the demultiplexing module 201 sends the code stream including the encoded texture map of the current frame point cloud and the code stream including the encoded depth map of the current frame point cloud to the image or video-based decoding module 202, sends the code stream including the encoded low-resolution occupancy map of the current frame point cloud to the occupancy map decoding module 203, and sends the code stream including the encoded auxiliary information to the auxiliary information decoding module 204.
  • the image or video-based decoding module 202 can decode the received code stream including the encoded texture map of the current frame point cloud and the code stream including the encoded depth map of the current frame point cloud to obtain the reconstructed texture map information and the reconstructed depth map information of the current frame point cloud; it can send the reconstructed texture map information of the current frame point cloud to the point cloud texture information reconstruction module 208, and send the reconstructed depth map information of the current frame point cloud to the point cloud reconstruction module 206.
  • the occupancy map decoding module 203 can decode the received code stream including the encoded low-resolution occupancy map of the current frame point cloud to obtain the reconstructed low-resolution occupancy map information of the current frame point cloud, and send it to the point cloud occupancy map filling module 205.
  • the point cloud occupancy map filling module 205 can obtain, from the reconstructed low-resolution occupancy map information, the reconstructed occupancy map information of the current frame point cloud at the original resolution, and then send the original-resolution occupancy map information to the point cloud reconstruction module 206.
  • the auxiliary information decoding module 204 may decode the received code stream including the encoded auxiliary information to obtain auxiliary information, and may send the auxiliary information to the point cloud reconstruction module 206.
  • the point cloud reconstruction module 206 can reconstruct the geometry of the current frame point cloud to obtain the reconstructed point cloud, based on the reconstructed depth map information of the current frame point cloud sent by the image or video-based decoding module 202, the reconstructed occupancy map information of the current frame point cloud sent by the point cloud occupancy map filling module 205, and the auxiliary information sent by the auxiliary information decoding module 204.
  • this reconstructed point cloud is similar to the reconstructed point cloud obtained by the point cloud reconstruction module 113 in the encoder 100; for the specific reconstruction process, reference may be made to the reconstruction process of the point cloud reconstruction module 113 in the encoder 100, and details are not repeated here.
  • the point cloud reconstruction module 206 may also send the reconstructed point cloud to the point cloud filtering module 207.
  • the point cloud filtering module 207 can filter the received reconstructed point cloud to obtain a smoothed reconstructed point cloud; for the specific filtering process, reference may be made to the filtering process of the point cloud filtering module 114 in the encoder 100, and details are not repeated here.
  • the point cloud filtering module 207 may send the smoothed reconstructed point cloud to the point cloud texture information reconstruction module 208.
  • after the point cloud texture information reconstruction module 208 receives the smoothed reconstructed point cloud sent by the point cloud filtering module 207 and the reconstructed texture map information of the current frame point cloud sent by the image or video-based decoding module 202, it can reconstruct the texture information of the reconstructed point cloud to obtain the texture-reconstructed point cloud.
  • the decoder 200 shown in FIG. 6 is only an example. In a specific implementation, the decoder 200 may include more or fewer modules than those shown in FIG. 6. This embodiment of the present application does not limit this.
  • any of the following point cloud filtering methods may be executed by the encoder 100 in the point cloud decoding system, and more specifically, by the point cloud filtering module 114 in the encoder 100; any of the following point cloud filtering methods may also be executed by the decoder 200 in the point cloud decoding system, and more specifically, by the point cloud filtering module 207 in the decoder 200.
  • FIG. 7 is a flowchart of a point cloud filtering method provided by an embodiment of the present application, and the method is applied to a point cloud decoding system. Referring to Figure 7, the method includes:
  • S701 Determine adjacent point cloud blocks of the current point cloud block from one or more point cloud blocks included in the reconstructed point cloud.
  • the reconstructed point cloud may be a point cloud obtained by reconstructing the geometry of the point cloud of the current frame by the point cloud reconstruction module 113 in the encoder 100 as shown in FIG. 2.
  • the reconstructed point cloud may also be a point cloud obtained by reconstructing the geometry of the point cloud of the current frame by the point cloud reconstruction module 206 in the decoder 200 as shown in FIG. 6.
  • the point cloud reconstruction module 113 in the encoder 100 shown in FIG. 2 reconstructs the geometry of the point cloud of the current frame, and the obtained reconstructed point cloud is used as an example for description.
  • the reconstructed point cloud includes one or more reconstructed points, the one or more reconstructed points are points that make up the reconstructed point cloud, and the one or more reconstructed points are points in a three-dimensional space.
  • Each of the one or more point cloud blocks included in the reconstructed point cloud may consist of one or more reconstructed points, and each of the one or more point cloud blocks is A connected area.
  • the one or more point cloud blocks may be all point cloud blocks in the reconstructed point cloud.
  • the one or more point cloud blocks may also be part of the point cloud blocks in the reconstructed point cloud.
  • the current point cloud block may be any one of the one or more point cloud blocks included in the reconstructed point cloud.
  • the current point cloud block may also be a specified point cloud block among the one or more point cloud blocks included in the reconstructed point cloud.
  • the adjacent point cloud block of the current point cloud block is a point cloud block that has an adjacent relationship with the current point cloud block in a three-dimensional space.
  • S701 can be implemented in any one of the following four manners; refer to FIG. 8, which provides a two-dimensional schematic diagram of the four implementation manners together with a summary table of the corresponding description information.
  • in the first implementation manner, S701 may include: determining the bounding box of each of the one or more point cloud blocks, and determining, from the one or more point cloud blocks, the point cloud blocks whose bounding boxes overlap the bounding box of the current point cloud block as the adjacent point cloud blocks of the current point cloud block.
  • the bounding box is a simple geometric body whose volume is slightly larger than that of the point cloud block.
  • the bounding box can enclose all the reconstructed points included in the point cloud block.
  • the bounding box may be a geometric body including multiple planes, which is not specifically limited in the embodiment of the present application.
  • the bounding box may be a hexahedron or the like.
  • since each point cloud block is composed of one or more reconstructed points in three-dimensional space, and these reconstructed points are usually discretely distributed in that space, the bounding box of each point cloud block is determined first, so that the three-dimensional space can be partitioned by the bounding boxes of the point cloud blocks.
  • two point cloud blocks whose bounding boxes overlap can be regarded as adjacent point cloud blocks; therefore, a point cloud block whose bounding box overlaps the bounding box of the current point cloud block can be determined to be an adjacent point cloud block of the current point cloud block, which makes the process of determining the adjacent point cloud blocks of the current point cloud block more convenient.
  • in the encoder 100 shown in FIG. 2, the bounding boxes of the one or more point cloud blocks included in the reconstructed point cloud can be input as one item of data to the point cloud filtering module 114; similarly, in the decoder 200 shown in FIG. 6, they may be input as one item of data to the point cloud filtering module 207.
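  • as an illustration only, the bounding-box overlap test of this implementation manner can be sketched in Python as follows; the axis-aligned box representation with min_corner/max_corner fields is an assumption of this sketch rather than something mandated by the embodiments:

```python
from dataclasses import dataclass

@dataclass
class BoundingBox:
    # Axis-aligned bounding box stored as its two extreme corners.
    min_corner: tuple  # (x_min, y_min, z_min)
    max_corner: tuple  # (x_max, y_max, z_max)

def boxes_overlap(a: BoundingBox, b: BoundingBox) -> bool:
    # Two axis-aligned boxes overlap iff their extents overlap on all three axes.
    return all(a.min_corner[i] <= b.max_corner[i] and
               b.min_corner[i] <= a.max_corner[i] for i in range(3))

def adjacent_blocks(current_idx: int, boxes: list) -> list:
    # Indices of the point cloud blocks whose bounding box overlaps
    # the bounding box of the current point cloud block.
    return [i for i, box in enumerate(boxes)
            if i != current_idx and boxes_overlap(boxes[current_idx], box)]
```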
  • in the second implementation manner, S701 may include: determining the extended bounding box of each of the one or more point cloud blocks, and determining, from the one or more point cloud blocks, the point cloud blocks whose extended bounding boxes overlap the extended bounding box of the current point cloud block as the adjacent point cloud blocks of the current point cloud block.
  • the expanded bounding box is a bounding box obtained by expanding the bounding box of the point cloud block.
  • the expanded bounding box can be obtained by divergently expanding the volume of the bounding box by a preset ratio, with the geometric center of the bounding box as the expansion center.
  • the preset ratio can be set according to the use requirements, which is not specifically limited in the embodiment of the application.
  • the preset ratio may be 5%, etc., that is, the expanded bounding box is obtained by divergently expanding the volume of the bounding box by 5% with the geometric center of the bounding box as the expansion center.
  • the expansion of the bounding box can also be implemented in other ways.
  • in the encoder 100, the extended bounding boxes of the one or more point cloud blocks included in the reconstructed point cloud can be input as one item of data to the point cloud filtering module 114; similarly, in the decoder 200, they may be input as one item of data to the point cloud filtering module 207.
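  • a minimal sketch of such an expansion, reusing the BoundingBox type from the sketch above; here the linear extents are scaled about the geometric center by the preset ratio (5% purely as an example; expanding the volume by a ratio would instead correspond to a linear factor of (1 + ratio)^(1/3)):

```python
def expand_box(box: BoundingBox, ratio: float = 0.05) -> BoundingBox:
    # Expand the box divergently about its geometric center.
    center = [(lo + hi) / 2 for lo, hi in zip(box.min_corner, box.max_corner)]
    half = [(hi - lo) / 2 * (1 + ratio)
            for lo, hi in zip(box.min_corner, box.max_corner)]
    return BoundingBox(
        min_corner=tuple(c - h for c, h in zip(center, half)),
        max_corner=tuple(c + h for c, h in zip(center, half)),
    )
```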
  • in the third implementation manner, S701 may include: determining the bounding box of each of the one or more point cloud blocks and the three-dimensional volume corresponding to the current boundary point, where the three-dimensional volume is the spatial volume in which the adjacent reconstruction points of the current boundary point are located; and selecting, from the one or more point cloud blocks, the point cloud blocks whose bounding boxes overlap both the bounding box of the current point cloud block and the three-dimensional volume corresponding to the current boundary point as the adjacent point cloud blocks of the current point cloud block.
  • the current boundary point may be any boundary point in the current point cloud block, or a specified boundary point in the current point cloud block. Since the current point cloud block is composed of one or more reconstructed points, all of which are points in three-dimensional space, and the boundary points of the current point cloud block are those reconstructed points located at the boundary of the current point cloud block, the boundary points of the current point cloud block, including the current boundary point, are also points in three-dimensional space.
  • the embodiments of the present application determine the boundary points of the current point cloud block on a two-dimensional plane. Specifically, for any reconstructed point in the current point cloud block, whether it is a boundary point can be judged according to whether the neighboring pixels of the pixel corresponding to that reconstructed point in the occupancy map of the current point cloud block are all valid pixels; that is, when those neighboring pixels are not all valid pixels, the reconstructed point can be determined to be a boundary point of the current point cloud block.
  • a valid pixel is a pixel whose corresponding reconstructed point belongs to the same point cloud block as the reconstructed point under consideration, namely the current point cloud block.
  • the boundary points of other point cloud blocks are also determined according to this method.
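  • purely as an illustration, the boundary-point test described above can be sketched on an occupancy map as follows; treating the occupancy map as a two-dimensional array of 0/1 values and using the 4-neighborhood are assumptions of this sketch:

```python
def is_boundary_pixel(occupancy, u, v):
    # A pixel is a boundary pixel of its patch if any of its neighbours
    # lies outside the map or is not a valid (occupied) pixel.
    h, w = len(occupancy), len(occupancy[0])
    for du, dv in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        nu, nv = u + du, v + dv
        if not (0 <= nu < h and 0 <= nv < w) or occupancy[nu][nv] == 0:
            return True
    return False
```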
  • when adjacency is determined only by bounding boxes, the number of adjacent point cloud blocks of the current point cloud block may be large, and some of these adjacent point cloud blocks may obviously contain no adjacent reconstruction points of the current boundary point.
  • since the three-dimensional volume corresponding to the current boundary point is the spatial volume in which the adjacent reconstruction points of the current boundary point are located, in order to reduce the computational complexity, the point cloud blocks selected from the one or more point cloud blocks must not only have bounding boxes that overlap the bounding box of the current point cloud block, but also overlap the three-dimensional volume corresponding to the current boundary point; the selected point cloud blocks are then regarded as the adjacent point cloud blocks of the current point cloud block.
  • the three-dimensional volume corresponding to the current boundary point may be a sphere centered at the current boundary point with the second distance threshold as its radius; it may also be, for example, a cube centered at the current boundary point with a side length of twice the second distance threshold.
  • in the encoder 100 shown in FIG. 2, the bounding boxes of the one or more point cloud blocks included in the reconstructed point cloud and the three-dimensional volume corresponding to the current boundary point can be input as one item of data to the point cloud filtering module 114; similarly, in the decoder 200 shown in FIG. 6, they can be input as one item of data to the point cloud filtering module 207.
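  • as a sketch of the additional check in this implementation manner, assuming the spherical form of the three-dimensional volume and the BoundingBox type from the earlier sketch, a standard sphere-versus-box overlap test can be used:

```python
def sphere_overlaps_box(center, radius, box: BoundingBox) -> bool:
    # The sphere and the box overlap iff the distance from the sphere
    # center to the closest point of the box is within the radius.
    dist_sq = 0.0
    for c, lo, hi in zip(center, box.min_corner, box.max_corner):
        clamped = min(max(c, lo), hi)
        dist_sq += (c - clamped) ** 2
    return dist_sq <= radius ** 2
```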
  • in the fourth implementation manner, S701 may include: determining the extended bounding box of each of the one or more point cloud blocks and the three-dimensional volume corresponding to the current boundary point, and selecting, from the one or more point cloud blocks, the point cloud blocks whose extended bounding boxes overlap both the extended bounding box of the current point cloud block and the three-dimensional volume corresponding to the current boundary point as the adjacent point cloud blocks of the current point cloud block.
  • again, since some adjacent point cloud blocks determined from extended bounding boxes alone may obviously contain no adjacent reconstruction points of the current boundary point, in order to reduce the computational complexity, the selected point cloud blocks must have extended bounding boxes that overlap both the extended bounding box of the current point cloud block and the three-dimensional volume corresponding to the current boundary point; the selected point cloud blocks are regarded as the adjacent point cloud blocks of the current point cloud block.
  • in the encoder 100, the extended bounding boxes of the one or more point cloud blocks included in the reconstructed point cloud and the three-dimensional volume corresponding to the current boundary point can be input as one item of data to the point cloud filtering module 114; similarly, in the decoder 200, they can be input as one item of data to the point cloud filtering module 207.
  • S702 Determine one or more adjacent reconstruction points of the current boundary point in the current point cloud block through the projection plane corresponding to the adjacent point cloud block of the current point cloud block.
  • the projection plane corresponding to an adjacent point cloud block refers to a two-dimensional plane that has a projection relationship with that adjacent point cloud block; for example, it may be one plane of the bounding box of the adjacent point cloud block.
  • the angle between the normal vector of the reconstructed points in the adjacent point cloud block and the normal vector of the projection plane corresponding to the adjacent point cloud block is smaller than a preset angle, and the preset angle can be set to a small value.
  • projecting an adjacent point cloud block onto its corresponding projection plane may consist of converting the three-dimensional coordinates of the reconstructed points in the adjacent point cloud block into two-dimensional coordinates on that projection plane.
  • the three-dimensional coordinates of a reconstructed point are determined according to a preset three-dimensional coordinate system, and the two-dimensional coordinates into which the reconstructed point is converted are determined according to the two-dimensional coordinate system on the projection plane corresponding to the adjacent point cloud block.
  • the directions of the two coordinate axes of the two-dimensional coordinate system can be the same as the directions of two of the three coordinate axes of the three-dimensional coordinate system, so that the rotation and translation matrix between the three-dimensional space and the two-dimensional plane can be determined simply and quickly, and the three-dimensional coordinates of the reconstructed points in the adjacent point cloud block can thus be converted into two-dimensional coordinates on the projection plane corresponding to the adjacent point cloud block.
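  • a minimal sketch of such a conversion, assuming (as the aligned-axes remark above permits) that the projection plane is perpendicular to one coordinate axis, so that the two tangent coordinates become the two-dimensional coordinates and the remaining coordinate becomes the depth; the axis and plane_offset parameters are naming assumptions of this sketch:

```python
def project_point(p, axis, plane_offset=0.0):
    # p: (x, y, z) of a reconstructed point; axis: index (0, 1 or 2) of
    # the coordinate axis perpendicular to the projection plane.
    tangent = [p[i] for i in range(3) if i != axis]  # 2D coordinates (u, v)
    depth = p[axis] - plane_offset                   # depth w.r.t. the plane
    return (tangent[0], tangent[1]), depth

def unproject_point(uv, depth, axis, plane_offset=0.0):
    # Inverse mapping: rebuild the 3D point from (u, v) and the depth.
    p = [0.0, 0.0, 0.0]
    t0, t1 = [i for i in range(3) if i != axis]
    p[t0], p[t1] = uv
    p[axis] = depth + plane_offset
    return tuple(p)
```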
  • one or more adjacent reconstruction points of the current boundary point are reconstruction points that have an adjacent relationship with the current boundary point.
  • the current point cloud block can be filtered only according to the adjacent reconstruction points of the current boundary point found in the adjacent point cloud blocks; alternatively, it can be filtered according to the adjacent reconstruction points of the current boundary point found both in the current point cloud block and in the adjacent point cloud blocks. Therefore, the following describes how to determine the one or more adjacent reconstruction points of the current boundary point through two possible cases.
  • in the first possible case, S702 may include the following steps (1)-(2): step (1), determine M adjacent pixel points of the current pixel point from the projection plane corresponding to the adjacent point cloud block, where the current pixel point corresponds to the current boundary point and M is a positive integer; step (2), determine L adjacent reconstruction points of the current boundary point according to the M adjacent pixel points.
  • the current pixel point may be the pixel point corresponding to the two-dimensional coordinates obtained by converting the three-dimensional coordinates of the current boundary point onto the projection plane corresponding to the adjacent point cloud block. It should be understood that the correspondence between the current boundary point and the current pixel point is a correspondence in the projection relationship; saying that the current pixel point corresponds to the current boundary point means that the current pixel point is the pixel point corresponding to the current boundary point on the projection plane corresponding to the adjacent point cloud block.
  • step (1) can be implemented through the following steps (1-1)-(1-2).
  • in step (1-1), after the current boundary point is projected onto the projection plane corresponding to the adjacent point cloud block, the projected projection plane corresponding to the adjacent point cloud block is obtained, where this projection plane includes: one current pixel point corresponding to the current boundary point and Q pixel points corresponding to P reconstruction points in the adjacent point cloud block, P and Q being positive integers.
  • in a specific implementation, the original point cloud corresponding to the reconstructed point cloud can be divided into one or more original point cloud blocks, and each of the one or more original point cloud blocks is then projected onto its corresponding projection plane; that is, for any one of the one or more original point cloud blocks, the three-dimensional coordinates of the points in that original point cloud block are converted into two-dimensional coordinates on the projection plane corresponding to that original point cloud block, and the points corresponding to those two-dimensional coordinates are the points of the original point cloud block on the corresponding projection plane.
  • the patch information generation module 101 may or may not store the projection plane onto which each original point cloud block has been projected. It should be understood that the projection plane onto which an original point cloud block has been projected can be regarded as the occupancy map of that original point cloud block.
  • when the projected projection planes are stored, the obtained projected projection plane corresponding to the adjacent point cloud block may include: one current pixel point corresponding to the current boundary point and Q pixel points corresponding to the P reconstruction points in the adjacent point cloud block.
  • when the projected projection planes are not stored, the projection plane corresponding to the adjacent point cloud block is a projection plane that does not yet contain any pixel point corresponding to a reconstruction point; in this case, the P reconstruction points in the adjacent point cloud block can be projected onto the corresponding projection plane to obtain the Q pixel points corresponding to the P reconstruction points, so that the obtained projected projection plane corresponding to the adjacent point cloud block likewise includes: one current pixel point corresponding to the current boundary point and Q pixel points corresponding to the P reconstruction points in the adjacent point cloud block.
  • multiple points in the three-dimensional space may correspond to the same point on the two-dimensional plane, that is, multiple reconstructed points in the reconstructed point cloud may correspond to the same pixel on the two-dimensional plane.
  • for the P reconstruction points in the adjacent point cloud block, the P reconstruction points may correspond to Q pixel points on the projection plane corresponding to the adjacent point cloud block, where Q may be equal to P or less than P.
  • in step (1-2), the distances between the pixel points on the projected projection plane corresponding to the adjacent point cloud block and the current pixel point can be determined, and the pixel points whose distance from the current pixel point is less than a third distance threshold are taken as the M adjacent pixel points of the current pixel point.
  • alternatively, a circular area is drawn with the current pixel point as the center and a first preset threshold as the radius, and the pixel points included in the circular area are taken as the M adjacent pixel points of the current pixel point.
  • alternatively, a square area is drawn with the current pixel point as the center and a second preset threshold as the side length, and the pixel points included in the square area are taken as the M adjacent pixel points of the current pixel point.
  • the M adjacent pixel points of the current pixel point can also be determined in other ways, which is not specifically limited in the embodiments of the present application. For example, referring to FIG. 9, the M adjacent pixel points of the current pixel point may be the pixel points included in a circle centered at the current pixel point with radius R, or the pixel points included in a square centered at the current pixel point with side length 2R.
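  • a brief sketch of the circular variant, assuming the pixel points are given as integer (u, v) coordinates on the projection plane; the brute-force scan is used only for clarity (a spatial index could equally be used):

```python
def neighbor_pixels(current_uv, pixels, radius):
    # Return the pixel points whose distance on the projection plane
    # from the current pixel point is within the given radius.
    cu, cv = current_uv
    r_sq = radius ** 2
    return [(u, v) for (u, v) in pixels
            if (u, v) != (cu, cv) and (u - cu) ** 2 + (v - cv) ** 2 <= r_sq]
```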
  • step (2) can be implemented by any one of the following two possible implementations.
  • in the first possible implementation manner, from the N first candidate reconstruction points, the first candidate reconstruction points whose corresponding first depth difference is less than a depth threshold are determined to be adjacent reconstruction points of the current boundary point, where the N first candidate reconstruction points are the reconstruction points corresponding to the M adjacent pixel points in the reconstructed point cloud, and N is a positive integer.
  • the first depth is the depth of the current boundary point relative to the projection plane corresponding to the adjacent point cloud block, that is, the distance of the current boundary point in the projection direction from that projection plane; the first depth difference is the difference between the first depth and the depth of each of the N first candidate reconstruction points relative to the projection plane corresponding to the adjacent point cloud block.
  • based on the description of determining the M adjacent pixel points in step (1) above, on the projection plane corresponding to the adjacent point cloud block, the distances between the M adjacent pixel points and the current pixel point are within a certain range.
  • the N first candidate reconstruction points corresponding to the M adjacent pixel points are points in three-dimensional space. Since multiple points in three-dimensional space may correspond to the same point on a two-dimensional plane, multiple reconstruction points in the reconstructed point cloud may correspond to the same pixel point, and the depths of those reconstruction points relative to the projection plane corresponding to the adjacent point cloud block may differ; that is, the depth of each of the N first candidate reconstruction points relative to that projection plane may be different. Therefore, for some first candidate reconstruction points, the difference between the first depth and their depth relative to the projection plane corresponding to the adjacent point cloud block may be greater than the depth threshold. It should be understood that such first candidate reconstruction points have no adjacent relationship with the current boundary point and therefore cannot be taken as adjacent reconstruction points of the current boundary point.
  • the depth threshold can be set in advance according to usage requirements, which is not specifically limited in the embodiment of the present application.
  • for example, the first candidate reconstruction point corresponding to the adjacent pixel point a′ among the M adjacent pixel points in FIG. 10 is the first candidate reconstruction point a, and the difference between the first depth and the depth of the first candidate reconstruction point a relative to the projection plane corresponding to the adjacent point cloud block is obviously greater than the depth threshold, so the first candidate reconstruction point a cannot be taken as an adjacent reconstruction point of the current boundary point.
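  • as an illustrative sketch of this depth test, reusing project_point from the earlier sketch (the data layout is again an assumption, not part of the embodiments):

```python
def neighbors_by_depth(boundary_point, candidates, axis, depth_threshold):
    # Keep the candidates whose depth relative to the adjacent block's
    # projection plane differs from the first depth by less than the
    # depth threshold.
    _, first_depth = project_point(boundary_point, axis)
    neighbors = []
    for p in candidates:
        _, depth = project_point(p, axis)
        if abs(first_depth - depth) < depth_threshold:
            neighbors.append(p)
    return neighbors
```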
  • in the second possible implementation manner, from the N first candidate reconstruction points, the first candidate reconstruction points whose corresponding first distance is less than a first distance threshold are determined to be adjacent reconstruction points of the current boundary point, where the N first candidate reconstruction points are the reconstruction points corresponding to the M adjacent pixel points in the reconstructed point cloud, and N is a positive integer.
  • the first distance is the distance between the current boundary point and each of the N first candidate reconstruction points.
  • although the distances between the M adjacent pixel points and the current pixel point on the projection plane corresponding to the adjacent point cloud block are within a certain range (that is, the M adjacent pixel points are relatively close to the current pixel point), the N first candidate reconstruction points corresponding to the M adjacent pixel points are points in three-dimensional space, and, as described above, multiple reconstruction points in the reconstructed point cloud may correspond to the same pixel point on the two-dimensional plane. There may therefore be first candidate reconstruction points whose first distance is greater than the first distance threshold, that is, first candidate reconstruction points that are far from the current boundary point. Such first candidate reconstruction points have no adjacent relationship with the current boundary point and cannot be taken as its adjacent reconstruction points; hence, the first candidate reconstruction points whose corresponding first distance is less than the first distance threshold are determined to be the adjacent reconstruction points of the current boundary point.
  • the first distance threshold may be preset according to usage requirements, which is not specifically limited in the embodiment of the present application.
  • for example, in FIG. 11 the first distance threshold is set to R, and the first candidate reconstruction point corresponding to the adjacent pixel point b′ among the M adjacent pixel points is the first candidate reconstruction point b; the first distance between the first candidate reconstruction point b and the current boundary point is obviously greater than the first distance threshold R, so the first candidate reconstruction point b cannot be taken as an adjacent reconstruction point of the current boundary point.
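  • the distance test can be sketched as follows, here computed directly from three-dimensional coordinates (an alternative based on two-dimensional coordinates and depths is described next):

```python
import math

def neighbors_by_distance(boundary_point, candidates, distance_threshold):
    # Keep the candidates whose Euclidean distance to the current
    # boundary point in 3D space is below the first distance threshold.
    return [p for p in candidates
            if math.dist(boundary_point, p) < distance_threshold]
```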
  • when each original point cloud block is projected onto its corresponding projection plane, the depth of each reconstruction point relative to the projection plane can be recorded, and after projection the two-dimensional coordinates of the pixel point corresponding to each reconstruction point can also be determined. In this way, the distance between the current boundary point and each of the N first candidate reconstruction points can be determined from the two-dimensional coordinates of the current pixel point and the depth of the current boundary point relative to the projection plane corresponding to the adjacent point cloud block, together with the two-dimensional coordinates of the M adjacent pixel points and the depth of each of the N first candidate reconstruction points relative to that projection plane.
  • the distance can also be calculated in three-dimensional space; specifically, the distance between the current boundary point and each of the N first candidate reconstruction points can be determined from the three-dimensional coordinates of the current boundary point and the three-dimensional coordinates of each of the N first candidate reconstruction points.
  • alternatively, the N first candidate reconstruction points can be directly determined to be the L adjacent reconstruction points of the current boundary point, the N first candidate reconstruction points being the reconstruction points corresponding to the M adjacent pixel points in the reconstructed point cloud; in this case N and L are equal.
  • in the second possible case, S702 may include the following steps (3)-(4).
  • that is, the S adjacent pixel points of the current pixel points can be determined from both the projection plane corresponding to the current point cloud block and the projection plane corresponding to the adjacent point cloud block.
  • step (3) can be implemented through the following steps (3-1)-step (3-2).
  • in step (3-1), after the current boundary point is projected onto the projection plane corresponding to the adjacent point cloud block, the projected projection plane corresponding to the adjacent point cloud block is obtained, where this projection plane includes: one current pixel point corresponding to the current boundary point and Q pixel points corresponding to P reconstruction points in the adjacent point cloud block, P and Q being positive integers.
  • step (3-1) is similar to the above step (1-1), so it will not be repeated here.
  • in the second possible case, the current boundary point needs to be projected both onto the projection plane corresponding to the current point cloud block and onto the projection plane corresponding to the adjacent point cloud block of the current point cloud block; that is, the current boundary point has one current pixel point on each of these two projection planes.
  • for ease of description, the current pixel point obtained by projecting the current boundary point onto the projection plane corresponding to the current point cloud block is called the current pixel point i, and the current pixel point obtained by projecting the current boundary point onto the projection plane corresponding to the adjacent point cloud block of the current point cloud block is called the current pixel point j.
  • the method for determining the M adjacent pixel points of the current pixel point j on the projection plane corresponding to the adjacent point cloud block is the same as or similar to the method for determining the M adjacent pixel points of the current pixel point in step (1-2) above, and is not repeated here; the adjacent pixel points of the current pixel point i on the projection plane corresponding to the current point cloud block are denoted as the T adjacent pixel points.
  • step (4) can be implemented in any one of the following two possible implementation ways.
  • in the first possible implementation manner, step (4) may include: from the N first candidate reconstruction points, determining the first candidate reconstruction points whose corresponding first depth difference is less than the depth threshold to be adjacent reconstruction points of the current boundary point; and, from the E second candidate reconstruction points, determining the second candidate reconstruction points whose corresponding second depth difference is less than the depth threshold to be adjacent reconstruction points of the current boundary point, where the N first candidate reconstruction points are the reconstruction points corresponding to the M adjacent pixel points in the reconstructed point cloud, the E second candidate reconstruction points are the reconstruction points corresponding to the T adjacent pixel points in the reconstructed point cloud, and N and T are positive integers.
  • the first depth is the depth of the current boundary point relative to the projection plane corresponding to the adjacent point cloud block, and the second depth is the depth of the current boundary point relative to the projection plane corresponding to the current point cloud block.
  • the first depth difference is the difference between the first depth and the depth of each of the N first candidate reconstruction points relative to the projection plane corresponding to the adjacent point cloud block, and the second depth difference is the difference between the second depth and the depth of each of the E second candidate reconstruction points relative to the projection plane corresponding to the current point cloud block.
  • the second depth is similar to the first depth, and the second depth difference is similar to the first depth difference; since the first depth and the first depth difference have been explained in the first possible implementation manner of step (2) above, the second depth and the second depth difference are not repeated here.
  • by the same reasoning as in the first possible implementation manner of step (2) above, the second candidate reconstruction points whose corresponding second depth difference is less than the depth threshold are determined to be adjacent reconstruction points of the current boundary point, and the first candidate reconstruction points whose corresponding first depth difference is less than the depth threshold are determined to be adjacent reconstruction points of the current boundary point in a similar manner; for details, refer to the first possible implementation manner of step (2) above, which is not repeated here. The following uses an example to illustrate this implementation manner.
  • in FIG. 12, the first candidate reconstruction point corresponding to the adjacent pixel point a′ among the M adjacent pixel points is the first candidate reconstruction point a, and the difference between the first depth and the depth of the first candidate reconstruction point a relative to the projection plane corresponding to the adjacent point cloud block is obviously greater than the depth threshold, so the first candidate reconstruction point a cannot be taken as an adjacent reconstruction point of the current boundary point.
  • likewise, the second candidate reconstruction point corresponding to the adjacent pixel point c′ among the T adjacent pixel points is the second candidate reconstruction point c, and the difference between the second depth and the depth of the second candidate reconstruction point c relative to the projection plane corresponding to the current point cloud block is obviously greater than the depth threshold, so the second candidate reconstruction point c cannot be taken as an adjacent reconstruction point of the current boundary point.
  • in the second possible implementation manner, step (4) may include: from the N first candidate reconstruction points, determining the first candidate reconstruction points whose corresponding first distance is less than the first distance threshold to be adjacent reconstruction points of the current boundary point; and, from the E second candidate reconstruction points, determining the second candidate reconstruction points whose corresponding second distance is less than the first distance threshold to be adjacent reconstruction points of the current boundary point, where the N first candidate reconstruction points are the reconstruction points corresponding to the M adjacent pixel points in the reconstructed point cloud, the E second candidate reconstruction points are the reconstruction points corresponding to the T adjacent pixel points in the reconstructed point cloud, and N and T are positive integers.
  • the first distance is the distance between the current boundary point and each of the N first candidate reconstruction points, and the second distance is the distance between the current boundary point and each of the E second candidate reconstruction points.
  • the second distance is similar to the first distance, and the first distance has been described in the second possible implementation manner of step (2) above, so the second distance is not repeated here. The following uses an example to illustrate this implementation manner.
  • in FIG. 13, the first distance threshold is set to R; the first candidate reconstruction point corresponding to the adjacent pixel point b′ among the M adjacent pixel points is the first candidate reconstruction point b, and the first distance between the first candidate reconstruction point b and the current boundary point is obviously greater than the first distance threshold R, so the first candidate reconstruction point b cannot be taken as an adjacent reconstruction point of the current boundary point.
  • likewise, the second candidate reconstruction point corresponding to the adjacent pixel point d′ among the T adjacent pixel points is the second candidate reconstruction point d, and the second distance between the second candidate reconstruction point d and the current boundary point is obviously greater than the first distance threshold R, so the second candidate reconstruction point d cannot be taken as an adjacent reconstruction point of the current boundary point.
  • reconstruction points that have an adjacent relationship with the current boundary point may be found not only among the reconstruction points in the adjacent point cloud blocks of the current point cloud block, but also among the reconstruction points of the current point cloud block other than the current boundary point itself. Therefore, in the second possible case, the adjacent reconstruction points of the current boundary point are determined not only from the adjacent point cloud blocks but also from the current point cloud block, which makes the determined adjacent reconstruction points of the current boundary point more accurate.
  • the U adjacent reconstruction points of the current boundary point can be determined not only according to the above two possible implementation manners and the S adjacent pixel points, but also by other methods; for example, the reconstruction points corresponding to the S adjacent pixel points of the current pixel points in the reconstructed point cloud can be directly determined to be the U adjacent reconstruction points of the current boundary point.
  • here the S adjacent pixel points include the T adjacent pixel points of the current pixel point i, obtained by projecting the current boundary point onto the projection plane corresponding to the current point cloud block, and the M adjacent pixel points of the current pixel point j, obtained by projecting the current boundary point onto the projection plane corresponding to the adjacent point cloud block; therefore, in this case, the E second candidate reconstruction points corresponding to the T adjacent pixel points in the reconstructed point cloud and the N first candidate reconstruction points corresponding to the M adjacent pixel points in the reconstructed point cloud together form the U adjacent reconstruction points of the current boundary point, that is, the sum of E and N is U.
  • this completes the description of S702; that is, the one or more adjacent reconstruction points of the current boundary point in the current point cloud block have been determined through S702. The following S703 describes the filtering of the current point cloud block.
  • S703 Filter the current point cloud block according to one or more adjacent reconstruction points of the current boundary point.
  • filtering is an operation to remove defects such as noise points and gaps from the current point cloud block.
  • the filtering can be performed by the point cloud filtering module 114 in the encoder 100.
  • the filtering may be performed by the point cloud filtering module 207 in the decoder 200.
  • the implementation process of filtering the current point cloud block includes: determining the centroid position of the one or more adjacent reconstruction points of the current boundary point; and, if the distance between the centroid position and the position of the current boundary point is greater than the second distance threshold, updating the position of the current boundary point, where the updated position of the current boundary point corresponds to the centroid position.
  • the centroid position of the one or more adjacent reconstruction points can be determined from the three-dimensional coordinates of those adjacent reconstruction points. Specifically, the sum of the x coordinates of the one or more adjacent reconstruction points is divided by the total number of the adjacent reconstruction points to obtain the x coordinate of the centroid; the y coordinate and the z coordinate of the centroid are obtained in the same way from the sums of the y coordinates and of the z coordinates, respectively.
  • in this way, the three-dimensional coordinates of the centroid, that is, the centroid position, are obtained.
  • updating the position of the current boundary point means replacing the position of the current boundary point with the centroid position; in other words, the centroid position of the one or more adjacent reconstruction points is used to update the position of the current boundary point, thereby removing the current boundary point as a noise point of the current point cloud block.
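  • the whole of this S703 update step can be sketched as follows; this is only an illustrative outline under the assumptions of the earlier sketches, not a normative implementation:

```python
import math

def filter_boundary_point(boundary_point, neighbors, second_distance_threshold):
    if not neighbors:
        return boundary_point  # nothing to smooth against
    # Centroid of the adjacent reconstruction points, per coordinate axis.
    n = len(neighbors)
    centroid = tuple(sum(p[i] for p in neighbors) / n for i in range(3))
    # Update the boundary point only if it lies far from the centroid.
    if math.dist(centroid, boundary_point) > second_distance_threshold:
        return centroid  # updated position of the current boundary point
    return boundary_point  # position kept unchanged
```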
  • after the position of the current boundary point has been processed in this way, the filtering of the current boundary point is completed.
  • since the current boundary point is a boundary point in the current point cloud block, the above filtering of the current boundary point can be applied to each boundary point in the current point cloud block; the filtering of the other boundary points is not repeated here. After each boundary point in the current point cloud block has been filtered, the filtering of the current point cloud block is completed.
  • the filtering of the current point cloud block is likewise applicable to the other point cloud blocks in the reconstructed point cloud, and the filtering of those point cloud blocks is not repeated here; after each point cloud block has been filtered, the filtering of the reconstructed point cloud is completed.
  • that is, the present application uses the current boundary point of the current point cloud block as the object of description to explain its technical solution, and the filtering process of the other boundary points of the current point cloud block is not repeated.
  • for the filtering of the entire reconstructed point cloud, that is, traversing the entire reconstructed point cloud and filtering multiple or all boundary points in multiple or all point cloud blocks, this application can reduce the complexity of point cloud filtering and improve coding and decoding efficiency; the larger the scale of the point cloud data, the greater the complexity reduction achieved by the technical solution provided by this application.
  • in summary, the adjacent point cloud blocks of the current point cloud block are first determined from the one or more point cloud blocks included in the reconstructed point cloud. Since the pixel points obtained by projecting an adjacent point cloud block onto its corresponding projection plane correspond to the reconstruction points in that adjacent point cloud block, one or more adjacent reconstruction points of the current boundary point in the current point cloud block can be determined through the projection plane corresponding to the adjacent point cloud block. Finally, the current point cloud block is filtered according to the one or more adjacent reconstruction points of the current boundary point to obtain a smoothed reconstructed point cloud.
  • because the point cloud filtering method can determine the adjacent reconstruction points of the current boundary point in three-dimensional space through projection planes in two-dimensional space, the process of determining the adjacent reconstruction points of the current boundary point is simpler, which reduces the complexity of filtering and improves coding efficiency.
  • FIG. 14 is a schematic flowchart of a point cloud encoding method provided by an embodiment of this application.
  • the execution subject of this embodiment may be an encoder. As shown in Figure 14, the method may include:
  • determine indication information, where the indication information is used to indicate whether to process the reconstructed point cloud of the point cloud to be encoded according to a target filtering method; the target filtering method includes any point cloud filtering method provided in the embodiments of the present application, for example, the point cloud filtering method shown in FIG. 7.
  • there may be at least two filtering methods; one of the at least two filtering methods may be any point cloud filtering method provided in the embodiments of this application, and another may be a point cloud filtering method provided in the prior art or in the future.
  • the indication information may specifically be the index of the target filtering method.
  • the encoder can agree in advance on the indexes of the at least two point cloud filtering methods it supports; then, after the encoder determines the target filtering method, the index of the target filtering method is written into the code stream as the indication information.
  • the embodiment of the present application does not limit how the encoder determines which of the at least two filtering methods supported by the encoder is the target filtering method.
  • S1401: encode the indication information into the code stream.
  • the indication information is frame-level information.
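  • purely as an illustration of frame-level indication information, the following sketches writing and parsing such an index; the single-byte encoding is an assumption of this sketch, not a normative code stream syntax:

```python
def write_indication(stream, method_index: int) -> None:
    # Write the index of the target filtering method as one byte of
    # frame-level indication information (illustrative syntax only).
    stream.write(method_index.to_bytes(1, "big"))

def read_indication(stream) -> int:
    # Parse the frame-level indication information back into an index.
    return int.from_bytes(stream.read(1), "big")
```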
  • FIG. 15 is a schematic flowchart of a point cloud decoding method provided by an embodiment of this application.
  • the execution subject of this embodiment may be a decoder. As shown in Figure 15, the method may include:
  • S1501: parse the code stream to obtain indication information, where the indication information is used to indicate whether to process the reconstructed point cloud of the point cloud to be decoded according to the target filtering method; the target filtering method includes any point cloud filtering method provided in the embodiments of this application, for example, the point cloud filtering method shown in FIG. 7.
  • the indication information is frame-level information.
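• As a rough, hypothetical illustration of frame-level indication information, the sketch below writes and parses a one-byte filtering-method index; the real bitstream syntax, field width, index values, and entropy coding are not specified by this description.

```python
import struct

# Hypothetical index assignment agreed between encoder and decoder.
FILTER_METHODS = {0: "no_filtering", 1: "projection_plane_filtering"}

def write_indication(stream: bytearray, method_index: int) -> None:
    # Frame-level indication: one unsigned byte carrying the method index.
    stream += struct.pack("B", method_index)

def parse_indication(stream: bytes, offset: int = 0):
    (method_index,) = struct.unpack_from("B", stream, offset)
    return FILTER_METHODS.get(method_index, "unknown"), offset + 1

buf = bytearray()
write_indication(buf, 1)
method, _ = parse_indication(bytes(buf))
print(method)  # -> projection_plane_filtering
```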
  • the point cloud decoding method provided in this embodiment corresponds to the point cloud encoding method provided in FIG. 14.
• the above mainly introduces the solutions provided by the embodiments of the present application from the perspective of the method. To implement these functions, the encoder/decoder includes corresponding hardware structures and/or software modules for performing each function. Those skilled in the art should readily appreciate that the present application can be implemented in the form of hardware or a combination of hardware and computer software. Whether a function is performed by hardware or by computer software driving hardware depends on the specific application and design constraints of the technical solution. Skilled practitioners may implement the described functions in different ways for each particular application, but such implementations should not be considered beyond the scope of this application.
  • the embodiments of the present application may divide the encoder/decoder function modules according to the above method examples, for example, each function module may be divided corresponding to each function, or two or more functions may be integrated into one processing module.
  • the above integrated modules can be implemented in the form of hardware or software function modules. It should be noted that the division of the modules in the embodiments of the present application is schematic, and is only a division of logical functions. In actual implementation, there may be another division manner.
  • FIG. 16 is a schematic block diagram of a point cloud filtering device 1600 according to an embodiment of the application.
  • the point cloud filtering device 1600 may include a point set determining unit 1601 and a filtering processing unit 1602.
• the point cloud filtering device 1600 may be the point cloud filtering module 114 in FIG. 2, or the point cloud filtering module 207 in FIG. 6.
• the point set determining unit 1601 is configured to determine the adjacent point cloud block of the current point cloud block from the one or more point cloud blocks included in the reconstructed point cloud, and to determine, through the projection plane corresponding to the adjacent point cloud block, one or more adjacent reconstruction points of the current boundary point in the current point cloud block.
  • the filtering processing unit 1602 is configured to filter the current point cloud block according to one or more adjacent reconstruction points of the current boundary point. For example, in conjunction with FIG. 7, the point set determining unit 1601 may be used to perform S701 and S702, and the filtering processing unit 1602 may be used to perform S703.
• the point set determining unit 1601 is specifically configured to: determine M adjacent pixel points of the current pixel point from the projection plane corresponding to the adjacent point cloud block, where the current boundary point corresponds to the current pixel point in the projection plane corresponding to the adjacent point cloud block and M is a positive integer; and determine the L adjacent reconstruction points of the current boundary point according to the M adjacent pixel points, as sketched below.
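• A minimal sketch of selecting the M adjacent pixel points of the current pixel point, assuming (hypothetically) that the projection plane is stored as a mapping from occupied (u, v) pixels to reconstruction-point indices and that adjacency means an axis-aligned window around the current pixel:

```python
def adjacent_pixels(plane, u, v, radius=1):
    """Return the occupied pixels in a (2*radius+1)^2 window around (u, v),
    excluding (u, v) itself. `plane` maps (u, v) -> reconstruction point index."""
    out = []
    for du in range(-radius, radius + 1):
        for dv in range(-radius, radius + 1):
            if (du, dv) != (0, 0) and (u + du, v + dv) in plane:
                out.append((u + du, v + dv))
    return out

# Toy plane: pixels of an adjacent patch, each mapped to a reconstruction point index.
plane = {(10, 10): 0, (10, 11): 1, (12, 14): 2}
print(adjacent_pixels(plane, 10, 10))  # -> [(10, 11)]
```

The L adjacent reconstruction points are then simply the reconstruction points that the returned pixels map to.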
• the point set determining unit 1601 is specifically configured to: after the current boundary point is projected onto the projection plane corresponding to the adjacent point cloud block, obtain the projected projection plane corresponding to the adjacent point cloud block, where the projection plane corresponding to the adjacent point cloud block includes a current pixel point corresponding to the current boundary point and Q pixel points corresponding to P reconstructed points in the adjacent point cloud block, P and Q being positive integers; and determine the M adjacent pixel points of the current pixel point from the projected projection plane corresponding to the adjacent point cloud block, where the M adjacent pixel points are included in the Q pixel points that correspond to the P reconstructed points included in the adjacent point cloud block.
• the point set determining unit 1601 is specifically configured to: from the N first candidate reconstruction points, determine the first candidate reconstruction points whose corresponding first depth difference is less than the depth threshold as the adjacent reconstruction points of the current boundary point, where the first depth difference is the difference between the first depth and the depth of each of the N first candidate reconstruction points relative to the projection plane corresponding to the adjacent point cloud block, the first depth is the depth of the current boundary point relative to the projection plane corresponding to the adjacent point cloud block, the N first candidate reconstruction points are the reconstruction points corresponding to the M adjacent pixel points in the reconstructed point cloud, and N is a positive integer. A sketch of this test follows below.
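• The first-depth-difference test might be sketched as follows; taking the absolute difference and the toy threshold value are assumptions made here for illustration:

```python
import numpy as np

def filter_by_depth(first_depth, candidate_depths, depth_threshold):
    """Keep candidates whose depth (relative to the adjacent patch's projection
    plane) differs from the boundary point's depth by less than the threshold.
    Returns the indices of the kept candidates."""
    diffs = np.abs(candidate_depths - first_depth)
    return np.nonzero(diffs < depth_threshold)[0]

first_depth = 5.0                  # depth of the current boundary point
cands = np.array([4.8, 5.1, 9.0])  # depths of the N first candidate points
print(filter_by_depth(first_depth, cands, depth_threshold=0.5))  # -> [0 1]
```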
• the point set determining unit 1601 is specifically configured to: from the N first candidate reconstruction points, determine the first candidate reconstruction points whose corresponding first distance is less than the first distance threshold as the adjacent reconstruction points of the current boundary point, where the first distance is the distance between the current boundary point and each of the N first candidate reconstruction points, the N first candidate reconstruction points are the reconstruction points corresponding to the M adjacent pixel points in the reconstructed point cloud, and N is a positive integer. A sketch of this test follows below.
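• The corresponding first-distance test, sketched here with Euclidean distance (the choice of metric and the toy threshold are assumptions of this illustration):

```python
import numpy as np

def filter_by_distance(boundary_point, candidates, dist_threshold):
    """Keep candidate reconstruction points whose Euclidean distance to the
    current boundary point is below the first distance threshold."""
    d = np.linalg.norm(candidates - boundary_point, axis=1)
    return candidates[d < dist_threshold]

bp = np.array([0.0, 0.0, 0.0])
cands = np.array([[0.2, 0.1, 0.0], [3.0, 0.0, 0.0]])
print(filter_by_distance(bp, cands, dist_threshold=1.0))  # keeps only the first point
```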
• the point set determining unit 1601 is specifically configured to: determine S adjacent pixel points of the current pixel point from the projection plane corresponding to the current point cloud block and the projection plane corresponding to the adjacent point cloud block, where the current boundary point corresponds to the current pixel point in the projection plane corresponding to the adjacent point cloud block and S is a positive integer; and determine U adjacent reconstruction points of the current boundary point according to the S adjacent pixel points.
• in a possible implementation, after the current boundary point is projected onto the projection plane corresponding to the adjacent point cloud block, the projected projection plane corresponding to the adjacent point cloud block is obtained, where the projection plane corresponding to the adjacent point cloud block includes a current pixel point corresponding to the current boundary point and Q pixel points corresponding to P reconstruction points in the adjacent point cloud block, P and Q being positive integers. From the projection plane corresponding to the current point cloud block, T adjacent pixel points of the current pixel point i, onto which the current boundary point is projected on that plane, are determined; from the projection plane corresponding to the adjacent point cloud block, M adjacent pixel points of the current pixel point j, onto which the current boundary point is projected on that plane, are determined; the T adjacent pixel points are included in the Y pixel points corresponding to the reconstruction points included in the current point cloud block.
• the point set determining unit 1601 is specifically configured to: from the N first candidate reconstruction points, determine the first candidate reconstruction points whose corresponding first depth difference is less than the depth threshold as adjacent reconstruction points of the current boundary point; and, from the E second candidate reconstruction points, determine the second candidate reconstruction points whose corresponding second depth difference is less than the depth threshold as adjacent reconstruction points of the current boundary point. Here, the first depth difference is the difference between the first depth and the depth of each of the N first candidate reconstruction points relative to the projection plane corresponding to the adjacent point cloud block; the second depth difference is the difference between the second depth and the depth of each of the E second candidate reconstruction points relative to the projection plane corresponding to the current point cloud block; the first depth is the depth of the current boundary point relative to the projection plane corresponding to the adjacent point cloud block; the second depth is the depth of the current boundary point relative to the projection plane corresponding to the current point cloud block; the N first candidate reconstruction points are the reconstruction points corresponding to the M adjacent pixel points in the reconstructed point cloud; the E second candidate reconstruction points are the reconstruction points corresponding to the T adjacent pixel points in the reconstructed point cloud; and N and T are positive integers.
• the point set determining unit 1601 is specifically configured to: from the N first candidate reconstruction points, determine the first candidate reconstruction points whose corresponding first distance is less than the first distance threshold as adjacent reconstruction points of the current boundary point; and, from the E second candidate reconstruction points, determine the second candidate reconstruction points whose corresponding second distance is less than the first distance threshold as adjacent reconstruction points of the current boundary point. Here, the first distance is the distance between the current boundary point and each of the N first candidate reconstruction points; the second distance is the distance between the current boundary point and each of the E second candidate reconstruction points; the N first candidate reconstruction points are the reconstruction points corresponding to the M adjacent pixel points in the reconstructed point cloud; the E second candidate reconstruction points are the reconstruction points corresponding to the T adjacent pixel points in the reconstructed point cloud; and N and T are positive integers.
• the point set determining unit 1601 is specifically configured to: determine the bounding box of each point cloud block in the one or more point cloud blocks; and, from the one or more point cloud blocks, determine the point cloud blocks whose bounding boxes overlap with the bounding box of the current point cloud block as the adjacent point cloud blocks of the current point cloud block.
• the point set determining unit 1601 is specifically configured to: determine the expanded bounding box of each point cloud block in the one or more point cloud blocks, where the expanded bounding box is obtained by expanding the bounding box of each point cloud block in the one or more point cloud blocks; and, from the one or more point cloud blocks, determine the point cloud blocks whose expanded bounding boxes overlap with the expanded bounding box of the current point cloud block as the adjacent point cloud blocks of the current point cloud block. Both bounding-box criteria are sketched below.
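• Both bounding-box criteria above reduce to axis-aligned box tests; a minimal sketch, assuming axis-aligned bounding boxes given as (min, max) corner pairs and a uniform expansion margin (the margin value is an illustrative assumption):

```python
import numpy as np

def aabb_overlap(min_a, max_a, min_b, max_b):
    """Axis-aligned bounding boxes overlap iff their ranges overlap on every axis."""
    return bool(np.all(min_a <= max_b) and np.all(min_b <= max_a))

def expand(min_c, max_c, margin):
    """Expanded bounding box: the original box dilated by `margin` on all sides."""
    return min_c - margin, max_c + margin

# Toy patches: the second box only touches the first after expansion.
a = (np.zeros(3), np.ones(3))
b = (np.array([1.5, 0.0, 0.0]), np.array([2.5, 1.0, 1.0]))
print(aabb_overlap(*a, *b))                               # False
print(aabb_overlap(*expand(*a, 0.5), *expand(*b, 0.5)))   # True
```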
• the point set determining unit 1601 is specifically configured to: determine the bounding box of each point cloud block in the one or more point cloud blocks and the three-dimensional volume corresponding to the current boundary point, where the three-dimensional volume corresponding to the current boundary point is the spatial volume in which the adjacent reconstruction points of the current boundary point are located; and, from the one or more point cloud blocks, select the point cloud blocks whose bounding boxes overlap with both the bounding box of the current point cloud block and the three-dimensional volume corresponding to the current boundary point as the adjacent point cloud blocks of the current point cloud block.
• the point set determining unit 1601 is specifically configured to: determine the expanded bounding box of each point cloud block in the one or more point cloud blocks and the three-dimensional volume corresponding to the current boundary point, where the expanded bounding box is obtained by expanding the bounding box of each point cloud block in the one or more point cloud blocks, and the three-dimensional volume corresponding to the current boundary point is the spatial volume in which the adjacent reconstruction points of the current boundary point are located; and, from the one or more point cloud blocks, select the point cloud blocks whose expanded bounding boxes overlap with both the expanded bounding box of the current point cloud block and the three-dimensional volume corresponding to the current boundary point as the adjacent point cloud blocks of the current point cloud block.
• the filtering processing unit 1602 is specifically configured to: determine the centroid position of the one or more adjacent reconstruction points of the current boundary point; and, if the distance between the centroid position and the position of the current boundary point is greater than the second distance threshold, update the position of the current boundary point, where the updated position of the current boundary point corresponds to the centroid position.
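• A minimal sketch of this centroid-based update, assuming the adjacent reconstruction points are collected in an array and using Euclidean distance for the second distance threshold (both assumptions of this illustration):

```python
import numpy as np

def smooth_boundary_point(boundary_point, adjacent_points, second_dist_threshold):
    """Move the boundary point to the centroid of its adjacent reconstruction
    points when it lies farther than the threshold from that centroid."""
    centroid = adjacent_points.mean(axis=0)
    if np.linalg.norm(centroid - boundary_point) > second_dist_threshold:
        return centroid          # updated position corresponds to the centroid
    return boundary_point        # close enough: keep the original position

bp = np.array([0.0, 0.0, 0.0])
adj = np.array([[1.0, 1.0, 0.0], [1.0, 0.0, 1.0], [0.0, 1.0, 1.0]])
print(smooth_boundary_point(bp, adj, second_dist_threshold=0.5))  # moves to centroid
```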
• the units in the point cloud filtering device 1600 provided in the embodiments of the present application are functional entities that implement the execution steps of the corresponding methods provided above, that is, functional entities capable of fully implementing the steps of the point cloud filtering method of the present application and the expansions and variations of these steps; for details of these steps, please refer to the introduction of the corresponding methods above. For the sake of brevity, they are not repeated here.
  • FIG. 17 is a schematic block diagram of an encoder 1700 according to an embodiment of the application.
  • the encoder 1700 may include a point cloud filtering module 1701 and an auxiliary information encoding module 1702.
  • the encoder 1700 may be the encoder 100 in FIG. 1.
  • the point cloud filtering module 1701 may be the point cloud filtering module 114 in FIG. 2
• the auxiliary information encoding module 1702 may be the auxiliary information encoding module 109 in FIG. 2.
  • the point cloud filtering module 1701 is configured to perform filtering processing on the reconstructed point cloud of the point cloud to be coded according to the target filtering method.
• the auxiliary information encoding module 1702 is used to determine the indication information and to encode the indication information into the code stream.
  • the indication information is used to indicate whether to process the reconstructed point cloud of the to-be-coded point cloud according to the target filtering method; the target filtering method may be the point cloud filtering method shown in FIG. 7 provided above.
  • the point cloud filtering module 1701 further includes a point set determining unit 1703 and a filtering processing unit 1704 for processing the reconstructed point cloud of the point cloud to be coded according to the target filtering method.
• for the steps performed by the point set determining unit 1703, refer to the steps performed by the aforementioned point set determining unit 1601; for the steps performed by the filtering processing unit 1704, refer to the steps performed by the aforementioned filtering processing unit 1602. Details are not repeated here.
• the modules in the encoder 1700 provided in the embodiments of the present application are functional entities that implement the execution steps of the corresponding methods provided above, that is, functional entities capable of fully implementing the steps of the point cloud filtering method of the present application and the expansions and variations of these steps; for details, please refer to the introduction of the corresponding methods above. For the sake of brevity, they are not repeated here.
  • FIG. 18 is a schematic block diagram of a decoder 1800 according to an embodiment of the application.
  • the decoder 1800 may include: an auxiliary information decoding module 1801 and a point cloud filtering module 1802.
  • the decoder 1800 may be the decoder 200 in FIG. 1.
• the auxiliary information decoding module 1801 may be the auxiliary information decoding module 204 in FIG. 6, and the point cloud filtering module 1802 may be the point cloud filtering module 207 in FIG. 6.
• the auxiliary information decoding module 1801 is used to parse the code stream to obtain indication information, which is used to indicate whether to process the reconstructed point cloud of the point cloud to be decoded according to the target filtering method; the target filtering method may be the point cloud filtering method shown in FIG. 7 provided above.
• the point cloud filtering module 1802 is configured to perform filtering processing on the reconstructed point cloud of the point cloud to be decoded according to the target filtering method when the indication information indicates that the reconstructed point cloud of the point cloud to be decoded is to be processed according to the target filtering method.
• the steps performed by the auxiliary information decoding module 1801 mirror the steps performed by the aforementioned auxiliary information encoding module 1702, and are not repeated here.
• the modules in the decoder 1800 provided in the embodiments of the present application are functional entities that implement the execution steps of the corresponding methods provided above, that is, functional entities capable of fully implementing the steps of the point cloud filtering method of the present application and the expansions and variations of these steps; for details, please refer to the introduction of the corresponding methods above. For the sake of brevity, they are not repeated here.
  • FIG. 19 is a schematic block diagram of an encoder 1900 according to an embodiment of the application.
  • the encoder 1900 may include a point cloud filtering module 1901 and a texture map generating module 1902.
  • the encoder 1900 may be the encoder 100 in FIG. 1.
  • the point cloud filtering module 1901 may be the point cloud filtering module 114 in FIG. 2
• the texture map generation module 1902 may be the texture map generation module in FIG. 2.
  • the point cloud filtering module 1901 is the aforementioned point cloud filtering device 1600.
  • the texture map generating module 1902 is configured to generate a texture map of the point cloud to be encoded according to the reconstructed point cloud after the filtering process.
• the point cloud filtering module 1901 is used to determine the adjacent point cloud blocks of the current point cloud block from the one or more point cloud blocks included in the reconstructed point cloud of the point cloud to be encoded, to determine one or more adjacent reconstruction points of the current boundary point in the current point cloud block through the projection plane corresponding to the adjacent point cloud block, and to filter the current point cloud block according to the one or more adjacent reconstruction points of the current boundary point.
• the point cloud filtering module 1901 further includes a point set determining unit 1903 (not shown in the figure) and a filtering processing unit 1904 (not shown in the figure), which are used to process the reconstructed point cloud of the point cloud to be encoded according to the target filtering method.
• for the steps performed by the point set determining unit 1903, refer to the steps performed by the aforementioned point set determining unit 1601; for the steps performed by the filtering processing unit 1904, refer to the steps performed by the aforementioned filtering processing unit 1602. Details are not repeated here.
• the various modules in the encoder 1900 provided in the embodiments of the present application are functional entities that implement the execution steps of the corresponding methods provided above, that is, functional entities capable of fully implementing the steps of the point cloud filtering method of the present application and the expansions and variations of these steps; for details, please refer to the introduction of the corresponding methods above. For the sake of brevity, they are not repeated here.
  • FIG. 20 is a schematic block diagram of a decoder 2000 provided by an embodiment of this application.
  • the decoder 2000 may include: a point cloud filtering module 2001 and a texture information reconstruction module 2002.
  • the decoder 2000 may be the decoder 200 in FIG. 1.
• the point cloud filtering module 2001 may be the point cloud filtering module 207 in FIG. 6, and the texture information reconstruction module 2002 may be the texture information reconstruction module 208 in FIG. 6.
  • the point cloud filtering module 2001 is the point cloud filtering device 1600 in FIG. 16; the texture information reconstruction module 2002 is used to reconstruct the texture information of the reconstructed point cloud after the filtering process.
• the point cloud filtering module 2001 is used to determine the adjacent point cloud blocks of the current point cloud block from the one or more point cloud blocks included in the reconstructed point cloud of the point cloud to be decoded, to determine one or more adjacent reconstruction points of the current boundary point in the current point cloud block through the projection plane corresponding to the adjacent point cloud block, and to filter the current point cloud block according to the one or more adjacent reconstruction points of the current boundary point; the texture information reconstruction module 2002 is used to reconstruct the texture information of the reconstructed point cloud after the filtering processing.
• the point cloud filtering module 2001 further includes a point set determining unit 2003 (not shown in the figure) and a filtering processing unit 2004 (not shown in the figure), which are used to process the reconstructed point cloud of the point cloud to be decoded according to the target filtering method.
• for the steps performed by the point set determining unit 2003, refer to the steps performed by the aforementioned point set determining unit 1601; for the steps performed by the filtering processing unit 2004, refer to the steps performed by the aforementioned filtering processing unit 1602. Details are not repeated here.
• the modules in the decoder 2000 provided in the embodiments of the present application are functional entities that implement the execution steps of the corresponding methods provided above, that is, functional entities capable of fully implementing the steps of the point cloud filtering method of the present application and the expansions and variations of these steps; for details, please refer to the introduction of the corresponding methods above. For the sake of brevity, they are not repeated here.
  • FIG. 21 is a schematic block diagram of an implementation manner of an encoding device or a decoding device (referred to as a decoding device 2100 for short) used in an embodiment of the present application.
  • the decoding device 2100 may include a processor 2110, a memory 2130, and a bus system 2150.
  • the processor 2110 and the memory 2130 are connected through a bus system 2150.
  • the memory 2130 is used to store instructions, and the processor 2110 is used to execute instructions stored in the memory 2130 to execute various point cloud filtering methods described in this application. In order to avoid repetition, they are not described in detail here.
• the processor 2110 may be a central processing unit (CPU), or the processor 2110 may be another general-purpose processor, a DSP, an ASIC, an FPGA or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like.
  • the general-purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
  • the memory 2130 may include a ROM device or a RAM device. Any other suitable type of storage device can also be used as the memory 2130.
  • the memory 2130 may include code and data 2131 accessed by the processor 2110 using the bus 2150.
• the memory 2130 may further include an operating system 2133 and an application program 2135. The application program 2135 includes at least one program that allows the processor 2110 to execute the point cloud encoding or decoding method described in this application (especially the method for filtering the current point cloud block described in this application).
  • the application program 2135 may include applications 1 to N, which further include a point cloud encoding or decoding application (referred to as a point cloud decoding application) that executes the point cloud encoding or decoding method described in this application.
  • the bus system 2150 may also include a power bus, a control bus, and a status signal bus. However, for clear description, various buses are marked as the bus system 2150 in the figure.
  • the decoding device 2100 may further include one or more output devices, such as a display 2170.
  • the display 2170 may be a touch-sensitive display that merges the display with a touch-sensitive unit operable to sense touch input.
  • the display 2170 may be connected to the processor 2110 via the bus 2150.
• the computer-readable medium may include a computer-readable storage medium, which corresponds to a tangible medium such as a data storage medium, or a communication medium including any medium that facilitates transfer of a computer program from one place to another (for example, according to a communication protocol). In this manner, the computer-readable medium may generally correspond to a non-transitory tangible computer-readable storage medium, or to a communication medium such as a signal or carrier wave.
  • Data storage media may be any available media that can be accessed by one or more computers or one or more processors to retrieve instructions, code and/or data structures for implementation of the techniques described in this application.
  • the computer program product may include a computer-readable medium.
• such computer-readable storage media may include RAM, ROM, EEPROM, CD-ROM or other optical disk storage devices, magnetic disk storage devices or other magnetic storage devices, flash memory, or any other medium that can be used to store the desired program code in the form of instructions or data structures and that can be accessed by a computer. Also, any connection is properly termed a computer-readable medium. For example, if instructions are transmitted from a website, server, or other remote source over coaxial cable, fiber optic cable, twisted pair, digital subscriber line (DSL), or wireless technologies such as infrared, radio, and microwave, then the coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave are included in the definition of medium.
• however, it should be understood that computer-readable storage media and data storage media do not include connections, carrier waves, signals, or other transitory media, but are instead directed to non-transitory tangible storage media.
• as used herein, disks and discs include compact discs (CD), laser discs, optical discs, digital versatile discs (DVD), and Blu-ray discs, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above should also be included within the scope of computer-readable media.
• the instructions may be executed by one or more processors, such as one or more digital signal processors (DSPs), general-purpose microprocessors, application-specific integrated circuits (ASICs), field-programmable gate arrays (FPGAs), or other equivalent integrated or discrete logic circuits. Accordingly, the term "processor" as used herein may refer to any of the foregoing structures or any other structure suitable for implementation of the techniques described herein.
• the functions described in the various illustrative logical blocks, modules, and steps described herein may be provided within dedicated hardware and/or software modules configured for encoding and decoding, or incorporated into a combined codec.
• alternatively, the techniques can be fully implemented in one or more circuits or logic elements. In one example, the various illustrative logical blocks, units, and modules in the encoder 100 and the decoder 200 may be understood as corresponding circuit devices or logic elements.
  • the technology of the present application may be implemented in a variety of devices or equipment, including wireless handsets, integrated circuits (ICs), or a set of ICs (eg, chipsets).
  • Various components, modules, or units are described in this application to emphasize the functional aspects of the device for performing the disclosed technology, but they do not necessarily need to be implemented by different hardware units.
• rather, as described above, the various units may be combined in a codec hardware unit together with suitable software and/or firmware, or provided by interoperating hardware units (including one or more processors as described above).

Abstract

The present invention relates to a point cloud filtering method and device and a storage medium, belonging to the technical field of data processing. The method comprises the following steps: determining, from one or more point cloud blocks included in a reconstructed point cloud, a point cloud block adjacent to a current point cloud block (S701); determining one or more adjacent reconstructed points of the current boundary point in the current point cloud block by means of a projection plane corresponding to the point cloud block adjacent to the current point cloud block (S702); and filtering the current point cloud block according to the adjacent reconstructed point(s) of the current boundary point (S703). According to the point cloud filtering method, the adjacent reconstructed point of the current boundary point in three-dimensional space can be determined through the projection plane in two-dimensional space, which simplifies the process of determining the adjacent reconstructed point of the current boundary point, reduces filtering complexity, and improves coding efficiency.
PCT/CN2019/115778 2019-01-15 2019-11-05 Procédé et dispositif de filtrage de nuage de points et support de stockage WO2020147379A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910037240.1 2019-01-15
CN201910037240.1A CN111435551B (zh) 2019-01-15 2019-01-15 点云滤波方法、装置及存储介质

Publications (1)

Publication Number Publication Date
WO2020147379A1 true WO2020147379A1 (fr) 2020-07-23

Family

ID=71580051

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/115778 WO2020147379A1 (fr) 2019-01-15 2019-11-05 Procédé et dispositif de filtrage de nuage de points et support de stockage

Country Status (2)

Country Link
CN (1) CN111435551B (fr)
WO (1) WO2020147379A1 (fr)


Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117223031A (zh) * 2021-05-06 2023-12-12 Oppo广东移动通信有限公司 点云编解码方法、编码器、解码器及计算机存储介质
WO2023123471A1 (fr) * 2021-12-31 2023-07-06 Oppo广东移动通信有限公司 Procédé de codage et de décodage, flux de code, codeur, décodeur et support d'enregistrement


Family Cites Families (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110187704A1 (en) * 2010-02-04 2011-08-04 Microsoft Corporation Generating and displaying top-down maps of reconstructed 3-d scenes
KR101079475B1 (ko) * 2011-06-28 2011-11-03 (주)태일아이엔지 포인트 클라우드 필터링을 이용한 3차원 도시공간정보 구축 시스템
CN104427291B (zh) * 2013-08-19 2018-09-28 华为技术有限公司 一种图像处理方法及设备
US9547901B2 (en) * 2013-11-05 2017-01-17 Samsung Electronics Co., Ltd. Method and apparatus for detecting point of interest (POI) in three-dimensional (3D) point clouds
GB2528669B (en) * 2014-07-25 2017-05-24 Toshiba Res Europe Ltd Image Analysis Method
CN104240300B (zh) * 2014-08-29 2017-07-14 电子科技大学 基于分布式并行的大规模点云复杂空间曲面重构方法
US11297346B2 (en) * 2016-05-28 2022-04-05 Microsoft Technology Licensing, Llc Motion-compensated compression of dynamic voxelized point clouds
CN106548520A (zh) * 2016-11-16 2017-03-29 湖南拓视觉信息技术有限公司 一种点云数据去噪的方法和系统
CN107123164B (zh) * 2017-03-14 2020-04-28 华南理工大学 保持锐利特征的三维重建方法及系统
CN106960470B (zh) * 2017-04-05 2022-04-22 未来科技(襄阳)有限公司 三维点云曲面重建方法及装置
CN108986024B (zh) * 2017-06-03 2024-01-23 西南大学 一种基于网格的激光点云规则排列处理方法
CN107274376A (zh) * 2017-07-10 2017-10-20 南京埃斯顿机器人工程有限公司 一种工件三维点云数据平滑滤波方法
CN107767453B (zh) * 2017-11-01 2021-02-26 中北大学 一种基于规则约束的建筑物lidar点云重构优化方法
CN109118574A (zh) * 2018-07-04 2019-01-01 北京航空航天大学 一种基于三维特征提取的快速逆向建模方法

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101369313A (zh) * 2007-08-17 2009-02-18 鸿富锦精密工业(深圳)有限公司 点云噪声点过滤系统及方法
US9472022B2 (en) * 2012-10-05 2016-10-18 University Of Southern California Three-dimensional point processing and model generation
CN103679807A (zh) * 2013-12-24 2014-03-26 焦点科技股份有限公司 一种带边界约束的散乱点云重构方法
CN105630905A (zh) * 2015-12-14 2016-06-01 西安科技大学 一种基于散乱点云数据的分层式压缩方法及装置
CN107845073A (zh) * 2017-10-19 2018-03-27 华中科技大学 一种基于深度图的局部自适应三维点云去噪方法

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116310227A (zh) * 2023-05-18 2023-06-23 海纳云物联科技有限公司 三维稠密重建方法、装置、电子设备及介质
CN116310227B (zh) * 2023-05-18 2023-09-12 海纳云物联科技有限公司 三维稠密重建方法、装置、电子设备及介质
CN116681767A (zh) * 2023-08-03 2023-09-01 长沙智能驾驶研究院有限公司 一种点云搜索方法、装置及终端设备
CN116681767B (zh) * 2023-08-03 2023-12-29 长沙智能驾驶研究院有限公司 一种点云搜索方法、装置及终端设备

Also Published As

Publication number Publication date
CN111435551B (zh) 2023-01-13
CN111435551A (zh) 2020-07-21

Similar Documents

Publication Publication Date Title
WO2020147379A1 (fr) Procédé et dispositif de filtrage de nuage de points et support de stockage
US11704837B2 (en) Point cloud encoding method, point cloud decoding method, encoder, and decoder
US11388442B2 (en) Point cloud encoding method, point cloud decoding method, encoder, and decoder
US20210183110A1 (en) Point Cloud Encoding Method, Point Cloud Decoding Method, Encoder, and Decoder
US11875538B2 (en) Point cloud encoding method and encoder
US11961265B2 (en) Point cloud encoding and decoding method and apparatus
WO2020151496A1 (fr) Procédé et appareil de codage/décodage de nuage de points
WO2020063294A1 (fr) Procédé de codage et de décodage en nuage de points et codec
US20220007037A1 (en) Point cloud encoding method and apparatus, point cloud decoding method and apparatus, and storage medium
WO2020063718A1 (fr) Procédé de codage/décodage de nuage de points et codeur/décodeur
WO2020143725A1 (fr) Procédé de décodage de nuage de points et décodeur
WO2020015517A1 (fr) Procédé de codage de nuage de points, procédé de décodage de nuage de points et décodeur
JP2022513484A (ja) 点群符号化方法及びエンコーダ
WO2020187283A1 (fr) Procédé de codage de nuage de points, procédé de décodage de nuage de points, appareil et support de stockage
WO2020057338A1 (fr) Procédé de codage en nuage de points et codeur
WO2020220941A1 (fr) Procédé d'encodage de nuage de points, procédé de décodage de nuage de points, appareils et support de stockage

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19910096

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19910096

Country of ref document: EP

Kind code of ref document: A1