WO2020143725A1 - Point cloud decoding method and decoder - Google Patents

Point cloud decoding method and decoder (点云译码方法及译码器)

Info

Publication number
WO2020143725A1
Authority
WO
WIPO (PCT)
Prior art keywords
pixel block
pixel
point cloud
target
target position
Application number
PCT/CN2020/071247
Other languages
English (en)
French (fr)
Inventor
蔡康颖
张德军
Original Assignee
华为技术有限公司
Application filed by 华为技术有限公司
Publication of WO2020143725A1


Classifications

    • H04N19/42: coding or decoding of digital video signals characterised by implementation details or hardware specially adapted for video compression or decompression, e.g. dedicated software implementation
    • G06T15/04: 3D image rendering; texture mapping
    • G06T17/10: 3D modelling; constructive solid geometry [CSG] using solid primitives, e.g. cylinders, cubes
    • H04N19/103: adaptive coding; selection of coding mode or of prediction mode
    • H04N19/176: adaptive coding characterised by the coding unit, the unit being an image region that is a block, e.g. a macroblock
    • H04N19/46: embedding additional information in the video signal during the compression process
    • H04N19/597: predictive coding specially adapted for multi-view video sequence encoding
    • H04N19/90: coding techniques not provided for in groups H04N19/10-H04N19/85, e.g. fractals

Definitions

  • This application relates to the technical field of coding and decoding, in particular to a point cloud decoding method and decoder.
  • In order to save code stream transmission overhead, when encoding the point cloud to be encoded, the encoder usually needs to downsample the original-resolution occupancy map of the point cloud, and send information related to the downsampled (that is, low-resolution) occupancy map to the decoder. Based on this, when reconstructing the point cloud, the encoder and decoder first need to upsample the downsampled occupancy map to recover the original-resolution occupancy map, and then reconstruct the point cloud based on the upsampled occupancy map.
  • This approach causes the upsampled occupancy map of the point cloud to have jagged edges, which leads to more outlier points (i.e. outliers or abnormal points) in the point cloud reconstructed from the upsampled occupancy map, and in turn degrades the codec performance of the point cloud.
  • the embodiments of the present application provide a point cloud codec method and codec, which helps reduce outlier points in the reconstructed point cloud, thereby helping to improve the codec performance of the point cloud.
  • According to a first aspect, a point cloud coding/decoding method is provided, which includes: enlarging a first occupancy map of a point cloud to be decoded to obtain a second occupancy map, where the resolution of the first occupancy map is a first resolution, the resolution of the second occupancy map is a second resolution, and the second resolution is greater than the first resolution; if a reference pixel block in the first occupancy map is a boundary pixel block, marking the pixel at a first target position in a to-be-processed pixel block in the second occupancy map as occupied, and/or marking the pixel at a second target position in the to-be-processed pixel block as unoccupied, to obtain a marked pixel block, where the to-be-processed pixel block corresponds to the reference pixel block; if the reference pixel block is a non-boundary pixel block, marking all pixels in the to-be-processed pixel block as occupied, or marking all of them as unoccupied, to obtain a marked pixel block; and reconstructing the point cloud to be decoded according to the marked second occupancy map, where the marked second occupancy map includes the marked pixel block.
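  • The following is a minimal illustrative sketch (not part of the patent text) of this enlarge-and-mark flow, written in Python with numpy. It assumes an upsampling factor of 4, assumes that the first occupancy map stores one value per pixel block (1 = valid, 0 = invalid), and leaves the rule that picks the first/second target positions of a boundary block to a caller-supplied function mark_boundary_block, since that rule is what the designs below specify.

```python
import numpy as np

def upsample_and_mark(first_map, mark_boundary_block, factor=4):
    """Enlarge the first occupancy map into a second occupancy map and mark each
    to-be-processed pixel block according to its reference pixel block."""
    h, w = first_map.shape
    second_map = np.zeros((h * factor, w * factor), dtype=np.uint8)  # empty second occupancy map
    padded = np.pad(first_map, 1)  # assumption: blocks outside the map are treated as invalid
    for i in range(h):
        for j in range(w):
            neighbours = padded[i:i + 3, j:j + 3].copy()
            neighbours[1, 1] = 1  # ignore the reference pixel block itself
            if neighbours.min() == 0:
                # boundary pixel block: mark the first target positions as occupied
                # and/or the second target positions as unoccupied
                block = mark_boundary_block(first_map, i, j, factor)
            else:
                # non-boundary pixel block: all occupied if the reference block is
                # valid, otherwise all unoccupied
                block = np.full((factor, factor), first_map[i, j], dtype=np.uint8)
            second_map[i * factor:(i + 1) * factor,
                       j * factor:(j + 1) * factor] = block
    return second_map
```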
  • In this way, the pixels at some positions in the to-be-processed pixel block of the unmarked high-resolution point cloud occupancy map are marked as occupied (for example, set to 1), and/or the pixels at some other positions in the to-be-processed pixel block are marked as unoccupied (for example, set to 0).
  • This makes the upsampled occupancy map as close as possible to the original occupancy map of the point cloud to be decoded.
  • As a result, outlier points in the reconstructed point cloud can be reduced, which helps improve codec performance.
  • No pixel in the second occupancy map has been marked yet, so the second occupancy map can be considered an empty occupancy map.
  • The term "reference pixel block" is used in the embodiments of the present application to denote a certain pixel block in the first occupancy map (specifically, it can be any pixel block in the first occupancy map). This pixel block corresponds to the to-be-processed pixel block in the second occupancy map; in other words, the to-be-processed pixel block is the pixel block obtained by enlarging the "reference pixel block".
  • the "marker” in the embodiment of the present application may be replaced with "filling”, which is described here in a unified manner, and will not be described in detail below.
  • If all spatially adjacent pixel blocks of a reference pixel block are valid pixel blocks, the reference pixel block is a non-boundary pixel block; if at least one spatially adjacent pixel block of a reference pixel block is an invalid pixel block, the reference pixel block is a boundary pixel block. Here, if at least one pixel included in the reference pixel block is an occupied pixel (for example, a pixel set to 1), the reference pixel block is a valid pixel block; if all pixels included in the reference pixel block are unoccupied pixels (for example, pixels set to 0), the reference pixel block is an invalid pixel block.
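  • For illustration only, the following sketch (an assumption-laden example, not text from the patent) computes the valid/invalid flag of every B*B pixel block of an original-resolution occupancy map and then flags boundary pixel blocks, treating blocks outside the map as invalid and assuming the map dimensions are multiples of B.

```python
import numpy as np

def classify_blocks(occupancy, b=4):
    """Return (valid, boundary): valid[i, j] == 1 if block (i, j) contains at least
    one occupied pixel; boundary[i, j] is True if at least one of its eight
    spatially adjacent blocks is invalid."""
    h, w = occupancy.shape  # assumed to be multiples of b
    blocks = occupancy.reshape(h // b, b, w // b, b)
    valid = (blocks.max(axis=(1, 3)) > 0).astype(np.uint8)
    padded = np.pad(valid, 1)  # assumption: out-of-map neighbours count as invalid
    boundary = np.zeros(valid.shape, dtype=bool)
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            if di == 0 and dj == 0:
                continue
            neighbour = padded[1 + di:1 + di + valid.shape[0],
                               1 + dj:1 + dj + valid.shape[1]]
            boundary |= (neighbour == 0)
    return valid, boundary
```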
  • the first target position and the second target position represent the positions of some pixels in the pixel block to be processed.
  • When "and/or" is specifically "and", there is no intersection between the first target position and the second target position, and the union of the first target position and the second target position covers some or all of the pixel positions in the to-be-processed pixel block. That is, for a pixel in the to-be-processed pixel block, its position may be the first target position, may be the second target position, or may be neither the first target position nor the second target position.
  • If the reference pixel block is a non-boundary pixel block, marking all pixels in the to-be-processed pixel block corresponding to the reference pixel block as occupied or as unoccupied includes: if the reference pixel block is a non-boundary pixel block and is a valid pixel block, marking all pixels in the to-be-processed pixel block corresponding to the reference pixel block as occupied (for example, set to 1); if the reference pixel block is a non-boundary pixel block and is an invalid pixel block, marking all pixels in the to-be-processed pixel block corresponding to the reference pixel block as unoccupied (for example, set to 0).
  • In a possible design, marking the pixel at the first target position in the to-be-processed pixel block in the second occupancy map as occupied, and/or marking the pixel at the second target position in the to-be-processed pixel block as unoccupied, includes: determining a target candidate mode applicable to the to-be-processed pixel block, where the target candidate mode includes one candidate mode or multiple candidate modes (in other words, the target candidate mode is one candidate mode or a combination of multiple candidate modes); and, according to the target candidate mode, marking the pixel at the first target position in the to-be-processed pixel block in the second occupancy map as occupied, and/or marking the pixel at the second target position in the to-be-processed pixel block as unoccupied.
  • The distribution of the invalid spatially adjacent pixel blocks of the reference pixel block in the target candidate mode (for example, indicated by a five-pointed star as shown in FIG. 9A) is consistent with, substantially the same as, or tends to be consistent with the actual distribution of the invalid spatially adjacent pixel blocks of the reference pixel block.
  • The distribution of the invalid spatially adjacent pixel blocks of the reference pixel block in each candidate mode may indicate the orientation of those invalid spatially adjacent pixel blocks relative to the reference pixel block.
  • The criterion that the distribution of the invalid spatially adjacent pixel blocks of the reference pixel block in the target candidate mode is consistent with, substantially the same as, or tends to be consistent with the distribution of the invalid spatially adjacent pixel blocks of the reference pixel block may be replaced by: the distribution of the valid spatially adjacent pixel blocks of the reference pixel block in the target candidate mode is consistent with, substantially the same as, or tends to be consistent with the distribution of the valid spatially adjacent pixel blocks of the reference pixel block.
  • The distribution of the valid spatially adjacent pixel blocks of the reference pixel block in each candidate mode may indicate the orientation of those valid spatially adjacent pixel blocks relative to the reference pixel block.
  • Alternatively, the criterion that the distribution of the invalid spatially adjacent pixel blocks of the reference pixel block in the target candidate mode is consistent with, substantially the same as, or tends to be consistent with the distribution of the invalid spatially adjacent pixel blocks of the reference pixel block may be replaced by: the distribution of the spatially adjacent pixel blocks of the reference pixel block in the target candidate mode is consistent with, substantially the same as, or tends to be consistent with the distribution of the spatially adjacent pixel blocks of the reference pixel block.
  • The distribution of the spatially adjacent pixel blocks of the reference pixel block in each candidate mode may indicate the orientation of those spatially adjacent pixel blocks relative to the reference pixel block.
  • The following descriptions take as an example the case where the distribution of the invalid spatially adjacent pixel blocks of the reference pixel block in the target candidate mode is consistent with, substantially the same as, or tends to be consistent with the distribution of the invalid spatially adjacent pixel blocks of the reference pixel block.
  • A candidate mode can be understood as a pattern or template representing a distribution of the invalid spatially adjacent pixel blocks of the reference pixel block. The reference pixel block is a concept introduced to facilitate the description of candidate modes.
  • Determining a target candidate mode applicable to the to-be-processed pixel block may include: selecting the target candidate mode applicable to the to-be-processed pixel block from a candidate mode set, where the candidate mode set includes at least two candidate modes. The candidate mode set may be predefined.
  • In a possible design, marking the pixel at the first target position in the to-be-processed pixel block in the second occupancy map as occupied, and/or marking the pixel at the second target position in the to-be-processed pixel block as unoccupied, includes: according to the position distribution of the invalid pixels in the target candidate mode, marking the pixel at the first target position in the to-be-processed pixel block in the second occupancy map as occupied, and/or marking the pixel at the second target position in the to-be-processed pixel block as unoccupied. The positions of the valid pixels determined based on the position distribution of the invalid pixels in the candidate mode are the first target positions, and/or the positions of the invalid pixels determined based on the position distribution of the invalid pixels in the candidate mode are the second target positions.
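  • As an illustration of this mapping from a candidate mode to target positions (the concrete mask below is invented for the example and is not a mode defined in the patent), a candidate mode can be modelled as a B2*B2 boolean mask of the assumed invalid-pixel positions; its complement gives the first target positions and the mask itself gives the second target positions:

```python
import numpy as np

B2 = 4
# Invented example: invalid pixels assumed to lie in the top half of the block,
# as if the invalid spatially adjacent block were located above the reference block.
example_mode_invalid = np.zeros((B2, B2), dtype=bool)
example_mode_invalid[:B2 // 2, :] = True

def mark_by_mode(invalid_mask):
    """Valid positions derived from the mode -> first target positions (occupied);
    invalid positions -> second target positions (unoccupied)."""
    block = np.empty(invalid_mask.shape, dtype=np.uint8)
    block[~invalid_mask] = 1
    block[invalid_mask] = 0
    return block

print(mark_by_mode(example_mode_invalid))
```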
  • In a possible design, when the target candidate mode includes multiple candidate modes (that is, the target candidate mode is a combination of multiple candidate modes), marking the pixel at the first target position in the to-be-processed pixel block in the second occupancy map as occupied, and/or marking the pixel at the second target position in the to-be-processed pixel block as unoccupied, includes: according to the sub-candidate mode corresponding to the target candidate mode, marking the pixel at the first target position in the to-be-processed pixel block in the second occupancy map as occupied, and/or marking the pixel at the second target position in the to-be-processed pixel block as unoccupied.
  • Different sub-candidate modes are used to describe different position distributions of invalid pixels inside a pixel block.
  • When the target candidate mode is one candidate mode and corresponds to one sub-candidate mode, the pixel at the first target position in the to-be-processed pixel block is marked as occupied and/or the pixel at the second target position in the to-be-processed pixel block is marked as unoccupied according to that sub-candidate mode; when the target candidate mode corresponds to multiple sub-candidate modes, the marking is performed according to one of the multiple sub-candidate modes.
  • The method further includes: when the target candidate mode corresponds to multiple sub-candidate modes, encoding identification information into the code stream, where the identification information is used to indicate which of the multiple sub-candidate modes is adopted when performing the marking operation.
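  • A heavily simplified encoder-side sketch of this signalling is given below. The choice criterion (picking the sub-candidate mode whose marking best matches the original-resolution block) and the bitstream abstraction (a plain Python list) are assumptions made for illustration; the patent text only requires that the adopted sub-candidate mode be identified in the code stream.

```python
import numpy as np

def choose_and_signal_sub_mode(original_block, sub_mode_invalid_masks, bitstream):
    """Pick one of the multiple sub-candidate modes and append its index to the
    code stream as the identification information."""
    best_idx, best_err = 0, None
    for idx, mask in enumerate(sub_mode_invalid_masks):
        marked = (~mask).astype(np.uint8)  # valid positions marked as occupied
        err = int(np.sum(marked != original_block))
        if best_err is None or err < best_err:
            best_idx, best_err = idx, err
    bitstream.append(best_idx)  # identification information for the decoder
    return best_idx

# The decoder would simply parse this index back and apply the corresponding
# sub-candidate mode when marking the to-be-processed pixel block.
```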
  • the method further includes: parsing the code stream to obtain the identification information.
  • In a possible design, when the target candidate mode includes a first candidate mode and a second candidate mode, the pixel at the first target position in the to-be-processed pixel block in the second occupancy map is marked as occupied, and/or the pixel at the second target position in the to-be-processed pixel block is marked as unoccupied, according to a target sub-candidate mode, where the target sub-candidate mode includes a first sub-candidate mode and a second sub-candidate mode. The first candidate mode corresponds to the first sub-candidate mode and the second candidate mode corresponds to the second sub-candidate mode; or, the first candidate mode corresponds to multiple sub-candidate modes that include the first sub-candidate mode, and the second candidate mode corresponds to multiple sub-candidate modes that include the second sub-candidate mode; or, the first candidate mode corresponds to the first sub-candidate mode, and the second candidate mode corresponds to multiple sub-candidate modes that include the second sub-candidate mode.
  • The method further includes: when the first candidate mode corresponds to multiple sub-candidate modes, encoding first identification information into the code stream, where the first identification information is used to indicate the first sub-candidate mode.
  • the method further includes: parsing the code stream to obtain the first identification information.
  • The method further includes: when the second candidate mode corresponds to multiple sub-candidate modes, encoding second identification information into the code stream, where the second identification information is used to indicate the second sub-candidate mode.
  • the method further includes: parsing the code stream to obtain the second identification information.
  • In a possible design, marking the pixel at the first target position in the to-be-processed pixel block in the second occupancy map as occupied, and/or marking the pixel at the second target position in the to-be-processed pixel block as unoccupied, includes: determining the type of the to-be-processed pixel block; and, according to the type of the to-be-processed pixel block, using the corresponding target processing method to mark the pixel at the first target position in the to-be-processed pixel block in the second occupancy map as occupied, and/or mark the pixel at the second target position in the to-be-processed pixel block as unoccupied.
  • This method is simple to implement.
  • Determining the type of the to-be-processed pixel block includes: determining, based on whether the spatially adjacent pixel blocks of the reference pixel block are invalid pixel blocks, orientation information of the invalid pixels in the to-be-processed pixel block within that pixel block; where different types of pixel blocks correspond to different orientation information.
  • In a possible design, marking the pixel at the first target position in the to-be-processed pixel block in the second occupancy map as occupied, and/or marking the pixel at the second target position in the to-be-processed pixel block as unoccupied, includes: when the number of valid pixel blocks among the spatially adjacent pixel blocks of the reference pixel block is greater than or equal to a preset threshold, marking the pixel at the first target position in the to-be-processed pixel block in the second occupancy map as occupied, and/or marking the pixel at the second target position in the to-be-processed pixel block as unoccupied.
  • When the number of valid pixel blocks among the spatially adjacent pixel blocks of the reference pixel block is large, there is more information around the reference pixel block that can be referenced. In that case the estimated position distribution of the invalid pixels in the corresponding to-be-processed pixel block has higher accuracy, so the occupancy map obtained after the subsequent upsampling process is closer to the original occupancy map of the point cloud to be decoded, thereby improving coding and decoding performance.
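  • A small sketch of this threshold check follows (illustrative only; the threshold value 4 is an arbitrary example, not a value specified here):

```python
import numpy as np

def enough_valid_neighbours(valid_blocks, i, j, threshold=4):
    """Count the valid pixel blocks among the eight spatially adjacent blocks of
    reference block (i, j); the partial (target-position) marking is applied only
    when the count reaches the preset threshold."""
    padded = np.pad(valid_blocks, 1)  # out-of-map neighbours count as invalid
    neighbourhood = padded[i:i + 3, j:j + 3]
    count = int(neighbourhood.sum()) - int(valid_blocks[i, j])  # exclude the block itself
    return count >= threshold
```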
  • According to a second aspect, a point cloud coding/decoding method is provided, which includes: upsampling a first occupancy map of a point cloud to be decoded to obtain a second occupancy map, where the resolution of the first occupancy map is a first resolution, the resolution of the second occupancy map is a second resolution, and the second resolution is greater than the first resolution; where, if a reference pixel block in the first occupancy map is a boundary pixel block, the pixel at a first target position in the pixel block corresponding to the reference pixel block in the second occupancy map is an occupied pixel (that is, a marked pixel, such as a pixel filled with 1), and/or the pixel at a second target position in the pixel block corresponding to the reference pixel block in the second occupancy map is an unoccupied pixel; if the reference pixel block is a non-boundary pixel block, the pixels in the pixel block corresponding to the reference pixel block in the second occupancy map are all occupied pixels or all unoccupied pixels (that is, unmarked pixels, such as pixels filled with 0); and reconstructing the point cloud to be decoded according to the second occupancy map.
  • According to a third aspect, a point cloud encoding method is provided, which includes: determining indication information used to indicate whether to process an occupancy map of a point cloud to be encoded according to a target point cloud encoding method, where the target point cloud encoding method includes the point cloud coding/decoding method (specifically, a point cloud encoding method) provided by the first aspect or any possible design of the first aspect, or by the second aspect or any possible design of the second aspect; and encoding the indication information into the code stream.
  • According to a fourth aspect, a point cloud decoding method is provided, which includes: parsing a code stream to obtain indication information used to indicate whether to process an occupancy map of a point cloud to be decoded according to a target point cloud decoding method, where the target point cloud decoding method includes the point cloud coding/decoding method (specifically, a point cloud decoding method) provided by the first aspect or any possible design of the first aspect, or by the second aspect or any possible design of the second aspect; and, when the indication information indicates processing according to the target point cloud decoding method, processing the occupancy map of the point cloud to be decoded according to the target point cloud decoding method.
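  • The control flow of the third and fourth aspects can be pictured with the tiny sketch below, where the indication information is modelled as a single flag already parsed from the auxiliary information (an assumption made only for illustration):

```python
def process_occupancy_map(indication_flag, occupancy_map, target_method, legacy_method):
    """If the parsed indication information says so, process the occupancy map of the
    point cloud to be decoded with the target point cloud decoding method; otherwise
    fall back to the existing processing."""
    if indication_flag:
        return target_method(occupancy_map)
    return legacy_method(occupancy_map)
```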
  • A decoder is provided, which includes: an upsampling module, configured to enlarge a first occupancy map of a point cloud to be decoded to obtain a second occupancy map, where the resolution of the first occupancy map is a first resolution, the resolution of the second occupancy map is a second resolution, and the second resolution is greater than the first resolution; if a reference pixel block in the first occupancy map is a boundary pixel block, the pixel at a first target position in a to-be-processed pixel block in the second occupancy map is marked as occupied, and/or the pixel at a second target position in the to-be-processed pixel block is marked as unoccupied, to obtain a marked pixel block; if the reference pixel block is a non-boundary pixel block, all pixels in the to-be-processed pixel block are marked as occupied or all are marked as unoccupied, to obtain a marked pixel block; where the to-be-processed pixel block corresponds to the reference pixel block; and a point cloud reconstruction module, configured to reconstruct the point cloud to be decoded according to the marked second occupancy map.
  • A decoder is provided, which includes: an upsampling module, configured to upsample a first occupancy map of a point cloud to be decoded to obtain a second occupancy map, where the resolution of the first occupancy map is a first resolution, the resolution of the second occupancy map is a second resolution, and the second resolution is greater than the first resolution; where, if a reference pixel block in the first occupancy map is a boundary pixel block, the pixel at a first target position in the pixel block corresponding to the reference pixel block in the second occupancy map is an occupied pixel, and/or the pixel at a second target position in the pixel block corresponding to the reference pixel block in the second occupancy map is an unoccupied pixel; if the reference pixel block is a non-boundary pixel block, the pixels in the pixel block corresponding to the reference pixel block in the second occupancy map are all occupied pixels or all unoccupied pixels; and a point cloud reconstruction module, configured to reconstruct the point cloud to be decoded according to the second occupancy map.
  • An encoder is provided, which includes: an auxiliary information encoding module, configured to determine indication information used to indicate whether to process an occupancy map of a point cloud to be encoded according to a target point cloud encoding method, where the target point cloud encoding method includes the point cloud coding/decoding method (specifically, a point cloud encoding method) provided by the first aspect or any possible design of the first aspect, or by the second aspect or any possible design of the second aspect, and to encode the indication information into the code stream; and an occupancy map processing module, configured to process the occupancy map of the point cloud to be encoded according to the target point cloud encoding method when the indication information indicates that the occupancy map of the point cloud to be encoded is processed according to the target point cloud encoding method. The occupancy map processing module may be implemented by the upsampling module 111 and the point cloud reconstruction module 112 included in the encoder shown in FIG. 2.
  • A decoder is provided, which includes: an auxiliary information decoding module, configured to parse a code stream to obtain indication information, where the indication information is used to indicate whether to process an occupancy map of a point cloud to be decoded according to a target point cloud decoding method, and the target point cloud decoding method includes the point cloud coding/decoding method (specifically, a point cloud decoding method) provided by the first aspect or any possible design of the first aspect, or by the second aspect or any possible design of the second aspect; and an occupancy map processing module, configured to process the occupancy map of the point cloud to be decoded according to the target point cloud decoding method when the indication information indicates processing according to the target point cloud decoding method.
  • The occupancy map processing module may be implemented by the upsampling module 208 and the point cloud reconstruction module 205 included in the decoder shown in FIG. 5.
  • A decoding device is provided, which includes a memory and a processor, where the memory is used to store program code, and the processor is used to call the program code to perform the point cloud coding/decoding method provided by the first aspect or any possible design of the first aspect, or by the second aspect or any possible design of the second aspect.
  • an encoding device including: a memory and a processor; wherein the memory is used to store program code; the processor is used to call the program code to perform the point cloud encoding method provided in the third aspect.
  • a decoding device including: a memory and a processor; wherein the memory is used to store program code; the processor is used to call the program code to perform the point cloud decoding method provided in the fourth aspect .
  • The present application also provides a computer-readable storage medium, including program code, which, when run on a computer, causes the computer to perform any of the occupancy map upsampling methods provided by the first aspect and its possible designs, or by the second aspect and its possible designs.
  • the present application also provides a computer-readable storage medium, including program code, which when run on a computer, causes the computer to perform the point cloud coding method provided in the third aspect.
  • The present application also provides a computer-readable storage medium, including program code, which, when run on a computer, causes the computer to perform the point cloud decoding method provided in the fourth aspect.
  • FIG. 1 is a schematic block diagram of a point cloud decoding system that can be used in an example of an embodiment of the present application
  • FIG. 2 is a schematic block diagram of an example encoder that can be used in an embodiment of the present application
  • FIG. 3 is a schematic diagram of a point cloud, a patch of a point cloud, and an occupancy diagram of a point cloud applicable to embodiments of the present application;
  • FIG. 4 is a schematic comparison diagram of the change process of an occupancy map of an encoder-side point cloud provided by an embodiment of the present application;
  • FIG. 5 is a schematic block diagram of a decoder that can be used in an example of an embodiment of the present application
  • FIG. 6 is a schematic flowchart of a point cloud decoding method provided by an embodiment of the present application.
  • FIG. 7 is a schematic diagram of a position distribution of invalid pixels in a pixel block provided by an embodiment of the present application.
  • FIG. 8 is a schematic diagram of another kind of position distribution of invalid pixels in a pixel block provided by an embodiment of the present application.
  • FIG. 9A is a schematic diagram of a correspondence between a candidate mode and a sub-candidate mode provided by an embodiment of this application;
  • FIG. 9B is a schematic diagram of a correspondence between another candidate mode and a sub-candidate mode provided by an embodiment of this application;
  • FIG. 10 is a schematic diagram of an upsampling process when a target candidate mode is a combination of multiple candidate modes provided by an embodiment of the present application;
  • FIG. 11 is a schematic diagram of a corresponding relationship between an index, a discrimination method diagram, a schematic diagram, and description information of a type of boundary pixel block provided by an embodiment of the present application;
  • FIG. 12 is a schematic diagram for determining a first target position and/or a second target position provided by an embodiment of the present application
  • FIG. 13 is another schematic diagram of the correspondence between an index, a discrimination method diagram, a schematic diagram, and description information of a type of boundary pixel block provided by an embodiment of the present application;
  • FIG. 14 is another schematic diagram for determining a first target position and/or a second target position provided by an embodiment of this application;
  • FIG. 15 is a schematic flowchart of a point cloud decoding method provided by an embodiment of the present application;
  • FIG. 16 is a schematic flowchart of a point cloud coding method provided by an embodiment of the present application;
  • FIG. 17 is a schematic flowchart of a point cloud decoding method provided by an embodiment of the present application.
  • FIG. 18 is a schematic block diagram of a decoder provided by an embodiment of the present application.
  • FIG. 19A is a schematic block diagram of an encoder provided by an embodiment of the present application;
  • FIG. 19B is a schematic block diagram of a decoder provided by an embodiment of the present application;
  • FIG. 20 is a schematic block diagram of an implementation manner of a decoding device used in an embodiment of the present application.
  • In the embodiments of the present application, "at least one (kind)" includes one (kind) or multiple (kinds), and "multiple (kinds)" means two (kinds) or more than two (kinds).
  • at least one of A, B, and C includes the presence of A alone, B alone, A and B, A and C, B and C, and A, B, and C.
  • The character "/" means "or"; for example, A/B can mean A or B. The term "and/or" in this document merely describes an association relationship between associated objects and indicates that three relationships may exist; for example, A and/or B can mean: A alone, both A and B, or B alone.
  • FIG. 1 is a schematic block diagram of a point cloud decoding system 1 that can be used in an example of an embodiment of the present application.
  • the term "point cloud coding" or “coding” may generally refer to point cloud coding or point cloud decoding.
  • the encoder 100 of the point cloud decoding system 1 may encode the point cloud to be encoded according to any point cloud encoding method proposed in this application.
  • the decoder 200 of the point cloud decoding system 1 may decode the point cloud to be decoded according to the point cloud decoding method corresponding to the point cloud encoding method used by the encoder proposed in this application.
  • the point cloud decoding system 1 includes a source device 10 and a destination device 20.
  • the source device 10 generates encoded point cloud data. Therefore, the source device 10 may be referred to as a point cloud encoding device.
  • the destination device 20 may decode the encoded point cloud data generated by the source device 10. Therefore, the destination device 20 may be referred to as a point cloud decoding device.
  • Various implementations of source device 10, destination device 20, or both may include one or more processors and memory coupled to the one or more processors.
  • The memory may include, but is not limited to, random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory, or any other medium that can be used to store the desired program code in the form of instructions or data structures accessible by the computer, as described herein.
  • Source device 10 and destination device 20 may include various devices, including desktop computers, mobile computing devices, notebook (e.g., laptop) computers, tablet computers, set-top boxes, telephone handsets such as so-called "smart" phones, televisions, cameras, display devices, digital media players, video game consoles, in-vehicle computers, or the like.
  • Link 30 may include one or more media or devices capable of moving the encoded point cloud data from source device 10 to destination device 20.
  • link 30 may include one or more communication media that enable source device 10 to send encoded point cloud data directly to destination device 20 in real time.
  • the source device 10 may modulate the encoded point cloud data according to a communication standard (eg, a wireless communication protocol), and may send the modulated point cloud data to the destination device 20.
  • the one or more communication media may include wireless and/or wired communication media, such as a radio frequency (RF) spectrum or one or more physical transmission lines.
  • the one or more communication media may form part of a packet-based network, such as a local area network, a wide area network, or a global network (eg, the Internet).
  • the one or more communication media may include routers, switches, base stations, or other devices that facilitate communication from the source device 10 to the destination device 20.
  • the encoded data may be output from the output interface 140 to the storage device 40.
  • the encoded point cloud data can be accessed from the storage device 40 through the input interface 240.
  • The storage device 40 may include any of a variety of distributed or locally accessed data storage media, such as a hard disk drive, a Blu-ray disc, a digital versatile disc (DVD), a compact disc read-only memory (CD-ROM), flash memory, volatile or non-volatile memory, or any other suitable digital storage medium for storing encoded point cloud data.
  • the storage device 40 may correspond to a file server or another intermediate storage device that may hold the encoded point cloud data generated by the source device 10.
  • the destination device 20 may access the stored point cloud data from the storage device 40 via streaming or download.
  • the file server may be any type of server capable of storing encoded point cloud data and transmitting the encoded point cloud data to the destination device 20.
  • Example file servers include network servers (for example, for websites), file transfer protocol (FTP) servers, network attached storage (NAS) devices, or local disk drives.
  • the destination device 20 can access the encoded point cloud data through any standard data connection, including an Internet connection.
  • This may include a wireless channel (e.g., a Wi-Fi connection), a wired connection (e.g., a digital subscriber line (DSL), a cable modem, etc.), or a combination of both suitable for accessing encoded point cloud data stored on a file server.
  • the transmission of the encoded point cloud data from the storage device 40 may be streaming transmission, download transmission, or a combination of both.
  • The point cloud decoding system 1 illustrated in FIG. 1 is only an example, and the technology of the present application can be applied to point cloud encoding or point cloud decoding settings that do not necessarily include any data communication between the point cloud encoding device and the point cloud decoding device.
  • In other examples, data is retrieved from local storage, streamed over a network, and so on.
  • the point cloud encoding device may encode the data and store the data to the memory, and/or the point cloud decoding device may retrieve the data from the memory and decode the data.
  • encoding and decoding are performed by devices that do not communicate with each other but only encode data to and/or retrieve data from memory and decode the data.
  • the source device 10 includes a data source 120, an encoder 100 and an output interface 140.
  • The output interface 140 may include a modulator/demodulator (modem) and/or a transmitter.
  • The data source 120 may include a point cloud capture device (e.g., a camera), a point cloud archive containing previously captured point cloud data, a point cloud feed interface to receive point cloud data from a point cloud content provider, and/or a computer graphics system for generating point cloud data, or a combination of these sources of point cloud data.
  • the encoder 100 may encode point cloud data from the data source 120.
  • the source device 10 sends the encoded point cloud data directly to the destination device 20 via the output interface 140.
  • the encoded point cloud data may also be stored on storage device 40 for later access by destination device 20 for decoding and/or playback.
  • the destination device 20 includes an input interface 240, a decoder 200 and a display device 220.
  • input interface 240 includes a receiver and/or a modem.
  • the input interface 240 may receive the encoded point cloud data via the link 30 and/or from the storage device 40.
  • the display device 220 may be integrated with the destination device 20 or may be external to the destination device 20. In general, the display device 220 displays decoded point cloud data.
  • the display device 220 may include various display devices, for example, a liquid crystal display (LCD), a plasma display, an organic light-emitting diode (OLED) display, or other types of display devices.
  • In some examples, the encoder 100 and the decoder 200 may each be integrated with an audio encoder and decoder, and may include an appropriate multiplexer-demultiplexer (MUX-DEMUX) unit or other hardware and software to handle the encoding of both audio and video in a common data stream or in separate data streams.
  • the MUX-DEMUX unit may conform to the ITU H.223 multiplexer protocol, or other protocols such as user datagram protocol (UDP).
  • The encoder 100 and the decoder 200 may each be implemented as any one of various circuits, such as one or more microprocessors, digital signal processors (DSPs), application-specific integrated circuits (ASICs), field-programmable gate arrays (FPGAs), discrete logic, hardware, or any combination thereof. If the application is partially implemented in software, the device may store instructions for the software in a suitable non-volatile computer-readable storage medium, and may use one or more processors to execute the instructions in hardware, thereby implementing the technology of the present application. Any of the foregoing (including hardware, software, a combination of hardware and software, etc.) may be considered as one or more processors. Each of the encoder 100 and the decoder 200 may be included in one or more encoders or decoders, either of which may be integrated as part of a combined encoder/decoder (codec) in the corresponding device.
  • This application may generally refer to the encoder 100 "signaling" or "sending" certain information to another device such as the decoder 200.
  • The term "signaling" or "sending" may generally refer to the transmission of syntax elements and/or other data used to decode the compressed point cloud data. This transfer may occur in real time or almost in real time. Alternatively, this communication may occur after a period of time; for example, it may occur when, at the time of encoding, a syntax element is stored in the encoded bitstream to a computer-readable storage medium, and the decoding device may then retrieve the syntax element at any time after it has been stored to this medium.
  • FIG. 2 is a schematic block diagram of an encoder 100 that can be used in an example of an embodiment of the present application.
  • FIG. 2 uses the MPEG (Moving Picture Experts Group) point cloud compression (PCC) coding framework as an example.
  • The encoder 100 may include a patch information generation module 101, a packaging module 102, a depth map generation module 103, a texture map generation module 104, a filling module 105, an image or video-based encoding module 106, an occupancy map encoding module 107, an auxiliary information encoding module 108, a multiplexing module 109, and the like.
  • the encoder 100 may further include a down-sampling module 110, an up-sampling module 111, a point cloud reconstruction module 112, a point cloud filtering module 113, and the like.
  • the patch information generation module 101 is used to divide a frame of point cloud into a plurality of patches by a certain method, and obtain relevant information of the generated patch.
  • patch refers to a set of points in a frame of point cloud, usually a connected area corresponds to a patch.
  • Patch-related information may include, but is not limited to, at least one of the following: the number of patches into which the point cloud is divided, the position information of each patch in three-dimensional space, the index of the normal coordinate axis of each patch, the depth map generated by projecting each patch from three-dimensional space to two-dimensional space, the size of the depth map of each patch (such as the width and height of the depth map), and the occupancy map generated by projecting each patch from three-dimensional space to two-dimensional space.
  • Part of the patch-related information, such as the number of patches into which the point cloud is divided, the index of the normal coordinate axis of each patch, the size of the depth map of each patch, the location information of each patch in the point cloud, and the size information of the occupancy map of each patch, can be sent as auxiliary information to the auxiliary information encoding module 108 for encoding (i.e., compression encoding).
  • the depth map of the patch and the like can also be sent to the depth map generation module 103.
  • Part of the relevant information of the patch such as the occupancy map of each patch can be sent to the packaging module 102 for packaging.
  • Specifically, the patches of the point cloud are arranged in a specific order, for example, in descending (or ascending) order of the width/height of the occupancy map of each patch; then, following the order of the arranged patches, the occupancy map of each patch is inserted into the available area of the point cloud occupancy map, to obtain the occupancy map of the point cloud (a simplified sketch of this packing is given below). The resolution of the point cloud occupancy map obtained at this time is the original resolution.
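  • A deliberately simplified packing sketch is shown below, for illustration only: real packers search the free regions of the occupancy map more carefully, and the sort key, the row-by-row placement, and the assumption that every patch fits within the map width are all simplifications rather than the patent's packing method.

```python
import numpy as np

def pack_patches(patch_occupancy_maps, map_width):
    """Sort per-patch occupancy maps by height (descending) and insert them in order
    into the point cloud occupancy map, row by row."""
    patches = sorted(patch_occupancy_maps, key=lambda p: p.shape[0], reverse=True)
    canvas_h = sum(p.shape[0] for p in patches)  # generous upper bound on the needed height
    canvas = np.zeros((canvas_h, map_width), dtype=np.uint8)
    x = y = row_h = 0
    for p in patches:
        ph, pw = p.shape  # assumed: pw <= map_width
        if x + pw > map_width:  # current row is full, start a new one
            x, y, row_h = 0, y + row_h, 0
        canvas[y:y + ph, x:x + pw] = np.maximum(canvas[y:y + ph, x:x + pw], p)
        x += pw
        row_h = max(row_h, ph)
    return canvas[:y + row_h]
```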
  • FIG. 3 is a schematic diagram of a point cloud, patches of a point cloud, and an occupancy map of a point cloud applicable to the embodiments of the present application.
  • (a) in FIG. 3 is a schematic diagram of a frame of point cloud
  • (b) in FIG. 3 is a schematic diagram of a patch of a point cloud obtained based on (a) in FIG. 3,
  • (c) in FIG. 3 is a schematic diagram of the occupancy map of the point cloud, obtained by packing the occupancy maps of the patches, where the occupancy map of each patch is obtained by mapping the corresponding patch shown in (b) in FIG. 3 onto a two-dimensional plane.
  • the packaging information of the patch obtained by the packaging module 102 can be sent to the depth map generation module 103.
  • On the one hand, the occupancy map of the point cloud obtained by the packaging module 102 can be used to guide the depth map generation module 103 to generate the depth map of the point cloud and to guide the texture map generation module 104 to generate the texture map of the point cloud; on the other hand, its resolution can be reduced by the downsampling module 110 before it is sent to the occupancy map encoding module 107 for encoding.
  • The depth map generation module 103 is used to generate a depth map of the point cloud according to the occupancy map of the point cloud, the occupancy map of each patch of the point cloud, and depth information, and to send the generated depth map to the filling module 105, which fills the blank pixels in the depth map to obtain a filled depth map. The texture map generation module 104 is used to generate a texture map of the point cloud according to the occupancy map of the point cloud, the occupancy map of each patch of the point cloud, and texture information, and to send the generated texture map to the filling module 105, which fills the blank pixels in the texture map to obtain a filled texture map.
  • the filled depth map and the filled texture map are sent by the filling module 105 to the image or video-based encoding module 106 for image or video-based encoding.
  • the image or video-based encoding module 106, occupancy map encoding module 107, and auxiliary information encoding module 108 send the resulting encoding results (ie, code streams) to the multiplexing module 109 to merge into one code stream.
  • the code stream may be sent to the output interface 140.
  • In addition, the encoding result (i.e., code stream) obtained by the image or video-based encoding module 106 is sent to the point cloud reconstruction module 112 for point cloud reconstruction, to obtain a reconstructed point cloud (specifically, the geometry information of the reconstructed point cloud).
  • The reconstruction also uses the occupancy map of the original-resolution point cloud restored by the upsampling module 111 to obtain the geometry information of the reconstructed point cloud.
  • the geometric information of the point cloud refers to the coordinate values of the points in the point cloud (for example, each point in the point cloud) in three-dimensional space.
  • The "occupancy map of the point cloud" herein may be an occupancy map obtained after the point cloud is filtered (or smoothed) by the point cloud filtering module 113.
  • the up-sampling module 111 is used for up-sampling the received low-resolution point cloud occupancy map sent from the down-sampling module 110, so as to restore the original resolution point cloud occupancy map.
  • The point cloud reconstruction module 112 can also send the texture information of the point cloud and the reconstructed point cloud geometry information to a shading module, which is used to shade the reconstructed point cloud to obtain the texture information of the reconstructed point cloud.
  • the texture map generation module 104 may also generate a texture map of the point cloud based on information obtained by filtering the reconstructed point cloud geometric information via the point cloud filter module 113.
  • the encoder 100 shown in FIG. 2 is only an example, and in specific implementation, the encoder 100 may include more or fewer modules than those shown in FIG. 2. This embodiment of the present application does not limit this.
  • FIG. 4 is a schematic comparison diagram of the change process of an occupancy map of an encoder-side point cloud provided by an embodiment of the present application.
  • the occupancy map of the point cloud shown in (a) of FIG. 4 is the original occupancy map of the point cloud generated by the packaging module 102, and its resolution (that is, the original resolution) is 1280*864.
  • The occupancy map shown in (b) of FIG. 4 is the occupancy map of the low-resolution point cloud obtained after the downsampling module 110 processes the original occupancy map of the point cloud shown in (a) of FIG. 4; its resolution is 320*216.
  • The occupancy map shown in (c) of FIG. 4 is the occupancy map of the original-resolution point cloud obtained after the upsampling module 111 upsamples the occupancy map of the low-resolution point cloud shown in (b) of FIG. 4; its resolution is 1280*864.
  • FIG. 4(d) is a partially enlarged view of the ellipse region in FIG. 4(a)
  • FIG. 4(e) is a partially enlarged view of the ellipse region in FIG. 4(c).
  • the partial enlarged view shown in (e) is obtained after the partial enlarged view shown in (d) is processed by the down-sampling module 110 and the up-sampling module 111.
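  • As a quick consistency check on these example numbers (an observation added here, not a statement from the patent): 1280/320 = 4 and 864/216 = 4, i.e. in this example each pixel of the low-resolution occupancy map corresponds to a 4*4 pixel block of the original-resolution occupancy map.

```python
orig_w, orig_h = 1280, 864   # original-resolution occupancy map, (a)/(c) in FIG. 4
low_w, low_h = 320, 216      # low-resolution occupancy map, (b) in FIG. 4
assert orig_w // low_w == orig_h // low_h == 4  # 4x downsampling in each dimension
```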
  • FIG. 5 is a schematic block diagram of a decoder 200 that can be used in an example of an embodiment of the present application.
  • the decoder 200 may include a demultiplexing module 201, an image or video-based decoding module 202, an occupancy graph decoding module 203, an auxiliary information decoding module 204, a point cloud reconstruction module 205, and a point cloud filtering module 206 and the texture information reconstruction module 207 of the point cloud.
  • The decoder 200 may further include an upsampling module 208. Among them:
  • The demultiplexing module 201 is used to send the input code stream (i.e., the combined code stream) to the corresponding decoding modules. Specifically, the code stream containing the encoded texture map and the encoded depth map is sent to the image or video-based decoding module 202; the code stream containing the encoded occupancy map is sent to the occupancy map decoding module 203; and the code stream containing the encoded auxiliary information is sent to the auxiliary information decoding module 204.
  • the occupancy graph decoding module 203 is configured to decode the received code stream containing the encoded occupancy graph, and send the decoded occupancy graph information to the point cloud reconstruction module 205.
  • the occupancy graph information decoded by the occupancy graph decoding module 203 is information of the occupancy graph of the low-resolution point cloud described above.
  • the occupancy map here may be the occupancy map of the point cloud shown in (b) in FIG. 4.
  • the occupancy graph decoding module 203 may first send the decoded occupancy graph information to the upsampling module 208 for upsampling processing, and then send the occupancy graph of the original resolution point cloud obtained after the upsampling processing to the point Cloud reconstruction module 205.
  • the occupancy map of the original resolution point cloud obtained after the upsampling process may be the occupancy map of the point cloud shown in (c) in FIG. 4.
  • The point cloud reconstruction module 205 is used to reconstruct the geometry information of the point cloud according to the received occupancy map information and auxiliary information. For the specific reconstruction process, refer to the reconstruction process of the point cloud reconstruction module 112 in the encoder 100, which is not repeated here.
  • After the geometry information of the reconstructed point cloud is filtered by the point cloud filtering module 206, it is sent to the texture information reconstruction module 207 of the point cloud.
  • the point cloud texture information reconstruction module 207 is used to reconstruct the point cloud texture information to obtain a reconstructed point cloud.
  • the decoder 200 shown in FIG. 5 is only an example, and in specific implementation, the decoder 200 may include more or fewer modules than those shown in FIG. 5. This embodiment of the present application does not limit this.
  • the above-described up-sampling modules may all include an amplification unit and a marking unit. That is to say, the upsampling processing operations include enlargement operations and marking operations.
  • the enlargement unit is used to enlarge the occupancy map of the low-resolution point cloud to obtain the occupancy map of the unmarked original resolution point cloud.
  • The marking unit is used to mark the occupancy map of the unmarked original-resolution point cloud.
  • For the encoder 100, the upsampling module 111 may be connected to the auxiliary information encoding module 108 to send the target sub-candidate mode to the auxiliary information encoding module 108, so that the auxiliary information encoding module 108 encodes into the code stream the identification information indicating the sub-candidate mode adopted when performing the marking, or encodes into the code stream the identification information indicating the target processing method.
  • the up-sampling module 208 may be connected to the auxiliary information decoding module 204 for receiving the corresponding identification information obtained by the auxiliary information decoding module 204 parsing the code stream, so as to up-sample the occupancy map of the point cloud to be decoded.
  • For the relevant content of this embodiment, reference may be made to the description below, and details are not repeated here.
  • any of the following point cloud encoding methods may be performed by the source device 10 in the point cloud decoding system, and more specifically, by the encoder 100 in the source device 10; any of the following point cloud decoding methods may be performed by the destination device 20 in the point cloud decoding system, and more specifically, by the decoder 200 in the destination device 20.
  • any of the following occupancy map upsampling methods can be performed by the source device 10 or the destination device 20, specifically by the encoder 100 or the decoder 200, and more specifically by the upsampling module 111 or the upsampling module 208. This is described here in a unified manner and is not repeated below.
  • unless otherwise stated, the point cloud decoding method described below may be a point cloud encoding method or a point cloud decoding method. When the point cloud decoding method is specifically a point cloud encoding method, the point cloud to be decoded in the embodiment shown in FIG. 6 is specifically a point cloud to be encoded; when the point cloud decoding method is specifically a point cloud decoding method, the point cloud to be decoded in the embodiment shown in FIG. 6 is specifically a point cloud to be decoded. Since the embodiment of the point cloud decoding method includes the occupancy map upsampling method, the embodiments of the present application do not separately set out embodiments of the occupancy map upsampling method.
  • referring to FIG. 6, it is a schematic flowchart of a point cloud decoding method provided by an embodiment of the present application.
  • the method may include:
  • S101: enlarge the first occupancy map of the point cloud to be decoded to obtain a second occupancy map; the resolution of the first occupancy map is a first resolution, the resolution of the second occupancy map is a second resolution, and the second resolution is greater than the first resolution.
  • the first occupancy map is the marked low-resolution point cloud occupancy map.
  • the "low resolution” here is relative to the resolution of the second occupancy map.
  • the first occupancy map may be the occupancy map of the low-resolution point cloud generated by the downsampling module 110 in FIG. 2.
  • the first occupancy map may be the occupancy map of the low-resolution point cloud output by the point cloud occupancy map decoding module 203 in FIG. 5.
  • the second occupancy map is an unmarked high-resolution point cloud occupancy map.
  • the "high resolution” here is relative to the resolution of the first occupancy map.
  • the resolution of the second occupancy map may be the original resolution of the occupancy map of the point cloud to be decoded, that is, the resolution of the original occupancy map of the point cloud to be decoded generated by the packaging module 102 in FIG. 2.
  • the second occupancy map may be an occupancy map of an unmarked high-resolution point cloud generated by the upsampling module 111 in FIG. 2 or the upsampling module 208 in FIG. 5.
  • the decoder (i.e., the encoder or the decoder) can use the occupancy map of the point cloud as the granularity to enlarge the first occupancy map to obtain the second occupancy map. Specifically, by performing the enlargement step once, each pixel block of B1*B1 in the first occupancy map is enlarged into a pixel block of B2*B2.
  • the decoder may instead use the pixel block as the granularity to enlarge the first occupancy map to obtain the second occupancy map. Specifically, multiple enlargement steps are performed, so as to enlarge each pixel block of B1*B1 in the first occupancy map into a pixel block of B2*B2. Based on this example, the decoder may not perform the following S102-S103 after generating the entire second occupancy map; instead, after obtaining a B2*B2 pixel block, it may execute S102-S103 on that block as a to-be-processed pixel block.
  • the pixel block of B1*B1 refers to a square matrix of pixels with B1 rows and B1 columns, and the pixel block of B2*B2 refers to a square matrix of pixels with B2 rows and B2 columns.
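  • by way of a non-limiting illustration only, the enlargement of S101 can be sketched in Python as follows, assuming the first occupancy map is stored as a two-dimensional numpy array of 0/1 values; the function name, the array layout and the scale parameter are illustrative assumptions and not part of the method described above:

    import numpy as np

    def enlarge_occupancy_map(occ_low, scale):
        # Repeat every pixel of the low-resolution map scale*scale times, so that each
        # B1*B1 pixel block becomes a B2*B2 pixel block with B2 = B1 * scale. The result
        # is the (still unmarked) second occupancy map.
        return np.repeat(np.repeat(occ_low, scale, axis=0), scale, axis=1)

    # Example: a 2x2 first occupancy map enlarged by a factor of 4 gives an 8x8 map.
    occ_low = np.array([[1, 0],
                        [1, 1]], dtype=np.uint8)
    occ_high = enlarge_occupancy_map(occ_low, 4)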
  • S102: if the reference pixel block in the first occupancy map is a boundary pixel block in the first occupancy map, mark the pixel at the first target position in the pixel block to be processed in the second occupancy map as occupied, and/or mark the pixel at the second target position in the pixel block to be processed as unoccupied, to obtain a marked pixel block.
  • the pixel block to be processed corresponds to the reference pixel block.
  • the first occupancy map includes multiple reference pixel blocks, the multiple reference pixel blocks fill the first occupancy map, and there is no overlap between the reference pixel blocks.
  • the second occupancy map includes a plurality of pixel blocks to be processed, the plurality of pixel blocks to be processed fill the second occupancy map, and there is no overlap between the pixel blocks to be processed.
  • the reference pixel block may be the pixel block of B1*B1 in the first occupancy map.
  • the pixel block to be processed may be a B2*B2 pixel block corresponding to the reference pixel block in the second occupancy graph.
  • the pixel block to be processed is a pixel block obtained by enlarging the reference pixel block. It should be noted that, in the embodiments of the present application, the pixel block to be processed and the reference pixel block are both described as squares by way of example; in specific implementation, the pixel blocks may also be rectangular.
  • the pixel blocks in the first occupancy map can be divided into invalid pixel blocks and valid pixel blocks. If all pixels contained in a pixel block are unoccupied pixels (that is, pixels marked as "unoccupied"), the pixel block is an invalid pixel block. If at least one pixel contained in a pixel block is an occupied pixel (that is, a pixel marked as "occupied"), the pixel block is a valid pixel block.
  • valid pixel blocks include boundary pixel blocks and non-boundary pixel blocks. If all spatial neighboring pixel blocks of a valid pixel block are valid pixel blocks, the valid pixel block is a non-boundary pixel block. If at least one spatial neighboring pixel block of a valid pixel block is an invalid pixel block, the valid pixel block is a boundary pixel block.
  • the spatial neighboring pixel block of a pixel block can also be called a peripheral spatial neighboring pixel block of the pixel block, which means a pixel block that is adjacent to the pixel block and is located in one or more of the following orientations relative to the pixel block: directly above, directly below, directly to the left, directly to the right, upper left, lower left, upper right, and lower right.
  • the spatial neighboring pixel blocks of a non-edge pixel block of the occupancy map of a frame of point cloud include the 8 pixel blocks that are adjacent to the pixel block and located directly above, directly below, directly to the left of, directly to the right of, to the upper left of, to the lower left of, to the upper right of, and to the lower right of the pixel block.
  • the number of adjacent pixel blocks in the spatial domain of the edge pixel blocks of the occupancy graph of a frame of point clouds is less than 8.
  • the edge pixel blocks of the first occupancy map refer to the reference pixel blocks in the first row, the last row, the first column, and the last column in the first occupancy map.
  • the pixel blocks at other positions in the first occupancy map are non-edge pixel blocks in the first occupancy map.
  • the spatial adjacent pixel blocks involved in the specific examples below are all explained by taking non-edge pixel blocks for the point cloud occupancy map as an example. It should be understood that all the solutions in the following can be applied to the adjacent pixel blocks in the spatial domain of the edge pixel block, which will be described here in a unified manner, and will not be repeated below.
  • the decoder may determine whether the two pixel blocks are adjacent and the orientation of one of the two pixel blocks relative to the other pixel block according to the coordinates of the two pixel blocks.
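  • as a minimal sketch of the classification described above, and assuming the first occupancy map has been summarized as a two-dimensional numpy array occ_blocks in which each entry holds the number of occupied pixels of the corresponding B1*B1 reference pixel block (all names here are illustrative assumptions):

    # The eight spatial neighboring pixel blocks: upper-left, above, upper-right,
    # left, right, lower-left, below, lower-right.
    NEIGHBOR_OFFSETS = [(-1, -1), (-1, 0), (-1, 1),
                        (0, -1),           (0, 1),
                        (1, -1),  (1, 0),  (1, 1)]

    def is_valid_block(occ_blocks, i, j):
        # A pixel block is valid if it contains at least one occupied pixel.
        return occ_blocks[i, j] > 0

    def is_boundary_block(occ_blocks, i, j):
        # A valid pixel block is a boundary pixel block if at least one of its existing
        # spatial neighboring pixel blocks is an invalid pixel block.
        if not is_valid_block(occ_blocks, i, j):
            return False
        rows, cols = occ_blocks.shape
        for di, dj in NEIGHBOR_OFFSETS:
            ni, nj = i + di, j + dj
            if 0 <= ni < rows and 0 <= nj < cols and occ_blocks[ni, nj] == 0:
                return True
        return False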
  • both the first target position and the second target position represent the positions of some pixels in the pixel block to be processed. And, when S102 includes marking the pixels of the first target position as occupied and marking the pixels of the second target position as unoccupied, there is no intersection between the first target position and the second target position, and the union of the first target position and the second target position is the positions of some or all of the pixels in the pixel block to be processed.
  • S103: if the reference pixel block is a non-boundary pixel block, mark the pixels in the pixel block to be processed as occupied or as unoccupied, to obtain the marked pixel block. Specifically, when the reference pixel block is a valid pixel block, the pixels in the pixel block to be processed are marked as occupied; when the reference pixel block is an invalid pixel block, the pixels in the pixel block to be processed are marked as unoccupied.
  • the decoder may use information such as different values or value ranges to mark occupied pixels and unoccupied pixels. Or, use information such as one or more values or value ranges to mark occupied pixels, and use "no mark” to indicate unoccupied pixels. Alternatively, one or more numerical values or numerical ranges can be used to mark unoccupied pixels, and "not marked” to indicate occupied pixels. This embodiment of the present application does not limit this.
  • the decoder uses "1" to mark a pixel as an occupied pixel, and "0" to mark a pixel as an unoccupied pixel.
  • the above S102 is specifically: if the reference pixel block in the first occupancy map is a boundary pixel block, then the pixel at the first target position in the pixel block to be processed corresponding to the reference pixel block in the second occupancy map is set to 1, and/or the pixel at the second target position in the pixel block to be processed corresponding to the reference pixel block in the second occupancy map is set to 0.
  • the above S103 is specifically: if the reference pixel block is a non-boundary pixel block, then the values of pixels in the pixel block to be processed corresponding to the reference pixel block in the second occupancy map are all set to 1 or to 0.
  • the decoder can traverse each pixel block to be processed in the second occupancy map and execute S102 to S103 to obtain the marked second occupancy map; or, the decoder can traverse each reference pixel block in the first occupancy map and execute S102 to S103 to obtain the marked second occupancy map.
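  • a minimal sketch of the traversal and marking described in S102 to S103, reusing the hypothetical is_boundary_block() helper from the sketch above; occ_blocks, occ_high, the block size b2 and the rule first_target_mask() that selects the first target position are all illustrative assumptions:

    def mark_second_occupancy_map(occ_blocks, occ_high, b2, first_target_mask):
        # occ_blocks: per-reference-pixel-block occupancy counts of the first occupancy map.
        # occ_high:   unmarked second occupancy map (numpy array), modified in place.
        rows, cols = occ_blocks.shape
        for i in range(rows):
            for j in range(cols):
                block = occ_high[i * b2:(i + 1) * b2, j * b2:(j + 1) * b2]
                if occ_blocks[i, j] == 0:                  # invalid reference pixel block
                    block[:, :] = 0
                elif is_boundary_block(occ_blocks, i, j):  # boundary pixel block (S102)
                    mask = first_target_mask(occ_blocks, i, j, b2)  # True = first target position
                    block[mask] = 1                        # first target position -> occupied
                    block[~mask] = 0                       # second target position -> unoccupied
                else:                                      # non-boundary valid block (S103)
                    block[:, :] = 1
        return occ_high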
  • the above S101 to S103 can be considered as a specific implementation process of “upsampling the first occupancy map”.
  • S104 Reconstruct the point cloud to be decoded according to the marked second occupancy map.
  • the marked second occupancy map includes marked pixel blocks. For example, perform video decoding according to the encoded depth map to obtain the decoded depth map of the point cloud, and use the decoded depth map, the processed occupancy map of the point cloud and the auxiliary information of each patch to obtain the reconstructed point cloud geometry information.
  • in the point cloud decoding method provided by the embodiment of the present application, in the process of upsampling the occupancy map of the point cloud to be decoded, the pixel at a local position in the pixel block to be processed in the unmarked high-resolution occupancy map is marked as occupied, and/or the pixel at a local position in the pixel block to be processed is marked as unoccupied.
  • in this way, the occupancy map obtained by the upsampling processing is as close as possible to the original occupancy map of the point cloud to be decoded.
  • the outlier points in the reconstructed point cloud can thus be reduced, which helps to improve the performance of encoding and decoding.
  • compared with the technical solution of further processing the upsampled occupancy map to reduce the outlier points in the reconstructed point cloud, this technical solution only needs to traverse each pixel block in the unmarked high-resolution occupancy map once, and therefore can reduce the processing complexity.
  • the above S102 includes: if the reference pixel block in the first occupancy map is a boundary pixel block in the first occupancy map, and the number of valid pixel blocks among the spatial neighboring pixel blocks of the reference pixel block is greater than or equal to a preset threshold, then mark the pixel at the first target position in the pixel block to be processed corresponding to the reference pixel block in the second occupancy map as occupied, and/or mark the pixel at the second target position in the pixel block to be processed in the second occupancy map as unoccupied. Otherwise, the pixels in the pixel block to be processed may all be marked as occupied pixels or all be marked as unoccupied pixels.
  • the value of the preset threshold is not limited.
  • the preset threshold is a value greater than or equal to 4, for example, the preset threshold is equal to 6.
  • the above S102 may include the following steps S102A to S102B:
  • S102A: determine a target candidate mode suitable for the pixel block to be processed, wherein the target candidate mode is one candidate mode or a combination of multiple candidate modes; the distribution of the invalid spatial neighboring pixel blocks of the reference pixel block indicated by the target candidate mode is consistent (i.e., the same) with, or tends to be consistent (i.e., approximately the same) with, the distribution of the invalid spatial neighboring pixel blocks of the reference pixel block.
  • S102B: according to the target candidate mode, mark the pixel at the first target position as occupied, and/or mark the pixel at the second target position as unoccupied.
  • the reference pixel block is a concept introduced for the convenience of describing candidate modes. Since spatial neighboring pixel blocks are defined relative to a certain pixel block, that pixel block is referred to as the reference pixel block.
  • the distribution of the invalid spatial neighboring pixel blocks of the reference pixel block in each candidate mode indicates the orientations, relative to the reference pixel block, of one or more invalid spatial neighboring pixel blocks (that is, spatial neighboring pixel blocks that are invalid pixel blocks).
  • how many and which orientations of invalid spatial neighboring pixel blocks relative to the reference pixel block a candidate mode specifically indicates may be predefined.
  • for example, assume that the invalid spatial neighboring pixel blocks of the reference pixel block are located directly above, directly below, and directly to the right of the reference pixel block. Candidate mode A, indicating that the invalid spatial neighboring pixel blocks are located directly above, directly below, and directly to the right of the reference pixel block, may be used to characterize the distribution of the invalid spatial neighboring pixel blocks of the reference pixel block; in this case, the target candidate mode applicable to the pixel block to be processed corresponding to the reference pixel block is candidate mode A. Alternatively, candidate modes B and C, together with candidate mode D used to indicate that an invalid spatial neighboring pixel block is located directly to the right of the reference pixel block, may jointly characterize the distribution of the invalid spatial neighboring pixel blocks of the reference pixel block; in this case, the target candidate mode applicable to the pixel block to be processed corresponding to the reference pixel block is a combination of candidate modes B, C, and D. Alternatively, candidate mode E, used to indicate that an invalid spatial neighboring pixel block is located directly below the reference pixel block, and candidate mode F may jointly characterize the distribution of the invalid spatial neighboring pixel blocks of the reference pixel block; in this case, the target candidate mode applicable to the pixel block to be processed corresponding to the reference pixel block is a combination of candidate modes E and F.
  • the above examples realize that the distribution of the invalid spatial neighboring pixel blocks of the reference pixel block indicated by the target candidate mode is consistent with the distribution of the invalid spatial neighboring pixel blocks of the reference pixel block. In specific implementation, one candidate mode, or a combination of multiple candidate modes, whose indicated distribution of invalid spatial neighboring pixel blocks is closest to the distribution of the invalid spatial neighboring pixel blocks of the reference pixel block may instead be selected as the target candidate mode. In this case, the distribution of the invalid spatial neighboring pixel blocks of the reference pixel block indicated by the target candidate mode tends to be consistent with the distribution of the invalid spatial neighboring pixel blocks of the reference pixel block.
  • for example, the above candidate mode C and candidate mode D may together characterize the distribution of the invalid spatial neighboring pixel blocks of the reference pixel block.
  • This example can realize that the distribution of neighboring pixel blocks of the invalid spatial domain of the reference pixel block in the target candidate mode tends to be consistent with the distribution of neighboring pixel blocks of the invalid spatial domain of the reference pixel block.
  • S102A may include: selecting a target candidate mode suitable for the pixel block to be processed from the candidate mode set.
  • the candidate pattern set is a set composed of at least two candidate patterns. Each candidate pattern in the candidate pattern set may be predefined.
  • the candidate mode set may be composed of eight candidate modes respectively used to indicate that an invalid spatial neighboring pixel block is located directly above, directly below, directly to the left of, directly to the right of, to the upper left of, to the upper right of, to the lower left of, or to the lower right of the reference pixel block; for example, reference can be made to each candidate mode shown in FIG. 9A.
  • the decoder can select a target candidate mode suitable for the pixel block to be processed corresponding to the reference pixel block from the candidate mode set, and the target candidate mode is unique, so that the encoder does not need to encode the identification information indicating the target candidate mode of the pixel block to be processed into the code stream; therefore, the transmission overhead of the code stream can be saved.
  • the method can represent the distribution of adjacent pixel blocks in the invalid spatial domain of all possible reference pixel blocks by predefining 8 candidate modes, which is simple to implement and has high flexibility.
  • each candidate mode shown in FIG. 9B may also be included in the candidate mode set.
  • the candidate mode set may also include other candidate modes.
  • if the encoder determines multiple target candidate modes suitable for the pixel block to be processed, the encoder may encode into the code stream the identification information of the target candidate mode, among the multiple target candidate modes, that is used when performing the upsampling processing.
  • correspondingly, the decoder can obtain the target candidate mode suitable for the pixel block to be processed by parsing the code stream, without selecting the target candidate mode suitable for the pixel block to be processed from the candidate mode set. For example, suppose that the candidate mode set includes the various candidate modes shown in FIGS. 9A and 9B, and the target candidate mode can be a combination of candidate modes 1, 2, and 4, or a combination of candidate modes 2 and 9, or a combination of candidate modes 1 and 12. Based on this, the encoder needs to carry identification information in the code stream to identify which of the three combinations is used when performing the upsampling processing.
  • method 1A: according to the position distribution of the invalid pixels in the target candidate mode, mark the pixel at the first target position as occupied, and/or mark the pixel at the second target position as unoccupied.
  • the position distribution of invalid pixels in different candidate modes is used to describe the different position distribution of invalid pixels inside the pixel block.
  • the position distribution of the invalid pixels in a candidate mode is related to the orientation, relative to the reference pixel block, of the invalid spatial neighboring pixel block indicated by the candidate mode.
  • for example, the position distribution of the invalid pixels in a candidate mode may be: the invalid pixels inside a pixel block are located in the target orientation inside the pixel block.
  • the target orientation may be one or a combination of at least two of directly above, directly below, directly to the left, right to the right, upper left, upper right, lower left, and lower right.
  • when the target candidate mode includes one candidate mode: if it is determined, based on the position distribution of the invalid pixels in the candidate mode, that the pixel to be processed in the pixel block to be processed is a valid pixel, the position of the pixel to be processed is the first target position; if it is determined, based on the position distribution of the invalid pixels in the candidate mode, that the pixel to be processed in the pixel block to be processed is an invalid pixel, the position of the pixel to be processed is the second target position.
  • for example, assume that the target candidate mode is a candidate mode used to indicate that an invalid spatial neighboring pixel block is located directly above the reference pixel block, and the position distribution of the invalid pixels in the candidate mode is that the pixels in the first 2 rows of a pixel block are invalid pixels and the pixels in the other rows are valid pixels; then the positions of the pixels in the first 2 rows of the pixel block to be processed are the first target position, and the other positions are the second target position.
  • when the target candidate mode includes multiple candidate modes: if it is determined, based on the position distributions of the invalid pixels in the multiple candidate modes, that the pixel to be processed in the pixel block to be processed is a valid pixel, the position of the pixel to be processed is the first target position; if the pixel to be processed is determined to be an invalid pixel based on the position distribution of the invalid pixels in at least one of the multiple candidate modes, the position of the pixel to be processed is the second target position.
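  • a minimal sketch of this rule, assuming each candidate mode of the combination is given as a boolean numpy mask over the B2*B2 pixel block with True at the positions of the invalid pixels it describes (the function name and the mask representation are illustrative assumptions):

    import numpy as np

    def split_target_positions(invalid_masks):
        # A position belongs to the second target position if at least one candidate mode
        # of the combination declares it invalid; otherwise it belongs to the first target
        # position, following the rule stated above.
        second_target = np.zeros_like(invalid_masks[0], dtype=bool)
        for mask in invalid_masks:
            second_target |= mask
        first_target = ~second_target
        return first_target, second_target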
  • the target candidate mode is "can be used to indicate that the adjacent pixel block in the invalid spatial domain is directly above the reference pixel block" and "candidate mode a used to indicate that the adjacent pixel block in the invalid spatial domain is located directly to the left of the reference pixel block"
  • the combination of candidate mode b, and the position distribution of invalid pixels in candidate mode a is the pixels in the first 2 rows of a pixel block are invalid pixels, and the position distribution of invalid pixels in candidate mode b is the pixels in the first 2 columns of a pixel block are Invalid pixels; then the position of the pixels in the first 2 rows of the pixel block to be processed and the position of the pixels in the first 2 columns are both the first target position, and the other positions are the second target position.
  • method 1B: according to the sub-candidate mode corresponding to the target candidate mode, mark the pixel at the first target position as occupied, and/or mark the pixel at the second target position as unoccupied. Among them, different sub-candidate modes are used to describe different position distributions of invalid pixels inside the pixel block.
  • the position distribution of the invalid pixels inside the pixel block is information on the positions of the invalid pixels inside the pixel block.
  • for example, the positions of the invalid pixels inside the pixel block may be the positions that are in the pixel block and whose distance from the target valid pixel is greater than or equal to a preset threshold.
  • alternatively, the positions of the invalid pixels inside the pixel block may be the positions that are in the pixel block and whose distance from the straight line where the target valid pixel is located is greater than or equal to a preset threshold.
  • the straight line where the target valid pixel is located is related to the candidate mode. For a specific example, refer to the following.
  • the target valid pixel refers to the valid pixel with the longest distance from the valid pixel boundary, and the valid pixel boundary is the boundary between valid pixels and invalid pixels.
  • referring to FIG. 7, it is a schematic diagram of a position distribution of invalid pixels in a pixel block applicable to this example.
  • the pixel block is 4*4, and the preset threshold is 2 (that is, 2 unit distances, and one unit distance is the distance between two adjacent pixels in the horizontal or vertical direction).
  • referring to FIG. 8, it is a schematic diagram of a position distribution of invalid pixels applicable to this example. Among them, (a) in FIG. 8 is described by taking as an example the case in which the positions of the invalid pixels are the positions in the pixel block whose distance from the straight line where the target valid pixel is located is greater than or equal to the preset threshold; (b) in FIG. 8 is described by taking as an example the case in which the positions of the invalid pixels are the positions in the pixel block whose distance from the target valid pixel is greater than or equal to the preset threshold. In (b) of FIG. 8, the pixel block is 4*4, and the preset threshold is 2 (that is, 2 unit distances, where one unit distance is the distance between two adjacent pixels in the direction of a 45-degree diagonal line).
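  • purely as an illustration of the distance-based position distributions discussed above, the following sketch assumes the simple case in which the straight line where the target valid pixel is located is the bottom row of a square block (for example, when the invalid spatial neighboring pixel block lies directly above the reference pixel block); the block size, threshold and function name are assumptions for the example only:

    import numpy as np

    def invalid_positions_above(block_size, threshold):
        # Mark as invalid-pixel positions all positions whose vertical distance (in unit
        # distances) from the bottom row is greater than or equal to the preset threshold.
        rows = np.arange(block_size).reshape(-1, 1)       # row index of each position
        distance_to_bottom_row = (block_size - 1) - rows  # 0 for the bottom row
        mask = distance_to_bottom_row >= threshold
        return np.broadcast_to(mask, (block_size, block_size)).copy()

    # Example: a 4*4 block with a preset threshold of 2 treats the top two rows as
    # invalid-pixel positions.
    mask = invalid_positions_above(4, 2)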
  • the embodiment of the present application does not limit the specific embodiment of the corresponding relationship between the candidate mode and the sub-candidate mode, for example, it may be a table or a formula or logical judgment based on conditions (such as if else or switch operation, etc.).
  • the above corresponding relationship is specifically embodied in one or more tables, which is not limited in this embodiment of the present application.
  • the embodiments of the present application are described by taking the correspondence relationship specifically embodied in a table as an example.
  • referring to FIG. 9A, it is a schematic diagram of a correspondence between candidate modes and sub-candidate modes provided by an embodiment of the present application.
  • FIG. 9A includes candidate modes 1 to 8, the candidate modes correspond to the sub-candidate modes, and the pixel block to be processed is a 4*4 pixel block.
  • the sub-candidate modes corresponding to candidate modes 1 to 4 are described using the preset threshold of 2 as an example; the sub-candidate modes corresponding to candidate modes 5 to 8 are described using the preset threshold of 3 as an example.
  • each candidate mode shown in FIG. 9A can be used as a candidate mode in the candidate mode set.
  • each small square in the candidate modes of FIG. 9A represents a pixel block, where the pixel block in which the five-pointed star is located is the reference pixel block, the pixel blocks marked in black are invalid spatial neighboring pixel blocks, and this embodiment of the present application is not concerned with whether the pixel blocks marked by diagonal hatching are valid pixel blocks or invalid pixel blocks.
  • Each small square in the sub-candidate mode represents one pixel, black marked small squares represent invalid pixels, and white marked small squares represent valid pixels.
  • the pixel block described in the sub-candidate mode is an example of a pixel block of 4*4.
  • referring to FIG. 9B, it is a schematic diagram of a correspondence between other candidate modes and sub-candidate modes provided by an embodiment of the present application.
  • the explanation of each small square in FIG. 9B can refer to FIG. 9A.
  • FIG. 9B shows an example in which the candidate patterns indicate the orientations of a plurality of invalid spatial neighboring pixel blocks relative to the reference pixel block, and examples of sub-candidate patterns corresponding to each candidate pattern. Based on this, the sharp details contained in the pixel block to be processed can be processed well. In this way, the occupancy map processed by the upsampling process is closer to the original occupancy map of the point cloud, thereby improving the efficiency of encoding and decoding.
  • FIGS. 9A and 9B are only examples of the correspondence between the candidate modes and the sub-candidate modes. In specific implementation, other candidate modes and sub-candidate modes may be set as needed.
  • the same candidate mode may correspond to multiple preset thresholds, so that the same candidate mode corresponds to multiple sub-candidate modes.
  • for example, when the preset threshold is 2, the sub-candidate mode corresponding to candidate mode 7 shown in FIG. 9A can refer to (a) in FIG. 8; when the preset threshold is 3, the sub-candidate mode corresponding to candidate mode 7 shown in FIG. 9A can refer to FIG. 9A.
  • specifically, when the target candidate mode includes one candidate mode, method 1B may include: if the target candidate mode corresponds to one sub-candidate mode, the pixel at the first target position is marked as occupied, and/or the pixel at the second target position is marked as unoccupied, according to that sub-candidate mode; if the target candidate mode corresponds to multiple sub-candidate modes, then according to one of the multiple sub-candidate modes, the pixel at the first target position is marked as occupied, and/or the pixel at the second target position is marked as unoccupied.
  • in this case, the encoder may also encode identification information into the code stream, the identification information indicating the sub-candidate mode, among the multiple sub-candidate modes, that is used when performing the marking operation.
  • the decoder can also parse the code stream to obtain the identification information, and subsequently, the decoder can perform a marking operation on the pixel block to be processed based on the sub-candidate mode indicated by the identification information.
  • when the target candidate mode is a combination of multiple candidate modes, and the multiple candidate modes include a first candidate mode and a second candidate mode, the pixel at the first target position is marked as occupied, and/or the pixel at the second target position is marked as unoccupied, according to a target sub-candidate mode, where the target sub-candidate mode includes a first sub-candidate mode and a second sub-candidate mode. The first candidate mode corresponds to the first sub-candidate mode, and the second candidate mode corresponds to the second sub-candidate mode; or, the first candidate mode corresponds to multiple sub-candidate modes and the first sub-candidate mode is included in the multiple sub-candidate modes, and the second candidate mode corresponds to multiple sub-candidate modes and the second sub-candidate mode is included in the multiple sub-candidate modes; or, the first candidate mode corresponds to the first sub-candidate mode, and the second candidate mode corresponds to multiple sub-candidate modes including the second sub-candidate mode. More generally, when the target candidate mode is a combination of N candidate modes, the N sub-candidate modes corresponding to the N candidate modes jointly serve as the target sub-candidate mode, where the N sub-candidate modes respectively correspond to different candidate modes. Specifically, if it is determined, based on the position distributions of the invalid pixels described in the N sub-candidate modes, that the pixel to be processed in the pixel block to be processed is a valid pixel, the position of the pixel to be processed is the first target position; if it is determined, based on the position distribution of the invalid pixels described in at least one of the N sub-candidate modes, that the pixel to be processed is an invalid pixel, the position of the pixel to be processed is the second target position. For example, assuming that the target candidate mode used when performing the marking is a combination of candidate mode 1 and candidate mode 3 in FIG. 9A, the first target position, the second target position, and the marked pixel block may be as shown in FIG.
  • the encoder may encode the first identification information into the code stream, and the first identification information is used to represent the first sub-candidate mode.
  • the decoder can also parse the code stream to obtain the first identification information, and subsequently, the decoder can perform a marking operation on the pixel block to be processed based on the first sub-candidate pattern indicated by the first identification information.
  • the encoder may encode the second identification information into the code stream, and the second identification information is used to represent the second sub-candidate mode.
  • the decoder can also parse the code stream to obtain the second identification information. Subsequently, the decoder can perform the marking operation on the pixel block to be processed based on the second sub-candidate pattern indicated by the second identification information.
  • the code stream may include separate identification information indicating the sub-candidate mode used when performing the marking operation corresponding to each candidate mode.
  • in this case, the encoder and the decoder may predefine the order in which the identification information indicating the sub-candidate modes is included in the code stream. For example, the order of the identification information indicating the sub-candidate modes included in the code stream follows the order of the corresponding candidate modes from small to large; in this case, when the first candidate mode corresponds to multiple sub-candidate modes and the second candidate mode also corresponds to multiple sub-candidate modes, the identification information indicating the first sub-candidate mode is included in the code stream first, and the identification information indicating the second sub-candidate mode is included second.
  • the above S102 may include the following steps S102C to S102D:
  • S102C: determine the type of the pixel block to be processed.
  • specifically, based on whether the spatial neighboring pixel blocks of the reference pixel block are invalid pixel blocks, the orientation information, within the pixel block to be processed, of the invalid pixels in the pixel block to be processed corresponding to the reference pixel block is determined; wherein different types of boundary pixel blocks correspond to different orientation information.
  • the definition of adjacent pixel blocks in the spatial domain can be referred to above.
  • for example, if the spatial neighboring pixel block in the target orientation of the reference pixel block is an invalid pixel block, the invalid pixels in the pixel block to be processed are located in the target orientation inside the pixel block to be processed; the target orientation is one or a combination of at least two of directly above, directly below, directly to the left, directly to the right, upper left, upper right, lower left, and lower right.
  • S102D: according to the type of the pixel block to be processed, mark the pixel at the first target position as occupied, and/or mark the pixel at the second target position as unoccupied, by using the corresponding target processing method.
  • in an example, the first target position is the position of the invalid pixels that are in the pixel block to be processed and whose distance from the target valid pixel is greater than or equal to a preset threshold; or, the first target position is the position of the invalid pixels that are in the pixel block to be processed and whose distance from the straight line where the target valid pixel is located is greater than or equal to the preset threshold, where the straight line where the target valid pixel is located is related to the type of the pixel block to be processed. Correspondingly, the second target position is the position of the invalid pixels that are in the pixel block to be processed and whose distance from the target valid pixel is less than the preset threshold; or, the second target position is the position of the invalid pixels that are in the pixel block to be processed and whose distance from the straight line where the target valid pixel is located is less than the preset threshold.
  • different processing methods correspond to different first target positions, and/or correspond to different second target positions. If different processing methods correspond to different first target positions, the target processing method is used to mark the pixels at the first target position in the pixel block to be processed as occupied when executing S102D. If different processing methods correspond to different second target positions, the target processing method is used to mark the pixels at the second target position in the pixel block to be processed as unoccupied when executing S102D.
  • the following describes specific implementation manners of the type of the pixel block to be processed (specifically, the boundary pixel block to be processed), or of the orientation information of the invalid pixels in the pixel block to be processed. The spatial neighboring pixel blocks referred to here are the spatial neighboring pixel blocks of the reference pixel block on which determining the type of the pixel block to be processed is based; this should not be understood as meaning that the pixel block to be processed itself has these spatial neighboring pixel blocks.
  • in one case, the spatial neighboring pixel blocks of the reference pixel block include the pixel blocks adjacent to the reference pixel block and located directly above, directly below, directly to the left of, and directly to the right of the reference pixel block. If the spatial neighboring pixel block in the preset direction of the reference pixel block is an invalid pixel block, the invalid pixels in the pixel block to be processed are located in the preset direction inside the pixel block to be processed; the preset direction includes one or a combination of at least two of directly above, directly below, directly to the left, and directly to the right.
  • when the preset direction is directly above, directly below, directly to the left, or directly to the right, the corresponding types of boundary pixel blocks are called type 1, type 2, type 7, and type 8, respectively. When the preset direction is a corresponding combination of these directions, the corresponding types of boundary pixel blocks are called type 13, type 14, type 15, and type 16, respectively; for example, when the preset direction is the combination of directly above, directly to the right, directly to the left, and directly below, the corresponding type of boundary pixel block is called type 14.
  • in other cases, the invalid pixels in the pixel block to be processed are located in the upper right inside the pixel block to be processed, and the type of boundary pixel block corresponding to such orientation information may be referred to as type 3; the invalid pixels in the pixel block to be processed are located in the lower left inside the pixel block to be processed, and the corresponding type may be referred to as type 4; the invalid pixels in the pixel block to be processed are located in the upper left inside the pixel block to be processed, and the corresponding type may be referred to as type 5; the invalid pixels in the pixel block to be processed are located in the lower right inside the pixel block to be processed, and the corresponding type may be referred to as type 6.
  • in another case, the spatial neighboring pixel blocks of the reference pixel block include the pixel blocks adjacent to the reference pixel block and located directly above, directly below, directly to the left of, directly to the right of, to the upper left of, to the upper right of, to the lower left of, and to the lower right of the reference pixel block. If the spatial neighboring pixel block in the preset direction of the reference pixel block is an invalid pixel block and the other spatial neighboring pixel blocks are all valid pixel blocks, the invalid pixels in the pixel block to be processed are located in the preset direction inside the pixel block to be processed; the preset direction includes upper left, upper right, lower left, or lower right. The types of the boundary pixel blocks corresponding to the upper right, lower left, upper left, and lower right may be referred to as type 9, type 10, type 11, and type 12, respectively.
  • in another case, the spatial neighboring pixel blocks of the reference pixel block include the pixel blocks adjacent to the reference pixel block and located at the upper left, upper right, lower left, and lower right of the reference pixel block. If the spatial neighboring pixel block in the preset direction of the reference pixel block is an invalid pixel block and the other spatial neighboring pixel blocks are all valid pixel blocks, then the invalid pixels in the pixel block to be processed are located in the preset direction inside the pixel block to be processed; the preset direction includes one or a combination of at least two of upper left, upper right, lower left, and lower right.
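  • a minimal sketch of deriving the orientation information of S102C from the spatial neighboring pixel blocks, covering only the single-direction case described in the first case above; the direction keys and the returned descriptions are illustrative placeholders and do not reproduce the exact type numbering of FIG. 11 or FIG. 13:

    def classify_boundary_block(neighbor_is_invalid):
        # neighbor_is_invalid maps 'up', 'down', 'left', 'right' to booleans indicating
        # whether the spatial neighboring pixel block in that direction is invalid.
        single_direction_orientation = {
            ('up',):    'invalid pixels at the top of the block to be processed',
            ('down',):  'invalid pixels at the bottom of the block to be processed',
            ('left',):  'invalid pixels at the left of the block to be processed',
            ('right',): 'invalid pixels at the right of the block to be processed',
        }
        invalid_dirs = tuple(d for d in ('up', 'down', 'left', 'right')
                             if neighbor_is_invalid.get(d, False))
        # Combinations of several invalid directions would map to further types
        # (e.g. corner orientations); they are omitted here for brevity.
        return single_direction_orientation.get(invalid_dirs, 'other type')

    # Example: only the spatial neighboring pixel block directly above is invalid.
    orientation = classify_boundary_block({'up': True, 'down': False,
                                           'left': False, 'right': False})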
  • referring to FIG. 11, each small square in FIG. 11 represents a pixel block; the pixel block marked with a five-pointed star in the center represents the reference pixel block; a black-marked pixel block represents an invalid pixel block; a white-marked pixel block represents a valid pixel block; and the pixel blocks marked by diagonal hatching may be valid pixel blocks or invalid pixel blocks.
  • the discrimination mode diagram can be understood as a pattern used for discriminating the type of a boundary pixel block, and the schematic diagram can be understood as showing whether the spatial neighboring pixel blocks of the reference pixel block are valid pixel blocks or invalid pixel blocks.
  • in the following, p[i] represents the i-th boundary pixel block in the first occupancy map. The encoder and the decoder use the same method to process the pixel block to be processed.
  • referring to FIG. 12, it is a schematic diagram for determining a first target position and/or a second target position provided by an embodiment of the present application. Refer to FIG. 13 for the index, discrimination mode diagram, schematic diagram, and description information of the types of boundary pixel blocks such as the above types 13 to 16.
  • the embodiment of the present application further provides a schematic diagram of the first target position and/or the second target position corresponding to the determined target processing manner when the type of the boundary pixel block is type 13 to type 16, as shown in FIG. 14.
  • the pixel block to be processed is a 4*4 pixel block as an example for description.
  • the white marked part in FIG. 14 indicates the first target position, and the black marked part indicates the second target position.
  • (A) to (d) in FIG. 14 show the first target position and the second target position for types 13 to 16, respectively.
  • the first target position and the second target position shown in FIG. 14 are only examples, and they do not constitute a limitation on the first target position and the second target position when the type of the boundary pixel block provided in the embodiment of the present application is type 13 to type 16.
  • the first target position corresponding to boundary pixel block type 13 is directly below inside the pixel block, and the first target position is bump-shaped or similar to bump-shaped; the first target position corresponding to boundary pixel block type 14 is directly to the right inside the pixel block, and the first target position is bump-shaped or similar to bump-shaped when viewed from the right side of the pixel block; the first target position corresponding to boundary pixel block type 15 is directly above inside the pixel block, and the first target position is an inverted bump shape or similar to an inverted bump shape; the first target position corresponding to boundary pixel block type 16 is directly to the left inside the pixel block, and the first target position is bump-shaped or similar to bump-shaped when viewed from the left side of the pixel block. Accordingly, the second target position can be obtained.
  • the above S102D may include the step of determining the processing method corresponding to the type of the pixel block to be processed according to the mapping relationship between multiple types of boundary pixel blocks and multiple processing methods. If the type of the pixel block to be processed corresponds to one processing method, the processing method corresponding to the type of the pixel block to be processed is taken as the target processing method; or, if the type of the pixel block to be processed corresponds to multiple processing methods, one of the multiple processing methods corresponding to the type of the pixel block to be processed is taken as the target processing method.
  • the encoder and the decoder can predefine (e.g., by protocol) the mapping relationship between the various types of boundary pixel blocks and the multiple processing methods, for example, predefine the mapping relationship between the identification information of the various types of boundary pixel blocks and the identification information of the multiple processing methods.
  • both the encoder and the decoder can obtain the target processing method through the predefined mapping relationship. Therefore, in this case, the encoder does not need to send the identification information indicating the target processing mode to the decoder, which can save code stream transmission overhead.
  • the encoder may select one processing method from the multiple processing methods as the target processing method. Based on this, the encoder can encode the identification information into the code stream, and the identification information indicates the target processing mode of the pixel block to be processed.
  • the decoder can parse the code stream according to the type of pixel block to be processed to obtain the identification information.
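  • a minimal sketch of selecting the target processing method from a predefined mapping between boundary pixel block types and processing methods; the type indices and method identifiers below are illustrative assumptions, and when a type maps to several methods the chosen one is assumed to be signalled in the code stream as described above:

    # Hypothetical mapping from boundary pixel block types to processing methods.
    TYPE_TO_METHODS = {
        1: ['method_a'],               # one processing method: no signalling needed
        3: ['method_b', 'method_c'],   # several methods: index carried in the code stream
    }

    def select_target_method(block_type, signalled_index=0):
        methods = TYPE_TO_METHODS[block_type]
        if len(methods) == 1:
            return methods[0]
        # decoder side: signalled_index would be parsed from the code stream;
        # encoder side: it is the index the encoder chose and wrote to the stream.
        return methods[signalled_index]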
  • since the spatial neighboring pixel blocks of the pixel block to be processed include eight pixel blocks, based on whether each spatial neighboring pixel block is a valid pixel block or an invalid pixel block, there may be 2^8 combinations of the spatial neighboring pixel blocks of the pixel block to be processed; one or at least two of these 2^8 combinations may be used as a type, for example, the several types shown in FIG. 11 and FIG. 13. In addition, in addition to the types of boundary pixel blocks listed above, boundary pixel blocks can also be classified into other types.
  • for the decoder, it can determine whether to parse the code stream based on the type of the pixel block to be processed (specifically, the type of boundary pixel block that is coded according to the technical solution provided in the above manner 2, or the type of boundary pixel block corresponding to multiple processing methods).
  • the code stream here refers to a code stream that carries identification information of the target processing mode.
  • for example, when the decoder determines that the type of the pixel block to be processed is one of the types shown in FIG. 11 or FIG. 13, the code stream is parsed to obtain the target processing method corresponding to the type; when the type of the pixel block to be processed is neither one of the types shown in FIG. 11 nor one of the types shown in FIG. 13, the code stream is not parsed.
  • referring to FIG. 15, it is a schematic flowchart of a point cloud decoding method provided by an embodiment of the present application.
  • the method may include:
  • S201: upsample the first occupancy map of the point cloud to be decoded to obtain a second occupancy map; the resolution of the first occupancy map is a first resolution, the resolution of the second occupancy map is a second resolution, and the second resolution is greater than the first resolution.
  • wherein, if the reference pixel block in the first occupancy map is a boundary pixel block, the pixel at the first target position in the pixel block corresponding to the reference pixel block in the second occupancy map is an occupied pixel, and/or the pixel at the second target position in the pixel block corresponding to the reference pixel block in the second occupancy map is an unoccupied pixel; if the reference pixel block is a non-boundary pixel block, the pixels in the pixel block corresponding to the reference pixel block in the second occupancy map are all occupied pixels or are all unoccupied pixels.
  • S202: reconstruct the point cloud to be decoded according to the second occupancy map.
  • the "second occupancy map" in this embodiment corresponds to the "marked second occupancy map" in the embodiment shown in FIG. 6 above.
  • referring to FIG. 16, it is a schematic flowchart of a point cloud encoding method provided by an embodiment of the present application.
  • the execution subject of this embodiment may be an encoder.
  • the method may include:
  • S301: determine indication information used to indicate whether to process the occupancy map of the point cloud to be encoded according to the target encoding method.
  • the target encoding method includes any point cloud encoding method provided in the embodiments of the present application, for example, the point cloud decoding method shown in FIG. 6 or FIG. 15, where the decoding here specifically refers to encoding.
  • specifically, there may be at least two encoding methods; one of the at least two may be any point cloud encoding method provided in the embodiments of the present application, and the other may be an existing point cloud encoding method or a point cloud encoding method provided in the future.
  • the indication information may specifically be an index of the target point cloud encoding/decoding method.
  • for example, the encoder and the decoder may pre-agree on the indexes of the at least two point cloud encoding/decoding methods supported by the encoder/decoder; then, after the encoder determines the target encoding method, the index of the target encoding method or the index of the decoding method corresponding to the target encoding method is encoded into the code stream as the indication information.
  • the embodiment of the present application does not limit how the encoder determines which of the at least two encoding methods supported by the encoder is used as the target encoding method.
  • This embodiment provides a technical solution for selecting a target encoding method.
  • the technical solution can be applied to a scenario where an encoder supports at least two point cloud encoding methods.
  • referring to FIG. 17, it is a schematic flowchart of a point cloud decoding method provided by an embodiment of the present application.
  • the execution subject of this embodiment may be a decoder.
  • the method may include:
  • the target decoding method includes any point cloud decoding method provided by the embodiments of the present application, for example, the point cloud decoding method shown in FIG. 6 or FIG. 15, where the decoding here specifically refers to decoding; specifically, it is the decoding method corresponding to the encoding method described in FIG. 16.
  • the indication information is frame level information.
  • the point cloud decoding method provided in this embodiment corresponds to the point cloud encoding method provided in FIG. 16.
  • the above mainly introduces the solutions provided by the embodiments of the present application from the perspective of a method.
  • in order to implement the above functions, the encoder/decoder includes hardware structures and/or software modules corresponding to performing each function.
  • the present application can be implemented in the form of hardware or a combination of hardware and computer software. Whether a function is performed by hardware or by computer software driving hardware depends on the specific application and design constraints of the technical solution. Those skilled in the art may use different methods to implement the described functions for each specific application, but such implementation should not be considered to be beyond the scope of this application.
  • the embodiments of the present application may divide the encoder/decoder function modules according to the above method examples, for example, each function module may be divided corresponding to each function, or two or more functions may be integrated into one processing module.
  • the above integrated modules can be implemented in the form of hardware or software function modules. It should be noted that the division of the modules in the embodiments of the present application is schematic, and is only a division of logical functions. In actual implementation, there may be another division manner.
  • referring to FIG. 18, it is a schematic block diagram of a decoder 180 provided by an embodiment of the present application.
  • the decoder 180 may specifically be an encoder or a decoder.
  • the decoder 180 may include an upsampling module 1801 and a point cloud reconstruction module 1802.
  • the decoder 180 may be the encoder 100 in FIG. 2.
  • the upsampling module 1801 may be the upsampling module 111
  • the point cloud reconstruction module 1802 may be the point cloud reconstruction module 112.
  • the decoder 180 may be the decoder 200 in FIG. 5.
  • the upsampling module 1801 may be the upsampling module 208 and the point cloud reconstruction module 1802 may be the point cloud reconstruction module 205.
  • the upsampling module 1801 is used to enlarge the first occupancy map of the point cloud to be decoded to obtain a second occupancy map, where the resolution of the first occupancy map is a first resolution, the resolution of the second occupancy map is a second resolution, and the second resolution is greater than the first resolution; if the reference pixel block in the first occupancy map is a boundary pixel block, mark the pixel at the first target position in the pixel block to be processed in the second occupancy map as occupied, and/or mark the pixel at the second target position in the pixel block to be processed as unoccupied, to obtain a marked pixel block, where the pixel block to be processed corresponds to the reference pixel block; and if the reference pixel block is a non-boundary pixel block, mark the pixels in the pixel block to be processed all as occupied or all as unoccupied, to obtain the marked pixel block. The point cloud reconstruction module 1802 is used to reconstruct the point cloud to be decoded according to the marked second occupancy map; the marked second occupancy map includes the marked pixel blocks, where the pixel at the first target position in the pixel block to be processed in the second occupancy map is marked as occupied, and/or the pixel at the second target position in the pixel block to be processed is marked as unoccupied.
  • in a possible implementation, the upsampling module 1801 is specifically configured to: if the reference pixel block in the first occupancy map is a boundary pixel block, determine the target candidate mode applicable to the pixel block to be processed, wherein the target candidate mode includes one candidate mode or multiple candidate modes, and the distribution of the invalid spatial neighboring pixel blocks of the reference pixel block indicated by the target candidate mode is consistent with, or tends to be consistent with, the distribution of the invalid spatial neighboring pixel blocks of the reference pixel block; and, according to the target candidate mode, mark the pixel at the first target position as occupied, and/or mark the pixel at the second target position as unoccupied.
  • in a possible implementation, the upsampling module 1801 is specifically configured to: according to the position distribution of the invalid pixels in the target candidate mode, mark the pixels at the first target position as occupied, and/or mark the pixels at the second target position as unoccupied.
  • in a possible implementation, when the target candidate mode includes multiple candidate modes, the upsampling module 1801 is specifically used to mark the pixel at the first target position as occupied, and/or mark the pixel at the second target position as unoccupied; where, if it is determined, based on the position distributions of the invalid pixels in the multiple candidate modes, that the pixel to be processed in the pixel block to be processed is a valid pixel, the position of the pixel to be processed is the first target position; if it is determined, based on the position distribution of the invalid pixels in at least one of the multiple candidate modes, that the pixel to be processed is an invalid pixel, the position of the pixel to be processed is the second target position.
  • in a possible implementation, when the target candidate mode is one candidate mode, the upsampling module 1801 is specifically configured to: when the target candidate mode corresponds to one sub-candidate mode, mark the pixel at the first target position as occupied, and/or mark the pixel at the second target position as unoccupied, according to that sub-candidate mode; when the target candidate mode corresponds to multiple sub-candidate modes, mark the pixel at the first target position as occupied, and/or mark the pixel at the second target position as unoccupied, according to one of the multiple sub-candidate modes.
  • in a possible implementation, when the target candidate mode is a combination of multiple candidate modes including a first candidate mode and a second candidate mode, the upsampling module 1801 is specifically used to mark the pixels at the first target position as occupied, and/or mark the pixels at the second target position as unoccupied, according to a target sub-candidate mode, where the target sub-candidate mode includes a first sub-candidate mode and a second sub-candidate mode; wherein, the first candidate mode corresponds to the first sub-candidate mode and the second candidate mode corresponds to the second sub-candidate mode; or, the first candidate mode corresponds to multiple sub-candidate modes and the first sub-candidate mode is included in the multiple sub-candidate modes, and the second candidate mode corresponds to multiple sub-candidate modes and the second sub-candidate mode is included in the multiple sub-candidate modes; or, the first candidate mode corresponds to the first sub-candidate mode, and the second candidate mode corresponds to multiple sub-candidate modes including the second sub-candidate mode.
  • the to-be-coded point cloud is a to-be-encoded point cloud; referring to FIG. 19A, the decoder 180 further includes an auxiliary information encoding module 1803, configured to encode identification information into the bitstream when the target candidate mode corresponds to multiple sub-candidate modes, where the identification information indicates the sub-candidate mode used when performing the marking operation corresponding to the target candidate mode.
  • the auxiliary information encoding module 1803 may specifically be the auxiliary information encoding module 108.
  • the to-be-coded point cloud is a to-be-decoded point cloud; referring to FIG. 19B, the decoder 180 further includes an auxiliary information decoding module 1804, configured to parse the bitstream to obtain identification information, where the identification information indicates the sub-candidate mode used when performing the marking operation corresponding to the target candidate mode.
  • the auxiliary information decoding module 1804 may specifically be the auxiliary information decoding module 204.
  • the upsampling module 1801 is specifically configured to: if the reference pixel block in the first occupancy map is a boundary pixel block, determine the type of the to-be-processed pixel block; and, according to the type of the to-be-processed pixel block, use the corresponding target processing manner to mark the pixel at the first target position as occupied and/or mark the pixel at the second target position as unoccupied.
  • the upsampling module 1801 is specifically configured to: determine, based on whether each spatially neighboring pixel block of the reference pixel block is an invalid pixel block, orientation information of the invalid pixels in the to-be-processed pixel block, where different types of pixel blocks correspond to different orientation information.
  • the upsampling module 1801 is specifically configured to mark the pixel at the first target position as occupied and/or mark the pixel at the second target position as unoccupied when the number of valid pixel blocks among the spatially neighboring pixel blocks of the reference pixel block is greater than or equal to a preset threshold.
  • the upsampling module 1801 is specifically configured to: set the value of the pixel at the first target position to 1, and/or set the value of the pixel at the second target position to 0.
  • each module in the decoder 180 provided in the embodiments of this application is a functional body for implementing the execution steps included in the corresponding methods provided above, that is, a functional body capable of fully implementing each step of the image-adaptive filling/upsampling methods of this application as well as the extensions and variations of those steps; for details, refer to the descriptions of the corresponding methods above, which, for brevity, are not repeated here.
  • the decoding device 230 may include a processor 2310, a memory 2330, and a bus system 2350.
  • the processor 2310 and the memory 2330 are connected through a bus system 2350.
  • the memory 2330 is used to store instructions.
  • the processor 2310 is used to execute instructions stored in the memory 2330 to perform various point cloud decoding methods described in this application. In order to avoid repetition, they are not described in detail here.
  • the processor 2310 may be a central processing unit (CPU), or the processor 2310 may be another general-purpose processor, a DSP, an ASIC, an FPGA or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like.
  • the general-purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
  • the memory 2330 may include a ROM device or a RAM device. Any other suitable type of storage device may also be used as the memory 2330.
  • the memory 2330 may include code and data 2331 accessed by the processor 2310 using the bus system 2350.
  • the memory 2330 may further include an operating system 2333 and an application program 2335, where the application program 2335 includes at least one program that allows the processor 2310 to perform the video encoding or decoding methods described in this application (in particular, the method described in this application for filtering a current pixel block based on the block size of the current pixel block).
  • the application program 2335 may include applications 1 to N, which further include a video encoding or decoding application (referred to as a video decoding application for short) that performs the video encoding or decoding method described in this application.
  • in addition to a data bus, the bus system 2350 may also include a power bus, a control bus, and a status signal bus; however, for clarity, the various buses are all labeled as the bus system 2350 in the figure.
  • the decoding device 230 may also include one or more output devices, such as a display 2370.
  • the display 2370 may be a tactile display that merges the display with a tactile unit that operably senses touch input.
  • the display 2370 may be connected to the processor 2310 via the bus system 2350.
  • Computer readable media may include computer readable storage media, which corresponds to tangible media, such as data storage media, or communication media including any medium that facilitates transfer of a computer program from one place to another (eg, according to a communication protocol).
  • computer-readable media may generally correspond to non-transitory tangible computer-readable storage media, or communication media, such as signals or carrier waves.
  • Data storage media may be any available media that can be accessed by one or more computers or one or more processors to retrieve instructions, code and/or data structures for implementation of the techniques described in this application.
  • the computer program product may include a computer-readable medium.
  • Such computer-readable storage media may include RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, flash memory, or any other medium that can be used to store desired program code in the form of instructions or data structures and that can be accessed by a computer. Also, any connection is properly termed a computer-readable medium.
  • For example, if instructions are transmitted from a website, server, or other remote source using coaxial cable, fiber optic cable, twisted pair, digital subscriber line (DSL), or wireless technologies such as infrared, radio, and microwave, then the coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave are included in the definition of medium.
  • the computer-readable storage media and data storage media do not include connections, carrier waves, signals, or other temporary media, but are actually directed to non-transitory tangible storage media.
  • as used herein, magnetic disks and optical discs include compact discs (CDs), laser discs, optical discs, DVDs, and Blu-ray discs, where magnetic disks usually reproduce data magnetically, while optical discs reproduce data optically with lasers. Combinations of the above should also be included within the scope of computer-readable media.
  • the instructions may be executed by one or more processors, such as one or more digital signal processors (DSPs), general-purpose microprocessors, application-specific integrated circuits (ASICs), field-programmable logic arrays (FPGAs), or other equivalent integrated or discrete logic circuitry.
  • the term "processor” as used herein may refer to any of the foregoing structure or any other structure suitable for implementation of the techniques described herein.
  • the functions described in connection with the various illustrative logical blocks, modules, and steps described herein may be provided within dedicated hardware and/or software modules configured for encoding and decoding, or incorporated into a combined codec.
  • the techniques may be fully implemented in one or more circuits or logic elements. In one example, the various illustrative logical blocks, units, and modules in the encoder 100 and the decoder 200 may be understood as corresponding circuit devices or logic elements.
  • the technology of the present application may be implemented in a variety of devices or equipment, including wireless handsets, integrated circuits (ICs), or a set of ICs (eg, chipsets).
  • Various components, modules or units are described in this application to emphasize the functional aspects of the device for performing the disclosed technology, but do not necessarily need to be implemented by different hardware units.
  • the various units may be combined in a codec hardware unit in conjunction with suitable software and/or firmware, or provided by interoperating hardware units (including one or more processors as described above).

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • Computer Graphics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

This application discloses a point cloud coding method and a coder, relating to the field of encoding and decoding technologies, and helping to reduce outlier points in a reconstructed point cloud and thereby improve point cloud encoding and decoding performance. The point cloud coding method includes: enlarging a first occupancy map of a to-be-coded point cloud to obtain a second occupancy map, where the resolution of the second occupancy map is greater than the resolution of the first occupancy map; if a reference pixel block in the first occupancy map is a boundary pixel block, marking the pixel at a first target position in a to-be-processed pixel block in the second occupancy map as occupied and/or marking the pixel at a second target position in the to-be-processed pixel block as unoccupied, where the to-be-processed pixel block corresponds to the reference pixel block; if the reference pixel block is a non-boundary pixel block, marking all pixels in the to-be-processed pixel block as occupied or all as unoccupied; and reconstructing the to-be-coded point cloud according to the marked second occupancy map.

Description

点云译码方法及译码器
本申请要求于2019年01月12日提交国家知识产权局、申请号为201910029219.7、申请名称为“点云译码方法及译码器”的中国专利申请的优先权,其全部内容通过引用结合在本申请中。
技术领域
本申请涉及编解码技术领域,尤其涉及点云(point cloud)译码方法及译码器。
背景技术
随着3d传感器(例如3d扫描仪)技术的不断发展,采集点云数据越来越便捷,所采集的点云数据的规模也越来越大。面对海量的点云数据,对点云的高质量压缩、存储和传输就变得非常重要。
为了节省码流传输开销,编码器对待编码点云进行编码时,通常需要对待编码点云的原始分辨率的占用图进行下采样处理,并将经下采样处理的占用图(即低分辨率的占用图)的相关信息发送给解码器。基于此,编码器和解码器在重构点云时,需要先对经下采样处理的占用图进行上采样处理,得到原始分辨率的占用图,再基于经上采样处理的占用图重构点云。
按照传统方法,对于经下采样处理的占用图中的任意一个像素而言,如果该像素的值为1,则该像素经上采样处理后得到的像素块中的所有像素的值均为1;如果该像素的值为0,则该像素经上采样处理后得到的像素块中的所有像素的值均为0。
上述方法会导致点云经上采样处理的占用图产生锯齿状的边缘,从而导致基于经上采样处理的占用图重构得到的点云中出现较多的outlier点(即离群点或异常点),进而影响点云的编解码性能。
发明内容
本申请实施例提供了点云编解码方法及编解码器,有助于减少经重构的点云中的outlier点,从而有助于提高点云的编解码性能。
第一方面,提供了一种点云译码方法,包括:对待译码点云的第一占用图进行放大,得到第二占用图;第一占用图的分辨率是第一分辨率,第二占用图的分辨率是第二分辨率,第二分辨率大于第一分辨率;如果第一占用图中的基准像素块是边界像素块,则将第二占用图中的待处理像素块中第一目标位置的像素标记为已占用,和/或将该待处理像素块中第二目标位置的像素标记为未占用,得到经标记的像素块,其中,该待处理像素块对应于该基准像素块;如果该基准像素块是非边界像素块,则将该待处理像素块中的像素均标记为已占用或均标记为未占用,得到经标记的像素块;根据经标记的第二占用图,重构待译码点云;该经标记的第二占用图包括该经标记的像素块。本技术方案,在对待译码点云进行上采样处理的过程中,将未标记的高分辨率的点云占用图中的待处理像素块中局部位置的像素标记为已占用(例如置1),和/或将该待处理像素块中局部位置的像素标记为未占用(例如置0)。这样,有助于实现经上采样处理的占用图尽量接近待译码点云的原始占用图,相比基于传统的上采样方法, 可以减少经重构的点云中的outlier点,因此,有助于提高编解码性能。
其中,第二占用图中的每个像素均未进行标记,因此第二占用图可以认为是空占用图。
为了简化描述,本申请实施例中采用“基准像素块”表示第一占用图中的某个像素块(具体可以是第一占用图中的任意一个像素块)。其中,该某个像素块与第二占用图中的待处理像素块相对应,具体的,待处理像素块是该“基准像素块”经放大得到的像素块。另外,本申请实施例中的“标记”可以替换为“填充”,在此统一说明,下文不再赘述。
如果一个基准像素块的所有空域相邻像素块均是有效像素块,则该基准像素块是非边界像素块;如果一个基准像素块的至少一个空域相邻像素块是无效像素块,则该基准像素块是边界像素块。其中,如果基准像素块包含的至少一个像素是已占用的像素(例如置为1的像素),则该基准像素块是有效像素块,如果基准像素块包含的所有像素均是未占用的像素(例如置为0的像素),则该基准像素块是无效像素块。
第一目标位置和第二目标位置表示待处理像素块中的部分像素所在的位置。当“和/或”具体为“和”时,第一目标位置和第二目标位置之间没有交集,且第一目标位置和第二目标位置的并集是待处理像素块中的部分或全部像素所在的位置;也就是说,对于待处理像素块中的一个像素来说,该像素所在的位置可以是第一目标位置,或者可以是第二目标位置,或者可以既不是第一目标位置,也不是第二目标位置。
在一种可能的设计中,如果基准像素块是非边界像素块,则将与该基准像素块对应的待处理像素块中的像素均标记为已占用或均标记为未占用,包括:如果基准像素块是非边界像素块,且是有效像素块,则将与该基准像素块对应的待处理像素块中的像素均标记为已占用(例如置1);如果基准像素块是非边界像素块,且是无效像素块,则将与该基准像素块对应的待处理像素块中的像素均标记为未占用(例如置0)。
在一种可能的设计中,将第二占用图中的待处理像素块中第一目标位置的像素标记为已占用,和/或将待处理像素块中第二目标位置的像素标记为未占用,包括:确定适用于待处理像素块的目标候选模式,目标候选模式包括一种候选模式或多种候选模式(换言之,即目标候选模式为一种候选模式或多种候选模式的组合);根据目标候选模式,将第二占用图中的待处理像素块中第一目标位置的像素标记为已占用,和/或将该待处理像素块中第二目标位置的像素标记为未占用。
其中,目标候选模式中参考像素块(例如,以如图9A所示的五角星示意)的无效空域相邻像素块的分布与该基准像素块的无效空域相邻像素块的分布一致或实质一致或趋于一致。相应地,在一个示例中,每种候选模式中的参考像素块的无效空域相邻像素块的分布,可以表示该参考像素块的无效空域相邻像素块相对于该参考像素块的方位。
可选的,在另一种设计方式下,“目标候选模式中参考像素块的无效空域相邻像素块的分布与该基准像素块的无效空域相邻像素块的分布一致或实质一致或趋于一致”可以替换为:目标候选模式中参考像素块的有效空域相邻像素块的分布与该基准像素块的有效空域相邻像素块的分布一致或实质一致或趋于一致。相应地,在一个示例中,每种候选模式中的参考像素块的有效空域相邻像素块的分布,可以表示该参考像素块 的有效空域相邻像素块相对于该参考像素块的方位。
可选的,在不同的设计方式下,“目标候选模式中参考像素块的无效空域相邻像素块的分布与该基准像素块的无效空域相邻像素块的分布一致或实质一致或趋于一致”可以扩展为:目标候选模式中参考像素块的空域相邻像素块的分布与该基准像素块的空域相邻像素块的分布一致或实质一致或趋于一致。基于此,在一个示例中,每种候选模式中的参考像素块的空域相邻像素块的分布,可以表示该参考像素块的空域相邻像素块相对于该参考像素块的方位。
需要说明的是,下文中均是以目标候选模式中参考像素块的无效空域相邻像素块的分布与该基准像素块的无效空域相邻像素块的分布一致或实质一致或趋于一致,为例进行说明的。
其中,一种候选模式可以理解为是表示参考像素块的无效空域相邻像素块的分布的一个pattern或一个模版。“参考像素块”是为了便于描述候选模式而提出的概念。
可选的,确定适用于待处理像素块的目标候选模式,可以包括:从候选模式集中选择适用于待处理像素块的目标候选模式,其中,候选模式集包括至少两种候选模式。候选模式集可以是预定义的。
在一种可能的设计中,根据目标候选模式,将第二占用图中的待处理像素块中第一目标位置的像素标记为已占用,和/或将该待处理像素块中第二目标位置的像素标记为未占用,包括:根据目标候选模式中无效像素的位置分布,将第二占用图中的待处理像素块中第一目标位置的像素标记为已占用,和/或将该待处理像素块中第二目标位置的像素标记为未占用。
在一种可能的设计中,如果目标候选模式包括一种候选模式,则基于该候选模式中无效像素的位置分布所确定的有效像素所在的位置为第一目标位置,和/或基于该候选模式中无效像素的位置分布所确定的无效像素所在的位置为第二位置。
在一种可能的设计中,如果目标候选模式包括多种候选模式(即目标候选模式是多种候选模式的组合),则将第二占用图中的待处理像素块中第一目标位置的像素标记为已占用,和/或将该待处理像素块中第二目标位置的像素标记为未占用;其中:如果基于多种候选模式中无效像素的位置分布,均确定待处理像素块中的待处理像素为有效像素,则待处理像素所在的位置为第一目标位置;如果基于多种候选模式中的至少一种候选模式中无效像素的位置分布,确定待处理像素为无效像素,则待处理像素所在的位置为第二目标位置。
在一种可能的设计中,根据目标候选模式,将第二占用图中的待处理像素块中第一目标位置的像素标记为已占用,和/或将该待处理像素块中第二目标位置的像素标记为未占用,包括:根据目标候选模式对应的子候选模块,将第二占用图中的待处理像素块中第一目标位置的像素标记为已占用,和/或将该待处理像素块中第二目标位置的像素标记为未占用。其中,不同的子候选模式用于描述像素块内部的无效像素的不同位置分布。
在一种可能的设计中,如果目标候选模式为一种候选模式,则当目标候选模式对应于一种子候选模式时,根据该子候选模式,将第二占用图中的待处理像素块中第一目标位置的像素标记为已占用,和/或将该待处理像素块中第二目标位置的像素标记为 未占用;当目标候选模式对应于多种子候选模式时,根据该多种子候选模式中的一种子候选模式,将第二占用图中的待处理像素块中第一目标位置的像素标记为已占用,和/或将该待处理像素块中第二目标位置的像素标记为未占用。可选的,若待译码点云是待编码点云,则该方法还包括:当目标候选模式对应于多种子候选模式时,将标识信息编入码流,标识信息用于表示该多种子候选模式中的执行标记操作时所采用的子候选模式。相应的,若待译码点云是待解码点云,则该方法还包括:解析码流,以得到该标识信息。
在一种可能的设计中,如果目标候选模式包括第一候选模式和第二候选模式,则:根据目标子候选模式,将第二占用图中的待处理像素块中第一目标位置的像素标记为已占用,和/或将该待处理像素块中第二目标位置的像素标记为未占用,目标子候选模式包括第一子候选模式和第二子候选模式。其中,第一候选模式对应于第一子候选模式,且第二候选模式对应于第二子候选模式;或者,第一候选模式对应于多种子候选模式,多种子候选模式中包括第一子候选模式;且第二候选模式对应于多种子候选模式,多种子候选模式中包括第二子候选模式;或者,第一候选模式对应于第一子候选模式,且第二候选模式对应于多种子候选模式,多种子候选模式中包括第二子候选模式。可选的,若待译码点云是待编码点云,则该方法还包括:当第一候选模式对应于多种子候选模式时,将第一标识信息编入码流,第一标识信息用于表示第一子候选模式。相应的,若待译码点云是待解码点云,则该方法还包括:解析码流,以得到第一标识信息。若待译码点云是待编码点云,则该方法还包括:当第二候选模式对应于多种子候选模式时,将第二标识信息编入码流,第二标识信息用于表示第二子候选模式。相应的,若待译码点云是待解码点云,该方法还包括:解析码流,以得到第二标识信息。
在一种可能的设计中,将第二占用图中的待处理像素块中第一目标位置的像素标记为已占用,和/或将待处理像素块中第二目标位置的像素标记为未占用,包括:确定待处理像素块的类型;根据待处理像素块的类型,采用对应的目标处理方式将第二占用图中的待处理像素块中第一目标位置的像素标记为已占用,和/或将该待处理像素块中第二目标位置的像素标记为未占用。该方式实现简单。
在一种可能的设计中,确定待处理像素块的类型,包括:基于基准像素块的空域相邻像素块是否是无效像素块,确定待处理像素块中的无效像素在待处理像素块中的方位信息;其中,不同类型的像素块对应不同的方位信息。
在一种可能的设计中,将第二占用图中的待处理像素块中第一目标位置的像素标记为已占用,和/或将待处理像素块中第二目标位置的像素标记为未占用,包括:在基准像素块的空域相邻像素块中有效像素块的个数大于或等于预设阈值的情况下,将第二占用图中的待处理像素块中第一目标位置的像素标记为已占用,和/或将该待处理像素块中第二目标位置的像素标记为未占用。当基准像素块的空域相邻像素块中有效像素块的个数较多时,基准像素块周围可参考的信息较多,因此,根据基准像素块的空域相邻像素块,确定与基准像素块相对应的待处理像素块内部的无效像素的位置分布的精确度较高,这样可以使后续经上采样处理的占用图的更接近待译码点云的原始占用图,从而提高编解码性能。
第二方面,提供了一种点云译码方法,包括:对待译码点云的第一占用图进行上采样,得到第二占用图;第一占用图的分辨率是第一分辨率,第二占用图的分辨率是第二分辨率,第二分辨率大于第一分辨率;其中,若第一占用图中的基准像素块是边界像素块,则第二占用图中的与基准像素块对应的像素块中第一目标位置的像素是已占用的像素(即已标记的像素,例如填充为1的像素),和/或第二占用图中的与基准像素块对应的像素块中第二目标位置的像素是未占用的像素;若基准像素块是非边界像素块,则第二占用图中的与基准像素块对应的像素块中的像素均是已占用的像素或均是未占用的像素(即未标记的像素,例如填充为0的像素)。根据第二占用图,重构待译码点云。该方法的第二占用图是已标记的占用图。除此之外,其他术语的解释和有益效果的描述均可以参考上述第一方面。
第三方面,提供了一种点云编码方法,包括:确定指示信息,该指示信息用于指示是否按照目标点云编码方法对待编码点云的占用图进行处理;目标点云编码方法包括上述第一方面或第一方面的任一种可能的设计,或第二方面或第二方面的任一种可能的设计提供的点云译码方法(具体是点云编码方法);将该指示信息编入码流。
第四方面,提供了一种点云解码方法,包括:解析码流,以得到指示信息,该指示信息用于指示是否按照目标点云解码方法对待解码点云的占用图进行处理;目标点云解码方法包括上述第一方面或第一方面的任一种可能的设计,或第二方面或第二方面的任一种可能的设计提供的点云译码方法(具体是点云解码方法);当该指示信息指示按照目标点云解码方法进行处理时,按照目标点云解码方法对待解码点云的占用图进行处理。
第五方面,提供了一种译码器,包括:上采样模块,用于对待译码点云的第一占用图进行放大,得到第二占用图;第一占用图的分辨率是第一分辨率,第二占用图的分辨率是第二分辨率,第二分辨率大于第一分辨率;如果第一占用图中的基准像素块是边界像素块,则将第二占用图中的待处理像素块中第一目标位置的像素标记为已占用,和/或将待处理像素块中第二目标位置的像素标记为未占用,得到经标记的像素块;如果基准像素块是非边界像素块,则将待处理像素块中的像素均标记为已占用或均标记为未占用,得到经标记的像素块。该待处理像素块对应于该基准像素块。点云重构模块,用于根据经标记的第二占用图,重构待译码点云;该经标记的所述第二占用图包括该经标记的像素块。
第六方面,提供了一种译码器,包括:上采样模块,用于对待译码点云的第一占用图进行上采样,得到第二占用图;第一占用图的分辨率是第一分辨率,第二占用图的分辨率是第二分辨率,第二分辨率大于第一分辨率;其中,若第一占用图中的基准像素块是边界像素块,则第二占用图中的与基准像素块对应的像素块中第一目标位置的像素是已占用的像素,和/或第二占用图中的与基准像素块对应的像素块中第二目标位置的像素是未占用的像素;若基准像素块是非边界像素块,则第二占用图中的与基准像素块对应的像素块中的像素均是已占用的像素或均是未占用的像素。点云重构模块,用于根据第二占用图,重构待译码点云。
第七方面,提供了一种编码器,包括:辅助信息编码模块,用于确定指示信息,该指示信息用于指示是否按照目标点云编码方法对待编码点云的占用图进行处理;目 标点云编码方法包括上述第一方面或第一方面的任一种可能的设计,或第二方面或第二方面的任一种可能的设计提供的点云译码方法(具体是点云编码方法);将该指示信息编入码流。占用图处理模块,用于在该指示信息指示按照目标点云编码方法对待编码点云的占用图进行编码的情况下,按照目标点云编码方法对该待编码点云的占用图进行处理。示例的,占用图处理模块可以通过如图2所示的编码器包括的上采样模块111和点云重构模块112实现。
第八方面,提供了一种解码器,包括:辅助信息解码模块,用于解析码流,以得到指示信息,所述指示信息用于指示是否按照目标点云解码方法对待解码点云的占用图进行解码;所述目标点云解码方法包括上述第一方面或第一方面的任一种可能的设计,或第二方面或第二方面的任一种可能的设计提供的点云译码方法(具体是点云解码方法);占用图处理模块,用于当该指示信息指示按照目标点云解码方法进行处理时,按照目标点云解码方法对待解码点云的占用图进行处理。其中,占用图处理模块可以通过如图5所示的解码器包括的上采样模块208和点云重构模块205实现。
第九方面,提供了一种译码装置,包括:存储器和处理器;其中,该存储器用于存储程序代码;该处理器用于调用该程序代码,以执行上述第一方面或第一方面的任一种可能的设计,或第二方面或第二方面的任一种可能的设计提供的点云译码方法。
第十方面,提供了一种编码装置,包括:存储器和处理器;其中,该存储器用于存储程序代码;该处理器用于调用该程序代码,以执行上述第三方面提供的点云编码方法。
第十一方面,提供了一种解码装置,包括:存储器和处理器;其中,该存储器用于存储程序代码;该处理器用于调用该程序代码,以执行上述第四方面提供的点云解码方法。
本申请还提供一种计算机可读存储介质,包括程序代码,该程序代码在计算机上运行时,使得该计算机执行如上述第一方面及其可能的设计,或第二方面及其可能的设计提供的任一种占用图上采样方法。
本申请还提供一种计算机可读存储介质,包括程序代码,该程序代码在计算机上运行时,使得该计算机执行上述第三方面提供的点云编码方法。
本申请还提供一种计算机可读存储介质,包括程序代码,该程序代码在计算机上运行时,使得该计算机执行上述第四方面提供的点云编码方法。
应当理解的是,上述提供的任一种编解码器、编解码装置和计算机可读存储介质的有益效果均可以对应参考上文对应方面提供的方法实施例的有益效果,不再赘述。
附图说明
图1为可用于本申请实施例的一种实例的点云译码系统的示意性框图;
图2为可用于本申请实施例的一种实例的编码器的示意性框图;
图3为可适用于本申请实施例的一种点云、点云的patch以及点云的占用图的示意图;
图4为本申请实施例提供的一种编码端点云的占用图的变化过程的对比示意图;
图5为可用于本申请实施例的一种实例的解码器的示意性框图;
图6为本申请实施例提供的一种点云译码方法的流程示意图;
图7为本申请实施例提供的一种像素块内部无效像素的位置分布的示意图;
图8为本申请实施例提供的另一种像素块内部无效像素的位置分布的示意图;
图9A为本申请实施例提供的一种候选模式与子候选模式之间的对应关系的示意图;
图9B为本申请实施例提供的另一种候选模式与子候选模式之间的对应关系的示意图;
图10为本申请实施例提供的一种目标候选模式为多种候选模式的组合时上采样处理的示意图;
图11为本申请实施例提供的一种边界像素块的类型的索引、判别方式图、示意图以及描述信息之间的对应关系的示意图;
图12为本申请实施例提供的一种用于确定第一目标位置和/或第二目标位置的示意图;
图13为本申请实施例提供的另一种边界像素块的类型的索引、判别方式图、示意图以及描述信息之间的对应关系的示意图;
图14为本申请实施例提供的另一种用于确定第一目标位置和/或第二目标位置的示意图;
图15为本申请实施例提供的一种点云译码方法的流程示意图;
图16为本申请实施例提供的一种点云编码方法的流程示意图;
图17为本申请实施例提供的一种点云解码方法的流程示意图;
图18为本申请实施例提供的一种译码器的示意性框图;
图19A为本申请实施例提供的一种编码器的示意性框图;
图19B为本申请实施例提供的一种解码器的示意性框图;
图20为用于本申请实施例的译码设备的一种实现方式的示意性框图。
具体实施方式
本申请实施例中的术语“至少一个(种)”包括一个(种)或多个(种)。“多个(种)”是指两个(种)或两个(种)以上。例如,A、B和C中的至少一种,包括:单独存在A、单独存在B、同时存在A和B、同时存在A和C、同时存在B和C,以及同时存在A、B和C。在本申请的描述中,除非另有说明,“/”表示或的意思,例如,A/B可以表示A或B;本文中的“和/或”仅仅是一种描述关联对象的关联关系,表示可以存在三种关系,例如,A和/或B,可以表示:单独存在A,同时存在A和B,单独存在B这三种情况。“多个”是指两个或多于两个。另外,为了便于清楚描述本申请实施例的技术方案,在本申请的实施例中,采用了“第一”、“第二”等字样对功能和作用基本相同的相同项或相似项进行区分。本领域技术人员可以理解“第一”、“第二”等字样并不对数量和执行次序进行限定,并且“第一”、“第二”等字样也并不限定一定不同。
图1为可用于本申请实施例的一种实例的点云译码系统1的示意性框图。术语“点云译码”或“译码”可一般地指代点云编码或点云解码。点云译码系统1的编码器100可以根据本申请提出的任一种点云编码方法对待编码点云进行编码。点云译码系统1的解码器200可以根据本申请提出的与编码器使用的点云编码方法相对应的点云解码方法对待解码点云进行解码。
如图1所示,点云译码系统1包含源装置10和目的地装置20。源装置10产生经编码点云数据。因此,源装置10可被称为点云编码装置。目的地装置20可对由源装置10所产生的经编码的点云数据进行解码。因此,目的地装置20可被称为点云解码装置。源装置10、目的地装置20或两个的各种实施方案可包含一或多个处理器以及耦合到所述一或多个处理器的存储器。所述存储器可包含但不限于随机存取存储器(random access memory,RAM)、只读存储器(read-only memory,ROM)、带电可擦可编程只读存储器(electrically erasable programmable read-only memory,EEPROM)、快闪存储器或可用于以可由计算机存取的指令或数据结构的形式存储所要的程序代码的任何其它媒体,如本文所描述。
源装置10和目的地装置20可以包括各种装置,包含桌上型计算机、移动计算装置、笔记型(例如,膝上型)计算机、平板计算机、机顶盒、例如所谓的“智能”电话等电话手持机、电视机、相机、显示装置、数字媒体播放器、视频游戏控制台、车载计算机或其类似者。
目的地装置20可经由链路30从源装置10接收经编码点云数据。链路30可包括能够将经编码点云数据从源装置10移动到目的地装置20的一或多个媒体或装置。在一个实例中,链路30可包括使得源装置10能够实时将经编码点云数据直接发送到目的地装置20的一或多个通信媒体。在此实例中,源装置10可根据通信标准(例如无线通信协议)来调制经编码点云数据,且可将经调制的点云数据发送到目的地装置20。所述一或多个通信媒体可包含无线和/或有线通信媒体,例如射频(radio frequency,RF)频谱或一或多个物理传输线。所述一或多个通信媒体可形成基于分组的网络的一部分,基于分组的网络例如为局域网、广域网或全球网络(例如,因特网)。所述一或多个通信媒体可包含路由器、交换器、基站或促进从源装置10到目的地装置20的通信的其它设备。
在另一实例中,可将经编码数据从输出接口140输出到存储装置40。类似地,可通过输入接口240从存储装置40存取经编码点云数据。存储装置40可包含多种分布式或本地存取的数据存储媒体中的任一者,例如硬盘驱动器、蓝光光盘、数字多功能光盘(digital versatile disc,DVD)、只读光盘(compact disc read-only memory,CD-ROM)、快闪存储器、易失性或非易失性存储器,或用于存储经编码点云数据的任何其它合适的数字存储媒体。
在另一实例中,存储装置40可对应于文件服务器或可保持由源装置10产生的经编码点云数据的另一中间存储装置。目的地装置20可经由流式传输或下载从存储装置40存取所存储的点云数据。文件服务器可为任何类型的能够存储经编码的点云数据并且将经编码的点云数据发送到目的地装置20的服务器。实例文件服务器包含网络服务器(例如,用于网站)、文件传输协议(file transfer protocol,FTP)服务器、网络附属存储(network attached storage,NAS)装置或本地磁盘驱动器。目的地装置20可通过任何标准数据连接(包含因特网连接)来存取经编码点云数据。这可包含无线信道(例如,Wi-Fi连接)、有线连接(例如,数字用户线路(digital subscriber line,DSL)、电缆调制解调器等),或适合于存取存储在文件服务器上的经编码点云数据的两者的组合。经编码点云数据从存储装置40的传输可为流式传输、下载传输或两者的组合。
图1中所说明的点云译码系统1仅为实例,并且本申请的技术可适用于未必包含点云编码装置与点云解码装置之间的任何数据通信的点云译码(例如,点云编码或点云解码)装置。在其它实例中,数据从本地存储器检索、在网络上流式传输等等。点云编码装置可对数据进行编码并且将数据存储到存储器,和/或点云解码装置可从存储器检索数据并且对数据进行解码。在许多实例中,由并不彼此通信而是仅编码数据到存储器和/或从存储器检索数据且解码数据的装置执行编码和解码。
在图1的实例中,源装置10包含数据源120、编码器100和输出接口140。在一些实例中,输出接口140可包含调节器/解调器(调制解调器)和/或发送器(或称为发射器)。数据源120可包括点云捕获装置(例如,摄像机)、含有先前捕获的点云数据的点云存档、用以从点云内容提供者接收点云数据的点云馈入接口,和/或用于产生点云数据的计算机图形系统,或点云数据的这些来源的组合。
编码器100可对来自数据源120的点云数据进行编码。在一些实例中,源装置10经由输出接口140将经编码点云数据直接发送到目的地装置20。在其它实例中,经编码点云数据还可存储到存储装置40上,供目的地装置20以后存取来用于解码和/或播放。
在图1的实例中,目的地装置20包含输入接口240、解码器200和显示装置220。在一些实例中,输入接口240包含接收器和/或调制解调器。输入接口240可经由链路30和/或从存储装置40接收经编码点云数据。显示装置220可与目的地装置20集成或可在目的地装置20外部。一般来说,显示装置220显示经解码点云数据。显示装置220可包括多种显示装置,例如,液晶显示器(liquid crystal display,LCD)、等离子显示器、有机发光二极管(organic light-emitting diode,OLED)显示器或其它类型的显示装置。
尽管图1中未图示,但在一些方面,编码器100和解码器200可各自与音频编码器和解码器集成,且可包含适当的多路复用器-多路分用器(multiplexer-demultiplexer,MUX-DEMUX)单元或其它硬件和软件,以处置共同数据流或单独数据流中的音频和视频两者的编码。在一些实例中,如果适用的话,那么MUX-DEMUX单元可符合ITU H.223多路复用器协议,或例如用户数据报协议(user datagram protocol,UDP)等其它协议。
编码器100和解码器200各自可实施为例如以下各项的多种电路中的任一者:一或多个微处理器、数字信号处理器(digital signal processing,DSP)、专用集成电路(application specific integrated circuit,ASIC)、现场可编程门阵列(field-programmable gate array,FPGA)、离散逻辑、硬件或其任何组合。如果部分地以软件来实施本申请,那么装置可将用于软件的指令存储在合适的非易失性计算机可读存储媒体中,且可使用一或多个处理器在硬件中执行所述指令从而实施本申请技术。前述内容(包含硬件、软件、硬件与软件的组合等)中的任一者可被视为一或多个处理器。编码器100和解码器200中的每一者可包含在一或多个编码器或解码器中,所述编码器或解码器中的任一者可集成为相应装置中的组合编码器/解码器(编码解码器)的一部分。
本申请可大体上将编码器100称为将某些信息“发信号通知”或“发送”到例如解码器200的另一装置。术语“发信号通知”或“发送”可大体上指代用以对经压缩点云数据 进行解码的语法元素和/或其它数据的传送。此传送可实时或几乎实时地发生。替代地,此通信可经过一段时间后发生,例如可在编码时在经编码位流中将语法元素存储到计算机可读存储媒体时发生,解码装置接着可在所述语法元素存储到此媒体之后的任何时间检索所述语法元素。
如图2所示,为可用于本申请实施例的一种实例的编码器100的示意性框图。图2是以MPEG(moving picture expert group)点云压缩(point cloud compression,PCC)编码框架为例进行说明的。在图2的实例中,编码器100可以包括patch信息生成模块101、打包模块102、深度图生成模块103、纹理图生成模块104、填充模块105、基于图像或视频的编码模块106、占用图编码模块107、辅助信息编码模块108和复用模块109等。另外,编码器100还可以包括下采样模块110、上采样模块111、点云重构模块112和点云滤波模块113等。
patch信息生成模块101,用于采用某种方法将一帧点云分割产生多个patch,以及获得所生成的patch的相关信息等。其中,patch是指一帧点云中部分点构成的集合,通常一个连通区域对应一个patch。patch的相关信息可以包括但不限于以下信息中的至少一项:点云所分成的patch的个数、每个patch在三维空间中的位置信息、每个patch的法线坐标轴的索引、每个patch从三维空间投影到二维空间产生的深度图、每个patch的深度图大小(例如深度图的宽和高)、每个patch从三维空间投影到二维空间产生的占用图等。
patch的相关信息中的部分,如点云所分成的patch的个数,每个patch的法线坐标轴的索引,每个patch的深度图大小、每个patch在点云中的位置信息、每个patch的占用图的尺寸信息等,可以作为辅助信息被发送到辅助信息编码模块108,以进行编码(即压缩编码)。另外,patch的深度图等还可以被发送到深度图生成模块103。
patch的相关信息中的部分,如每个patch的占用图可以被发送到打包模块102进行打包,具体的,将该点云的各patch按照特定的顺序进行排列例如按照各patch的占用图的宽/高降序(或升序)排列;然后,按照排列后的各patch的顺序,依次将patch的占用图插入该点云占用图的可用区域中,得到该点云的占用图,此时获得的点云的占用图的分辨率是原始分辨率。
如图3所示,为可适用于本申请实施例的一种点云、点云的patch以及点云的占用图的示意图。其中,图3中的(a)图为一帧点云的示意图,图3中的(b)图为基于图3中的(a)图获得的点云的patch的示意图,图3中的(c)图为图3中的(b)图所示的各patch映射到二维平面上所得到的各patch的占用图经打包得到的该点云的占用图的示意图。
打包模块102获得的patch的打包信息如各patch在该点云占用图中的具体位置信息等可以被发送到深度图生成模块103。
打包模块102获得的该点云的占用图,一方面可以用于指导深度图生成模块103生成该点云的深度图和指导纹理图生成模块104生成该点云的纹理图。另一方面可以用于经下采样模块110降低分辨率后发送到占用图编码模块107以进行编码。
深度图生成模块103,用于根据该点云的占用图、该点云的各patch的占用图和深度信息,生成该点云的深度图,并将所生成的深度图发送到填充模块105,以对深度 图中的空白像素点进行填充,得到经填充的深度图。
纹理图生成模块104,用于根据该点云的占用图、该点云的各patch的占用图和纹理信息,生成该点云的纹理图,并将所生成的纹理图发送到填充模块105,以对纹理图中的空白像素点进行填充,得到经填充的纹理图。
经填充的深度图和经填充的纹理图被填充模块105发送到基于图像或视频的编码模块106,以进行基于图像或视频的编码。后续:
一方面,基于图像或视频的编码模块106、占用图编码模块107、辅助信息编码模块108,将所得到的编码结果(即码流)发送到复用模块109,以合并成一个码流,该码流可以被发送到输出接口140。
另一方面,基于图像或视频的编码模块106所得到的编码结果(即码流)发送到点云重构模块112进行点云重构得到经重构的点云(具体是得到重构的点云几何信息)。具体的,对基于图像或视频的编码模块106所得到的经编码的深度图进行视频解码,获得该点云的解码深度图,利用解码深度图、该点云的占用图和各patch的辅助信息,以及上采样模块111恢复出的原始分辨率的点云的占用图,获得重构的点云几何信息。其中,点云的几何信息是指点云中的点(例如点云中的每个点)在三维空间中的坐标值。应用于在本申请实施例时,这里的“该点云的占用图”可以是该点云经滤波模块113滤波(或称为平滑处理)后得到的占用图。
其中,上采样模块111用于对接收到的来自下采样模块110发送的低分辨率的点云的占用图进行上采样处理,从而恢复成原始分辨率的点云的占用图。其中,恢复得到的点云的占用图越接近真实的点云的占用图(即打包模块102生成的点云的占用图),重构得到的点云就越接近原始点云,点云编码性能会越高。
可选的,点云重构模块112还可以将该点云的纹理信息和重构的点云几何信息发送到着色模块,着色模块用于对重构点云进行着色,以获得重构点云的纹理信息。可选的,纹理图生成模块104还可以基于经点云滤波模块113对重构的点云几何信息进行滤波得到的信息生成该点云的纹理图。
可以理解的,图2所示的编码器100仅为示例,具体实现时,编码器100可以包括比图2中所示的更多或更少的模块。本申请实施例对此不进行限定。
如图4所示,为本申请实施例提供的一种编码端点云的占用图的变化过程的对比示意图。
其中,图4中的(a)图所示的点云的占用图是打包模块102生成的点云的原始占用图,其分辨率(即原始分辨率)是1280*864。
图4中的(b)图所示的占用图是下采样模块110对图4中的(a)图所示的点云的原始占用图进行处理后,得到的低分辨率的点云的占用图,其分辨率是320*216。
图4中的(c)图所示的占用图是上采样模块111对图4中的(b)图所示的低分辨率的点云的占用图进行上采样得到的原始分辨率的点云的占用图,其分辨率是1280*864。
图4中的(d)图是图4中的(a)图的椭圆区域的局部放大图,图4中的(e)图是图4中的(c)图的椭圆区域的局部放大图。并且,(e)图所示的局部放大图是(d)图所示的局部放大图经下采样模块110和上采样模块111处理之后得到的。
如图5所示,为可用于本申请实施例的一种实例的解码器200的示意性框图。其中,图5中是以MPEG PCC解码框架为例进行说明的。在图5的实例中,解码器200可以包括解复用模块201、基于图像或视频的解码模块202、占用图解码模块203、辅助信息解码模块204、点云重构模块205、点云滤波模块206和点云的纹理信息重构模块207。另外,解码器200可以包括上采样模块208。其中:
解复用模块201用于将输入的码流(即合并的码流)发送到相应解码模块。具体的,将包含经编码的纹理图的码流和经编码的深度图的码流发送给基于图像或视频的解码模块202;将包含经编码的占用图的码流发送给占用图解码模块203,将包含经编码的辅助信息的码流发送给辅助信息解码模块204。
基于图像或视频的解码模块202,用于对接收到的经编码的纹理图和经编码的深度图进行解码;然后,将解码得到的纹理图信息发送给点云的纹理信息重构模块207,将解码得到的深度图信息发送给点云重构模块205。占用图解码模块203,用于对接收到的包含经编码的占用图的码流进行解码,并将解码得到的占用图信息发送给点云重构模块205。其中,占用图解码模块203解码得到的占用图信息是上文中所描述的低分辨率的点云的占用图的信息。例如,这里的占用图可以是图4中的(b)图所示的点云的占用图。
具体的,占用图解码模块203可以先将解码得到的占用图信息发送给上采样模块208,以进行上采样处理,然后将上采样处理后得到的原始分辨率的点云的占用图发送给点云重构模块205。例如,上采样处理后得到的原始分辨率的点云的占用图可以是图4中的(c)图所示的点云的占用图。
点云重构模块205,用于根据接收到的占用图信息和辅助信息对点云的几何信息进行重构,具体的重构过程可以参考编码器100中的点云重构模块112的重构过程,此处不再赘述。经重构的点云的几何信息经点云滤波模块206滤波之后,被发送到点云的纹理信息重构模块207。点云的纹理信息重构模块207用于对点云的纹理信息进行重构,得到经重构的点云。
可以理解的,图5所示的解码器200仅为示例,具体实现时,解码器200可以包括比图5中所示的更多或更少的模块。本申请实施例对此不进行限定。
上述所描述的上采样模块(包括上采样模块111和上采样模块208),均可以包括放大单元和标记单元。也就是说,上采样处理操作包括放大操作和标记操作。其中,放大单元用于将低分辨率的点云的占用图进行放大,得到未经标记的原始分辨率的点云的占用图。标记单元用于对未经标记的原始分辨率的点云的占用图进行标记。
在本申请的一些实施例中,上采样模块111可以与辅助信息编码模块108连接,用于向辅助信息编码模块108发送目标子候选模式,以使得辅助信息编码模块108将表示执行标记时所所采用的子候选模式的标识信息编入码流,或将表示目标处理方式的标识信息编入码流。相应的,上采样模块208可以与辅助信息解码模块204连接,用于接收辅助信息解码模块204解析码流得到的相应的标识信息,从而对待解码点云的占用图进行上采样处理。关于该实施例的具体实现方式及相关说明可以参考下文,此处不再赘述。
以下,对本申请实施例提供的占用图上采样方法、点云编解码方法进行说明。需 要说明的是,结合图1所示的点云译码系统,下文中的任一种点云编码方法可以是点云译码系统中的源装置10执行的,更具体的,是由源装置10中的编码器100执行的;下文中的任一种点云解码方法可以是点云译码系统中的目的装置20执行的,更具体的,是由目的装置20中的解码器200执行的。另外,下文中的任一种占用图上采样方法均可以是由源装置10或目的装置20执行,具体的,可以由编码器100或解码器200执行,更具体的,可以由上采样模块111执行或上采样模块208执行。在此统一说明,下文不再赘述。
为了描述上的简洁,如果不加说明,下文中描述的点云译码方法可以包括点云编码方法或点云解码方法。当点云译码方法具体是点云编码方法时,图6所示的实施例中的待译码点云具体是待编码点云;当点云译码方法具体是点云解码方法时,图6所示的实施例中的待译码点云具体是待解码点云。由于点云译码方法的实施例中包含占用图上采样方法,因此,本申请实施例不单独布局占用图上采样方法的实施例。
如图6所示,为本申请实施例提供的一种点云译码方法的流程示意图。该方法可以包括:
S101:对待译码点云的第一占用图进行放大,得到第二占用图;第一占用图的分辨率是第一分辨率,第二占用图的分辨率是第二分辨率,第二分辨率大于第一分辨率。
第一占用图,是已标记的低分辨率的点云占用图。这里的“低分辨率”是相对第二占用图的分辨率而言的。当待译码点云是待编码点云时,第一占用图可以是图2中的下采样模块110生成的低分辨率的点云的占用图。当待译码点云是待解码点云时,第一占用图可以是图5中的点云占用图解码模块203输出的低分辨率的点云的占用图。
第二占用图,是未标记的高分辨率的点云占用图。这里的“高分辨率”是相对第一占用图的分辨率而言的。其中,第二占用图的分辨率可以是待译码点云的占用图的原始分辨率,即图2中的打包模块102生成的待译码点云的原始占用图的分辨率。第二占用图可以是图2中的上采样模块111或图5中的上采样模块208生成的未标记的高分辨率的点云的占用图。
在一种实现方式中,译码器(即编码器或解码器)可以以点云的占用图为粒度,对第一占用图进行放大,得到第二占用图。具体的,执行一次放大步骤,即可将第一占用图中的每个B1*B1的像素块,放大成B2*B2的像素块。
在另一种实现方式中,译码器可以以像素块为粒度,对第一占用图进行放大,得到第二占用图。具体的,执行多次放大步骤,从而将第一占用图中的每个B1*B1的像素块,放大成B2*B2的像素块。基于该示例,译码器可以不在生成第二占用图之后,执行以下S102~S103,而是,在得到一个B2*B2的像素块之后,就将其作为一个待处理像素块执行S102~S103。
其中,B1*B1的像素块是指B1行B1列的像素构成的方阵。B2*B2的像素块是指B2行B2列的像素构成的方阵。B1<B2。通常,B1和B2均是2的整数次幂。例如,B1=1或2等。又如,B2=2、4、8或16等。
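The enlargement step described in the two preceding paragraphs simply turns every B1×B1 block of the low-resolution occupancy map into a B2×B2 block of the (still unmarked) high-resolution map. A minimal sketch of this step is given below, assuming the occupancy map is held as a NumPy array and that B2 is a multiple of B1; the function name and array layout are illustrative only and not part of the specification.

```python
import numpy as np

def enlarge_occupancy_map(low_res_map: np.ndarray, b1: int, b2: int) -> np.ndarray:
    """Enlarge a low-resolution occupancy map so that every B1*B1 block
    becomes a B2*B2 block in the high-resolution map.

    This is only the spatial enlargement: whether each pixel of the result is
    finally marked occupied (1) or unoccupied (0) is decided later, per block.
    """
    assert b2 % b1 == 0, "B2 is assumed to be a multiple of B1 (both powers of two)"
    scale = b2 // b1
    # Repeat every pixel 'scale' times in both dimensions (nearest-neighbour enlargement).
    return np.repeat(np.repeat(low_res_map, scale, axis=0), scale, axis=1)

# Example: a 320x216 low-resolution map with B1 = 1 and B2 = 4 becomes a
# 1280x864 map, matching the resolutions mentioned for Fig. 4.
low_res = np.zeros((216, 320), dtype=np.uint8)
high_res = enlarge_occupancy_map(low_res, b1=1, b2=4)
print(high_res.shape)  # (864, 1280)
```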
S102:若第一占用图中的基准像素块是第一占用图的边界像素块,则将第二占用图中的待处理像素块中第一目标位置的像素标记为已占用,和/或将该待处理像素块中第二目标位置的像素标记为未占用,得到经标记的像素块。其中,该待处理像素块对 应于该基准像素块。
第一占用图包括多个基准像素块,该多个基准像素块占满第一占用图,且基准像素块之间无交叠。第二占用图包括多个待处理像素块,该多个待处理像素块占满第二占用图,且待处理像素块之间无交叠。
作为一个示例,基准像素块可以是第一占用图中的B1*B1的像素块。待处理像素块可以是第二占用图中的对应于该基准像素块的B2*B2的像素块,具体的,待处理像素块是该基准像素块经放大得到的像素块。需要说明的是,本申请实施例中是以待处理像素块和基准像素块均是方形为例进行说明的,可扩展的,这些像素块均是矩形。
第一占用图中的像素块可以分为无效像素块和有效像素块。如果一个像素块包含的所有像素均是未占用的像素(即标记为“未占用”的像素),则该像素块是无效像素块。如果一个像素块包含的至少一个像素是已占用的像素(即标记为“已占用”的像素),则该像素块是有效像素块。
有效像素块包括边界像素块和非边界像素块。如果一个有效像素块的所有空域相邻像素块均是有效像素块,则该有效像素块是非边界像素块。如果一个有效像素块的至少一个空域相邻像素块是无效像素块,则该有效像素块是边界像素块。
其中,一个像素块的空域相邻像素块,也可以称为一个像素块的周边空域相邻像素块,是指与该像素块相邻的,且位于该像素块的正上方、正下方、正左方、正右方、左上方、左下方、右上方和右下方的一个或多个方位的像素块。
可以理解的是,一帧点云的占用图的非边缘像素块的空域相邻像素块包括与该像素块相邻的,且位于该像素块的正上方、正下方、正左方、正右方、左上方、左下方、右上方和右下方的8个像素块。一帧点云的占用图的边缘像素块的空域相邻像素块的个数小于8个。以第一占用图为例,第一占用图的边缘像素块是指第一占用图中的第1行、最后1行、第1列和最后1列的基准像素块。第一占用图中的其他位置的像素块是第一占用图的非边缘像素块。为了便于描述,下文中的具体示例中所涉及到的空域相邻像素块,均是以针对点云的占用图的非边缘像素块为例进行说明的。应理解,下文中的所有方案均可适用于边缘像素块的空域相邻像素块,在此统一说明,下文不再赘述。另外,具体实现的过程中,译码器可以根据两个像素块的坐标,确定这两个像素块是否相邻以及这两个像素块中的一个像素块相对另一个像素块的方位。
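As an illustration of the classification described in the preceding paragraphs, the helper below decides whether one block of the first occupancy map is an invalid block, a valid non-boundary block, or a valid boundary block by inspecting its up-to-eight spatially neighboring blocks. It assumes the first occupancy map has been reduced to a per-block validity array (one Boolean per B1×B1 block); this representation and the function name are assumptions made for the sketch, not the normative procedure.

```python
import numpy as np

def classify_block(block_valid: np.ndarray, i: int, j: int) -> str:
    """Classify block (i, j) of the first occupancy map.

    block_valid[i, j] is True when the B1*B1 block contains at least one
    occupied pixel (a "valid" block) and False otherwise (an "invalid" block).

    Returns one of: "invalid", "non_boundary", "boundary".
    A valid block is a boundary block when at least one of its existing
    spatially neighboring blocks (up to eight of them) is invalid.
    """
    if not block_valid[i, j]:
        return "invalid"
    h, w = block_valid.shape
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            if di == 0 and dj == 0:
                continue
            ni, nj = i + di, j + dj
            # Edge blocks of the occupancy map simply have fewer neighbors.
            if 0 <= ni < h and 0 <= nj < w and not block_valid[ni, nj]:
                return "boundary"
    return "non_boundary"
```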
第一目标位置和第二目标位置均表示待处理像素块中的部分像素所在的位置。并且,当S102包括将第一目标位置的像素标记为已占用和将第二目标位置的像素标记为未占用时,第一目标位置和第二目标位置之间没有交集,且第一目标位置和第二目标位置的并集是待处理像素块中的部分或全部像素所在的位置。
S103:若基准像素块是第一占用图的非边界像素块,则将待处理像素块中的像素均标记为已占用或均标记为未占用,得到经标记的像素块。
具体的,当基准像素块是有效像素块时,将该待处理像素块中的像素均标记为已占用;当基准像素块是无效像素块时,将该待处理像素块中的像素均标记为未占用。
为了区分已占用的像素和未占用的像素,可选的,译码器可以使用不同的数值或数值范围等信息来标记已占用的像素和未占用的像素。或者,使用一个或多个数值或数值范围等信息来标记已占用的像素,而用“不标记”来表示未占用的像素。或者,使 用一个或多个数值或数值范围来等信息标记未占用的像素,而用“不标记”来表示已占用的像素。本申请实施例对此不进行限定。
例如,译码器使用“1”来标记一个像素是已占用的像素,使用“0”来标记一个像素是未占用的像素。基于该示例,上述S102具体为:若第一占用图中的基准像素块是边界像素块,则将第二占用图中的与该基准像素块对应的待处理像素块中第一目标位置的像素的值置1,和/或将第二占用图中的与该基准像素块对应的待处理像素块中第二目标位置的像素置0。上述S103具体为:若该基准像素块是非边界像素块,则将第二占用图中的与该基准像素块对应的待处理像素块中的像素的值均置1或均置0。
具体实现时,译码器可以遍历第二占用图中的每个待处理像素块,从而执行S102~S103,以得到经标记的第二占用图;或者,译码器可以遍历第一占用图中的每个基准像素块,从而执行S102~S103,以得到经标记的第二占用图。
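Putting S101–S103 together, the traversal described in the preceding paragraph could look like the sketch below. The boundary-block branch delegates to a `mark_boundary_block` routine that determines the first and second target positions by one of the methods described later (candidate modes or block types); that routine, the per-block validity array, and the helper names are assumptions for illustration only.

```python
import numpy as np

def upsample_and_mark(block_valid: np.ndarray,
                      b2: int,
                      classify_block,
                      mark_boundary_block) -> np.ndarray:
    """Build the marked second occupancy map from per-block validity information.

    block_valid        : 2-D bool array, one entry per B1*B1 block of the first map.
    classify_block     : callable (block_valid, i, j) -> "invalid" | "non_boundary" | "boundary".
    mark_boundary_block: callable (block_valid, i, j, b2) -> (b2, b2) uint8 array with the
                         first target positions set to 1 and the second target positions to 0.
    """
    h, w = block_valid.shape
    second_map = np.zeros((h * b2, w * b2), dtype=np.uint8)
    for i in range(h):
        for j in range(w):
            kind = classify_block(block_valid, i, j)
            if kind == "non_boundary":
                # S103: a valid non-boundary block is filled entirely with 1,
                # an invalid block entirely with 0.
                block = np.ones((b2, b2), dtype=np.uint8)
            elif kind == "invalid":
                block = np.zeros((b2, b2), dtype=np.uint8)
            else:
                # S102: boundary block, mark only the target positions.
                block = mark_boundary_block(block_valid, i, j, b2)
            second_map[i * b2:(i + 1) * b2, j * b2:(j + 1) * b2] = block
    return second_map
```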
上述S101~S103可以认为是“对第一占用图进行上采样处理”的具体实现过程。
S104:根据经标记的第二占用图,重构待译码点云。其中,经标记的第二占用图包括经标记的像素块。例如,根据经编码的深度图进行视频解码,获得该点云的解码深度图,利用解码深度图、该点云的经处理过的占用图和各patch的辅助信息,获得重构的点云几何信息。
本申请实施例提供的点云译码方法,在对待译码点云进行上采样处理的过程中,将未标记的高分辨率的点云占用图中的待处理像素块中局部位置的像素标记为已占用,和/或将该待处理像素块中局部位置的像素标记为未占用。这样,有助于实现经上采样处理的占用图尽量接近待译码点云的原始占用图,相比基于传统的上采样方法,可以减少经重构的点云中的outlier点,因此,有助于提高编解码性能。另外,相比对经上采样的占用图进一步处理从而减少经重构的点云中的outlier点的技术方案,本技术方案只需要对未标记的高分辨率的占用图中的各像素块进行一次遍历,因此,可以降低处理复杂度。
可选的,上述S102包括:若第一占用图中的基准像素块是第一占用图的边界像素块,且基准像素块的空域相邻像素块中有效像素块的个数大于或等于预设阈值,则将第二占用图中的与基准像素块对应的待处理像素块中第一目标位置的像素标记为已占用,和/或将第二占用图中的待处理像素块中第二目标位置的像素标记为未占用。另外,若基准像素块的空域相邻像素块中有效像素块的个数小于该预设阈值,则可以将该待处理像素块中的像素均标记为已占用的像素或均标记为未占用的像素。
其中,本申请实施例对该预设阈值的取值不进行限定,例如,该预设阈值是大于或等于4的值,如该预设阈值等于6。
可以理解的是,当基准像素块的空域相邻像素块中有效像素块的个数较多时,该基准像素块周围可参考的信息较多,也就是说,根据该基准像素块的空域相邻像素块,确定与该基准像素块相对应的待处理像素块内部的无效像素的位置分布的精确度较高,这样可以使经上采样处理的占用图的更接近待译码点云的原始占用图,从而提高编解码性能。
以下,从另一角度说明上述S102的具体实现方式。
方式1:上述S102可以包括如下步骤S102A~S102B:
S102A:确定适用于待处理像素块的目标候选模式,其中,目标候选模式为一种候选模式或多种候选模式的组合;目标候选模式中参考像素块的无效空域相邻像素块的分布与该基准像素块的无效空域相邻像素块的分布一致(即相同)或趋于一致(即近似相同)。
S102B:根据目标候选模式,将第一目标位置的像素标记为已占用,和/或将第二目标位置的像素标记为未占用。
“参考像素块”是为了便于描述候选模式而提出的概念,由于空域相邻像素块是相对某一个像素块而言的,因此,将该像素块定义为参考像素块。
每种候选模式中的参考像素块的无效空域相邻像素块的分布,表示一个或多个无效空域相邻像素块(即为无效像素块的空域相邻像素块)相对于该参考像素块的方位。一个候选模式具体表示几个及哪几个无效空域相邻像素块相对于该参考像素块的方位可以是预定义的。
作为一个示例,假设基准像素块的无效空域相邻像素块的分布为该基准像素块的无效空域相邻像素位于该基准像素块的正上方、正下方和正右方,那么:可以通过“用于表示无效空域相邻像素块位于参考像素块的正上方、正下方和正右方”的候选模式A来表征该基准像素块的无效空域相邻像素块的分布;该情况下,适用于该基准像素块对应的待处理像素块的目标候选模式为候选模式A。或者,可以通过“用于表示无效空域相邻像素块位于参考像素块的正上方”的候选模式B、“用于表示无效空域相邻像素块位于参考像素块的正下方”的候选模式C,以及“用于表示无效空域相邻像素块位于参考像素块的正右方”的候选模式D共同来表征基准像素块的无效空域相邻像素块的分布;该情况下,适用于该基准像素块对应的待处理像素块的目标候选模式为候选模式B、C、D的组合。或者,可以通过“用于表示无效空域相邻像素块位于参考像素块的正下方”的候选模式E,以及“用于表示无效空域相邻像素块位于参考像素块的正上方和正右方”的候选模式F共同来表征基准像素块的无效空域相邻像素块的分布;该情况下,适用于该基准像素块对应的待处理像素块的目标候选模式为候选模式E、F的组合。本示例可以实现目标候选模式中参考像素块的无效空域相邻像素块的分布与该基准像素块的无效空域相邻像素块的分布一致。
可以理解的是,由于可能存在预定义的多种候选模式中不能选择与基准像素块的无效空域相邻像素块的分布完全一致的目标候选模式,此时,可以选择最接近该基准像素块的无效空域相邻像素块的分布的一种候选模式或多种候选模式的组合作为目标候选模式,该情况下,目标候选模式中参考像素块的无效空域相邻像素块的分布与该基准像素块的无效空域相邻像素块的分布趋于一致。
例如,若基准像素块的无效空域相邻像素块的分布为该基准像素块的无效空域相邻像素位于该基准像素块的正上方、正下方和正右方,则可以通过上述候选模式C和候选模式D共同来表征该基准像素块的无效空域相邻像素块的分布。本示例可实现目标候选模式中参考像素块的无效空域相邻像素块的分布与该基准像素块的无效空域相邻像素块的分布趋于一致。
可选的,S102A可以包括:从候选模式集中选择适用于待处理像素块的目标候选模式。候选模式集是至少两种候选模式构成的集合。候选模式集中的各候选模式可以 是预定义的。
在一种实现方式中,候选模式集可以由分别用于表示无效空域相邻像素块位于参考像素块的正上方、正下方、正左方、正右方、左上方、右上方、左下方或右下方的8种候选模式构成,例如可以参考图9A所示的各候选模式。基于此,对于任意一个基准像素块来说,译码器可以从候选模式集中选择出适用于该基准像素块对应的待处理像素块的目标候选模式,且该目标候选模式是唯一的,这样,编码器不需要将表示该待处理像素块的目标候选模式的标识信息编入码流,因此,可以节省码流传输开销。另外,该方法通过预定义8种候选模式,即可表示所有可能的基准像素块的无效空域相邻像素块的分布,实现简单,且灵活度高。
在另一种实现方式中,候选模式集中可以包括如图9B所示的各候选模式,当然,该候选模式集中还可以包括其他的候选模式。
需要说明的是,对于一个待处理像素块来说,执行上采样处理时仅采用一种目标候选模式,基于此,如果在预定义的候选模式集中,编码器能够确定出适用于该待处理像素块的多种目标候选模式,则编码器可以将该多种目标候选模式中的执行上采样处理时所采用的一种目标候选模式的标识信息编入码流。相应地,对于解码器来说,可以通过解析码流得到适用于该待处理像素块的目标候选模式,而不需要从候选模式集中选择适用于该待处理像素块的目标候选模式。例如,假设候选模式集包括图9A和图9B所示的各种候选模式,且基准像素块的无效空域相邻像素块的分布为该基准像素块的无效空域相邻像素位于该基准像素块的正上方、正下方和正右方,那么:目标候选模式可以是候选模式1、2、4的组合,也可以是候选模式2和9的组合,还可以是候选模式1、12的组合。基于此,编码器需要在码流中携带标识信息,以标识执行上采样处理时所采用的是这3种组合中的哪一种。
为了便于描述,下文中均以对于一个待处理像素块来说,从所预定义的候选模式集只能确定唯一的目标候选模式为例进行说明,在此统一说明,下文不再赘述。
上述S102A可以通过以下方式1A或方式1B实现:
方式1A:根据目标候选模式中无效像素的位置分布,将第一目标位置的像素标记为已占用,和/或将第二目标位置的像素标记为未占用。
不同候选模式中无效像素的位置分布,用于描述像素块内部的无效像素的不同位置分布。
作为一个示例,一种候选模式中无效像素的位置分布,与该候选模式所表示的无效空域相邻像素块位于参考像素块的方位相关。具体的,假设一个候选模式所表示的无效空域相邻像素块位于参考像素块的目标方位,则该候选模式中无效像素的位置分布可以是:一个像素块内部的无效像素在该像素块的目标方位。该目标方位可以是正上方、正下方、正左方、正右方、左上方、右上方、左下方和右下方中的其中一种或至少两种的组合。
当目标候选模式包括一种候选模式时,如果基于该候选模式中无效像素的位置分布,确定待处理像素块中的待处理像素为有效像素,则待处理像素所在的位置为第一目标位置;如果基于该候选模式中无效像素的位置分布,确定待处理像素块中的待处理像素为无效像素,则待处理像素所在的位置为第二目标位置。
例如,如果目标候选模式是“用于表示无效空域相邻像素块位于参考像素块的正上方”的候选模式,且该候选模式中无效像素的位置分布是一个像素块的前2行的像素是无效像素,其他行的像素是有效像素,则待处理像素块的前2行的像素所在的位置为第一目标位置,其他位置是第二目标位置。
当目标候选模式包括多种候选模式时,如果基于多种候选模式中无效像素的位置分布,均确定待处理像素块中的待处理像素为有效像素,则待处理像素所在的位置为第一目标位置;如果基于多种候选模式中的至少一种候选模式中无效像素的位置分布,确定待处理像素为无效像素,则待处理像素所在的位置为第二目标位置。
例如,如果目标候选模式是“用于表示无效空域相邻像素块位于参考像素块的正上方”的候选模式a和“用于表示无效空域相邻像素块位于参考像素块的正左方”的候选模式b的组合,且候选模式a中无效像素的位置分布是一个像素块的前2行的像素是无效像素,候选模式b中无效像素的位置分布是一个像素块的前2列的像素是无效像素;则待处理像素块的前2行的像素所在的位置和前2列的像素所在的位置均为第一目标位置,其他位置是第二目标位置。
方式1B:根据目标候选模式对应的子候选模块,将第一目标位置的像素标记为已占用,和/或将第二目标位置的像素标记为未占用。其中,不同的子候选模式用于描述像素块内部的无效像素的不同位置分布。
像素块内部的无效像素的位置分布是像素块内部的无效像素所在的位置的信息。
在一种实现方式中,像素块内部的无效像素所在的位置,可以是该像素块中的且与目标有效像素之间的距离大于或等于预设阈值的位置。
在另一种实现方式中,像素块内部的无效像素所在的位置,可以是该像素块中的且与目标有效像素所在的直线之间的距离大于或等于预设阈值的位置。其中,目标有效像素所在的直线与候选模式相关,具体示例可以参考下文。
其中,目标有效像素,是指与有效像素边界的距离最远的有效像素,有效像素边界为有效像素与无效像素的界限。
例如,若像素块中的无效像素在该像素块内部的正上方,则有效像素在该像素块内部的正下方,目标有效像素是该像素块中的最下方一行的像素。如图7所示,为可适用于该示例的一种像素块内部无效像素的位置分布的示意图。图7中,像素块是4*4,且预设阈值是2(即2个单位距离,一个单位距离是水平或竖直方向上相邻两个像素之间的距离)。
再如,若像素块中的无效像素在该像素块内部的左下方,则有效像素在该像素块内部的右上方,目标有效像素是该像素块中的最右上方的一个或多个像素。如图8所示,为可适用于该示例的一种无效像素的位置分布的示意图。其中,图8中的(a)图是以无效像素的位置是像素块中的,且与目标有效像素所在的直线之间的距离大于或等于预设阈值的无效像素所在的位置为例进行说明,图8中的(b)图是以无效像素的位置是像素块中的,且与目标有效像素之间的距离大于或等于预设阈值的无效像素所在的位置为例进行说明的。并且,图8中,像素块是4*4,且预设阈值是2(即2个单位距离,其中一个单位距离是是45度斜线方向上相邻两个像素之间的距离)。
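For the "distance to the line formed by the target valid pixels" variant illustrated by Fig. 7 (a 4×4 block whose invalid pixels lie toward the top, so that the target valid pixels are the bottom row, with a preset threshold of 2), the invalid positions can be computed as in the sketch below. This is only an illustration under those stated assumptions; the diagonal case of Fig. 8 would use a different distance definition.

```python
import numpy as np

B2 = 4
THRESHOLD = 2  # preset threshold, in unit distances between neighboring pixels

# Candidate mode: the invalid spatially neighboring block lies directly above,
# so invalid pixels sit toward the top of the block and the target valid
# pixels form the bottom row (row index B2 - 1).
rows = np.arange(B2).reshape(B2, 1)           # row index of every pixel
distance_to_bottom_row = (B2 - 1) - rows      # distance to the line of target valid pixels
invalid_mask = np.broadcast_to(distance_to_bottom_row >= THRESHOLD, (B2, B2))

print(invalid_mask.astype(np.uint8))
# [[1 1 1 1]
#  [1 1 1 1]
#  [0 0 0 0]
#  [0 0 0 0]]
# i.e. the first two rows are treated as invalid pixels (second target position)
# and the remaining rows as valid pixels (first target position).
```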
本申请实施例对候选模式和子候选模式之间的对应关系的具体体现形式不进行限 定,例如可以是表格或者是公式或者是根据条件进行逻辑判断(如if else或者switch操作等)等。上述对应关系具体体现在一个或多个表格中,本申请实施例对此不进行限定。为了便于描述,本申请实施例以对应关系具体体现在一个表格中为例进行说明。
如图9A所示,为本申请实施例提供的一种候选模式与子候选模式之间的对应关系的示意图。图9A中包括候选模式1~8,候选模式与子候选模式一一对应,且待处理像素块是4*4的像素块。其中,候选模式1~4所对应的子候选模式是以预设阈值是2为例进行说明的;候选模式5~8所对应的子候选模式是以预设阈值是3为例进行说明的。具体实现时,图9A所示的各候选模式,可以作为一个候选模式集合。图9A所示的候选模式中的每个小方格表示一个像素块,其中,五角星所在的像素块是参考像素块,黑色标记的像素块是无效相邻像素块,本申请实施例不关注斜线阴影标记的像素块是有效像素块还是无效像素块。子候选模式中的每个小方格表示一个像素,黑色标记的小方格表示无效像素,白色标记的小方格表示有效像素。图9A中是以子候选模式所描述的像素块是4*4的像素块为例进行说明的。
如图9B所示,为本申请实施例提供的另一种候选模式和子候选模式之间的对应关系的示意图。图9B中的各小方格的解释可以参考图9A。图9B中给出了候选模式表示多个无效空域相邻像素块相对于该参考像素块的方位的示例,以及各候选模式对应的子候选模式的示例。基于此,可以很好地处理待处理像素块中包含的尖锐状细节部分,这样,有助于使经上采样处理的占用图更接近点云的原始占用图,从而提高编解码效率。
可以理解的是,图9A和图9B仅为候选模式与子候选模式之间的对应关系的示例,具体实现时,可以根据需要设置其他的候选模式,以及子候选模式。
作为一个示例,同一种候选模式可以对应多种预设阈值,从而使得同一种候选模式对应多种子候选方式。例如,当预设阈值是2时,图9A所示的候选类型7对应的子候选模式可以参考图8中的(a)图;当预设阈值是3时,图9A所示的候选类型7对应的子候选模式可以参考图9A。
具体的,方式1B可以包括:
当目标候选模式包括一种候选模式时,如果目标候选模式对应于一种子候选模式,则根据该子候选模式,将第一目标位置的像素标记为已占用,和/或将第二目标位置的像素标记为未占用。如果目标候选模式对应于多种子候选模式,则根据该多种子候选模式中的一种子候选模式,将第一目标位置的像素标记为已占用,和/或将第二目标位置的像素标记为未占用。
可选的,如果目标候选模式对应于多种子候选模式,则编码器还可以将标识信息编入码流,该标识信息表示该多种子候选模式中的且执行标记操作所采用的子候选模式。相应的,解码器还可以解析码流,以得到该标识信息,后续,解码器可以基于该标识信息所指示的子候选模式对待处理像素块执行标记操作。
当目标候选模式为多种候选模式的组合,且该多种候选模式包括第一候选模式和第二候选模式时,根据目标子候选模式,将第一目标位置的像素标记为已占用,和/或将第二目标位置的像素标记为未占用,目标子候选模式包括第一子候选模式和第二子候选模式。
其中,第一候选模式对应于第一子候选模式,且第二候选模式对应于第二子候选模式。
或者,第一候选模式对应于多种子候选模式,该多种子候选模式中包括第一子候选模式;且第二候选模式对应于多种子候选模式,该多种子候选模式中包括第二子候选模式。
或者,第一候选模式对应于第一子候选模式,且第二候选模式对应于多种子候选模式,该多种子候选模式中包括第二子候选模式。
也就是说,当目标候选模式为N种候选模式的组合,且N是大于或等于2的整数时,该N种候选模式所对应的N种子候选模式共同作为目标子候选模式;其中,该N种子候选模式分别对应不同的候选模式。具体的,如果基于该N种子候选模式所描述的无效像素的位置分布,均确定待处理像素块中的待处理像素为有效像素,则待处理像素所在的位置为第一目标位置;如果基于该N种子候选模式中的至少一种候选模式所描述的无效像素的位置分布,确定待处理像素为无效像素,则待处理像素所在的位置为第二目标位置。例如,假设执行目标候选模式是图9A中的候选模式1和候选模式3,那么,第一目标位置和第二目标位置,以及经标记的像素块,可以如图10所示。
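The combination rule stated in the preceding paragraph amounts to taking the union of the per-mode invalid regions (equivalently, intersecting the per-mode valid regions): a pixel belongs to the first target position only when every sub-candidate mode treats it as valid, and to the second target position as soon as at least one mode treats it as invalid. The sketch below shows this for two hypothetical 4×4 masks (top two rows invalid in one mode, left two columns invalid in the other); the concrete masks are illustrative and are not the patterns of Fig. 9A.

```python
import numpy as np

B2 = 4

# Per-mode masks of invalid pixels inside a B2*B2 block (True = invalid pixel).
mode_a_invalid = np.zeros((B2, B2), dtype=bool)
mode_a_invalid[:2, :] = True          # hypothetical mode a: first two rows invalid
mode_b_invalid = np.zeros((B2, B2), dtype=bool)
mode_b_invalid[:, :2] = True          # hypothetical mode b: first two columns invalid

# Second target position: invalid under at least one mode.
# First target position: valid under every mode.
second_target = mode_a_invalid | mode_b_invalid
first_target = ~second_target

marked_block = first_target.astype(np.uint8)   # 1 = occupied, 0 = unoccupied
print(marked_block)
# [[0 0 0 0]
#  [0 0 0 0]
#  [0 0 1 1]
#  [0 0 1 1]]
```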
可选的,如果第一候选模式对应于多种子候选模式,则编码器可以将第一标识信息编入码流,该第一标识信息用于表示第一子候选模式。相应的,解码器还可以解析码流,以得到第一标识信息,后续,解码器可以基于第一标识信息所表示的第一子候选模式对待处理像素块执行标记操作。同理,如果第二候选模式对应于多种子候选模式,则编码器可以将第二标识信息编入码流,该第二标识信息用于表示第二子候选模式。相应的,解码器还可以解析码流,以得到第二标识信息,后续,解码器可以基于第二标识信息所表示的第二子候选模式对待处理像素块执行标记操作。
具体的,如果目标候选模式为多种候选模式的组合,且该多种候选模式中的至少两种候选模式中的每种候选模式均对应多种子候选模式时,则码流中可以包括分别表示该每种候选模式对应的执行标记操作时所采用的子候选模式的信息。该情况下,为了使解码器获知执行标记操作所使用的每种子候选模式具体对应哪一种候选模式,编码器和解码器可以预定义码流中包括的表示子候选模式的标识信息的先后顺序。例如,若码流中包括的表示子候选模式的标识信息的顺序是:所对应的候选模式从小到大的顺序,则当第一候选模式对应于多种子候选模式,且第二候选模式对应于多种子候选模式时,码流中所包括的表示第一子候选模式的标识信息在前,表示第二子候选模式的标识信息在后。
方式2:上述S102可以包括如下步骤S102C~S102D:
S102C:确定待处理像素块的类型。
例如,基于基准像素块的空域相邻像素块是否是无效像素块,确定该基准像素块对应的待处理像素块中的无效像素在该待处理像素块中的方位信息;其中,不同类型的边界像素块对应不同的方位信息。空域相邻像素块的定义可以参考上文。
可选的,如果基准像素块的目标方位的空域相邻像素块是无效像素块,则确定与该基准像素块对应的待处理像素块中的无效像素在待处理边界像素块中的该目标方位。该目标方位是正上方、正下方、正左方、正右方、左上方、右上方、左下方和右下方 中的其中一种或者至少两种的组合。
S102D:根据待处理像素块的类型,采用对应的目标处理方式将第一目标位置的像素标记为已占用,和/或将第二目标位置的像素标记为未占用。
可选的,第一目标位置是该待处理像素块中的,且与目标有效像素之间的距离大于或等于预设阈值的无效像素所在的位置。或者,第一目标位置是该待处理像素块中的,且与目标有效像素所在的直线之间的距离大于或等于预设阈值的无效像素所在的位置。其中,目标有效像素所在的直线与待处理像素块的类型相关。
类似的,第二目标位置是该待处理像素块中的,且与目标有效像素之间的距离小于预设阈值的无效像素所在的位置。或者,第二目标位置是该待处理像素块中的,且与目标有效像素所在的直线之间的距离小于预设阈值的无效像素所在的位置。
关于目标有效像素定义及具体示例可以参考上文。
其中,不同处理方式对应不同的第一目标位置,和/或对应不同的第二目标位置。如果不同处理方式可以对应不同第一目标位置,则执行S102D时采用目标处理方式将待处理像素块中的第一目标位置的像素标记为已占用。如果不同处理方式对应不同的第二目标位置,则执行S102D时采用目标方式将待处理像素块中的第二目标位置的像素标记为未占用。
下文中,均以不同处理方式对应不同的第一目标位置为例进行说明。
以下,基于所依据的空域相邻像素块不同,说明待处理像素块(具体是待处理边界像素块)的类型(或无效像素在待处理像素块中的方位信息)的具体实现方式。需要说明的是,这里的所依据的空域相邻像素块是指,确定待处理像素块的类型时所依据的空域相邻像素块。而不应理解为待处理像素块所具有的空域相邻像素块。
当基准像素块的空域相邻像素块包括:与该基准像素块相邻且位于该基准像素块的正上方、正下方、正左方和正右方的像素块时:
若该基准像素块的预设方向的空域相邻像素块是无效像素块,且其他空域相邻像素块均是有效像素块,则待处理像素块中的无效像素在该待处理像素块内部的预设方向。其中,该预设方向包括正上方、正下方、正左方和正右方中的其中一种或至少两种的组合。
例如,可以将预设方向分别是正上方、正下方、正左方或正右方时,所对应的边界像素块的类型分别称为类型1、类型2、类型7、类型8。
例如,可以将预设方向是正上方、正左方和正右方时,所对应的边界像素块的类型称为类型13;将预设方向是正上方、正左方和正下方时,所对应的边界像素块的类型称为类型14,将预设方向是正下方、正左方和正右边时,所对应的边界像素块的类型称为类型15,将预设方向是正上方、正下方和正右方时,所对应的边界像素块的类型称为类型16。
若该基准像素块的正上方和正右方的像素块为无效像素块,且正下方和正左方的像素块是有效像素块,则待处理像素块中的无效像素在该待处理像素块内部的右上方。示例的,可以将这种方位信息对应的边界像素块的类型称为类型3。
若基准像素块的正下方和正左方的像素块为无效像素块,且正上方和正右方的像素块是有效像素块,则待处理像素块中的无效像素在该待处理像素块内部的左下方。 示例的,可以将这种方位信息对应的边界像素块的类型称为类型4。
若基准像素块的正上方和正左方的像素块为无效像素块,且正下方和正右方的像素块是有效像素块,则待处理像素块中的无效像素在该待处理像素块内部的左上方。示例的,可以将这种方位信息对应的边界像素块的类型称为类型5。
若基准像素块的正下方和正右方的像素块为无效像素块,且正上方和正左方的像素块是有效像素块,则待处理像素块中的无效像素在该待处理像素块内部的右下方。示例的,可以将这种方位信息对应的边界像素块的类型称为类型6。
当基准像素块的空域相邻像素块包括:与基准像素块相邻的且位于该基准像素块的正上方、正下方、正左方、正右方、左上方、右上方、左下方和右下方的像素块时,若基准像素块的预设方向的空域相邻像素块是无效像素块,且其他空域相邻像素块均是有效像素块,则待处理像素块中的无效像素位于该待处理像素块内部的预设方向。其中,该预设方向包括左上方、右上方、左下方或右下方。示例的,可以将右上方、左下方、左上方或右下方对应的边界像素块的类型分别称为类型9、类型10、类型11、类型12。
当基准像素块的空域相邻像素块包括与该基准像素块相邻的且位于该基准像素块的左上方、右上方、左下方和右下方的像素块时,若该基准像素块的预设方向的空域相邻像素块是无效像素块,且其他空域相邻像素块均是有效像素块,则待处理像素块中的无效像素位于该待处理像素块内部的预设方向;预设方向包括左上方、右上方、左下方和右下方其中一种或至少两种。
上述边界像素块的类型(如上述类型1~12)的索引、判别方式图、示意图以及描述信息等可以参考图11。其中,图11中的每个小方格表示一个像素块,最中心的标记有五角星的像素块表示基准像素块,黑色标记的像素块表示无效像素块,白色标记的像素块表示有效像素块,斜线阴影标记的像素块表示有效像素块或无效像素块。判别方式图可以理解为用于判别边界像素块的类型所使用的pattern,示意图可以理解为基准像素块的空域相邻像素块是有效像素块还是无效像素块。
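A compact way to implement the type decision of the preceding paragraphs (types 1–8, determined by which of the four directly adjacent blocks of the reference pixel block are invalid) is a lookup from the set of invalid directions to the type index, as sketched below. Only the single-direction and two-direction cases spelled out above are covered; the container and function names are illustrative.

```python
# Map from the set of invalid directly-adjacent neighbors (up/down/left/right)
# of the reference pixel block to the boundary-block type index used in the text.
TYPE_BY_INVALID_NEIGHBORS = {
    frozenset({"up"}): 1,              # invalid pixels toward the top of the block
    frozenset({"down"}): 2,            # invalid pixels toward the bottom
    frozenset({"up", "right"}): 3,     # invalid pixels toward the upper-right
    frozenset({"down", "left"}): 4,    # invalid pixels toward the lower-left
    frozenset({"up", "left"}): 5,      # invalid pixels toward the upper-left
    frozenset({"down", "right"}): 6,   # invalid pixels toward the lower-right
    frozenset({"left"}): 7,            # invalid pixels toward the left
    frozenset({"right"}): 8,           # invalid pixels toward the right
}

def block_type(invalid_neighbors):
    """Return the type index for a boundary block, or None when the pattern of
    invalid neighbors is not one of the cases listed above."""
    return TYPE_BY_INVALID_NEIGHBORS.get(frozenset(invalid_neighbors))

print(block_type({"up", "right"}))  # 3
```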
以下,基于待处理像素块的类型说明第一目标位置的具体实现方式。在此之前,需要说明的是,下文中的p[i]表示第一占用图中的第i个边界像素块,p[i].type=j表示边界像素块p[i]的类型的索引是j。另外,为了便于描述,图12中对像素进行了编号,其中每个小方格表示一个像素;并且,图12中以B2=4为例进行说明。此外,无论待处理像素块是哪一种类型,以及该类型对应一种还是多种处理方式,编解码器采用同一种方式对待处理像素块进行处理。
如图12所示,为本申请实施例提供的一种用于确定第一目标位置和/或第二目标位置的示意图。
基于图12的(a)图,若p[i].type=1,则第一目标位置的像素可以是待处理像素块中编号为{1}、{1,2}或{1,2,3}的像素。若p[i].type=2,则第一目标位置的像素可以是待处理像素块中编号为{4}、{3,4}或{2,3,4}的像素。
基于图12的(b)图,若p[i].type=3或9,则第一目标位置的像素可以是待处理像素块中编号为{1}、{1,2}、{1,2,3}……或{1,2,3……7}的像素。若p[i].type=4或10则第一目标位置的像素可以是待处理像素块中编号为{7}、{6,7}、{5,6,7}…… 或{1,2,3……7}的像素。
基于图12的(c)图,若p[i].type==5或11,则第一目标位置的像素可以是待处理像素块中的编号为{1}、{1,2}……或{1,2……7}的像素。若p[i].type==6或12,则第一目标位置的像素可以是待处理像素块中的编号为{7}、{6,7}、{5,6,7}……或{1,2,3……7}的像素。
基于图12的(d)图,若p[i].type=7,则第一目标位置的像素可以是待处理像素块中编号为{4}、{3,4}……或{1,2……4}的像素。若p[i].type=8,则第一目标位置的像素可以是待处理像素块中编号为{1}、{1,2}或{1,2……4}的像素。
相应的,基于上述示例,可以得到第二目标位置的具体实现方式,此处不再一一说明。
上述边界像素块的类型(如上述类型13~16)的索引、判别方式图、示意图以及描述信息等可以参考图13。其中,图13中的各小方格的说明可以参考图11。本申请实施例还提供了当边界像素块的类型是类型13~16时所对应的确定目标处理方式所对应的第一目标位置和/或第二目标位置的示意图,如图14所示。其中,图14中是以待处理像素块是4*4的像素块为例进行说明的,图14中白色标记部分表示第一目标位置,黑色标记部分表示第二目标位置。图14中的(a)~(d)图分别示出了类型13~16时第一目标位置和第二目标位置。
图14中所示的第一目标位置和第二目标位置仅为示例,其不对本申请实施例所提供的边界图像块的类型是类型13~16时,所对应的第一目标位置和第二目标位置构成限定。例如,可扩展的,边界图像块类型13对应的第一目标位置是像素块内部的正下方,且第一目标位置呈“凸字形”或类似于“凸字形”。边界图像块类型14对应的第一目标位置是像素块内部的正右方,且从像素块的右侧看第一目标位置呈“凸字形”或类似于“凸字形”。边界图像块类型15对应的第一目标位置是像素块内部的正上方,且第一目标位置呈倒“凸字形”或类似于“凸字形”。边界图像块类型16对应的第一目标位置是像素块内部的正左方,且从像素块的左侧看第一目标位置呈“凸字形”或类似于“凸字形”。相应的,可以得出第二目标位置。
可选的,上述S102A可以包括如下步骤:根据边界像素块的多种类型与多种处理方式之间的映射关系,确定待处理像素块的类型对应的处理方式。若待处理像素块的类型对应一种处理方式,则将待处理像素块的类型对应的处理方式作为目标处理方式;或者,若待处理像素块的类型对应多种处理方式,则将待处理像素块的类型对应的多种处理方式的其中一种处理方式作为目标处理方式。
在该可选的实现方式中,编码器和解码器可以预定义(如通过协议预定义)边界像素块的多种类型与多种处理方式之间的映射关系,例如预定义边界像素块的多种类型的标识信息与多种处理方式的标识信息之间的映射关系。
如果待处理像素块对应一种处理方式,则编码器和解码器均可以通过预定义的上述映射关系,获得目标处理方式。因此,该情况下,编码器可以不用向解码器发送用于表示目标处理方式的标识信息,这样可以节省码流传输开销。
如果待处理像素块对应多种处理方式,则编码器可以从该多种处理方式中选择一种处理方式作为目标处理方式。基于此,编码器可以将标识信息编入码流,该标识信 息表示待处理像素块的目标处理方式。解码器可以根据待处理像素块的类型解析码流,以得到该标识信息。
可以理解的,如果待处理像素块的空域相邻像素块包括8个,则基于每个空域相邻像素块是有效像素块还是无效像素块,该待处理像素块的空域相邻像素块可能的组合共有2 8种,这2 8种的其中一种或者至少两种的组合可以作为一种类型,例如如图11和图13所示的若干种类型。另外,除了上文中所列举的边界像素块的类型之外,边界像素块还可以被归为其他类型。实际实现的过程中,由于待处理像素块的空域相邻像素块可能的组合比较多,因此,可以选择出现概率比较高的类型,或者对编码效率增益贡献较大的类型,来执行上述方式2提供的技术方案,对于其他类型,可以不执行上述方式2提供的技术方案。基于此,对于解码器来说,可以根据待处理像素块的类型(具体是指按照上述方式2提供的技术方案进行编解码的边界像素块的类型,或者对应多种处理方式的边界像素块的类型),确定是否解析码流。其中,这里的码流是指携带目标处理方式的标识信息的码流。
例如,若编码器和解码器预定义:针对如图11和图13所示的各种类型的边界像素块,按照上述方式2提供的技术方案进行编解码;那么,解码器可以在确定一个待处理像素块的类型是图11或图13中所示的一种类型时,解析码流,以得到该类型对应的目标处理方式;当该待处理像素块的类型不是图11中所示的类型且不是图13中所示的类型时,不解析码流。
如图15所示,为本申请实施例提供的一种点云译码方法的流程示意图。该方法可以包括:
S201:对待译码点云的第一占用图进行上采样,得到第二占用图;第一占用图的分辨率是第一分辨率,第二占用图的分辨率是第二分辨率,第二分辨率大于第一分辨率。
其中,若第一占用图中的基准像素块是边界像素块,则第二占用图中的与基准像素块对应的像素块中第一目标位置的像素是已占用的像素,和/或第二占用图中的与基准像素块对应的像素块中第二目标位置的像素是未占用的像素;
若基准像素块是非边界像素块,则第二占用图中的与基准像素块对应的像素块中的像素均是已占用的像素或均是未占用的像素。
S202:根据第二占用图,重构待译码点云。
需要说明的是,本实施例中的“第二占用图”是上述图6所示的实施例中的“经标记后的第二占用图”。其他内容的解释以及有益效果等均可以参考上文,此处不再赘述。
如图16所示,为本申请实施例提供的一种点云编码方法的流程示意图。本实施例的执行主体可以是编码器。该方法可以包括:
S301:确定指示信息,该指示信息用于指示是否按照目标编码方法对待编码点云的占用图进行处理;目标编码方法包括本申请实施例提供的任一种点云编码方法,例如可以是图6或图15所示的点云译码方法,且这里的译码具体是指编码。
具体实现的过程中,编码方法可以有至少两种,该至少两种的其中一种可以是本申请实施例提供的任一种点云编码方法,其他种可以是现有技术或未来提供的点云编 码方法。
可选的,该指示信息具体可以是目标点云编码/解码方法的索引。具体实现的过程中,编码器和解码器可以预先约定编码器/解码器所支持的至少两种点云编码/解码方法的索引,然后,在编码器确定目标编码方法之后,将目标编码方法的索引或该目标编码方法对应的解码方法的索引作为指示信息编入码流。本申请实施例对编码器如何确定目标编码方法是编码器所支持的至少两种编码方法中的哪一种不进行限定。
S302:将该指示信息编入码流。
本实施例提供了一种选择目标编码方法的技术方案,该技术方案可以应用于编码器支持至少两种点云编码方法的场景中。
如图17所示,为本申请实施例提供的一种点云解码方法的流程示意图。本实施例的执行主体可以是解码器。该方法可以包括:
S401:解析码流,以得到指示信息,该指示信息用于指示是否按照目标解码方法对待解码点云的占用图进行处理;目标解码方法包括本申请实施例提供的任一种点云解码方法,例如可以是图6或图15所示的点云译码方法,且这里的译码具体是指解码。具体是与图16中所描述的编码方法相对应的解码方法。其中,该指示信息是帧级别的信息。
S402:当该指示信息用于指示按照目标解码方法对待解码点云的占用图进行处理时,按照目标解码方法对待解码点云的占用图进行处理。其中,具体的处理过程可以参考上文。
本实施例提供的点云解码方法与图16提供的点云编码方法相对应。
上述主要从方法的角度对本申请实施例提供的方案进行了介绍。为了实现上述功能,其包含了执行各个功能相应的硬件结构和/或软件模块。本领域技术人员应该很容易意识到,结合本文中所公开的实施例描述的各示例的单元及算法步骤,本申请能够以硬件或硬件和计算机软件的结合形式来实现。某个功能究竟以硬件还是计算机软件驱动硬件的方式来执行,取决于技术方案的特定应用和设计约束条件。专业技术人员可以对每个特定的应用来使用不同方法来实现所描述的功能,但是这种实现不应认为超出本申请的范围。
本申请实施例可以根据上述方法示例对编码器/解码器进行功能模块的划分,例如,可以对应各个功能划分各个功能模块,也可以将两个或两个以上的功能集成在一个处理模块中。上述集成的模块既可以采用硬件的形式实现,也可以采用软件功能模块的形式实现。需要说明的是,本申请实施例中对模块的划分是示意性的,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式。
如图18所示,为本申请实施例提供的一种译码器180的示意性框图。译码器180具体可以是编码器或解码器。译码器180可以包括上采样模块1801和点云重构模块1802。
例如,译码器180可以是图2中的编码器100,该情况下,上采样模块1801可以是上采样模块111,点云重构模块1802可以是点云重构模块112。
又如,译码器180可以是图5中的解码器200,该情况下,上采样模块1801可以是上采样模块208,点云重构模块1802可以是点云重构模205。
上采样模块1801,用于对待译码点云的第一占用图进行放大,得到第二占用图;第一占用图的分辨率是第一分辨率,第二占用图的分辨率是第二分辨率,第二分辨率大于第一分辨率;如果第一占用图中的基准像素块是边界像素块,则将第二占用图中的待处理像素块中第一目标位置的像素标记为已占用,和/或将该待处理像素块中第二目标位置的像素标记为未占用,得到经标记的像素块;该待处理像素块对应于该基准像素块;如果该基准像素块是非边界像素块,则将该待处理像素块中的像素均标记为已占用或均标记为未占用,得到经标记的像素块;点云重构模块1802,用于根据经标记的第二占用图,重构待译码点云;该经标记的第二占用图包括该经标记的像素块。例如,结合图6,上采样模块1801可以用于执行S101~S103,点云重构模块1802可以用于执行S104。
可选的,在如果第一占用图中的基准像素块是边界像素块,则将第二占用图中的待处理像素块中第一目标位置的像素标记为已占用,和/或将该待处理像素块中第二目标位置的像素标记为未占用的方面,上采样模块1801具体用于:如果第一占用图中的基准像素块是边界像素块,则确定适用于该待处理像素块的目标候选模式,其中,目标候选模式包括一种候选模式或多种候选模式;目标候选模式中参考像素块的无效空域相邻像素块的分布与该待基准像素块的无效空域相邻像素块的分布一致或趋于一致;根据目标候选模式,将第一目标位置的像素标记为已占用,和/或将第二目标位置的像素标记为未占用。
可选的,在根据目标候选模式,将第一目标位置的像素标记为已占用,和/或将第二目标位置的像素标记为未占用的方面,上采样模块1801具体用于:根据目标候选模式中无效像素的位置分布,将第一目标位置的像素标记为已占用,和/或将第二目标位置的像素标记为未占用。
可选的,如果目标候选模式包括多种候选模式,在根据目标候选模式中无效像素的位置分布,将第一目标位置的像素标记为已占用,和/或将第二目标位置的像素标记为未占用的方面,上采样模块1801具体用于:将第一目标位置的像素标记为已占用,和/或将第二目标位置的像素标记为未占用;其中:如果基于多种候选模式中无效像素的位置分布,均确定该待处理像素块中的待处理像素为有效像素,则待处理像素所在的位置为第一目标位置;如果基于多种候选模式中的至少一种候选模式中无效像素的位置分布,确定待处理像素为无效像素,则待处理像素所在的位置为第二目标位置。
可选的,如果目标候选模式为一种候选模式,在根据目标候选模式,将第一目标位置的像素标记为已占用,和/或将第二目标位置的像素标记为未占用的方面,上采样模块1801具体用于:当目标候选模式对应于一种子候选模式时,根据子候选模式,将第一目标位置的像素标记为已占用,和/或将第二目标位置的像素标记为未占用;当目标候选模式对应于多种子候选模式时,根据多种子候选模式中的一种子候选模式,将第一目标位置的像素标记为已占用,和/或将第二目标位置的像素标记为未占用;其中,不同的子候选模式用于描述像素块内部的无效像素的不同位置分布。
可选的,如果目标候选模式包括第一候选模式和第二候选模式,在根据目标候选模式,将第一目标位置的像素标记为已占用,和/或将第二目标位置的像素标记为未占用的方面,上采样模块1801具体用于:根据目标子候选模式,将第一目标位置的像素 标记为已占用,和/或将第二目标位置的像素标记为未占用,目标子候选模式包括第一子候选模式和第二子候选模式;其中,第一候选模式对应于第一子候选模式,且第二候选模式对应于第二子候选模式;或者,第一候选模式对应于多种子候选模式且多种子候选模式中包括第一子候选模式;以及,第二候选模式对应于多种子候选模式且多种子候选模式中包括第二子候选模式;或者,第一候选模式对应于第一子候选模式,以及,第二候选模式对应于多种子候选模式且多种子候选模式中包括第二子候选模式;其中,不同的子候选模式用于描述像素块内部的无效像素的不同位置分布。
可选的,待译码点云是待编码点云;参见图19A,译码器180还包括:辅助信息编码模块1803,用于当目标候选模式对应于多种子候选模式时,将标识信息编入码流,标识信息用于表示目标候选模式对应的执行标记操作时所采用的子候选模式。例如,结合图2,辅助信息编码模块1803具体可以是辅助信息编码模块108。
可选的,待译码点云是待解码点云;参见图19B,译码器180还包括:辅助信息解码模块1804,用于解析码流,以得到标识信息,标识信息用于表示目标候选模式对应的执行标记操作时所采用的子候选模式。例如,结合图5,辅助信息解码模块1804具体可以是辅助信息解码模块204。
可选的,在如果第一占用图中的基准像素块是边界像素块,则将第二占用图中的待处理像素块中第一目标位置的像素标记为已占用,和/或将该待处理像素块中第二目标位置的像素标记为未占用的方面,的上采样模块1801具体用于:如果第一占用图中的基准像素块是边界像素块,则确定该待处理像素块的类型;根据该待处理像素块的类型,采用对应的目标处理方式将第一目标位置的像素标记为已占用,和/或将第二目标位置的像素标记为未占用。
可选的,在确定该待处理像素块的类型的方面,的上采样模块1801具体用于:基于该待基准像素块的空域相邻像素块是否是无效像素块,确定该待处理像素块中的无效像素在该待处理像素块中的方位信息;其中,不同类型的像素块对应不同的方位信息。
可选的,在将第二占用图中的待处理像素块中第一目标位置的像素标记为已占用,和/或将该待处理像素块中第二目标位置的像素标记为未占用的方面,的上采样模块1801具体用于:在该待基准像素块的空域相邻像素块中有效像素块的个数大于或等于预设阈值的情况下,将第一目标位置的像素标记为已占用,和/或将第二目标位置的像素标记为未占用。
可选的,在将第二占用图中的待处理像素块中第一目标位置的像素标记为已占用,和/或将该待处理像素块中第二目标位置的像素标记为未占用的方面,的上采样模块1801具体用于:将第一目标位置的像素的值置1,和/或将第二目标位置的像素的值置0。
可以理解的,本申请实施例提供的译码器180中的各模块为实现上文提供的相应的方法中所包含的各种执行步骤的功能主体,即具备实现完整实现本申请图像自适应填充/上采样方法中的各个步骤以及这些步骤的扩展及变形的功能主体,具体请参见上文中相应方法的介绍,为简洁起见,本文将不再赘述。
图20为用于本申请实施例的编码设备或解码设备(简称为译码设备230)的一种 实现方式的示意性框图。其中,译码设备230可以包括处理器2310、存储器2330和总线系统2350。其中,处理器2310和存储器2330通过总线系统2350相连,该存储器2330用于存储指令,该处理器2310用于执行该存储器2330存储的指令,以执行本申请描述的各种点云译码方法。为避免重复,这里不再详细描述。
在本申请实施例中,该处理器2310可以是中央处理单元(central processing unit,CPU),该处理器2310还可以是其他通用处理器、DSP、ASIC、FPGA或者其他可编程逻辑器件、分立门或者晶体管逻辑器件、分立硬件组件等。通用处理器可以是微处理器或者该处理器也可以是任何常规的处理器等。
该存储器2330可以包括ROM设备或者RAM设备。任何其他适宜类型的存储设备也可以用作存储器2330。存储器2330可以包括由处理器2310使用总线系统2350访问的代码和数据2331。存储器2330可以进一步包括操作系统2333和应用程序2335,该应用程序2335包括允许处理器2310执行本申请描述的视频编码或解码方法(尤其是本申请描述的基于当前像素块的块尺寸对当前像素块进行滤波的方法)的至少一个程序。例如,应用程序2335可以包括应用1至N,其进一步包括执行在本申请描述的视频编码或解码方法的视频编码或解码应用(简称视频译码应用)。
该总线系统2350除包括数据总线之外,还可以包括电源总线、控制总线和状态信号总线等。但是为了清楚说明起见,在图中将各种总线都标为总线系统2350。
可选的,译码设备230还可以包括一个或多个输出设备,诸如显示器2370。在一个示例中,显示器2370可以是触感显示器,其将显示器与可操作地感测触摸输入的触感单元合并。显示器2370可以经由总线系统2350连接到处理器2310。
本领域技术人员能够领会,结合本文公开描述的各种说明性逻辑框、模块和算法步骤所描述的功能可以硬件、软件、固件或其任何组合来实施。如果以软件来实施,那么各种说明性逻辑框、模块、和步骤描述的功能可作为一或多个指令或代码在计算机可读媒体上存储或传输,且由基于硬件的处理单元执行。计算机可读媒体可包含计算机可读存储媒体,其对应于有形媒体,例如数据存储媒体,或包括任何促进将计算机程序从一处传送到另一处的媒体(例如,根据通信协议)的通信媒体。以此方式,计算机可读媒体大体上可对应于非暂时性的有形计算机可读存储媒体,或通信媒体,例如信号或载波。数据存储媒体可为可由一或多个计算机或一或多个处理器存取以检索用于实施本申请中描述的技术的指令、代码和/或数据结构的任何可用媒体。计算机程序产品可包含计算机可读媒体。
作为实例而非限制,此类计算机可读存储媒体可包括RAM、ROM、EEPROM、CD-ROM或其它光盘存储装置、磁盘存储装置或其它磁性存储装置、快闪存储器或可用来存储指令或数据结构的形式的所要程序代码并且可由计算机存取的任何其它媒体。并且,任何连接被恰当地称作计算机可读媒体。举例来说,如果使用同轴缆线、光纤缆线、双绞线、数字订户线(DSL)或例如红外线、无线电和微波等无线技术从网站、服务器或其它远程源传输指令,那么同轴缆线、光纤缆线、双绞线、DSL或例如红外线、无线电和微波等无线技术包含在媒体的定义中。但是,应理解,所述计算机可读存储媒体和数据存储媒体并不包括连接、载波、信号或其它暂时媒体,而是实际上针对于非暂时性有形存储媒体。如本文中所使用,磁盘和光盘包含压缩光盘(CD)、激光光盘、 光学光盘、DVD和蓝光光盘,其中磁盘通常以磁性方式再现数据,而光盘利用激光以光学方式再现数据。以上各项的组合也应包含在计算机可读媒体的范围内。
可通过例如一或多个数字信号处理器(DSP)、通用微处理器、专用集成电路(ASIC)、现场可编程逻辑阵列(FPGA)或其它等效集成或离散逻辑电路等一或多个处理器来执行指令。因此,如本文中所使用的术语“处理器”可指前述结构或适合于实施本文中所描述的技术的任一其它结构中的任一者。另外,在一些方面中,本文中所描述的各种说明性逻辑框、模块、和步骤所描述的功能可以提供于经配置以用于编码和解码的专用硬件和/或软件模块内,或者并入在组合编解码器中。而且,所述技术可完全实施于一或多个电路或逻辑元件中。在一种示例下,编码器100及解码器200中的各种说明性逻辑框、单元、模块可以理解为对应的电路器件或逻辑元件。
本申请的技术可在各种各样的装置或设备中实施,包含无线手持机、集成电路(IC)或一组IC(例如,芯片组)。本申请中描述各种组件、模块或单元是为了强调用于执行所揭示的技术的装置的功能方面,但未必需要由不同硬件单元实现。实际上,如上文所描述,各种单元可结合合适的软件和/或固件组合在编码解码器硬件单元中,或者通过互操作硬件单元(包含如上文所描述的一或多个处理器)来提供。
以上所述,仅为本申请示例性的具体实施方式,但本申请的保护范围并不局限于此,任何熟悉本技术领域的技术人员在本申请揭露的技术范围内,可轻易想到的变化或替换,都应涵盖在本申请的保护范围之内。因此,本申请的保护范围应该以权利要求的保护范围为准。

Claims (36)

  1. 一种点云译码方法,其特征在于,包括:
    对待译码点云的第一占用图进行放大,得到第二占用图;所述第一占用图的分辨率是第一分辨率,所述第二占用图的分辨率是第二分辨率,所述第二分辨率大于所述第一分辨率;
    如果所述第一占用图中的基准像素块是边界像素块,则将所述第二占用图中的待处理像素块中第一目标位置的像素标记为已占用,和/或将所述待处理像素块中第二目标位置的像素标记为未占用,得到经标记的像素块;其中,所述待处理像素块对应于所述基准像素块;
    如果所述基准像素块是非边界像素块,则将所述待处理像素块中的像素均标记为已占用或均标记为未占用,得到经标记的像素块;
    根据经标记的所述第二占用图,重构所述待译码点云;所述经标记的所述第二占用图包括所述经标记的像素块。
  2. 根据权利要求1所述的方法,其特征在于,所述将所述第二占用图中的待处理像素块中第一目标位置的像素标记为已占用,和/或将所述待处理像素块中第二目标位置的像素标记为未占用,包括:
    确定适用于所述待处理像素块的目标候选模式,其中,所述目标候选模式包括一种候选模式或多种候选模式;所述目标候选模式中参考像素块的无效空域相邻像素块的分布与所述基准像素块的无效空域相邻像素块的分布一致或趋于一致;
    根据所述目标候选模式,将所述第一目标位置的像素标记为已占用,和/或将所述第二目标位置的像素标记为未占用。
  3. 根据权利要求2所述的方法,其特征在于,所述根据所述目标候选模式,将所述第一目标位置的像素标记为已占用,和/或将所述第二目标位置的像素标记为未占用,包括:
    根据所述目标候选模式中无效像素的位置分布,将所述第一目标位置的像素标记为已占用,和/或将所述第二目标位置的像素标记为未占用。
  4. 根据权利要求3所述的方法,其特征在于,如果所述目标候选模式包括多种候选模式,所述根据所述目标候选模式中无效像素的位置分布,将所述第一目标位置的像素标记为已占用,和/或将所述第二目标位置的像素标记为未占用,包括:
    将所述第一目标位置的像素标记为已占用,和/或将所述第二目标位置的像素标记为未占用;其中:
    如果基于所述多种候选模式中无效像素的位置分布,均确定所述待处理像素块中的待处理像素为有效像素,则所述待处理像素所在的位置为所述第一目标位置;
    如果基于所述多种候选模式中的至少一种候选模式中无效像素的位置分布,确定所述待处理像素为无效像素,则所述待处理像素所在的位置为所述第二目标位置。
  5. 根据权利要求2所述的方法,其特征在于,如果所述目标候选模式为一种候选模式,所述根据所述目标候选模式,将所述第一目标位置的像素标记为已占用,和/或将所述第二目标位置的像素标记为未占用,包括:
    当所述目标候选模式对应于一种子候选模式时,根据所述子候选模式,将所述第 一目标位置的像素标记为已占用,和/或将所述第二目标位置的像素标记为未占用;
    当所述目标候选模式对应于多种子候选模式时,根据所述多种子候选模式中的一种子候选模式,将所述第一目标位置的像素标记为已占用,和/或将所述第二目标位置的像素标记为未占用;
    其中,不同的子候选模式用于描述像素块内部的无效像素的不同位置分布。
  6. 根据权利要求2所述的方法,其特征在于,如果所述目标候选模式包括第一候选模式和第二候选模式,所述根据所述目标候选模式,将所述第一目标位置的像素标记为已占用,和/或将所述第二目标位置的像素标记为未占用,包括:
    根据目标子候选模式,将所述第一目标位置的像素标记为已占用,和/或将所述第二目标位置的像素标记为未占用,所述目标子候选模式包括第一子候选模式和第二子候选模式;
    其中,所述第一候选模式对应于所述第一子候选模式,且所述第二候选模式对应于所述第二子候选模式;
    或者,所述第一候选模式对应于多种子候选模式且所述多种子候选模式中包括所述第一子候选模式;以及,所述第二候选模式对应于多种子候选模式且所述多种子候选模式中包括所述第二子候选模式;
    或者,所述第一候选模式对应于所述第一子候选模式,以及,所述第二候选模式对应于多种子候选模式且所述多种子候选模式中包括所述第二子候选模式;
    其中,不同的子候选模式用于描述像素块内部的无效像素的不同位置分布。
  7. 根据权利要求5或6所述的方法,其特征在于,所述待译码点云是待编码点云;所述方法还包括:
    当所述目标候选模式对应于多种子候选模式时,将标识信息编入码流,所述标识信息用于表示所述目标候选模式对应的执行标记操作时所采用的子候选模式。
  8. 根据权利要求5或6所述的方法,其特征在于,所述待译码点云是待解码点云;所述方法还包括:
    解析码流,以得到标识信息,所述标识信息用于表示所述目标候选模式对应的执行标记操作时所采用的子候选模式。
  9. 根据权利要求1所述的方法,其特征在于,所述将所述第二占用图中的待处理像素块中第一目标位置的像素标记为已占用,和/或将所述待处理像素块中第二目标位置的像素标记为未占用,包括:
    确定所述待处理像素块的类型;
    根据所述待处理像素块的类型,采用对应的目标处理方式将所述第一目标位置的像素标记为已占用,和/或将所述第二目标位置的像素标记为未占用。
  10. 根据权利要求9所述的方法,其特征在于,所述确定所述待处理像素块的类型,包括:
    基于所述基准像素块的空域相邻像素块是否是无效像素块,确定所述待处理像素块中的无效像素在所述待处理像素块中的方位信息;
    其中,不同类型的像素块对应不同的方位信息。
  11. 根据权利要求1至10任一项所述的方法,其特征在于,所述将所述第二占用 图中的待处理像素块中第一目标位置的像素标记为已占用,和/或将所述待处理像素块中第二目标位置的像素标记为未占用,包括:
    在所述基准像素块的空域相邻像素块中有效像素块的个数大于或等于预设阈值的情况下,将所述第一目标位置的像素标记为已占用,和/或将所述第二目标位置的像素标记为未占用。
  12. 根据权利要求1至11任一项所述的方法,其特征在于,所述将所述第二占用图中的待处理像素块中第一目标位置的像素标记为已占用,和/或将所述待处理像素块中第二目标位置的像素标记为未占用,包括:
    将所述第一目标位置的像素的值置1,和/或将所述第二目标位置的像素的值置0。
  13. 一种点云译码方法,其特征在于,包括:
    对待译码点云的第一占用图进行上采样,得到第二占用图;所述第一占用图的分辨率是第一分辨率,所述第二占用图的分辨率是第二分辨率,所述第二分辨率大于所述第一分辨率;
    其中,若所述第一占用图中的基准像素块是边界像素块,则所述第二占用图中的与所述基准像素块对应的像素块中第一目标位置的像素是已占用的像素,和/或所述第二占用图中的与所述基准像素块对应的像素块中第二目标位置的像素是未占用的像素;
    若所述基准像素块是非边界像素块,则所述第二占用图中的与所述基准像素块对应的像素块中的像素均是已占用的像素或均是未占用的像素;
    根据所述第二占用图,重构所述待译码点云。
  14. 一种点云编码方法,其特征在于,包括:
    确定指示信息,所述指示信息用于指示是否按照目标点云编码方法对待编码点云的占用图进行编码;所述目标点云编码方法包括如权利要求1~7、9~13任一项所述的点云译码方法;
    将所述指示信息编入码流。
  15. 一种点云解码方法,其特征在于,包括:
    解析码流,以得到指示信息,所述指示信息用于指示是否按照目标点云解码方法对待解码点云的占用图进行处理;所述目标点云解码方法包括如权利要求1~6、8~13任一项所述的点云译码方法;
    当所述指示信息指示按照所述目标点云解码方法进行处理时,按照所述目标点云解码方法对所述待解码点云的占用图进行处理。
  16. 一种译码器,其特征在于,包括:
    上采样模块,用于对待译码点云的第一占用图进行放大,得到第二占用图;所述第一占用图的分辨率是第一分辨率,所述第二占用图的分辨率是第二分辨率,所述第二分辨率大于所述第一分辨率;如果所述第一占用图中的基准像素块是边界像素块,则将所述第二占用图中的待处理像素块中第一目标位置的像素标记为已占用,和/或将所述待处理像素块中第二目标位置的像素标记为未占用,得到经标记的像素块;所述待处理像素块对应于所述基准像素块;如果所述基准像素块是非边界像素块,则将所述待处理像素块中的像素均标记为已占用或均标记为未占用,得到经标记的像素块;
    点云重构模块,用于根据经标记的所述第二占用图,重构所述待译码点云;所述 经标记的所述第二占用图包括所述经标记的像素块。
  17. 根据权利要求16所述的译码器,其特征在于,在所述如果所述第一占用图中的基准像素块是边界像素块,则将所述第二占用图中的待处理像素块中第一目标位置的像素标记为已占用,和/或将所述待处理像素块中第二目标位置的像素标记为未占用的方面,所述上采样模块具体用于:
    如果所述第一占用图中的基准像素块是边界像素块,则确定适用于所述待处理像素块的目标候选模式,其中,所述目标候选模式包括一种候选模式或多种候选模式;所述目标候选模式中参考像素块的无效空域相邻像素块的分布与所述基准像素块的无效空域相邻像素块的分布一致或趋于一致;
    根据所述目标候选模式,将所述第一目标位置的像素标记为已占用,和/或将所述第二目标位置的像素标记为未占用。
  18. 根据权利要求17所述的译码器,其特征在于,在所述根据所述目标候选模式,将所述第一目标位置的像素标记为已占用,和/或将所述第二目标位置的像素标记为未占用的方面,所述上采样模块具体用于:
    根据所述目标候选模式中无效像素的位置分布,将所述第一目标位置的像素标记为已占用,和/或将所述第二目标位置的像素标记为未占用。
  19. 根据权利要求18所述的译码器,其特征在于,如果所述目标候选模式包括多种候选模式,在所述根据所述目标候选模式中无效像素的位置分布,将所述第一目标位置的像素标记为已占用,和/或将所述第二目标位置的像素标记为未占用的方面,所述上采样模块具体用于:
    将所述第一目标位置的像素标记为已占用,和/或将所述第二目标位置的像素标记为未占用;其中:
    如果基于所述多种候选模式中无效像素的位置分布,均确定所述待处理像素块中的待处理像素为有效像素,则所述待处理像素所在的位置为所述第一目标位置;
    如果基于所述多种候选模式中的至少一种候选模式中无效像素的位置分布,确定所述待处理像素为无效像素,则所述待处理像素所在的位置为所述第二目标位置。
  20. 根据权利要求17所述的译码器,其特征在于,如果所述目标候选模式为一种候选模式,在所述根据所述目标候选模式,将所述第一目标位置的像素标记为已占用,和/或将所述第二目标位置的像素标记为未占用的方面,所述上采样模块具体用于:
    当所述目标候选模式对应于一种子候选模式时,根据所述子候选模式,将所述第一目标位置的像素标记为已占用,和/或将所述第二目标位置的像素标记为未占用;
    当所述目标候选模式对应于多种子候选模式时,根据所述多种子候选模式中的一种子候选模式,将所述第一目标位置的像素标记为已占用,和/或将所述第二目标位置的像素标记为未占用;
    其中,不同的子候选模式用于描述像素块内部的无效像素的不同位置分布。
  21. 根据权利要求17所述的译码器,其特征在于,如果所述目标候选模式包括第一候选模式和第二候选模式,在所述根据所述目标候选模式,将所述第一目标位置的像素标记为已占用,和/或将所述第二目标位置的像素标记为未占用的方面,所述上采样模块具体用于:
    根据目标子候选模式,将所述第一目标位置的像素标记为已占用,和/或将所述第二目标位置的像素标记为未占用,所述目标子候选模式包括第一子候选模式和第二子候选模式;
    其中,所述第一候选模式对应于所述第一子候选模式,且所述第二候选模式对应于所述第二子候选模式;
    或者,所述第一候选模式对应于多种子候选模式且所述多种子候选模式中包括所述第一子候选模式;以及,所述第二候选模式对应于多种子候选模式且所述多种子候选模式中包括所述第二子候选模式;
    或者,所述第一候选模式对应于所述第一子候选模式,以及,所述第二候选模式对应于多种子候选模式且所述多种子候选模式中包括所述第二子候选模式;
    其中,不同的子候选模式用于描述像素块内部的无效像素的不同位置分布。
  22. 根据权利要求20或21所述的译码器,其特征在于,所述待译码点云是待编码点云;所述译码器还包括:
    辅助信息编码模块,用于当所述目标候选模式对应于多种子候选模式时,将标识信息编入码流,所述标识信息用于表示所述目标候选模式对应的执行标记操作时所采用的子候选模式。
  23. 根据权利要求20或21所述的译码器,其特征在于,所述待译码点云是待解码点云;所述译码器还包括:
    辅助信息解码模块,用于解析码流,以得到标识信息,所述标识信息用于表示所述目标候选模式对应的执行标记操作时所采用的子候选模式。
  24. 根据权利要求16所述的译码器,其特征在于,在所述如果所述第一占用图中的基准像素块是边界像素块,则将所述第二占用图中的待处理像素块中第一目标位置的像素标记为已占用,和/或将所述待处理像素块中第二目标位置的像素标记为未占用的方面,所述的上采样模块具体用于:
    如果所述第一占用图中的基准像素块是边界像素块,则确定所述待处理像素块的类型;
    根据所述待处理像素块的类型,采用对应的目标处理方式将所述第一目标位置的像素标记为已占用,和/或将所述第二目标位置的像素标记为未占用。
  25. 根据权利要求24所述的译码器,其特征在于,在所述确定所述待处理像素块的类型的方面,所述的上采样模块具体用于:
    基于所述基准像素块的空域相邻像素块是否是无效像素块,确定所述待处理像素块中的无效像素在所述待处理像素块中的方位信息;
    其中,不同类型的像素块对应不同的方位信息。
  26. 根据权利要求16至25任一项所述的译码器,其特征在于,在所述将所述第二占用图中的待处理像素块中第一目标位置的像素标记为已占用,和/或将所述待处理像素块中第二目标位置的像素标记为未占用的方面,所述的上采样模块具体用于:在所述基准像素块的空域相邻像素块中有效像素块的个数大于或等于预设阈值的情况下,将所述第一目标位置的像素标记为已占用,和/或将所述第二目标位置的像素标记为未占用。
  27. 根据权利要求16至26任一项所述的译码器,其特征在于,在所述将所述第二占用图中的待处理像素块中第一目标位置的像素标记为已占用,和/或将所述待处理像素块中第二目标位置的像素标记为未占用的方面,所述的上采样模块具体用于:
    将所述第一目标位置的像素的值置1,和/或将所述第二目标位置的像素的值置0。
  28. 一种译码器,其特征在于,包括:
    上采样模块,用于对待译码点云的第一占用图进行上采样,得到第二占用图;所述第一占用图的分辨率是第一分辨率,所述第二占用图的分辨率是第二分辨率,所述第二分辨率大于所述第一分辨率;
    其中,若所述第一占用图中的基准像素块是边界像素块,则所述第二占用图中的与所述基准像素块对应的像素块中第一目标位置的像素是已占用的像素,和/或所述第二占用图中的与所述基准像素块对应的像素块中第二目标位置的像素是未占用的像素;
    若所述基准像素块是非边界像素块,则所述第二占用图中的与所述基准像素块对应的像素块中的像素均是已占用的像素或均是未占用的像素;
    点云重构模块,用于根据所述第二占用图,重构所述待译码点云。
  29. 一种编码器,其特征在于,包括:
    辅助信息编码模块,用于确定指示信息,所述指示信息用于指示是否按照目标点云编码方法对待编码点云的占用图进行处理;所述目标点云编码方法包括如权利要求1~7、9~13任一项所述的点云译码方法;将所述指示信息编入码流;
    占用图处理模块,用于在所述指示信息指示按照目标点云编码方法对待编码点云的占用图进行处理的情况下,按照所述目标点云编码方法对所述待编码点云的占用图进行处理。
  30. 一种解码器,其特征在于,包括:
    辅助信息解码模块,用于解析码流,以得到指示信息,所述指示信息用于指示是否按照目标点云解码方法对待解码点云的占用图进行处理;所述目标点云解码方法包括如权利要求1~6、8~13任一项所述的点云译码方法;
    占用图处理模块,用于当所述指示信息指示按照所述目标点云解码方法进行处理时,按照所述目标点云解码方法对所述待解码点云的占用图进行处理。
  31. 一种译码装置,其特征在于,包括存储器和处理器;所述存储器用于存储程序代码;所述处理器用于调用所述程序代码,以执行如权利要求1至13任一项所述的点云译码方法。
  32. 一种编码装置,其特征在于,包括存储器和处理器;所述存储器用于存储程序代码;所述处理器用于调用所述程序代码,以执行如权利要求14所述的点云编码方法。
  33. 一种解码装置,其特征在于,包括存储器和处理器;所述存储器用于存储程序代码;所述处理器用于调用所述程序代码,以执行如权利要求15所述的点云解码方法。
  34. 一种计算机可读存储介质,其特征在于,包括程序代码,所述程序代码在计算机上运行时,使得所述计算机执行如权利要求1至13任一项所述的点云译码方法。
  35. 一种计算机可读存储介质,其特征在于,包括程序代码,所述程序代码在计 算机上运行时,使得所述计算机执行如权利要求14所述的点云编码方法。
  36. 一种计算机可读存储介质,其特征在于,包括程序代码,所述程序代码在计算机上运行时,使得所述计算机执行如权利要求15所述的点云解码方法。
PCT/CN2020/071247 2019-01-12 2020-01-09 点云译码方法及译码器 WO2020143725A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910029219.7 2019-01-12
CN201910029219.7A CN111435992B (zh) 2019-01-12 2019-01-12 点云译码方法及装置

Publications (1)

Publication Number Publication Date
WO2020143725A1 true WO2020143725A1 (zh) 2020-07-16

Family

ID=71520662

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/071247 WO2020143725A1 (zh) 2019-01-12 2020-01-09 点云译码方法及译码器

Country Status (2)

Country Link
CN (1) CN111435992B (zh)
WO (1) WO2020143725A1 (zh)


Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9445108B1 (en) * 2015-05-26 2016-09-13 International Business Machines Corporation Document compression with neighborhood biased pixel labeling
US20170249401A1 (en) * 2016-02-26 2017-08-31 Nvidia Corporation Modeling point cloud data using hierarchies of gaussian mixture models
US20170347122A1 (en) * 2016-05-28 2017-11-30 Microsoft Technology Licensing, Llc Scalable point cloud compression with transform, and corresponding decompression
CN109196559A (zh) * 2016-05-28 2019-01-11 微软技术许可有限责任公司 动态体素化点云的运动补偿压缩
US20180268570A1 (en) * 2017-03-16 2018-09-20 Samsung Electronics Co., Ltd. Point cloud and mesh compression using image/video codecs

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
TIM, G. ET AL.: "Real-time Point Cloud Compression", 2015 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS (IROS), 2 October 2015 (2015-10-02), XP032832361 *
XU, YILING ET AL.: "Introduction to Point Cloud Compression", ZTE COMMUNICATIONS, vol. 16, no. 3, 30 September 2018 (2018-09-30), XP055687159 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11138694B2 (en) * 2018-12-05 2021-10-05 Tencent America LLC Method and apparatus for geometric smoothing
US11727536B2 (en) 2018-12-05 2023-08-15 Tencent America LLC Method and apparatus for geometric smoothing

Also Published As

Publication number Publication date
CN111435992B (zh) 2021-05-11
CN111435992A (zh) 2020-07-21

Similar Documents

Publication Publication Date Title
WO2020001565A1 (zh) 点云编解码方法和编解码器
CN110719497B (zh) 点云编解码方法和编解码器
US11388442B2 (en) Point cloud encoding method, point cloud decoding method, encoder, and decoder
CN110971898B (zh) 点云编解码方法和编解码器
CN110944187B (zh) 点云编码方法和编码器
CN111479114B (zh) 点云的编解码方法及装置
CN111435551B (zh) 点云滤波方法、装置及存储介质
CN111726615B (zh) 点云编解码方法及编解码器
WO2020119509A1 (zh) 点云编解码方法和编解码器
US20220007037A1 (en) Point cloud encoding method and apparatus, point cloud decoding method and apparatus, and storage medium
WO2020143725A1 (zh) 点云译码方法及译码器
WO2020063718A1 (zh) 点云编解码方法和编解码器
WO2020187191A1 (zh) 点云编解码方法及编解码器
WO2020015517A1 (en) Point cloud encoding method, point cloud decoding method, encoder and decoder
WO2020057338A1 (zh) 点云编码方法和编码器

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20738509

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20738509

Country of ref document: EP

Kind code of ref document: A1