WO2022188582A1 - Method and apparatus for selecting neighbor points in a point cloud, and codec - Google Patents
Method and apparatus for selecting neighbor points in a point cloud, and codec
- Publication number
- WO2022188582A1 (PCT/CN2022/075528)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- point
- component
- target points
- distribution value
- weight coefficient
- Prior art date
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/102—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
- H04N19/103—Selection of coding mode or of prediction mode
- H04N19/105—Selection of the reference unit for prediction within a chosen coding or prediction mode, e.g. adaptive choice of position and number of pixels used for prediction
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T9/00—Image coding
- G06T9/001—Model-based coding, e.g. wire frame
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/20—Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
- H04N21/23—Processing of content or additional data; Elementary server operations; Server middleware
- H04N21/234—Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
- H04N21/2343—Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/102—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
- H04N19/119—Adaptive subdivision aspects, e.g. subdivision of a picture into rectangular or non-rectangular coding blocks
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/134—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
- H04N19/167—Position within a video image, e.g. region of interest [ROI]
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/50—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
- H04N19/597—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding specially adapted for multi-view video sequence encoding
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/44—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
- H04N21/4402—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving reformatting operations of video signals for household redistribution, storage or real-time display
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02D—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
- Y02D30/00—Reducing energy consumption in communication networks
- Y02D30/70—Reducing energy consumption in communication networks in wireless communication networks
Definitions
- the embodiments of the present application relate to the technical field of video coding and decoding, and in particular, to a method, apparatus, and codec for selecting neighbor points in a point cloud.
- the surface of an object is scanned by acquisition equipment to form point cloud data, which may include hundreds of thousands of points or even more.
- point cloud data is transmitted between the video production device and the video playback device in the form of point cloud media files.
- video production equipment needs to compress the point cloud data before transmission.
- the compression of point cloud data mainly includes the compression of position information and the compression of attribute information.
- when the attribute information is compressed, redundant information in the point cloud data is reduced or eliminated by prediction; for example, one or more adjacent points of the current point are obtained from the encoded points, and the attribute information of the current point is predicted according to the attribute information of those adjacent points.
- the present application provides a method, device and codec for selecting neighbor points in a point cloud, so as to improve the accuracy of neighbor point selection.
- the present application provides a method for selecting neighbor points in a point cloud, including:
- For at least two target points in the target area, determine the weight coefficient of each of the at least two target points, where the at least two target points do not include the current point;
- Determine the weight of each of the at least two target points according to the weight coefficient and geometric information of each of the at least two target points and the geometric information of the current point;
- At least one neighbor point of the current point is selected from the at least two target points according to the weight of each of the at least two target points.
- the present application provides a method for selecting neighbor points in a point cloud, including:
- the target area where the current point is located is determined from the point cloud data, and the target area includes a plurality of points;
- Determine the weight of each of the at least two target points according to the weight coefficient and geometric information of each of the at least two target points and the geometric information of the current point;
- At least one neighbor point of the current point is selected from the at least two target points according to the weight of each of the at least two target points.
- an apparatus for selecting neighbor points in a point cloud, including:
- an acquisition unit, configured to acquire point cloud data and determine, from the point cloud data, the target area where the current point is located, where the target area includes multiple points;
- a weight coefficient determination unit, configured to determine, for at least two target points in the target area, the weight coefficient of each of the at least two target points, where the current point is not included in the at least two target points;
- a weight determination unit configured to determine the weight of each point in the at least two target points according to the weight coefficient and geometric information of each of the at least two target points and the geometric information of the current point;
- the neighbor point selection unit is configured to select at least one neighbor point of the current point from the at least two target points according to the weight of each point in the at least two target points.
- a device for selecting neighbor points in a point cloud, including:
- a decoding unit, configured to decode the code stream to obtain the geometric information of the points in the point cloud data;
- the area determination unit is used to determine the target area where the current point is located from the point cloud data according to the geometric information of the point in the point cloud data, and the target area includes a plurality of points;
- a weight coefficient determination unit, configured to determine, for at least two decoded target points in the target area, the weight coefficient of each of the at least two target points, where the current point is not included in the at least two target points;
- a weight determination unit configured to determine the weight of each point in the at least two target points according to the weight coefficient and geometric information of each of the at least two target points and the geometric information of the current point;
- the neighbor point determination unit is configured to select at least one neighbor point of the current point from the at least two target points according to the weight of each point in the at least two target points.
- an encoder including a processor and a memory.
- the memory is used for storing computer-readable instructions
- the processor is used for invoking and executing the computer-readable instructions stored in the memory, so as to perform the method in the above-mentioned first aspect or each of its implementations.
- a decoder including a processor and a memory.
- the memory is used for storing computer-readable instructions
- the processor is used for invoking and executing the computer-readable instructions stored in the memory, so as to perform the method in the above-mentioned second aspect or each of its implementations.
- a chip for implementing the method in any one of the above-mentioned first aspect to the second aspect or each of the implementation manners thereof.
- the chip includes: a processor, configured to invoke and execute computer-readable instructions from a memory, so that a device on which the chip is installed executes the method in any one of the above-mentioned first to second aspects or the implementations thereof.
- a computer-readable storage medium for storing computer-readable instructions, where the computer-readable instructions cause a computer to execute the method in any one of the above-mentioned first to second aspects or the implementations thereof.
- a computer program product comprising computer-readable instructions, where the computer-readable instructions cause a computer to execute the method in any one of the above-mentioned first to second aspects or the implementations thereof.
- the present application determines the target area where the current point is located from the point cloud data, selects at least two target points from the target area, and determines the weight coefficient of each of the at least two target points; the weight of each of the at least two target points is then determined according to the weight coefficient and geometric information of each of the at least two target points and the geometric information of the current point, and at least one neighbor point of the current point is selected from the at least two target points according to those weights, so as to realize the accurate selection of the neighbor points of the current point.
- in this way, when attribute prediction of the current point is performed based on the accurately selected neighbor points, the accuracy of the attribute prediction can be improved, thereby improving the coding efficiency of the point cloud.
- FIG. 1 is a schematic block diagram of a point cloud video encoding and decoding system according to an embodiment of the application
- FIG. 2 is a schematic block diagram of a coding framework provided by an embodiment of the present application.
- FIG. 3 is a schematic block diagram of a decoding framework provided by an embodiment of the present application.
- FIG. 4 is a flowchart of a method for selecting neighbor points in a point cloud according to an embodiment of the present application
- 5A is a schematic diagram of the arrangement of point clouds in the original Morton order
- 5B is a schematic diagram of the arrangement of point clouds under the offset Morton order
- 5C is a schematic diagram of the spatial relationship of the adjacent points of the current point
- 5D is a schematic diagram of a Morton code relationship between adjacent points coplanar with the current point
- 5E is a schematic diagram of a Morton code relationship between adjacent points collinear with the current point
- FIG. 6 is a flowchart of a method for selecting neighbor points in a point cloud according to another embodiment of the present application.
- FIG. 7 is a flowchart of a method for selecting a neighbor point in a point cloud according to another embodiment of the present application.
- FIG. 8 is a schematic block diagram of an apparatus for selecting a neighbor point in a point cloud according to an embodiment of the present application.
- FIG. 9 is a schematic block diagram of an apparatus for selecting a neighbor point in a point cloud according to an embodiment of the present application.
- FIG. 10 is a schematic block diagram of a computer device provided by an embodiment of the present application.
- B corresponding to A means that B is associated with A.
- B may be determined from A.
- determining B according to A does not mean that B is only determined according to A, and B may also be determined according to A and/or other information.
- Point cloud refers to a set of discrete points in space that are irregularly distributed and express the spatial structure and surface properties of 3D objects or 3D scenes.
- Point cloud data is a specific record form of point cloud, and the points in the point cloud can include point location information and point attribute information.
- the position information of the point may be three-dimensional coordinate information of the point.
- the position information of the point may also be referred to as the geometric information of the point.
- the attribute information of the points may include color information and/or reflectivity, among others.
- the color information may be information in any color space.
- the color information may be RGB information.
- the color information may be luminance-chrominance (YCbCr, YUV) information.
- Y represents luminance (Luma),
- Cb (U) represents the blue color difference,
- Cr (V) represents the red color difference,
- and U and V represent chrominance (Chroma), which describes the color difference information.
- a point cloud obtained according to the principle of laser measurement the points in the point cloud may include three-dimensional coordinate information of the point and laser reflection intensity (reflectance) of the point.
- a point cloud obtained according to the principle of photogrammetry the points in the point cloud may include three-dimensional coordinate information of the point and color information of the point.
- a point cloud is obtained by combining the principles of laser measurement and photogrammetry, and the points in the point cloud may include three-dimensional coordinate information of the point, laser reflection intensity (reflectance) of the point, and color information of the point.
- the acquisition approach of point cloud data may include, but is not limited to, at least one of the following: (1) Generated by computer equipment.
- the computer device can generate point cloud data according to the virtual three-dimensional object and the virtual three-dimensional scene.
- the visual scene of the real world is acquired through a 3D photography device (i.e., a set of cameras, or a camera device with multiple lenses and sensors) to obtain point cloud data of the visual scene of the real world; point cloud data of dynamic real-world three-dimensional objects can be obtained through 3D photography.
- point cloud data of biological tissues and organs can be obtained by medical equipment such as Magnetic Resonance Imaging (MRI), Computed Tomography (CT), and electromagnetic positioning information.
- the point cloud can be divided into: dense point cloud and sparse point cloud according to the acquisition method.
- the point cloud is divided into:
- the first type, the static point cloud: the object is static, and the device that obtains the point cloud is also static;
- the second type, the dynamic point cloud: the object is moving, but the device that obtains the point cloud is stationary;
- the third type, the dynamically acquired point cloud: the device that acquires the point cloud is moving.
- according to their use, point clouds are divided into two categories:
- Category 1: machine perception point clouds, which can be used in scenarios such as autonomous navigation systems, real-time inspection systems, geographic information systems, visual sorting robots, and rescue and relief robots;
- Category 2: human eye perception point clouds, which can be used in point cloud application scenarios such as digital cultural heritage, free viewpoint broadcasting, 3D immersive communication, and 3D immersive interaction.
- FIG. 1 is a schematic block diagram of a point cloud encoding and decoding system 100 according to an embodiment of the present application. It should be noted that FIG. 1 is only an example, and the point cloud encoding and decoding system in the embodiment of the present application includes but is not limited to that shown in FIG. 1 .
- the point cloud encoding and decoding system 100 includes an encoding device 110 and a decoding device 120 .
- the encoding device is used to encode the point cloud data (which can be understood as compression) to generate a code stream, and transmit the code stream to the decoding device.
- the decoding device decodes the code stream encoded by the encoding device to obtain decoded point cloud data.
- the encoding device 110 in the embodiment of the present application can be understood as a device with a point cloud encoding function
- the decoding device 120 can be understood as a device with a point cloud decoding function; that is, the encoding device 110 and the decoding device 120 in the embodiment of the present application cover a wide range of devices, including, for example, smartphones, desktop computers, mobile computing devices, notebook (e.g., laptop) computers, tablet computers, set-top boxes, televisions, cameras, display devices, digital media players, point cloud game consoles, in-vehicle computers, and the like.
- the encoding device 110 may transmit the encoded point cloud data (eg, a code stream) to the decoding device 120 via the channel 130 .
- Channel 130 may include one or more media and/or devices capable of transmitting encoded point cloud data from encoding device 110 to decoding device 120 .
- channel 130 includes one or more communication media that enables encoding device 110 to transmit encoded point cloud data directly to decoding device 120 in real-time.
- encoding apparatus 110 may modulate the encoded point cloud data according to a communication standard, and transmit the modulated point cloud data to decoding apparatus 120 .
- the communication medium includes a wireless communication medium, such as a radio frequency spectrum; optionally, the communication medium may also include a wired communication medium, such as one or more physical transmission lines.
- channel 130 includes a storage medium that can store point cloud data encoded by encoding device 110 .
- Storage media include a variety of locally accessible data storage media such as optical discs, DVDs, flash memory, and the like.
- the decoding device 120 may obtain the encoded point cloud data from the storage medium.
- channel 130 may include a storage server that may store point cloud data encoded by encoding device 110 .
- the decoding device 120 may download the stored encoded point cloud data from the storage server.
- the storage server may store the encoded point cloud data and may transmit the encoded point cloud data to the decoding device 120, and may be, for example, a web server (e.g., for a website), a file transfer protocol (FTP) server, and the like.
- the encoding device 110 includes a point cloud encoder 112 and an output interface 113 .
- the output interface 113 may include a modulator/demodulator (modem) and/or a transmitter.
- the encoding device 110 may include a point cloud source 111 in addition to the point cloud encoder 112 and the output interface 113 .
- the point cloud source 111 may include at least one of a point cloud acquisition device (e.g., a scanner), a point cloud archive, a point cloud input interface for receiving point cloud data from a point cloud content provider, and a computer graphics system for generating point cloud data.
- the point cloud encoder 112 encodes the point cloud data from the point cloud source 111 to generate a code stream.
- the point cloud encoder 112 directly transmits the encoded point cloud data to the decoding device 120 via the output interface 113 .
- the encoded point cloud data may also be stored on a storage medium or a storage server for subsequent reading by the decoding device 120 .
- decoding device 120 includes input interface 121 and point cloud decoder 122 .
- the decoding device 120 may include a display device 123 in addition to the input interface 121 and the point cloud decoder 122 .
- the input interface 121 includes a receiver and/or a modem.
- the input interface 121 can receive the encoded point cloud data through the channel 130 .
- the point cloud decoder 122 is configured to decode the encoded point cloud data, obtain the decoded point cloud data, and transmit the decoded point cloud data to the display device 123 .
- the display device 123 displays the decoded point cloud data.
- the display device 123 may be integrated with the decoding apparatus 120 or external to the decoding apparatus 120 .
- the display device 123 may include various display devices, such as a liquid crystal display (LCD), a plasma display, an organic light emitting diode (OLED) display, or other types of display devices.
- FIG. 1 is only an example, and the technical solutions of the embodiments of the present application are not limited to FIG. 1 .
- the technology of the present application may also be applied to single-sided point cloud encoding or single-sided point cloud decoding.
- since the point cloud is a collection of massive points, storing the point cloud not only consumes a lot of memory but is also not conducive to transmission, and there is no bandwidth large enough to support transmitting the point cloud directly at the network layer without compression, so point cloud compression is necessary.
- point clouds can be compressed through the point cloud encoding framework.
- the point cloud coding framework can be the Geometry-based Point Cloud Compression (G-PCC) codec framework or the Video-based Point Cloud Compression (V-PCC) codec framework provided by the Moving Picture Experts Group (MPEG), or the AVS-PCC codec framework provided by the Audio Video Standard (AVS) organization. Both G-PCC and AVS-PCC are aimed at static sparse point clouds, and their coding frameworks are roughly the same.
- the G-PCC codec framework can be used to compress the first static point cloud and the third type of dynamically acquired point cloud, and the V-PCC codec framework can be used to compress the second type of dynamic point cloud.
- the G-PCC codec framework is also called point cloud codec TMC13, and the V-PCC codec framework is also called point cloud codec TMC2.
- FIG. 2 is a schematic block diagram of a coding framework provided by an embodiment of the present application.
- the encoding framework 200 can obtain the position information (also referred to as geometric information or geometric position) and attribute information of the point cloud from the acquisition device.
- the encoding of point cloud includes position encoding and attribute encoding.
- the process of position coding includes: preprocessing the original point cloud, such as coordinate transformation and quantization with removal of duplicate points; then, after an octree is constructed, encoding is performed to form a geometric code stream.
- the attribute coding process includes: given the reconstructed position information of the input point cloud and the true values of the attribute information, selecting one of the three prediction modes for point cloud prediction, quantizing the predicted result, and performing arithmetic coding to form the attribute code stream.
- position encoding can be achieved by the following units:
- a coordinate translation and coordinate quantization unit 201 , an octree construction unit 202 , an octree reconstruction unit 203 , and an entropy encoding unit 204 .
- the coordinate translation and coordinate quantization unit 201 can be used to transform the world coordinates of the points in the point cloud into relative coordinates, and quantize the coordinates, which can reduce the number of coordinates; after quantization, different points may be assigned the same coordinates.
- the octree construction unit 202 may encode the position information of the quantized points using an octree encoding method.
- in the octree construction unit 202, the point cloud is divided in the form of an octree, so that the positions of the points can be in one-to-one correspondence with positions in the octree; the positions in the octree where points exist are counted and their flags are recorded as 1, thereby performing geometry encoding.
- the octree reconstruction unit 203 is used for reconstructing the geometrical position of each point in the point cloud to obtain the reconstructed geometrical position of the point.
- the entropy encoding unit 204 can perform arithmetic encoding on the position information output by the octree construction unit 202 using an entropy encoding method, that is, the position information output by the octree construction unit 202 is encoded with arithmetic coding to generate a geometric code stream; the geometric code stream may also be called a geometry bitstream.
- Attribute encoding can be achieved through the following units:
- a spatial transformation unit 210 , an attribute interpolation unit 211 , an attribute prediction unit 212 , a residual quantization unit 213 , and an entropy encoding unit 214 .
- the spatial transformation unit 210 may be used to transform the RGB color space of the points in the point cloud into YCbCr format or other formats.
- the attribute interpolation unit 211 may be used to transform attribute information of points in the point cloud to minimize attribute distortion.
- the attribute conversion unit 211 may be used to obtain the true value of the attribute information of the point.
- the attribute information may be color information of dots.
- the attribute prediction unit 212 may be configured to predict the attribute information of the point in the point cloud to obtain the predicted value of the attribute information of the point, and then obtain the residual value of the attribute information of the point based on the predicted value of the attribute information of the point.
- the residual value of the attribute information of the point may be the actual value of the attribute information of the point minus the predicted value of the attribute information of the point.
- the residual quantization unit 213 may be used to quantize residual values of attribute information of points.
- the entropy coding unit 214 may perform entropy coding on the residual value of the attribute information of the point by using zero run length coding, so as to obtain the attribute code stream.
- the attribute code stream may be bit stream information.
- Pre-processing: includes coordinate transformation (Transform coordinates) and voxelization (Voxelize). Through scaling and translation, the point cloud data in 3D space is converted into integer form, and its minimum geometric position is moved to the coordinate origin.
- Geometry encoding contains two modes, which can be used under different conditions:
- Octree-based geometric coding (Octree): an octree is a tree-shaped data structure. In 3D space division, a preset bounding box is evenly divided, and each node has eight child nodes. By using '1' and '0' to indicate whether each child node of the octree is occupied, occupancy code information (occupancy code) is obtained as the code stream of the point cloud geometric information.
- Geometric encoding based on triangular representation (Trisoup): Divide the point cloud into blocks of a certain size, locate the intersection of the point cloud surface at the edge of the block and construct triangles. Compression of geometric information is achieved by encoding the location of intersections.
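- To illustrate the octree-based occupancy coding described above, the following is a minimal sketch of how an 8-bit occupancy code could be formed for one octree node; the child-index convention and the function names are illustrative assumptions and are not taken from the patent.

```python
# Illustrative sketch (not the patent's reference implementation): compute the
# 8-bit occupancy code of one octree node from the points it contains.
# The child index is assumed to take one bit from each coordinate at the current
# subdivision depth, which is the usual octree convention.

def occupancy_code(points, node_origin, half_size):
    """points: iterable of (x, y, z) integer coordinates inside the node."""
    code = 0
    ox, oy, oz = node_origin
    for x, y, z in points:
        child = (((1 if x - ox >= half_size else 0) << 2)
                 | ((1 if y - oy >= half_size else 0) << 1)
                 | (1 if z - oz >= half_size else 0))
        code |= 1 << child          # mark this child node as occupied ('1')
    return code                      # 8-bit value, one bit per child

# Example: two points falling into different children of a node of size 2.
print(bin(occupancy_code([(0, 0, 0), (1, 1, 1)], (0, 0, 0), 1)))  # -> 0b10000001
```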
- Geometry quantization: the fineness of quantization is usually determined by the quantization parameter (QP). A larger QP value means that coefficients over a larger value range will be quantized into the same output, which usually brings greater distortion and a lower bit rate; on the contrary, if the QP value is small, coefficients over a smaller value range will be quantized into the same output, which usually brings less distortion and corresponds to a relatively higher bit rate. In point cloud coding, quantization is performed directly on the coordinate information of the points.
- Geometry entropy encoding: statistical compression encoding is performed on the occupancy code information of the octree, and finally a binary (0 or 1) compressed code stream is output.
- Statistical coding is a lossless coding method that can effectively reduce the code rate required to express the same signal.
- a commonly used statistical coding method is Context-based Adaptive Binary Arithmetic Coding (CABAC).
- Attribute recoloring: in the case of lossy encoding, after the geometric information is encoded, the encoding end needs to decode and reconstruct the geometric information, that is, to restore the coordinate information of each point in the 3D point cloud. For each reconstructed point, the attribute information corresponding to one or more adjacent points in the original point cloud is found and used as the attribute information of that reconstructed point.
- Attribute prediction coding: during attribute prediction coding, one or more points are selected as the prediction reference through the proximity relationship of geometric information or attribute information, a weighted average of their attributes is computed to obtain the final attribute predicted value, and the difference between the true value and the predicted value is encoded.
- Attribute transform coding (Transform): The attribute transform coding includes three modes, which can be used under different conditions.
- Predicting Transform: sub-point sets are selected according to distance, and the point cloud is divided into multiple levels of detail (LoD), realizing a point cloud representation from coarse to fine. Bottom-up prediction can be implemented between adjacent layers, that is, the attribute information of the points introduced in the fine layer is predicted from the adjacent points in the coarse layer, and the corresponding residual signal is obtained. The points in the lowest layer are encoded as reference information. (An illustrative sketch of this prediction step is given after this list of attribute coding steps.)
- Attribute quantization: the fineness of quantization is usually determined by the quantization parameter (QP).
- Entropy coding is performed after the residual values are quantized; in RAHT coding, entropy coding is performed after the transform coefficients are quantized.
- Attribute entropy coding: the quantized attribute residual signal or transform coefficients are generally compressed using run-length coding and arithmetic coding. The corresponding coding modes, quantization parameters and other information are also coded by the entropy coder.
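- A minimal sketch of the attribute prediction step described above (weighted average of neighbor attributes, followed by the residual that is passed to quantization and entropy coding); the attribute values, weights and helper names are illustrative assumptions only.

```python
# Illustrative sketch of attribute prediction coding: the predicted value is a
# weighted average of the attributes of selected neighbor points, and only the
# residual (true value minus prediction) is passed on to quantization/entropy coding.

def predict_attribute(neighbor_attrs, neighbor_weights):
    """neighbor_attrs: list of attribute values; neighbor_weights: matching weights."""
    total_w = sum(neighbor_weights)
    return sum(a * w for a, w in zip(neighbor_attrs, neighbor_weights)) / total_w

true_value = 118.0                       # e.g. a reflectance value of the current point
neighbors = [120.0, 115.0, 119.0]        # attributes of the selected neighbor points
weights = [0.5, 0.3, 0.2]                # e.g. inverse-distance based weights
pred = predict_attribute(neighbors, weights)
residual = true_value - pred             # this residual is what gets quantized and coded
print(pred, residual)
```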
- FIG. 3 is a schematic block diagram of a decoding framework provided by an embodiment of the present application.
- the decoding framework 300 can obtain the code stream of the point cloud from the encoding device, and obtain the position information and attribute information of the points in the point cloud by parsing the code stream.
- the decoding of point cloud includes position decoding and attribute decoding.
- the process of position decoding includes: performing arithmetic decoding on the geometric code stream; merging after constructing an octree, and reconstructing the position information of the points to obtain the reconstructed position information of the points; and performing coordinate transformation on the reconstructed position information to obtain the position information of the points.
- the position information of the point may also be referred to as the geometric information of the point.
- the attribute decoding process includes: parsing the attribute code stream to obtain the residual values of the attribute information of the points in the point cloud; performing inverse quantization on the residual values of the attribute information of the points to obtain the inverse-quantized residual values; based on the reconstructed position information obtained in the position decoding process, selecting one of the three prediction modes to perform point cloud prediction and obtain the reconstructed values of the attribute information of the points; and performing inverse color space transformation on the reconstructed values of the attribute information to obtain the decoded point cloud.
- position decoding can be achieved by the following units:
- an entropy decoding unit 301 , an octree reconstruction unit 302 , an inverse coordinate quantization unit 303 , and an inverse coordinate translation unit 304 .
- Attribute decoding can be achieved through the following units:
- an entropy decoding unit 310 , an inverse quantization unit 311 , an attribute reconstruction unit 312 , and an inverse spatial transformation unit 313 .
- Decompression is an inverse process of compression, and similarly, the functions of each unit in the decoding framework 300 may refer to the functions of the corresponding units in the encoding framework 200 .
- after the decoder obtains the compressed code stream, it first performs entropy decoding to obtain various mode information and the quantized geometric information and attribute information. The geometric information is first inverse quantized to obtain the reconstructed 3D point position information. The attribute information, on the other hand, is inverse quantized to obtain residual information, and the reference signal is determined according to the transformation mode adopted, so as to obtain the reconstructed attribute information, which is matched with the geometric information in order to generate the output reconstructed point cloud data.
- the decoding framework 300 can divide the point cloud into multiple LoDs according to the Euclidean distance between the points in the point cloud, and then decode the attribute information of the points in the LoDs in sequence; for example, the number of zeros (zero_cnt) in the zero run length coding is determined so as to decode the residual based on zero_cnt. Then, the decoding framework 300 may perform inverse quantization on the decoded residual value, and add the predicted value of the current point to the inverse-quantized residual value to obtain the reconstructed value of the current point, until all points have been decoded. The current point will serve as the nearest neighbor of points in subsequent LoDs, and the reconstructed value of the current point will be used to predict the attribute information of the subsequent points.
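- A minimal sketch of the decoder-side reconstruction step just described: inverse quantize the decoded residual and add it to the prediction of the current point. The uniform quantization step and the numeric values are hypothetical placeholders.

```python
# Illustrative sketch of the decoder-side attribute reconstruction described above:
# inverse quantize the decoded residual, then add the predicted value of the current point.

def reconstruct_attribute(quantized_residual, qstep, predicted_value):
    dequantized_residual = quantized_residual * qstep    # inverse quantization
    return predicted_value + dequantized_residual        # reconstructed attribute value

# Example: a residual of 3 quantization steps, step size 0.5, prediction 118.3.
print(reconstruct_attribute(3, 0.5, 118.3))   # -> 119.8
```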
- the point cloud encoder 200 mainly includes two parts functionally: a position encoding module and an attribute encoding module, where the position encoding module is used to realize the encoding of the position information of the point cloud to form a geometric code stream, and the attribute encoding module is used to realize the encoding of the attribute information of the point cloud to form an attribute code stream.
- This application mainly relates to the encoding of the attribute information.
- the mode information or parameter information such as prediction, quantization, encoding, filtering, etc., determined during the encoding of the attribute information at the encoding end is carried in the attribute code stream when necessary.
- the decoding end determines the same prediction, quantization, coding, filtering and other mode information or parameter information as the encoding end by parsing the attribute code stream and analyzing the existing information, so as to ensure that the reconstructed values of the attribute information obtained by the encoding end are the same as the reconstructed values of the attribute information obtained by the decoding end.
- the above is the basic process of the point cloud codec based on the G-PCC codec framework. With the development of technology, some modules or steps of the framework or process may be optimized. The present application is applicable to the basic process of the point cloud codec based on the G-PCC codec framework, but is not limited to this framework and process.
- FIG. 4 is a flowchart of a method for selecting a neighbor point in a point cloud according to an embodiment of the present application.
- the execution body of the method is a device with the function of selecting neighbor points in a point cloud, for example, a device for selecting neighbor points in a point cloud.
- the device for selecting neighbor points in the point cloud may be the above-mentioned point cloud encoder or a part of the point cloud encoder. As shown in Figure 4, this embodiment includes:
- S410 Acquire point cloud data, and determine a target area where the current point is located from the point cloud data, where the target area includes multiple points.
- this embodiment involves the encoding process of the attribute information of the point cloud, and the encoding of the attribute information of the point cloud is performed after the encoding of the position information.
- the encoding process of the attribute information of the point cloud in the embodiment of the present application is as follows: for each point in the point cloud data, the target area of the current point, whose attribute information is currently to be encoded, is determined from the point cloud data, where the current point is located in the target area and the target area includes multiple points. At least two target points are determined from the target area, and the weight coefficient of each of the at least two target points is determined. According to the weight coefficient and geometric information of each of the at least two target points, and the geometric information of the current point, the weight of each of the at least two target points is determined. According to the weight of each of the at least two target points, at least one neighbor point of the current point is selected from the at least two target points.
- then, the predicted value of the attribute information of the current point is determined according to the attribute information of the at least one neighbor point.
- the residual value of the attribute information of the current point is determined according to the predicted value.
- the residual value of the attribute information of the current point is quantized, and the quantized residual value is encoded to obtain a code stream.
- the embodiments of the present application mainly relate to the selection process of the neighbor points of the current point in the above encoding process.
- the target area includes all points in the point cloud data.
- the above-mentioned target area is any point cloud area including the current point in the point cloud data.
- the above-mentioned target area is a point cloud area composed of the current point and the adjacent points of the current point.
- for example, the geometric information of some or all of the points in the point cloud data is obtained, the distance between each of these points and the current point is calculated, and, according to the distances, multiple points that are within a predetermined distance from the current point are selected from these points.
- these multiple points are determined as the adjacent points of the current point, and these adjacent points and the current point constitute the target area where the current point is located.
- some or all of the points in the above point cloud data may be points whose attribute information has been encoded, or may be points whose attribute information has not been encoded.
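- A minimal sketch of forming the target area from the points within a predetermined distance of the current point, assuming Euclidean distance and a hypothetical threshold; neither the distance measure nor the threshold value is fixed by this passage.

```python
# Illustrative sketch: select the adjacent points of the current point as the points
# whose (Euclidean) distance to the current point does not exceed a preset threshold.
import math

def build_target_area(current, candidates, max_distance):
    """current: (x, y, z); candidates: list of (x, y, z) already-available points."""
    adjacent = [p for p in candidates
                if math.dist(current, p) <= max_distance]
    return adjacent + [current]          # target area = adjacent points + current point

points = [(0, 0, 0), (1, 1, 0), (10, 10, 10)]
print(build_target_area((0, 1, 0), points, max_distance=2.0))
```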
- the attribute information of the current point includes a color attribute and/or a reflectivity attribute.
- the manner of determining the adjacent points of the current point may be different.
- Example 1: if the attribute information of the current point is reflectivity information, the methods for determining the adjacent points of the current point include but are not limited to the following methods:
- Method 1: the Morton order can be used to select the adjacent points of the current point; specifically:
- Morton order 2 is obtained according to Morton order, as shown in FIG. 5B .
- the corresponding Morton codes also change, but their relative positions remain unchanged.
- in FIG. 5B, the Morton code of point D is 23 and the Morton code of its adjacent point B is 21, so point B can be found by searching forward from point D by at most two points; in FIG. 5A, however, starting from point D (Morton code 16), it is necessary to search forward by at most 14 points to find point B (Morton code 2).
- to find the nearest prediction point of the current point using Morton order coding, the first N1 encoded points before the current point in Morton order 1 are selected as N1 adjacent points of the current point, where the value range of N1 is greater than or equal to 1;
- in Morton order 2, the first N2 encoded points before the current point are selected as N2 adjacent points of the current point, where the value range of N2 is greater than or equal to 1, thereby obtaining N1+N2 adjacent points of the current point.
- Method 2: the first maxNumOfNeighbours (maximum number of adjacent points) encoded points before the current point in Hilbert order are calculated, and these maxNumOfNeighbours encoded points are used as adjacent points of the current point.
- maxNumOfNeighbours is 128.
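- Method 1 above relies on Morton codes, which interleave the bits of the x, y and z coordinates so that spatially close points tend to be close in the resulting order. Below is a minimal sketch of Morton code computation and of forming a second Morton order after a coordinate offset (cf. FIG. 5A/5B); the bit-interleaving convention and the offset value are illustrative assumptions, since neither is specified in this passage.

```python
# Illustrative sketch: Morton code by bit interleaving, and two Morton orders of a
# point set (the second one computed after a coordinate offset, cf. FIG. 5A/5B).

def morton_code(x, y, z, bits=10):
    code = 0
    for i in range(bits):
        code |= ((x >> i) & 1) << (3 * i + 2)
        code |= ((y >> i) & 1) << (3 * i + 1)
        code |= ((z >> i) & 1) << (3 * i)
    return code

def morton_order(points, offset=(0, 0, 0)):
    ox, oy, oz = offset
    return sorted(points, key=lambda p: morton_code(p[0] + ox, p[1] + oy, p[2] + oz))

points = [(3, 1, 0), (0, 0, 1), (2, 2, 2), (1, 0, 0)]
order1 = morton_order(points)                    # "Morton order 1"
order2 = morton_order(points, offset=(1, 1, 1))  # offset order, placeholder offset
# The N1 (or N2) points immediately before the current point in each order can then
# be taken as candidate adjacent points of the current point.
print(order1, order2)
```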
- Example 2: if the attribute information of the current point is color information, the method of determining the adjacent points of the current point includes:
- the spatial relationship of the adjacent points of the current point is shown in FIG. 5C , in which the solid line box represents the current point, and it is assumed that the search range of the adjacent points is the 3×3×3 neighborhood of the current point.
- the Morton code of the current point is used to obtain the block with the smallest Morton code value in the 3×3×3 neighborhood, this block is used as the reference block, and the reference block is used to find the encoded adjacent points that are coplanar and collinear with the current point.
- the Morton code relationship between adjacent points in the neighborhood that are coplanar with the current point is shown in FIG. 5D ,
- and the Morton code relationship between adjacent points that are collinear with the current point is shown in FIG. 5E .
- the reference block is used to search for multiple coded adjacent points that are coplanar and collinear with the current point.
- a point cloud area composed of the plurality of adjacent points and the current point is determined as the target area of the current point.
- after the above step S410 is performed and the target area of the current point is determined, at least two target points are selected from the target area, where N target points can be selected and N is a positive integer greater than or equal to 2.
- the above-mentioned at least two target points are any at least two target points in the target area.
- the above-mentioned at least two target points are at least two target points closest to the current point in the target area.
- a weight coefficient of each point in the at least two target points is determined.
- each of the at least two target points has the same weight coefficient.
- At least two of the at least two target points have different weighting coefficients.
- S430 Determine the weight of each of the at least two target points according to the weight coefficient and geometric information of each of the at least two target points and the geometric information of the current point.
- the process of determining the weight of each of the at least two target points in this embodiment is the same.
- the process of determining the weight of one point in the at least two target points is taken as an example.
- the distance between the point and the current point is determined according to the geometric information of the point and the geometric information of the current point, and the weight of the point is obtained according to the distance and the weight coefficient of the point.
- the reciprocal of the product of the distance and the weight coefficient of the point is determined as the weight of the point.
- if the weight coefficient includes the weight coefficient of the first component, the weight coefficient of the second component and the weight coefficient of the third component, then the weight of the point is determined according to the following formula (1):
- wij is the weight of the point
- a is the weight coefficient of the first component of the point
- b is the weight coefficient of the second component of the point
- c is the weight coefficient of the third component of the point
- (xi, yi, zi) is the geometric information of the current point
- (xij, yij, zij) is the geometric information of the point.
- the a, b, and c can be obtained by looking up a table, or are preset fixed values.
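- Formula (1) itself is not reproduced in this excerpt. Consistent with the description that the weight is the reciprocal of a weighted distance with per-component coefficients, a commonly used form is w_ij = 1 / (a·(x_i − x_ij)² + b·(y_i − y_ij)² + c·(z_i − z_ij)²); the sketch below assumes this form and is illustrative only.

```python
# Illustrative sketch, assuming formula (1) has the commonly used form
#   w_ij = 1 / (a*(xi - xij)**2 + b*(yi - yij)**2 + c*(zi - zij)**2)
# where (xi, yi, zi) is the current point and (xij, yij, zij) is the j-th target point.
# The exact form of formula (1) is not shown in this excerpt; this is an assumption.

def point_weight(current, target, coeff):
    (xi, yi, zi), (xij, yij, zij) = current, target
    a, b, c = coeff                       # per-component weight coefficients
    weighted_dist = (a * (xi - xij) ** 2
                     + b * (yi - yij) ** 2
                     + c * (zi - zij) ** 2)
    return 1.0 / weighted_dist if weighted_dist > 0 else float("inf")

print(point_weight((0, 0, 0), (1, 2, 0), coeff=(1.0, 1.0, 1.0)))   # -> 0.2
```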
- the above at least one neighbor point is k neighbor points, and k is a positive integer.
- the top k points with the largest weights are selected from the at least two target points as neighbor points of the current point.
- k points whose weights are within a preset range are selected from the at least two target points as neighbor points of the current point.
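- A minimal sketch of selecting the k target points with the largest weights as neighbor points of the current point; the weight values used here are arbitrary illustrative numbers.

```python
# Illustrative sketch: keep the k target points with the largest weights as the
# neighbor points of the current point.
import heapq

def select_neighbors(target_points, weights, k):
    """target_points and weights are parallel lists; returns the k largest-weight points."""
    best = heapq.nlargest(k, zip(weights, range(len(target_points))))
    return [target_points[i] for _, i in best]

targets = [(1, 0, 0), (0, 2, 0), (0, 0, 3), (1, 1, 1)]
weights = [0.9, 0.4, 0.1, 0.7]
print(select_neighbors(targets, weights, k=3))   # three largest weights: 0.9, 0.7, 0.4
```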
- in the embodiment of the present application, the target area where the current point is located is determined from the point cloud data, at least two target points are selected from the target area, and the weight coefficient of each of the at least two target points is determined; the weight of each of the at least two target points is then determined, and according to the weight of each point in the at least two target points, at least one neighbor point of the current point is selected from the at least two target points, so as to realize the accurate selection of the neighbor points of the current point. In this way, when attribute prediction of the current point is performed based on the accurately selected neighbor points, the accuracy of the attribute prediction can be improved, thereby improving the coding efficiency of the point cloud.
- the manner of determining the weight coefficient of each point in the at least two target points in the above S420 includes but is not limited to the following:
- Mode 1: divide the at least two target points into at least one group, and determine the default weight coefficient corresponding to each group in the at least one group as the weight coefficient of each point in that group, where the default weight coefficients corresponding to different groups are different.
- the first group includes M1 points and the second group includes M2 points.
- the default weight coefficient corresponding to the first group is weight coefficient 1,
- the default weight coefficient corresponding to the second group is weight coefficient 2,
- and weight coefficient 1 is different from weight coefficient 2.
- weight coefficient 1 is determined as the weight coefficient of the M1 points in the first group, that is, the weight coefficient of each of the M1 points in the first group is the same, namely weight coefficient 1.
- weight coefficient 2 is determined as the weight coefficient of the M2 points in the second group, that is, the weight coefficient of each of the M2 points in the second group is the same, namely weight coefficient 2.
- the default weighting coefficients corresponding to at least two groups are the same.
- for example, the at least two target points are divided into one group, the default weight coefficient is used as the weight coefficient of the at least two target points, and the weight coefficient of each of the at least two target points is the same.
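- A minimal sketch of Mode 1: each group of target points is assigned its default per-component weight coefficient triple (a, b, c). The grouping rule and the coefficient values below are hypothetical placeholders, not values from the patent.

```python
# Illustrative sketch of Mode 1: assign each group of target points a default
# per-component weight coefficient (a, b, c). The grouping rule and the coefficient
# values are hypothetical placeholders.

DEFAULT_COEFFS = {0: (1.0, 1.0, 1.0),    # "weight coefficient 1" for group 0
                  1: (2.0, 1.0, 1.0)}    # "weight coefficient 2" for group 1

def coefficients_for_points(target_points, group_of_point):
    """group_of_point maps each point index to its group index."""
    return [DEFAULT_COEFFS[group_of_point[i]] for i in range(len(target_points))]

groups = {0: 0, 1: 0, 2: 1}              # first two points in group 0, third in group 1
print(coefficients_for_points([(1, 0, 0), (0, 1, 0), (0, 0, 1)], groups))
```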
- the weighting coefficients include a weighting coefficient for the first component, a weighting coefficient for the second component, and a weighting coefficient for the third component.
- if the geometric coordinates of a point in the point cloud are (x, y, z), then x is called the first component, y is called the second component, and z is called the third component.
- alternatively, if the geometric coordinates of a point in the point cloud are (r, θ, φ), then r is called the first component, θ is called the second component, and φ is called the third component.
- the at least two target points are divided into at least one group, and the weight coefficient of the first component, the weight coefficient of the second component, and the weight coefficient of the third component corresponding to the same group are all equal.
- for example, the weight coefficient of each of the M1 points in the first group is the same, namely weight coefficient 1, and weight coefficient 1 includes the weight coefficient a1 of the first component, the weight coefficient b1 of the second component, and the weight coefficient c1 of the third component, where a1, b1 and c1 are all equal.
- At least two of the weight coefficients of the first component, the weight coefficient of the second component, and the weight coefficient of the third component corresponding to the same group in at least one group are not equal.
- for example, the weight coefficient of each of the M1 points in the first group is the same, namely weight coefficient 1,
- and weight coefficient 1 includes the weight coefficient a1 of the first component, the weight coefficient b1 of the second component, and the weight coefficient c1 of the third component, where at least two of a1, b1 and c1 are not equal.
- Mode 2: the weight coefficient of each point in the at least two target points is determined according to the geometric information of each point in the at least two target points.
- the weight coefficient of each point in the at least two target points is determined according to the spatial distribution of the at least two target points. In this manner, each of the determined at least two target points has the same weight coefficient.
- the above S420 includes the following S420-A1 and S420-A2:
- S420-A1: Determine, according to the geometric information of each of the at least two target points, a first distribution value of the at least two target points in the direction of the first component, a second distribution value in the direction of the second component, and a third distribution value in the direction of the third component.
- S420-A2: Determine the weight coefficient of each point in the at least two target points according to the first distribution value, the second distribution value and the third distribution value.
- in this implementation, the distribution values of the at least two target points on the different components are determined according to the geometric information of the at least two target points, and the weight coefficient of each point is determined according to the distribution values corresponding to the different components, so as to realize an accurate calculation of the weight coefficient of each point. In this way, when neighbor points are selected based on the weight coefficients, the selection accuracy of the neighbor points can be improved, thereby improving the prediction accuracy of the point cloud.
- the above S420-A1 includes: according to the geometric information of each of the at least two target points, projecting the at least two target points onto the directions of the first component, the second component and the third component respectively; the projection of the at least two target points in the direction of the first component is taken as the first distribution value, the projection of the at least two target points in the direction of the second component is taken as the second distribution value, and the projection of the at least two target points in the direction of the third component is taken as the third distribution value.
- the above S420-A1 includes: determining a first value range of the at least two target points in the first component direction according to the geometric information of each of the at least two target points in the first component direction. Assuming that the first component is x, the first value range is [xmin, xmax], where xmax is the maximum value in the x direction in the geometric information of the at least two target points, and xmin is the minimum value in the x direction in the geometric information of the at least two target points. A second value range of the at least two target points in the direction of the second component is determined according to the geometric information of each point of the at least two target points in the direction of the second component.
- the second component is y
- the second value range is [ymin, ymax], where ymax is the maximum value in the y direction in the geometric information of the at least two target points, and ymin is the minimum value in the y direction in the geometric information of the at least two target points.
- the third value range of the at least two target points in the direction of the third component is determined. Assuming that the third component is z, the third value range is [zmin, zmax], where zmax is the maximum value in the z direction in the geometric information of the at least two target points, and zmin is the minimum value in the z direction in the geometric information of the at least two target points.
- the first distribution value, the second distribution value and the third distribution value are determined according to the first value range, the second value range and the third value range.
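As a concrete illustration of the value-range computation described above, the following Python sketch takes the per-component range value (maximum minus minimum) as the corresponding distribution value, consistent with the variant described later in which the range value is determined as the distribution value; the variable names are illustrative assumptions.

```python
# Range-based distribution values: take the per-component range (max - min)
# of the target points as the distribution value for that component.
def range_distribution_values(target_points):
    xs = [p[0] for p in target_points]
    ys = [p[1] for p in target_points]
    zs = [p[2] for p in target_points]
    d1 = max(xs) - min(xs)  # first distribution value  (range in the x direction)
    d2 = max(ys) - min(ys)  # second distribution value (range in the y direction)
    d3 = max(zs) - min(zs)  # third distribution value  (range in the z direction)
    return d1, d2, d3

print(range_distribution_values([(0, 0, 0), (4, 1, 2), (2, 5, 1)]))  # (4, 5, 2)
```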
- the above S420-A1 includes: determining the first variance of the at least two target points in the direction of the first component according to the geometric information of each of the at least two target points in the direction of the first component; determining the second variance of the at least two target points in the direction of the second component according to the geometric information of each of the at least two target points in the direction of the second component; determining the third variance of the at least two target points in the direction of the third component according to the geometric information of each of the at least two target points in the direction of the third component; and determining the first distribution value, the second distribution value and the third distribution value according to the first variance, the second variance and the third variance.
- the first variance is determined according to the following formula (2):
- x is the first component
- Mx is the average value of the geometric information of the at least two target points on the first component
- the second variance is determined according to the following formula (3):
- y is the second component
- My is the average value of the geometric information of at least two target points on the second component
- the third variance is determined according to the following formula (4):
- z is the third component
- Mz is the average value of the geometric information of at least two target points on the third component
- the first distribution value, the second distribution value and the third distribution value are determined according to the first variance, the second variance and the third variance.
- the first mean square error is obtained according to the first variance
- the first mean square error is taken as the first distribution value
- the second mean square error is obtained according to the second variance
- the second mean square error is taken as the second distribution value
- the third mean square error is obtained according to the third variance
- the third mean square error is taken as the third distribution value.
- the first variance is determined as the first distribution value
- the second variance is determined as the second distribution value
- the third variance is determined as the third distribution value.
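Formulas (2) to (4) are not reproduced in this excerpt; the sketch below assumes they are the standard per-component variances, e.g. the variance of the first component is the mean of the squared deviations from Mx, and shows both options described above: taking the variance itself, or its square root (the mean square error), as the distribution value.

```python
import math

# Assumed form of formulas (2)-(4): per-component variance of the target points.
def variance_distribution_values(target_points, use_mean_square_error=False):
    n = len(target_points)
    values = []
    for k in range(3):  # k = 0, 1, 2 -> first, second, third component
        mean_k = sum(p[k] for p in target_points) / n            # Mx, My, Mz
        var_k = sum((p[k] - mean_k) ** 2 for p in target_points) / n
        values.append(math.sqrt(var_k) if use_mean_square_error else var_k)
    return tuple(values)  # (first, second, third) distribution values

print(variance_distribution_values([(0, 0, 0), (2, 4, 6), (4, 8, 12)]))
```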
- the above S420-A2 includes: determining the first distribution value as the weight coefficient of the first component, determining the second distribution value as the weight coefficient of the second component, and determining the third distribution value as the weight coefficient of the third component.
- the weight coefficient of the first component a = σx
- the weight coefficient of the second component b = σy
- the weight coefficient of the third component c = σz
- where σx, σy and σz denote the first distribution value, the second distribution value and the third distribution value, respectively.
- the above S420-A2 includes: determining the sum of the first distribution value, the second distribution value and the third distribution value; determining the weight coefficient of the first component according to the ratio of the first distribution value to the sum; determining the weight coefficient of the second component according to the ratio of the second distribution value to the sum; and determining the weight coefficient of the third component according to the ratio of the third distribution value to the sum.
- the ratio of the first distribution value to the sum is determined as the weight coefficient of the first component
- the ratio of the second distribution value to the sum is determined as the weight coefficient of the second component
- the ratio of the third distribution value to the sum is determined as the weight coefficient of the third component.
- a = σx/(σx+σy+σz)
- b = σy/(σx+σy+σz)
- c = σz/(σx+σy+σz).
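The two mappings above, taking the distribution values directly as the weight coefficients or normalizing them by their sum, can be sketched as follows in Python; the guard against a zero sum is an added assumption, since the degenerate case is not discussed in this excerpt.

```python
def weight_coefficients_from_distribution(d1, d2, d3, normalize=True):
    """Map distribution values to per-component weight coefficients (a, b, c)."""
    if not normalize:
        return d1, d2, d3                       # a = σx, b = σy, c = σz
    total = d1 + d2 + d3
    if total == 0:                              # assumed fallback for a degenerate cloud
        return 1.0, 1.0, 1.0
    return d1 / total, d2 / total, d3 / total   # a = σx/(σx+σy+σz), etc.

print(weight_coefficients_from_distribution(4.0, 5.0, 1.0))  # (0.4, 0.5, 0.1)
```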
- Manner 3 Determine the weight coefficient of each point in the at least two target points according to the attribute information of each point in the at least two target points.
- the weight coefficient of each point of the at least two target points is determined according to the attribute information of the at least two target points. In this manner, the weight coefficient determined for each of the at least two target points is the same.
- the above S420 includes:
- S420-B1 according to the geometric information of each point of the at least two target points, determine the center point of the area enclosed by the at least two target points.
- S420-B2 from the at least two target points, respectively determine a first point that is farthest from the center point in the direction of the first component, a second point that is farthest from the center point in the direction of the second component, and a third point that is farthest from the center point in the direction of the third component;
- S420-B3 according to the attribute information of each of the first point, the second point and the third point, respectively determine the first distribution value of the at least two target points in the first component direction, the second distribution value in the second component direction, and the third distribution value in the third component direction;
- S420-B4 Determine the weight coefficient of each point in the at least two target points according to the first distribution value, the second distribution value and the third distribution value.
- the above S420-B3 includes: acquiring attribute information of the center point; determining the first distribution value of the at least two target points in the first component direction according to the attribute information of the first point and the attribute information of the center point; determining the second distribution value of the at least two target points in the second component direction according to the attribute information of the second point and the attribute information of the center point; and determining the third distribution value of the at least two target points in the third component direction according to the attribute information of the third point and the attribute information of the center point.
- the difference between the attribute information of the first point and the attribute information of the center point is determined as the first distribution value of the at least two target points in the direction of the first component; the difference between the attribute information of the second point and the attribute information of the center point is determined as the second distribution value of the at least two target points in the direction of the second component; the difference between the attribute information of the third point and the attribute information of the center point is determined as the third distribution value of the at least two target points in the direction of the third component.
- the above S420-B3 includes: according to the attribute information of each of the at least two target points, determining the average value of the attribute information of the at least two target points; determining the first distribution value of the at least two target points in the direction of the first component according to the attribute information of the first point and the average value of the attribute information of the at least two target points; determining the second distribution value of the at least two target points in the direction of the second component according to the attribute information of the second point and the average value of the attribute information of the at least two target points; and determining the third distribution value of the at least two target points in the direction of the third component according to the attribute information of the third point and the average value of the attribute information of the at least two target points.
- the difference between the attribute information of the first point and the average value of the attribute information of the at least two target points is determined as the first distribution value of the at least two target points in the direction of the first component; the difference between the attribute information of the second point and the average value of the attribute information of the at least two target points is determined as the second distribution value of the at least two target points in the direction of the second component; the difference between the attribute information of the third point and the average value of the attribute information of the at least two target points is determined as the third distribution value of the at least two target points in the direction of the third component.
- when the above S420-B4 is executed to determine the weight coefficient of each point in the at least two target points according to the first distribution value, the second distribution value and the third distribution value, reference may be made to the above S420-A2 for the specific execution process, which will not be repeated here.
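The attribute-based manner 3 (S420-B1 to S420-B3) can be sketched as follows. Several details are assumptions of this illustration: the center point is taken as the centroid of the target points, the reference attribute is either a supplied center-point attribute or the average attribute of the target points, and the distribution values are absolute attribute differences.

```python
def attribute_distribution_values(target_points, attributes, center_attribute=None):
    """Sketch of S420-B1 to S420-B3 under the stated assumptions.

    target_points    : list of (x, y, z) geometric coordinates
    attributes       : matching list of scalar attribute values (e.g. reflectance)
    center_attribute : attribute of the center point if available; otherwise the
                       average attribute of the target points is used as reference
    """
    n = len(target_points)
    # S420-B1: center of the area enclosed by the target points (assumed: centroid).
    center = [sum(p[k] for p in target_points) / n for k in range(3)]
    reference = center_attribute if center_attribute is not None else sum(attributes) / n

    values = []
    for k in range(3):
        # S420-B2: target point farthest from the center along component k.
        far = max(range(n), key=lambda i: abs(target_points[i][k] - center[k]))
        # S420-B3: attribute difference to the reference as the distribution value.
        values.append(abs(attributes[far] - reference))
    return tuple(values)

print(attribute_distribution_values([(0, 0, 0), (10, 1, 1), (1, 8, 2)], [100, 140, 90]))
```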
- the above describes how the encoder determines the weight coefficient of each point in the at least two target points.
- the encoder can use any one of the above-mentioned manners 1, 2 and 3 to determine the weight coefficient of each point in the at least two target points. To keep the decoding end consistent with the encoding end, the encoding end can carry indication information of the weight coefficient determination manner in the code stream and send it to the decoding end, so that the decoding end determines the weight coefficient of the point according to the same manner.
- the encoding end and the decoding end may determine the weight coefficient of the point using one of the above-mentioned manners by default.
- both the encoding end and the decoding end use the above-mentioned method 2 to determine the weight coefficient of the point by default.
- the indication information of the determination method of the weight coefficient may not be carried in the code stream.
- the encoder can directly carry the weight coefficient of the determined point in the code stream and send it to the decoder, so that the decoder can directly decode the weight coefficient of the point from the code stream to select the neighbor points, without needing to recalculate it on its own, which reduces the decoding complexity.
- FIG. 7 is a flowchart of a method for selecting neighbor points in a point cloud according to another embodiment of the present application, as shown in FIG. 7 , including:
- the decoder parses the code stream, firstly decodes the position information of the point cloud, and then decodes the attribute information of the point cloud.
- S702 according to the geometric information of the points in the point cloud data, determine the target area where the current point is located from the point cloud data, and the target area includes a plurality of points.
- the distance between the current point and each point is obtained, and according to the distance between the current point and each point, the N decoded points closest to the current point are obtained from the point cloud data as the N neighbors of the current point.
- the decoding end obtains at least two decoded target points from the target area, for example, N can be selected.
- the above N decoded points are arbitrary points in the target area.
- the above N decoded points are N decoded adjacent points of the current point in the target area.
- for the determination method of the adjacent points, reference may be made to the description of the above S410.
- the manner of determining the weight coefficient of each of the at least two target points in S703 includes, but is not limited to, the following manners:
- Mode 1 Divide the at least two target points into at least one group, and determine the default weight coefficient corresponding to each group in the at least one group as the weight coefficient of each point in each group, wherein the default weight coefficients corresponding to different groups are different.
- if the encoding end carries, in the code stream, indication information indicating that the weight coefficient determination manner is mode 1, or the default manner of the decoding end for determining the weight coefficient is mode 1, then the decoding end divides the at least two target points into at least one group according to the above mode 1, and determines the default weight coefficient corresponding to each group in the at least one group as the weight coefficient of each point in each group. In some embodiments, the default weighting coefficients corresponding to at least two groups are the same.
- the above-mentioned weight coefficient includes a weight coefficient of the first component, a weight coefficient of the second component, and a weight coefficient of the third component.
- the weight coefficient of the first component, the weight coefficient of the second component, and the weight coefficient of the third component corresponding to the same group are all equal.
- At least two of the weight coefficients of the first component, the weight coefficient of the second component, and the weight coefficient of the third component corresponding to the same group are not equal.
- the weight coefficient of each point in the at least two target points is determined according to the geometric information of each point in the at least two target points.
- if the encoding end carries, in the code stream, indication information indicating that the weight coefficient determination manner is mode 2, or the default manner of the decoding end for determining the weight coefficient is mode 2, then the decoding end determines the weight coefficient of each point in the at least two target points according to the above mode 2.
- the above-mentioned determining the weight coefficient of each point in the at least two target points according to the geometric information of each point in the at least two target points includes the following steps:
- S703-A1 According to the geometric information of each of the at least two target points, determine the first distribution value of the at least two target points in the first component direction, the second distribution value in the second component direction, and the third distribution value in the third component direction;
- S703-A2 Determine the weight coefficient of each point in the at least two target points according to the first distribution value, the second distribution value and the third distribution value, wherein the weight coefficient of each point in the at least two target points is the same.
- the above S703-A1 includes: determining a first value range of the at least two target points in the first component direction according to the geometric information of each of the at least two target points in the first component direction; determining a second value range of the at least two target points in the second component direction according to the geometric information of each of the at least two target points in the second component direction; determining a third value range of the at least two target points in the third component direction according to the geometric information of each of the at least two target points in the third component direction; and determining the first distribution value, the second distribution value and the third distribution value according to the first value range, the second value range and the third value range.
- the range value of the first value range is determined as the first distribution value; the range value of the second value range is determined as the second distribution value; the range value of the third value range is determined as the third distribution value.
- the above S703-A1 includes: determining the first variance of the at least two target points in the first component direction according to the geometric information of each of the at least two target points in the first component direction; determining the second variance of the at least two target points in the second component direction according to the geometric information of each of the at least two target points in the second component direction; determining the third variance of the at least two target points in the third component direction according to the geometric information of each of the at least two target points in the third component direction; and determining the first distribution value, the second distribution value and the third distribution value according to the first variance, the second variance and the third variance.
- the first variance is determined as the first distribution value
- the second variance is determined as the second distribution value
- the third variance is determined as the third distribution value.
- Manner 3 Determine the weight coefficient of each point in the at least two target points according to the attribute information of each point in the at least two target points.
- if the encoding end carries, in the code stream, indication information indicating that the weight coefficient determination manner is mode 3, or the default manner of the decoding end for determining the weight coefficient is mode 3, then the decoding end determines the weight coefficient of each point in the at least two target points according to the above mode 3.
- the above-mentioned determining the weight coefficient of each point in the at least two target points according to the attribute information of each point in the at least two target points includes the following steps:
- S703-B1 according to the geometric information of each point in the at least two target points, determine the center point of the area enclosed by the at least two target points;
- S703-B2 from the at least two target points, respectively determine a first point that is farthest from the center point in the direction of the first component, a second point that is farthest from the center point in the direction of the second component, and a third point that is farthest from the center point in the direction of the third component;
- S703-B3 according to the attribute information of each of the first point, the second point and the third point, respectively determine the first distribution value of the at least two target points in the direction of the first component, the second distribution value in the direction of the second component, and the third distribution value in the direction of the third component;
- S703-B4 Determine the weight coefficient of each point in the at least two target points according to the first distribution value, the second distribution value and the third distribution value, wherein the weight coefficient of each point in the at least two target points is the same.
- the above S703-B3 includes: acquiring attribute information of the center point; determining the first distribution value of the at least two target points in the first component direction according to the attribute information of the first point and the attribute information of the center point; determining the second distribution value of the at least two target points in the second component direction according to the attribute information of the second point and the attribute information of the center point; and determining the third distribution value of the at least two target points in the third component direction according to the attribute information of the third point and the attribute information of the center point.
- the difference between the attribute information of the first point and the attribute information of the center point is determined as the first distribution value of the at least two target points in the direction of the first component; the difference between the attribute information of the second point and the attribute information of the center point is determined as the second distribution value of the at least two target points in the direction of the second component; the difference between the attribute information of the third point and the attribute information of the center point is determined as the third distribution value of the at least two target points in the direction of the third component.
- the above S703-B3 includes: according to the attribute information of each of the at least two target points, determining the average value of the attribute information of the at least two target points; determining the first distribution value of the at least two target points in the direction of the first component according to the attribute information of the first point and the average value of the attribute information of the at least two target points; determining the second distribution value of the at least two target points in the direction of the second component according to the attribute information of the second point and the average value of the attribute information of the at least two target points; and determining the third distribution value of the at least two target points in the direction of the third component according to the attribute information of the third point and the average value of the attribute information of the at least two target points.
- the difference between the attribute information of the first point and the average value of the attribute information of the at least two target points is determined as the first distribution value of the at least two target points in the direction of the first component; the difference between the attribute information of the second point and the average value of the attribute information of the at least two target points is determined as the second distribution value of the at least two target points in the direction of the second component; the difference between the attribute information of the third point and the average value of the attribute information of the at least two target points is determined as the third distribution value of the at least two target points in the direction of the third component.
- the above S703-A2 and S703-B4 include: determining the first distribution value as the weight coefficient of the first component, determining the second distribution value as the weight coefficient of the second component, and determining the third distribution value as the weight coefficient of the third component.
- the above S703-A2 and S703-B4 include: determining the sum of the first distribution value, the second distribution value and the third distribution value; determining the weight coefficient of the first component according to the ratio of the first distribution value to the sum; determining the weight coefficient of the second component according to the ratio of the second distribution value to the sum; and determining the weight coefficient of the third component according to the ratio of the third distribution value to the sum.
- the ratio of the first distribution value to the sum is determined as the weight coefficient of the first component; the ratio of the second distribution value to the sum is determined as the weight coefficient of the second component; the ratio of the third distribution value to the sum is determined as the weight coefficient of the third component.
- Manner 4 Decode the code stream to obtain the weight coefficient of each point in the at least two target points.
- the weight coefficient of the point is carried in the code stream.
- the decoding end directly decodes the weight coefficient of each of the at least two target points from the code stream, and does not need to recalculate by itself, thereby reducing the difficulty of decoding.
- S704. Determine the weight of each point in the at least two target points according to the weight coefficient and geometric information of each point in the at least two target points, and the geometric information of the current point.
- S705. Select at least one neighbor point of the current point from the at least two target points according to the weight of each point in the at least two target points.
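Steps S704 and S705 (and their encoder-side counterparts) can be illustrated with the following sketch. The exact weight formula is not given in this excerpt; the sketch assumes, for illustration only, that the weight of a candidate point is the reciprocal of its component-weighted squared distance to the current point, and that the candidates with the largest weights are selected as neighbor points.

```python
def select_neighbor_points(current_point, target_points, weight_coefficients, k=3):
    """target_points: list of (x, y, z); weight_coefficients: per-point (a, b, c).
    Assumption: weight = 1 / (a*dx^2 + b*dy^2 + c*dz^2); larger weight = better neighbor."""
    def weight(i):
        a, b, c = weight_coefficients[i]
        dx = target_points[i][0] - current_point[0]
        dy = target_points[i][1] - current_point[1]
        dz = target_points[i][2] - current_point[2]
        d = a * dx * dx + b * dy * dy + c * dz * dz
        return float('inf') if d == 0 else 1.0 / d
    order = sorted(range(len(target_points)), key=weight, reverse=True)
    return [target_points[i] for i in order[:k]]

current = (5, 5, 5)
candidates = [(0, 0, 0), (5, 6, 5), (9, 9, 9), (5, 5, 4)]
coeffs = [(0.4, 0.5, 0.1)] * len(candidates)   # same coefficients for every candidate
print(select_neighbor_points(current, candidates, coeffs, k=2))  # two best by weighted distance
```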
- the method for selecting neighbor points in the point cloud at the decoding end is an inverse process of the method for selecting neighbor points in the point cloud at the encoding end described above.
- for the steps in the decoding-end method for selecting neighbor points in the point cloud, reference may be made to the corresponding steps in the encoding-end method for selecting neighbor points in the point cloud, which will not be repeated here in order to avoid repetition.
- the size of the sequence numbers of the above-mentioned processes does not imply the order of execution; the execution order of the processes should be determined by their functions and internal logic, and should not constitute any limitation on the implementation process of the embodiments of the present application.
- FIG. 8 is a schematic block diagram of an apparatus for selecting neighbor points in a point cloud according to an embodiment of the present application.
- the apparatus 10 may be an encoding device, or may be a part of the encoding device.
- the apparatus 10 for selecting neighbor points in the point cloud may include:
- the acquisition unit 11 is used to acquire point cloud data, and determine the target area where the current point is located from the point cloud data, and the target area includes a plurality of points;
- the weight coefficient determination unit 12 is used to determine the weight coefficient of each point in the at least two target points for at least two target points in the target area, and the current point is not included in the at least two target points;
- the weight determination unit 13 is used for determining the weight of each point in the at least two target points according to the weight coefficient and geometric information of each point in the at least two target points, and the geometric information of the current point;
- the neighbor point selection unit 14 is configured to select at least one neighbor point of the current point from the at least two target points according to the weight of each point in the at least two target points.
- the weight coefficient determination unit 12 is specifically configured to divide the at least two target points into at least one group, and determine the default weight coefficient corresponding to each group in the at least one group as the value of each point in each group. weight coefficient; or, according to the geometric information of each point in the at least two target points, determine the weight coefficient of each point in the at least two target points; or, according to the attribute information of each point in the at least two target points, determine Weight coefficient for each of the at least two target points.
- the weight coefficient includes a weight coefficient of the first component, a weight coefficient of the second component, and a weight coefficient of the third component.
- the weight coefficient of the first component, the weight coefficient of the second component, and the weight coefficient of the third component corresponding to the same group in the at least one group are all equal; or, at least two of the weight coefficient of the first component, the weight coefficient of the second component, and the weight coefficient of the third component corresponding to the same group are not equal.
- the weight coefficient determination unit 12 is specifically configured to determine, according to the geometric information of each of the at least two target points, the first distribution value of the at least two target points in the first component direction, the second distribution value in the second component direction and the third distribution value in the third component direction; and determine the weight coefficient of each point in the at least two target points according to the first distribution value, the second distribution value and the third distribution value.
- the weight coefficient determining unit 12 is specifically configured to determine the first value range of the at least two target points in the first component direction according to the geometric information of each point of the at least two target points in the first component direction; determine the second value range of the at least two target points in the second component direction according to the geometric information of each point of the at least two target points in the second component direction; determine the third value range of the at least two target points in the third component direction according to the geometric information of each point of the at least two target points in the third component direction; and determine the first distribution value, the second distribution value and the third distribution value according to the first value range, the second value range and the third value range.
- the weight coefficient determination unit 12 is specifically configured to determine the range value of the first value range as the first distribution value; determine the range value of the second value range as the second distribution value; The range value of the value range is determined as the third distribution value.
- the weight coefficient determination unit 12 is specifically configured to determine a first ratio between the number of the at least two target points and the range value of the first value range, and determine the first distribution value according to the first ratio; determine a second ratio between the number of the at least two target points and the range value of the second value range, and determine the second distribution value according to the second ratio; and determine a third ratio between the number of the at least two target points and the range value of the third value range, and determine the third distribution value according to the third ratio.
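The ratio-based variant handled by the weight coefficient determination unit above can be sketched as follows; taking the ratio itself as the distribution value, and the guard against a zero range value, are assumptions of this illustration.

```python
def ratio_distribution_values(target_points):
    """Distribution values as (number of points) / (per-component range value)."""
    n = len(target_points)
    values = []
    for k in range(3):
        coords = [p[k] for p in target_points]
        range_value = max(coords) - min(coords)
        # Assumed guard: all points share this coordinate, so the range is zero.
        values.append(float('inf') if range_value == 0 else n / range_value)
    return tuple(values)

print(ratio_distribution_values([(0, 0, 0), (4, 1, 2), (2, 5, 1)]))  # (0.75, 0.6, 1.5)
```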
- the weight coefficient determining unit 12 is specifically configured to determine the first variance of the at least two target points in the first component direction according to the geometric information of each point of the at least two target points in the first component direction; determine the second variance of the at least two target points in the second component direction according to the geometric information of each point of the at least two target points in the second component direction; determine the third variance of the at least two target points in the third component direction according to the geometric information of each point of the at least two target points in the third component direction; and determine the first distribution value, the second distribution value and the third distribution value according to the first variance, the second variance and the third variance.
- the weight coefficient determination unit 12 is specifically configured to determine the first variance as the first distribution value; determine the second variance as the second distribution value; and determine the third variance as the third distribution value.
- the weight coefficient determination unit 12 is specifically configured to determine the center point of the area enclosed by the at least two target points according to the geometric information of each point of the at least two target points; from the at least two target points, respectively determine the first point that is farthest from the center point in the direction of the first component, the second point that is farthest from the center point in the direction of the second component, and the third point that is farthest from the center point in the direction of the third component; according to the attribute information of each of the first point, the second point and the third point, respectively determine the first distribution value of the at least two target points in the direction of the first component, the second distribution value in the direction of the second component, and the third distribution value in the direction of the third component; and determine the weight coefficient of each point in the at least two target points according to the first distribution value, the second distribution value and the third distribution value.
- the weight coefficient determination unit 12 is specifically configured to acquire attribute information of the center point; determine the first distribution value of the at least two target points in the direction of the first component according to the attribute information of the first point and the attribute information of the center point; determine the second distribution value of the at least two target points in the direction of the second component according to the attribute information of the second point and the attribute information of the center point; and determine the third distribution value of the at least two target points in the direction of the third component according to the attribute information of the third point and the attribute information of the center point.
- the weight coefficient determination unit 12 is specifically configured to determine the difference between the attribute information of the first point and the attribute information of the center point as the first distribution value of the at least two target points in the direction of the first component; determine the difference between the attribute information of the second point and the attribute information of the center point as the second distribution value of the at least two target points in the direction of the second component; and determine the difference between the attribute information of the third point and the attribute information of the center point as the third distribution value of the at least two target points in the direction of the third component.
- the weight coefficient determination unit 12 is specifically configured to determine the average value of the attribute information of the at least two target points according to the attribute information of each of the at least two target points; determine the first distribution value of the at least two target points in the direction of the first component according to the attribute information of the first point and the average value of the attribute information of the at least two target points; determine the second distribution value of the at least two target points in the direction of the second component according to the attribute information of the second point and the average value of the attribute information of the at least two target points; and determine the third distribution value of the at least two target points in the direction of the third component according to the attribute information of the third point and the average value of the attribute information of the at least two target points.
- the weight coefficient determination unit 12 is specifically configured to determine the difference between the attribute information of the first point and the average value of the attribute information of the at least two target points as the first distribution value of the at least two target points in the first component direction; determine the difference between the attribute information of the second point and the average value of the attribute information of the at least two target points as the second distribution value of the at least two target points in the second component direction; and determine the difference between the attribute information of the third point and the average value of the attribute information of the at least two target points as the third distribution value of the at least two target points in the third component direction.
- the weight coefficient determination unit 12 is specifically configured to determine the first distribution value as the weight coefficient of the first component, determine the second distribution value as the weight coefficient of the second component, and determine the third distribution value as the weight coefficient of the third component; or, determine the sum of the first distribution value, the second distribution value and the third distribution value, determine the weight coefficient of the first component according to the ratio of the first distribution value to the sum, determine the weight coefficient of the second component according to the ratio of the second distribution value to the sum, and determine the weight coefficient of the third component according to the ratio of the third distribution value to the sum.
- the weight coefficient determination unit 12 is specifically configured to determine the ratio of the first distribution value to the sum as the weight coefficient of the first component; determine the ratio of the second distribution value to the sum as the weight coefficient of the second component; and determine the ratio of the third distribution value to the sum as the weight coefficient of the third component.
- the weight coefficient of each of the at least two target points is carried in the code stream.
- the apparatus embodiments and the method embodiments may correspond to each other, and similar descriptions may refer to the method embodiments. To avoid repetition, details are not repeated here.
- the apparatus shown in FIG. 8 can execute the above method embodiments, and the foregoing and other operations and/or functions of each module in the apparatus are respectively intended to implement the method embodiments corresponding to the encoder, which are not repeated here for brevity.
- the apparatus of the embodiments of the present application is described above from the perspective of functional modules with reference to the accompanying drawings.
- the functional modules can be implemented in the form of hardware, can also be implemented by instructions in the form of software, and can also be implemented by a combination of hardware and software modules.
- the steps of the method embodiments in the embodiments of the present application may be completed by hardware integrated logic circuits in the processor and/or instructions in the form of software, and the steps of the methods disclosed in conjunction with the embodiments of the present application may be directly executed and completed by a hardware decoding processor, or executed and completed by a combination of hardware and software modules in the decoding processor.
- the software modules may be located in random access memory, flash memory, read-only memory, programmable read-only memory, electrically erasable programmable memory, registers, and other storage media mature in the art.
- the storage medium is located in the memory, and the processor reads the information in the memory, and completes the steps in the above method embodiments in combination with its hardware.
- FIG. 9 is a schematic block diagram of an apparatus for selecting neighbor points in a point cloud according to an embodiment of the present application.
- the apparatus 20 may be the above-mentioned decoding device, or may be a part of the decoding device.
- the device 20 for selecting neighbor points in the point cloud may include:
- the decoding unit 21 is used for decoding the code stream to obtain the geometric information of the point in the point cloud data
- the area determination unit 22 is used to determine the target area where the current point is located from the point cloud data according to the geometric information of the point in the point cloud data, and the target area includes a plurality of points;
- a weight coefficient determination unit 23 configured to determine the weight coefficient of each point in the at least two target points for at least two decoded target points in the target area, and the current point is not included in the at least two target points;
- the weight determination unit 24 is used for determining the weight of each point in the at least two target points according to the weight coefficient and geometric information of each point in the at least two target points, and the geometric information of the current point;
- the neighbor point determination unit 25 is configured to select at least one neighbor point of the current point from the at least two target points according to the weight of each point in the at least two target points.
- the weight determination unit 24 is specifically configured to decode the code stream to obtain the weight coefficient of each point in the at least two target points; or, divide the at least two target points into at least one group, and determine the default weight coefficient corresponding to each group in the at least one group as the weight coefficient of each point in each group; or, determine the weight coefficient of each point in the at least two target points according to the geometric information of each point in the at least two target points; or, determine the weight coefficient of each point in the at least two target points according to the attribute information of each point in the at least two target points.
- the weight coefficient includes a weight coefficient of the first component, a weight coefficient of the second component, and a weight coefficient of the third component.
- the weight coefficient of the first component, the weight coefficient of the second component and the weight coefficient of the third component corresponding to the same group in the at least one group are all equal; or, at least two of the weight coefficient of the first component, the weight coefficient of the second component and the weight coefficient of the third component corresponding to the same group are not equal.
- the weight determination unit 24 is specifically configured to determine, according to the geometric information of each of the at least two target points, the first distribution value of the at least two target points in the direction of the first component, the second distribution value in the direction of the second component and the third distribution value in the direction of the third component; and determine the weight coefficient of each point in the at least two target points according to the first distribution value, the second distribution value and the third distribution value.
- the weight determination unit 24 is specifically configured to determine the first value range of the at least two target points in the first component direction according to the geometric information of each point of the at least two target points in the first component direction; determine the second value range of the at least two target points in the second component direction according to the geometric information of each of the at least two target points in the second component direction; determine the third value range of the at least two target points in the third component direction according to the geometric information of each of the at least two target points in the third component direction; and determine the first distribution value, the second distribution value and the third distribution value according to the first value range, the second value range and the third value range.
- the weight determination unit 24 is specifically configured to determine the range value of the first value range as the first distribution value; determine the range value of the second value range as the second distribution value; The range value of the value range is determined as the third distribution value.
- the weight determination unit 24 is specifically configured to determine a first ratio between the number of the at least two target points and the range value of the first value range, and determine the first distribution value according to the first ratio; determine a second ratio between the number of the at least two target points and the range value of the second value range, and determine the second distribution value according to the second ratio; and determine a third ratio between the number of the at least two target points and the range value of the third value range, and determine the third distribution value according to the third ratio.
- the weight determination unit 24 is specifically configured to determine the first variance of the at least two target points in the direction of the first component according to the geometric information of each point of the at least two target points in the direction of the first component; determine the second variance of the at least two target points in the direction of the second component according to the geometric information of each of the at least two target points in the direction of the second component; determine the third variance of the at least two target points in the direction of the third component according to the geometric information of each of the at least two target points in the direction of the third component; and determine the first distribution value, the second distribution value and the third distribution value according to the first variance, the second variance and the third variance.
- the weight determination unit 24 is specifically configured to determine the first variance as the first distribution value; determine the second variance as the second distribution value; and determine the third variance as the third distribution value.
- the weight determination unit 24 is specifically configured to determine the center point of the area enclosed by the at least two target points according to the geometric information of each point of the at least two target points; from the at least two target points, respectively determine the first point farthest from the center point in the direction of the first component, the second point farthest from the center point in the direction of the second component, and the third point farthest from the center point in the direction of the third component; according to the attribute information of each of the first point, the second point and the third point, respectively determine the first distribution value of the at least two target points in the direction of the first component, the second distribution value in the direction of the second component, and the third distribution value in the direction of the third component; and determine the weight coefficient of each point in the at least two target points according to the first distribution value, the second distribution value and the third distribution value.
- the weight determination unit 24 is specifically configured to acquire attribute information of the center point; determine the first distribution value of the at least two target points in the direction of the first component according to the attribute information of the first point and the attribute information of the center point; determine the second distribution value of the at least two target points in the direction of the second component according to the attribute information of the second point and the attribute information of the center point; and determine the third distribution value of the at least two target points in the direction of the third component according to the attribute information of the third point and the attribute information of the center point.
- the weight determination unit 24 is specifically configured to determine the difference between the attribute information of the first point and the attribute information of the center point as the first distribution value of the at least two target points in the direction of the first component; The difference between the attribute information of the second point and the attribute information of the center point is determined as the second distribution value of the at least two target points in the direction of the second component; the difference between the attribute information of the third point and the attribute information of the center point is determined. The difference value is determined as the third distribution value of the at least two target points in the direction of the third component.
- the weight determination unit 24 is specifically configured to determine the average value of the attribute information of the at least two target points according to the attribute information of each of the at least two target points; according to the attribute information of the first point and the at least The average value of the attribute information of the two target points, to determine the first distribution value of the at least two target points in the direction of the first component, according to the average value of the attribute information of the second point and the attribute information of the at least two target points, to determine The second distribution value of the at least two target points in the direction of the second component is determined according to the average value of the attribute information of the third point and the attribute information of the at least two target points, and the distribution value of the at least two target points in the direction of the third component is determined. The third distribution value.
- the weight determination unit 24 is specifically configured to determine the difference between the attribute information of the first point and the average value of the attribute information of the at least two target points as the first distribution value of the at least two target points in the direction of the first component; determine the difference between the attribute information of the second point and the average value of the attribute information of the at least two target points as the second distribution value of the at least two target points in the direction of the second component; and determine the difference between the attribute information of the third point and the average value of the attribute information of the at least two target points as the third distribution value of the at least two target points in the direction of the third component.
- the weight determination unit 24 is specifically configured to determine the first distribution value as the weight coefficient of the first component, determine the second distribution value as the weight coefficient of the second component, and determine the third distribution value as the weight coefficient of the third component; or, determine the sum of the first distribution value, the second distribution value and the third distribution value, determine the weight coefficient of the first component according to the ratio of the first distribution value to the sum, determine the weight coefficient of the second component according to the ratio of the second distribution value to the sum, and determine the weight coefficient of the third component according to the ratio of the third distribution value to the sum.
- the weight determination unit 24 is specifically configured to determine the ratio of the first distribution value to the sum as the weight coefficient of the first component; and determine the ratio of the second distribution value to the sum as the weight of the second component Coefficient; the ratio of the third distribution value to the sum is determined as the weight coefficient of the third component.
- the apparatus embodiments and the method embodiments may correspond to each other, and similar descriptions may refer to the method embodiments. To avoid repetition, details are not repeated here.
- the apparatus shown in FIG. 9 can execute the method embodiment, and the foregoing and other operations and/or functions of each module in the apparatus are respectively for implementing the method embodiment corresponding to the decoder, and are not repeated here for brevity.
- the apparatus of the embodiments of the present application is described above from the perspective of functional modules with reference to the accompanying drawings.
- the functional modules can be implemented in the form of hardware, can also be implemented by instructions in the form of software, and can also be implemented by a combination of hardware and software modules.
- the steps of the method embodiments in the embodiments of the present application may be completed by hardware integrated logic circuits in the processor and/or instructions in the form of software, and the steps of the methods disclosed in conjunction with the embodiments of the present application may be directly executed and completed by a hardware decoding processor, or executed and completed by a combination of hardware and software modules in the decoding processor.
- the software modules may be located in random access memory, flash memory, read-only memory, programmable read-only memory, electrically erasable programmable memory, registers, and other storage media mature in the art.
- the storage medium is located in the memory, and the processor reads the information in the memory, and completes the steps in the above method embodiments in combination with its hardware.
- FIG. 10 is a schematic block diagram of a computer device provided by an embodiment of the present application.
- the computer device in FIG. 10 may be the above-mentioned point cloud encoder or point cloud decoder.
- the computer equipment 30 may include:
- the processor 32 may call and execute the computer-readable instructions 33 from the memory 31 to implement the methods in the embodiments of the present application.
- the processor 32 may be configured to perform the steps of the method 200 described above according to the instructions in the computer readable instructions 33 .
- the processor 32 may include, but is not limited to: a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic devices, discrete gate or transistor logic devices, discrete hardware components, and the like.
- the memory 31 includes, but is not limited to, volatile memory and/or non-volatile memory. The non-volatile memory may be a read-only memory (ROM), a programmable ROM (PROM), an erasable PROM (EPROM), an electrically erasable PROM (EEPROM) or a flash memory. The volatile memory may be a random access memory (RAM), which is used as an external cache. By way of example and not limitation, many forms of RAM are available, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synch-link DRAM (SLDRAM) and direct Rambus RAM (DR RAM).
- the computer-readable instructions 33 may be divided into one or more modules, and the one or more modules are stored in the memory 31 and executed by the processor 32 to complete the methods provided in the present application.
- the one or more modules may be a series of computer-readable instruction segments capable of accomplishing specific functions, and the instruction segments are used to describe the execution process of the computer-readable instructions 33 in the computer device 30.
- the computer device 30 may further include:
- a transceiver 34 which can be connected to the processor 32 or the memory 31 .
- the processor 32 can control the transceiver 34 to communicate with other devices, specifically, can send information or data to other devices, or receive information or data sent by other devices.
- Transceiver 34 may include a transmitter and a receiver.
- the transceiver 34 may further include antennas, and the number of the antennas may be one or more.
- each component in the computer device 30 is connected through a bus system, wherein the bus system includes a power bus, a control bus and a status signal bus in addition to a data bus.
- a computer storage medium having computer-readable instructions stored thereon, and when the computer-readable instructions are executed by a computer, the computer can perform the method of the above method embodiment.
- the embodiments of the present application further provide a computer-readable instruction product containing instructions, and when the instructions are executed by the computer, the computer executes the method of the foregoing method embodiment.
- a computer-readable instruction product or computer-readable instructions are also provided, including computer-readable instructions stored in a computer-readable storage medium.
- the processor of the computer device reads the computer-readable instructions from the computer-readable storage medium, and the processor executes the computer-readable instructions, so that the computer device performs the method of the above method embodiment.
- the computer-readable instruction product includes one or more computer-readable instructions.
- the computer may be a general purpose computer, a special purpose computer, a computer network, or other programmable device.
- the computer-readable instructions may be stored in a computer-readable storage medium, or transmitted from one computer-readable storage medium to another computer-readable storage medium; for example, the computer-readable instructions may be transmitted from one website, computer, server or data center to another website, computer, server or data center by wire (e.g., coaxial cable, optical fiber, digital subscriber line (DSL)) or wirelessly (e.g., infrared, radio, microwave, etc.).
- the computer-readable storage medium can be any available medium that can be accessed by a computer, or a data storage device such as a server or data center that integrates one or more available media.
- the available media may be magnetic media (eg, floppy disk, hard disk, magnetic tape), optical media (eg, digital video disc (DVD)), or semiconductor media (eg, solid state disk (SSD)), and the like.
- the disclosed system, apparatus and method may be implemented in other manners.
- the apparatus embodiments described above are only illustrative.
- the division of the modules is only a logical function division. In actual implementation, there may be other division methods.
- multiple modules or components may be combined or integrated into another system, or some features may be ignored or not implemented.
- the mutual coupling or direct coupling or communication connection shown or discussed may be implemented through some interfaces as indirect coupling or communication connection between devices or modules, and may be in electrical, mechanical or other forms.
- Modules described as separate components may or may not be physically separated, and components shown as modules may or may not be physical modules, that is, may be located in one place, or may be distributed over multiple network elements. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution in this embodiment. For example, each functional module in each embodiment of the present application may be integrated into one processing module, or each module may exist physically alone, or two or more modules may be integrated into one module.
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Image Processing (AREA)
- Image Analysis (AREA)
- Compression Or Coding Systems Of Tv Signals (AREA)
- Compression, Expansion, Code Conversion, And Decoders (AREA)
Abstract
本申请提供了一种点云中邻居点的选择方法、装置及编解码器,该方法包括:从点云数据中确定当前点所在的目标区域,并从该目标区域内选择出至少两个目标点,确定这至少两个目标点中每个点的权重系数;根据至少两个目标点中每个点的权重系数和几何信息,以及当前点的几何信息,确定至少两个目标点中每个点的权重,根据至少两个目标点中每个点的权重,从至少两个目标点中选择当前点的至少一个邻居点,实现当前点的邻居点的准确选择。这样基于准确选择的邻居点,对当前点进行属性预测时,可以提高属性预测的准确性,进而提高点云的编码效率。
Description
本申请要求于2021年03月12日提交中国专利局,申请号为2021102699523,申请名称为“点云中邻居点的选择方法、装置及编解码器”的中国专利申请的优先权,其全部内容通过引用结合在本申请中。
本申请实施例涉及视频编解码技术领域,尤其涉及一种点云中邻居点的选择方法、装置及编解码器。
通过采集设备对物体表面进行数据采集,形成点云数据,点云数据包括几十万甚至更多的点。在视频制作过程中,将点云数据以点云媒体文件的形式在视频制作设备和视频播放设备之间传输。但是,如此庞大的点给传输带来了挑战,因此,视频制作设备需要对点云数据进行压缩后传输。
点云数据的压缩主要包括位置信息的压缩和属性信息的压缩,在属性信息压缩时,通过预测来减小或消除点云数据中的冗余信息,例如,从已编码的点中获得当前点的一个或多个相邻点,根据相邻点的属性信息,来预测当前点的属性信息。
发明内容
本申请提供一种点云中邻居点的选择方法、装置及编解码器,提高邻居点选择的准确性。
第一方面,本申请提供一种点云中邻居点的选择方法,包括:
获取点云数据,并从所述点云数据中确定当前点所在的目标区域,所述目标区域包括多个点;
针对所述目标区域内的至少两个目标点,确定所述至少两个目标点中每个点的权重系数,所述至少两个目标点中未包括所述当前点;
根据所述至少两个目标点中每个点的权重系数和几何信息,以及所述当前点的几何信息,确定所述至少两个目标点中每个点的权重;
根据所述至少两个目标点中每个点的权重,从所述至少两个目标点中选择所述当前点的至少一个邻居点。
第二方面,本申请提供一种点云中邻居点的选择方法,包括:
解码码流,获取点云数据中点的几何信息;
根据所述点云数据中点的几何信息,从所述点云数据中确定当前点所在的目标区域,所述目标区域包括多个点;
针对所述目标区域内已解码的至少两个目标点,确定所述至少两个目标点中每个点的权重系数,所述至少两个目标点中未包括所述当前点;
根据所述至少两个目标点中每个点的权重系数和几何信息,以及所述当前点的几何信息,确定所述至少两个目标点中每个点的权重;
根据所述至少两个目标点中每个点的权重,从所述至少两个目标点中选择所述当前点的至少一个邻居点。
第三方面,提供了一种点云中邻居点的装置,包括:
获取单元,用于获取点云数据,并从点云数据中确定当前点所在的目标区域,目标区域包括多个点;
权重系数确定单元,用于针对目标区域内的至少两个目标点,确定至少两个目标点中每个点的权重系数,至少两个目标点中未包括当前点;
权重确定单元,用于根据至少两个目标点中每个点的权重系数和几何信息,以及当前点的几何信息,确定至少两个目标点中每个点的权重;
邻居点选择单元,用于根据至少两个目标点中每个点的权重,从至少两个目标点中选择当前点的至少一个邻居点。
第四方面,提供了一种点云中邻居点的选择装置,包括:
解码单元,用于解码码流,获取点云数据中点的几何信息;
区域确定单元,用于根据点云数据中点的几何信息,从点云数据中确定当前点所在的目标区域,目标区域包括多个点;
权重系数确定单元,用于针对目标区域内已解码的至少两个目标点,确定至少两个目标点中每个点的权重系数,至少两个目标点中未包括当前点;
权重确定单元,用于根据至少两个目标点中每个点的权重系数和几何信息,以及当前点的几何信息,确定至少两个目标点中每个点的权重;
邻居点确定单元,用于根据至少两个目标点中每个点的权重,从至少两个目标点中选择当前点的至少一个邻居点。
第五方面,提供了一种编码器,包括处理器和存储器。所述存储器用于存储计算机可读指令,所述处理器用于调用并运行所述存储器中存储的计算机可读指令,以执行上述第一方面或其各实现方式中的方法。
第六方面,提供了一种解码器,包括处理器和存储器。所述存储器用于存储计算机可读指令,所述处理器用于调用并运行所述存储器中存储的计算机可读指令,以执行上述第二方面或其各实现方式中的方法。
第七方面,提供了一种芯片,用于实现上述第一方面至第二方面中任一方面或其各实现方式中的方法。具体地,所述芯片包括:处理器,用于从存储器中调用并运行计算机可读指令,使得安装有所述芯片的设备执行如上述第一方面至第二方面中任一方面或其各实现方式中的方法。
第八方面,提供了一种计算机可读存储介质,用于存储计算机可读指令,所述计算机可读指令使得计算机执行上述第一方面至第二方面中任一方面或其各实现方式中的方法。
第九方面,提供了一种计算机可读指令产品,包括计算机可读指令,所述计算机可读指令使得计算机执行上述第一方面至第二方面中任一方面或其各实现方式中的方法。
第十方面,提供了一种计算机可读指令,当其在计算机上运行时,使得计算机执行上述第一方面至第二方面中任一方面或其各实现方式中的方法。
综上,本申请通过从点云数据中确定当前点所在的目标区域,并从该目标区域内选择出至少两个目标点,确定这至少两个目标点中每个点的权重系数;根据这至少两个目标点中每个点的权重系数和几何信息,以及当前点的几何信息,确定这至少两个目标点中每个点的权重,根据这至少两个目标点中每个点的权重,从这至少两个目标点中选择当前点的至少一个邻居点,实现当前点的邻居点的准确选择。这样基于准确选择的邻居点,对当前点进行属性 预测时,可以提高属性预测的准确性,进而提高点云的编码效率。
图1为本申请实施例涉及的一种点云视频编解码系统的示意性框图;
图2是本申请实施例提供的编码框架的示意性框图;
图3是本申请实施例提供的解码框架的示意性框图;
图4为本申请实施例提供的一实施例的点云中邻居点的选择方法的流程图;
图5A为原始莫顿顺序下点云的排列示意图;
图5B为偏移莫顿顺序下点云的排列示意图;
图5C为当前点的相邻点的空间关系示意图;
图5D为与当前点共面的相邻点之间的莫顿码关系示意图;
图5E为与当前点共线的相邻点之间的莫顿码关系示意图;
图6为本申请实施例提供的另一实施例的点云中邻居点的选择方法的流程图;
图7为本申请实施例提供的另一实施例的点云中邻居点的选择方法的流程图;
图8是本申请实施例的一点云中邻居点的装置的示意性框图;
图9是本申请实施例的一点云中邻居点的选择装置的示意性框图;
图10是本申请实施例提供的计算机设备的示意性框图。
下面将结合本申请实施例中的附图,对本申请实施例中的技术方案进行描述。
应理解,在本发明实施例中,“与A对应的B”表示B与A相关联。在一种实现方式中,可以根据A确定B。但还应理解,根据A确定B并不意味着仅仅根据A确定B,还可以根据A和/或其它信息确定B。
在本申请的描述中,除非另有说明,“多个”是指两个或多于两个。
另外,为了便于清楚描述本申请实施例的技术方案,在本申请的实施例中,采用了“第一”、“第二”等字样对功能和作用基本相同的相同项或相似项进行区分。本领域技术人员可以理解“第一”、“第二”等字样并不对数量和执行次序进行限定,并且“第一”、“第二”等字样也并不限定一定不同。为了便于理解本申请的实施例,首先对本申请实施例涉及到的相关概念进行如下简单介绍:
点云(Point Cloud)是指空间中一组无规则分布的、表达三维物体或三维场景的空间结构及表面属性的离散点集。
点云数据(Point Cloud Data)是点云的具体记录形式,点云中的点可以包括点的位置信息和点的属性信息。例如,点的位置信息可以是点的三维坐标信息。点的位置信息也可称为点的几何信息。例如,点的属性信息可包括颜色信息和/或反射率等等。例如,所述颜色信息可以是任意一种色彩空间上的信息。例如,所述颜色信息可以是(RGB)。再如,所述颜色信息可以是于亮度色度(YcbCr,YUV)信息。例如,Y表示明亮度(Luma),Cb(U)表示蓝色色差,Cr(V)表示红色,U和V表示为色度(Chroma)用于描述色差信息。例如,根据激光测量原理得到的点云,所述点云中的点可以包括点的三维坐标信息和点的激光反射强度(reflectance)。再如,根据摄影测量原理得到的点云,所述点云中的点可以可包括点的三维坐标信息和点的颜色信息。再如,结合激光测量和摄影测量原理得到点云,所述点云中的 点可以可包括点的三维坐标信息、点的激光反射强度(reflectance)和点的颜色信息。
点云数据的获取途径可以包括但不限于以下至少一种:(1)计算机设备生成。计算机设备可以根据虚拟三维物体及虚拟三维场景的生成点云数据。(2)3D(3-Dimension,三维)激光扫描获取。通过3D激光扫描可以获取静态现实世界三维物体或三维场景的点云数据,每秒可以获取百万级点云数据;(3)3D摄影测量获取。通过3D摄影设备(即一组摄像机或具有多个镜头和传感器的摄像机设备)对现实世界的视觉场景进行采集以获取现实世界的视觉场景的点云数据,通过3D摄影可以获得动态现实世界三维物体或三维场景的点云数据。(4)通过医学设备获取生物组织器官的点云数据。在医学领域可以通过磁共振成像(Magnetic Resonance Imaging,MRI)、电子计算机断层扫描(Computed Tomography,CT)、电磁定位信息等医学设备获取生物组织器官的点云数据。
点云可以按获取的途径分为:密集型点云和稀疏性点云。
点云按照数据的时序类型划分为:
第一静态点云:即物体是静止的,获取点云的设备也是静止的;
第二类动态点云:物体是运动的,但获取点云的设备是静止的;
第三类动态获取点云:获取点云的设备是运动的。
按点云的用途分为两大类:
类别一:机器感知点云,其可以用于自主导航系统、实时巡检系统、地理信息系统、视觉分拣机器人、抢险救灾机器人等场景;
类别二:人眼感知点云,其可以用于数字文化遗产、自由视点广播、三维沉浸通信、三维沉浸交互等点云应用场景。
图1为本申请实施例涉及的一种点云编解码系统100的示意性框图。需要说明的是,图1只是一种示例,本申请实施例的点云编解码系统包括但不限于图1所示。如图1所示,该点云编解码系统100包含编码设备110和解码设备120。其中编码设备用于对点云数据进行编码(可以理解成压缩)产生码流,并将码流传输给解码设备。解码设备对编码设备编码产生的码流进行解码,得到解码后的点云数据。
本申请实施例的编码设备110可以理解为具有点云编码功能的设备,解码设备120可以理解为具有点云解码功能的设备,即本申请实施例对编码设备110和解码设备120包括更广泛的装置,例如包含智能手机、台式计算机、移动计算装置、笔记本(例如,膝上型)计算机、平板计算机、机顶盒、电视、相机、显示装置、数字媒体播放器、点云游戏控制台、车载计算机等。
在一些实施例中,编码设备110可以经由信道130将编码后的点云数据(如码流)传输给解码设备120。信道130可以包括能够将编码后的点云数据从编码设备110传输到解码设备120的一个或多个媒体和/或装置。
在一个实例中,信道130包括使编码设备110能够实时地将编码后的点云数据直接发射到解码设备120的一个或多个通信媒体。在此实例中,编码设备110可根据通信标准来调制编码后的点云数据,且将调制后的点云数据发射到解码设备120。其中通信媒体包含无线通信媒体,例如射频频谱,可选的,通信媒体还可以包含有线通信媒体,例如一根或多根物理传输线。
在另一实例中,信道130包括存储介质,该存储介质可以存储编码设备110编码后的点云数据。存储介质包含多种本地存取式数据存储介质,例如光盘、DVD、快闪存储器等。在该 实例中,解码设备120可从该存储介质中获取编码后的点云数据。
在另一实例中,信道130可包含存储服务器,该存储服务器可以存储编码设备110编码后的点云数据。在此实例中,解码设备120可以从该存储服务器中下载存储的编码后的点云数据。可选的,该存储服务器可以存储编码后的点云数据且可以将该编码后的点云数据发射到解码设备120,例如web服务器(例如,用于网站)、文件传送协议(FTP)服务器等。
一些实施例中,编码设备110包含点云编码器112及输出接口113。其中,输出接口113可以包含调制器/解调器(调制解调器)和/或发射器。
在一些实施例中,编码设备110除了包括点云编码器112和输入接口113外,还可以包括点云源111。
点云源111可包含点云采集装置(例如,扫描仪)、点云存档、点云输入接口、计算机图形系统中的至少一个,其中,点云输入接口用于从点云内容提供者处接收点云数据,计算机图形系统用于产生点云数据。
点云编码器112对来自点云源111的点云数据进行编码,产生码流。点云编码器112经由输出接口113将编码后的点云数据直接传输到解码设备120。编码后的点云数据还可存储于存储介质或存储服务器上,以供解码设备120后续读取。
在一些实施例中,解码设备120包含输入接口121和点云解码器122。
在一些实施例中,解码设备120除包括输入接口121和点云解码器122外,还可以包括显示装置123。
其中,输入接口121包含接收器及/或调制解调器。输入接口121可通过信道130接收编码后的点云数据。
点云解码器122用于对编码后的点云数据进行解码,得到解码后的点云数据,并将解码后的点云数据传输至显示装置123。
显示装置123显示解码后的点云数据。显示装置123可与解码设备120整合或在解码设备120外部。显示装置123可包括多种显示装置,例如液晶显示器(LCD)、等离子体显示器、有机发光二极管(OLED)显示器或其它类型的显示装置。
此外,图1仅为实例,本申请实施例的技术方案不限于图1,例如本申请的技术还可以应用于单侧的点云编码或单侧的点云解码。
由于点云是海量点的集合,存储所述点云不仅会消耗大量的内存,而且不利于传输,也没有这么大的带宽可以支持将点云不经过压缩直接在网络层进行传输,因此对点云进行压缩是很有必要的。
截止目前,可通过点云编码框架对点云进行压缩。
点云编码框架可以是运动图像专家组（Moving Picture Experts Group，MPEG）提供的基于几何的点云压缩（Geometry-based Point Cloud Compression，G-PCC）编解码框架或基于视频的点云压缩（Video-based Point Cloud Compression，V-PCC）编解码框架，也可以是音视频编码标准（Audio Video Standard，AVS）组织提供的AVS-PCC编解码框架。G-PCC及AVS-PCC均针对静态的稀疏型点云，其编码框架大致相同。G-PCC编解码框架可用于针对第一静态点云和第三类动态获取点云进行压缩，V-PCC编解码框架可用于针对第二类动态点云进行压缩。G-PCC编解码框架也称为点云编解码器TMC13，V-PCC编解码框架也称为点云编解码器TMC2。
下面以G-PCC编解码框架对本申请实施例可适用的编解码框架进行说明。
图2是本申请实施例提供的编码框架的示意性框图。
如图2所示,编码框架200可以从采集设备获取点云的位置信息(也称为几何信息或几何位置)和属性信息。点云的编码包括位置编码和属性编码。
位置编码的过程包括:对原始点云进行坐标变换、量化去除重复点等预处理;构建八叉树后进行编码形成几何码流。
属性编码过程包括:通过给定输入点云的位置信息的重建信息和属性信息的真实值,选择三种预测模式的一种进行点云预测,对预测后的结果进行量化,并进行算术编码形成属性码流。
如图2所示,位置编码可通过以下单元实现:
坐标平移坐标量化单元201、八叉树构建单元202、八叉树重建单元203、熵编码单元204。
坐标平移坐标量化单元201可用于将点云中点的世界坐标变换为相对坐标,并对坐标进行量化,可减少坐标的数目;量化后原先不同的点可能被赋予相同的坐标。
八叉树构建单元202可利用八叉树(octree)编码方式编码量化的点的位置信息。例如,将点云按照八叉树的形式进行划分,由此,点的位置可以和八叉树的位置一一对应,通过统计八叉树中有点的位置,并将其标识(flag)记为1,以进行几何编码。
八叉树重建单元203用于重建点云中各点的几何位置,得到点的重建几何位置。
熵编码单元204可以采用熵编码方式对八叉树构建单元202输出的位置信息进行算术编码,即将八叉树构建单元202输出的位置信息利用算术编码方式生成几何码流;几何码流也可称为几何比特流(geometry bitstream)。
属性编码可通过以下单元实现:
空间变换单元210、属性插值单元211、属性预测单元212、残差量化单元213以及熵编码单元214。
空间变换单元210可用于将点云中点的RGB色彩空间变换为YCbCr格式或其他格式。
属性插值单元211可用于转换点云中点的属性信息,以最小化属性失真。例如,属性转化单元211可用于得到点的属性信息的真实值。例如,所述属性信息可以是点的颜色信息。
属性预测单元212可用于对点云中点的属性信息进行预测,以得到点的属性信息的预测值,进而基于点的属性信息的预测值得到点的属性信息的残差值。例如,点的属性信息的残差值可以是点的属性信息的真实值减去点的属性信息的预测值。
残差量化单元213可用于量化点的属性信息的残差值。
熵编码单元214可使用零行程编码(Zero run length coding)对点的属性信息的残差值进行熵编码,以得到属性码流。所述属性码流可以是比特流信息。
结合图2,本申请对于几何结构编码,主要操作和处理如下:
(1)预处理(Pre-processing):包括坐标变换(Transform coordinates)和体素化(Voxelize)。通过缩放和平移的操作,将3D空间中的点云数据转换成整数形式,并将其最小几何位置移至坐标原点处。
(2)几何编码(Geometry encoding):几何编码中包含两种模式,可在不同条件下使用:
(a)基于八叉树的几何编码(Octree):八叉树是一种树形数据结构,在3D空间划分中,对预先设定的包围盒进行均匀划分,每个节点都具有八个子节点。通过对八叉树各个子节点的占用与否采用‘1’和‘0’指示,获得占用码信息(occupancy code)作为点云几何信息的码流。
(b)基于三角表示的几何编码(Trisoup):将点云划分为一定大小的块(block),定位点云表面在块的边缘的交点并构建三角形。通过编码交点位置实现几何信息的压缩。
(3)几何量化(Geometry quantization):量化的精细程度通常由量化参数(QP)来决定,QP取值越大,表示更大取值范围的系数将被量化为同一个输出,因此通常会带来更大的失真,及较低的码率;相反,QP取值较小,表示较小取值范围的系数将被量化为同一个输出,因此通常会带来较小的失真,同时对应较高的码率。在点云编码中,量化是直接对点的坐标信息进行的。
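作为帮助理解的示意（仅为最简单的均匀量化示例，函数名与参数均为假设，实际标准中的量化过程更为复杂），量化步长越大（对应较大的QP），被映射到同一输出的取值范围越大，失真越大、码率越低：

```python
def quantize_coordinate(value, qstep):
    # 均匀量化的示意：qstep 越大，量化越粗，失真越大、码率越低
    return round(value / qstep)
```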
(4)几何熵编码(Geometry entropy encoding):针对八叉树的占用码信息,进行统计压缩编码,最后输出二值化(0或者1)的压缩码流。统计编码是一种无损编码方式,可以有效的降低表达同样的信号所需要的码率。常用的统计编码方式是基于上下文的二值化算术编码(CABAC,Content Adaptive Binary Arithmetic Coding)。
对于属性信息编码,主要操作和处理如下:
(1)属性重上色(Recoloring):有损编码情况下,在几何信息编码后,需编码端解码并重建几何信息,即恢复3D点云的各点坐标信息。在原始点云中寻找对应一个或多个邻近点的属性信息,作为该重建点的属性信息。
（2）属性预测编码（Prediction）：属性预测编码时，根据几何信息或属性信息的邻近关系，选择一个或多个点作为预测参考点，并求加权平均获得最终属性预测值，对真实值与预测值之间的差值进行编码。
(3)属性变换编码(Transform):属性变换编码中包含三种模式,可在不同条件下使用。
(a)预测变换编码(Predicting Transform):根据距离选择子点集,将点云划分成多个不同的细节层(Level of Detail,LoD),实现由粗糙到精细化的点云表示。相邻层之间可以实现自下而上的预测,即由粗糙层中的邻近点预测精细层中引入的点的属性信息,获得对应的残差信号。其中,最底层的点作为参考信息进行编码。
(b)提升变换编码(Lifting Transform):在LoD相邻层预测的基础上,引入邻域点的权重更新策略,最终获得各点的预测属性值,获得对应的残差信号。
(c)分层区域自适应变换编码(Region Adaptive Hierarchical Transform,RAHT):属性信息经过RAHT变换,将信号转换到变换域中,称之为变换系数。
(4)属性信息量化(Attribute quantization):量化的精细程度通常由量化参数(QP)来决定。在预测变换编码及提升变换编码中,是对残差值进行量化后进行熵编码;在RAHT中,是对变换系数进行量化后进行熵编码。
(5)属性熵编码(Attribute entropy coding):量化后的属性残差信号或变换系数一般使用行程编码(run length coding)及算数编码(arithmetic coding)实现最终的压缩。相应的编码模式,量化参数等信息也同样采用熵编码器进行编码。
图3是本申请实施例提供的解码框架的示意性框图。
如图3所示,解码框架300可以从编码设备获取点云的码流,通过解析码得到点云中的点的位置信息和属性信息。点云的解码包括位置解码和属性解码。
位置解码的过程包括:对几何码流进行算术解码;构建八叉树后进行合并,对点的位置信息进行重建,以得到点的位置信息的重建信息;对点的位置信息的重建信息进行坐标变换,得到点的位置信息。点的位置信息也可称为点的几何信息。
属性解码过程包括:通过解析属性码流,获取点云中点的属性信息的残差值;通过对点的属性信息的残差值进行反量化,得到反量化后的点的属性信息的残差值;基于位置解码过程中获取的点的位置信息的重建信息,选择三种预测模式的一种进行点云预测,得到点的属性信息的重建值;对点的属性信息的重建值进行颜色空间反转化,以得到解码点云。
如图3所示,位置解码可通过以下单元实现:
熵解码单元301、八叉树重建单元302、逆坐标量化单元303以及逆坐标平移单元304。
属性解码可通过以下单元实现：
熵解码单元310、逆量化单元311、属性重建单元312以及逆空间变换单元313。
解压缩是压缩的逆过程,类似的,解码框架300中的各个单元的功能可参见编码框架200中相应的单元的功能。
在解码端,解码器获得压缩码流后,首先进行熵解码,获得各种模式信息及量化后的几何信息以及属性信息。首先,几何信息经过逆量化,得到重建的3D点位置信息。另一方面,属性信息经过逆量化得到残差信息,并根据采用的变换模式确认参考信号,得到重建的属性信息,按顺序与几何信息一一对应,产生输出的重建点云数据。
例如,解码框架300可根据点云中点与点之间的欧式距离将点云划分为多个LoD;然后,依次对LoD中点的属性信息进行解码;例如,计算零行程编码技术中零的数量(zero_cnt),以基于zero_cnt对残差进行解码;接着,解码框架300可基于解码出的残差值进行逆量化,并基于逆量化后的残差值与当前点的预测值相加得到该点云的重建值,直到解码完所有的点云。当前点将会作为后续LoD中点的最近邻居,并利用当前点的重建值对后续点的属性信息进行预测。
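下面用一小段Python片段示意“重建值 = 预测值 + 反量化后的残差”这一步骤（函数名与均匀量化步长qstep均为示例性假设，并非标准实现）：

```python
def reconstruct_attribute(pred, quantized_residual, qstep):
    # 反量化：这里假设为简单的均匀量化，实际编解码框架中的量化方式更复杂
    residual = quantized_residual * qstep
    # 重建值 = 预测值 + 反量化后的残差值
    return pred + residual
```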
由上述图2可知,点云编码器200从功能上主要包括了两部分:位置编码模块和属性编码模块,其中位置编码模块用于实现点云的位置信息的编码,形成几何码流,属性编码模块用于实现点云的属性信息的编码,形成属性码流,本申请主要涉及属性信息的编码。
需要说明的是,编码端属性信息编码时确定的预测、量化、编码、滤波等模式信息或者参数信息等在必要时携带在属性码流中。解码端通过解析属性码流及根据已有信息进行分析确定与编码端相同的预测、量化、编码、滤波等模式信息或者参数信息,从而保证编码端获得的属性信息的重建值和解码端获得的属性信息的重建值相同。
上述是基于G-PCC编解码框架下的点云编解码器的基本流程,随着技术的发展,该框架或流程的一些模块或步骤可能会被优化,本申请适用于该基于G-PCC编解码框架下的点云编解码器的基本流程,但不限于该框架及流程。
下面将对本申请技术方案进行详细阐述:
首先以编码端为例。
图4为本申请实施例提供的一实施例的点云中邻居点的选择方法的流程图,该方法的执行主体是具有选择点云中邻居点功能的装置,例如点云中邻居点的选择装置,该点云中邻居点的选择装置可以为上述所述的点云编码器或者为点云编码器中的一部分。如图4所示,本实施例包括:
S410、获取点云数据,并从点云数据中确定当前点所在的目标区域,该目标区域包括多个点。
需要说明的是,本实施例涉及点云的属性信息的编码过程,点云的属性信息编码是在位置信息编码后执行的。
本申请实施例的点云的属性信息的编码过程为,针对点云数据中的每一个点,从点云数据中确定当前属性信息待编码的当前点的目标区域,当前点位于该目标区域内,该目标区域包括多个点。从目标区域中确定出至少两个目标点,并确定这至少两个目标点中每个点的权重系数。根据这至少两个目标点中每个点的权重系数和几何信息,以及当前点的几何信息,确定这至少两个目标点中每个点的权重。根据这至少两个目标点中每个点的权重,从这至少两个目标点中选择当前点的至少一个邻居点。根据至少一个邻居点中每个邻居点的属性信息,确定当前点的属性信息的预测值。根据当前点的属性信息和属性信息的预测值,确定出当前点的属性信息的残差值。对当前点的属性信息的残差值进行量化,并对量化后的残差值进行编码,得到码流。
本申请实施例主要涉及上述编码过程中当前点的邻居点的选择过程。
在一些实施例中,上述目标区域包括点云数据中的所有点。
在一些实施例中,上述目标区域为点云数据中任意一个包括当前点的点云区域。
在一些实施例中,上述目标区域为当前点和当前点的相邻点所组成的点云区域。
例如,获得点云数据中部分或全部的点的几何信息,计算这些点中每一个点与当前点之间的距离,根据距离大小,从这些点中选取距离当前点在预定距离范围内的多个点。将这多个点确定为当前点的相邻点,这些相邻点和当前点组成当前点所在的目标区域。
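下面给出一个仅作示意的Python片段，按欧式距离从候选点中选出距离当前点最近的若干点（函数名及邻近点数量num_neighbours均为示例性假设，并非标准规定）：

```python
import numpy as np

def build_target_region(current, candidates, num_neighbours=16):
    # candidates: 候选点的几何坐标数组，形状为 (M, 3)；current: 当前点的几何坐标 (3,)
    candidates = np.asarray(candidates, dtype=float)
    current = np.asarray(current, dtype=float)
    # 计算每个候选点到当前点的欧式距离
    dist = np.linalg.norm(candidates - current, axis=1)
    # 取距离最近的 num_neighbours 个点，它们与当前点共同组成目标区域
    return np.argsort(dist)[:num_neighbours]
```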
可选的,上述点云数据中部分或全部的点可以为属性信息已编码的点,也可以是属性信息未编码的点。
当前点的属性信息包括颜色属性和/或反射率属性,在编码当前点的不同属性信息时,确定当前点的相邻点的方式可以不同。
示例一,若当前点的属性信息为反射率信息,则确定当前点的相邻点的方式包括但不限于如下几种方式:
方式一,对当前点的反射率属性进行预测时,可以采用莫顿序来选取当前点的相邻点,具体是:
获取点云数据中所有点云的坐标,并按照莫顿排序得到莫顿顺序1,如图5A所示。
接着,把所有点云的坐标(x,y,z)加上一个固定值(j1,j2,j3),用新的坐标(x+j1,y+j2,z+j3)生成点云对应的莫顿码,按照莫顿排序得到莫顿顺序2,如图5B所示。注意在图5A中的A,B,C,D移到图5B中的不同位置,对应的莫顿码也发生了变化,但它们的相对位置保持不变。另外,在图5B中,点D的莫顿码是23,它的相邻点B的莫顿码是21,所以从点D向前最多搜索两个点就可以找到点B。但在图5A中,从点D(莫顿码16)最多需要向前搜索14个点才能找到点B(莫顿码2)。
根据莫顿顺序编码,查找当前点的最近预测点,在莫顿顺序1中选取该当前点的前N1个已编码点作为当前点的N1个相邻点,N1取值范围是大于等于1,在莫顿顺序2中选取当前点的前N2个点已编码点作为当前点的N2个相邻点,N2的取值范围是大于等于1,进而获得当前点的N1+N2个相邻点。
可选的,在PCEM软件中,上述j1=j2=j3=42,N1=N2=4。
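下面给出一个仅作示意的Python片段，演示上述“原始莫顿顺序+偏移莫顿顺序”的候选相邻点选取思路（假设坐标均为非负整数，N1=N2=4、偏移量(42,42,42)取自上文，函数名与实现细节为示例性假设）：

```python
def morton3d(x, y, z, bits=21):
    # 将三个分量的二进制位交织，得到3D莫顿码
    code = 0
    for i in range(bits):
        code |= ((x >> i) & 1) << (3 * i)
        code |= ((y >> i) & 1) << (3 * i + 1)
        code |= ((z >> i) & 1) << (3 * i + 2)
    return code

def candidate_neighbours(points, cur_idx, n1=4, n2=4, offset=(42, 42, 42)):
    # 莫顿顺序1：按原始坐标的莫顿码排序
    order1 = sorted(range(len(points)), key=lambda i: morton3d(*points[i]))
    # 莫顿顺序2：坐标加固定偏移后再按莫顿码排序
    shifted = [(x + offset[0], y + offset[1], z + offset[2]) for x, y, z in points]
    order2 = sorted(range(len(points)), key=lambda i: morton3d(*shifted[i]))
    cands = []
    for order, n in ((order1, n1), (order2, n2)):
        pos = order.index(cur_idx)
        # 在各自顺序中取当前点之前的最多 n 个点作为候选相邻点
        cands.extend(order[max(0, pos - n):pos])
    return cands
```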
方式二,计算在Hilbert(希尔伯特)顺序下当前点的前maxNumOfNeighbours(最大数量个相邻点)个已编码点,将maxNumOfNeighbours个已编码点作为当前点的相邻点。
可选的,maxNumOfNeighbours默认取值为128。
示例二,若当前点的属性信息为颜色信息,则确定当前点的相邻点的方式包括:
当前点的相邻点的空间关系如图5C所示,其中实线框表示当前点,假设相邻点的查找范围为当前点的3X3X3邻域。首先利用当前点的莫顿码得到该3X3X3邻域中莫顿码值最小的块,将该块作为基准块,利用基准块来查找与当前点共面、共线的已编码相邻点。该邻域范围内与当前点共面的相邻点之间的莫顿码关系如图5D所示,与当前点共线的相邻点之间的莫顿码关系如下图5E所示。
利用基准块来搜索与当前点共面、共线的已编码的多个相邻点。
在该实施例中,根据上述方法确定出当前点的多个相邻点后,将这多个相邻点和当前点组成的点云区域确定为当前点的目标区域。
S420、针对目标区域内的至少两个目标点,确定至少两个目标点中每个点的权重系数,至少两个目标点中未包括当前点;
执行上述S410的步骤，确定出当前点的目标区域后，接着从该目标区域内选择至少两个目标点，其中，可以选择N个目标点，N为大于或等于2的正整数。
在一些实施例中,上述至少两个目标点为目标区域内的任意的至少两个目标点。
在一些实施例中,上述至少两个目标点为目标区域内距离当前点最近的至少两个目标点。
从目标区域内确定出至少两个目标点后，确定这至少两个目标点中的每一个点的权重系数。
在一些实施例中,这至少两个目标点中每个点的权重系数相同。
在一些实施例中,这至少两个目标点中至少两个点的权重系数不同。
S430、根据至少两个目标点中每个点的权重系数和几何信息,以及当前点的几何信息,确定至少两个目标点中每个点的权重。
本实施例确定至少两个目标点中每个点的权重的过程相同,为了便于描述,在此以至少两个目标点中一个点的权重确定过程为例。
在一种示例中,根据该点的几何信息和当前点的几何信息,确定出该点与当前点之间的距离,根据该距离与该点的权重系数,得到该点的权重。例如,将该距离与该点的权重系数的乘积的倒数,确定为该点的权重。
在另一种示例中，权重系数包括第一分量的权重系数、第二分量的权重系数和第三分量的权重系数，则根据如下公式(1)确定出该点的权重：

$$w_{ij}=\frac{1}{a\,(x_i-x_{ij})^2+b\,(y_i-y_{ij})^2+c\,(z_i-z_{ij})^2}\qquad(1)$$

其中，wij为该点的权重，a为该点的第一分量的权重系数，b为该点的第二分量的权重系数，c为该点的第三分量的权重系数，(xi,yi,zi)为当前点的几何信息，(xij,yij,zij)为该点的几何信息。可选的，该a、b、c可以查表获得，或者为预设的固定值。
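按照公式(1)，可以写出如下示意性的Python实现（其中eps用于避免除零，为本示例自行加入的假设）：

```python
import numpy as np

def point_weights(current, targets, a=1.0, b=1.0, c=1.0, eps=1e-12):
    # targets: 至少两个目标点的几何信息，形状 (N, 3)；current: 当前点的几何信息 (3,)
    diff = np.asarray(targets, dtype=float) - np.asarray(current, dtype=float)
    # 按分量加权的平方距离：a*(xi-xij)^2 + b*(yi-yij)^2 + c*(zi-zij)^2
    wdist = a * diff[:, 0] ** 2 + b * diff[:, 1] ** 2 + c * diff[:, 2] ** 2
    # 权重为加权距离的倒数：距离越近、权重越大
    return 1.0 / np.maximum(wdist, eps)
```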
S440、根据至少两个目标点中每个点的权重,从至少两个目标点中选择当前点的至少一个邻居点。
假设上述至少一个邻居点为k个邻居点,k为正整数。
在一些实施例中,根据至少两个目标点中每个点的权重,从至少两个目标点中选择权重最大的前k个点作为当前点的邻居点。
在一些实施例中,根据至少两个目标点中每个点的权重,从至少两个目标点中选择权重在预设范围内的k个点作为当前点的邻居点。
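得到各目标点的权重后，选取权重最大的前k个点可以按如下方式示意（k的默认取值3仅为示例性假设）：

```python
def select_neighbours(weights, k=3):
    # 按权重从大到小排序，返回前 k 个目标点的索引作为当前点的邻居点
    order = sorted(range(len(weights)), key=lambda i: weights[i], reverse=True)
    return order[:k]
```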
本申请实施例提供的点云中邻居点的选择方法,通过从点云数据中确定当前点所在的目 标区域,并从该目标区域内选择出至少两个目标点,确定这至少两个目标点中每个点的权重系数;根据这至少两个目标点中每个点的权重系数和几何信息,以及当前点的几何信息,确定这至少两个目标点中每个点的权重,根据这至少两个目标点中每个点的权重,从这至少两个目标点中选择当前点的至少一个邻居点,实现当前点的邻居点的准确选择。这样基于准确选择的邻居点,对当前点进行属性预测时,可以提高属性预测的准确性,进而提高点云的编码效率。
下面结合具体的实施例,详细介绍上述S420。
上述S420中确定至少两个目标点中每个点的权重系数的方式包括但不限于如下几种:
方式一,将至少两个目标点划分为至少一组,将至少一组中每一组对应的默认权重系数确定为每一组中每个点的权重系数,其中,每一组对应的默认权重系数不同。
举例说明，假设将至少两个目标点划分为两组，分别为第一组和第二组，假设第一组包括M1个点，第二组包括M2个点，M1+M2=N，第一组对应的默认权重系数为权重系数1，第二组对应的默认权重系数为权重系数2，权重系数1与权重系数2不相同。这样，将权重系数1确定为该第一组内M1个点的权重系数，也就是说，第一组内这M1个点中每一个点的权重系数相同，均为权重系数1。将权重系数2确定为该第二组内M2个点的权重系数，也就是说，第二组内这M2个点中每一个点的权重系数相同，均为权重系数2。在一些实施例中，至少两个组对应的默认权重系数相同。
在一些实施例中,将至少两个目标点划分为一组,将默认权重系数设定为该至少两个目标点的权重系数,这至少两个目标点中每个点的权重系数相同。
在一些实施例中,权重系数包括第一分量的权重系数、第二分量的权重系数和第三分量的权重系数。
需要说明的是,在不同的度量空间,则上述各分量不同。例如,在欧式空间,则将x称为第一分量,将y称为第二分量,将z称为第三分量,对应的点云中点的几何坐标为(x,y,z)。在极坐标空间,则将r称为第一分量,将φ称为第二分量,将θ称为第三分量,对应的点云在点的几何坐标为(r,φ,θ)。
在一些实施例中,上述将至少两个目标点划分为至少一组,同一个组对应的第一分量的权重系数、第二分量的权重系数和第三分量的权重系数均相等。例如,上述第一组内M1个点中每个点的权重系数均相同为权重系数1,该权重系数1包括第一分量的权重系数a1、第二分量的权重系数b1和第三分量的权重系数c1,其中a1=b1=c1,例如a1=b1=c1=1。上述第二组内M2个点中每个点的权重系数均相同为权重系数2,该权重系数2包括第一分量的权重系数a2、第二分量的权重系数b2和第三分量的权重系数c2,其中a2=b2=c2。
在一些实施例中,至少一组中同一个组对应的第一分量的权重系数、第二分量的权重系数和第三分量的权重系数中至少两个系数不相等。例如,第一组内M1个点中每个点的权重系数均相同为权重系数1,该权重系数1包括第一分量的权重系数a1、第二分量的权重系数b1和第三分量的权重系数c1,其中a1、b1和c1中至少两个系数不相等,例如,a1=b1≠c1,或者,a1≠b1≠c1,或者,a1≠b1=c1,或者,a1=c1≠b1。可选的,a1=b1=1,c1=16。
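方式一可以用如下极简的Python片段示意（分组方式与具体数值均为示例性假设，其中一组取a=b=1、c=16对应上文给出的可选取值）：

```python
# 每一组对应一组默认权重系数 (a, b, c)，同一组内的所有点共用该组系数
DEFAULT_COEFFS = {
    0: (1.0, 1.0, 16.0),  # 第一组：a1=b1=1，c1=16（取自上文的可选示例）
    1: (1.0, 1.0, 1.0),   # 第二组：三个分量的权重系数相等（示例性假设）
}

def coeff_for_point(group_id):
    # group_id: 目标点所属分组的编号
    return DEFAULT_COEFFS[group_id]
```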
方式二,根据至少两个目标点中每个点的几何信息,确定至少两个目标点中每个点的权重系数。
该实现方式中,根据至少两个目标点的空间分布情况,来确定至少两个目标点中每个点 的权重系数。在该方式中,确定出的至少两个目标点中每个点的权重系数相同。
在一种可能的实现方式中,如图7所示,上述S420包括如下S420-A1和S420-A2:
S420-A1、根据至少两个目标点中每个点的几何信息,分别确定至少两个目标点在第一分量方向上的第一分布值、在第二分量方向上的第二分布值和在第三分量方向上的第三分布值;
S420-A2、根据第一分布值、第二分布值和第三分布值,确定至少两个目标点中每个点的权重系数。
由于部分点云数据集，例如激光雷达（light detection and ranging，简称LiDAR）扫描的点云数据，在各分量方向的分布情况有一定的差异性。因此，该方式中，根据至少两个目标点的几何信息，确定出至少两个目标点在不同分量上的分布值，根据不同分量对应的分布值，确定各点的权重系数，实现对点的权重系数的准确计算，这样在基于权重系数选择邻居点时，可以提高邻居点的选择准确性，进而提高点云的预测准确性。
在一些实施例中，上述S420-A1包括：根据至少两个目标点中每个点的几何信息，将至少两个目标点分别向第一分量、第二分量和第三分量方向上进行投影，将至少两个目标点在第一分量方向上的投影作为第一分布值，将至少两个目标点在第二分量方向上的投影作为第二分布值，将至少两个目标点在第三分量方向上的投影作为第三分布值。
在一些实施例中，上述S420-A1包括：根据至少两个目标点中每个点在第一分量方向上的几何信息，确定至少两个目标点在第一分量方向上的第一取值范围。假设第一分量为x，第一取值范围为[xmax,xmin]，其中xmax为至少两个目标点的几何信息中x方向上的最大值，xmin为至少两个目标点的几何信息中x方向上的最小值。根据至少两个目标点中每个点在第二分量方向上的几何信息，确定至少两个目标点在第二分量方向上的第二取值范围。假设第二分量为y，第二取值范围为[ymax,ymin]，其中ymax为至少两个目标点的几何信息中y方向上的最大值，ymin为至少两个目标点的几何信息中y方向上的最小值。根据至少两个目标点中每个点在第三分量方向上的几何信息，确定至少两个目标点在第三分量方向上的第三取值范围。假设第三分量为z，第三取值范围为[zmax,zmin]，其中zmax为至少两个目标点的几何信息中z方向上的最大值，zmin为至少两个目标点的几何信息中z方向上的最小值。
接着,根据第一取值范围、第二取值范围和第三取值范围,确定第一分布值、第二分布值和第三分布值。
在一种示例中,将第一取值范围的范围值确定为第一分布值,例如第一分布值ρx=xmax-xmin;将第二取值范围的范围值确定为第二分布值,例如第二分布值ρy=ymax-ymin;将第三取值范围的范围值确定为第三分布值,例如第三分布值ρz=zmax-zmin。
在另一种示例中,确定至少两个目标点的数量与第一取值范围的范围值之间的第一比值,并将第一比值确定为第一分布值,例如第一分布值ρx=N/(xmax-xmin);确定至少两个目标点的数量与第二取值范围的范围值之间的第二比值,并将第二比值确定为第二分布值,例如第二分布值ρy=N/(ymax-ymin);确定至少两个目标点的数量与第三取值范围的范围值之间的第三比值,并将第三比值确定为第三分布值,例如第三分布值ρz=N/(zmax-zmin)。
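上述两种基于取值范围的分布值计算可用如下Python片段示意（use_density=True对应“点数与范围值之比”的情形；其中用于避免除零的极小值为示例性假设）：

```python
import numpy as np

def range_distribution_values(targets, use_density=False):
    # targets: 至少两个目标点的几何信息，形状 (N, 3)
    targets = np.asarray(targets, dtype=float)
    # 各分量方向上的范围值：xmax-xmin, ymax-ymin, zmax-zmin
    ranges = targets.max(axis=0) - targets.min(axis=0)
    if use_density:
        # 分布值取“目标点数量 / 范围值”
        return len(targets) / np.maximum(ranges, 1e-12)
    # 分布值直接取范围值
    return ranges
```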
在一些实施例中，上述S420-A1包括：根据至少两个目标点中每个点在第一分量方向上的几何信息，确定至少两个目标点在第一分量方向上的第一方差；根据至少两个目标点中每个点在第二分量方向上的几何信息，确定至少两个目标点在第二分量方向上的第二方差；根据至少两个目标点中每个点在第三分量方向上的几何信息，确定至少两个目标点在第三分量方向上的第三方差；根据第一方差、第二方差和第三方差，确定第一分布值、第二分布值和第三分布值。
例如，根据如下公式(2)，确定第一方差：

$$\sigma_x^2=\frac{1}{N}\sum_{j=1}^{N}\left(x_{ij}-\bar{x}\right)^2\qquad(2)$$

例如，根据如下公式(3)，确定第二方差：

$$\sigma_y^2=\frac{1}{N}\sum_{j=1}^{N}\left(y_{ij}-\bar{y}\right)^2\qquad(3)$$

例如，根据如下公式(4)，确定第三方差：

$$\sigma_z^2=\frac{1}{N}\sum_{j=1}^{N}\left(z_{ij}-\bar{z}\right)^2\qquad(4)$$

其中，N为目标点的数量，(xij,yij,zij)为第j个目标点的几何信息，$\bar{x}$、$\bar{y}$、$\bar{z}$分别为至少两个目标点在第一、第二、第三分量方向上的均值。
根据上述方法确定出第一方差、第二方差和第三方差后,根据第一方差、第二方差和第三方差,确定第一分布值、第二分布值和第三分布值。
例如,根据第一方差得到第一均方差,将该第一均方差作为第一分布值,根据第二方差得到第二均方差,将该第二均方差作为第二分布值,根据第三方差得到第三均方差,将该第三均方差作为第三分布值。
例如,将第一方差确定为第一分布值;将第二方差确定为第二分布值;将第三方差确定为第三分布值。
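基于方差的分布值计算可用如下Python片段示意（use_std=True对应以均方差作为分布值的情形，函数名为示例性假设）：

```python
import numpy as np

def variance_distribution_values(targets, use_std=False):
    # targets: 至少两个目标点的几何信息，形状 (N, 3)
    var = np.asarray(targets, dtype=float).var(axis=0)  # 第一/第二/第三方差
    # 以均方差（标准差）或方差本身作为各分量方向上的分布值
    return np.sqrt(var) if use_std else var
```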
根据上述方法确定出第一分布值、第二分布值和第三分布值后,执行S420-A2,根据第一分布值、第二分布值和第三分布值,确定至少两个目标点中每个点的权重系数。
在一些实施例中,上述S420-A2包括:将第一分布值确定为第一分量的权重系数,将第二分布值确定为第二分量的权重系数,将第三分布值确定为第三分量的权重系数。例如,第一分量的权重系数a=ρx,第二分量的权重系数b=ρy,第三分量的权重系数c=ρz。
在一些实施例中,上述S420-A2包括:确定第一分布值、第二分布值和第三分布值的总和,根据第一分布值与总和的比,确定第一分量的权重系数,根据第二分布值与总和的比,确定第二分量的权重系数,根据第三分布值与总和的比,确定第三分量的权重系数。例如,将第一分布值与总和的比,确定为第一分量的权重系数;将第二分布值与总和的比,确定为第二分量的权重系数;将第三分布值与总和的比,确定为第三分量的权重系数。例如,a=ρx/(ρx+ρy+ρz),b=ρy/(ρx+ρy+ρz),c=ρz/(ρx+ρy+ρz)。
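由分布值得到权重系数的两种做法可用如下Python片段示意（函数与参数命名均为示例性假设）：

```python
import numpy as np

def coeffs_from_distribution(rho, normalize=False):
    # rho = (ρx, ρy, ρz)：三个分量方向上的分布值
    rho = np.asarray(rho, dtype=float)
    if normalize:
        # 做法二：各分布值与总和之比作为对应分量的权重系数
        rho = rho / rho.sum()
    # 做法一：直接以分布值作为权重系数，即 a=ρx, b=ρy, c=ρz
    a, b, c = rho
    return a, b, c
```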
上文详细介绍了方式二中根据至少两个目标点中每个点的几何信息,确定至少两个目标点中每个点的权重系数的具体过程。下面对确定至少两个目标点中每个点的权重系数的第三种方式进行介绍。
方式三,根据至少两个目标点中每个点的属性信息,确定至少两个目标点中每个点的权重系数。
该实现方式中,根据至少两个目标点的属性信息,来确定至少两个目标点中每个点的权重系数。在该方式中,确定出的至少两个目标点中每个点的权重系数相同。
在一些实施例中,如图6所示,上述S420包括:
S420-B1、根据至少两个目标点中每个点的几何信息,确定至少两个目标点围成的区域的中心点。
S420-B2、从至少两个目标点中分别确定与中心点在第一分量方向上距离最远的第一点、在第二分量方向上距离最远的第二点,以及在第三分量方向上距离最远的第三点;
S420-B3、根据第一点、第二点和第三点中每个点的属性信息,分别确定至少两个目标点在第一分量方向上的第一分布值、在第二分量方向上的第二分布值和在第三分量方向上的第三分布值;
S420-B4、根据第一分布值、第二分布值和在第三分布值,确定至少两个目标点中每个点的权重系数。
在一些实施例中,上述S420-B3包括:获取中心点的属性信息;根据第一点的属性信息与中心点的属性信息,确定至少两个目标点在第一分量方向上的第一分布值,根据第二点的属性信息与中心点的属性信息,确定至少两个目标点在第二分量方向上的第二分布值,根据第三点的属性信息与中心点的属性信息,确定至少两个目标点在第三分量方向上的第三分布值。
例如,将第一点的属性信息与中心点的属性信息的差值,确定为至少两个目标点在第一分量方向上的第一分布值;将第二点的属性信息与中心点的属性信息的差值,确定为至少两个目标点在第二分量方向上的第二分布值;将第三点的属性信息与中心点的属性信息的差值,确定为至少两个目标点在第三分量方向上的第三分布值。
在一些实施例中,上述S420-B3包括:根据至少两个目标点中每个点的属性信息,确定至少两个目标点的属性信息的平均值;根据第一点的属性信息与至少两个目标点的属性信息的平均值,确定至少两个目标点在第一分量方向上的第一分布值,根据第二点的属性信息与至少两个目标点的属性信息的平均值,确定至少两个目标点在第二分量方向上的第二分布值,根据第三点的属性信息与至少两个目标点的属性信息的平均值,确定至少两个目标点在第三分量方向上的第三分布值。
例如,将第一点的属性信息与至少两个目标点的属性信息的平均值的差值,确定为至少两个目标点在第一分量方向上的第一分布值;将第二点的属性信息与至少两个目标点的属性信息的平均值的差值,确定为至少两个目标点在第二分量方向上的第二分布值;将第三点的属性信息与至少两个目标点的属性信息的平均值的差值,确定为至少两个目标点在第三分量方向上的第三分布值。
根据上述方法,确定出第一分布值、第二分布值和在第三分布值后,执行上述S420-B4,根据第一分布值、第二分布值和在第三分布值,确定至少两个目标点中每个点的权重系数,具体执行过程参照上述S420-A2的描述,在此不再赘述。
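方式三的计算过程可用如下Python片段示意（“中心点”取目标点坐标的均值、其属性以最接近中心的目标点的属性近似，分布值取差值的绝对值，均为示例性假设）：

```python
import numpy as np

def attribute_distribution_values(coords, attrs, use_mean=False):
    # coords: 目标点几何信息 (N, 3)；attrs: 每个目标点的一个标量属性（如反射率）
    coords = np.asarray(coords, dtype=float)
    attrs = np.asarray(attrs, dtype=float)
    center = coords.mean(axis=0)  # 目标点围成区域的中心点（示例：取坐标均值）
    # 分别找出与中心点在第一/第二/第三分量方向上距离最远的第一点、第二点、第三点
    far_idx = np.argmax(np.abs(coords - center), axis=0)
    if use_mean:
        ref = attrs.mean()  # 参考属性：全部目标点属性的平均值
    else:
        ref = attrs[np.argmin(np.linalg.norm(coords - center, axis=1))]  # 参考属性：近似的“中心点属性”
    # 各分量方向上的分布值：对应最远点属性与参考属性的差值（此处取绝对值）
    return np.abs(attrs[far_idx] - ref)
```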
上文对编码端确定至少两个目标点中每个点的权重系数的方式进行了介绍。
在一些实施例中,由上述可知,编码端可以采用上述方式一、方式二和方式三,3种方式中的任意一种方式确定出至少两个目标点中每个点的权重系数,为了保持解码端和编码端 的一致,则编码端可以在码流中携带权重系数的确定方式的指示信息,例如,编码端采用上述方式二确定点的权重系数,则编码端可以将方式二的指示信息携带在码流中发送给解码端,使得解码端根据该方式二,确定点的权重系数。
在一些实施例中,编码端和解码端可以采用上述各方式中的一种默认的方式来确定点的权重系数。例如,编码端和解码端均默认采用上述方式二来确定点的权重系数。在这种情况下,码流中可以不携带权重系数的确定方式的指示信息。
在一些实施例中,编码端可以直接将确定好的点的权重系数携带在码流中发送给解码端,这样解码端直接从码流中解码出点的权重系数进行邻居点的选择,不需要自行进行重新计算,进而降低了解码难度。
下面结合图7,以解码端为例,对本申请的技术方案进行介绍。
图7为本申请实施例提供的另一实施例的点云中邻居点的选择方法的流程图,如图7所示,包括:
S701、解码码流,获取点云数据中点的几何信息。
需要说明的是，解码器解析码流时，优先解码点云的位置信息，之后再解码点云的属性信息。
S702、根据点云数据中点的几何信息,从点云数据中确定当前点所在的目标区域,目标区域包括多个点。
例如,根据点云数据中每个点的位置信息和当前点的位置信息,获得当前点与每个点之间的距离,并根据当前点与每个点之间的距离,从点云数据中获得距离当前点最近的N个已解码点作为当前点的N个相邻点。
其中上述S702的具体实现过程可以参照上述S410的具体描述，在此不再赘述。
S703、针对目标区域内已解码的至少两个目标点,确定至少两个目标点中每个点的权重系数,至少两个目标点中未包括当前点;
解码端从目标区域内获取已解码的至少两个目标点,比如,可以选择N个。
可选的,上述N个已解码点为目标区域内任意的点。
可选的,上述N个已解码点为目标区域内当前点的N个已解码的相邻点。其中相邻点的确定方式可以参照上述S410的描述。
在一些实施例中,上述S703中确定至少两个目标点中每个点的权重系数的方式包括但不限于如下几种方式:
方式一,将至少两个目标点划分为至少一组,将至少一组中每一组对应的默认权重系数确定为每一组中每个点的权重系数,其中,每一组对应的默认权重系数不同。
例如，编码端在码流中携带权重系数的确定方式的指示信息所指示的权重系数的确定方式为方式一，或者，解码端默认的确定权重系数的方式为方式一，则解码端根据上述方式一，将至少两个目标点划分为至少一组，将至少一组中每一组对应的默认权重系数确定为每一组中每个点的权重系数。在一些实施例中，至少两个组对应的默认权重系数相同。
具体的,上述方式一参照上述S420中的方式一的具体描述,在此不再赘述。
可选的,上述权重系数包括第一分量的权重系数、第二分量的权重系数和第三分量的权重系数。
在一种示例中,同一个组对应的第一分量的权重系数、第二分量的权重系数和第三分量的权重系数均相等。
在另一种示例中,同一个组对应的第一分量的权重系数、第二分量的权重系数和第三分 量的权重系数中至少两个系数不相等。
方式二,根据至少两个目标点中每个点的几何信息,确定至少两个目标点中每个点的权重系数。
例如，编码端在码流中携带权重系数的确定方式的指示信息所指示的权重系数的确定方式为方式二，或者，解码端默认的确定权重系数的方式为方式二，则解码端根据上述方式二，根据至少两个目标点中每个点的几何信息，确定至少两个目标点中每个点的权重系数。
在一种可能的实现方式中,上述根据至少两个目标点中每个点的几何信息,确定至少两个目标点中每个点的权重系数包括如下步骤:
S703-A1、根据至少两个目标点中每个点的几何信息,分别确定至少两个目标点在第一分量方向上的第一分布值、在第二分量方向上的第二分布值和在第三分量方向上的第三分布值;
S703-A2、根据第一分布值、第二分布值和第三分布值,确定至少两个目标点中每个点的权重系数,其中至少两个目标点中每个点的权重系数相同。
在一些实施例中,上述S703-A1包括:根据至少两个目标点中每个点在第一分量方向上的几何信息,确定至少两个目标点在第一分量方向上的第一取值范围;根据至少两个目标点中每个点在第二分量方向上的几何信息,确定至少两个目标点在第二分量方向上的第二取值范围;根据至少两个目标点中每个点在第三分量方向上的几何信息,确定至少两个目标点在第三分量方向上的第三取值范围;根据第一取值范围、第二取值范围和第三取值范围,确定第一分布值、第二分布值和第三分布值。
例如,将第一取值范围的范围值确定为第一分布值;将第二取值范围的范围值确定为第二分布值;将第三取值范围的范围值确定为第三分布值。
再例如,确定N与第一取值范围的范围值之间的第一比值,并将第一比值确定第一分布值;确定N与第二取值范围的范围值之间的第二比值,并将第二比值确定第二分布值;确定N与第三取值范围的范围值之间的第三比值,并将第三比值确定第三分布值。
在一些实施例中,上述S703-A1包括:根据至少两个目标点中每个点在第一分量方向上的几何信息,确定至少两个目标点在第一分量方向上的第一方差;根据至少两个目标点中每个点在第二分量方向上的几何信息,确定至少两个目标点在第二分量方向上的第二方差;根据至少两个目标点中每个点在第三分量方向上的几何信息,确定至少两个目标点在第三分量方向上的第三方差;根据第一方差、第二方差和第三方差,确定第一分布值、第二分布值和第三分布值。
例如,将第一方差确定为第一分布值;将第二方差确定为第二分布值;将第三方差确定为第三分布值。
方式三,根据至少两个目标点中每个点的属性信息,确定至少两个目标点中每个点的权重系数。
例如，编码端在码流中携带权重系数的确定方式的指示信息所指示的权重系数的确定方式为方式三，或者，解码端默认的确定权重系数的方式为方式三，则解码端根据上述方式三，根据至少两个目标点中每个点的属性信息，确定至少两个目标点中每个点的权重系数。
在一种可能的实现方式中,上述根据至少两个目标点中每个点的属性信息,确定至少两个目标点中每个点的权重系数包括如下步骤:
S703-B1、根据至少两个目标点中每个点的几何信息，确定至少两个目标点围成的区域的中心点；
S703-B2、从至少两个目标点中分别确定与中心点在第一分量方向上距离最远的第一点、在第二分量方向上距离最远的第二点，以及在第三分量方向上距离最远的第三点；
S703-B3、根据第一点、第二点和第三点中每个点的属性信息，分别确定至少两个目标点在第一分量方向上的第一分布值、在第二分量方向上的第二分布值和在第三分量方向上的第三分布值；
S703-B4、根据第一分布值、第二分布值和第三分布值，确定至少两个目标点中每个点的权重系数，其中至少两个目标点中每个点的权重系数相同。
在一些实施例中，上述S703-B3包括：获取中心点的属性信息；根据第一点的属性信息与中心点的属性信息，确定至少两个目标点在第一分量方向上的第一分布值，根据第二点的属性信息与中心点的属性信息，确定至少两个目标点在第二分量方向上的第二分布值，根据第三点的属性信息与中心点的属性信息，确定至少两个目标点在第三分量方向上的第三分布值。
例如,将第一点的属性信息与中心点的属性信息的差值,确定为至少两个目标点在第一分量方向上的第一分布值;将第二点的属性信息与中心点的属性信息的差值,确定为至少两个目标点在第二分量方向上的第二分布值;将第三点的属性信息与中心点的属性信息的差值,确定为至少两个目标点在第三分量方向上的第三分布值。
在一些实施例中，上述S703-B3包括：根据至少两个目标点中每个点的属性信息，确定至少两个目标点的属性信息的平均值；根据第一点的属性信息与至少两个目标点的属性信息的平均值，确定至少两个目标点在第一分量方向上的第一分布值，根据第二点的属性信息与至少两个目标点的属性信息的平均值，确定至少两个目标点在第二分量方向上的第二分布值，根据第三点的属性信息与至少两个目标点的属性信息的平均值，确定至少两个目标点在第三分量方向上的第三分布值。
例如,将第一点的属性信息与至少两个目标点的属性信息的平均值的差值,确定为至少两个目标点在第一分量方向上的第一分布值;将第二点的属性信息与至少两个目标点的属性信息的平均值的差值,确定为至少两个目标点在第二分量方向上的第二分布值;将第三点的属性信息与至少两个目标点的属性信息的平均值的差值,确定为至少两个目标点在第三分量方向上的第三分布值。
在一些实施例中，上述S703-A2和S703-B4包括：将第一分布值确定为第一分量的权重系数，将第二分布值确定为第二分量的权重系数，将第三分布值确定为第三分量的权重系数。
在一些实施例中，上述S703-A2和S703-B4包括：确定第一分布值、第二分布值和第三分布值的总和，根据第一分布值与总和的比，确定第一分量的权重系数，根据第二分布值与总和的比，确定第二分量的权重系数，根据第三分布值与总和的比，确定第三分量的权重系数。
例如,将第一分布值与之和的比,确定为第一分量的权重系数;将第二分布值与之和的比,确定为第二分量的权重系数;将第三分布值与之和的比,确定为第三分量的权重系数。
方式四,解码码流,得到所述至少两个目标点中每个点的权重系数。
该方式中,编码端确定出点的权重系数后,将点的权重系数携带在码流中。这样,解码端直接从码流中解码出至少两个目标点中每个点的权重系数,不需要自行进行重新计算,进而降低了解码难度。
解码端根据上述方式确定出至少两个目标点中每个点的权重系数后,执行S704和S705。
S704、根据至少两个目标点中每个点的权重系数和几何信息,以及当前点的几何信息,确定至少两个目标点中每个点的权重;
S705、根据至少两个目标点中每个点的权重,从至少两个目标点中选择当前点的至少一个邻居点。
上述S704和S705的具体实现过程可以参照上述S430和S440的具体描述，在此不再赘述。
应理解，解码端的点云中邻居点的选择方法为编码端的点云中邻居点的选择方法的逆过程，解码端方法中的步骤可以参考编码端方法中的相应步骤，为了避免重复，在此不再赘述。
以上结合附图详细描述了本申请的优选实施方式,但是,本申请并不限于上述实施方式中的具体细节,在本申请的技术构思范围内,可以对本申请的技术方案进行多种简单变型,这些简单变型均属于本申请的保护范围。例如,在上述具体实施方式中所描述的各个具体技术特征,在不矛盾的情况下,可以通过任何合适的方式进行组合,为了避免不必要的重复,本申请对各种可能的组合方式不再另行说明。又例如,本申请的各种不同的实施方式之间也可以进行任意组合,只要其不违背本申请的思想,其同样应当视为本申请所公开的内容。
还应理解,在本申请的各种方法实施例中,上述各过程的序号的大小并不意味着执行顺序的先后,各过程的执行顺序应以其功能和内在逻辑确定,而不应对本申请实施例的实施过程构成任何限定。
上文结合图1至图7,详细描述了本申请的方法实施例,下文结合图8至图10,详细描述本申请的装置实施例。
图8是本申请实施例的一点云中邻居点的装置的示意性框图。该装置10可以为编码设备,也可以为编码设备中的一部分。
如图8所示,点云中邻居点的装置10可包括:
获取单元11,用于获取点云数据,并从点云数据中确定当前点所在的目标区域,目标区域包括多个点;
权重系数确定单元12,用于针对目标区域内的至少两个目标点,确定至少两个目标点中每个点的权重系数,至少两个目标点中未包括当前点;
权重确定单元13,用于根据至少两个目标点中每个点的权重系数和几何信息,以及当前点的几何信息,确定至少两个目标点中每个点的权重;
邻居点选择单元14,用于根据至少两个目标点中每个点的权重,从至少两个目标点中选择当前点的至少一个邻居点。
在一些实施例中,权重系数确定单元12,具体用于将至少两个目标点划分为至少一组,将至少一组中每一组对应的默认权重系数确定为每一组中每个点的权重系数;或者,根据至少两个目标点中每个点的几何信息,确定至少两个目标点中每个点的权重系数;或者,根据至少两个目标点中每个点的属性信息,确定至少两个目标点中每个点的权重系数。
可选的,权重系数包括第一分量的权重系数、第二分量的权重系数和第三分量的权重系数。
在一些实施例中,上述至少两个组中同一个组对应的第一分量的权重系数、第二分量的权重系数和第三分量的权重系数均相等;或者,同一个组对应的第一分量的权重系数、第二分量的权重系数和第三分量的权重系数中至少两个系数不相等。
在一些实施例中,权重系数确定单元12,具体用于根据至少两个目标点中每个点的几何信息,分别确定至少两个目标点在第一分量方向上的第一分布值、在第二分量方向上的第二分布值和在第三分量方向上的第三分布值;根据第一分布值、第二分布值和第三分布值,确定至少两个目标点中每个点的权重系数。
在一些实施例中,权重系数确定单元12,具体用于根据至少两个目标点中每个点在第一分量方向上的几何信息,确定至少两个目标点在第一分量方向上的第一取值范围;根据至少两个目标点中每个点在第二分量方向上的几何信息,确定至少两个目标点在第二分量方向上的第二取值范围;根据至少两个目标点中每个点在第三分量方向上的几何信息,确定至少两个目标点在第三分量方向上的第三取值范围;根据第一取值范围、第二取值范围和第三取值范围,确定第一分布值、第二分布值和第三分布值。
在一些实施例中,权重系数确定单元12,具体用于将第一取值范围的范围值确定为第一分布值;将第二取值范围的范围值确定为第二分布值;将第三取值范围的范围值确定为第三分布值。
在一些实施例中,权重系数确定单元12,具体用于确定至少两个目标点的数量与第一取值范围的范围值之间的第一比值,并将第一比值确定第一分布值;确定至少两个目标点的数量与第二取值范围的范围值之间的第二比值,并将第二比值确定第二分布值;确定至少两个目标点的数量与第三取值范围的范围值之间的第三比值,并将第三比值确定第三分布值。
在一些实施例中,权重系数确定单元12,具体用于根据至少两个目标点中每个点在第一分量方向上的几何信息,确定至少两个目标点在第一分量方向上的第一方差;根据至少两个目标点中每个点在第二分量方向上的几何信息,确定至少两个目标点在第二分量方向上的第二方差;根据至少两个目标点中每个点在第三分量方向上的几何信息,确定至少两个目标点在第三分量方向上的第三方差;根据第一方差、第二方差和第三方差,确定第一分布值、第二分布值和第三分布值。
在一些实施例中,权重系数确定单元12,具体用于将第一方差确定为第一分布值;将第二方差确定为第二分布值;将第三方差确定为第三分布值。
在一些实施例中,权重系数确定单元12,具体用于根据至少两个目标点中每个点的几何信息,确定至少两个目标点围成的区域的中心点;从至少两个目标点中分别确定与中心点在第一分量方向上距离最远的第一点、在第二分量方向上距离最远的第二点,以及在第三分量方向上距离最远的第三点;根据第一点、第二点和第三点中每个点的属性信息,分别确定至少两个目标点在第一分量方向上的第一分布值、在第二分量方向上的第二分布值和在第三分量方向上的第三分布值;根据第一分布值、第二分布值和在第三分布值,确定至少两个目标点中每个点的权重系数。
在一些实施例中,权重系数确定单元12,具体用于获取中心点的属性信息;根据第一点的属性信息与中心点的属性信息,确定至少两个目标点在第一分量方向上的第一分布值,根据第二点的属性信息与中心点的属性信息,确定至少两个目标点在第二分量方向上的第二分布值,根据第三点的属性信息与中心点的属性信息,确定至少两个目标点在第三分量方向上的第三分布值。
在一些实施例中,权重系数确定单元12,具体用于将第一点的属性信息与中心点的属性信息的差值,确定为至少两个目标点在第一分量方向上的第一分布值;将第二点的属性信息与中心点的属性信息的差值,确定为至少两个目标点在第二分量方向上的第二分布值;将第 三点的属性信息与中心点的属性信息的差值,确定为至少两个目标点在第三分量方向上的第三分布值。
在一些实施例中,权重系数确定单元12,具体用于根据至少两个目标点中每个点的属性信息,确定至少两个目标点的属性信息的平均值;根据第一点的属性信息与至少两个目标点的属性信息的平均值,确定至少两个目标点在第一分量方向上的第一分布值,根据第二点的属性信息与至少两个目标点的属性信息的平均值,确定至少两个目标点在第二分量方向上的第二分布值,根据第三点的属性信息与至少两个目标点的属性信息的平均值,确定至少两个目标点在第三分量方向上的第三分布值。
在一些实施例中,权重系数确定单元12,具体用于将第一点的属性信息与至少两个目标点的属性信息的平均值的差值,确定为至少两个目标点在第一分量方向上的第一分布值;将第二点的属性信息与至少两个目标点的属性信息的平均值的差值,确定为至少两个目标点在第二分量方向上的第二分布值;将第三点的属性信息与至少两个目标点的属性信息的平均值的差值,确定为至少两个目标点在第三分量方向上的第三分布值。
在一些实施例中,权重系数确定单元12,具体用于将第一分布值确定为第一分量的权重系数,将第二分布值确定为第二分量的权重系数,将第三分布值确定为第三分量的权重系数;或者,
确定第一分布值、第二分布值和第三分布值的总和,根据第一分布值与总和的比,确定第一分量的权重系数,根据第二分布值与总和的比,确定第二分量的权重系数,根据第三分布值与总和的比,确定第三分量的权重系数。
在一些实施例中,权重系数确定单元12,具体用于将第一分布值与总和的比,确定为第一分量的权重系数;将第二分布值与总和的比,确定为第二分量的权重系数;将第三分布值与总和的比,确定为第三分量的权重系数。
在一些实施例中,在码流中携带有至少两个目标点中每个点的权重系数。
应理解的是,装置实施例与方法实施例可以相互对应,类似的描述可以参照方法实施例。为避免重复,此处不再赘述。具体地,图8所示的装置可以执行上述方法的实施例,并且装置中的各个模块的前述和其它操作和/或功能分别为了实现编码器对应的方法实施例,为了简洁,在此不再赘述。
上文中结合附图从功能模块的角度描述了本申请实施例的装置。应理解,该功能模块可以通过硬件形式实现,也可以通过软件形式的指令实现,还可以通过硬件和软件模块组合实现。具体地,本申请实施例中的方法实施例的各步骤可以通过处理器中的硬件的集成逻辑电路和/或软件形式的指令完成,结合本申请实施例公开的方法的步骤可以直接体现为硬件译码处理器执行完成,或者用译码处理器中的硬件及软件模块组合执行完成。可选地,软件模块可以位于随机存储器,闪存、只读存储器、可编程只读存储器、电可擦写可编程存储器、寄存器等本领域的成熟的存储介质中。该存储介质位于存储器,处理器读取存储器中的信息,结合其硬件完成上述方法实施例中的步骤。
图9是本申请实施例的一点云中邻居点的选择装置的示意性框图。该装置20可以为上述解码设备,也可以为解码设备中的一部分。
如图9所示,点云中邻居点的选择装置20可包括:
解码单元21,用于解码码流,获取点云数据中点的几何信息;
区域确定单元22,用于根据点云数据中点的几何信息,从点云数据中确定当前点所在的目标区域,目标区域包括多个点;
权重系数确定单元23,用于针对目标区域内已解码的至少两个目标点,确定至少两个目标点中每个点的权重系数,至少两个目标点中未包括当前点;
权重确定单元24,用于根据至少两个目标点中每个点的权重系数和几何信息,以及当前点的几何信息,确定至少两个目标点中每个点的权重;
邻居点确定单元25,用于根据至少两个目标点中每个点的权重,从至少两个目标点中选择当前点的至少一个邻居点。
在一些实施例中,权重确定单元24,具体用于解码码流,得到至少两个目标点中每个点的权重系数;或者,将至少两个目标点划分为至少一组,将至少一组中的每一组对应的默认权重系数确定为每一组中每个点的权重系数;或者,根据至少两个目标点中每个点的几何信息,确定至少两个目标点中每个点的权重系数;或者,根据至少两个目标点中每个点的属性信息,确定至少两个目标点中每个点的权重系数。
可选的,权重系数包括第一分量的权重系数、第二分量的权重系数和第三分量的权重系数。
在一些实施例中,至少一组中同一个组对应的第一分量的权重系数、第二分量的权重系数和第三分量的权重系数均相等;或者,同一个组对应的第一分量的权重系数、第二分量的权重系数和第三分量的权重系数中至少两个系数不相等。
在一些实施例中,权重确定单元24,具体用于根据至少两个目标点中每个点的几何信息,分别确定至少两个目标点在第一分量方向上的第一分布值、在第二分量方向上的第二分布值和在第三分量方向上的第三分布值;根据第一分布值、第二分布值和第三分布值,确定至少两个目标点中每个点的权重系数。
在一些实施例中,权重确定单元24,具体用于根据至少两个目标点中每个点在第一分量方向上的几何信息,确定至少两个目标点在第一分量方向上的第一取值范围;根据至少两个目标点中每个点在第二分量方向上的几何信息,确定至少两个目标点在第二分量方向上的第二取值范围;根据至少两个目标点中每个点在第三分量方向上的几何信息,确定至少两个目标点在第三分量方向上的第三取值范围;根据第一取值范围、第二取值范围和第三取值范围,确定第一分布值、第二分布值和第三分布值。
在一些实施例中,权重确定单元24,具体用于将第一取值范围的范围值确定为第一分布值;将第二取值范围的范围值确定为第二分布值;将第三取值范围的范围值确定为第三分布值。
在一些实施例中,权重确定单元24,具体用于确定至少两个目标点的数量与第一取值范围的范围值之间的第一比值,并将第一比值确定第一分布值;确定至少两个目标点的数量与第二取值范围的范围值之间的第二比值,并将第二比值确定第二分布值;确定至少两个目标点的数量与第三取值范围的范围值之间的第三比值,并将第三比值确定第三分布值。
在一些实施例中,权重确定单元24,具体用于根据至少两个目标点中每个点在第一分量方向上的几何信息,确定至少两个目标点在第一分量方向上的第一方差;根据至少两个目标点中每个点在第二分量方向上的几何信息,确定至少两个目标点在第二分量方向上的第二方差;根据至少两个目标点中每个点在第三分量方向上的几何信息,确定至少两个目标点在第三分量方向上的第三方差;根据第一方差、第二方差和第三方差,确定第一分布值、第二分 布值和第三分布值。
在一些实施例中,权重确定单元24,具体用于将第一方差确定为第一分布值;将第二方差确定为第二分布值;将第三方差确定为第三分布值。
在一些实施例中,权重确定单元24,具体用于根据至少两个目标点中每个点的几何信息,确定至少两个目标点围成的区域的中心点;从至少两个目标点中分别确定与中心点在第一分量方向上距离最远的第一点、在第二分量方向上距离最远的第二点,以及在第三分量方向上距离最远的第三点;根据第一点、第二点和第三点中每个点的属性信息,分别确定至少两个目标点在第一分量方向上的第一分布值、在第二分量方向上的第二分布值和在第三分量方向上的第三分布值;根据第一分布值、第二分布值和在第三分布值,确定至少两个目标点中每个点的权重系数。
在一些实施例中,权重确定单元24,具体用于获取中心点的属性信息;根据第一点的属性信息与中心点的属性信息,确定至少两个目标点在第一分量方向上的第一分布值,根据第二点的属性信息与中心点的属性信息,确定至少两个目标点在第二分量方向上的第二分布值,根据第三点的属性信息与中心点的属性信息,确定至少两个目标点在第三分量方向上的第三分布值。
在一些实施例中,权重确定单元24,具体用于将第一点的属性信息与中心点的属性信息的差值,确定为至少两个目标点在第一分量方向上的第一分布值;将第二点的属性信息与中心点的属性信息的差值,确定为至少两个目标点在第二分量方向上的第二分布值;将第三点的属性信息与中心点的属性信息的差值,确定为至少两个目标点在第三分量方向上的第三分布值。
在一些实施例中,权重确定单元24,具体用于根据至少两个目标点中每个点的属性信息,确定至少两个目标点的属性信息的平均值;根据第一点的属性信息与至少两个目标点的属性信息的平均值,确定至少两个目标点在第一分量方向上的第一分布值,根据第二点的属性信息与至少两个目标点的属性信息的平均值,确定至少两个目标点在第二分量方向上的第二分布值,根据第三点的属性信息与至少两个目标点的属性信息的平均值,确定至少两个目标点在第三分量方向上的第三分布值。
在一些实施例中,权重确定单元24,具体用于将第一点的属性信息与至少两个目标点的属性信息的平均值的差值,确定为至少两个目标点在第一分量方向上的第一分布值;将第二点的属性信息与至少两个目标点的属性信息的平均值的差值,确定为至少两个目标点在第二分量方向上的第二分布值;将第三点的属性信息与至少两个目标点的属性信息的平均值的差值,确定为至少两个目标点在第三分量方向上的第三分布值。
在一些实施例中,权重确定单元24,具体用于将第一分布值确定为第一分量的权重系数,将第二分布值确定为第二分量的权重系数,将第三分布值确定为第三分量的权重系数;或者,确定第一分布值、第二分布值和第三分布值的总和,根据第一分布值与总和的比,确定第一分量的权重系数,根据第二分布值与总和的比,确定第二分量的权重系数,根据第三分布值与总和的比,确定第三分量的权重系数。
在一些实施例中,权重确定单元24,具体用于将第一分布值与总和的比,确定为第一分量的权重系数;将第二分布值与总和的比,确定为第二分量的权重系数;将第三分布值与总和的比,确定为第三分量的权重系数。
应理解的是,装置实施例与方法实施例可以相互对应,类似的描述可以参照方法实施例。 为避免重复,此处不再赘述。具体地,图9所示的装置可以执行方法实施例,并且装置中的各个模块的前述和其它操作和/或功能分别为了实现解码器对应的方法实施例,为了简洁,在此不再赘述。
上文中结合附图从功能模块的角度描述了本申请实施例的装置。应理解,该功能模块可以通过硬件形式实现,也可以通过软件形式的指令实现,还可以通过硬件和软件模块组合实现。具体地,本申请实施例中的方法实施例的各步骤可以通过处理器中的硬件的集成逻辑电路和/或软件形式的指令完成,结合本申请实施例公开的方法的步骤可以直接体现为硬件译码处理器执行完成,或者用译码处理器中的硬件及软件模块组合执行完成。可选地,软件模块可以位于随机存储器,闪存、只读存储器、可编程只读存储器、电可擦写可编程存储器、寄存器等本领域的成熟的存储介质中。该存储介质位于存储器,处理器读取存储器中的信息,结合其硬件完成上述方法实施例中的步骤。
图10是本申请实施例提供的计算机设备的示意性框图,图10的计算机设备可以为上述的点云编码器或者为点云解码器。
如图10所示,该计算机设备30可包括:
存储器31和处理器32,该存储器31用于存储计算机可读指令33,并将该程序代码33传输给该处理器32。换言之,该处理器32可以从存储器31中调用并运行计算机可读指令33,以实现本申请实施例中的方法。
例如,该处理器32可用于根据该计算机可读指令33中的指令执行上述方法200中的步骤。
在本申请的一些实施例中,该处理器32可以包括但不限于:
通用处理器、数字信号处理器(Digital Signal Processor,DSP)、专用集成电路(Application Specific Integrated Circuit,ASIC)、现场可编程门阵列(Field Programmable Gate Array,FPGA)或者其他可编程逻辑器件、分立门或者晶体管逻辑器件、分立硬件组件等等。
在本申请的一些实施例中,该存储器31包括但不限于:
易失性存储器和/或非易失性存储器。其中,非易失性存储器可以是只读存储器(Read-Only Memory,ROM)、可编程只读存储器(Programmable ROM,PROM)、可擦除可编程只读存储器(Erasable PROM,EPROM)、电可擦除可编程只读存储器(Electrically EPROM,EEPROM)或闪存。易失性存储器可以是随机存取存储器(Random Access Memory,RAM),其用作外部高速缓存。通过示例性但不是限制性说明,许多形式的RAM可用,例如静态随机存取存储器(Static RAM,SRAM)、动态随机存取存储器(Dynamic RAM,DRAM)、同步动态随机存取存储器(Synchronous DRAM,SDRAM)、双倍数据速率同步动态随机存取存储器(Double Data Rate SDRAM,DDR SDRAM)、增强型同步动态随机存取存储器(Enhanced SDRAM,ESDRAM)、同步连接动态随机存取存储器(synch link DRAM,SLDRAM)和直接内存总线随机存取存储器(Direct Rambus RAM,DR RAM)。
在本申请的一些实施例中，该计算机可读指令33可以被分割成一个或多个模块，该一个或者多个模块被存储在该存储器31中，并由该处理器32执行，以完成本申请提供的方法。该一个或多个模块可以是能够完成特定功能的一系列计算机可读指令段，该指令段用于描述该计算机可读指令33在该计算机设备30中的执行过程。
如图10所示,该计算机设备30还可包括:
收发器34,该收发器34可连接至该处理器32或存储器31。
其中,处理器32可以控制该收发器34与其他设备进行通信,具体地,可以向其他设备发送信息或数据,或接收其他设备发送的信息或数据。收发器34可以包括发射机和接收机。收发器34还可以进一步包括天线,天线的数量可以为一个或多个。
应当理解,该计算机设备30中的各个组件通过总线系统相连,其中,总线系统除包括数据总线之外,还包括电源总线、控制总线和状态信号总线。
根据本申请的一个方面,提供了一种计算机存储介质,其上存储有计算机可读指令,该计算机可读指令被计算机执行时使得该计算机能够执行上述方法实施例的方法。或者说,本申请实施例还提供一种包含指令的计算机可读指令产品,该指令被计算机执行时使得计算机执行上述方法实施例的方法。
根据本申请的另一个方面,提供了一种计算机可读指令产品或计算机可读指令,该计算机可读指令产品或计算机可读指令包括计算机可读指令,该计算机可读指令存储在计算机可读存储介质中。计算机设备的处理器从计算机可读存储介质读取该计算机可读指令,处理器执行该计算机可读指令,使得该计算机设备执行上述方法实施例的方法。
换言之,当使用软件实现时,可以全部或部分地以计算机可读指令产品的形式实现。该计算机可读指令产品包括一个或多个计算机可读指令。在计算机上加载和执行该计算机可读指令指令时,全部或部分地产生按照本申请实施例该的流程或功能。该计算机可以是通用计算机、专用计算机、计算机网络、或者其他可编程装置。该计算机可读指令可以存储在计算机可读存储介质中,或者从一个计算机可读存储介质向另一个计算机可读存储介质传输,例如,该计算机可读指令可以从一个网站站点、计算机、服务器或数据中心通过有线(例如同轴电缆、光纤、数字用户线(digital subscriber line,DSL))或无线(例如红外、无线、微波等)方式向另一个网站站点、计算机、服务器或数据中心进行传输。该计算机可读存储介质可以是计算机能够存取的任何可用介质或者是包含一个或多个可用介质集成的服务器、数据中心等数据存储设备。该可用介质可以是磁性介质(例如,软盘、硬盘、磁带)、光介质(例如数字视频光盘(digital video disc,DVD))、或者半导体介质(例如固态硬盘(solid state disk,SSD))等。
本领域普通技术人员可以意识到,结合本文中所公开的实施例描述的各示例的模块及算法步骤,能够以电子硬件、或者计算机软件和电子硬件的结合来实现。这些功能究竟以硬件还是软件方式来执行,取决于技术方案的特定应用和设计约束条件。专业技术人员可以对每个特定的应用来使用不同方法来实现所描述的功能,但是这种实现不应认为超出本申请的范围。
在本申请所提供的几个实施例中,应该理解到,所揭露的系统、装置和方法,可以通过其它的方式实现。例如,以上所描述的装置实施例仅仅是示意性的,例如,该模块的划分,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式,例如多个模块或组件可以结合或者可以集成到另一个系统,或一些特征可以忽略,或不执行。另一点,所显示或讨论的相互之间的耦合或直接耦合或通信连接可以是通过一些接口,装置或模块的间接耦合或通信连接,可以是电性,机械或其它的形式。
作为分离部件说明的模块可以是或者也可以不是物理上分开的,作为模块显示的部件可以是或者也可以不是物理模块,即可以位于一个地方,或者也可以分布到多个网络单元上。可以根据实际的需要选择其中的部分或者全部模块来实现本实施例方案的目的。例如,在本 申请各个实施例中的各功能模块可以集成在一个处理模块中,也可以是各个模块单独物理存在,也可以两个或两个以上模块集成在一个模块中。
以上该,仅为本申请的具体实施方式,但本申请的保护范围并不局限于此,任何熟悉本技术领域的技术人员在本申请揭露的技术范围内,可轻易想到变化或替换,都应涵盖在本申请的保护范围之内。因此,本申请的保护范围应以该权利要求的保护范围为准。
Claims (33)
- 一种点云中邻居点的选择方法,由计算机设备执行,其特征在于,包括:获取点云数据,并从所述点云数据中确定当前点所在的目标区域,所述目标区域包括多个点;针对所述目标区域内的至少两个目标点,确定所述至少两个目标点中每个点的权重系数,其中,所述至少两个目标点中未包括所述当前点;根据所述至少两个目标点中每个点的权重系数和几何信息,以及所述当前点的几何信息,确定所述至少两个目标点中每个点的权重;根据所述至少两个目标点中每个点的权重,从所述至少两个目标点中选择所述当前点的至少一个邻居点。
- 根据权利要求1所述的方法,其特征在于,所述确定所述至少两个目标点中每个点的权重系数,包括:将所述至少两个目标点划分为至少一组,将所述至少一组中每一组对应的默认权重系数确定为所述每一组中每个点的权重系数;或者,根据所述至少两个目标点中每个点的几何信息,确定所述至少两个目标点中每个点的权重系数;或者,根据所述至少两个目标点中每个点的属性信息,确定所述至少两个目标点中每个点的权重系数。
- 根据权利要求2所述的方法,其特征在于,所述权重系数包括第一分量的权重系数、第二分量的权重系数和第三分量的权重系数;其中,所述至少一组中同一个组对应的第一分量的权重系数、第二分量的权重系数和第三分量的权重系数均相等;或者,所述至少一组中同一个组对应的第一分量的权重系数、第二分量的权重系数和第三分量的权重系数中至少两个系数不相等。
- 根据权利要求2所述的方法,其特征在于,所述权重系数包括第一分量的权重系数、第二分量的权重系数和第三分量的权重系数,所述根据所述至少两个目标点中每个点的几何信息,确定所述至少两个目标点中每个点的权重系数,包括:根据所述至少两个目标点中每个点的几何信息,分别确定所述至少两个目标点在第一分量方向上的第一分布值、在第二分量方向上的第二分布值和在第三分量方向上的第三分布值;根据所述第一分布值、所述第二分布值和所述第三分布值,确定所述至少两个目标点中每个点的权重系数。
- 根据权利要求4所述的方法,其特征在于,所述根据所述至少两个目标点中每个点的几何信息,分别确定所述至少两个目标点在所述第一分量方向上的第一分布值、在所述第二分量方向上的第二分布值和在所述第三分量方向上的第三分布值,包括:根据所述至少两个目标点中每个点在所述第一分量方向上的几何信息,确定所述至少两个目标点在所述第一分量方向上的第一取值范围;根据所述至少两个目标点中每个点在所述第二分量方向上的几何信息,确定所述至少两个目标点在所述第二分量方向上的第二取值范围;根据所述至少两个目标点中每个点在所述第三分量方向上的几何信息,确定所述至少两个目标点在所述第三分量方向上的第三取值范围;根据所述第一取值范围、所述第二取值范围和所述第三取值范围,确定所述第一分布值、所述第二分布值和所述第三分布值。
- 根据权利要求5所述的方法,其特征在于,所述根据所述第一取值范围、所述第二取值范围和所述第三取值范围,确定所述第一分布值、所述第二分布值和所述第三分布值,包括:将所述第一取值范围的范围值确定为所述第一分布值;将所述第二取值范围的范围值确定为所述第二分布值;将所述第三取值范围的范围值确定为所述第三分布值。
- 根据权利要求5所述的方法,其特征在于,所述根据所述第一取值范围、所述第二取值范围和所述第三取值范围,确定所述第一分布值、所述第二分布值和所述第三分布值,包括:确定所述至少两个目标点的数量与所述第一取值范围的范围值之间的第一比值,并将所述第一比值确定所述第一分布值;确定所述至少两个目标点的数量与所述第二取值范围的范围值之间的第二比值,并将所述第二比值确定所述第二分布值;确定所述至少两个目标点的数量与所述第三取值范围的范围值之间的第三比值,并将所述第三比值确定所述第三分布值。
- 根据权利要求4所述的方法,其特征在于,所述根据所述至少两个目标点中每个点的几何信息,分别确定所述至少两个目标点在所述第一分量方向上的第一分布值、在所述第二分量方向上的第二分布值和在所述第三分量方向上的第三分布值,包括:根据所述至少两个目标点中每个点在所述第一分量方向上的几何信息,确定所述至少两个目标点在所述第一分量方向上的第一方差;根据所述至少两个目标点中每个点在所述第二分量方向上的几何信息,确定所述至少两个目标点在所述第二分量方向上的第二方差;根据所述至少两个目标点中每个点在所述第三分量方向上的几何信息,确定所述至少两个目标点在所述第三分量方向上的第三方差;根据所述第一方差、所述第二方差和所述第三方差,确定所述第一分布值、所述第二分布值和所述第三分布值。
- 根据权利要求8所述的方法,其特征在于,所述根据所述第一方差、所述第二方差和所述第三方差,确定所述第一分布值、所述第二分布值和所述第三分布值,包括:将所述第一方差确定为所述第一分布值;将所述第二方差确定为所述第二分布值;将所述第三方差确定为所述第三分布值。
- 根据权利要求4-9任一项所述的方法,其特征在于,所述至少两个目标点中每个点的权重系数相等。
- 根据权利要求2所述的方法,其特征在于,所述权重系数包括第一分量的权重系数、第二分量的权重系数和第三分量的权重系数,所述根据所述至少两个目标点中每个点的属性信息,确定所述至少两个目标点中每个点的权重系数,包括:根据所述至少两个目标点中每个点的几何信息,确定所述至少两个目标点围成的区域的中心点;从所述至少两个目标点中分别确定与所述中心点在所述第一分量方向上距离最远的第一点、在所述第二分量方向上距离最远的第二点,以及在所述第三分量方向上距离最远的第三点;根据所述第一点、所述第二点和所述第三点中每个点的属性信息,分别确定所述至少两个目标点在所述第一分量方向上的第一分布值、在所述第二分量方向上的第二分布值和在所述第三分量方向上的第三分布值;根据所述第一分布值、所述第二分布值和在所述第三分布值,确定所述至少两个目标点中每个点的权重系数。
- 根据权利要求11所述的方法,其特征在于,所述根据所述第一点、所述第二点和所述第三点中每个点的属性信息,分别确定所述至少两个目标点在所述第一分量方向上的第一分布值、在所述第二分量方向上的第二分布值和在所述第三分量方向上的第三分布值,包括:获取所述中心点的属性信息;将所述第一点的属性信息与所述中心点的属性信息的差值,确定为所述至少两个目标点在所述第一分量方向上的第一分布值;将所述第二点的属性信息与所述中心点的属性信息的差值,确定为所述至少两个目标点在所述第二分量方向上的第二分布值;将所述第三点的属性信息与所述中心点的属性信息的差值,确定为所述至少两个目标点在所述第三分量方向上的第三分布值。
- 根据权利要求11所述的方法,其特征在于,所述根据所述第一点、所述第二点和所述第三点中每个点的属性信息,分别确定所述至少两个目标点在所述第一分量方向上的第一分布值、在所述第二分量方向上的第二分布值和在所述第三分量方向上的第三分布值,包括:根据所述至少两个目标点中每个点的属性信息,确定所述至少两个目标点的属性信息的平均值;将所述第一点的属性信息与所述至少两个目标点的属性信息的平均值的差值,确定为所述至少两个目标点在所述第一分量方向上的第一分布值;将所述第二点的属性信息与所述至少两个目标点的属性信息的平均值的差值,确定为所述至少两个目标点在所述第二分量方向上的第二分布值;将所述第三点的属性信息与所述至少两个目标点的属性信息的平均值的差值,确定为所述至少两个目标点在所述第三分量方向上的第三分布值。
- 根据权利要求4或11所述的方法,其特征在于,所述根据所述第一分布值、所述第二分布值和在所述第三分布值,确定所述至少两个目标点中每个点的权重系数,包括:将所述第一分布值确定为所述第一分量的权重系数,将所述第二分布值确定为所述第二分量的权重系数,将所述第三分布值确定为所述第三分量的权重系数;或者,确定所述第一分布值、所述第二分布值和所述第三分布值的总和,根据所述第一分布值与所述总和的比,确定所述第一分量的权重系数,根据所述第二分布值与所述总和的比,确定所述第二分量的权重系数,根据所述第三分布值与所述总和的比,确定所述第三分量的权重系数。
- 一种点云中邻居点的选择方法,由计算机设备执行,其特征在于,包括:解码码流,获取点云数据中点的几何信息;根据所述点云数据中点的几何信息,从所述点云数据中确定当前点所在的目标区域,所 述目标区域包括多个点;针对所述目标区域内已解码的至少两个目标点,确定所述至少两个目标点中每个点的权重系数,所述至少两个目标点中未包括所述当前点;根据所述至少两个目标点中每个点的权重系数和几何信息,以及所述当前点的几何信息,确定所述至少两个目标点中每个点的权重;根据所述至少两个目标点中每个点的权重,从所述至少两个目标点中选择所述当前点的至少一个邻居点。
- 根据权利要求15所述的方法,其特征在于,所述确定所述至少两个目标点中每个点的权重系数,包括:解码码流,得到所述至少两个目标点中每个点的权重系数;或者,将所述至少两个目标点划分为至少一组,将所述至少一组中每一组对应的默认权重系数确定为所述每一组中每个点的权重系数;或者,根据所述至少两个目标点中每个点的几何信息,确定所述至少两个目标点中每个点的权重系数;或者,根据所述至少两个目标点中每个点的属性信息,确定所述至少两个目标点中每个点的权重系数。
- 根据权利要求16所述的方法,其特征在于,所述权重系数包括第一分量的权重系数、第二分量的权重系数和第三分量的权重系数;所述至少一组中同一个组对应的第一分量的权重系数、第二分量的权重系数和第三分量的权重系数均相等;或者,所述至少一组中同一个组对应的第一分量的权重系数、第二分量的权重系数和第三分量的权重系数中至少两个系数不相等。
- 根据权利要求16所述的方法,其特征在于,所述权重系数包括第一分量的权重系数、第二分量的权重系数和第三分量的权重系数,所述根据所述至少两个目标点中每个点的几何信息,确定所述至少两个目标点中每个点的权重系数,包括:根据所述至少两个目标点中每个点的几何信息,分别确定所述至少两个目标点在所述第一分量方向上的第一分布值、在所述第二分量方向上的第二分布值和在所述第三分量方向上的第三分布值;根据所述第一分布值、所述第二分布值和所述第三分布值,确定所述至少两个目标点中每个点的权重系数。
- 根据权利要求18所述的方法,其特征在于,所述根据所述至少两个目标点中每个点的几何信息,分别确定所述至少两个目标点在所述第一分量方向上的第一分布值、在所述第二分量方向上的第二分布值和在所述第三分量方向上的第三分布值,包括:根据所述至少两个目标点中每个点在所述第一分量方向上的几何信息,确定所述至少两个目标点在所述第一分量方向上的第一取值范围;根据所述至少两个目标点中每个点在所述第二分量方向上的几何信息,确定所述至少两个目标点在所述第二分量方向上的第二取值范围;根据所述至少两个目标点中每个点在所述第三分量方向上的几何信息,确定所述至少两个目标点在所述第三分量方向上的第三取值范围;根据所述第一取值范围、所述第二取值范围和所述第三取值范围,确定所述第一分布值、所述第二分布值和所述第三分布值。
- 根据权利要求19所述的方法,其特征在于,所述根据所述第一取值范围、所述第二取值范围和所述第三取值范围,确定所述第一分布值、所述第二分布值和所述第三分布值,包括:将所述第一取值范围的范围值确定为所述第一分布值;将所述第二取值范围的范围值确定为所述第二分布值;将所述第三取值范围的范围值确定为所述第三分布值。
- 根据权利要求19所述的方法,其特征在于,所述根据所述第一取值范围、所述第二取值范围和所述第三取值范围,确定所述第一分布值、所述第二分布值和所述第三分布值,包括:确定所述至少两个目标点的数量与所述第一取值范围的范围值之间的第一比值,并将所述第一比值确定所述第一分布值;确定所述至少两个目标点的数量与所述第二取值范围的范围值之间的第二比值,并将所述第二比值确定所述第二分布值;确定所述至少两个目标点的数量与所述第三取值范围的范围值之间的第三比值,并将所述第三比值确定所述第三分布值。
- 根据权利要求18所述的方法,其特征在于,所述根据所述至少两个目标点中每个点的几何信息,分别确定所述至少两个目标点在所述第一分量方向上的第一分布值、在所述第二分量方向上的第二分布值和在所述第三分量方向上的第三分布值,包括:根据所述至少两个目标点中每个点在所述第一分量方向上的几何信息,确定所述至少两个目标点在所述第一分量方向上的第一方差;根据所述至少两个目标点中每个点在所述第二分量方向上的几何信息,确定所述至少两个目标点在所述第二分量方向上的第二方差;根据所述至少两个目标点中每个点在所述第三分量方向上的几何信息,确定所述至少两个目标点在所述第三分量方向上的第三方差;根据所述第一方差、所述第二方差和所述第三方差,确定所述第一分布值、所述第二分布值和所述第三分布值。
- 根据权利要求22所述的方法,其特征在于,所述根据所述第一方差、所述第二方差和所述第三方差,确定所述第一分布值、所述第二分布值和所述第三分布值,包括:将所述第一方差确定为所述第一分布值;将所述第二方差确定为所述第二分布值;将所述第三方差确定为所述第三分布值。
- 根据权利要求18-23任一项所述的方法,其特征在于,所述至少两个目标点中每个点的权重系数相等。
- 根据权利要求16所述的方法,其特征在于,所述权重系数包括第一分量的权重系数、第二分量的权重系数和第三分量的权重系数,所述根据所述至少两个目标点中每个点的属性信息,确定所述至少两个目标点中每个点的权重系数,包括:根据所述至少两个目标点中每个点的几何信息,确定所述至少两个目标点围成的区域的中心点;从所述至少两个目标点中分别确定与所述中心点在所述第一分量方向上距离最远的第一点、在所述第二分量方向上距离最远的第二点,以及在所述第三分量方向上距离最远的第三点;根据所述第一点、所述第二点和所述第三点中每个点的属性信息,分别确定所述至少两个目标点在所述第一分量方向上的第一分布值、在所述第二分量方向上的第二分布值和在所述第三分量方向上的第三分布值;根据所述第一分布值、所述第二分布值和在所述第三分布值,确定所述至少两个目标点中每个点的权重系数。
- 根据权利要求25所述的方法,其特征在于,所述根据所述第一点、所述第二点和所述第三点中每个点的属性信息,分别确定所述至少两个目标点在所述第一分量方向上的第一分布值、在所述第二分量方向上的第二分布值和在所述第三分量方向上的第三分布值,包括:获取所述中心点的属性信息;根据所述第一点的属性信息与所述中心点的属性信息,确定所述至少两个目标点在所述第一分量方向上的第一分布值,根据所述第二点的属性信息与所述中心点的属性信息,确定所述至少两个目标点在所述第二分量方向上的第二分布值,根据所述第三点的属性信息与所述中心点的属性信息,确定所述至少两个目标点在所述第三分量方向上的第三分布值。
- 根据权利要求25所述的方法,其特征在于,所述根据所述第一点、所述第二点和所述第三点中每个点的属性信息,分别确定所述至少两个目标点在所述第一分量方向上的第一分布值、在所述第二分量方向上的第二分布值和在所述第三分量方向上的第三分布值,包括:根据所述至少两个目标点中每个点的属性信息,确定所述至少两个目标点的属性信息的平均值;根据所述第一点的属性信息与所述至少两个目标点的属性信息的平均值,确定所述至少两个目标点在所述第一分量方向上的第一分布值,根据所述第二点的属性信息与所述至少两个目标点的属性信息的平均值,确定所述至少两个目标点在所述第二分量方向上的第二分布值,根据所述第三点的属性信息与所述至少两个目标点的属性信息的平均值,确定所述至少两个目标点在所述第三分量方向上的第三分布值。
- 根据权利要求18或25所述的方法,其特征在于,所述根据所述第一分布值、所述第二分布值和在所述第三分布值,确定所述至少两个目标点中每个点的权重系数,包括:将所述第一分布值确定为所述第一分量的权重系数,将所述第二分布值确定为所述第二分量的权重系数,将所述第三分布值确定为所述第三分量的权重系数;或者,确定所述第一分布值、所述第二分布值和所述第三分布值的总和,根据所述第一分布值与所述总和的比,确定所述第一分量的权重系数,根据所述第二分布值与所述总和的比,确定所述第二分量的权重系数,根据所述第三分布值与所述总和的比,确定所述第三分量的权重系数。
- 一种点云中邻居点的选择装置,其特征在于,包括:获取单元,用于获取点云数据,并从所述点云数据中确定当前点所在的目标区域,所述目标区域包括多个点;权重系数确定单元,用于针对所述目标区域内的至少两个目标点,确定所述至少两个目标点中每个点的权重系数,所述至少两个目标点中未包括所述当前点;权重确定单元,用于根据所述至少两个目标点中每个点的权重系数和几何信息,以及所 述当前点的几何信息,确定所述至少两个目标点中每个点的权重;邻居点选择单元,用于根据所述至少两个目标点中每个点的权重,从所述至少两个目标点中选择所述当前点的至少一个邻居点。
- 一种点云中邻居点的选择装置,其特征在于,包括:解码单元,用于解码码流,获取点云数据中点的几何信息;区域确定单元,用于根据所述点云数据中点的几何信息,从所述点云数据中确定当前点所在的目标区域,所述目标区域包括多个点;权重系数确定单元,用于针对所述目标区域内已解码的至少两个目标点,确定所述至少两个目标点中每个点的权重系数,所述至少两个目标点中未包括所述当前点;权重确定单元,用于根据所述至少两个目标点中每个点的权重系数和几何信息,以及所述当前点的几何信息,确定所述至少两个目标点中每个点的权重;邻居点确定单元,用于根据所述至少两个目标点中每个点的权重,从所述至少两个目标点中选择所述当前点的至少一个邻居点。
- 一种编码设备,其特征在于,所述编码设备用于执行如权利要求1至14任一项所述的方法。
- 一种解码设备,其特征在于,所述解码设备用于执行如权利要求15至28任一项所述的方法。
- 一种计算机设备,包括处理器和存储器;所述存储器,用于存储计算机可读指令;所述处理器,用于执行所述计算机可读指令以实现如上述权利要求1至28任一项所述的方法。
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP22766109.7A EP4307687A4 (en) | 2021-03-12 | 2022-02-08 | METHOD AND APPARATUS FOR SELECTING A NEIGHBOR POINT IN A POINT CLOUD, AND CODEC |
US17/978,116 US12113963B2 (en) | 2021-03-12 | 2022-10-31 | Method and apparatus for selecting neighbor point in point cloud, encoder, and decoder |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110269952.3 | 2021-03-12 | ||
CN202110269952.3A CN115086716B (zh) | 2021-03-12 | 2021-03-12 | 点云中邻居点的选择方法、装置及编解码器 |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US17/978,116 Continuation US12113963B2 (en) | 2021-03-12 | 2022-10-31 | Method and apparatus for selecting neighbor point in point cloud, encoder, and decoder |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2022188582A1 true WO2022188582A1 (zh) | 2022-09-15 |
Family
ID=83226320
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2022/075528 WO2022188582A1 (zh) | 2021-03-12 | 2022-02-08 | 点云中邻居点的选择方法、装置及编解码器 |
Country Status (5)
Country | Link |
---|---|
US (1) | US12113963B2 (zh) |
EP (1) | EP4307687A4 (zh) |
CN (2) | CN115086716B (zh) |
TW (1) | TWI806481B (zh) |
WO (1) | WO2022188582A1 (zh) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
GB2619550A (en) * | 2022-06-10 | 2023-12-13 | Sony Interactive Entertainment Europe Ltd | Systems and methods for contolling content streams |
Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
GB2528669A (en) * | 2014-07-25 | 2016-02-03 | Toshiba Res Europ Ltd | Image Analysis Method |
CN107403456A (zh) * | 2017-07-28 | 2017-11-28 | 北京大学深圳研究生院 | 一种基于kd树和优化图变换的点云属性压缩方法 |
US20170347100A1 (en) * | 2016-05-28 | 2017-11-30 | Microsoft Technology Licensing, Llc | Region-adaptive hierarchical transform and entropy coding for point cloud compression, and corresponding decompression |
CN109842799A (zh) * | 2017-11-29 | 2019-06-04 | 杭州海康威视数字技术股份有限公司 | 颜色分量的帧内预测方法及装置 |
CN110418135A (zh) * | 2019-08-05 | 2019-11-05 | 北京大学深圳研究生院 | 一种基于邻居的权重优化的点云帧内预测方法及设备 |
CN110572655A (zh) * | 2019-09-30 | 2019-12-13 | 北京大学深圳研究生院 | 一种基于邻居权重的参数选取和传递的点云属性编码和解码的方法及设备 |
CN110765298A (zh) * | 2019-10-18 | 2020-02-07 | 中国电子科技集团公司第二十八研究所 | 矢量数据几何属性解耦的瓦片编码方法 |
WO2020246689A1 (ko) * | 2019-06-05 | 2020-12-10 | 엘지전자 주식회사 | 포인트 클라우드 데이터 전송 장치, 포인트 클라우드 데이터 전송 방법, 포인트 클라우드 데이터 수신 장치 및 포인트 클라우드 데이터 수신 방법 |
CN112385222A (zh) * | 2019-06-12 | 2021-02-19 | 浙江大学 | 点云处理的方法与装置 |
Family Cites Families (19)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
FI3514968T3 (fi) * | 2018-01-18 | 2023-05-25 | Blackberry Ltd | Menetelmiä ja laitteita pistepilvien entropiakoodausta varten |
US10904564B2 (en) * | 2018-07-10 | 2021-01-26 | Tencent America LLC | Method and apparatus for video coding |
US11010931B2 (en) * | 2018-10-02 | 2021-05-18 | Tencent America LLC | Method and apparatus for video coding |
WO2020123469A1 (en) * | 2018-12-11 | 2020-06-18 | Futurewei Technologies, Inc. | Hierarchical tree attribute coding by median points in point cloud coding |
CN112019845B (zh) * | 2019-05-30 | 2024-04-26 | 腾讯美国有限责任公司 | 对点云进行编码的方法、装置以及存储介质 |
CN111699697B (zh) * | 2019-06-14 | 2023-07-11 | 深圳市大疆创新科技有限公司 | 一种用于点云处理、解码的方法、设备及存储介质 |
CN113615181B (zh) * | 2019-06-26 | 2024-02-20 | 腾讯美国有限责任公司 | 用于点云编解码的方法、装置 |
WO2021002665A1 (ko) * | 2019-07-01 | 2021-01-07 | 엘지전자 주식회사 | 포인트 클라우드 데이터 처리 장치 및 방법 |
WO2021023206A1 (zh) * | 2019-08-05 | 2021-02-11 | 北京大学深圳研究生院 | 基于邻居权重优化的点云属性预测、编码和解码方法及设备 |
EP4006839B1 (en) * | 2019-10-03 | 2024-10-16 | LG Electronics Inc. | Device for transmitting point cloud data, method for transmitting point cloud data, device for receiving point cloud data, and method for receiving point cloud data |
CN111145090B (zh) * | 2019-11-29 | 2023-04-25 | 鹏城实验室 | 一种点云属性编码方法、解码方法、编码设备及解码设备 |
JP2023507879A (ja) * | 2019-12-26 | 2023-02-28 | エルジー エレクトロニクス インコーポレイティド | ポイントクラウドデータ送信装置、ポイントクラウドデータ送信方法、ポイントクラウドデータ受信装置及びポイントクラウドデータ受信方法 |
CN111242997B (zh) * | 2020-01-13 | 2023-11-10 | 北京大学深圳研究生院 | 一种基于滤波器的点云属性预测方法及设备 |
CN111405281A (zh) * | 2020-03-30 | 2020-07-10 | 北京大学深圳研究生院 | 一种点云属性信息的编码方法、解码方法、存储介质及终端设备 |
CN111405284B (zh) * | 2020-03-30 | 2022-05-31 | 北京大学深圳研究生院 | 一种基于点云密度的属性预测方法及设备 |
CN111953998B (zh) * | 2020-08-16 | 2022-11-11 | 西安电子科技大学 | 基于dct变换的点云属性编码及解码方法、装置及系统 |
CN111986115A (zh) * | 2020-08-22 | 2020-11-24 | 王程 | 激光点云噪声和冗余数据的精准剔除方法 |
CN112218079B (zh) * | 2020-08-24 | 2022-10-25 | 北京大学深圳研究生院 | 一种基于空间顺序的点云分层方法、点云预测方法及设备 |
CN112330702A (zh) * | 2020-11-02 | 2021-02-05 | 蘑菇车联信息科技有限公司 | 点云补全方法、装置、电子设备及存储介质 |
-
2021
- 2021-03-12 CN CN202110269952.3A patent/CN115086716B/zh active Active
- 2021-03-12 CN CN202311362227.6A patent/CN118042192A/zh active Pending
-
2022
- 2022-02-08 WO PCT/CN2022/075528 patent/WO2022188582A1/zh active Application Filing
- 2022-02-08 EP EP22766109.7A patent/EP4307687A4/en active Pending
- 2022-03-08 TW TW111108453A patent/TWI806481B/zh active
- 2022-10-31 US US17/978,116 patent/US12113963B2/en active Active
Patent Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
GB2528669A (en) * | 2014-07-25 | 2016-02-03 | Toshiba Res Europ Ltd | Image Analysis Method |
US20170347100A1 (en) * | 2016-05-28 | 2017-11-30 | Microsoft Technology Licensing, Llc | Region-adaptive hierarchical transform and entropy coding for point cloud compression, and corresponding decompression |
CN107403456A (zh) * | 2017-07-28 | 2017-11-28 | 北京大学深圳研究生院 | 一种基于kd树和优化图变换的点云属性压缩方法 |
CN109842799A (zh) * | 2017-11-29 | 2019-06-04 | 杭州海康威视数字技术股份有限公司 | 颜色分量的帧内预测方法及装置 |
WO2020246689A1 (ko) * | 2019-06-05 | 2020-12-10 | 엘지전자 주식회사 | 포인트 클라우드 데이터 전송 장치, 포인트 클라우드 데이터 전송 방법, 포인트 클라우드 데이터 수신 장치 및 포인트 클라우드 데이터 수신 방법 |
CN112385222A (zh) * | 2019-06-12 | 2021-02-19 | 浙江大学 | 点云处理的方法与装置 |
CN110418135A (zh) * | 2019-08-05 | 2019-11-05 | 北京大学深圳研究生院 | 一种基于邻居的权重优化的点云帧内预测方法及设备 |
CN110572655A (zh) * | 2019-09-30 | 2019-12-13 | 北京大学深圳研究生院 | 一种基于邻居权重的参数选取和传递的点云属性编码和解码的方法及设备 |
CN110765298A (zh) * | 2019-10-18 | 2020-02-07 | 中国电子科技集团公司第二十八研究所 | 矢量数据几何属性解耦的瓦片编码方法 |
Non-Patent Citations (1)
Title |
---|
See also references of EP4307687A4 |
Also Published As
Publication number | Publication date |
---|---|
CN115086716B (zh) | 2023-09-08 |
EP4307687A4 (en) | 2024-08-28 |
TWI806481B (zh) | 2023-06-21 |
CN118042192A (zh) | 2024-05-14 |
US12113963B2 (en) | 2024-10-08 |
US20230051431A1 (en) | 2023-02-16 |
TW202236853A (zh) | 2022-09-16 |
EP4307687A1 (en) | 2024-01-17 |
CN115086716A (zh) | 2022-09-20 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11910017B2 (en) | Method for predicting point cloud attribute, encoder, decoder, and storage medium | |
TW202249488A (zh) | 點雲屬性的預測方法、裝置及編解碼器 | |
US20240015325A1 (en) | Point cloud coding and decoding methods, coder, decoder and storage medium | |
WO2022188582A1 (zh) | 点云中邻居点的选择方法、装置及编解码器 | |
WO2023024842A1 (zh) | 点云编解码方法、装置、设备及存储介质 | |
WO2022257528A1 (zh) | 点云属性的预测方法、装置及相关设备 | |
WO2024145933A1 (zh) | 点云编解码方法、装置、设备及存储介质 | |
WO2024065269A1 (zh) | 点云编解码方法、装置、设备及存储介质 | |
WO2024026712A1 (zh) | 点云编解码方法、装置、设备及存储介质 | |
WO2024145913A1 (zh) | 点云编解码方法、装置、设备及存储介质 | |
WO2024207463A1 (zh) | 点云编解码方法、装置、设备及存储介质 | |
WO2024145935A1 (zh) | 点云编解码方法、装置、设备及存储介质 | |
WO2024178632A1 (zh) | 点云编解码方法、装置、设备及存储介质 | |
WO2024065272A1 (zh) | 点云编解码方法、装置、设备及存储介质 | |
WO2024197680A1 (zh) | 点云编解码方法、装置、设备及存储介质 | |
WO2024212113A1 (zh) | 点云编解码方法、装置、设备及存储介质 | |
WO2024145911A1 (zh) | 点云编解码方法、装置、设备及存储介质 | |
WO2024145934A1 (zh) | 点云编解码方法、装置、设备及存储介质 | |
WO2024212114A1 (zh) | 点云编解码方法、装置、设备及存储介质 | |
WO2024145912A1 (zh) | 点云编解码方法、装置、设备及存储介质 | |
WO2023103565A1 (zh) | 点云属性信息的编解码方法、装置、设备及存储介质 | |
WO2022257150A1 (zh) | 点云编解码方法、装置、点云编解码器及存储介质 | |
WO2023197338A1 (zh) | 索引确定方法、装置、解码器以及编码器 | |
JP2024500701A (ja) | 点群符号化方法、点群復号化方法、点群符号化と復号化システム、点群エンコーダ及び点群デコーダ | |
CN117615136A (zh) | 点云解码方法、点云编码方法、解码器、电子设备以及介质 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 22766109 Country of ref document: EP Kind code of ref document: A1 |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2022766109 Country of ref document: EP |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
ENP | Entry into the national phase |
Ref document number: 2022766109 Country of ref document: EP Effective date: 20231012 |